2309.09156
Consensus-Based Leader-Follower Formation Tracking for Control-Affine Nonlinear Multiagent Systems
In the typical multiagent formation tracking problem centered on consensus, the prevailing assumption in the literature is that the agents' nonlinear models can be approximated by integrator systems, by their feedback-linearized equivalents, or by dynamics composed of deterministic linear and nonlinear terms. The resulting approaches associated with such assumptions, however, are hardly applicable to general nonlinear systems. To this end, we present consensus-based control laws for multiagent formation tracking in finite-dimensional state space, with the agents represented by a more general class of dynamics: control-affine nonlinear systems. The agents also exchange information via a leader-follower communication topology modeled as an undirected and connected graph with a single leader node. By leveraging standard tools from algebraic graph theory and Lyapunov analysis, we first derive a locally asymptotically stabilizing formation tracking law. Next, to demonstrate the effectiveness of our approach, we present results from numerical simulations of an example in robotics. These results -- together with a comparison of the formation errors obtained with our approach and those realized via an optimization-based method -- further validate our theoretical propositions.
Clinton Enwerem, John S. Baras
2023-09-17T04:44:47Z
http://arxiv.org/abs/2309.09156v1
# Consensus-Based Leader-Follower Formation Tracking for Control-Affine Nonlinear Multiagent Systems* ###### Abstract In the typical multiagent formation tracking problem centered on consensus, the prevailing assumption in the literature is that the agents' nonlinear models can be approximated by integrator systems, by their feedback-linearized equivalents, or by dynamics composed of deterministic linear and nonlinear terms. The resulting approaches associated with such assumptions, however, are hardly applicable to general nonlinear systems. To this end, we present consensus-based control laws for multiagent formation tracking in finite-dimensional state space, with the agents represented by a more general class of dynamics: control-affine nonlinear systems. The agents also exchange information via a leader-follower communication topology modeled as an undirected and connected graph with a single leader node. By leveraging standard tools from algebraic graph theory and Lyapunov analysis, we first derive a locally asymptotically stabilizing formation tracking law. Next, to demonstrate the effectiveness of our approach, we present results from numerical simulations of an example in robotics. These results -- together with a comparison of the formation errors obtained with our approach and those realized via an optimization-based method -- further validate our theoretical propositions. multiagent systems, consensus, formation control, control-affine nonlinear systems ## I Introduction Multiagent formation tracking -- the task of controlling a group of dynamic units (or agents) to maintain a desired formation while following a reference trajectory -- has evidently become one of the most popular topics in cooperative control and related fields, owing to its numerous practical applications. The formation _tracking_ problem is distinct from the nominal formation _control_ problem (where the focus instead is only on the agents converging to a desired formation) and has traditionally been tackled using ideas from fields such as learning theory, graph theory, optimization, and control theory, to name a handful. However, studies featuring methods from the aforementioned areas typically consider the agents' dynamic models as linear systems -- a common simplification of their general and more complex nonlinear representations. Within the consensus control domain particularly, several research articles [1, 2] have studied the formation tracking problem, following the seminal work of [3]. These studies have been mostly ad hoc, however, extending the aforementioned ideas only to time-invariant linear systems or to systems with dynamics comprising deterministic linear and nonlinear terms. Unfortunately, the control schemes designed under these approximations are largely not generalizable and, as a result, cannot be extended to the nonlinear case. This significant research gap thus motivates the need for results applicable to a more general class of systems. ### _Prior Work_ Results demonstrating the application of consensus to formation control for nonlinear systems began to appear fairly recently, starting with [4], which featured a consensus-based event-triggered control scheme, with the agents represented by nonlinear systems with a scalar control coefficient. Following that, in a closely related work [5], the authors laid down consensus-based formation control laws for a class of nonlinear multiagent systems -- with models similar to those studied in [4]. 
More recently, consensus-based formation control laws were given in [6], but for nonlinear systems with linear-in-state drift terms. In our work, we derive original locally asymptotically stable consensus-based formation tracking laws for a more general class of nonlinear systems -- affine-in-control systems with a state-dependent drift term. Following [7, 8], we present general consensus control rules that guarantee asymptotic decay of the formation error of the multiagent system (MAS). In contrast to the aforementioned studies, we apply the agreement protocol to the problem of formation tracking, where the agents are modeled by general control-affine nonlinear systems. ### _Main Contributions_ We contribute the following to the state of the art in consensus-based formation tracking: (i) an original consensus-based formation tracking control scheme -- for control-affine nonlinear systems -- that guarantees asymptotic convergence of each agent's consensus error to the corresponding relative state, thus preserving the formation (Section III), and (ii) numerical formation tracking simulations that juxtapose the performance of our approach with that obtained by an optimization-based method (Section V). ## II Notation & Mathematical Preliminaries Throughout, we denote vectors and matrices with boldface font (e.g., \(\mathbf{x}\) and \(\mathbf{A}\)), with corresponding elements in regular italicized font, i.e., \(x_{i}\) denotes the \(i^{\text{th}}\) element of the vector \(\mathbf{x}\), while \(A_{ij}\) is the element occupying the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column of \(\mathbf{A}\). \(\mathbf{A}^{T}\) denotes the transpose of matrix \(\mathbf{A}\), \(\times\) (in the context of three-dimensional vectors) represents the cross product, and \(\otimes\) is the standard Kronecker product operator. Unless otherwise noted, \(\mathbf{I}_{n}\) is the \(n\times n\) identity matrix, \(\mathbf{\kappa}_{n}\) is the vector \(\begin{bmatrix}\kappa&\kappa&\cdots&\kappa\end{bmatrix}^{T}\in\mathbb{R}^{n}\), with \(\kappa\in\mathbb{R}\), and \(||\mathbf{x}||=(\sum_{i=1}^{n}(x_{i})^{2})^{\frac{1}{2}}\) is the vector Euclidean norm. \(\text{diag}([\star_{i}]_{i\in I})\) is the \(|I|\times|I|\) (block-)diagonal matrix with blocks \(\star_{i}\), where \(I\) is an index set; each block \(\star_{i}\) can be a scalar, a vector, or a matrix, as will be clear from the context. \(\mathbb{R}_{+}\) is the set \(\{\alpha\in\mathbb{R}\mid\alpha>0\}\). To set the stage for the theoretical framework and problem formulation to follow, we now introduce several key lemmas, definitions, and theorems, to be invoked in the proofs to come. **Definition 1** (Control-Affine Nonlinear System): _A control-affine nonlinear system is the system (here subscripted by \(i\) to represent the \(i^{\text{th}}\) agent for notational cohesion):_ \[\dot{\mathbf{x}}_{i} =f(\mathbf{x}_{i})+g(\mathbf{x}_{i})\mathbf{u}_{i} \tag{1a}\] \[\mathbf{y}_{i} =h(\mathbf{x}_{i}), \tag{1b}\] _where \(\mathbf{x}_{i}\in\mathcal{X}\subset\mathbb{R}^{n}\), \(\mathbf{u}_{i}\in\mathcal{U}\subseteq\mathbb{R}^{m}\), and \(\mathbf{y}_{i}\in\mathbb{R}^{q}\) are, respectively, the state, control, and output vectors, with \(m\) not necessarily equal to \(q\). 
\(f(\cdot)\), \(g(\cdot)\), and \(h(\cdot)\) are, respectively, the drift and control vector fields, and the output map -- with appropriate dimensions -- which are assumed to be sufficiently smooth functions (\(\mathcal{C}^{n};n\geq 1\)), with \(f(\mathbf{0}_{n})=\mathbf{0}_{n}\), so that \(\mathbf{x}_{i}=\mathbf{0}_{n}\) is an open-loop equilibrium point of (1), and \(h(\mathbf{0}_{n})=\mathbf{0}_{q}\). \(\mathcal{X}\) is a compact set containing \(\mathbf{0}_{n}\), while admissible control signals take values in the set, \(\mathcal{U}\), of piecewise continuous and absolutely-integrable functions, i.e., the set \(\{\mathbf{u}_{i}(t)\mid\int_{0}^{t}|\mathbf{u}_{i}(\tau)|d\tau<\infty\}\)._ **Definition 2** (Graph Theory): _An undirected, finite graph (hereafter graph) \(\mathcal{G}\) is the tuple \((\mathcal{V},\mathcal{E})\), with \(\mathcal{V}\) equal to the non-empty set \(\{v_{1},v_{2},\ldots,v_{k}\}\) of \(k\) distinct elements, called nodes or vertices, and \(\mathcal{E}\) (the edge set) \(\subseteq\mathcal{V}\times\mathcal{V}=\{\{v_{i},v_{j}\}\mid v_{i},v_{j}\in \mathcal{V}\}\). The graph is weighted if there exists a map \(w:\mathcal{E}\to\mathbb{R}\) that associates to each edge a real number, denoted \(w_{ij}\); it is unweighted otherwise. A path is a sequence of distinct vertices \(v_{1},v_{2},\ldots,v_{m}\), where every consecutive pair of vertices (i.e., \(v_{i}\) and \(v_{i+1}\); \(i=1,\ldots,m-1\)) is joined by an edge in \(\mathcal{E}\). A graph is connected if every node in the graph is connected to every other node by a path; the graph is disconnected otherwise. The neighborhood set, \(N_{i}\), is the set \(\{v_{j}\in\mathcal{V}\mid\{v_{i},v_{j}\}\in\mathcal{E},\ j\neq i\}\). The graph adjacency and Laplacian matrices (denoted, respectively, as \(\mathbf{\mathcal{A}}(\mathcal{G})\) and \(\mathbf{\mathcal{L}}(\mathcal{G})\)) associated with \(\mathcal{G}\) are the matrices:_ \[\mathbf{\mathcal{A}}(\mathcal{G}) =[\mathcal{A}_{ij}],\text{ where }\mathcal{A}_{ij}=\begin{cases}1, \text{ if }\{v_{i},v_{j}\}\in\mathcal{E}\\ 0,\text{ otherwise}\end{cases} \tag{2a}\] \[\mathbf{\mathcal{L}}(\mathcal{G}) =[\mathcal{L}_{ij}],\text{ where }\mathcal{L}_{ij}=\begin{cases} \sum_{j\in N_{i}}\mathcal{A}_{ij},\text{ if }i=j\\ -\mathcal{A}_{ij},\text{ otherwise.}\end{cases} \tag{2b}\] _Hereafter, for brevity, we shall drop the \(\mathcal{G}\) argument in the notations for the adjacency matrix and graph Laplacian._ **Definition 3** (Leader-Follower Formation Tracking): _Consider an interconnected system of \(N+1\) identical agents -- each with dynamics described by (1a) -- on the (connected) line graph depicted in Figure 1. For convenience, let the MAS comprise a single leader (with index \(L\)) and \(N\) followers. The formation tracking problem is to synthesize controls, \(\mathbf{u}_{i}\), under which the follower agents converge to states that respect the inter-agent distance constraints imposed by a specified formation rule, as the leader agent independently tracks a known reference trajectory, \(\mathbf{x}_{L}^{r}\). 
Formally, taking \(\mathbf{\xi}_{i}\in\mathbb{R}^{n}\) to be the desired (goal) state of the \(i^{\text{th}}\) follower agent (\(i=1,2,\ldots,N\)), given by the formation specification, and \(\mathbf{x}_{i}\) to be the \(i^{\text{th}}\) agent's actual state, the formation tracking problem is to find a \(\mathbf{u}_{i}\) that drives the \(i^{\text{th}}\) follower agent so that_ \[||\mathbf{x}_{j}(t)-\mathbf{x}_{i}(t)||=||\mathbf{d}_{ij}||=\delta_{ij},\ \forall\ j\ \in N_{i},\ \forall\ t\geq 0, \tag{3}\] _and a \(\mathbf{u}_{L}\) that drives the leader such that_ \[\lim_{t\to\infty}||\mathbf{x}_{L}(t)-\mathbf{x}_{L}^{r}(t)||=0, \tag{4}\] _where \(\mathbf{d}_{ij}=(\mathbf{\xi}_{j}-\mathbf{\xi}_{i})\in\mathbb{R}^{n}\) is a constant vector, with norm \(\delta_{ij}\in\mathbb{R}_{+}\)._ Recent studies [9] have argued for the representation of the interaction in networked MAS using a three-layer multi-graph model as opposed to the ubiquitous single-layer model that often portrays only communication or information exchange. For completeness, therefore, we note here that Figure 1 depicts the connection of the agents on the communication layer; the collaboration and information-sharing layers are taken to be subsumed in the network. **Theorem 1** (Mesbahi and Egerstedt [10]): _For a connected goal formation, encoded by the graph, \(\mathcal{G}_{f}=(\mathcal{V},\mathcal{E}_{f})\), and a goal location set \(\mathcal{X}_{f}=\{\mathbf{\xi}_{1},\mathbf{\xi}_{2},\ldots,\mathbf{\xi}_{N}\}\), where \(\mathbf{\xi}_{i}\) is the goal location of the \(i^{\text{th}}\) agent, the following formation control scheme_ \[\dot{\mathbf{x}}_{i}(t)=-\sum_{j\in N_{i}}\left[(\mathbf{x}_{i}(t)-\mathbf{x}_{j}(t))-(\mathbf{\xi}_{i}-\mathbf{\xi}_{j})\right] \tag{5}\] _will drive the MAS so that the agents converge to a constant displacement of the target positions, \(\mathbf{\tau}\), i.e., for all agents, \(\mathbf{x}_{i}(t)-\mathbf{\xi}_{i}\to\mathbf{\tau}\) as \(t\to\infty\)._ **Assumption 1**: _The system in (1) is autonomous, hence the associated notions of Lyapunov stability apply._ **Assumption 2**: _The network structure of the MAS is encoded by an unweighted, connected, and static graph, i.e., neither \(\mathcal{V}\) nor \(\mathcal{E}\) is time-dependent._ **Assumption 3**: _The agents are taken to be represented by models with exact state information, so we do not consider the influence of noise or exogenous disturbances._ Fig. 1: An unweighted line graph on \(N+1\) vertices illustrating the leader-follower network structure under consideration. For clarity, the nodes have been renamed with the agent designations. Follower 1 (\(a_{f_{1}}\)), while otherwise identical to the other followers, is the only agent with the leader in its neighborhood set. Clearly, the network structure is a tree and not a complete graph. 
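As a concrete illustration of Definition 2 and the network of Figure 1, the short sketch below (in Python with NumPy; the helper name `line_graph_laplacian` is ours, not the paper's) builds the adjacency and Laplacian matrices of the unweighted line graph on \(N+1\) vertices and checks that the follower-only Laplacian invoked later in Lemma 1 is positive definite.

```python
import numpy as np

def line_graph_laplacian(N):
    """Adjacency matrix (2a) and Laplacian (2b) of the unweighted line
    graph on N+1 vertices: node 0 is the leader, nodes 1..N the followers."""
    A = np.zeros((N + 1, N + 1), dtype=int)
    for i in range(N):
        A[i, i + 1] = A[i + 1, i] = 1   # edge {v_i, v_{i+1}}
    L = np.diag(A.sum(axis=1)) - A      # degree matrix minus adjacency
    return A, L

A, L = line_graph_laplacian(3)
# Lemma 1 concerns the follower-only Laplacian: delete the leader's row and
# column. For a connected graph, this "grounded" Laplacian is positive
# definite, so all of its eigenvalues are strictly positive.
L_f = L[1:, 1:]
print(np.linalg.eigvalsh(L_f))  # e.g. [0.198 1.555 3.247]
```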
**Assumption 4**: _The leader is driven by an independent control (\(\mathbf{u}_{\text{track}}\)), so that it asymptotically tracks its reference, \(\mathbf{x}_{L}^{r}(t)\ \forall\ t\in[0,T]\), with \(T\) finite._ **Assumption 5**: _There exists a positive definite matrix \(\mathbf{P}\in\mathbb{R}^{n\times n}\) such that the following inequality holds:_ \[(\mathbf{x}_{i}-\mathbf{x}_{j})^{T}\mathbf{P}(f(\mathbf{x}_{i})-f(\mathbf{x}_{j}))\leq 0\] \[\forall\ i,j=1,2,\ldots,N\ \text{and}\ t\geq 0, \tag{6}\] _which places bounds on the relative inter-agent states of the unforced nonlinear dynamics for the multiagent system._ **Assumption 6** ([7], Remark 1): _There exists a positive definite \(N\times N\) matrix \(\mathbf{M}\) such that:_ \[\mathbf{M}\mathbf{\mathcal{L}}=\mathbf{I}_{N}, \tag{7}\] _which translates to the existence of the graph Laplacian's inverse and hence to the connectivity of the associated graph (see Lemma 1)._ ## III Main Result: Consensus-based Formation Maintenance Algorithm Suppose Assumption 4 holds. We define the \(i^{\text{th}}\) consensus error as: \[\mathbf{e}_{i}=\sum_{j\in N_{i}}\mathcal{A}_{ij}(\mathbf{x}_{i}-\mathbf{x}_{j}). \tag{8}\] Then, from Theorem 1, the inter-agent distance constraints (together with the tracking requirement) can be achieved by respectively selecting the following control law for the \(i^{\text{th}}\) follower and the leader: \[\mathbf{u}_{i}(t)=-g^{T}(\mathbf{x}_{i}(t))\mathbf{P}(\mathbf{e}_{i}(t)+\mathbf{d}_{i}),\ i=1,2,\ldots,N, \tag{9}\] \[\mathbf{u}_{L}(t)=-g^{T}(\mathbf{x}_{L}(t))\mathbf{P}(\mathbf{e}_{L}(t)+\mathbf{d}_{L})+\mathbf{u}_{\text{track}}, \tag{10}\] where \(\mathbf{d}_{i}=\sum_{j\in N_{i}}\mathbf{d}_{ij}\), and \(\mathbf{P}\) satisfies (6). **Remark 1**: _We will show that, with (9), the follower agents reach consensus (asymptotically) on the inter-agent distances specified by the formation rule. That is, we can think of the consensus error as converging to the prescribed agent-to-agent distances (as opposed to zero in the nominal consensus case), which implies asymptotic decay of the formation error. Also notice that, since \(N_{L}=\{a_{f_{1}}\}\), the leader is driven by a control signal (10) that balances the tracking and formation maintenance requirements._ To prove the validity of this result, we first define the following ensemble notation (for the follower-only network): \[\mathbf{x}=\begin{bmatrix}\mathbf{x}_{1}^{T}&\mathbf{x}_{2}^{T}&\ldots&\mathbf{x}_{N}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{Nn}, \tag{11}\] \[\mathbf{e}=\begin{bmatrix}\mathbf{e}_{1}^{T}&\mathbf{e}_{2}^{T}&\ldots&\mathbf{e}_{N}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{Nn},\] (12) \[\mathbf{u}=\begin{bmatrix}\mathbf{u}_{1}^{T}&\mathbf{u}_{2}^{T}&\ldots&\mathbf{u}_{N}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{Nm},\] (13) \[\mathbf{G}(\mathbf{x})=\text{diag}\big(g(\mathbf{x}_{1}),g(\mathbf{x}_{2}),\ldots,g(\mathbf{x}_{N})\big)\in\mathbb{R}^{Nn\times Nm}, \tag{14}\] \[\mathbf{F}(\mathbf{x})=\begin{bmatrix}f^{T}(\mathbf{x}_{1})&f^{T}(\mathbf{x}_{2})&\ldots&f^{T}(\mathbf{x}_{N})\end{bmatrix}^{T}\in\mathbb{R}^{Nn}, \tag{15}\] and \[\mathbf{d}=\begin{bmatrix}\mathbf{d}_{1}^{T}&\mathbf{d}_{2}^{T}&\ldots&\mathbf{d}_{N}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{Nn}. \tag{16}\] 
We can then write (9) as: \[\mathbf{u}(t)=-\mathbf{G}^{T}(\mathbf{x}(t))(\mathbf{I}_{N}\otimes\mathbf{P})(\mathbf{e}(t)+\mathbf{d}), \tag{17}\] with the equation \[\dot{\mathbf{x}}=\mathbf{F}(\mathbf{x})+\mathbf{G}(\mathbf{x})\mathbf{u} \tag{18}\] representing the ensemble dynamics of the MAS in block notation. With this block formulation, it is easy to see that \(\mathbf{e}\) in (12) satisfies the following dynamical equation: \[\dot{\mathbf{e}}=(\mathbf{\mathcal{L}}\otimes\mathbf{I}_{n})\dot{\mathbf{x}}=(\mathbf{\mathcal{L}}\otimes\mathbf{I}_{n})(\mathbf{F}(\mathbf{x})+\mathbf{G}(\mathbf{x})\mathbf{u}). \tag{19}\] Next, we introduce key lemmas (mostly adapted from [7]) which will be useful for proving the results to follow. **Lemma 1** ([10]): _For a connected leader-follower graph on \(N+1\) vertices with one leader and \(N\) followers, if \(\mathbf{\mathcal{L}}\in\mathbb{R}^{N\times N}\) is the Laplacian corresponding to the follower network, then \(\mathbf{\mathcal{L}}\) is real, symmetric, and positive definite, with its eigenvalues related by: \(0<\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{N}\)._ **Lemma 2** (Schur Complement Lemma [11]): _For any real and symmetric matrix \(\mathbf{K}=\begin{bmatrix}\mathbf{K}_{11}&\mathbf{K}_{12}\\ \mathbf{K}_{21}&\mathbf{K}_{22}\end{bmatrix}\), with \(\mathbf{K}_{21}=\mathbf{K}_{12}^{T}\), the following statements are equivalent: (i) \(\mathbf{K}<0\); (ii) \(\mathbf{K}_{11}<0\), \(\mathbf{K}_{22}-\mathbf{K}_{12}^{T}\mathbf{K}_{11}^{-1}\mathbf{K}_{12}<0\); (iii) \(\mathbf{K}_{22}<0\), \(\mathbf{K}_{11}-\mathbf{K}_{12}\mathbf{K}_{22}^{-1}\mathbf{K}_{12}^{T}<0\)._ **Lemma 3** ([7]): _For a connected graph \(\mathcal{G}\) with Laplacian \(\mathbf{\mathcal{L}}\) and adjacency matrix \(\mathbf{\mathcal{A}}=[\mathcal{A}_{ij}]\), and for any \(\mathbf{h}=[\mathbf{h}_{1}^{T}\ \mathbf{h}_{2}^{T}\ \ldots\ \mathbf{h}_{N}^{T}]^{T}\) and \(\mathbf{k}=[\mathbf{k}_{1}^{T}\ \mathbf{k}_{2}^{T}\ \ldots\ \mathbf{k}_{N}^{T}]^{T}\) in \(\mathbb{R}^{Nn}\),_ \[2\mathbf{h}^{T}(\mathbf{\mathcal{L}}\otimes\mathbf{I}_{n})\mathbf{k}=\sum_{i=1}^{N}\sum_{j=1}^{N}\mathcal{A}_{ij}(\mathbf{h}_{i}-\mathbf{h}_{j})^{T}(\mathbf{k}_{i}-\mathbf{k}_{j}). \tag{20}\] **Lemma 4**: _If Assumptions 2 and 5 hold, then:_ \[(\mathbf{e}+\mathbf{d})^{T}(\mathbf{I}_{N}\otimes\mathbf{P})\mathbf{F}(\mathbf{x})\leq 0. \tag{21}\] _Proof:_ (Motivated by [7]) From (15), we can write \[(\mathbf{I}_{N}\otimes\mathbf{P})\mathbf{F}(\mathbf{x})\] \[=\begin{bmatrix}f^{T}(\mathbf{x}_{1})\mathbf{P}&f^{T}(\mathbf{x}_{2})\mathbf{P}&\ldots&f^{T}(\mathbf{x}_{N})\mathbf{P}\end{bmatrix}^{T} \tag{22}\] \[\triangleq\mathbf{F}^{\mathbf{P}}(\mathbf{x}). \tag{23}\] From the left-hand side of (21), it follows that: \[(\mathbf{e}+\mathbf{d})^{T}(\mathbf{I}_{N}\otimes\mathbf{P})\mathbf{F}(\mathbf{x})\] \[=(\mathbf{e}+\mathbf{d})^{T}\mathbf{F}^{\mathbf{P}}(\mathbf{x})\] \[=(\mathbf{x}^{T}(\mathbf{\mathcal{L}}\otimes\mathbf{I}_{n})+\mathbf{d}^{T})\mathbf{F}^{\mathbf{P}}(\mathbf{x}). \tag{24}\] Invoking Lemma 3, we can write (24) as: \[(\mathbf{e}+\mathbf{d})^{T}(\mathbf{I}_{N}\otimes\mathbf{P})\mathbf{F}(\mathbf{x})\] \[=\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\mathcal{A}_{ij}(\mathbf{x}_{i}-\mathbf{x}_{j})^{T}\mathbf{P}(f(\mathbf{x}_{i})-f(\mathbf{x}_{j}))\] \[\quad+\sum_{i=1}^{N}\sum_{j\in N_{i}}\mathbf{d}_{ij}^{T}\mathbf{P}(f(\mathbf{x}_{i})-f(\mathbf{x}_{j})), \tag{25}\] which is \(\leq 0\) by Assumption 5 and since \(\mathcal{A}_{ij}\geq 0\ \forall\ i,j=1,2,\ldots,N\) under Assumption 2. 
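As a quick numerical spot-check of the identity in Lemma 3 (which the proof above relies on), the following sketch evaluates both sides of (20) for the follower line graph; the variable names are illustrative only, and the Laplacian here is the standard one of Definition 2.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 4, 3
A = np.zeros((N, N))
for i in range(N - 1):                  # follower line graph, as in Fig. 1
    A[i, i + 1] = A[i + 1, i] = 1
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian (2b)

h = rng.standard_normal((N, n))
k = rng.standard_normal((N, n))
lhs = 2 * h.reshape(-1) @ np.kron(L, np.eye(n)) @ k.reshape(-1)
rhs = sum(A[i, j] * (h[i] - h[j]) @ (k[i] - k[j])
          for i in range(N) for j in range(N))
assert np.isclose(lhs, rhs)             # both sides of (20) agree
```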
**Theorem 2**: _Suppose Assumptions 2 and 5 hold. With the control defined in (9), the followers in the MAS will eventually converge to states respecting (3)._ _Proof:_ (Motivated by [7]) We begin the proof by selecting the Lyapunov function candidate: \[V=\frac{1}{2}(\mathbf{e}(t)+\mathbf{d})^{T}(\mathbf{M}\otimes\mathbf{P})(\mathbf{e}(t)+\mathbf{d}), \tag{26}\] where \(\mathbf{P}\) and \(\mathbf{M}\) are as previously defined in Assumptions 5 and 6, respectively. Then, omitting the time and \(\mathbf{x}\) arguments for conciseness, we can write the time derivative of \(V\) along the trajectories of the closed-loop system as: \[\dot{V} =(\mathbf{e}+\mathbf{d})^{T}(\mathbf{M}\otimes\mathbf{P})\dot{\mathbf{e}} \tag{27}\] \[=(\mathbf{e}+\mathbf{d})^{T}(\mathbf{M}\otimes\mathbf{P})(\mathbf{\mathcal{L}}\otimes\mathbf{I}_{n})(\mathbf{F}+\mathbf{Gu})\] (28) \[=(\mathbf{e}+\mathbf{d})^{T}(\mathbf{M}\mathbf{\mathcal{L}}\otimes\mathbf{I}_{n})(\mathbf{I}_{N}\otimes\mathbf{P})(\mathbf{F}+\mathbf{Gu}). \tag{29}\] By (17) and under Assumption 6, we can then write: \[\dot{V} =(\mathbf{e}+\mathbf{d})^{T}(\mathbf{I}_{N}\otimes\mathbf{P})(\mathbf{F}+\mathbf{Gu}) \tag{30}\] \[\dot{V} =(\mathbf{e}+\mathbf{d})^{T}(\mathbf{I}_{N}\otimes\mathbf{P})\mathbf{F}\] \[-(\mathbf{e}+\mathbf{d})^{T}(\mathbf{I}_{N}\otimes\mathbf{P})\mathbf{G}\mathbf{G}^{T}(\mathbf{I}_{N}\otimes\mathbf{P})(\mathbf{e}+\mathbf{d}). \tag{31}\] By Lemma 4, and since the second term of (31) is never positive, it follows that \(\dot{V}\leq 0\). Thus, the Lyapunov function (26) will decrease along the trajectories of the closed-loop system, which implies (local) asymptotic stability of the equilibrium point of (19). With the change of variables \(\mathbf{z}(t)=\mathbf{e}(t)+\mathbf{d}\), it is easy to see that \(\dot{\mathbf{z}}\equiv\dot{\mathbf{e}}\); thus, from (31) and (19), it follows that \(\mathbf{z}(t)\to\mathbf{0}_{Nn}\) as \(t\to\infty\implies||\mathbf{e}(t)||\to||\mathbf{d}||\) as \(t\to\infty\), thus completing the proof. **Corollary 1**: _Define the formation error for (18) as \(\mathbf{f}(t)=\mathbf{d}-\mathbf{e}(t)\). Since \(||\mathbf{e}(t)||\to||\mathbf{d}||\) as \(t\to\infty\), it immediately follows that \(||\mathbf{f}(t)||\to||\mathbf{d}-\mathbf{d}||=0\), as \(t\to\infty\), thus confirming asymptotic decay of the MAS's formation error._ **Remark 2**: _For the popular case of the formation tracking problem, where the agents' models are linear time-invariant systems -- equivalent to setting \(f(\mathbf{x}_{i})=\mathbf{A}\mathbf{x}_{i}\) and \(g(\mathbf{x}_{i})=\mathbf{B}\) in (1a), with \(\mathbf{A}\) and \(\mathbf{B}\) constant matrices of appropriate dimensions (see [6] for example) -- the prevailing approach (assuming full controllability of the linear system) is to define \(\mathbf{u}_{i}\) in terms of some control gain matrix \(\mathbf{K}\), solve for a positive definite matrix \(\mathbf{P}\) that satisfies the linear system's algebraic Riccati equation, and express \(\mathbf{K}\) in terms of \(\mathbf{P}\), e.g., \(\mathbf{K}=-\mathbf{B}^{T}\mathbf{P}^{-1}\). The interested reader can consult [12] for a detailed treatment of the topic._ 
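For the LTI special case of Remark 2, a minimal sketch using SciPy's Riccati solver would look as follows; the double-integrator \(\mathbf{A},\mathbf{B}\) and the weights \(\mathbf{Q},\mathbf{R}\) are illustrative choices of ours, not values from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LTI case of Remark 2: f(x) = A x, g(x) = B.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # double integrator (example)
B = np.array([[0.0],
              [1.0]])
Q, R = np.eye(2), np.eye(1)     # illustrative weighting matrices

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = -B.T @ np.linalg.inv(P)            # gain in the form K = -B^T P^{-1}
print(K)
```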
**Theorem 3**: _Suppose a \(\mathbf{P}\) satisfying (6) exists. Then, it must be the case that \(\mathbf{P}\) also satisfies the following linear matrix inequality (LMI):_ \[\begin{bmatrix}\mathbf{0}_{Nn\times Nn}&\bar{\mathbf{P}}_{Nn}^{\frac{1}{2}}\\ \bar{\mathbf{P}}_{Nn}^{\frac{1}{2}}&-\mathbf{\Delta}f\,\mathbf{D}^{T}\end{bmatrix}\leq 0, \tag{32}\] _where \(\mathbf{D}\in\mathbb{R}^{Nn\times N}\) is the matrix_ \[[d_{ij}],\ \text{with}\ d_{ij}=\begin{cases}\mathbf{0}_{n},\ \text{if}\ i=j\\ \mathbf{d}_{ij},\ \text{otherwise},\end{cases} \tag{33}\] \[\bar{\mathbf{P}}_{Nn}=\mathbf{I}_{N}\otimes\mathbf{P}^{-1},\ \text{and} \tag{34}\] \(\mathbf{\Delta}f\in\mathbb{R}^{N\times N}\) _is the matrix_ \[[\delta f_{ij}],\ \text{with}\ \delta f_{ij}=\begin{cases}\mathbf{0}_{n},\ \text{if}\ i=j\\ f(\mathbf{\xi}_{i})-f(\mathbf{\xi}_{j}),\ \text{otherwise}.\end{cases} \tag{35}\] _Proof:_ We begin the proof by noting (from (6)) that \(\mathbf{P}\) must satisfy: \[(\mathbf{\xi}_{i}-\mathbf{\xi}_{j})^{T}\mathbf{P}(f(\mathbf{\xi}_{i})-f(\mathbf{\xi}_{j}))\leq 0 \tag{36}\] \[\implies\mathbf{d}_{ij}^{T}\mathbf{P}(f(\mathbf{\xi}_{i})-f(\mathbf{\xi}_{j}))\leq 0. \tag{37}\] By block diagonalization (as in (33) and (35)) and using the Kronecker product, we can show that (37) is equivalent to \[\mathbf{D}^{T}(\mathbf{I}_{N}\otimes\mathbf{P})\mathbf{\Delta}f\leq 0, \tag{38}\] with \(\mathbf{D}\) and \(\mathbf{\Delta}f\) as already defined. Invoking Lemma 2, it is straightforward to show that (38) can be expressed as (32), which ends the proof. **Remark 3**: _In the preceding theorem, a change of variables (34) was necessary to simplify the notation and allow for an immediate invocation of the Schur complement lemma. However, it is trivial to show that the original matrix of interest (\(\mathbf{P}\)) can be readily obtained from the uppermost-left \(n\times n\) block of \(\bar{\mathbf{P}}_{Nn}^{-1}\). We are also guaranteed a factorization of the form \(\bar{\mathbf{P}}_{Nn}=\bar{\mathbf{P}}_{Nn}^{\frac{1}{2}}\bar{\mathbf{P}}_{Nn}^{\frac{1}{2}}\), since \(\bar{\mathbf{P}}_{Nn}\succ 0\) and \(\otimes\) preserves positive definiteness._ We now present the following algorithm for formation maintenance, where \(T_{\Sigma}\) and \(\mathbf{x}_{i}(0)\) are the total number of unity-spaced time steps (in \([0,T]\), for a given interval) and the initial state of the \(i^{\text{th}}\) follower agent, respectively (a minimal Python sketch of this loop is given after Section IV below).

**Inputs:** \(\mathbf{D},\mathbf{\Delta}f,N,N_{i},\mathbf{\mathcal{A}},\delta t,g(\cdot),f(\cdot)\), \(\mathbf{x}_{i}(0)\), \(\mathbf{x}_{L}(k)\)
**Outputs:** \(\mathbf{P},\mathbf{e}_{i}(k),\mathbf{x}_{i}(k);\ i=1,2,\ldots,N\)
Solve (32) for \(\bar{\mathbf{P}}_{Nn}\) and obtain \(\mathbf{P}\) (see Remark 3)
**for** \(k\gets 0\) **to** \(T_{\Sigma}\) **do**
  **for** \(i\gets 1\) **to** \(N\) **do**
    Calculate \(\mathbf{e}_{i}\) from (8)
    Substitute \(\mathbf{P}\) and \(\mathbf{e}_{i}\) from the above steps in (9)
    \(\mathbf{x}_{i}(k+1)\gets f(\mathbf{x}_{i}(k))+g(\mathbf{x}_{i}(k))\mathbf{u}_{i}(k)\)
  **end for**
**end for**

## IV Formation Specification This section follows our previous work [13]; however, here, the position of the \(i^{\text{th}}\) agent is in \(\mathbb{R}^{3}\) and not the plane. As in [13], we consider a triangular formation (with one leader and three followers) and also assume this formation to be feasible and rigid (in the sense of [10], §6), and invariant to homogeneous transformations, with appropriate dimensions. 
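As promised above, here is a minimal Python sketch of the formation-maintenance loop, assuming generic callables `f` and `g` for the agent vector fields of (1a), a precomputed \(\mathbf{P}\) satisfying (6), a connected follower graph, and an explicit Euler step of size \(\delta t\); the helper name `consensus_step` and the array layout are ours.

```python
import numpy as np

def consensus_step(X, A, D_rel, P, f, g, dt=0.01):
    """One update of all N followers.
    X: (N, n) stacked states; A: (N, N) adjacency matrix;
    D_rel: (N, N, n) desired relative states d_ij = xi_j - xi_i."""
    N, _ = X.shape
    X_next = X.copy()
    for i in range(N):
        nbrs = np.flatnonzero(A[i])                        # N_i (non-empty)
        e_i = sum(A[i, j] * (X[i] - X[j]) for j in nbrs)   # consensus error (8)
        d_i = sum(D_rel[i, j] for j in nbrs)               # bias term d_i
        u_i = -g(X[i]).T @ P @ (e_i + d_i)                 # control law (9)
        X_next[i] = X[i] + dt * (f(X[i]) + g(X[i]) @ u_i)  # Euler step of (1a)
    return X_next
```

Iterating `consensus_step` for \(T_{\Sigma}\) steps, with the leader propagated separately under (10), reproduces the structure of the algorithm above.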
## V Simulation Example As an example, consider the following dynamics for a six degrees-of-freedom quadrotor, adapted from [14] and described here (for brevity) using the Newton-Euler equations: \[\left.\begin{array}{ll}\dot{\mathbf{p}}_{\mathcal{I}}&=\mathbf{v}_{\mathcal{I}}\\ \dot{\mathbf{v}}_{\mathcal{I}}&=\frac{1}{m}\mathbf{f}_{\mathcal{I}}\\ \dot{\mathbf{\zeta}}_{\mathcal{I}}&=\mathbf{T}^{-1}\mathbf{\omega}_{\mathcal{B}}\\ \dot{\mathbf{\omega}}_{\mathcal{B}}&=\mathbf{J}^{-1}(\mathbf{\tau}_{\mathcal{B}}-\mathbf{\omega}_{\mathcal{B}}\times\mathbf{J}\mathbf{\omega}_{\mathcal{B}}),\end{array}\right\} \tag{39}\] where the variables subscripted by \(\mathcal{I}\) represent quantities in the inertial frame, while those with a \(\mathcal{B}\) subscript pertain to the quadrotor's body frame. \(\mathbf{p}_{\mathcal{I}}=\begin{bmatrix}x&y&z\end{bmatrix}^{T}\) and \(\mathbf{v}_{\mathcal{I}}=\begin{bmatrix}v_{x}&v_{y}&v_{z}\end{bmatrix}^{T}\) are, respectively, vectors of the \(X\), \(Y\), and \(Z\) positions and linear velocities of the quadrotor, \(\mathbf{\zeta}_{\mathcal{I}}=\begin{bmatrix}\phi&\theta&\psi\end{bmatrix}^{T}\) is the quadrotor's attitude vector (of roll, pitch, and yaw angles), while \(\mathbf{\omega}_{\mathcal{B}}=\begin{bmatrix}p&q&r\end{bmatrix}^{T}\) is the vector of corresponding body-frame attitude rates. The parameters \(m\), \(\mathbf{J}=\text{diag}(\begin{bmatrix}I_{xx}&I_{yy}&I_{zz}\end{bmatrix})\), and \(g\) represent the mass, inertia matrix, and acceleration due to gravity, respectively. Finally, \(\mathbf{f}_{\mathcal{I}}=\begin{bmatrix}f_{x}&f_{y}&f_{z}\end{bmatrix}^{T}\) and \(\mathbf{\tau}_{\mathcal{B}}=\begin{bmatrix}\tau_{x}&\tau_{y}&\tau_{z}\end{bmatrix}^{T}\) are, respectively, the vectors of the total thrust force (\(f_{\mathcal{I}}^{t}\)) and torques (about the roll, pitch, and yaw axes) applied to the quadrotor, while \(\mathbf{T}\in\mathbb{R}^{3\times 3}\) is the matrix relating the body-frame angular velocities to the inertial-frame attitude rates. It has been shown in [15] that the model in (39) can be expressed as a control-affine nonlinear system (1a), but we will skip the details to adhere to page limits. To simulate the quadrotor dynamics, we select the parameters of the Crazyflie 2.1 quadcopter (Table I), adapted from [16]. With some calculations, it can be verified that \(\mathbf{P}=\mathbf{I}_{12}\) satisfies (38). For numerical simulation, we take the leader's trajectory tracking law, \(\mathbf{u}_{\text{track}}\) (see Assumption 4), to be a trajectory-error minimizing control law, after the manner of [13]. We also select a loop (figure-eight) trajectory and set \(\mathbf{x}_{1}(0)=\mathbf{x}_{1,\text{rnd}}\), \(\mathbf{x}_{2}(0)=\mathbf{x}_{2,\text{rnd}}\), \(\mathbf{x}_{3}(0)=\mathbf{x}_{3,\text{rnd}}\), \(\delta_{L1}=1\) m, and \(\delta_{12}=\delta_{13}=0.8\) m, where \(\mathbf{x}_{1,\text{rnd}}\), \(\mathbf{x}_{2,\text{rnd}}\), and \(\mathbf{x}_{3,\text{rnd}}\) are random initial state vectors for the first, second, and third followers, respectively. 
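A compact sketch of (39) as a simulation-ready state-space model follows; the numerical mass and inertia values are illustrative Crazyflie-like numbers (assumptions of ours, not the entries of Table I), and the Euler-rate map is the standard roll-pitch-yaw kinematic matrix. Note that the right-hand side is affine in the input \(\mathbf{u}=[\mathbf{f}_{\mathcal{I}},\mathbf{\tau}_{\mathcal{B}}]\), consistent with (1a).

```python
import numpy as np

m = 0.032                                 # kg (illustrative)
J = np.diag([1.4e-5, 1.4e-5, 2.2e-5])     # kg m^2 (illustrative)

def euler_rate_map(phi, theta):
    """T^{-1} in (39): body rates -> inertial-frame attitude rates."""
    return np.array([
        [1.0, np.sin(phi) * np.tan(theta), np.cos(phi) * np.tan(theta)],
        [0.0, np.cos(phi),                -np.sin(phi)],
        [0.0, np.sin(phi) / np.cos(theta), np.cos(phi) / np.cos(theta)],
    ])

def quadrotor_dynamics(x, u):
    """x = [p_I (3), v_I (3), zeta_I (3), omega_B (3)]; u = [f_I (3), tau_B (3)]."""
    v, zeta, omega = x[3:6], x[6:9], x[9:12]
    f_I, tau_B = u[:3], u[3:6]
    dp = v                                                # position kinematics
    dv = f_I / m                                          # translational dynamics
    dzeta = euler_rate_map(zeta[0], zeta[1]) @ omega      # attitude kinematics
    domega = np.linalg.solve(J, tau_B - np.cross(omega, J @ omega))
    return np.concatenate([dp, dv, dzeta, domega])
```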
## VI Numerical Results & Discussion As a prelude to discussing the consensus-based formation tracking results, Fig. (a) shows the trajectory tracking performance of the leader. Here, we see that the independent optimal control algorithm drives the leader (from an arbitrary initial state) so that it accurately tracks the desired trajectory, thus satisfying Assumption 4. In Fig. (b), the trajectory tracking and formation maintenance results are presented. Here, it is clear that under the consensus-derived control law, the trajectories of the followers closely track that of the leader, resulting in tight trajectory tracking with formation persistence. Finally, to give a numerical sense of the formation tracking performance of our proposed control scheme, we present the tracking and formation errors of the MAS in Table II, with corresponding plots depicted in Figs. (a) and (b). These errors are given in terms of the root-mean-square error (RMSE) between the leader and reference trajectories and the RMSE between the desired and actual inter-agent distances for each follower agent, respectively. As expected, the consensus protocol yields almost negligible formation errors, when viewed alone and also in comparison with the errors obtained via a formation tracking approach based solely on optimization. ## VII Conclusions In sum, we presented results on consensus-based formation tracking for a general class of nonlinear systems -- control-affine systems with a state-dependent drift term -- and showed that, even with the MAS sharing information via a network topology encoded by a tree, precise formation tracking was still achieved. While our method delivers excellent formation tracking for a nonlinear system with a high-dimensional state space, such near-perfect performance is expected, since we have assumed perfect knowledge of the agents' states. Thus, for the more interesting case where agents have uncertain dynamics, the development of similar control laws remains an open challenge. We look forward to exploring this direction in future research. ## Acknowledgement C. Enwerem thanks Erfaun Noorani of the Electrical and Computer Engineering Department at the University of Maryland, College Park, for helpful technical discussions and comments on the content and presentation of this work.
2309.11719
Long-range-enhanced surface codes
The surface code is a quantum error-correcting code for one logical qubit, protected by spatially localized parity checks in two dimensions. Due to fundamental constraints from spatial locality, storing more logical qubits requires either sacrificing the robustness of the surface code against errors or increasing the number of physical qubits. We bound the minimal number of spatially nonlocal parity checks necessary to add logical qubits to a surface code while maintaining, or improving, robustness to errors. We asymptotically saturate this bound using a family of hypergraph product codes, interpolating between the surface code and constant-rate low-density parity-check codes. Fault-tolerant protocols for logical gates in the quantum code can be inherited from its classical parent codes. We provide near-term practical implementations of this code for hardware based on trapped ions or neutral atoms in mobile optical tweezers. Long-range-enhanced surface codes outperform conventional surface codes using hundreds of physical qubits and represent a practical strategy to enhance the robustness of logical qubits to errors in near-term devices.
Yifan Hong, Matteo Marinelli, Adam M. Kaufman, Andrew Lucas
2023-09-21T01:39:31Z
http://arxiv.org/abs/2309.11719v3
# Long-range-enhanced surface codes ###### Abstract The surface code is a quantum error-correcting code for one logical qubit, protected by spatially localized parity checks in two dimensions. Due to fundamental constraints from spatial locality, storing more logical qubits requires either sacrificing the robustness of the surface code against errors or increasing the number of physical qubits. We bound the minimal number of spatially non-local parity checks necessary to add logical qubits to a surface code while maintaining, or improving, robustness to errors. We asymptotically saturate this bound using a family of hypergraph product codes, interpolating between the surface code and constant-rate low-density parity-check codes. Fault-tolerant protocols for logical operations generalize naturally to these longer-range codes, based on those from ordinary surface codes. We provide near-term practical implementations of this code for hardware based on trapped ions or neutral atoms in mobile optical tweezers. Long-range-enhanced surface codes outperform conventional surface codes using hundreds of physical qubits, and represent a practical strategy to enhance the robustness of logical qubits to errors in near-term devices. ## 1 Introduction Noise is inherent in quantum computers, and if ignored, will always destroy any quantum computational advantage. With advances in quantum hardware enabling controllable systems of hundreds of qubits, the use of quantum error correction to prolong the lifetime of quantum information is becoming feasible. At the hardware-theory interface, a key goal is to design optimal codes that leverage specific hardware-level advantages, such as gate non-locality, to mitigate the effect of key challenges, like the fidelity of few-qubit gates, or the resources required to increase the number of qubits in the system. Quantum error correction is done by starting with a Hilbert space of \(n\)_physical qubits_, and identifying a subset \(2^{k}<2^{n}\) of the possible states within Hilbert space as encoding the wave function of \(k\)_logical qubits_. The smallest number of physical qubits on which a nontrivial logical operation can act determines the code distance \(d\), and such a code is often abbreviated as \(\llbracket n,k,d\rrbracket\). A practical set of codes are _stabilizer codes_[1] in which the logical codewords are the simultaneous \(+1\) eigenstates of a commuting set of Pauli operators called the stabilizer group. A Calderbank-Shor-Steane (CSS) code [2; 3] is a stabilizer code for which the generators of this set are strictly products of Pauli \(X\)s or \(Z\)s. An important example of a CSS code is the surface code [4; 5], a leading candidate for near-term implementations of fault-tolerant quantum computation. It has local stabilizer generators supported on a checkerboard-type lattice: see Fig. 1. Fault-tolerant surface codes have been demonstrated with superconducting qubits [6], although the "break-even" point after which the logical bit is more robust than an isolated physical qubit remains to be reached. For hardware with highly biased noise (e.g. Pauli \(Z\) error much more likely than Pauli \(X\) error), elegant modifications of the surface code are known [7]. 
Figure 1: The 2D layout of a \(\llbracket 41,1,5\rrbracket\) surface code. Black dots and colored tiles represent physical qubits and stabilizer generators, respectively. The string-like logical operators are shown.

Due to its spatial locality in two spatial dimensions, the surface code is highly desirable for experimentalists; nearly all platforms, including atoms in optical tweezers [8; 9; 10; 11; 12; 13; 14], trapped ions [15; 16; 17; 18; 19], or superconducting qubits [20; 21; 22; 23; 24], can realize geometrically local interactions in two spatial dimensions. Unfortunately, quantum computation with \(\sim 10^{3}\) logical qubits in a surface code architecture with typical error rates of \(\sim 10^{-3}\) may require an architecture with \(\sim 10^{7}\) physical qubits [5], which could be prohibitively difficult to build in the near term. An exciting alternative is quantum low-density parity-check (qLDPC) codes, which can achieve \(k\sim n\): the overhead for encoding logical information is finite. At the same time, the stabilizers are few-body just like the surface code (but not necessarily spatially local), meaning they can in principle be measured with few-qubit operations. The first qLDPC construction with a finite rate (\(k\sim n\)) and large distance (\(d\sim\sqrt{n}\)) was the hypergraph product (HGP) [25]; a series of improvements [26; 27; 28] eventually led to "good" codes with \(k\sim d\sim n\) [29; 30; 31]. Spatial locality constrains the implementation of qLDPC codes in quantum hardware. Suppose that each physical qubit is arranged in a two-dimensional grid, and qubits can only interact with other qubits a finite distance away. Then one can prove [32] that \(kd^{2}\lesssim n\): there is an unavoidable tradeoff between robustness to error (\(d\)) and number of logical qubits (\(k\)), given a fixed number of physical qubits (\(n\)). Conversely, it is known [33] that to implement a qLDPC in 2D, at least \(\sqrt{\frac{k}{n}}d\) interactions of range \(\sqrt{\frac{k}{\sqrt{n}}}\) are necessary. If we only ask for \(d=\sqrt{n}\) as in the surface code, the bounds of [33] alone admit the prospect of \(k\sim\sqrt{n}\) using interactions of \(O(1)\) range. Since [32] proves that these finite-range interactions only allow \(k=O(1)\), the cost of nonlocality in qLDPC codes is even higher than implied by [33]. Further challenges to qLDPC implementation in 2D were discussed in [34; 35]. It is thus of crucial interest to know: how many non-local stabilizers are needed to add logical qubits to a surface code, while keeping \(d\) and \(n\) fixed? If we find a code that uses the least non-locality to add logical qubits to the surface code, is it realizable in any near-term quantum hardware? This paper answers these questions. We present _long-range-enhanced surface codes_ (LRESCs): an interpolating family of hypergraph product codes that bridges the surface code with constant-rate qLDPC codes. These codes: (_1_) have (asymptotically) as few non-local stabilizers as possible, (_2_) maintain the code distance \(d\) of the surface code while adding logical qubits, i.e. increasing \(k\), (_3_) have lower logical failure rates compared to a surface code in the single-shot regime, and (_4_) interface with existing algorithms for fault-tolerant universal gate sets on surface codes. The simplest realization of the LRESC has a "hierarchical" structure similar to a recent construction [36]; however, unlike [36], LRESCs are LDPC stabilizer codes, employing as little non-locality as possible. Moreover, as we will explain, these codes are well suited for implementation using multiple different architectures for quantum computation, as the specific form of non-locality required by LRESCs is far more efficient to implement than a generic qLDPC code. 
## II The LRESC ### Construction We begin by summarizing intuitively the structure of LRESCs; technical details are provided in appendices. Our construction consists of three parts, visualized in Fig. 2. (_1_) First, begin with a good classical LDPC (cLDPC) code [37; 38], which uses \(L_{0}\) classical bits to store \(O(L_{0})\) logical bits (it is thus constant rate), with \(\Theta(L_{0})\) distance. In this paper, we will focus on relatively small code sizes where \(L_{0}\sim 3-10\), both for pedagogy and near-term relevance. Note that this good cLDPC code will require non-local parity checks between the classical bits to ensure constant rate. Appendix A overviews classical codes. (_2_) Next, we increase the number of classical bits: \(L_{0}\to cL_{0}=L\), while proportionally increasing the distance of the code to \(\Theta(L)\) and keeping the number of logical bits fixed at \(O(L_{0})\). We can do so by replacing the bits of our starting code with another classical code, such as the repetition code, which stores a single logical bit in \(c\) physical bits, with codewords \(0\to 0\cdots 0\) and \(1\to 1\cdots 1\). The repetition code has spatially local parity checks when the bits are laid out in one dimension: the parity checks demand that nearest-neighbor bits agree. We thus build a _concatenated_ code by replacing the cLDPC "physical bits" with repetition codes of length \(c\). There is no code with fewer non-local edges in one spatial dimension that has \(O(L_{0})\) logical bits and \(O(L)\) code distance (see Appendix A). Decoding this classical code has a simple interpretation: we first decode each repetition code, and then decode the size-\(L_{0}\) LDPC using the state of each repetition code as an effective "physical bit". (_3_) We now build a quantum code by taking the _hypergraph product_ (HGP) of this classical concatenated code with itself. A formal definition of the HGP is technical and relegated to Appendix B; Fig. 2 sketches the idea. We lay out two copies of the classical code of length \(L\) along the \(x\) and \(y\) directions in the plane. Based on the connections between checks and physical bits of the classical code, we lay out physical qubits and \(X\)- and \(Z\)-type stabilizers of the quantum code in two dimensions. Note that the hypergraph product of two classical repetition codes is the quantum surface code. Since our classical codes contain repetition code segments, our quantum code consists of two-dimensional surface code patches. Long-range parity checks from step (_1_) induce stabilizers that connect distant patches in the code, while ensuring that each stabilizer itself has low weight (is a product of \(O(1)\) \(X\)s or \(Z\)s). These are the LRESCs. The total number of physical qubits is \(n\sim L^{2}\), the quantum code distance is \(d\sim L\), and the number of logical qubits is \(k\sim L_{0}^{2}\). Alternatively, we have constructed a code with \(d\sim\sqrt{n}\), just like the surface code, but where we have added \(k\) logical qubits at the cost of adding \(\sim L\sqrt{k}\) long-range stabilizers. 
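To make Steps 1-3 concrete, the sketch below (Python with NumPy; all helper names are ours) builds the parity-check matrix of an outer code concatenated with repetition codes and then forms the standard hypergraph product check matrices \(H_X=[H_1\otimes I\;|\;I\otimes H_2^T]\) and \(H_Z=[I\otimes H_2\;|\;H_1^T\otimes I]\). With the \([3,2,2]\) outer code and \(c=4\), the qubit count reproduces the \([\![244,4,8]\!]\) LRESC discussed later.

```python
import numpy as np

def repetition_H(c):
    """(c-1) x c parity-check matrix of the [c,1,c] repetition code."""
    H = np.zeros((c - 1, c), dtype=int)
    for i in range(c - 1):
        H[i, i] = H[i, i + 1] = 1       # nearest-neighbor parity checks
    return H

def concatenated_H(H_outer, c):
    """Checks of the outer code concatenated with length-c repetition codes
    (Step 2): local checks inside each block, plus one long-range check per
    outer check acting on one representative bit per block."""
    m, n = H_outer.shape
    rows = []
    for b in range(n):                  # local repetition checks
        for i in range(c - 1):
            r = np.zeros(n * c, dtype=int)
            r[b * c + i] = r[b * c + i + 1] = 1
            rows.append(r)
    for a in range(m):                  # long-range outer checks
        r = np.zeros(n * c, dtype=int)
        r[np.flatnonzero(H_outer[a]) * c] = 1
        rows.append(r)
    return np.array(rows)

def hgp(H1, H2):
    """Hypergraph product (Step 3): returns the CSS check matrices."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)]) % 2
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))]) % 2
    return HX, HZ

H = concatenated_H(np.array([[1, 1, 1]]), 4)   # outer [3,2,2], inner [4,1,4]
HX, HZ = hgp(H, H)
assert not ((HX @ HZ.T) % 2).any()             # X and Z checks commute
print(HX.shape[1])                              # 244 physical qubits
```

As a sanity check, `hgp(repetition_H(5), repetition_H(5))` yields the \(\llbracket 41,1,5\rrbracket\) surface code of Fig. 1.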
### Bounding nonlocality There is no code that has parametrically fewer long-range stabilizers in 2D than an LRESC, while maintaining the same code distance \(d\sim\sqrt{n}\). To see why, start with a code with no long-range stabilizers; the precise cutoff for "long" is unimportant. [32] tells us that there is at most a finite number \(k_{0}=O(1)\) of logicals, such that \(k_{0}d^{2}\leq Kn<(k_{0}+\delta)d^{2}\) for \(K,\delta=O(1)\). To increase the number of logical qubits from \(k_{0}\), we will need to add longer-range interactions: how many are required? Suppose that by modifying \(m-\tilde{k}\) of our local stabilizers to be spatially non-local (but still low weight, i.e. few Paulis), and removing \(\tilde{k}\) stabilizers, we can add \(\tilde{k}>\delta\) logical qubits without sacrificing code distance \(d\). To bound how small \(m\) can be, consider erasure errors on the \(O(m)\) physical qubits contained in these long-range stabilizers. By removing the erased qubits, we are left with a code that is protected by local stabilizers, obeys the assumptions of [32], and has a distance of at least \(d-m^{\prime}\), where \(m^{\prime}=O(m)\). Importantly, this residual code completely contains all the logical information [39]. The bound of [32] implies that \((k_{0}+\tilde{k})(d-m^{\prime})^{2}<(k_{0}+\delta)d^{2}\). If \(k_{0}=O(1)\), we thus need \(m\gtrsim d\). To add logical qubits to a surface code, \(\sqrt{n}\sim L\) long-range stabilizers are required. The LRESC achieves this scaling, and is asymptotically as local as possible for finite \(k>k_{0}\). ### Quantum error correction Quantum error correction (QEC) for stabilizer codes is typically done by extracting the eigenvalues of all stabilizers, which can be deduced by measuring a set of generators called the check set; the outcomes of these measurements comprise the error syndrome. Decoding then proceeds by finding a suitable correction operator according to the syndrome. The combination of the original error and the correction then either leaves the codespace unchanged (success) or enacts an undesirable logical operation (failure). We conduct numerical simulations of QEC, using both a code-capacity (clean syndromes) and a phenomenological (noisy syndromes) noise model under a local, stochastic depolarization channel (single-qubit \(X\), \(Y\), or \(Z\) errors are equally likely) with probability \(p\): see Fig. 3. For the phenomenological noise model, we assume that syndromes with weight \(w\) are incorrectly measured with probability \(wp\); this mimics the experimental way that such syndromes are measured, as we will explain later. We implement a single-stage decoder utilizing belief propagation with ordered-statistics [40] post-processing (BP+OSD): the "min-sum" and "combination-sweep" (\(\lambda=20\)) variants of BP and OSD are used respectively [41]. Syndrome errors are accounted for by adding an additional variable node for each check node in the Tanner graph [42; 43]. For the phenomenological model, 100 noisy QEC cycles are performed, where a clean cycle is performed internally after each noisy cycle to ensure that residual errors are successfully controlled. For the LRESCs, we use parent codes (1) \([3(4),2,2(4)]\), (2) \([6(2),2,4(2)]\), and (3) \([8(3),4,4(3)]\), where \([n^{\prime}(c),k^{\prime},d^{\prime}(c)]\) is short for an outer \([n^{\prime},k^{\prime},d^{\prime}]\) code concatenated with an inner \([c,1,c]\) repetition code. These LRESCs have parameters \([\![244,4,8]\!]\), \([\![244,4,8]\!]\), and \([\![976,16,12]\!]\), respectively, with common rate \(k/n\approx 1.64\%\) (61 physical qubits per logical qubit). 
Each successive LRESC contains more long-range interactions than the previous: see Table 1.

| LRESC | edges | LR edges (embedded) | LR ratio |
| --- | --- | --- | --- |
| \([\![244,4,8]\!]\) | 924 | 60 (20) | 6.5% (2.2%) |
| \([\![244,4,8]\!]\) | 1056 | 528 (176) | 50% (16.7%) |
| \([\![976,16,12]\!]\) | 4224 | 1408 (704) | 33.3% (16.7%) |

Table 1: The number of long-range interactions vs. the total number of pairwise interactions for three LRESCs. Chosen embeddings of the Tanner graphs lower the number of long-range interactions (in parentheses).

The performance of the single-stage BP+OSD decoder on these LRESCs is compared with those of \(d=5,7,9\) surface codes on a similar layout. We observe that the first two LRESCs perform similarly to surface codes of similar distance under the code-capacity model. For phenomenological noise, however, the second LRESC performs significantly better than the first. The third LRESC lowers the logical error rate by another order of magnitude. With an encoding rate of 61 physical qubits per logical qubit, the LRESCs begin to significantly outperform the surface codes of similar rate. With hundreds of physical qubits, an LRESC can reach the break-even point -- where the collective logical qubit is more stable than a single isolated qubit -- once one- and two-qubit operations are achieved with just above 99.9% fidelity, which in many platforms is near-term [13, 14] or within reach [44, 45, 46]. A single-shot decoder is, of course, not the optimal decoder for the surface code compared to one which utilizes the syndrome measurement histories of previous rounds. Nonetheless, we use the same decoder for all codes in order to benchmark the benefit of long-range interactions. We discuss possible avenues for improved decoding in the outlook.

Figure 3: QEC performance is numerically estimated using a single-stage BP+OSD decoder for three surface codes with increasing distance as well as LRESCs with increasing long-range connectivity (LRESCs 1, 2, 3 have parent codes \([3(4),2,2(4)]\), \([6(2),2,4(2)]\), \([8(3),4,4(3)]\) respectively). Left: \(\sim 10^{5}\) clean QEC cycles are averaged per data point. Right: \(\sim 10^{4}\) samples of 100 noisy QEC cycles are averaged per data point. Uncertainties are given by standard errors. A clean cycle is performed internally after each noisy cycle to probe the residual errors. The break-even line is plotted in dotted black. Note that to compare, e.g., the surface codes to LRESC 3, the number of physical qubits should be multiplied by 16, as one would store 16 logical qubits in 16 decoupled surface codes (vs. 16 logical qubits in a single LRESC). 
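To illustrate the code-capacity noise model concretely, the following sketch (reusing the `hgp` and `concatenated_H` helpers from the earlier code block; no decoder is included) draws a random depolarizing error and computes the syndrome that a decoder such as BP+OSD would receive.

```python
import numpy as np

rng = np.random.default_rng(1)
H = concatenated_H(np.array([[1, 1, 1]]), 4)    # parent code of LRESC 1
HX, HZ = hgp(H, H)                              # [[244,4,8]] check matrices
n = HX.shape[1]

p = 0.05                                        # physical error rate
# Depolarizing channel: X, Y, Z equally likely on each qubit (0 = identity).
err = rng.choice(4, size=n, p=[1 - p, p / 3, p / 3, p / 3])
x_part = ((err == 1) | (err == 2)).astype(int)  # X or Y has an X component
z_part = ((err == 3) | (err == 2)).astype(int)  # Z or Y has a Z component

syn_Z = HZ @ x_part % 2                         # Z-type checks flag X errors
syn_X = HX @ z_part % 2                         # X-type checks flag Z errors
print(syn_Z.sum(), syn_X.sum())                 # number of violated checks
```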
### Logical operators To understand why the LRESC not only stores more logicals, but also has reduced logical error rates, we need to understand how LRESCs encode logical qubits. As hinted at previously, since logical operators _locally_ look like repetition codes in the concatenated cLDPC codes (Step 2 above), in the HGP, logical operators _locally_ look like surface code logicals, which are strings of Pauli \(X\) or \(Z\) stretching across a surface code patch. What differs from the usual surface code is the global structure of the logical operator: i.e. how strings in different patches are joined together. A sketch is shown in Fig. 4, with technical details in Appendix C.

Figure 4: A 2D layout of an LRESC with an outer \([5,2,3]\) parent code is shown. Smooth (\(X\)-type) and rough (\(Z\)-type) boundaries are located on the left (thick red) and bottom (thick blue) sides of each patch respectively, with long-range boundaries on the other (magenta) sides. Four logical operators \((X_{1},Z_{1},X_{4},Z_{4})\) corresponding to two logical qubits are drawn (solid and dashed curves). A \(Z\)-type stabilizer and an \(X\)-type error string are also shown (dotted red and blue).

In a nutshell: the simplest logical operator in an LRESC corresponds to strings in \(O(\sqrt{k})\) of the surface code patches, corresponding to an analogous logical codeword of our cLDPC from Step 1 above. We can intuitively understand why LRESCs are more effective at protecting logical information by showing that no matter how a logical error forms via local processes, during the formation of the error, we always violate more check operators than in an ordinary surface code. Since more checks are violated, we have more opportunities to catch the physical qubit errors before they introduce a logical error. In the surface code, we can create a logical error by introducing a physical error near one boundary and then causing a cascade of additional errors on adjacent sites, i.e. growing a logical string in Fig. 4. At any step during this process, for the ordinary surface code, only one check is violated, meaning the error is almost undetected. In condensed matter physics, we can interpret this as an _anyonic_ particle that is free to diffuse around the system. In the LRESC, we can similarly grow an error through a single patch; however, when the error hits the long-range boundary, it will flip _multiple_ checks in adjacent patches (anyons are not conserved across the long-range boundaries of the LRESC). The rules for anyon splitting are discussed in Appendix C. Since the error must grow across multiple patches to constitute a logical, we must inevitably flip more checks during the formation of the logical error, implying that it is easier to detect. ### Logical gates Implementing one- and two-qubit logical gates on an LRESC is (in principle) quite simple. While it might be possible to directly apply logical operations on the non-locally encoded qubits, e.g. using methods from [47, 48], we can also readily organize one of our logical qubits into a contiguous surface-code patch (e.g. moving surface-code patches in Fig. 4 so that a logical string becomes "continuous" and adjacent to the global boundaries), through which it can be passed into a traditional surface code via lattice surgery. Note that one will require surface code patches of \(O(n)\) physical qubits to implement logical gates on \(O(1)\) logical qubits passed out of the LRESC. Once a logical is in an ordinary surface code patch, standard methods [49] can then be used to apply all logical Clifford operations in a fault-tolerant way. This process can be repeated to pass multiple qubits into surface code patches, onto which two-qubit gates can be fault-tolerantly applied. To apply non-Clifford gates once a qubit has been moved into the traditional surface code patch, magic state distillation [50, 51] may be required, though [52, 53, 54] provide alternatives. 
### Weight balancing There are two simple but practical enhancements to the LRESC described thus far, obtained by modifying the parent codes. In Step 2 of the LRESC construction, notice that each "physical bit" of the cLDPC from Step 1 consists of a repetition code, but we assigned all of the "long-range" parity checks to a single bit. We can instead evenly distribute these parity checks to different bits inside of the repetition code: see Fig. 5. So long as \(c\) is larger than the maximal number of parity checks per bit of the cLDPC (Step 1), this will mean that each physical bit is involved in at most one long-range parity check in Step 2. The second modification is to introduce auxiliary bits into the parent codes in order to reduce the weight of each long-range parity check. The parity-check constraints of a classical code can be reformulated as a boolean satisfiability problem (SAT). It is well known in computer science that any SAT problem can be decomposed into conjunctions of smaller SATs of maximum size three (3-SAT), with the potential of introducing some auxiliary bits. Moreover, this SAT \(\rightarrow\) 3-SAT decomposition can be performed in polynomial time [55]; for our linear constraints, this decomposition takes a particularly simple form: see Fig. 5. When we apply this decomposition to the parity checks of a classical code, we obtain new parity checks with bounded weights \(\leq 3\) acting on the combination of our original physical bits and some new auxiliary bits. Importantly, the code distance remains unchanged, though the relative distance may decrease by an \(O(1)\) factor if this method is applied to all checks. At the quantum level, this decomposition bears resemblance to a measurement-only version of Shor's cat-state syndrome extraction circuit [56], where we have included the cat-state ancillas and measured operators as auxiliary qubits and new stabilizer checks, respectively. The modified parent codes will now have at most weight-3 parity checks, with each physical bit participating in at most one long-range interaction. Furthermore, by arranging each long-range parity check to be adjacent to an endpoint of a repetition-code segment, we can always localize at least one of its long-range edges. In turn, the LRESCs will contain at most weight-6 stabilizer checks, with each physical and ancilla qubit participating in at most 4 long-range interactions. These two "weight-balancing" procedures are particularly advantageous for experimental implementations, as we will discuss.

Figure 5: The two different weight-balancing procedures are depicted. Top: a \([5(2),2,3(2)]\) code is modified so that all physical bits participate in at most one long-range parity check. Bottom: a weight-5 parity check is decomposed into three weight-3 checks with two additional auxiliary bits (gray circles). 
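The linear-constraint version of the SAT \(\to\) 3-SAT trick described above is simple enough to sketch directly: a weight-\(w\) parity check is split into a chain of \(w-2\) weight-\(\leq 3\) checks coupled through \(w-3\) auxiliary bits (two auxiliaries for the weight-5 check of Fig. 5). The helper below is an illustration of ours, not code from the paper; summing the output checks mod 2 eliminates the auxiliaries and recovers the original constraint.

```python
import numpy as np

def decompose_check(row, n):
    """Split one parity check (a 0/1 row over n bits) into weight-<=3 checks
    chained via auxiliary bits: (x1+x2+a1), (a1+x3+a2), ..., (a_k+x_{w-1}+x_w).
    Returns the list of check supports; auxiliary bits get indices n, n+1, ..."""
    support = [int(i) for i in np.flatnonzero(row)]
    if len(support) <= 3:
        return [support]                 # already low weight
    checks = [support[:2] + [n]]         # x1 + x2 + a1 = 0
    aux = 0
    rest = support[2:]
    while len(rest) > 2:                 # a_k + x + a_{k+1} = 0
        checks.append([n + aux, rest.pop(0), n + aux + 1])
        aux += 1
    checks.append([n + aux] + rest)      # a_last + x_{w-1} + x_w = 0
    return checks

# The weight-5 example of Fig. 5: three weight-3 checks, two auxiliaries.
print(decompose_check(np.array([1, 1, 1, 1, 1]), n=5))
# [[0, 1, 5], [5, 2, 6], [6, 3, 4]]
```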
## III Experimental implementations

Typically, experimental design of quantum hardware has been strongly limited by the choice of QEC code and its resulting requirements on circuit connectivity. LRESCs imply that one can exploit the tunable addition of non-locality on top of the most local of codes, the surface code, once improvements in physical error rates, or increases in physical qubit number, have been exhausted.

Figure 4: A 2D layout of an LRESC with an outer \([5,2,3]\) parent code is shown. Smooth (\(X\)-type) and rough (\(Z\)-type) boundaries are located on the left (thick red) and bottom sides (thick blue) of each patch respectively, with long-range boundaries on the other (magenta) sides. Four logical operators \((X_{1},Z_{1},X_{4},Z_{4})\) corresponding to two logical qubits are drawn (solid and dashed curves). A \(Z\)-type stabilizer and an \(X\)-type error string are also shown (dotted red and blue).

Such a theoretical advance offers a timely new tool for improving the performance of state-of-the-art platforms, including superconducting qubits [20; 21; 22; 23; 24], trapped ions [15; 16; 17; 18; 19] and neutral-atom arrays [8; 9; 10; 11; 12; 13; 14] since, as we now explain, the specific type of non-locality needed for the LRESC is relatively mild in multiple experimental platforms. In superconducting circuits, engineered circuit graphs have been used to simulate many-body physics in novel geometries [57]. To realize the LRESC, one must use multiple planes of wiring [58], and we expect that this construction is doable for modest values of \(k\) (i.e. not encoding too many logical qubits). For devices with larger values of \(k\), we can also employ fault-tolerant quantum repeater networks [59; 60] to teleport ancilla qubits down a strictly two-dimensional architecture. The number of such quantum repeater rounds is constrained by the requirement that we cannot pass two logical qubits "through each other". This latter construction is quite similar to the "hierarchical codes" recently discussed in [36].

### Trapped ions

While a superconducting-qubit-based quantum computer may take advantage of LRESCs or hierarchical codes, we believe that the LRESC is significantly more optimized for alternative architectures. One such architecture is the Quantum Charge-Coupled Device (QCCD) approach [61] to quantum computation with trapped ions. This architecture relies on a trap device capable of confining multiple one-dimensional arrays of ions. Within these so-called "ion crystals", multi-qubit operations are achieved through laser- or microwave-induced spin-motion couplings. To facilitate interactions between ions initially residing in separate ion crystals, the architecture requires real-time shuttling, splitting, and merging operations of ion crystals that occur on timescales fast compared to the coherence time of the data qubits. This dynamic control over system connectivity is made possible through precise manipulation of the electric fields that generate the trapping potentials. The high operational fidelities (up to 99.9999% single-qubit fidelity [62] and 99.94% two-qubit fidelity [63]) have allowed fault-tolerant demonstrations of quantum error-correcting codes encoding a single logical qubit in small-scale QCCD processors [64; 65]. As systems with 100s of controllable qubits become available in the near future, it will be feasible to incorporate LRESCs in the QCCD architecture. In particular, if state-of-the-art fidelities can be maintained for a large-scale device, then LRESC 3 from Fig. 3 surpasses the break-even point using 976 physical qubits, and a similar number of ancilla qubits, assuming the two-qubit fidelity from [63]. A possible implementation of LRESCs with trapped ions is shown in Fig. 6. The envisioned architecture is structured into multiple unit cells, each representing a surface code tile (yellow tile in Fig. 2). Within each unit cell, multiple interaction regions are designed to facilitate parallel single- and two-qubit gates. Every unit cell contains both the data qubits necessary for surface code operations and the necessary ancilla ions. Data qubits are transported between interaction regions to perform the necessary two-qubit gates.
During transport, unwanted motional excitations may arise due to imperfect control of the applied fields. To maintain high two-qubit fidelities, ancilla ions are then used to sympathetically re-cool an ion crystal following a transport operation.

Figure 5: The two different weight-balancing procedures are depicted. Top: a \([5(2),2,3(2)]\) code is modified so that all physical bits participate in at most one long-range parity check. Bottom: A weight-5 parity check is decomposed into three weight-3 checks with two additional auxiliary bits (gray circles).

Figure 6: LRESC implementation using a trapped-ion Quantum Charge-Coupled Device architecture. (a) A possible quantum processor is structured into multiple unit cells, each representing a surface code tile (yellow tile in Fig. 2). Each unit cell contains the necessary data qubits (black dots) and measure qubits for parity checks (blue and red rims). Ions are transported across different interaction regions (green-shaded areas) by precise control of the voltage applied to the trap electrodes (yellow boxes) to perform the necessary local operations. Each cell also contains additional ancilla qubits (not shown) used for re-cooling operations after transport operations. Sparse non-local operations required by the LRESC are performed via long-range transport of qubits across different cells (pink arrows). (b)-(c) Two possible ways to perform qubit permutation within the QCCD architecture.

While qubits primarily move within a unit cell, an LRESC requires sparse long-range operations. The architecture in Fig. 6 efficiently facilitates the parallel transport of multiple ions to different unit cells, thus minimizing the additional challenges associated with non-local operations. The main complexity lies in the optimal scheduling of gates and the required transport operations. The amount of ion transport can be significantly reduced if the system allows manipulation of ion crystals with more than two data qubits. Such a system would not only reduce scheduling complexity but is also likely to reduce the unit cell's size, as fewer interaction regions are needed. Consequently, it would also decrease the overall execution time, since the time required to transport and re-cool a crystal is generally longer than that of two-qubit gates in medium-size ion crystals [66, 67]. However, working with large ion crystals can add extra control challenges. State-of-the-art two-qubit gates between ions in long ion chains are generally slower than gates on two-qubit ion crystals and also yield lower fidelities [68, 69]. Furthermore, multi-qubit gates mediated by normal modes of motion cannot be easily executed in parallel. Therefore, we speculate that a likely optimal architecture implementing LRESCs will strike a compromise between the advantages offered by the QCCD architecture and those offered by the manipulation of medium-size ion chains. Depending on the details of the experimental apparatus (i.e. physical size of the quantum processor, qubit coherence time, maximum achievable transport speed and re-cooling times), long-range transport may cause an increased physical error rate due to the finite qubit coherence time and the longer time required for long-range ion shuttling. To mitigate this issue, teleportation of the qubit state can be employed to replace long-distance transport. This approach requires generating entangled Bell pairs between two distant regions of the quantum processor using schemes for remote entanglement generation [70, 71, 72].
This scheme would also be compatible with a modular ion-trap architecture [70] composed of multiple interlinked small devices, each with a limited number of qubits and correspondingly little computational power [70, 73].

### Neutral atom arrays

Perhaps the platform most likely to reap immediate benefits from LRESCs is reconfigurable atom arrays manipulated with optical tweezers [74, 75, 9, 10]. In particular, scaling to 100s of controllable qubits has already been demonstrated [76, 77, 78], while scaling to 1000s is a near-term prospect [79]; two-qubit gate fidelities of \(>98.5\%\) have been shown in multiple atomic species, with the state-of-the-art performance at 99.5% [13, 14]. Accordingly, this platform lies within an order of magnitude of the break-even point of an LRESC (see Fig. 3). Just as important, the optical methods used for atomic reconfigurability enable parallelism that is well-suited to the surface code and LRESCs [75, 11]. Fig. 7 illustrates a possible implementation of an LRESC using atom arrays. A static array -- formed with a spatial light modulator or optical lattice [77, 78, 79, 80, 75] -- holds atomic data qubits. The measure qubits that yield \(X\) and \(Z\) parity checks (blue and red rims) sit on a grid of traps rotated 45 degrees from the \(x/y\) axes. This array of traps is formed with crossed acousto-optic deflectors (AOD1-MQ, AOD2-MQ in Fig. 7b) driven with a comb of radio-frequencies. This entire array can be moved by adding an overall offset frequency to the comb of tones inside each deflector, allowing any rigid array translation in the \(x\)-\(y\) plane (see the short sketch below). Such moves are used to bring all measure qubits into proximity with the appropriate neighbor, in order to exploit short-range Rydberg-mediated interactions for parallelized two-qubit gates (orange-dashed lines) [75, 11]. Due to the short distance scales and the use of AODs, each stepwise move of the surface code (top of Fig. 7) can be executed in \(\lesssim 10\mu\)s. The non-local gates that underlie LRESCs likewise can be implemented in a straightforward fashion, with one adjustment. A pair of crossed AODs (Fig. 7b), AOD1-NL and AOD2-NL, can be used for row translations along the \(y\) direction (step 5 in Fig. 7a), as well as column translations along the \(x\) direction (step 6). For the non-local gates, both measure and data qubits are moved, which necessitates qubit transfers between different optical potentials. Such methods have been demonstrated and can be done while preserving coherence [81, 82], yet come at the price of longer timescales (\(\sim 100\mu\)s) to mitigate motional heating. In addition to allowing the core components of LRESCs, the atom array platform is compatible with other, more general needs of QEC. Initialization of the qubit array into the set of optical potentials discussed can be accomplished with atomic rearrangement [80, 83, 84]. Parity checks on the measure qubits will require mid-circuit readout and reset. This can be done in-situ using qubit shelving methods, as recently demonstrated for \({}^{171}\)Yb, or by using mixed atomic species [85, 86, 87] -- this circumvents the need for large moves and zoned read-out [66, 75]. Lossless state detection of neutral atoms can be slow (at best, a few milliseconds [86]); this timescale can be improved using destructive state detection [14, 88], which is then paired with a qubit reservoir for rapid replenishment [87, 89].
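As a toy illustration of the AOD comb addressing described above (our own sketch; the linear frequency-to-position calibration \(\alpha\) is an assumed, hardware-dependent constant, and real deflectors require a measured calibration curve):

```python
import numpy as np

alpha = 0.5                                  # assumed calibration, um per MHz
f_x = np.array([80.0, 85.0, 90.0, 95.0])     # tone comb in the first AOD (MHz)
f_y = np.array([80.0, 85.0, 90.0])           # tone comb in the second AOD (MHz)

def trap_grid(df_x=0.0, df_y=0.0):
    """(x, y) trap coordinates from the tone combs; a common offset
    frequency df shifts every tone equally, i.e. a rigid translation."""
    x = alpha * (f_x + df_x)
    y = alpha * (f_y + df_y)
    return np.array([(xi, yi) for xi in x for yi in y])

g0 = trap_grid()
g1 = trap_grid(df_x=2.0)          # add +2 MHz to every x tone
# Every trap moves together by alpha*2 um in x; relative spacings are fixed.
assert np.allclose(g1 - g0, [alpha * 2.0, 0.0])
```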
High-fidelity single- and two-qubit gates can be accomplished at low cross-talk with the qubit separations illustrated, using a combination of tightly-focused laser beams and globally-addressing fields [13, 14, 86, 90, 91, 92, 93, 94]. Qubit loss -- a prevalent error channel during two-qubit gates and measurement -- can be mitigated using syndrome extraction circuits and three-outcome measurements [95]. Finally, the weight-balancing procedures described earlier (Sec. 2.6), which allow for reducing the number of qubits per check (and checks per qubit), are relevant for the implementation of LRESCs in atom arrays. So long as each qubit participates in at most 4 long-range interactions, a single physical qubit will be involved in at most 8 rounds of row and column swaps (4 local and 4 non-local) to couple all corresponding physical and ancilla qubits during syndrome extraction for one round of QEC. Using a single AOD each for long-range row and column permutations, this may require \(O(\sqrt{k})\) sequential swaps. These swaps could be further parallelized by carefully arranging the long-range edges, or by adding additional AODs, though we leave further optimization for future work.

## IV Outlook

We have described the LRESC: a minimal generalization of the surface code capable of encoding multiple logical qubits without sacrificing code distance. The LRESC is well-suited for near-term hardware, where we anticipate that our fault-tolerant code might be realizable within the next few years. An immediate direction for future work is to design a better decoder for LRESCs. Depending on qubit shuttling times, a more sophisticated two-stage decoder could be designed as follows. (1) Perform multiple rounds of local syndrome measurements in the surface code patches while waiting for the long-range syndrome measurements to complete [96]. (2) Use one's favorite standard decoder (e.g. MWPM [97] or Union-Find [98]) for the multi-round syndromes within the surface-code patches and feed the output decisions into a single-stage BP+OSD decoder for global decoding. In this manner, one can strike a balance between the "fast" (but less robust) checks of the surface code and the "slower" (but more robust) long-range checks. Another future direction is to design fault-tolerant logical operations beyond what is known in the literature. Perhaps our physics-inspired analysis of the LRESC can lead to generalizations of the methods known for the surface code. The construction of the LRESC also opens possible avenues to investigate new quantum phases of matter. In particular, it suggests that new "topological phases" are enabled using only a small density of long-range interactions, and can thus be investigated in experiment. In the longer term, a large-scale LRESC in which \(k\sim n\) may also be the foundation for an autonomous self-correcting quantum memory. Indeed, our proposed architecture may well represent a more convenient strategy for passive error correction versus a four-dimensional toric code [97]. It may also be more amenable to single-shot error correction than three-dimensional single-shot codes [99, 100, 101].

_Note Added.--_ As we were preparing this manuscript, a preprint [102] appeared, which also describes how hypergraph product codes can be realized efficiently in neutral atom arrays.

## Acknowledgements

We thank Evan Wickenden and Charles Stahl for useful discussions, and Jeff Thompson for a careful reading of the manuscript.
This work was supported by the Office of Naval Research via Grant N00014-23-1-2533 (YH, AMK, AL), the Alfred P. Sloan Foundation via Grant FG-2020-13795 (AL), NIST (AMK) and the Swiss National Science Foundation under grant 211072 (MM).

Figure 7: LRESC implementation on a neutral-atom-based processor using Rydberg-mediated interactions. (a) Data qubits (black circles) and measure qubits (blue and red rims) are initialized in a static, ordered 2D array generated by a spatial light modulator or optical lattice. Local parity checks are performed with sequential two-qubit gates (orange-dashed lines) applied to all measure/data qubits in parallel, where each measure qubit is transported into close proximity with a neighbor qubit (steps 1-4) using fast crossed acousto-optic deflectors (AODs). Another pair of crossed AODs is used to perform non-local operations by transporting data and measure qubits between different locations of the quantum processor. (b) Two different pairs of crossed AODs are used for short-range and long-range atom transport, respectively labeled as MQ and NL. Arrows represent the transport direction for a varying radio-frequency offset in each AOD.

## Appendix A Classical LDPC codes

We begin by reviewing classical low-density parity-check (cLDPC) codes [37], which play an important role in our construction. A classical linear code \(\mathcal{C}\) is specified by a set of constraints called parity checks and a set of codeword generators satisfying those constraints. The state of the system can be represented as an element of \(\mathbb{F}_{2}^{n}\), where \(\mathbb{F}_{2}=\{0,1\}\), and in \(\mathbb{F}_{2}\), \(1+1=0\). We often represent the parity checks as rows of an \(\mathbb{F}_{2}\)-valued _parity-check matrix_ \(H\) and logical codewords as rows of a matrix \(G\). The statement that the codewords satisfy the parity-check constraints becomes \(HG^{\mathsf{T}}=0\). The dual code \(\mathcal{C}^{\perp}\) is defined as the code where \(G\) and \(H\) are swapped. We say a linear code is LDPC if its parity-check matrix \(H\) is sparse: the number of ones per row and column is bounded by a constant irrespective of \(n\). The code is useful if \(G\) is not sparse: the code distance \(d\) is the smallest number of \(1\)s in a codeword. We can represent any linear code as a bipartite _Tanner graph_, drawing an edge between a "variable node" \(v\) and a "check node" \(c\) if the corresponding element of \(H\) is non-zero: \(H_{cv}=1\). The Tanner graph of a repetition code is depicted in Fig. 8. All linear codes satisfy the _Singleton bound_: \[k\leq n-|C| \tag{10}\] where \(C\) is a correctable region satisfying \(|C|\geq d-1\). Correctable here means that all codewords can be successfully recovered upon erasure of \(C\). A code generated by a random sparse \(H\) has \(k\sim d\sim n\) with high probability [38]; its corresponding Tanner graph is an asymptotically good expander. However, if we arrange the variable nodes locally in one dimension, such a code will necessarily involve checks \(c\) that are non-local. If we enforce geometric locality in \(D\)-dimensional Euclidean space, then the code parameters must satisfy [32] \[kd^{1/D}\lesssim n\,. \tag{11}\] The sketch of the proof in \(D=1\) is as follows. The idea is to partition the 1D chain into disjoint, correctable segments \(C_{i}\) of length \(|C_{i}|\approx d\) where the separation between each segment is large enough (say \(r\)) so that no parity check acts in more than one segment: see Fig. 9.
Since the correctable segments do not share any checks, their union is entirely correctable. The Singleton bound (10) then imposes that \(k\leq n-|C|=|\bar{C}|=O(rn/d)\), and we thus arrive at (11) for \(r=O(1)\) and \(D=1\). Now suppose we add in \(\ell\) long-range connections to surpass (11). We can simply avoid the long-range edges and partition the rest of the chain as before, arriving at \(|C|\rightarrow|C|-O(\ell)\) and thus \(k\to k+O(\ell)\). Hence, the number of logical bits \(k\) can scale at most linearly with the number of long-range connections \(\ell\). We now saturate the asymptotic constraints above with a cLDPC of \(d\sim n\), \(k\) logical bits, and \(O(k)\) long-range checks. A \([n^{\prime}c,k^{\prime},d^{\prime}c]\) code is produced from the concatenation of an "outer" \([n^{\prime},k^{\prime},d^{\prime}]\) code with an "inner" \([c,1,c]\) repetition code of variable length \(c\) (denoted \([n^{\prime}(c),k^{\prime},d^{\prime}(c)]\)): see Step 1 of Fig. 2. Concatenation means that we connect a single bit of each inner repetition code to the parity checks of the outer \([n^{\prime},k^{\prime},d^{\prime}]\) code. This concatenation procedure can also be interpreted as first cutting up a 1D repetition code into disconnected segments and then reconnecting these segments with long-range interactions. The only long-range checks come from the outer code, and if it is a "good" \([n^{\prime},k^{\prime},d^{\prime}]=[O(k^{\prime}),k^{\prime},\Theta(k^{\prime})]\) cLDPC code, the concatenated code has parameters \([O(ck^{\prime}),k^{\prime},\Theta(ck^{\prime})]\) with \(O(k^{\prime})\) long-range connections, which is parametrically optimal. Since we are allowed to attach the long-range edges to _any_ bits of the inner repetition codes, we have some flexibility in designing the long-range couplings (recall Sec. 2.6). This concatenation procedure can be considered as a "dual" variant of the edge-augmentation construction of [103]: instead of having the repetition codes live on the edges of a cLDPC code, we attach them to the variable nodes themselves. For a cLDPC with average vertex degree \(\bar{w}\), the concatenated construction reduces the number of surface-code patches by a factor of \(\bar{w}^{2}\) compared to the approach in [103]. As we will later see, the "hierarchical" structure of concatenated codes also allows the dynamics to be factorized in a systematic manner: we can analyze the dynamics within the inner and outer codes separately.

Figure 8: The Tanner graph of an \(n=6\) repetition code is illustrated. The circles and squares represent nodes (bits) and factors (parity checks) respectively.

Figure 9: A 1D chain is partitioned into disconnected, correctable segments (blue) of length \(\approx d\) with separation \(\approx r\).

## Appendix B Hypergraph product codes

Using an \(\mathbb{F}_{2}^{2n}\) representation for Paulis, the stabilizer checks of a CSS code can be represented by the parity-check matrix \[H=\begin{pmatrix}H_{X}&0\\ 0&H_{Z}\end{pmatrix} \tag{12}\] where commutativity requires \(H_{Z}H_{X}^{\mathsf{T}}=0\). We use the hypergraph product (HGP) construction [25] to produce a quantum CSS code from two classical linear codes. Specifically, suppose we have two classical codes with parameters \([n_{1},k_{1},d_{1}]\), \([n_{2},k_{2},d_{2}]\) and parity-check matrices \(H_{1}\), \(H_{2}\) respectively.
The associated HGP code has parity-check matrices defined as \[H_{X} =(H_{1}\otimes\mathds{1}\ \ |\ \ \mathds{1}\otimes H_{2}^{\mathsf{T}}) \tag{11a}\] \[H_{Z} =(\mathds{1}\otimes H_{2}\ \ |\ \ H_{1}^{\mathsf{T}}\otimes\mathds{1})\,. \tag{11b}\] By construction, the orthogonality constraint \(H_{Z}H_{X}^{\mathsf{T}}=0\) is automatically satisfied. If \(H_{1}\) and \(H_{2}\) have full rank (no redundant parity checks), then the HGP code has parameters \(\llbracket O(n_{1}n_{2}),k_{1}k_{2},\min(d_{1},d_{2})\rrbracket\). Geometrically, the Tanner graph of the HGP code takes the form of a graph product between those of the two classical parent codes. Given two graphs \(\mathcal{G}_{1}=(V_{1},E_{1})\) and \(\mathcal{G}_{2}=(V_{2},E_{2})\), the product graph \(\mathcal{G}_{1}\times\mathcal{G}_{2}\) is a graph with vertices labeled by pairs \((x,y)\) where \(x\in V_{1}\) and \(y\in V_{2}\). Two vertices \((x,y)\), \((x^{\prime},y^{\prime})\) are connected by an edge if either \(x=x^{\prime}\) and \(\{y,y^{\prime}\}\in E_{2}\) or \(y=y^{\prime}\) and \(\{x,x^{\prime}\}\in E_{1}\). The steps to convert this product graph into a CSS Tanner graph are:

1. If the vertex of the product graph is of the form (node, node) or (factor, factor), then that vertex becomes a node representing a physical qubit.
2. If the vertex of the product graph is of the form (node, factor), then that vertex becomes a factor representing an \(X\) stabilizer.
3. If the vertex of the product graph is of the form (factor, node), then that vertex becomes a factor representing a \(Z\) stabilizer.

Importantly, if the two parent codes are LDPC, then so is the resultant HGP code. If the two parent codes can be locally embedded in \(D_{1}\) and \(D_{2}\) spatial dimensions, then the HGP code can be locally embedded in \(D_{1}+D_{2}\) dimensions. The surface code is the HGP of two 1D repetition codes. The LRESC is simply the HGP of the classical concatenated code defined earlier with itself. Its parameters are \(\llbracket O(c^{2}k^{\prime 2}),k^{\prime 2},\Theta(ck^{\prime})\rrbracket\) with \(O(ck^{\prime 2})\) long-range interactions. Denoting \(L\equiv ck^{\prime}\) and \(k\equiv k^{\prime 2}\), the code parameters simplify as \(\llbracket O(L^{2}),k,\Theta(L)\rrbracket\) with \(O\left(L\sqrt{k}\right)\) long-range interactions. For \(k\ll n\), the 2D layout of this HGP code can be understood as patches of surface code of length \(O(c)\), whose boundaries are connected by long-range stabilizers: see Fig. 2. The graph product structure arranges these long-range interactions as parallel row and column couplings. In the surface code, the quantum Tanner transformation [104; 105] can reduce \(n\) to roughly \(\approx n/2\) while maintaining the same distance, producing the so-called "rotated surface code" with parameters \(\llbracket L^{2},1,L\rrbracket\). The idea is to multiply adjacent checks of the same type in order to produce new checks which commute when restricted to a sublattice; the complementary sublattice can then be discarded: see Fig. 10. The Tanner transform of an LRESC will unfortunately introduce a "diagonal" interaction for every long-range 4-cycle in the original Tanner graph. If the parent code has \(O(k^{\prime})\) long-range edges, then the HGP code will contain \(O(k^{\prime 2})=O(k)\) additional "diagonal" interactions in a 2D layout. For small platforms, the factor of 2 reduction in overhead may still be advantageous despite the increase in routing complexity.
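As a concrete sanity check of the construction, the following minimal Python sketch (ours, written for illustration) builds \(H_X\) and \(H_Z\) from two repetition codes and verifies the CSS condition; the result is the 13-qubit distance-3 surface code:

```python
import numpy as np

def hgp(H1, H2):
    """Hypergraph product of two classical parity-check matrices over F_2,
    following Eq. (11): HX = (H1 x 1 | 1 x H2^T), HZ = (1 x H2 | H1^T x 1)."""
    r1, n1 = H1.shape
    r2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), H2.T)])
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(r2, dtype=int))])
    return HX % 2, HZ % 2

# Parity checks of the [3, 1, 3] repetition code.
H_rep = np.array([[1, 1, 0],
                  [0, 1, 1]])
HX, HZ = hgp(H_rep, H_rep)
assert not ((HZ @ HX.T) % 2).any()   # CSS commutativity over F_2
print(HX.shape, HZ.shape)            # (6, 13) (6, 13): 13 qubits, 12 checks
```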
## Appendix C Long-range boundaries and confinement

In this section, we characterize the structure of logical operators in LRESCs using concepts from condensed-matter physics. We show how anyon transport properties in the LRESC are related to domain-wall dynamics in the classical parent codes. Finally, we describe how the long-range boundaries in an LRESC can lead to anyon confinement and improved single-shot decoding.

### Logical operators and boundary dynamics

Interpreting the parity checks of the 1D repetition code (Fig. 8) as energetic terms in a Hamiltonian, we arrive at the 1D Ising model. A local bit flip in the 1D Ising model creates a pair of domain walls separating 1s and 0s. When these domain walls move via additional bit flips, the number of violated checks remains constant, and we say that the domain walls are "deconfined". These domain walls can then travel to opposite endpoints of the chain, flipping all physical bits in the process. Thus, a logical error in the 1D repetition code can be enacted with local processes while violating only \(O(1)\) checks. The concatenated codes mentioned earlier consist of an outer \([n^{\prime},k^{\prime},d^{\prime}]\) code and an inner repetition code. We now describe how the structure of the outer code dictates the dynamics of propagating domain walls. As a concrete example, suppose our outer code was a \([5,2,3]\) code with \[H=\begin{pmatrix}1&1&0&1&0\\ 0&1&0&0&1\\ 0&0&1&1&0\end{pmatrix}\quad,\quad G=\begin{pmatrix}1&0&1&1&0\\ 0&1&1&1&1\end{pmatrix} \tag{12}\] where we have expressed \(G\) in reduced row echelon (standard) form. Domain walls can freely propagate within each inner repetition code. However, upon hitting the long-range boundaries, these domain walls will excite the long-range checks of the outer cLDPC code: see Fig. 11. Satisfying the long-range checks requires locally exciting domain walls on other repetition code segments according to the codewords generated by \(G\). When a domain wall reaches a long-range check, we examine the codewords which contain a 1 at the position of its corresponding repetition-code segment. The other 1s in the codeword label the other segments which can spawn the additional domain walls, thereby satisfying all long-range parity checks. The minimum number of additional domain walls is \(d^{\prime}-1\), where \(d^{\prime}\) is the distance of the outer code. The HGP will produce two types of horizontal and vertical long-range boundaries: an \(X\)-type and a \(Z\)-type. For every long-range edge connecting a node and a check, the graph product will produce a long-range edge connecting a (node, node) \(\rightarrow\) qubit to a (check, node) \(\rightarrow\) \(Z\)-check or a (node, check) \(\rightarrow\) \(X\)-check to a (check, check) \(\rightarrow\) qubit. We denote an excitation of an \(X\) (\(Z\))-check as an \(e\) (\(m\)) anyon. Using these conventions, we can now analyze anyon transport through the long-range boundaries. Suppose we try to move an \(e\) particle through an \(X\)-type long-range boundary (by growing its "error" string of \(Z\)s). The combination of the original and newly emerging strings must overlap on an even number of sites with each long-range \(X\)-check. This constraint is satisfied precisely by the codewords generated by \(G\). If the code distance of the outer code is \(d^{\prime}\), then the \(e\) must split into at least \(d^{\prime}-1\) additional \(e\) particles.
Now if we try to move an \(e\) particle through a \(Z\)-type long-range boundary, we can simply multiply the error string by long-range \(Z\)-checks, which will extend the support of this error to additional surface-code patches given by \(H\). Growing the error strings in these other patches will create additional \(e\) particles. The rules for \(m\) particles follow analogously by switching the roles of \(X\) and \(Z\). For each surface-code patch labeled \((x,y)\) with the origin at the upper-left, we can arrange the rough (\(e\) absorbing), smooth (\(m\) absorbing) and long-range boundaries as depicted in Fig. 12. Rough boundaries are present at the bottom and smooth boundaries on the left. The top and right boundaries are the long-range boundaries. The anyon transport rules through the long-range boundaries can now be summarized as

* \(e\) anyons tunnel through horizontal (vertical) boundaries according to \(G\) (\(H\)).
* \(m\) anyons tunnel through horizontal (vertical) boundaries according to \(H\) (\(G\)).

So the tunneling of \(e\) (\(m\)) anyons through horizontal (vertical) boundaries is analogous to that of domain walls in the classical parent code: the codewords generated by \(G\) label the \(y\) (\(x\)) coordinates of surface-code patches where additional anyons can appear. The tunneling rules in the other directions are analogous, but use the dual codewords generated by \(H\). Because \(e\) and \(m\) anyons behave differently through long-range boundaries (\(G\neq H\) in general), we have lost the usual \(e\leftrightarrow m\) duality that is present in the ordinary surface code. However, if we use a _self-dual_ code where \(G\simeq H\) (e.g. \([8,4,4]\) extended Hamming), then this duality is restored.

Figure 11: The classical dynamics of long-range boundaries is depicted for a \([5(c),2,3(c)]\) concatenated code. A domain-wall excitation (star) is created in an inner repetition code and is transported across to the long-range boundaries (magenta lines). Left: Upon reaching this boundary, the long-range checks of the outer code will be violated (red squares). Right: Additional \(d^{\prime}-1=2\) excitations must appear amongst the other connected surface-code patches in order to complete a logical operation (codeword 11001).

Figure 10: The transformation of a \([52,4,4]\) HGP code into a \([36,4,4]\) quantum Tanner code is shown. Solid black dots represent physical qubits, and red (blue) squares represent \(X\) (\(Z\)) checks. 21 long-range interactions (magenta curves) are required. A \(k=4\), \(d=4\) surface code of the same layout will require \(n\geq 64\) physical qubits.

We can use the above tunneling rules to construct our logical operators. We choose the standard form of \(G\) (\(\mathds{1}_{k^{\prime}}\) on the left) as a canonical basis for our logical operators like in (11). Starting on each surface-code patch (\(1\leq x\leq k^{\prime},1\leq y\leq k^{\prime}\)) in the upper-left corner, the \(X\) (\(Z\))-type logical operators are horizontal (vertical) lines spanning the surface-code patches given by the \(x\) (\(y\))-th row of \(G\) with the other coordinate fixed. The \(X\) (\(Z\))-type logical strings can be interpreted as dragging a single \(m\) (\(e\)) from a smooth (rough) boundary where they are condensed, transporting it to the long-range boundary on the opposite side, and then transporting all tunneled anyons across the new surface-code patches and absorbing them at opposing smooth (rough) boundaries.
Using this procedure, we successfully construct \(X\) and \(Z\) logical operators for all \(k=k^{\prime 2}\) logical qubits. Since \(G\) is in standard form, these logical operators only intersect once inside the patches in the upper-left \(k^{\prime}\times k^{\prime}\) corner.

### Anyon confinement and single-shot decoding

The presence of syndrome measurement errors is detrimental for surface code decoding, often lowering the error threshold by an order of magnitude. The intuitive reason is that error strings are only detectable at their endpoints, and so if both endpoints have a syndrome measurement error, then that string becomes undetectable. The usual scheme to account for syndrome measurement errors is to perform multiple rounds of syndrome measurements and use the global space-time history for decoding. However, the number of measurement rounds per QEC cycle will scale with the system size [97]. If a decoder is able to account for these measurement errors with only a small overhead, then we say this decoder is capable of _single-shot_ correction. For stabilizer codes, the relation between confinement and single-shot ability has been well established [99, 106]. Confinement implies that enacting a logical operation via local moves will necessarily violate an increasing number of stabilizers, and so even if a few measurements are faulty, there still exists a sufficient number of violated stabilizers to undo most of the error such that any residual error remains controlled over subsequent QEC cycles. The Tanner graph of a good cLDPC code is a (bipartite) expander graph. Expander graphs have the property that the boundary of a subset of vertices scales proportionally to the size of the subset. In particular, for a graph \(G=(V,E)\), one can define the (outer) _expansion profile_ \[h_{G}(r)\equiv\min_{|S|\leq r}\frac{|\partial_{+}S|}{|S|}\,, \tag{12}\] where \(\partial_{+}S\) denotes the outer boundary of \(S\subset V\), defined as the set of vertices outside of \(S\) with an edge connecting to a vertex in \(S\). There is a small tweak in the definition for bipartite graphs, but the general idea is the same. On a Tanner graph, the boundary of a subset of nodes is related to the number of violated parity checks for an error supported on that subset. An expander graph is formally a graph where \(h_{G}(r)=O(1)\) for \(r<d\sim n\), so effecting a logical error via local bit flips, or equivalently moving a domain wall, will necessarily violate a growing number of checks in the process.

Figure 12: Some logical (left diagram) and stabilizer (right diagram) operators are depicted for an LRESC with a parent \([5,2,3]\) outer code (11). (a) The \(X_{1}\) and \(Z_{1}\) logical operators are constructed using the codeword \(10110\in G\). (b) The \(X_{4}\) and \(Z_{4}\) logical operators are constructed using the codeword \(01111\in G\). (c) A \(Z\)-type stabilizer is constructed using \(x:11010\in H\) and \(y:10110\in G\). (d) An \(X\)-type stabilizer is constructed using \(x:10110\in G\) and \(y:00110\in H\). (e) An \(X\)-type stabilizer is constructed from a "contractible loop" through the long-range boundaries according to \(y:01001\in H\). Logical operators may be deformed through the long-range boundaries by multiplying appropriate stabilizers.

Interpreting the parity checks as a classical Hamiltonian, we can reformulate the previous statement as the existence of macroscopic energy barriers between different ground states.
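To make this concrete for the \([5,2,3]\) example code of Eq. (12), the short Python sketch below (our own illustration) brute-forces the minimum number of violated checks as a function of error weight: every error of weight 1 or 2 (below the distance) violates at least one check, while the weight-3 codeword 10110 violates none.

```python
import itertools
import numpy as np

# Parity checks H and generators G of the [5,2,3] example code, Eq. (12).
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]])
G = np.array([[1, 0, 1, 1, 0],
              [0, 1, 1, 1, 1]])
assert not ((H @ G.T) % 2).any()    # codewords satisfy all parity checks

def min_syndrome_weight(w):
    """Minimum number of violated checks over all weight-w errors."""
    weights = []
    for support in itertools.combinations(range(5), w):
        e = np.zeros(5, dtype=int)
        e[list(support)] = 1
        weights.append(int(((H @ e) % 2).sum()))
    return min(weights)

print([min_syndrome_weight(w) for w in (1, 2, 3)])   # [1, 1, 0]
```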
For the LRESCs, as we increase the number of long-range connections in the parent code, we allow outer codes with larger \(n^{\prime}\). If these outer codes are good cLDPC codes, whose Tanner graphs are good expanders, then the LRESCs will contain more anyon confinement, which should lead to better single-shot performance, as suggested in Fig. 3. Specifically, for a parent \([O(ck^{\prime}),k^{\prime},\Theta(ck^{\prime})]\) code, the syndrome weight \(|\mathbf{s}|\) for an error \(\mathbf{e}\) grows as \(|\mathbf{s}|\sim|\mathbf{e}|/c\sim|\mathbf{e}|\cdot\ell/n\) for \(|\mathbf{e}|\lesssim d\sim n\) (up to stabilizer equivalence), where \(\ell\sim k^{\prime}\) is the number of long-range interactions. If the number of long-range interactions in the parent code increases with the system size as \(\ell\sim n^{b}\) for some \(b>0\), then the corresponding LRESC satisfies the "good" confinement definition of [106] and thus has a provable single-shot (sustainable) threshold under adversarial noise. A threshold under local stochastic noise has been proven for \(b=1\) (the \(c=O(1)\) limit of an LRESC), but it remains an open problem as to whether this threshold exists for \(b<1\), though numerical evidence suggests an affirmative answer for certain families of 3D homological product codes [106]. (We also see no physical reason why local stochastic noise would be more dangerous than adversarial noise!) Because LRESCs can systematically vary their density of long-range interactions, they provide tunable qLDPC codes to numerically benchmark sustainable thresholds for \(0<b<1\).
2309.04472
Two-point sum-rules in three-dimensional Yang-Mills theory
We compute the stress-tensor two-point function in three-dimensional Yang-Mills theory to three-loops in perturbation theory. Using its calculable shape at high momenta, we test the notion that its Borel transform is saturated at low energies by the lowest glueball state(s). This assumption provides relatively stable estimates for the mass of the lightest glueball that we compare with lattice simulations. We also provide estimates for the coupling of the lightest glueball to the stress tensor. Along the way, we comment on the extent to which such estimates are non-rigorous. Lastly, we discuss the possibility of applying the sum-rule analysis to two-point functions of higher-spin operators and obtain a crude approximation for the glueball couplings to these operators.
Simon Caron-Huot, Andrzej Pokraka, Zahra Zahraee
2023-09-08T17:59:40Z
http://arxiv.org/abs/2309.04472v1
# Two-point sum-rules in three-dimensional Yang-Mills theory

###### Abstract

We compute the stress-tensor two-point function in three-dimensional Yang-Mills theory to three-loops in perturbation theory. Using its calculable shape at high momenta, we test the notion that its Borel transform is saturated at low energies by the lowest glueball state(s). This assumption provides relatively stable estimates for the mass of the lightest glueball that we compare with lattice simulations. We also provide estimates for the coupling of the lightest glueball to the stress tensor. Along the way, we comment on the extent to which such estimates are non-rigorous. Lastly, we discuss the possibility of applying the sum-rule analysis to two-point functions of higher-spin operators and obtain a crude approximation for the glueball couplings to these operators.

* 1 Introduction
* 2 Stress-energy tensor two-point function
* 2.1 One-loop two-point functions
* 2.2 One- and two-loop two-point functions via the unitarity method
* 2.3 Two- and three-loop two-point functions
* 2.4 Superconvergent combination of two-point functions
* 3 Sum-rules: estimating the glueball masses and couplings
* 3.1 Dispersion relations
* 3.2 Borel transformation
* 3.3 Borel transformation of the perturbative result
* 3.4 Borel transformation of the non-perturbative model
* 3.5 One-glueball model (\(N=1\))
* 3.6 Two-glueball model (\(N=2\))
* 3.7 Comparison with lattice data
* 4 Everything is consistent with unitarity!
* 5 Higher-spin currents
* 5.1 Higher-spin Correlation Functions
* 5.2 Basis for higher-spin Operators
* 5.3 One- and two-loop with unitarity method
* 5.4 Two-loop higher-spin correlators
* 5.5 Superconvergent Combinations
* 6 Conclusions
* A \(d\)-dimensional form factors
* A.1 One-loop
* A.2 Two-loops
* A.3 Three-loops
* B Computing \(I_{1}^{(3)}\) and \(I_{2}^{(3)}\) from dimensional recurrence
* B.1 Dimensional recurrence and analyticity in \(d\)
* B.2 Computing \(I_{1}^{(3)}\) and \(I_{2}^{(3)}\)
* C Ingredients for on-shell calculations
* C.1 Stress-Tensor gluon form factors
* C.2 Phase space integrals

## 1 Introduction

Understanding the non-perturbative dynamics of strongly coupled systems from first principles has been a long-standing problem in modern quantum field theory (QFT). Arguably, the most direct calculation of non-perturbative effects comes from lattice simulations, where one computes QFT correlation functions in a discretized spacetime and then extrapolates to the continuum. While less direct, one can also obtain some non-perturbative information through dispersion relations that connect correlators at large (computable) space-like momenta and small momenta, as in the famous QCD sum-rules [1; 2; 3; 4]. Surprisingly, the low energy contribution to these sum-rules is often found to be numerically dominated by the lightest bound states, yielding estimates of their various properties. In light of the continuing interest in rigorous results on confining theories, we would like to revisit these old ideas in the context of three-dimensional Yang-Mills theory, where both perturbative calculations and lattice simulations are possible.
The central object of our study will be the stress-energy 2-point function \[\Pi^{\mu\nu\alpha\beta}(p^{2})=i\int\mathrm{d}^{d}x\ e^{-ip\cdot x}\langle 0|\mathsf{T}\{T^{\mu\nu}(x)T^{\alpha\beta}(0)\}|0\rangle, \tag{1}\] which probes intermediate glueball states \(|G\rangle\) through its imaginary/absorptive part \[\begin{split} 2\mathrm{Im}\,\Pi^{\mu\nu\alpha\beta}(p^{2})&=\int\mathrm{d}^{d}x\ e^{-ip\cdot x}\langle 0|T^{\mu\nu}(x)T^{\alpha\beta}(0)|0\rangle\\ &=\int\mathrm{d}^{d}x\ e^{-ip\cdot x}\oint_{G}\langle 0|T^{\mu\nu}(x)|G\rangle\langle G|T^{\alpha\beta}(0)|0\rangle\,.\end{split} \tag{2}\] Like any two-point correlator, \(\Pi^{\mu\nu\alpha\beta}\) in (1) admits a Kallen-Lehmann dispersion relation that expresses it as an integral over a spectral density (2). The latter consists of two non-negative functions, corresponding to spin-0 and spin-2 exchanges. On the one hand, at large Euclidean momenta the correlator can be calculated using perturbation theory. On the other hand, the qualitative features of the spectral density are known at low energies: we expect a sum of \(\delta\)-function contributions from stable glueballs followed by a continuum that possibly includes further resonances. The goal of this work is to explore the consequences of the dispersion relation that connects these quantities. One of our motivations is recent work on the S-matrix bootstrap [5; 6] in which scattering amplitudes of stable bound states are supplemented by form factors and two-point functions of local operators, in order to rigorously connect short- and large-distance physics. Here, we focus only on two-point functions and numerically explore less rigorous connections in the spirit of QCD sum-rules. Three-dimensional Yang-Mills is a natural model to study from this perspective since it is super-renormalizable (i.e., amenable to perturbation theory) and has interesting non-perturbative dynamics (i.e., confinement). The presence of a (perturbative) mass scale in three-dimensional Yang-Mills theory is an additional simplification with respect to QCD, where the mass scale is provided by non-perturbative condensates. At the same time, three-dimensional Yang-Mills theory (especially without fermions) is readily amenable to lattice simulations and excellent data exists on its spectrum [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Concretely, we will calculate the stress-tensor correlator (1) in pure three-dimensional Yang-Mills theory to three-loop order in perturbation theory. Following the traditional sum-rules approach, we then apply a Borel transform with respect to energy to improve convergence (see equation (3.11) below). The main question is whether the Borel transform is dominated at low energies by the lightest glueball(s). We test this by assuming it is true and seeing whether it predicts reasonable values for the lowest glueball mass and its coupling to the stress tensor. The former is then compared with known lattice results, while the latter (to our knowledge) is a prediction. In principle, this method can also be extended to higher-spin operators. Knowing the set of couplings \(\langle G|\mathcal{O}^{\ell}|0\rangle\) of a given glueball to minimal-twist operators of various spins amounts to knowing its so-called lightcone wavefunction. This wavefunction is closely related to, but distinct from, parton distribution functions (which control deep inelastic scattering at high energies) in that it controls elastic scattering at high energies [21; 22; 23].
In the QCD context, such quantities have been estimated using sum-rules for higher-spin currents [24]. We initiate the investigation of higher-spin sum-rules for three-dimensional Yang-Mills theory. In section 2, we describe the stress-energy two-point function and provide some details of our 3-loop calculation. While we quote only its three-dimensional limit in the main text, the \(d\)-dimensional results can be found in appendix A. Up to two-loops, we include cross-checks on the imaginary part using on-shell methods. In section 3, we review Borel-transformed sum-rules for two-point functions and use simple models for the spectral density to extract the glueball masses and couplings from a \(\chi^{2}\)-fit. We also comment on the comparison with lattice results. These estimates are not rigorous and we explain in section 4 that essentially any low-energy spectral density can be compatible with perturbative asymptotics. Lastly, in section 5, we compute the perturbative two-point functions of more general higher-spin operators and show the existence of "superconvergent" sum-rules. This analysis leads to a crude approximation of glueball couplings to these operators.

## 2 Stress-energy tensor two-point function

In this section, we review our conventions for the YM Lagrangian and define the stress-energy two-point functions relevant to this work. We compute the spin-0 and spin-2 two-point functions at one-loop in section 2.1. In section 2.2, we cross-check the discontinuities of one-loop two-point functions and predict the two-loop discontinuities from unitarity cuts. Then, we compute the full two-point functions at two- and three-loops in section 2.3. In section 2.4, we identify a combination of the two-point functions with particularly good behaviour near \(p^{2}=0\). This "superconvergent" combination will be central to the sum-rule analysis of section 3. The YM Lagrangian consists of three parts: a pure YM Lagrangian \(\mathcal{L}_{\rm YM}\), a gauge-fixing term \(\mathcal{L}_{\rm gf}\) and a ghost Lagrangian \(\mathcal{L}_{\rm gh}\). Explicitly, the total Lagrangian (we work in mostly-plus metric signature) is \[\mathcal{L}=-\frac{1}{4g_{s}^{2}}\left(F_{\mu\nu}^{a}\right)^{2}+\mathcal{L}_{\rm gf}+\mathcal{L}_{\rm gh} \tag{2.1}\] where \[F_{\mu\nu}^{a}=\partial_{\mu}A_{\nu}^{a}-\partial_{\nu}A_{\mu}^{a}+f^{abc}A_{\mu}^{b}A_{\nu}^{c} \tag{2.2}\] is the YM field strength. Since we will eventually specialize to \(d=3\) spacetime dimensions rather than four, it is useful to compare the mass dimension of the coupling constant: \[[g_{s}^{2}]=4-d\to\begin{cases}0&\text{for $d=4$},\\ 1&\text{for $d=3$}.\end{cases} \tag{3}\] Comparing, we see that the coupling constant provides a natural scale in three dimensions but not in four. This is one of the main reasons we will be interested in \(d=3\) in this work: confinement and the bound state spectrum are controlled by the scale \(m\sim g_{s}^{2}C_{A}\) instead of being an inherently non-perturbative function of the cutoff \(\Lambda_{\text{QCD}}\) in four dimensions. The stress-energy tensor is given by the expression \[T^{\mu\nu}=\frac{1}{g_{s}^{2}}\left((F^{a})^{\mu\lambda}\,(F^{a})^{\nu}_{\ \lambda}-\frac{1}{4}g^{\mu\nu}F^{2}\right). \tag{4}\]
Since the \(\text{SU}(N_{c})\) gauge theory admits parity and charge conjugation symmetries, the spectral decomposition (2) admits the group theoretic expansion \[\text{Im}\,\Pi^{\mu\nu\alpha\beta}(p^{2})\propto\sum_{J,P,C}\langle 0|T^{\mu\nu}(p^{2})|G_{J}^{PC}\rangle\langle G_{J}^{PC}|T^{\alpha\beta}(p^{2})|0\rangle \tag{5}\] where the overlap \(\langle G_{J=0,2}^{++}|T^{\mu\nu}(p^{2})|0\rangle\) is nonvanishing only for spins \(J=0,2\) and \(PC=++\).1 In section 3, we will try to use this two-point function to extract approximations for the masses and couplings of the lowest-lying glueball states. Footnote 1: In \(2+1\) spacetime dimensions, parity \(P\) is a reflection \((x,y,t)\mapsto(-x,y,t)\) which anticommutes with the angular momentum of a particle. Thus any massive particle of spin \(J\neq 0\) comes in a degenerate multiplet \(\{|J\rangle,|-J\rangle\}\). Since any such multiplet is unitarily equivalent, the \(P\) superscript is only meaningful (in the continuum theory) for \(J=0\) states, see [8] for discussion. The stress-energy tensor two-point function has four hanging Lorentz indices. The Ward identities imply that a certain combination is transverse with respect to the external momentum \(p\) (see [25]): \[p_{\mu}\left(\Pi^{\mu\nu\alpha\beta}(p^{2})+g^{\nu\alpha}\langle T^{\mu\beta}\rangle+g^{\nu\beta}\langle T^{\mu\alpha}\rangle-g^{\mu\nu}\langle T^{\alpha\beta}\rangle\right)=0\,. \tag{6}\] We focus on the vacuum state, where all the above objects are constrained by Lorentz invariance. There are only two transverse tensor structures with four Lorentz indices that are symmetric in each pair: \[\phi_{0}^{\mu\nu\alpha\beta}(p) \equiv\phi^{\mu\nu}\phi^{\alpha\beta}, \tag{7}\] \[\phi_{2}^{\mu\nu\alpha\beta}(p) \equiv\phi^{\mu\alpha}\phi^{\nu\beta}+\phi^{\mu\beta}\phi^{\nu\alpha}-c_{d}\phi^{\mu\nu}\phi^{\alpha\beta}\,, \tag{8}\] where \(c_{d}=\frac{2}{d-1}\) and \[\phi^{\mu\nu}(p)\equiv\left(g^{\mu\nu}-\frac{p^{\mu}p^{\nu}}{p^{2}}\right). \tag{9}\] Consequently, the general solution, \(\Pi^{\mu\nu\alpha\beta}\), to the Ward identities (6) has a simple form: \[\begin{split}\Pi^{\mu\nu\alpha\beta}(p^{2})&=\frac{d_{G}}{512}\left[A_{0}(p^{2})\phi_{0}^{\mu\nu\alpha\beta}\left(p\right)+A_{2}(p^{2})\phi_{2}^{\mu\nu\alpha\beta}\left(p\right)\right]\\ &\quad+\left(g^{\mu\alpha}g^{\nu\beta}+g^{\mu\beta}g^{\nu\alpha}-g^{\mu\nu}g^{\alpha\beta}\right)\Lambda,\end{split} \tag{10}\] where we have set \(\langle T^{\mu\nu}\rangle=-\Lambda\delta^{\mu\nu}\). For future convenience, we have absorbed a numerical factor as well as a factor of \(d_{G}\): the dimension of the gauge group (\(d_{G}=N_{c}^{2}-1\) for \(G=SU(N_{c})\)). The value of \(c_{d}\) was chosen so that the spin-2 structure is traceless in each pair, which also makes it orthogonal to \(\phi_{0}\): \[\left(\phi_{2}\right)^{\mu}{}_{\mu}{}^{\alpha\beta}=0=\left(\phi_{2}\right)^{\mu\nu\alpha}{}_{\alpha}\,. \tag{11}\] This tensor decomposition is unaffected by quantum corrections at any order. The stress tensor \(T^{\mu\nu}(x)\) does not renormalize multiplicatively. It mixes additively with \(g^{\mu\nu}\mathds{1}\) at four loops [31], but this does not affect the connected correlator (13). Thus the left-hand side is independent of the \(\overline{\text{MS}}\) scale \(\bar{\mu}\) except for possible contact terms (polynomial in \(p\)), which can only appear at two and four loops by dimensional analysis, and can only affect specific combinations of \(A_{0}\), \(A_{2}\) and \(\Lambda\) in accordance with (10). On the right-hand side we can have mixing between the condensates, which again is only relevant starting from four loops.
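As a quick numerical sanity check of the tracelessness property (11) (our own sketch; we use explicit \(d=3\) Euclidean components, which suffices for this purely kinematic identity):

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)
p = rng.normal(size=d)
phi = np.eye(d) - np.outer(p, p) / (p @ p)   # transverse projector, Eq. (9)

c_d = 2 / (d - 1)
# phi_2^{mu nu alpha beta} as defined in Eq. (8)
phi2 = (np.einsum('ma,nb->mnab', phi, phi)
        + np.einsum('mb,na->mnab', phi, phi)
        - c_d * np.einsum('mn,ab->mnab', phi, phi))

# Trace over the first index pair and over the last index pair both vanish:
assert np.allclose(np.einsum('mmab->ab', phi2), 0)
assert np.allclose(np.einsum('mnaa->mn', phi2), 0)
```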
The most physically relevant combination, to be introduced in subsection 2.4, will turn out to cancel both two- and four-loop divergences.

### One-loop two-point functions

In this section we present the one-loop two-point functions for \(d=3\). The generic \(d\) results can be found in appendix A. The two-point functions (\(A_{0}\) and \(A_{2}\)) were computed in generic dimension \(d\) using standard Feynman diagram techniques. We extract these two-point functions from the stress-tensor correlator using the tensor decomposition (10). This ensures that at each step we are working with Lorentz-invariant quantities, which is essential for the application of standard integration-by-parts (IBP) software such as FIRE [32]. In practice, we used Feynman rules in Feynman gauge. While gauge invariance of the two-point functions was not checked due to this choice, two other consistency checks were performed. First, the conservation of \(\Pi^{\mu\nu\alpha\beta}\) was checked by contracting a factor of \(p\) into each hanging index of \(\Pi^{\mu\nu\alpha\beta}\) while keeping the rest free. After applying IBP reduction, we find that the contraction of \(p\) with any index of \(\Pi^{\mu\nu\alpha\beta}\) vanishes. Secondly, we cross-check the discontinuity of \(\Pi^{\mu\nu\alpha\beta}\) in \(d=3\) at one- and two-loops from unitarity cuts. At one loop, a single Feynman diagram (see fig. 1) contributes to the correlation function \(\Pi^{\mu\nu\alpha\beta}\). The circled cross in figure 1 denotes the vertex associated to the stress-tensor coupling to two gluons, which can be derived via standard textbook techniques [33; 34; 35]. After integral reduction and integration, the \(d=3\) two-point functions associated to the stress-tensor two-point function (10) are \[A_{0}^{(0)}(p^{2})\underset{d\to 3}{=}2(p^{2})^{3/2}+\mathcal{O}(\epsilon), \tag{14}\] \[A_{2}^{(0)}(p^{2})\underset{d\to 3}{=}(p^{2})^{3/2}+\mathcal{O}(\epsilon). \tag{15}\]

Figure 1: Feynman diagram for the 1-loop \(TT\)-correlation function. The circled cross denotes the vertex associated to the insertion of a stress-tensor.

### One- and two-loop two-point functions via the unitarity method

In this section, we obtain the non-analytic part of the one-loop and two-loop correlation function, \(\Pi^{\mu\nu\alpha\beta}\), with an independent calculation based on unitarity cuts. We start by writing a general ansatz for the on-shell process of creating two gluons with momenta \(p_{1}\) and \(p_{2}\) from a stress-tensor operator \(T^{\mu\nu}(p)\). Our ansatz needs to be symmetric in \(p_{1}\) and \(p_{2}\), with the coefficients fixed by imposing the conservation of the stress-tensor operator (i.e., \(\partial_{\mu}T_{\mu\nu}=0\)). The form factor including the color factor is then \[\langle p_{1}^{g}p_{2}^{g}|T^{\mu\nu}(p)|0\rangle=\delta_{ab}(p_{1}^{\mu}p_{2}^{\nu}+p_{1}^{\nu}p_{2}^{\mu}-\delta^{\mu\nu}p_{1}\cdot p_{2}). \tag{16}\] To get the discontinuity of the one-loop correlation function depicted in figure 2, we cut the diagram and glue the two sides together using the Cutkosky cutting rules. This yields \[\begin{split}\text{Disc }\Pi^{\mu\nu\alpha\beta}(p^{2})&=\frac{-id_{G}}{2!}\int\,\frac{d^{2}p_{1}}{(2\pi)^{2}2E_{1}}\frac{d^{2}p_{2}}{(2\pi)^{2}2E_{2}}(2\pi)^{3}\delta^{3}(p-p_{1}-p_{2})\\ &\times\langle 0|T^{\mu\nu}(p)|p_{1}^{g}p_{2}^{g}\rangle\langle p_{1}^{g}p_{2}^{g}|T^{\alpha\beta}(p)|0\rangle,\end{split} \tag{17}\] where \[\text{Disc}\,A(p^{2})=A(p^{2}-i\epsilon)-A(p^{2}+i\epsilon). \tag{18}\]
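As a quick symbolic aside (ours, not part of the paper's calculation), one can verify the advertised conservation of the ansatz (16), \(p_{\mu}\langle p_1^g p_2^g|T^{\mu\nu}|0\rangle=0\), on explicit on-shell kinematics, reading \(\delta^{\mu\nu}\) in (16) as the mostly-plus metric \(g^{\mu\nu}\):

```python
import sympy as sp

E1, th = sp.symbols('E1 theta', positive=True)
g = sp.diag(-1, 1, 1)                                  # mostly-plus metric
p1 = sp.Matrix([E1, E1*sp.cos(th), E1*sp.sin(th)])     # null momentum
p2 = sp.Matrix([E1, -E1*sp.cos(th), -E1*sp.sin(th)])   # back-to-back partner
p = p1 + p2

def dot(a, b):
    return (a.T * g * b)[0]

# F^{mu nu} = p1^mu p2^nu + p1^nu p2^mu - g^{mu nu} (p1.p2), Eq. (16)
F = sp.zeros(3, 3)
for m in range(3):
    for n in range(3):
        F[m, n] = p1[m]*p2[n] + p1[n]*p2[m] - g.inv()[m, n]*dot(p1, p2)

# Contract p_mu (index lowered with the metric) into the first slot:
cons = (p.T * g * F).T
assert sp.simplify(cons) == sp.zeros(3, 1)   # conserved on-shell
```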
Extracting the spin-0 and spin-2 two-point functions (see eq. (16)) and going to the rest frame of \(p\), \[p=(\sqrt{s},0)\rightarrow\mathbf{p}_{1}+\mathbf{p}_{2}=0,\qquad E_{1}+E_{2}=\sqrt{s}, \tag{19}\] the components of the correlation function (eq. (10)) are reduced to trivial integrals over the angles. For example, \[\text{Disc }A_{0}^{(0)}(p^{2})=\frac{-512i}{16\pi}\int d\theta\int\frac{dE_{1}}{E_{1}}\delta(\sqrt{s}-2E_{1})\frac{(2E_{1}^{2})^{2}}{4}=\frac{4is^{2}}{\sqrt{s}}. \tag{20}\] Using \(\text{Disc}\sqrt{p^{2}}=-2i\sqrt{s}\), we can undo the cut to get the non-analytic contribution to the one-loop two-point function (10) \[A_{0}^{(0)}(p^{2})=2(p^{2})^{\frac{3}{2}},\qquad A_{2}^{(0)}(p^{2})=(p^{2})^{\frac{3}{2}}. \tag{21}\] This, of course, matches eq. (14) when \(d=3\).

Figure 2: The cut of the 1-loop \(TT\)-correlation function is depicted. We can get each side (eq. (16)) from basic consistency principles (Bose symmetry and conservation of the stress-tensor). We can then use the unitarity cut to obtain the discontinuity of the correlation function.

Next, we also compute the non-analytic part of the two-loop correlation function by employing unitarity cuts and on-shell form factors. This can then be compared with the full result including the analytic parts given in eq. (24). We start by examining the unitarity cuts of the two-loop diagrams depicted in fig. 3. Importantly, the two cuts of the double bubbles are complex conjugate to each other. This is because the tree-level form factor \(\langle p_{1}^{g}p_{2}^{g}|T|0\rangle\) given by eq. (16) is real and goes as \(p^{2}\). Moreover, the one-loop form factor, which has an additional factor of \(g_{s}^{2}C_{A}\), must scale like \((p^{2})^{1/2}\) and has a discontinuity that is purely imaginary. Thus, the double bubbles do not contribute to the discontinuity of the correlator since their imaginary parts cancel when summed. This then means that the only contribution to the non-analytic part is contained in the rightmost diagram in fig. 3. To calculate the unitarity cut in the rightmost diagram in fig. 3, we need the on-shell three-gluon form factor \(\langle p_{1}^{g}p_{2}^{g}p_{3}^{g}|T^{\mu\nu}|0\rangle\). This is obtained by studying the four-dimensional two-gluon form factor and then using BCFW [36, 37] as explained in appendix C.1. We can then go back to three dimensions by using \(\epsilon^{3d}=\frac{\epsilon^{+}+\epsilon^{-}}{2}\)2. The final result for the three-gluon form factor is Footnote 2: This is because in three dimensions the Lorentz group is isomorphic to \(SU(2)\) whereas in four dimensions it is isomorphic to \(SU(2)\times SU(2)\). Accordingly, the little group for massless particles changes from \(SO(2)\) to \(Z_{2}\). \[\langle p_{1}^{g}p_{2}^{g}p_{3}^{g}|T^{\mu\nu}|0\rangle=2g_{s}f^{abc}\frac{\sum_{i=1}^{3}(p_{i}^{\mu}p^{\nu}+p^{\mu}p_{i}^{\nu}-g^{\mu\nu}p_{i}\cdot p)(p_{i}\cdot p)-p_{i}^{\mu}p_{i}^{\nu}p^{2}}{\langle 12\rangle\langle 23\rangle\langle 31\rangle}, \tag{22}\] where \(\langle 12\rangle^{2}=-2p_{1}\cdot p_{2}\). As a consistency check, we have verified that this equation correctly reproduces the two-gluon form factor, \(\langle p_{1}^{g}p_{2}^{g}|T^{\mu\nu}|0\rangle\), in the soft \(p_{3}\) limit. We then glue the form factors and perform the phase space integral as elucidated in appendix C.2 to obtain the non-analytic parts of the correlation function at 2-loops: \[A_{0}^{(1)}=(g_{s}^{2}C_{A})\frac{8}{3\pi^{2}}p^{2}\log(p^{2}),\qquad A_{2}^{(1)}=-(g_{s}^{2}C_{A})\frac{8}{3\pi^{2}}p^{2}\log(p^{2}). \tag{23}\]
\tag{23}\]

Figure 3: Unitarity cuts of the diagrams contributing to the two-loop stress-tensor two-point function. The first and second diagrams are complex conjugates of each other and, once added, have zero discontinuity. Only the third diagram contributes to the non-analytic part. The three-gluon form factor \(\langle p_{1}^{g}p_{2}^{g}p_{3}^{g}|T^{\mu\nu}|0\rangle\) in the rightmost figure is calculated using the BCFW recursion relation.

As expected, this correctly reproduces the non-analytic part of the stress-tensor two-point functions computed using Feynman diagram methods (eq. (24)) when \(d=3\). The fact that the scale dependence cancels when we sum these two channels will be significant below.

### Two- and three-loop two-point functions

As we saw in the previous section, the unitarity method gives the imaginary part of the two-loop contribution to the stress-tensor two-point function (23). However, it will be useful to also have the constant part of the two-loop contribution since it contributes to the sum-rules. In fact, we go one loop further and compute the stress-tensor two-point function to three-loops. Here, we will use Feynman diagrams because it is easier than \(d\)-dimensional unitarity. Since the calculation methodology was reviewed in section 2.1, we simply present the two- and three-loop two-point functions in this section. At two-loops, the correlation function receives contributions from 8 diagrams (including ghosts) but only 7 topologies (see fig. 4). While there are 7 contributing topologies, there are only two scalar master integrals at two-loops (see equation (100) as well as equations (101) and (102)). When the dust settles, the two-loop \(d=3\) two-point functions are \[A_{0}^{(1)}(p^{2})\underset{d\to 3}{=}(g_{s}^{2}C_{A})p^{2}\Biggl{[}- \frac{1}{4}-\frac{4}{3\pi^{2}\epsilon}+\frac{8}{3\pi^{2}}\log\left(\frac{p^{2} }{\bar{\mu}^{2}}\right)\Biggr{]}+\mathcal{O}(\epsilon), \tag{24a}\] \[A_{2}^{(1)}(p^{2})\underset{d\to 3}{=}(g_{s}^{2}C_{A})p^{2}\Biggl{[}-1+ \frac{20}{3\pi^{2}}+\frac{4}{3\pi^{2}\epsilon}-\frac{8}{3\pi^{2}}\log\left( \frac{p^{2}}{\bar{\mu}^{2}}\right)\Biggr{]}+\mathcal{O}(\epsilon), \tag{24b}\] where \(\bar{\mu}^{2}=4\pi e^{-\gamma_{E}}\mu^{2}\) is the \(\overline{\text{MS}}\) renormalization scale. The three-loop correlation function receives contributions from a total of 41 different topologies, where all consistent ways of distributing gluons and ghosts must be included. The three-loop parent topologies are listed in figure 5: all other topologies can be recovered from these by pinching a subset of propagators in a parent topology to points. Out of the 41 topologies, there are only 6 scalar master integrals (see equation (101)). The \(d\)-dimensional three-loop two-point functions are presented in equations (102) and (103).

Figure 4: Feynman diagrams contributing to the two-loop stress-tensor two-point function. Note that there is another diagram, not shown here, where the gluon sub-bubble in the last diagram is replaced by a ghost bubble.

Taking the \(d\to 3\) limit, we find \[A_{0}^{(2)}(p^{2})\underset{d\to 3}{=}\left(g_{s}^{2}C_{A}\right)^{2} \sqrt{p^{2}}\Bigg{[}\frac{155}{384}-\frac{13}{2\pi^{2}}\Bigg{]}+\mathcal{O}( \epsilon), \tag{25a}\] \[A_{2}^{(2)}(p^{2})\underset{d\to 3}{=}\left(g_{s}^{2}C_{A}\right)^{2} \sqrt{p^{2}}\Bigg{[}\frac{431}{768}-\frac{37}{9\pi^{2}}\Bigg{]}+\mathcal{O}( \epsilon). \tag{25b}\] While most of the master integrals of (101) are easily evaluated, \(I_{1}^{(3)}\) and \(I_{2}^{(3)}\) are particularly challenging.
Even though the \(\epsilon\)-expansion of these integrals is known for \(d=4-2\epsilon\) [38; 39; 40], we had to recompute the generic \(d\) dependence from scratch in order to obtain the \(\epsilon\)-expansion of these integrals in \(d=3-2\epsilon\). The generic \(d\) dependence of these integrals was determined using the method of dimensional recurrence and analyticity in \(d\) [41]. A summary of this method, along with the equations needed to recover the \(d\) dependence of these integrals, is presented in appendix B.

### Superconvergent combination of two-point functions

In this section, we introduce a "superconvergent" combination of the two-point functions \(A_{0}\) and \(A_{2}\). This combination is exceptionally well behaved as \(p^{2}\to 0\), which we can use to ameliorate the convergence of the Kallen-Lehmann representation. Thus, it is ideally suited for the application of dispersive sum-rules in section 3.

Figure 5: Parent topologies for all Feynman diagrams contributing to the three-loop stress-tensor two-point function. All other Feynman diagrams are pinches of these. All ghost contributions have been suppressed.

By expanding the tensor structure of the stress-tensor two-point function (10), one finds a term with four uncontracted momenta: \[\Pi^{\mu\nu\alpha\beta}(p^{2})=\mathcal{A}_{0+2}(p^{2})p^{\mu}p^{\nu}p^{\alpha}p^ {\beta}+\cdots \tag{26}\] where \[\mathcal{A}_{0+2}(p^{2})\equiv\frac{1}{(p^{2})^{2}}\left(A_{0}(p^{2})+\frac{2( d-2)}{(d-1)}A_{2}(p^{2})\right). \tag{27}\] Here, the change of calligraphy from \(A\) to \(\mathcal{A}_{0+2}\) highlights that a rescaling by \(1/(p^{2})^{2}\) has been applied. Since other terms in (26) are proportional to \(p^{2}\), the combination \(\mathcal{A}_{0+2}(p^{2})\) must be non-singular around \(p^{2}=0\) in order for the correlator itself to be regular. However, thanks to the denominator in (27), it decays faster and in fact vanishes at infinite momenta. Thus, it satisfies an unsubtracted Kallen-Lehmann dispersion relation. By combining the one-loop (14), two-loop (24) and three-loop (25) results, we obtain the following perturbative result for the superconvergent two-point function \(\mathcal{A}_{0+2}\) in the three-dimensional limit: \[\mathcal{A}_{0+2}=\frac{a_{0}}{\sqrt{p^{2}}}+a_{1}\frac{g_{s}^{2}C_{A}}{p^{2} }+a_{2}\frac{(g_{s}^{2}C_{A})^{2}}{(p^{2})^{3/2}}+a_{3}\frac{(g_{s}^{2}C_{A})^ {3}}{(p^{2})^{2}}+\mathcal{O}\left(\frac{1}{(p^{2})^{5/2}}\right). \tag{28}\] Here, \[a_{0}=3,\qquad a_{1}=\frac{16}{3\pi^{2}}-\frac{5}{4}\approx-0.710,\qquad a_{2 }=\frac{247}{256}-\frac{191}{18\pi^{2}}\approx-0.110\,, \tag{29}\] and \(C_{A}\) is the quadratic Casimir of the gauge group in the adjoint representation (i.e., \(C_{A}=N_{c}\) for the gauge group \(G=SU(N_{c})\)). The superconvergent combination \(\mathcal{A}_{0+2}\) enjoys other nice properties. First, it is free from two-loop ultraviolet divergences, which can be checked explicitly by adding the two lines of (24). This is precisely as anticipated from the renormalization group argument below (13), since a two-loop divergence would have led to a non-polynomial term \(\Pi^{\mu\nu\alpha\beta}(p)\sim\frac{p^{\mu}p^{\nu}p^{\alpha}p^{\beta}}{p^{2}} \log\bar{\mu}\). Thus, all constants in (29) are unambiguous and scheme-independent. Second, even though we have not performed a four-loop calculation to determine \(a_{3}\), we can predict that this coefficient is actually independent of the gluon condensate, which cancels out in the combination \(\mathcal{A}_{0+2}\).
This can be seen from the fact that the Wick contraction of two field strengths that gives rise to the OPE coefficient \(C_{F^{2}}^{\mu\nu\alpha\beta}\) in (13), following [30], cannot produce a \(p^{\mu}p^{\nu}p^{\alpha}p^{\beta}\) term at leading order.3 Footnote 3: Upon using the Ward identity (6) to fix all contact ambiguities and then imposing Lorentz invariance of condensates, we find specifically that \[C_{F^{2}}^{\mu\nu\alpha\beta}(p)=\frac{d-4}{d}\left(\phi_{0}^{\mu\nu\alpha \beta}(p)\frac{2(d-2)}{(d-1)^{2}}+\phi_{2}^{\mu\nu\alpha\beta}(p)\frac{1}{d- 1}+\frac{g^{\mu\alpha}g^{\nu\beta}+g^{\mu\beta}g^{\nu\alpha}-g^{\mu\nu}g^{ \alpha\beta}}{4}\right)+\mathcal{O}(g_{s}^{2}/p), \tag{30}\] which is compatible with (10) and the relation between the condensate and vacuum energy. Therefore, the perturbative calculation of \(a_{3}\) cannot display any infrared sensitivity and so must yield a finite, unambiguous constant. In eq. (28) we have still included the term \(a_{3}\) to parameterize our ignorance of the four-loop physics. Given the decreasing pattern in the above coefficients, we believe that a reasonable range for \(a_{3}\) is \[a_{3}\in\left[-\frac{1}{10},\frac{1}{10}\right] \tag{31}\] so that \(|a_{3}|<|a_{2}|\). Further note that the result (28) is an asymptotic series in the large Euclidean region \(p^{2}\gg g_{s}^{2}C_{A}\), and its apparent singularity at \(p^{2}=0\) is an artifact of perturbation theory since \(\mathcal{A}_{0+2}\) must be regular at \(p^{2}=0\) non-perturbatively.

## 3 Sum-rules: estimating the glueball masses and couplings

In this section, we review the dispersive sum-rules for the superconvergent combination \(\mathcal{A}_{0+2}\). We start by constructing dispersion relations relating the \(TT\)-correlator in the Euclidean region to the correlator in the physical region in section 3.1. Then in section 3.2, we describe how the Borel transform improves the convergence of the perturbative series of \(\mathcal{A}_{0+2}\) in the limit \(p^{2}\to 0\). From the Borel transform of \(\mathcal{A}_{0+2}\), we construct a function \(\hat{\mathcal{H}}^{2}\) that corresponds to the weighted average of the low-lying glueball masses. Then, we compare the \(\hat{\mathcal{H}}^{2}\) obtained from truncating the perturbative expression of \(\mathcal{A}_{0+2}\) to the \(\hat{\mathcal{H}}^{2}\) obtained from a non-perturbative model of \(\mathcal{A}_{0+2}\). The Borel transform of the perturbative result for \(\mathcal{A}_{0+2}\) is given in section 3.3 while the Borel transform of the non-perturbative \(\mathcal{A}_{0+2}\) is given in section 3.4. In sections 3.5 and 3.6, we optimize the parameters of the one- and two-glueball models using a \(\chi^{2}\) fit and extract estimates for the low-lying glueball masses and their couplings to the stress-tensor. Lastly, in section 3.7, we compare our values obtained from sum-rules to the lattice results.

### Dispersion relations

The superconvergent two-point function \(\mathcal{A}_{0+2}\) inherits a Kallen-Lehmann representation \[\mathcal{A}_{0+2}(p^{2})=\frac{1}{\pi}\,\int_{\mathbb{R}^{+}}\mathrm{d}s\,\, \frac{\rho(s)}{p^{2}+s-i\epsilon} \tag{32}\] from the two-point functions \(A_{0}\) and \(A_{2}\) (10).4 Here, \(\rho\) is called the spectral density and is positive for timelike momenta \(q^{2}=-s<0\). The form of the spectral density follows directly from the spectral densities of \(A_{0}\) and \(A_{2}\), Footnote 4: In general, the Fourier transform of any 2-point function always has a Kallén-Lehmann representation.
\[\rho(s)=\sum_{i=1}^{n}\frac{2\pi g_{i}^{2}}{(m_{i}^{2})^{2}}\,\delta\left(s-m _{i}^{2}\right)+H(s)\,\Theta\left(s-s_{0}\right), \tag{33}\] where \(m_{i}\) is the mass of the \(i^{\text{th}}\) bound state, \(g_{i}\) is the residue of the \(m_{i}\) pole (also the coupling constant of the \(i^{\text{th}}\) bound state) and the continuum is assumed to start at \(s_{0}\sim 4m_{1}^{2}\). Since the sum-rules are robust against perturbations in \(s_{0}\), we set \(s_{0}=4m_{1}^{2}\). The \(s\)-dependence of \(H\) is fixed by the asymptotic form of \(\rho\), which is computable using perturbation theory. The normalization of the delta-function terms in (3.2) follows from the usual normalization of the \(A_{0}\) and \(A_{2}\) spectral densities and the fact that \(\mathcal{A}_{0+2}=A_{0+2}/(p^{2})^{2}\). The form of the spectral density determines the analytic structure of \(\mathcal{A}_{0+2}\) (see fig. 6). While the two-point functions have been computed in the limit of large spacelike momenta \(q^{2}=Q^{2}>0\) (Euclidean region), we need \(\mathcal{A}_{0+2}\) in the region of large timelike momenta. Thankfully, these regions are linked by a dispersion relation. To see this, consider the following rewriting of \(\mathcal{A}_{0+2}\) \[\mathcal{A}_{0+2}(Q^{2})=\oint_{q^{2}=Q^{2}}\frac{\mathrm{d}q^{2}}{2\pi i} \frac{\mathcal{A}_{0+2}(q^{2})}{q^{2}-Q^{2}}, \tag{3.3}\] where the contour encircles the (spacelike) point \(Q^{2}\). Next, we deform the contour so that it encircles the poles along the real axis and hugs the branch cut in figure 6 \[\mathcal{A}_{0+2}(Q^{2})=\int_{0}^{\infty}\frac{\mathrm{d}s}{2\pi i}\frac{ \mathrm{Disc}\mathcal{A}_{0+2}(-s)}{s+Q^{2}} \tag{3.4}\] where \(\mathrm{Disc}\mathcal{A}_{0+2}\) is the discontinuity of \(\mathcal{A}_{0+2}\) along the branch cut pictured in fig. 6 \[\mathrm{Disc}\,\mathcal{A}_{0+2}(-s)=\mathcal{A}_{0+2}(-s-i\epsilon) -\mathcal{A}_{0+2}(-s+i\epsilon). \tag{3.5}\] Comparing (3.1) and (3.4) we see that the spectral density is given by the discontinuity \[\rho(s)=\frac{\mathrm{Disc}\mathcal{A}_{0+2}(-s)}{2i}, \tag{3.6}\] defined by the above contour deformation.

Figure 6: Analytic structure of \(\mathcal{A}_{0+2}\), which follows from the Kallén-Lehmann representation of \(A_{0+2}\) and the fact that \(A_{0+2}\to 0\) faster than \((p^{2})^{2}\) as \(p^{2}\to 0\). The minimal assumption is that there is a pole at \(q^{2}=-m_{1}^{2}\) associated to the spin-0 glueball, followed by a branch cut that begins at \(q^{2}=-s_{0}\sim-4m_{1}^{2}\). Here, \(m_{2}\) is the mass of the next bound state (spin-2 glueball), which may or may not lie below \(s_{0}\). The contour deformation used in the derivation of the dispersion relation is represented by the dashed line.

### Borel transformation

The perturbative expansion of \(\mathcal{A}_{0+2}\) is an asymptotic series and thus cannot be extended to the region of small timelike momentum \(s\sim 0\). Yet, in order to extract the mass of the low-energy bound states, we need to use the perturbative results at small \(s\). To this end, we work with the Borel transform of \(\mathcal{A}_{0+2}\), which improves the convergence of the asymptotic series, and hope that the improved convergence of the perturbative result overlaps with low-energy glueball physics.
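Before doing so, the dispersion relation itself is easy to sanity-check numerically. The following minimal Python sketch (our own illustration, not part of the original calculation) verifies that the leading spectral density \(\rho(s)=a_{0}/\sqrt{s}\) reproduces the leading Euclidean term \(a_{0}/\sqrt{Q^{2}}\) of (28) through the unsubtracted representation (32); only \(a_{0}=3\) is taken from (29), the evaluation point is arbitrary:

```python
import numpy as np
from scipy.integrate import quad

# Check the unsubtracted Kallen-Lehmann representation for the leading term:
# rho(s) = a0/sqrt(s) should give A(Q^2) = a0/sqrt(Q^2) at spacelike momenta,
# in units where g_s^2 * C_A = 1.
a0, Q2 = 3.0, 2.5
dispersive, _ = quad(lambda s: (a0 / np.sqrt(s)) / (s + Q2), 0.0, np.inf, limit=200)
dispersive /= np.pi
print(dispersive, a0 / np.sqrt(Q2))   # both ~ 1.8974
```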
The Borel transformation of the superconvergent two-point function is \[\hat{\mathcal{A}}_{0+2}(M^{2})=\mathcal{B}_{M}\bigg{[}\frac{1}{\pi}\int\, \mathrm{d}s\ \frac{\rho(s)}{s+Q^{2}}\bigg{]}=\frac{1}{\pi M^{2}}\int\,\mathrm{d}s\ \rho(s)\ e^{-s/M^{2}}, \tag{3.7}\] where \(M^{2}\) is the Borel parameter [3]. A convenient way to implement the Borel transform of an asymptotic series in Euclidean momentum \(Q^{2}\) is by acting with the following differential operator [3] \[\mathcal{B}_{M}=\lim_{\begin{subarray}{c}n\to\infty\\ Q^{2}\to\infty\\ Q^{2}/n=M^{2}\end{subarray}}\frac{1}{(n-1)!}(Q^{2})^{n}\left(-\frac{\mathrm{d} }{\mathrm{d}Q^{2}}\right)^{n}. \tag{3.8}\] In particular, all polynomials in \(Q^{2}\) are killed by the Borel transform and the following accounts for most applications \[\mathcal{B}_{M}\left[\left(\frac{1}{Q^{2}}\right)^{n}\right] =\frac{1}{\Gamma(n)(M^{2})^{n}}, \tag{3.9}\] \[\mathcal{B}_{M}\left[\left(\frac{1}{Q^{2}}\right)^{n}\log Q^{2}\right] =\frac{\log(M^{2})}{\Gamma(n)(M^{2})^{n}}+\frac{\Gamma^{\prime}( n)}{\Gamma^{2}(n)(M^{2})^{n}}. \tag{3.10}\] Note that the coefficient of \(1/(Q^{2})^{n}\) of the asymptotic series is suppressed by a factor of \(\Gamma(n)=(n-1)!\) in the Borel transform \[\mathcal{B}_{M}\bigg{[}\sum_{n\geq 0}a_{n}\frac{1}{(Q^{2})^{n}}\bigg{]}=\sum_{n \geq 0}\frac{a_{n}}{\Gamma(n)}\frac{1}{(M^{2})^{n}}. \tag{3.11}\] These additional factorials greatly improve the convergence of the Borel transformation for small Borel parameter \(M^{2}\). As a sanity check of (3.8), one can use the above to show that \[\mathcal{B}_{M}\left[\frac{1}{s+Q^{2}}\right]=\frac{1}{M^{2}}e^{-s/M^{2}}. \tag{3.12}\] Then, since \(\mathcal{A}_{0+2}(Q^{2})\) satisfies the dispersion relation (3.4), its Borel transform is exactly (3.7). The Borel transform (3.7) allows us to define a weighted average of the mass \[\hat{\mathcal{H}}^{2}\equiv\frac{\hat{\mathcal{A}}^{\prime}_{0+2}(M^{2})}{ \hat{\mathcal{A}}_{0+2}(M^{2})}=\frac{\int\mathrm{d}s\ s\ \rho(s)\ e^{-s/M^{2}}}{\int\mathrm{d}s\ \rho(s)\ e^{-s/M^{2}}} \tag{3.13}\] where \[\hat{\mathcal{A}}^{\prime}_{0+2}(M^{2})=-\frac{1}{M^{2}}\frac{\partial\left(M ^{2}\hat{\mathcal{A}}_{0+2}\right)}{\partial(1/M^{2})}=M^{2}\frac{\partial \left(M^{2}\hat{\mathcal{A}}_{0+2}\right)}{\partial M^{2}}=\frac{1}{\pi M^{2} }\int\,\mathrm{d}s\ s\ \rho(s)\ e^{-s/M^{2}}. \tag{3.14}\] Provided that the spectral density \(\rho\) is dominated by the \(m_{1}\) glueball, this quantity yields an estimate for \(m_{1}^{2}\). That is, at low \(M^{2}\), \(\hat{\mathcal{H}}^{2}\) should have a plateau at roughly the height \(m_{1}^{2}\). While this is indeed the case non-perturbatively, the truncated perturbative expression for \(\hat{\mathcal{H}}^{2}\) does not have this plateau due to the breakdown of the perturbative series (as seen in figure 6(b)).

### Borel transformation of the perturbative result

In this section, we compute the Borel transformation of the perturbative series of the superconvergent two-point function. This will be used to estimate the mass and couplings of the lightest glueball states in sections 3.5 and 3.6.
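The operator definition (3.8) can also be checked directly. A short sympy sketch (our illustration; the truncation parameter \(N\) stands in for the limit \(n\to\infty\)) reproduces (3.12) numerically:

```python
import sympy as sp

Q2, M2, s = sp.symbols('Q2 M2 s', positive=True)

def borel_truncated(expr, N):
    """Truncation of the operator limit (3.8): apply (Q2)^N (-d/dQ2)^N / (N-1)!,
    then substitute Q2 -> N * M2."""
    out = expr
    for _ in range(N):
        out = -sp.diff(out, Q2)
    out = out * Q2**N / sp.factorial(N - 1)
    return sp.simplify(out.subs(Q2, N * M2))

# Check (3.12): B_M[1/(s + Q^2)] should approach exp(-s/M^2)/M^2 as N grows.
for N in (10, 40, 80):
    approx = borel_truncated(1 / (s + Q2), N)
    print(N, sp.N(approx.subs({s: 1, M2: 1})))   # 0.3505, 0.3634, 0.3656 -> 1/e
print(sp.N(sp.exp(-1)))                          # 0.3679
```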
Using equations (10) and (11), we find that the Borel transform of the perturbative series for \(\mathcal{A}_{0+2}\) is \[\hat{\mathcal{A}}_{0+2}^{\text{pert}}(M^{2})=\mathcal{B}_{M}\left[ \text{three-loop truncation of }\mathcal{A}_{0+2}\right] \tag{14}\] \[=\frac{a_{0}}{\sqrt{\pi}\sqrt{M^{2}}}+a_{1}\frac{g_{s}^{2}C_{A}} {M^{2}}+\frac{2a_{2}}{\sqrt{\pi}}\frac{(g_{s}^{2}C_{A})^{2}}{(M^{2})^{3/2}}+a_ {3}\frac{(g_{s}^{2}C_{A})^{3}}{(M^{2})^{2}}.\] The one-, two- and three-loop Borel transforms of \(\hat{\mathcal{A}}_{0+2}^{\text{pert}}\) are plotted in figure 6(a). In particular, the two- and three-loop contributions converge very quickly for \(M^{2}\gtrsim g_{s}^{2}C_{A}\), signaling that the three-loop curve can be trusted for \(M^{2}\gtrsim g_{s}^{2}C_{A}\). However, it is uncertain how much we can trust the three-loop \(\hat{\mathcal{A}}_{0+2}^{\text{pert}}\) for \(M^{2}<g_{s}^{2}C_{A}\). In order to quantify the uncertainty in \(\hat{\mathcal{A}}_{0+2}^{\text{pert}}\), we have included an unknown "four-loop" term in \(\mathcal{A}_{0+2}^{\text{pert}}\). The coefficient \(a_{3}\) parameterizes the error in the perturbative result. We have set the magnitude of this coefficient to be approximately the same as the three-loop correction to \(\mathcal{A}_{0+2}\): \(|a_{3}|<\frac{1}{10}\) (see figure 7(a)). In particular, note that the error band shrinks as \(M^{2}\to\infty\), where we are most certain about the perturbative result, but becomes very wide for small \(M^{2}\), where we are the most uncertain of the perturbative result. From equation (14), we compute the weighted mass average \(\hat{\mathcal{H}}_{\text{pert}}^{2}\) \[\frac{\hat{\mathcal{H}}_{\text{pert}}^{2}}{(g_{s}^{2}C_{A})^{2}}=\frac{a_{0} \frac{(M^{2})^{3/2}}{(g_{s}^{2}C_{A})^{3}}-2a_{2}\frac{\sqrt{M^{2}}}{g_{s}^{2} C_{A}}-2\sqrt{\pi}a_{3}}{2a_{0}\frac{\sqrt{M^{2}}}{g_{s}^{2}C_{A}}+2\sqrt{\pi}a_{1}+4a_{2} \frac{g_{s}^{2}C_{A}}{\sqrt{M^{2}}}+2\sqrt{\pi}a_{3}\frac{(g_{s}^{2}C_{A})^{2} }{M^{2}}} \tag{15}\] Like \(\hat{\mathcal{A}}_{0+2}^{\text{pert}}\), the two- and three-loop \(\hat{\mathcal{H}}_{\text{pert}}^{2}\) curves converge quickly for \(M^{2}\gtrsim g_{s}^{2}C_{A}\) (see figure 6(b)). We can also plot a version of \(\hat{\mathcal{H}}_{\text{pert}}^{2}\) with an error band (see figure 7(b)).

### Borel transformation of the non-perturbative model

In this section, we construct an ansatz/model for the non-perturbative spectral density of the superconvergent two-point function. From this spectral density, we construct a non-perturbative Borel transform of the superconvergent two-point function \(\hat{\mathcal{A}}_{0+2}^{\text{non-pert}}\) and the analogous weighted mass average \(\hat{\mathcal{H}}_{\text{non-pert}}^{2}\). Like their perturbative cousins, these quantities will be used to estimate the mass and couplings of the lightest glueball states in sections 3.5 and 3.6. We consider the following model of the non-perturbative spectral density \[\rho(s)=\sum_{i=1}^{N}\frac{2\pi g_{i}^{2}}{m_{i}^{4}}\,\delta\left(s-m_{i}^{ 2}\right)+H(s)\,\Theta\left(s-4m_{1}^{2}\right) \tag{16}\] where \(H(s)\) is fixed by the asymptotic behaviour of the perturbative spectral density \[H(s)\equiv\frac{1}{2i}\,\text{Disc}\,\mathcal{A}_{0+2}^{\text{pert}}(-s) \bigg{|}_{s>4m_{1}^{2}}=\frac{a_{0}}{\sqrt{s}}-\frac{a_{2}(g_{s}^{2}C_{A})^{2 }}{s^{3/2}}+\mathcal{O}\left(\frac{1}{s^{5/2}}\right).
\tag{17}\] Note that this has the gross features expected non-perturbatively: a sum of delta functions for each glueball in the spectrum and a continuum that begins at the threshold of the lightest particle \(s>4m_{1}^{2}\). Using (10), the Borel transform of \({\cal A}^{\text{non-pert}}_{0+2}\) is \[\hat{\cal A}^{\text{non-pert}}_{0+2}(M^{2})=\sum_{i=1}^{n}\frac{2g_{i}^{2}}{M ^{2}m_{i}^{4}}e^{-\frac{m_{i}^{2}}{M^{2}}}+\frac{a_{0}}{\sqrt{\pi}\sqrt{M^{2} }}\text{erfc}\left(\sqrt{\frac{4m_{1}^{2}}{M^{2}}}\right)-\frac{a_{2}(g_{s}^{ 2}C_{A})^{2}}{\pi m_{1}M^{2}}e^{-\frac{4m_{1}^{2}}{M^{2}}}+\frac{2a_{2}(g_{s}^ {2}C_{A})^{2}}{\sqrt{\pi}(M^{2})^{3/2}}\text{erfc}\left(\sqrt{\frac{4m_{1}^{2 }}{M^{2}}}\right) \tag{23}\] where \(\text{erfc}(z)=1-\text{erf}(z)\) is the complementary error function. We can fix one parameter in our model by comparing \(\hat{\cal A}^{\text{pert}}_{0+2}\) and \(\hat{\cal A}^{\text{non-pert}}_{0+2}\) in the large \(M^{2}\) limit where we trust perturbation theory. Expanding in the large \(M^{2}\) limit yields \[\hat{\cal A}^{\text{non-pert}}_{0+2}(M^{2})=\frac{a_{0}}{\sqrt{\pi}\sqrt{M^{2} }}+\frac{1}{\pi M^{2}}\Bigg{[}\sum_{i=1}^{n}\frac{2\pi g_{i}^{2}}{m_{i}^{4}}-4 a_{0}m_{1}-a_{2}\frac{(g_{s}^{2}C_{A})^{2}}{m_{1}}\Bigg{]}+{\cal O}\left( \frac{1}{(M^{2})^{3/2}}\right). \tag{24}\] Then, requiring \[\left[\hat{\cal A}^{\text{non-pert}}_{0+2}(M^{2})-\hat{\cal A}^{\text{pert}}_ {0+2}(M^{2})\right]_{M^{2}\to\infty}={\cal O}\left(\frac{1}{(M^{2})^{3/2}} \right), \tag{25}\] fixes \[g_{1}^{2}=a_{0}\frac{2m_{1}^{5}}{\pi}+a_{1}\frac{(g_{s}^{2}C_{A})m_{1}^{4}}{2} +a_{2}\frac{(g_{s}^{2}C_{A})^{2}m_{1}^{3}}{2\pi}-\sum_{i=2}^{n}\frac{g_{i}^{2} m_{1}^{4}}{m_{i}^{4}} \tag{26}\] and guarantees that the high energy limit of \(\hat{\cal A}^{\text{non-pert}}_{0+2}\) matches \(\hat{\cal A}^{\text{pert}}_{0+2}\). To fix the remaining parameters of the model, we minimize \[\chi^{2}=\sum_{j=0}^{N}\left(\frac{\hat{\cal H}^{2}_{\text{pert}}(M_{j}^{2})|_ {a_{3}=0}-\hat{\cal H}^{2}_{\text{non-pert}}(M_{j}^{2})}{\text{Error}(\hat{ \cal H}^{2}_{\text{pert}}(M_{j}^{2}))}\right)^{2} \tag{27}\] where \(M_{j}^{2}\in R_{N}\) and \(R_{N}\) is the discretization of a region of Borel parameter space \(R=[R_{\text{min}},R_{\text{max}}]\) into \(N+1\) points. Since we know that the high energy limit of \(\rho\) is a power law, we want to sample the low energy region of \(\hat{\cal H}^{2}\) more frequently (if the high energy region is sampled too heavily, the fit can be driven to a pure power law that is only accurate at high energies). Thus, \(R_{N}\) is logarithmically discretized \[R_{N}\ni M_{j}^{2}=\exp\left(\log R_{\text{min}}+j\ \frac{\log R_{\text{ max}}-\log R_{\text{min}}}{N}\right), \tag{28}\] for \(j=0,1,2,\ldots,N\).

### One-glueball model (\(N=1\))

In this section, we minimize (27) for a model with one glueball (\(N=1\) in (20)) and extract estimates for the mass \(m_{1}\) and coupling \(g_{1}\). The simplest model for the spectral density is the single glueball model \[\rho_{1}(s)=\frac{2\pi g_{1}^{2}}{m_{1}^{4}}\,\delta\left(s-m_{1}^{2}\right)+H (s)\,\Theta\left(s-4m_{1}^{2}\right).
\tag{29}\] The spin-0 glueball residue is fixed by (3.22) to \[g_{1}^{2}=\frac{247(g_{s}^{2}C_{A})^{2}m_{1}^{3}}{512\pi}-\frac{191(g_{s}^{2}C_{A })^{2}m_{1}^{3}}{36\pi^{3}}+\frac{8(g_{s}^{2}C_{A})m_{1}^{4}}{3\pi^{2}}-\frac{5( g_{s}^{2}C_{A})m_{1}^{4}}{8}+\frac{6m_{1}^{5}}{\pi}. \tag{3.26}\] Moreover, since \(m_{1},g_{1}^{2}>0\), we obtain a lower bound on the spin-0 mass \[\frac{m_{1}}{g_{s}^{2}C_{A}}>m_{1,\min}\equiv\frac{\sqrt{77440-8589\pi^{2}+225 \pi^{4}}+15\pi^{2}-64}{288\pi}\approx 0.226377. \tag{3.27}\] In the one-glueball model, the minimization of \(\chi^{2}\) is highly sensitive to the choice of the region \(R\). In figure 9, \(\chi^{2}\) is plotted as a function of the glueball mass \(m_{1}\) for several choices of \(R\). For small \(R_{\max}\), the fit is trying to match the low energy regions best and there is a global minimum at \(m_{1}\sim 1.06(g_{s}^{2}C_{A})\). As \(R_{\max}\) is increased, what was the global minimum becomes a local minimum. The new global minimum forces \(m_{1}\) to its minimal value (where \(g_{1}^{2}=0\)). By setting \(g_{1}^{2}=0\), the fit wants to forget about the non-perturbative dynamics and instead match perturbation theory (figure 9(b)). Given the sensitivity of the optimized \(m_{1}\) to the choice of \(R_{\max}\) and the fact that using a low energy \(R_{\max}\) leads to a significant mismatch between \(\hat{\mathcal{H}}_{\text{pert}}^{2}\) and \(\hat{\mathcal{H}}_{\text{non-pert}}^{2}\) for \(M^{2}>1\) (figure 9(a)), we conclude that the one-glueball model does not accurately describe the non-perturbative spectral density of \(\mathcal{A}_{0+2}\).

### Two-glueball model (\(N=2\))

Having concluded that the single glueball model does not accurately represent the non-perturbative spectral density of \(\mathcal{A}_{0+2}\), we study the next simplest model containing two glueballs. We will find that the optimized values for the masses and couplings of this model are much more stable.

Figure 9: Plot of the 1-glueball \(\log_{10}(\chi^{2})\) for various choices of the fitting region \(R\). The location of the minimum is very sensitive to the choice of \(R_{\max}\). For “small” \(R_{\max}\), the minimum is roughly at \(m_{1}/(g_{s}^{2}C_{A})\sim 1\). After increasing \(R_{\max}\) sufficiently, what was a global minimum becomes a local minimum. For almost all choices of \(R_{\max}\), the minimization of \(\chi^{2}\) puts too much emphasis on the high energy region. This stamps out the non-linearities in \(\hat{\mathcal{H}}_{\text{non-pert}}^{2}\) and drives \(g_{1}^{2}\) to zero, which we deem unphysical.

Setting \(N=2\) in (3.17), the spectral density of the two glueball model is \[\rho_{2}(s)=2\pi\left[\frac{g_{1}^{2}}{m_{1}^{4}}\,\delta\left(s-m_{1}^{2} \right)+\frac{g_{2}^{2}}{m_{2}^{4}}\,\delta\left(s-m_{2}^{2}\right)\right]+H(s )\,\Theta\left(s-4m_{1}^{2}\right). \tag{3.28}\] By matching the asymptotics of \(\mathcal{A}_{0+2}^{\text{pert}}\) and \(\mathcal{A}_{0+2}^{\text{non-pert}}\), the coupling constant \(g_{1}^{2}\) is fixed to \[g_{1}^{2}=-\frac{g_{2}^{2}m_{1}^{4}}{m_{2}^{4}}+\frac{6m_{1}^{5}}{\pi}+\left( \frac{8}{3\pi^{2}}-\frac{5}{8}\right)m_{1}^{4}(g_{s}^{2}C_{A})+\left(\frac{247 }{512\pi}-\frac{191}{36\pi^{3}}\right)m_{1}^{3}(g_{s}^{2}C_{A})^{2}.
\tag{3.29}\] Combining this with the constraints \(m_{i},g_{i}>0\) and \(m_{i+1}>m_{i}\), we find the same constraint on \(m_{1}\) as for the one-glueball model \[\frac{m_{1}}{g_{s}^{2}C_{A}}>m_{1,\min}\approx 0.226377 \tag{3.30}\] as well as a constraint on the coupling \(g_{2}^{2}\) \[\frac{g_{2}^{2}}{\left(g_{s}^{2}C_{A}\right)^{5}}<g_{2,\max}^{2}\equiv\frac{6 m_{1}m_{2}^{4}}{\pi(g_{s}^{2}C_{A})^{5}}-\frac{\left(15\pi^{2}-64\right)m_{2}^{4} }{24\pi^{2}(g_{s}^{2}C_{A})^{4}}+\frac{\left(2223\pi^{2}-24448\right)m_{2}^{4} }{4608\pi^{3}(g_{s}^{2}C_{A})^{3}m_{1}}. \tag{3.31}\] Minimizing the two glueball \(\chi^{2}\), we find estimates for the model parameters. In particular, the minimization of the two glueball \(\chi^{2}\) is robust to changes of the region \(R\) and the discretization parameter \(N\).

Figure 10: Plots of \(\hat{\mathcal{H}}^{2}\) (the ratio of \(\hat{\mathcal{A}}'_{0+2}\) to \(\hat{\mathcal{A}}_{0+2}\), eq. (3.13)) where the parameters have been optimized over different fitting regions \(R\) (indicated by the region between the grey vertical lines). In figure 9(a) the parameters have been optimized using a small \(R_{\max}\). While \(m_{1}\) is reasonable (\(m_{1}/g_{s}^{2}C_{A}\sim\mathcal{O}(1)\)), there is still a sizable difference between \(\hat{\mathcal{H}}^{2}_{\text{non-pert}}\) and \(\hat{\mathcal{H}}^{2}_{\text{pert}}\) for \(M^{2}\geq 1\). In figure 9(b) the parameters have been optimized using a larger \(R_{\max}\). Unsurprisingly, the high energy limit of \(\hat{\mathcal{H}}^{2}_{\text{non-pert}}\) fits \(\hat{\mathcal{H}}^{2}_{\text{pert}}\) better. By minimizing \(\chi^{2}\) over a larger region, \(m_{1}\) and \(g_{1}^{2}\) are driven to their minimal values. Since \(\hat{\mathcal{H}}^{2}_{\text{pert}}\) is linear in the high energy region, the minimization sets \(g_{1}^{2}\to 0\) so that \(\hat{\mathcal{H}}^{2}_{\text{non-pert}}\) is as close to linear as possible. Given the sensitivity of the optimized parameters to the range \(R\), the single glueball model is perhaps not the best.

Much like the one-glueball case, when \(R_{\rm min}\) is too large the minimization procedure puts too much emphasis on the high energy region and drives \(m_{1}\to m_{1,\rm min}\) as well as \(g_{1}^{2}\to 0\). In this case, the global minimum lies somewhere outside the allowed region in \((m_{2},g_{2}^{2})\)-space (see figure 10(a)). However, for small enough \(R_{\rm min}\), we get reasonable estimates for \(m_{1}\) and \(g_{2}^{2}\) where the global minimum is well inside the allowed region (see figure 10(b)). Choosing the right \(R_{\rm min}\) is essential to extracting good estimates. This requires finding a window where one can still trust the extrapolation of perturbation theory and where the effects of the low-lying glueballs are significant. However, since we do not know exactly where the perturbative expansion breaks down, this choice can introduce significant error into our estimates.

Figure 11: Density plots of \(\log(\chi^{2})\) at various values of \(m_{1}\) in the allowed region (below equation 3.29).

As a sanity check, we compare the relative strength of the continuum and glueball contributions to the Borel transform of the superconvergent two-point function in figure 12. For the optimized values of \(m_{1},m_{2},g_{1}^{2}\) and \(g_{2}^{2}\), the relative strengths of the continuum and glueball contributions to the superconvergent combination align with physical expectations. Near the lowest lying glueball state, the contribution from the glueball dominates over the continuum.
However, somewhere after the first glueball but before the second glueball and the threshold, the continuum starts to dominate. Moreover, the second glueball is subdominant in all regions and occurs below threshold. These properties are consistent with a physically reasonable spectral density and we are inclined to trust the optimized values of \(m_{1},m_{2},g_{1}^{2}\) and \(g_{2}^{2}\). We also note that the ratio of the \(m_{2}\) to \(m_{1}\) contribution to \(\hat{\mathcal{A}}_{0+2}^{\text{non-pert}}\) approaches \(1/2\) asymptotically. Since these contributions are asymptotically of the same order, this provides further evidence that the single glueball model (section 3.5) misses important effects. Roughly speaking, this means that perturbation theory is not any more sensitive to the \(m_{1}\) glueball compared to the \(m_{2}\) glueball. On the other hand, the above analysis assumed that \(a_{3}=0\) in \(\mathcal{A}_{0\text{+}2}^{\text{pert}}\). It is important to understand to what degree the unknown coefficient \(a_{3}\) can change the optimized results. To get a rough idea, we repeat the above analysis for fixed \(a_{3}\) in table 1. The only formula that changes is (3.23).

Figure 12: Plot of the individual contributions to the Borel transform of the superconvergent two-point function \(\hat{\mathcal{A}}_{0\text{+}2}^{\text{non-pert}}\). This plot illustrates the relative strength of the threshold and glueball contributions. As expected, the \(m_{1}\) glueball dominates for small Borel parameter \(M^{2}\sim m_{1}^{2}\) while the threshold contribution dominates for large Borel parameter \(M^{2}>4m_{1}^{2}\). The \(m_{2}\)-glueball is always subdominant, as physically expected. However, the ratio of the \(m_{2}\) to \(m_{1}\) contribution approaches \(1/2\) asymptotically.

Since we are not parameterizing the error in \(a_{3}\), we replace \(\text{Error}(\hat{\mathcal{H}}^{2}_{\text{pert}}(M^{2}_{j}))\to\hat{\mathcal{H }}^{2}_{\text{pert}}(M^{2}_{j})\) in (3.23). From table 1, we see that our estimates are relatively insensitive to the unknown coefficient \(a_{3}\). In particular, the optimized values for the lowest-lying mass \(m_{1}\) and its coupling to the stress-tensor \(g_{1}\) are stable in the regions \[\frac{m_{1}}{g_{s}^{2}C_{A}}\in[0.92,0.94]\qquad\text{and}\qquad\frac{g_{1}^{2 }}{\left(g_{s}^{2}C_{A}\right)^{5}}\in[0.66,0.74]. \tag{3.32}\] Perhaps the insensitivity to \(a_{3}\) can be seen from the fact that \(H(s)\) (see (3.18)) does not depend on \(a_{3}\). This means that \(a_{3}\) does not appear in the equations for the non-perturbative superconvergent two-point function \(\hat{\mathcal{A}}^{\text{non-pert}}_{0+2}\) (3.19) or the lowest-lying residue (3.22). The only place \(a_{3}\) appears is in the perturbative result for the weighted mass \(\hat{\mathcal{H}}^{2}_{\text{pert}}\). Thus, \(a_{3}\) enters into the \(\chi^{2}\) fit in a relatively simple way.

### Comparison with lattice data

In this section, we summarize the low energy spectrum of three-dimensional YM theory predicted by lattice simulations and compare with the results of section 3.6. Roughly speaking, observables are computed in lattice simulations by directly performing the Feynman path integral over field configurations on a discretized spacetime (often done using Monte Carlo sampling). By calculating a given observable for many different lattice spacings, one can determine a best fit for the dependence on the lattice spacing.
Then extrapolating this fit to the limit of vanishing lattice spacing yields observables in the continuum theory. Fortunately, there is a lot of data from lattice simulations of three-dimensional YM [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. In particular, the spectrum for gauge group \(G=SU(N_{c})\) has been computed in [8; 17] for various \(N_{c}\). However, this data must be converted from units of the string tension \(\sigma\), which is the most accurate measurement on the lattice, to units of \(g_{s}^{2}N_{c}\).

\begin{table} \begin{tabular}{c|c|c|c||c|c|c|c} \(a_{3}\) & \(N\) & \(\frac{R_{\text{min}}}{g_{s}^{2}C_{A}}\) & \(\frac{R_{\text{max}}}{g_{s}^{2}C_{A}}\) & \(\frac{m_{1}}{g_{s}^{2}C_{A}}\) & \(\frac{g_{1}^{2}}{(g_{s}^{2}C_{A})^{5}}\) & \(\frac{m_{2}}{g_{s}^{2}C_{A}}\) & \(\frac{g_{2}^{2}}{(g_{s}^{2}C_{A})^{5}}\) \\ \hline \(\frac{1}{5}\) & 20-100 & 1-2 & 10-20 & 0.92 \(\div\) 0.94 & 0.66 \(\div\) 0.74 & 1.64 \(\div\) 1.70 & 3.48 \(\div\) 3.87 \\ \hline \(\frac{1}{10}\) & 20-100 & 1-2 & 10-20 & 0.92 \(\div\) 0.94 & 0.66 \(\div\) 0.74 & 1.65 \(\div\) 1.70 & 3.48 \(\div\) 3.87 \\ \hline \(0\) & 20-100 & 1-2 & 10-20 & 0.92 \(\div\) 0.94 & 0.66 \(\div\) 0.74 & 1.65 \(\div\) 1.70 & 3.48 \(\div\) 3.87 \\ \hline \(-\frac{1}{10}\) & 20-100 & 1-2 & 10-20 & 0.92 \(\div\) 0.94 & 0.66 \(\div\) 0.74 & 1.65 \(\div\) 1.70 & 3.48 \(\div\) 3.87 \\ \hline \(-\frac{1}{5}\) & 20-100 & 1-2 & 10-20 & 0.92 \(\div\) 0.94 & 0.66 \(\div\) 0.74 & 1.65 \(\div\) 1.70 & 3.48 \(\div\) 3.89 \\ \end{tabular} \end{table} Table 1: Mass estimates for some compatible values of the error \(a_{3}\). The above table shows that our estimates do not strongly depend on \(a_{3}\). While we suspect that only \(|a_{3}|<1/10\) are physically reasonable values, we show results with \(a_{3}\) outside this range.

The string tension is computed from the energy of the lowest-lying state of a static quark anti-quark pair separated by a distance \(R\). If our theory has linear confinement, this energy, \(E_{\rm min}(R)\), provides a definition for the static quark potential as well as a definition for the string tension \(\sigma\) in the large \(R\) limit \[E_{\rm min}(R)\equiv V_{q\bar{q}}(R)\underset{R\rightarrow\infty}{\simeq}\sigma R \tag{3.33}\] For large \(R\), this state should be thought of as static quarks attached by a confining flux tube of length \(R\). Reference [17] provides the most recent fit of the string tension in (2+1)-dimensional Yang-Mills theory \[\sqrt{\sigma}=\left(0.196573(81)-\frac{0.1162(9)}{N_{c}^{2}}\right)g_{s}^{2}N_ {c}. \tag{3.34}\] The mass values in units of \(g_{s}^{2}N_{c}\) are summarized in table 2. Comparing with table 1, we see that the sum-rule estimates for \(m_{1}\) are in good agreement with the lattice data, with error between 14% and 19% for any value of \(N_{c}\). While the error for the \(m_{2}\) estimates can be much larger (up to \(\sim 48\%\)), this comparison reveals that sum-rules capture many gross features of the low-energy non-perturbative physics. The discrepancy with the lattice data is likely due to the inaccuracies in our model of the spectral density, which includes only one or two glueball states and a perturbative continuum. Despite these discrepancies, we conclude that this model is still a relatively good first approximation.

## 4 Everything is consistent with unitarity!
In this section, we study the compatibility of the residues, \(g_{i}^{2}\), of the non-perturbative superconvergent spectral density (3.17) with the principle of unitarity and with the perturbative two-point function, eq. (2.28). In particular, we consider the case of a single glueball with mass \(m_{1}\) and multi-particle threshold starting at \(4m_{1}^{2}\). If the physical spectral density is indeed dominated by the lightest spin-0 glueball, this model would be a good approximation. While we have already argued against this approximation and that one should include at least two glueball states, the single glueball model is more constrained and therefore more relevant for the consistency checks.

\begin{table} \begin{tabular}{c||c c|c c} \(N_{c}\) & \(m_{1}/(g_{s}^{2}N_{c})\) & \(J^{PC}\) & \(m_{2}/(g_{s}^{2}N_{c})\) & \(J^{PC}\) \\ \hline 2 & 0.79 & \(0^{++}\) & 1.15 & \(0^{++*}\) \\ 3 & 0.80 & \(0^{++}\) & 1.19 & \(0^{++*}\) \\ 4 & 0.80 & \(0^{++}\) & 1.22 & \(0^{++*}\) \\ ⋮ & & & & \\ \(\infty\) & 0.81 & \(0^{++}\) & 1.24 & \(0^{++*}\) \\ \end{tabular} \end{table} Table 2: The first two masses and states in the spectrum computed by lattice simulations [17]. Note that we have only included states with quantum numbers \(P=C=+\) since the stress tensor has quantum numbers \(P=C=+\). With this restriction, the lowest lying states are always \(0^{++}\) and its excited state \(0^{++*}\).

Recalling eq. (3.25), the spectral density for the single glueball model is \[\rho(s)=\frac{2\pi g_{1}^{2}}{(m_{1}^{2})^{2}}\delta(s-m_{1}^{2})+H(s)\,\Theta\left( s-s_{0}\right). \tag{4.1}\] Checking unitarity of the correlation function boils down to checking the positivity of this spectral density. To impose positivity, we consider a coordinate transformation of the \(s\)-plane that maps the upper half-plane to the unit disk while moving the pole to the origin and the branch cut to the boundary of the unit disk (see [42; 43]): \[s\to z=\frac{\sqrt{4m_{1}^{2}-s}-\sqrt{3}m_{1}}{\sqrt{4m_{1}^{2}-s}+\sqrt{3}m_ {1}}. \tag{4.2}\] Now, the series expansion around \(z=0\) is convergent with a finite radius of convergence. Specifically, the superconvergent combination, \(\mathcal{A}_{0+2}\), becomes \[\mathcal{A}_{0+2}(z)=\frac{-g_{1}^{2}}{12m_{1}^{6}z}+\sum_{n=0}^{\infty}c_{n}z ^{n}, \tag{4.3}\] which can be truncated at some large but finite cutoff \(N\). On the other hand, the perturbative expansion for large Euclidean momenta (\(s<0\)) maps to an asymptotic expansion around \(z=1\). Comparison with the three-loop perturbative result fixes three of the \(c_{i}\)'s. Next, we impose positivity on the boundary of the unit disk in the \(z\)-plane \[\rho(z)=\frac{-\pi g_{1}^{2}}{6m_{1}^{6}}\delta(z)+\sum_{n=0}^{N}c_{n}\text{Im }(z^{n})\geq 0, \tag{4.4}\] where \(z=e^{i\theta}\) and \(\theta\in(0,\pi)\). In practice, we truncate the sum (4.4) at \(N=40\) and impose positivity for 2000 evenly-spaced points on the boundary of the disc. Using simple Mathematica functions (FindMinimum and FindMaximum), we minimize/maximize the residue \(g_{1}^{2}\) over the variables \(c_{i=4,\ldots,40}\) while enforcing the positivity condition (4.4) at the boundary points.5 Footnote 5: We have checked that the numerical solutions to the \(c_{i}\) are stable when we change the number of boundary points. Unfortunately, positivity of the physical cut alone is not enough to get a finite upper bound for the residue since both terms in (4.4) can be arbitrarily large positive numbers.
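For readers who want to experiment with this setup, the boundary positivity condition in (4.4) is straightforward to evaluate numerically. A minimal Python sketch follows (our own illustration; the placeholder coefficients stand in for the \(c_{n}\), which in practice come from the optimization described above):

```python
import numpy as np

def rho_continuum(c, theta):
    """Continuum part of (4.4) on |z| = 1: sum_n c_n Im(z^n) = sum_n c_n sin(n*theta)."""
    n = np.arange(len(c))[:, None]
    return (np.asarray(c)[:, None] * np.sin(n * theta)).sum(axis=0)

# 2000 evenly spaced boundary points with theta in (0, pi), as in the text.
theta = np.linspace(1e-4, np.pi - 1e-4, 2000)
c = np.random.default_rng(0).normal(size=41)   # placeholder c_0..c_40 (N = 40)
print(np.all(rho_continuum(c, theta) >= 0))    # positivity check at the sampled points
```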
We illustrate this lack of an upper bound in figure 13 (right), where we plot a positive spectral density with large residue \(g_{1}^{2}=10^{6}(g_{s}^{2}C_{A})^{5}\) and mass \(m_{1}/g_{s}^{2}C_{A}=1\). On the other hand, positivity only yields a trivial lower-bound for the residue. The minimization procedure returns negative values for the residue and adding more terms to the ansatz only increases the negativity of the residue. However, since the residue must be positive, we conclude that the minimal value of the residue must be zero. In figure 13 (left), we plot a positive spectral density with a small residue \(g_{1}^{2}=10^{-6}(g_{s}^{2}C_{A})^{5}\) and mass \(m_{1}/g_{s}^{2}C_{A}=1\) that is consistent with unitarity and the perturbative results. In this section, we have solved the minimization/maximization problem for the residue \(g_{1}^{2}\) over a wide range of mass values. For each mass, we find a spectral density that is compatible with unitarity and the asymptotics predicted by perturbation theory. Hence, we conclude that one can construct a spectral density compatible with unitarity and perturbation theory for any mass and residue in the single glueball model.

Figure 13: Log-log plot of the spectral density (4.4) at \(m_{1}/(g_{s}^{2}C_{A})=1\) as well as the asymptotic behaviour obtained from the perturbative loop expansion (equation (2.28)). For the spectral density (4.4), we set \(N=40\). The right plot corresponds to a spectral density with a large value of the residue (\(g_{1}^{2}=10^{6}(g_{s}^{2}C_{A})^{5}\)) and the left plot corresponds to one with a small value of the residue (\(g_{1}^{2}=10^{-6}(g_{s}^{2}C_{A})^{5}\)). This exemplifies the argument that unitarity and the asymptotic behaviour of the correlation function are not strong enough to give an upper bound on the residue.

## 5 Higher-spin currents

In this section, we extend our analysis to more general operators, _i.e.,_ higher-spin currents \(O^{\mu_{1}\cdots\mu_{\ell}}(p)\) with even spin \(\ell\). As pointed out in the introduction, correlation functions of such operators contain important information about glueball lightcone wavefunctions, which are closely related to parton distribution functions. However, to apply the methods of section 3, we first need to identify superconvergent combinations of the higher-spin two-point functions. In section 5.1, we analyze the tensor structure of higher-spin two-point functions of interest. Then, we fix a basis of the higher-spin operators in section 5.2. The coefficients of the tensor structures for the basis operators are then computed to two-loops. In section 5.3, we compute the imaginary part of these tensor structure coefficients using unitarity cuts at one- and two-loops. While just the imaginary parts are enough to verify the existence of superconvergent combinations, we also compute these coefficients using Feynman diagrams (section 5.4) since the sum-rules are sensitive to more than just the imaginary parts. Lastly, in section 5.5, we explicitly show the existence of superconvergent combinations for higher-spin two-point functions and give a crude method for extracting the higher-spin residues.

### Higher-spin Correlation Functions

We start our analysis of higher-spin operators by noting that higher-spin fields can be defined as traceless symmetric combinations of covariant derivatives acting on the field strength. In contrast to \(T_{\mu\nu}\), these operators are not conserved and will have many more possible tensor structures. At spin-2, there is only one operator: the stress-tensor.
Explicitly, \[O_{2}^{\mu_{1}\mu_{2}}(x)=(F^{a})^{\mu_{1}}_{\lambda}(F^{a})^{\mu_{2}\lambda}- \frac{g^{\mu_{1}\mu_{2}}}{d}(F^{a})_{\mu\lambda}(F^{a})^{\mu\lambda}. \tag{108}\] For higher-spin operators with spin \(\ell\), there are \(\ell/2\) possible structures: \[\begin{split}& O_{0,\ell}^{\mu_{1}\ldots\mu_{\ell}}(x)=\frac{1}{( \ell-2)!}[D^{(\mu_{1}}\ldots D^{\mu_{\frac{\ell}{2}-1}}(F^{a})^{\mu_{\frac{ \ell}{2}}}_{\lambda}D^{\mu_{\frac{\ell}{2}+1}}\ldots D^{\mu_{\ell-1}}(F^{a})^ {\mu_{\ell})\lambda}-\text{trace}],\\ & O_{i,\ell}^{\mu_{1}\ldots\mu_{\ell}}(x)=D^{\mu_{1}}\ldots D^{ \mu_{i}}O_{0,\ell-i}^{\mu_{i+1}\ldots\mu_{\ell}}\quad\text{for}\qquad i=2, \ldots,\ell-2.\end{split} \tag{109}\] Since we are interested in even spin operators, \(\ell\) and \(i\) are even numbers. Any combination of these \(\ell/2\) structures is a viable option for higher-spin operators. Later, we will introduce the criterion we use to select our basis of spin-\(\ell\) operators. To avoid working with indices, we utilize the null-vector representation for symmetric traceless tensors of spinning states, where we introduce \(d+1\)-dimensional null vectors \(v\) with \(v^{2}=0\), \[f_{\mu_{1}\ldots\mu_{\ell}}\leftrightarrow f(v)\equiv f_{\mu_{1}\ldots\mu_{ \ell}}v^{\mu_{1}}\ldots v^{\mu_{\ell}}. \tag{110}\] \(f(v)\) can then be proved to be a harmonic polynomial of its variables [44]. Once we have the function \(f(v)\), we can reconstruct \(f_{\mu_{1}\ldots\mu_{\ell}}\) using the Thomas-Todorov operator (see [44]): \[D_{v}^{\mu}=\left(\frac{d}{2}-1+v\cdot\frac{\partial}{\partial v}\right)\frac {\partial}{\partial v_{\mu}}-\frac{1}{2}v^{\mu}\frac{\partial^{2}}{\partial v \cdot\partial v}. \tag{111}\] This differential operator imposes tracelessness directly by removing the trace. Since our goal is to study \(\langle O_{\ell}(v_{1},p)O_{\ell^{\prime}}(v_{2},-p)\rangle\) correlation functions, we need to understand how to extract different tensor structures. Because the correlation function must be invariant under little group transformations (transformations that keep the momentum, \(p\), fixed), \(O_{\ell}(p)\) and \(O_{\ell^{\prime}}(-p)\) must have opposite helicity under rotation around the \(p\)-axis. This means that the correlation function can be written as a sum over expressions with fixed helicity, \(j\), under these rotations. The helicity \(j\) is an integer between \(0\) and \(j_{\text{max}}=\min(\ell,\ell^{\prime})\). It is thus useful to define vectors that parameterize the directions perpendicular to \(p\): \[\left(v_{i}^{\perp}\right)^{\mu}=v_{i}^{\mu}-\frac{v_{i}\cdot p}{p^{2}}p^{\mu}. \tag{112}\] These vectors are orthogonal to \(p\) and transform nicely under the little group. Along with \(p^{\mu}\), these vectors span all possible tensor structures. Thus, the correlation function can then be represented as, \[\begin{split}&\langle O_{\ell}(v_{1},p)O_{\ell^{\prime}}(v_{2},-p) \rangle=\sum_{j=0}^{\min(\ell,\ell^{\prime})}[\pi_{j}]_{\ell,\ell^{\prime}}(v,v^{\prime},p)A_{j}^{\ell,\ell^{\prime}}(p),\\ &[\pi_{j}]_{\ell,\ell^{\prime}}(v,v^{\prime},p)=(v_{1}^{\perp} \cdot v_{1}^{\perp})^{\frac{\ell}{2}}(v_{2}^{\perp}\cdot v_{2}^{\perp})^{\frac {\ell^{\prime}}{2}}T_{j}(\cos\theta),\end{split} \tag{113}\] where \[\cos\theta=\frac{v_{1}^{\perp}\cdot v_{2}^{\perp}}{|v_{1}^{\perp} ||v_{2}^{\perp}|}.
\tag{108}\] Each helicity \(j\) structure in this expansion corresponds to a channel of spin \(j\) states in the Kallen-Lehmann spectral decomposition of the correlator since it has the correct transformation under the little group.

### Basis for higher-spin Operators

In this section, we define a "nice" basis for the operators (eqs. (106) and (107)). To find this basis, we first write the on-shell matrix elements \(\langle p_{1}^{g}p_{2}^{g}|O_{i,\ell}|0\rangle\) corresponding to these operators, \[\begin{split}\langle p_{1}^{g}p_{2}^{g}|O_{2}|0\rangle=& 2(v\cdot p_{1})(v\cdot p_{2})\delta^{a_{1}a_{2}}, \\ \langle p_{1}^{g}p_{2}^{g}|O_{i,\ell}|0\rangle=& 2 \left((v\cdot p_{1})+(v\cdot p_{2})\right)^{i}\left((v\cdot p_{1})(v\cdot p _{2})\right)^{\frac{\ell-i}{2}}.\end{split} \tag{109}\] There is a nice way of presenting these spin \(\ell\) structures by introducing the angle \(\phi\) via, \[\cos\phi=\frac{v^{\perp}\cdot p_{1}^{\perp}}{|v^{\perp}||p_{1}^{ \perp}|}=\frac{v\cdot(p_{1}-p_{2})}{(v\cdot p)}. \tag{110}\] In terms of this angle, the structures of eq. (109) simplify to: \[\langle p_{1}^{g}p_{2}^{g}|O_{i,\ell}|0\rangle=2^{i-\ell+1}(v\cdot p)^{\ell} \sin^{\ell-i}\phi. \tag{111}\] We can then understand the decomposition of each of these two-point functions in the helicity basis \(e^{ij\phi}\) using Chebyshev polynomials as a basis6: Footnote 6: This is because glueball states with spin \(m\) would be states with helicity \(e^{\pm im\phi}=T_{m}(\cos\phi)\pm i\sin\phi\,U_{m-1}(\cos\phi)\). \[\sin^{2n}\phi=-\frac{1}{\sqrt{\pi}}\frac{\Gamma\left(\frac{1}{2}+n \right)}{\Gamma(n+1)}+2^{1-2n}\sum_{j=0}^{n}(-1)^{j}\binom{2n}{n-j}T_{2j}(\cos \phi). \tag{112}\] We see that each of the form factors \(\langle p_{1}^{g}p_{2}^{g}|O_{i,\ell}|0\rangle\) has all even helicities from \(0\) to \(\ell\). However, we can construct linear combinations of the \(O_{i,\ell}\) operators such that the combinations only include helicity \(\ell\), \(-\ell\) and \(0\): \[\langle p_{1}^{g}p_{2}^{g}|\mathcal{Q}_{\ell}(p,v)|0\rangle=\delta^{ab}(-1)^{ \frac{\ell}{2}}2^{-2}(v\cdot p)^{\ell}(T_{\ell}(\cos\phi)-1). \tag{113}\] From this equation it is easy to read off the relations between \(\mathcal{Q}_{\ell}\) and \(O_{i,\ell}\), \[\mathcal{Q}_{\ell}=\sum_{m=0}^{\ell/2}\sum_{k=0}^{\ell-2m}2^{2m+2k-1}\binom{ \ell}{2m}\binom{\ell-2m}{k}(-1)^{\frac{\ell+2k+2m}{2}}O_{\ell-2m-2k,\ell}+ \frac{(-1)^{\frac{\ell}{2}+1}}{8}O_{\ell,\ell}. \tag{114}\] For instance, for \(\mathcal{Q}_{4}\) we have \[\mathcal{Q}_{4}=2^{4}\left(O_{0,4}-\frac{1}{4}O_{2,4}\right). \tag{115}\] This choice of basis then means that at one-loop we get: \[\left\langle\mathcal{Q}_{\ell}\mathcal{Q}_{\ell^{\prime}}\right\rangle=\frac{d_{G}}{512} \left(\pi_{0}A_{0}^{\ell,\ell^{\prime}(0)}(p^{2})+\delta_{\ell\ell^{\prime}} \pi_{\ell}A_{\ell}^{\ell,\ell^{\prime}(0)}(p^{2})\right). \tag{116}\] However, note that at higher loops, all other projection channels appear again.

### One- and two-loop with unitarity method

Here, we will illustrate how to use unitarity methods to calculate the imaginary part of one- and two-loop correlation functions of spin-2 and spin-4 operators following section 2.2.
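Before assembling the phase-space integrals, the Chebyshev decomposition (112) used in this basis choice can be spot-checked numerically. A small Python sketch (our own illustration; the spin value is arbitrary):

```python
import numpy as np
from math import comb, gamma
from numpy.polynomial.chebyshev import chebval

# Numerically verify the sin^(2n) decomposition (112) for n = 3.
n = 3
phi = np.linspace(0.1, 3.0, 7)
lhs = np.sin(phi) ** (2 * n)
rhs = -gamma(0.5 + n) / (np.sqrt(np.pi) * gamma(n + 1.0)) * np.ones_like(phi)
for j in range(n + 1):
    T2j = np.zeros(2 * j + 1)
    T2j[-1] = 1.0          # coefficient array selecting the Chebyshev polynomial T_{2j}
    rhs += 2.0 ** (1 - 2 * n) * (-1) ** j * comb(2 * n, n - j) * chebval(np.cos(phi), T2j)
print(np.max(np.abs(lhs - rhs)))   # ~ 1e-16
```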
For one-loop, the needed phase space integrals are: \[\begin{split}\text{Disc}\left(\langle\mathcal{Q}_{\ell}\mathcal{Q}_{\ell^ {\prime}}\rangle\right)&=\frac{-id_{G}}{2!}(-1)^{\frac{\ell+\ell^ {\prime}}{2}}2^{-\ell-\ell^{\prime}}\int\frac{d^{2}p_{1}}{(2\pi)^{2}2E_{1}} \frac{d^{2}p_{2}}{(2\pi)^{2}2E_{2}}(2\pi)^{3}\delta^{3}(p-p_{1}-p_{2})\\ &\times(v_{1}\cdot p)^{\ell}(T_{\ell}(\cos\phi_{1})-1)(v_{2}\cdot p )^{\ell^{\prime}}\big{(}T_{\ell^{\prime}}(\cos\phi_{2})-1\big{)},\end{split} \tag{111}\] where we have used eq. (109) for the form factors inside the integral. We can then perform the phase space integral as illustrated in section 2.2 to obtain the correlation functions of any spin \(\ell\) and \(\ell^{\prime}\). The coefficients are simple to obtain for general spins: \[A_{0}^{\ell,\ell^{\prime}(0)}=2(p^{2})^{\frac{\ell+\ell^{\prime}-1}{2}} \quad A_{\ell}^{\ell,\ell^{\prime}(0)}=(p^{2})^{\frac{\ell+\ell^{ \prime}-1}{2}}. \tag{112}\] For example, the matrix-valued correlator of spin-2 and spin-4 operators can be written as: \[\begin{split}\begin{pmatrix}(p^{2})^{-2}\langle\mathcal{Q}_{2}\mathcal{ Q}_{2}\rangle&(p^{2})^{-3}\langle\mathcal{Q}_{2}\mathcal{Q}_{4}\rangle\\ (p^{2})^{-3}\langle\mathcal{Q}_{4}\mathcal{Q}_{2}\rangle&(p^{2})^{-4}\langle\mathcal{Q}_{4} \mathcal{Q}_{4}\rangle\end{pmatrix}\bigg{|}_{\text{1-loop}}=\frac{d_{G}}{512\sqrt{p ^{2}}}\Bigg{(}\pi_{0}\underbrace{\begin{pmatrix}2&2\\ 2&2\end{pmatrix}}_{M_{0}^{(0)}}+\pi_{2}\underbrace{\begin{pmatrix}1&0\\ 0&0\end{pmatrix}}_{M_{2}^{(0)}}+\pi_{4}\underbrace{\begin{pmatrix}0&0\\ 0&1\end{pmatrix}}_{M_{4}^{(0)}}\Bigg{)}.\end{split} \tag{113}\] This takes care of the one-loop analysis. Next, we examine these correlators at two-loops. As discussed in section 2.2, the only ingredient we need is the on-shell form factor \(\langle p_{1}^{g}p_{2}^{g}p_{3}^{g}|\mathcal{Q}_{\ell}|0\rangle\), as the other two cuts in fig. 3 cancel each other via the same argument presented in that section. To find this form factor, we use universality of the collinear and soft limits in the theory in addition to Bose symmetry. Basically, we first obtain the universal splitting factor appearing in the collinear limit by taking \(p_{1}\) and \(p_{2}\) to be parallel in the stress-tensor form factor \(\langle p_{1}^{g}p_{2}^{g}p_{3}^{g}|T^{\mu\nu}|0\rangle\) (eq. (2.22)) and comparing with \(\langle p_{2}^{g}p_{3}^{g}|T^{\mu\nu}|0\rangle\) in eq. (2.16). This yields the following splitting factor: \[\text{SP}=\frac{2g_{s}}{\sqrt{z(1-z)}\langle 12\rangle}(1-z+z^{2}).
\tag{114}\] Using this splitting factor, we obtain the three-gluon form factors: \[\begin{split}\langle p_{1}^{g}p_{2}^{g}p_{3}^{g}|\mathcal{Q}_{2} |0\rangle=&\frac{-16(p_{1}\cdot v)^{2}(p_{2}\cdot p_{3})+8p^{2}(p_{1} \cdot v)(p_{2}\cdot v)+16(p_{1}\cdot p_{2})(v\cdot p_{1})(p_{1}\cdot p_{2})+ \text{perms}}{\langle 12\rangle\langle 23\rangle\langle 31\rangle},\\ \langle p_{1}^{g}p_{2}^{g}p_{3}^{g}|\mathcal{Q}_{4}|0\rangle=& \frac{32\sqrt{2}}{\langle 12\rangle\langle 23\rangle\langle 31\rangle}\Big{(}-s_{23}(p_{1}\cdot v)^{4}+2s_{12 }(p_{2}\cdot v)(p_{1}\cdot v)^{3}+s_{13}(p_{2}\cdot v)(p_{1}\cdot v)^{3}\\ &+7s_{23}(p_{2}\cdot v)(p_{1}\cdot v)^{3}-4s_{12}(p_{2}\cdot v)^{ 2}(p_{1}\cdot v)^{2}-7s_{13}(p_{2}\cdot v)^{2}(p_{1}\cdot v)^{2}-s_{12}\\ &\times(p_{2}\cdot v)(p_{3}\cdot v)(p_{1}\cdot v)^{2}+4s_{23}(p_{ 2}\cdot v)(p_{3}\cdot v)(p_{1}\cdot v)^{2}+\text{perms}\Big{)}.\end{split} \tag{115}\] The three-gluon form factors are used to obtain the non-analytic part of the two-loop correction to the correlation functions of \(\mathcal{Q}_{2}\) and \(\mathcal{Q}_{4}\) through the phase space integral explained in appendix C.2. We will postpone writing the explicit results of this calculation to the next section (eqs. (107) and (108)), in which we do the one- and two-loop calculations using Feynman diagrams to obtain both the analytic and non-analytic parts.

### Two-loop higher-spin correlators

Now that we have fixed the basis for higher-spin operators, we use the Feynman diagram approach to obtain the two-loop contributions to the correlation functions of higher-spin operators. The calculation is almost identical to that in section 2.3: one replaces the vertices associated to the stress-tensor with the vertices generated by equations (101), (102) and (103). Unlike in section 2 where we only had even spin structures, these higher-spin operators couple to both even and odd spin states. The two-loop corrections to the correlation functions of \(\mathcal{Q}_{2}\) and \(\mathcal{Q}_{4}\) in eq. (106) are: \[\begin{pmatrix}(p^{2})^{-2}\langle\mathcal{Q}_{2}\mathcal{Q}_{2}\rangle&(p^{2} )^{-3}\langle\mathcal{Q}_{2}\mathcal{Q}_{4}\rangle\\ (p^{2})^{-3}\langle\mathcal{Q}_{4}\mathcal{Q}_{2}\rangle&(p^{2})^{-4}\langle \mathcal{Q}_{4}\mathcal{Q}_{4}\rangle\end{pmatrix}\bigg{|}_{2\text{-loop}}= \frac{d_{G}g_{s}^{2}C_{A}}{512p^{2}}\left(\frac{\bar{\mu}^{2}}{p^{2}}\right)^{ 2\varepsilon}\sum_{J=0}^{4}\pi^{J}M_{J}^{(1)}, \tag{107}\] where \[M_{0}^{(1)}=\begin{pmatrix}-\frac{1}{4}-\frac{4}{3\pi^{2}}-\frac {4}{3\pi^{2}\varepsilon}&-\frac{1}{4}-\frac{272}{525\pi^{2}}-\frac{16}{15\pi^ {2}\varepsilon}\\ -\frac{1}{4}-\frac{272}{525\pi^{2}}-\frac{16}{15\pi^{2}\varepsilon}&-\frac{1} {4}-\frac{14833664}{10735725\pi^{2}}-\frac{768}{715\pi^{2}\varepsilon}\end{pmatrix}, \tag{108}\] \[M_{1}^{(1)}=\begin{pmatrix}0&0\\ 0&-\frac{4632064}{5010005\pi^{2}}+\frac{1536}{1001\pi^{2}\varepsilon}\end{pmatrix},\] \[M_{4}^{(1)}=\begin{pmatrix}0&0\\ 0&-2+\frac{4487877952}{225450225\pi^{2}}+\frac{158272}{45045\pi^{2}\varepsilon }\end{pmatrix}.\] We emphasize that the non-analytic part of (107) was cross-checked by a unitarity computation.

### Superconvergent Combinations

In section 2.4 we introduced the "superconvergent" combination for the stress-tensor 2-point function, which is well behaved non-perturbatively in the \(p^{2}\to 0\) limit and well-suited for the application of dispersive sum-rules in section 3. In this section, we demonstrate the existence of such combinations for spinning correlation functions.
We focus on the spin-2 and spin-4 operators, for which we use the two-loop perturbative result obtained in the previous subsection inside Borel sum-rules to extract crude estimates of their coupling to the lowest-lying spin-0 particle. This analysis is parallel to the analysis in section 3.5. By following the argument in section 2.4, we see that the superconvergent combination for the correlator \(\langle O_{\ell}O_{\ell^{\prime}}\rangle\) is the coefficient of \(p^{\mu_{1}}\dots p^{\mu_{\ell}}p^{\nu_{1}}\dots p^{\nu_{\ell^{\prime}}}\). From eq. (102) it can be seen that this coefficient is given by: \[\mathcal{A}_{\ell,\ell^{\prime}}(p)=\sum_{j=0}^{\text{min}\ (\ell,\ell^{\prime})}T_{j}(-1)\frac{A_{j}^{\ell,\ell^{\prime}}(p)}{(p^{2})^{ \frac{\ell+\ell^{\prime}}{2}}}=\sum_{j=0}^{\text{min}\ (\ell,\ell^{\prime})}(-1)^{j}\frac{A_{j}^{\ell,\ell^{\prime}}(p)}{(p^{2})^{ \frac{\ell+\ell^{\prime}}{2}}}. \tag{109}\] Thus, the resulting superconvergent version of the matrix \(\big{\langle}\mathcal{Q}_{\ell}\mathcal{Q}_{\ell^{\prime}}\big{\rangle}\) to two-loops in perturbation theory is \[\begin{pmatrix}\mathcal{A}_{2,2}^{\text{pert}}&\mathcal{A}_{2,4}^{ \text{pert}}\\ \mathcal{A}_{4,2}^{\text{pert}}&\mathcal{A}_{4,4}^{\text{pert}}\end{pmatrix} =\begin{pmatrix}3&2\\ 2&3\end{pmatrix}\frac{1}{\sqrt{p^{2}}}-\begin{pmatrix}\frac{5}{4}-\frac{16}{3 \pi^{2}}&\frac{1}{4}+\frac{1216}{315\pi^{2}}\\ \frac{1}{4}+\frac{1216}{315\pi^{2}}&\frac{9}{4}-\frac{603392}{135135\pi^{2}} \end{pmatrix}\frac{g_{s}^{2}C_{A}}{p^{2}}+\mathcal{O}\left(\frac{1}{(p^{2})^{3/2 }}\right),\] \[\approx\begin{pmatrix}3&2\\ 2&3\end{pmatrix}\frac{1}{\sqrt{p^{2}}}-\begin{pmatrix}0.710&0.641\\ 0.641&1.798\end{pmatrix}\frac{g_{s}^{2}C_{A}}{p^{2}}. \tag{100}\] We see that in all of these equations the logarithms vanish, as anticipated. Like in section 3, the superconvergent two-point functions inherit a Kallen-Lehmann spectral representation from the Kallen-Lehmann representation of the \(\big{\langle}\mathcal{Q}_{\ell}\mathcal{Q}_{\ell^{\prime}}\big{\rangle}\): \[\rho_{2,2}(s) =\sum_{i=1}^{N}\frac{2\pi g_{i}^{2}}{m_{i}^{4}} \,\delta\left(s-m_{i}^{2}\right)+H(s)\,\Theta\left(s-4m_{1}^{2}\right), \tag{101}\] \[\rho_{2,4}(s) =\sum_{i=1}^{N}\frac{2\pi g_{2,4;i}^{2}}{m_{i}^{6}}\,\delta\left( s-m_{i}^{2}\right)+H_{2,4}(s)\,\Theta\left(s-4m_{1}^{2}\right),\] (102) \[\rho_{4,4}(s) =\sum_{i=1}^{N}\frac{2\pi g_{4,4;i}^{2}}{m_{i}^{8}}\,\delta\left( s-m_{i}^{2}\right)+H_{4,4}(s)\,\Theta\left(s-4m_{1}^{2}\right), \tag{103}\] where \[H_{i,j}(s)=\frac{1}{2i}\,\text{Disc}\,\mathcal{A}_{i,j}^{\text{pert}}(-s) \Big{|}_{s>4m_{1}^{2}}. \tag{104}\] Note that due to contributions from odd-spin glueballs, the spectral density \(\rho_{4,4}\) is not guaranteed to be positive. This is because odd-spin glueballs contribute with the wrong sign. Requiring that the Borel transforms of \(\mathcal{A}_{i,j}^{\text{pert}}\) and \(\mathcal{A}_{i,j}^{\text{non-pert}}\) match asymptotically, \[\left[\hat{\mathcal{A}}_{i,j}^{\text{non-pert}}(M^{2})-\hat{\mathcal{A}}_{i,j }^{\text{pert}}(M^{2})\right]_{M^{2}\to\infty}=\mathcal{O}\left(\frac{1}{(M^{ 2})^{3/2}}\right), \tag{105}\] places constraints on the model parameters.
Explicitly, \[0=2\sum_{i=1}^{\infty}\frac{g_{2,2;i}^{2}}{m_{i}^{4}}-\frac{12m_{1}}{\pi}+\left(\frac{5}{4}-\frac{16}{3\pi^{2}}\right)(g_{s}^{2}C_{A}), \tag{106}\] \[0=2\sum_{i=1}^{\infty}\frac{g_{2,4;i}^{2}}{m_{i}^{6}}-\frac{8m_{1}}{\pi}+\left(\frac{1}{4}+\frac{1216}{315\pi^{2}}\right)(g_{s}^{2}C_{A}), \tag{107}\] \[0=2\sum_{i=1}^{\infty}\frac{g_{4,4;i}^{2}}{m_{i}^{8}}-\frac{12m_{1}}{\pi}+\left(\frac{9}{4}-\frac{603392}{135135\pi^{2}}\right)(g_{s}^{2}C_{A}). \tag{108}\] For a very crude approximation of the residues, one can neglect all couplings with \(i>1\) and solve the above equations for the couplings of the lightest glueball \(m_{1}\). Such an approximation is crude because, as seen in figure 12, the asymptotic contributions of the \(m_{1}\) and \(m_{2}\) glueballs are comparable. Thus, it is questionable whether we can neglect the \(m_{2}\) glueball.

## Conclusions

The basic idea of QCD sum-rules is the notion that the spectral density is well-approximated by a sum of delta-function(s) and a perturbatively calculable continuum. In this work, we have tested this notion for three-dimensional Yang-Mills theory. In section 2, we calculated the stress-tensor two-point function to three loops (\(\sim\alpha_{s}^{2}\)) and extracted from it a perturbative approximation to the spectral density above the continuum threshold (section 3). This was then used to construct a model for the non-perturbative spectral density that, in turn, defines the non-perturbative stress-tensor two-point function. The masses and couplings (to the stress-tensor) of the first two glueballs in the spectrum were estimated by analyzing the Borel transforms of the perturbative and non-perturbative stress-tensor two-point functions. While our estimates are not rigorous, there exists a reasonable range of parameters in the non-perturbative model where one finds stable results that are within \(14-19\%\) of the lattice data. Here, it was important to work with the Borel transform of the two-point functions in order to improve the convergence of the perturbative expansion. It was also crucial to combine the spin-0 and spin-2 parts (\(A_{0}\) and \(A_{2}\)) of the stress-tensor two-point function into a "superconvergent" sum (27). Otherwise, we would have had to use a subtracted dispersion relation that removes the connection between the pole (glueball) and cut (continuum) contributions; the Borel transform of such a subtracted dispersion relation kills the first term in perturbation theory, increasing the sensitivity to non-perturbative condensates. The existence of this superconvergent combination is tied to the spin of the stress tensor. For similar reasons, there also exist superconvergent dispersion relations for scattering amplitudes of spinning particles [45; 46]. In principle, it would be possible to extend our analysis of the stress-tensor correlator to four loops, since the non-perturbative condensate \(\langle 0|F^{2}|0\rangle\) does not appear in the superconvergent combination. Even if the condensate did appear, this would not necessarily be a showstopper, since the \(\overline{\text{MS}}\) condensate has been extracted from a combination of lattice and perturbative techniques [26; 27]. Furthermore, one could possibly bound the lattice-regularized condensate using the bootstrap techniques of [47; 48]. 
We also showed that it is mathematically possible to find positive spectral densities that display the correct asymptotic behavior at large energies and that are compatible with essentially any mass spectrum and residue strength (section 4). Finally, anticipating applications to higher moments of hadron wavefunctions (form factors of higher-spin lowest-twist operators), we also verified that superconvergent combinations of higher-spin operators exist (section 5). While we provide a crude method for approximating the higher-spin residues, we leave the analysis of higher-spin sum-rules to future work. Our results in three-dimensional Yang-Mills theory add numerical evidence to the effect that the Borel transform of perturbation theory can give a reasonable approximation to continuum spectral densities at finite energy, even when using a finite number of terms. This is similar to what has long been observed in the QCD context. Of course, it has never been clear how to rigorously justify this approximation, and we do not claim to have ameliorated this state of affairs.

###### Acknowledgements.

A.P. is grateful for support provided by the Natural Sciences and Engineering Research Council of Canada and the Fonds de Recherche du Québec -- Nature et Technologies. A.P. is also supported by the Simons Investigator Award #376208 of A. Volovich. S.C.H.'s work is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by the Canada Research Chair program, reference number CRC-2022-00421. S.C.H.'s work is additionally supported by a Simons Fellowship in Theoretical Physics and by the Simons Collaboration on the non-perturbative Bootstrap. Z.Z. is funded by the Fonds de Recherche du Québec -- Nature et Technologies, and the Simons Foundation through the Simons Collaboration on the non-perturbative Bootstrap. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement number 949077).

## Appendix A \(d\)-dimensional form factors

In this appendix, we present the \(d\)-dimensional stress-tensor two-point functions up to three loops. At three loops, not all master integrals are known in closed form for generic \(d\). However, once \(d\) is fixed, these integrals can be computed via dimensional recurrence [41].

### One-loop

The one-loop two-point functions for generic dimension are \[A_{0}^{(1)}(p^{2};d)=\frac{512}{8(d-1)^{2}}\left[(d-4)^{2}(d-2)p^{4}I_{1}^{(1)}(p^{2};d)\right], \tag{110}\] \[A_{2}^{(1)}(p^{2};d)=\frac{512}{8(d-1)(d+1)}\left[(2d^{2}-3d-8)p^{4}I_{1}^{(1)}(p^{2};d)\right], \tag{111}\] where \(I_{1}^{(1)}\) is the scalar bubble integral and the normalization is determined by equation (10). While the bubble integral is trivial, we quote it here so that our conventions are explicit \[I_{1}^{(1)}(p^{2};d)\equiv B_{1,1}(p^{2};d) \tag{112}\] where \[B_{a,b}(k^{2};d)=\int\frac{\mathrm{d}^{d}\ell}{i(2\pi)^{d}}\frac{1}{\left[\ell^{2}\right]^{a}\left[\left(\ell+k\right)^{2}\right]^{b}}=\frac{1}{\left(4\pi\right)^{\frac{d}{2}}}\frac{\Gamma\left(a+b-\frac{d}{2}\right)\Gamma\left(\frac{d}{2}-a\right)\Gamma\left(\frac{d}{2}-b\right)}{\Gamma\left(a\right)\Gamma\left(b\right)\Gamma\left(d-a-b\right)}\left(k^{2}\right)^{\frac{d}{2}-\left(a+b\right)}. \tag{113}\] With the exception of the three-loop master integrals \(I_{1}^{(3)}\) and \(I_{2}^{(3)}\), all other two- and three-loop master integrals can be computed in closed form from recursive use of (113). 
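As a quick consistency check of eq. (113) (a minimal sketch, not part of the paper's toolchain), the bubble can be coded symbolically; in \(d=3\) it reduces to \(1/(8\sqrt{k^{2}})\):

```python
import sympy as sp

k2 = sp.symbols('k2', positive=True)

def bubble(a, b, d, k2):
    """Massless one-loop bubble B_{a,b}(k^2; d), eq. (113)."""
    prefactor = (4 * sp.pi) ** (-d / 2)
    gammas = (sp.gamma(a + b - d / 2) * sp.gamma(d / 2 - a)
              * sp.gamma(d / 2 - b)
              / (sp.gamma(a) * sp.gamma(b) * sp.gamma(d - a - b)))
    return prefactor * gammas * k2 ** (d / 2 - (a + b))

# In d = 3 the bubble reduces to 1/(8*sqrt(k2)):
print(sp.simplify(bubble(1, 1, sp.Integer(3), k2)))
```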
### Two-loops

The two-loop contributions to the stress-tensor two-point functions in generic dimension are \[A_{0}^{(2)}(p^{2};d)=\frac{512(g_{s}^{2}C_{A})}{8(d{-}1)^{2}}\Bigg{[}(d{-}4)\left(d^{3}{-}16d^{2}{+}68d{-}88\right)p^{4}I_{1}^{(2)}(p^{2};d) \tag{100}\] \[\qquad-\frac{16}{3}\left(4d^{3}{-}33d^{2}{+}94d{-}92\right)p^{2}I_{2}^{(2)}(p^{2};d)\Bigg{]}\] \[A_{2}^{(2)}(p^{2};d)=-\frac{512(g_{s}^{2}C_{A})}{8(d{-}1)(d{+}1)}\Bigg{[}\frac{8(d^{4}{-}8d^{3}{+}16d^{2}{+}20d{-}68)p^{4}}{(d{-}4)(d{-}2)}I_{1}^{(2)}(p^{2};d) \tag{101}\] \[\qquad+\frac{8(13d^{5}{-}129d^{4}{+}462d^{3}{-}572d^{2}{-}376d{+}1088)p^{2}}{3(d{-}4)^{2}(d{-}2)}I_{2}^{(2)}(p^{2};d)\Bigg{]}\] where the master integrals \(I_{1}^{(2)}\) and \(I_{2}^{(2)}\) are defined diagrammatically in eq. (102) (the diagrams are not reproduced in this extraction). These master integrals are easily evaluated by repeated use of (100).

### Three-loops

The three-loop contributions to the stress-tensor two-point functions in generic dimension are \[A_{0}^{(3)}(p^{2};d)=\frac{512(g_{s}^{2}C_{A})^{2}}{8(d{-}1)^{2}}\Bigg{[}{-}\frac{3(d{-}4)^{2}(d{-}3)(d{-}2)(3d{-}8)p^{8}I_{1}^{(3)}(p^{2};d)}{4(2d{-}7)(2d{-}5)} \tag{103}\] \[-\frac{(d^{3}{-}16d^{2}{+}68d{-}88)^{2}p^{4}I_{3}^{(3)}(p^{2};d)}{d{-}2}\] \[+\Big{(}657d^{7}{-}11454d^{6}{+}85564d^{5}{-}354832d^{4}{+}880176d^{3}{-}1299616d^{2}\] \[\qquad+1048384d{-}350976\Big{)}\frac{p^{2}I_{5}^{(3)}(p^{2};d)}{2(d{-}4)(d{-}2)(d{-}1)(2d{-}5)}\] \[-\Big{(}108d^{8}{-}2661d^{7}{+}28822d^{6}{-}177546d^{5}{+}674735d^{4}{-}1607602d^{3}\] \[\qquad+2325996d^{2}{-}1848920d{+}607968\Big{)}\frac{p^{4}I_{2}^{(3)}(p^{2};d)}{2(d{-}2)(d{-}1)(2d{-}7)(2d{-}5)}\] \[+\Big{(}192d^{10}{-}6947d^{9}{+}105470d^{8}{-}907248d^{7}{+}4958664d^{6}{-}18113645d^{5}\] \[\qquad+44930982d^{4}{-}74791460d^{3}{+}79854504d^{2}{-}49204128d\] \[\qquad+13194496\Big{)}\frac{p^{2}I_{4}^{(3)}(p^{2};d)}{(d{-}4)(d{-}3)(d{-}2)(d{-}1)(2d{-}7)(2d{-}5)}\] \[+\Big{(}162d^{11}{-}5487d^{10}{+}87553d^{9}{-}858385d^{8}{+}5673221d^{7}{-}26253008d^{6}\] \[\qquad+86068824d^{5}{-}198637272d^{4}{+}314636144d^{3}{-}324171296d^{2}\] \[\qquad+194410240d{-}50999296\Big{)}\frac{I_{6}^{(3)}(p^{2};d)}{(d{-}4)^{2}(d{-}3)^{2}(d{-}2)(d{-}1)(2d{-}7)}\Bigg{]}\] and \[A_{2}^{(3)}(p^{2};d)=\frac{512(g_{s}^{2}C_{A})^{2}}{8(d{-}1)(d{+}1)}\bigg{[}-\frac{\big{(}16d^{5}{-}149d^{4}{+}397d^{3}{+}142d^{2}{-}1832d{+}1696\big{)}\,p^{8}I_{1}^{(3)}(p^{2};d)}{4(d{-}2)(2d{-}7)(2d{-}5)}\] \[-\frac{8\big{(}4d^{8}{-}62d^{7}{+}371d^{6}{-}939d^{5}{+}128d^{4}{+}4260d^{3}{-}7712d^{2}{+}3584d{+}384\big{)}\,p^{4}I_{3}^{(3)}(p^{2};d)}{(d{-}4)^{2}(d{-}2)^{2}(d{-}1)d}\] \[+\Big{(}1042d^{10}{-}19207d^{9}{+}147122d^{8}{-}588708d^{7}{+}1199632d^{6}{-}543184d^{5}{-}3040032d^{4}\] \[+7331904d^{3}{-}7007488d^{2}{+}2514944d{+}24576\Big{)}\frac{p^{2}I_{5}^{(3)}(p^{2};d)}{2(d{-}4)^{3}(d{-}2)^{2}(d{-}1)d(2d{-}5)}\] \[-\Big{(}1680d^{12}{-}43447d^{11}{+}499154d^{10}{-}3324848d^{9}{+}13961672d^{8}{-}36985777d^{7}\] \[+54553314d^{6}{-}11375804d^{5}{-}120445352d^{4}{+}236351744d^{3}{-}195105152d^{2}\] \[+60414976d{+}1720320\Big{)}\frac{p^{2}I_{4}^{(3)}(p^{2};d)}{(d{-}4)^{3}(d{-}3)(d{-}2)^{2}(d{-}1)d(2d{-}7)(2d{-}5)}\] \[-\Big{(}72d^{12}{-}1812d^{11}{+}24945d^{10}{-}234230d^{9}{+}1498316d^{8}{-}6288301d^{7}{+}16330266d^{6}\] \[-22168812d^{5}{+}1696440d^{4}{+}41289728d^{3}{-}55822976d^{2}{+}25437184d{-}1720320\Big{)}\] \[\times\frac{p^{4}I_{2}^{(3)}(p^{2};d)}{2(d{-}4)^{2}(d{-}2)^{2}(d{-}1)d(2d{-}7)(2d{-}5)(3d{-}8)}\] \[-\Big{(}432d^{15}{-}27558d^{14}{+}582633d^{13}{-}6158463d^{12}{+}35473743d^{11}{-}89675899d^{10}\] \[-197920872d^{9}{+}2586125488d^{8}{-}10618482072d^{7}{+}25226597520d^{6}\] 
\[-36849379104d^{5}{+}30607655680d^{4}{-}9263259648d^{3}{-}4995555328d^{2}\] \[+3991977984d{-}421134336\Big{)}\frac{I_{6}^{(3)}(p^{2};d)}{3(d{-}4)^{4}(d{-}3)^{2}(d{-}2)^{2}(d{-}1)d(2d{-}7)(3d{-}8)}\bigg{]}\] (A.9) where the master integrals \(I_{1}^{(3)},\dots,I_{6}^{(3)}\) are defined diagrammatically (the corresponding diagrams are not reproduced in this extraction). While the \(\varepsilon\)-expansions of \(I_{1}^{(3)}\) and \(I_{2}^{(3)}\) are known near four dimensions [38; 40], we must compute these expansions from scratch near three dimensions, since certain simplifications present in four dimensions are _not_ present in three dimensions. We use the method of dimensional recurrence [41] (reviewed in appendix B) to find \(d\)-dimensional formulas for \(I_{1}^{(3)}\) and \(I_{2}^{(3)}\). In three dimensions, these integrals simplify to \[\begin{split}& I_{1}^{(3)}(p^{2})\underset{d\to 3}{=}-\frac{(2\pi^{2}-39)}{192\pi^{2}(p^{2})^{7/2}},\\ & I_{2}^{(3)}(p^{2})\underset{d\to 3}{=}\frac{1}{512(p^{2})^{3/2}}.\end{split} \tag{115}\] Since the coefficients of \(I_{1}^{(3)}\) and \(I_{2}^{(3)}\) in equations (111) and (112) are finite in the limit \(d\to 3\), the above formulas are sufficient for determining the three-loop contributions to \(A_{0}\) and \(A_{2}\).

## Appendix B Computing \(I_{1}^{(3)}\) and \(I_{2}^{(3)}\) from dimensional recurrence

In this appendix, we provide a short overview of the method of dimensional recurrence (section B.1) and provide formulas to compute \(I_{1}^{(3)}\) and \(I_{2}^{(3)}\) in any dimension (section B.2).

### Dimensional recurrence and analyticity in \(d\)

In this short review of the method of dimensional recurrence and analyticity in \(d\) [41], we keep the discussion general. We specialize to the integral family relevant to \(I_{1}^{(3)}\) and \(I_{2}^{(3)}\) in section B.2. Suppose that we are given a family of Feynman integrals \(\mathbf{I}\) that is closed under IBP relations. Then, this family satisfies the following dimensional recurrence relation \[\mathbf{I}(d+2)=\underline{\mathbf{R}}(d)\cdot\mathbf{I}(d). \tag{116}\] Additionally, all Feynman integrals have the following projective parametric representation7 Footnote 7: By projective, we mean that \(I\) is invariant under the rescaling of \(\mathbf{x}\): \(\mathbf{x}\to\lambda\mathbf{x}\). \[I=\int\left(\prod_{i=1}^{L}\frac{\mathrm{d}^{d}\ell_{i}}{i(2\pi)^{d}}\right)\!\!\left(\prod_{j=1}^{N}\frac{1}{D_{j}^{n_{j}}}\right)=\Gamma(\omega)\int_{(\mathbb{R}^{+})^{N}}\mathrm{d}^{N}\mathbf{x}\!\left(\prod_{i=1}^{N}\frac{x_{i}^{n_{i}-1}}{\Gamma(n_{i})}\right)\frac{\delta(1-h(\mathbf{x}))}{\mathcal{U}^{\frac{d}{2}-\omega}\mathcal{F}^{\omega}} \tag{117}\] where \(\omega(d)=|\mathbf{n}|-\frac{Ld}{2}\), \(x_{i}\) is the Schwinger parameter associated with the propagator \(D_{i}\), \(h(\mathbf{x})\) is any degree-1 homogeneous polynomial, and \(\mathcal{U}\) and \(\mathcal{F}\) are the first and second Symanzik polynomials. Using the projective representation (117), one can bound the large-imaginary-\(d\) limit of a Feynman integral \[|I(d)|\lesssim\text{const.}\times|\text{Im}\,d|^{\omega(\text{Re}\,d)-\frac{1}{2}}e^{-\frac{\pi}{4}L\,|\text{Im}\,d|}. \tag{118}\] Then, using the above bound and provided that there exists a strip \(S=\{d\in\mathbb{C}\,|\,d_{\text{min}}<\text{Re}\,d<d_{\text{max}}\}\) that is known to be free from poles, the homogeneous solution to the recurrence relation (116) can be constructed. 
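As a one-loop illustration of such a recurrence (a worked example, not one of the integrals of the family used below), the bubble (113) obeys a scalar relation whose shift "matrix" follows directly from its \(\Gamma\)-functions: \[B_{1,1}(k^{2};d+2)=-\frac{k^{2}}{8\pi(d-1)}\,B_{1,1}(k^{2};d),\] consistent, e.g., with \(B_{1,1}(k^{2};3)=1/(8\sqrt{k^{2}})\) and \(B_{1,1}(k^{2};5)=-\sqrt{k^{2}}/(128\pi)\).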
The first step to solve the recurrence relation (145) is to define the so-called summing factors \(\mathbf{\Sigma}(d)\) such that \[\frac{\Sigma_{i}(d+2)}{\Sigma_{i}(d)}=R_{ii}(d). \tag{146}\] Then, defining the rescaled integrals \(J_{i}(d)=I_{i}(d)/\Sigma_{i}(d)\) and \(r_{i}(d)=\sum_{j\neq i}R_{ij}(d)I_{j}(d)/\Sigma_{i}(d+2)\), the recurrence relation (145) becomes \[\mathbf{J}(d+2)=\mathbf{J}(d)+\mathbf{r}(d). \tag{147}\] The general solution to (147) consists of a homogeneous and an inhomogeneous solution \(\mathbf{J}(d)=\mathbf{J}_{\text{hom}}(d)+\mathbf{J}_{\text{inhom}}(d)\). The homogeneous solution \(\mathbf{J}_{\text{hom}}(d)=\mathbf{f}(d)\) can be any function periodic in \(d\) with period 2: \(\mathbf{f}(d+2)=\mathbf{f}(d)\). Since the product \(f_{i}(d)\Sigma_{i}(d)\) must obey the bound (144), choosing \(\Sigma_{i}\) such that it comes as close as possible to saturating this bound maximally constrains the form of \(f_{i}\). In particular, it is always possible to find a \(\Sigma_{i}\) such that (144) forces \(|f(d)|<|\text{Im}\,d|^{\nu}e^{\pi|\text{Im}\,d|}\) for some \(\nu\). Then, the only 2-periodic functions of \(d\) that satisfy this bound are powers of \(\cot\) (or \(\tan\)). Thus, \(f_{i}\) has the following form \[f_{i}(d)=b_{i0}+\sum_{j=1}^{n_{i}}\sum_{k=1}^{L}b_{ijk}\cot^{k}\left(\frac{\pi}{2}(d-q_{ij})\right) \tag{148}\] where the \(q_{ij}\) are poles that appear in \(J_{i,\text{inhom}}\), \(n_{i}\) is the number of distinct \(q_{ij}\), and \(L\) is the maximal order of any pole. Then, the \(b_{ijk}\)'s are fixed by requiring that \(I_{i}\) is free from all poles in the strip \(S_{i}\). Sometimes, this requirement will not fix all \(b_{ijk}\)'s and one has to generate additional conditions. Additional conditions can be generated by relating \(I_{i}\) to \(\tilde{I}_{i}\) via an IBP relation and then requiring that \(\tilde{I}_{i}\) is pole free in its strip \(\tilde{S}_{i}\). For example, squaring all propagators defines an integral with a larger finite strip. Since this new integral is related to the old integral via an IBP relation, requiring that the new integral is free from all poles in its enlarged strip may impose new constraints on the old integral. To obtain the inhomogeneous solution, we split \(r_{i}(d)\) into two pieces \(r_{i}(d)=r_{i}^{+}(d)+r_{i}^{-}(d)\), where \(r_{i}^{+}(d+2k)\sim a^{k}\) and \(r_{i}^{-}(d-2k)\sim a^{k}\) in the large-\(k\) limit with \(0<a<1\). Then, the inhomogeneous solution \(\mathbf{J}_{\text{inhom}}(d)=\mathbf{g}(d)\) becomes \[g_{i}(d)=\sum_{k=0}^{\infty}r_{i}^{+}(d+2k)+\sum_{k=1}^{\infty}r_{i}^{-}(d-2k). \tag{149}\] Since each term in the sum is suppressed by some \(a^{k}\), this series converges exponentially. While each integral in the family usually contributes only to \(r^{+}\) or \(r^{-}\), sometimes it is necessary to split an integral into two pieces (this will be the case for \(I_{1}^{(3)}\)). This method expresses integrals in terms of (nested) sums that converge rapidly. In practice, one computes these sums numerically to many digits and then applies the PSLQ algorithm to recover analytic results.

### Computing \(I_{1}^{(3)}\) and \(I_{2}^{(3)}\)

Using the formalism outlined in the previous section, we evaluate the integrals \(I_{1}^{(3)}\) and \(I_{2}^{(3)}\) for \(d=3\). Before being able to apply the methods from section B.1, we must check whether \(I_{1}^{(3)}\) and \(I_{2}^{(3)}\) have a strip of width at least two that is free from poles. 
The integral \(I_{1}^{(3)}\) has a strip of width two: \(S_{1}=\{d\in\mathbb{C}\,|\,\frac{10}{3}<\operatorname{Re}d<\frac{16}{3}\}\), where \(d=\frac{16}{3}\) is the minimal UV divergence and \(d=\frac{10}{3}\) is the maximal IR divergence. On the other hand, \(I_{2}^{(3)}\) does not have a strip of width two, since its minimal UV divergence is at \(d=4\) and its maximal IR divergence is at \(d=\frac{8}{3}\). In order to use the methods of the previous section, we replace \(I_{2}^{(3)}\) by the related integral \[\tilde{I}_{2}^{(3)}=\text{[diagram omitted in this extraction]} \tag{112}\] where each dotted propagator is squared. By selectively squaring the propagators of \(I_{2}^{(3)}\), we have enlarged the strip \(S_{2}\) to \(\tilde{S}_{2}=\{d\in\mathbb{C}\,|\,\frac{14}{3}<\operatorname{Re}d<\frac{20}{3}\}\). Once \(\tilde{I}_{2}^{(3)}\) is known, \(I_{2}^{(3)}\) is determined via the IBP relation \[I_{2}^{(3)}(p^{2};d)=\frac{16(d-5)\ \left(p^{2}\right)^{2}\tilde{I}_{2}^{(3)}(p^{2};d)}{3(d-3)(3d-14)(3d-10)\ U(d)}-\frac{4(2d-5)(3d-8)\ T(d)\ \left(p^{2}\right)^{-1}I_{6}^{(3)}(p^{2};d)}{3(d-6)^{2}(d-5)(d-4)^{2}(3d-14)(3d-10)\ U(d)}, \tag{113}\] where \[U(d)=3d^{2}-33d+92, \tag{114}\] \[T(d)=3429d^{7}-109566d^{6}+1491897d^{5}-11216508d^{4}+50262008d^{3}-134170880d^{2}+197449040d-123506880. \tag{115}\] Now, we can define a new family of integrals \[\boldsymbol{I}^{\prime}=\left(\frac{I_{1}^{(3)}}{\left(p^{2}\right)^{\omega_{1}}},\ \frac{\tilde{I}_{2}^{(3)}}{\left(p^{2}\right)^{\tilde{\omega}_{2}}},\ \frac{I_{3}^{(3)}}{\left(p^{2}\right)^{\omega_{3}}},\ \ldots,\ \frac{I_{6}^{(3)}}{\left(p^{2}\right)^{\omega_{6}}}\right) \tag{116}\] for which the formalism of section B.1 is applicable. Normalizing by \(\left(p^{2}\right)^{-\omega_{i}}\), where \(\omega_{i}\) is the superficial degree of divergence of \(I_{i}^{(3)}\), ensures that the basis \(\boldsymbol{I}^{\prime}\) is dimensionless. This family of integrals satisfies the recurrence relation \[\boldsymbol{I}^{\prime}(d+2)=\underline{\boldsymbol{R}}(d)\cdot\boldsymbol{I}^{\prime}(d), \tag{117}\] where \(\underline{\boldsymbol{R}}(d)\) is a lower-triangular \(6\times 6\) matrix. We also define the following summing factors that (almost) saturate the bound (B.3) \[\Sigma^{\prime}_{1}(d)=\frac{1}{(4\pi)^{\frac{d}{2}}}\left(\frac{7}{2}-d\right)\Gamma\left(6-2d\right)\Gamma\left(\frac{d}{2}-2\right),\] (B.14) \[\Sigma^{\prime}_{2}(d)=\frac{U(d)\ \Gamma\left(\frac{3}{2}-\frac{d}{2}\right)\Gamma\left(\frac{8}{3}-\frac{d}{2}\right)\Gamma\left(\frac{10}{3}-\frac{d}{2}\right)\sec\left(\frac{\pi}{2}d\right)}{4^{d}\ 3^{\frac{3d}{2}}\ \pi^{\frac{d}{2}}\ (d-5)}.\] (B.15) With this, the inhomogeneous solutions are given by (B.7). We remark here that \(r_{2}^{-}=0\) and \(r_{1}^{-}\) only receives a contribution from the homogeneous solution of \(I_{2}^{\prime}\). All other integrals contribute to \(r_{1}^{+}\) and \(r_{2}^{+}\). 
The final piece is the homogeneous solutions \[f_{1}^{\prime}(d)=-\frac{16\pi^{3}}{9}\Bigg{[}-15\cot^{3}\left(\frac{\pi}{2}(d-4)\right)+16\cot\left(\frac{\pi}{2}(d-4)\right)+9\cot\left(\frac{\pi}{2}(d-5)\right)-2\cot\left(\frac{\pi}{2}\left(d-\frac{10}{3}\right)\right)-2\cot\left(\frac{\pi}{2}\left(d-\frac{14}{3}\right)\right)\Bigg{]},\] (B.16) \[f_{2}^{\prime}(d)=2187\sqrt{3}\ \pi^{\frac{3}{2}}\cot\left(\frac{\pi}{2}(d-6)\right)\left(1-\cot^{2}\left(\frac{\pi}{2}(d-6)\right)\right).\] (B.17) While requiring \(I_{2}^{\prime}\) to be free of poles in the strip \(\tilde{S}_{2}\) fixes all the coefficients of \(f_{2}^{\prime}\), requiring \(I_{1}^{\prime}\) to be free from poles in \(S_{1}\) leaves one coefficient of \(f_{1}^{\prime}\) unfixed. The remaining coefficient was fixed by requiring the integral obtained by squaring all propagators of \(I_{1}^{\prime}\), which is related to \(I_{1}^{\prime}\) by IBP relations, to be free from poles in its strip. Putting all the pieces together yields expressions for \(I_{1}^{\prime}\) and \(I_{2}^{\prime}\) \[I_{i=1,2}^{\prime}(d)=\Sigma^{\prime}_{i}(d)\left[f_{i}^{\prime}(d)+g_{i}^{\prime}(d)\right].\] (B.18) For a given \(d\), the infinite sum in \(g_{i}^{\prime}\) can be truncated and evaluated numerically. Then, analytic expressions for \(I_{1}^{\prime}\) and \(I_{2}^{\prime}\) are recovered using the PSLQ algorithm. Once \(I_{1}^{\prime}\) and \(I_{2}^{\prime}\) are known for a given \(d\), we can determine the integrals we actually need \[I_{1}^{(3)}(p^{2};d)=(p^{2})^{\omega_{1}}I_{1}^{\prime}(p^{2};d),\] \[I_{2}^{(3)}(p^{2};d)=\frac{16(d-5)\ (p^{2})^{2+\tilde{\omega}_{2}}I_{2}^{\prime}(p^{2};d)}{3(d-3)(3d-14)(3d-10)\ U(d)}-\frac{4(2d-5)(3d-8)\ T(d)\ (p^{2})^{\tilde{\omega}_{2}-1}I_{6}^{(3)}(p^{2};d)}{3(d-6)^{2}(d-5)(d-4)^{2}(3d-14)(3d-10)\ U(d)}.\] (B.19) Here, we have used the IBP relation (B.9) and the definition of the primed basis (B.12). For \(d=3\), we find (A.11).

## Appendix C Ingredients for on-shell calculations

In this appendix we present further details for obtaining the results of sections 2.2 and 5.3.

### Stress-Tensor gluon form factors

In this section we discuss the derivation of eq. (2.22) using the BCFW method [36; 37]. We start by writing the form factor \(\langle p_{1}^{g}p_{2}^{g}|T^{\mu\nu}(p)|0\rangle\) in four dimensions using spinor-helicity variables, \[\langle p_{1}^{-}p_{2}^{+}|T^{\mu\nu}(p)|0\rangle^{4d}=\delta^{ab}\frac{\langle 1|^{\dot{\alpha}}\langle 1|^{\dot{\beta}}\,\langle 1|^{\dot{\gamma}}p^{\alpha}{}_{\dot{\gamma}}\,\sigma^{\mu}_{\alpha\dot{\alpha}}\,\langle 1|^{\dot{\rho}}p^{\beta}{}_{\dot{\rho}}\,\sigma^{\nu}_{\beta\dot{\beta}}}{\langle 12\rangle^{2}},\] (C.1) where \(\alpha,\beta,\gamma\), and \(\rho\) (and their dotted versions) are \(SU(2)\) indices. We can then obtain \(\langle p_{1}^{+}p_{2}^{+}p_{3}^{-}|T^{\mu\nu}(p)|0\rangle\) by shifting \(p_{3}\) and \(p_{2}\) as follows, \[|\hat{2}]=|2],\qquad|\hat{3}]=|3]+z|2],\qquad|\hat{2}\rangle=|2\rangle-z|3\rangle,\qquad|\hat{3}\rangle=|3\rangle.\] (C.2) The on-shell form factor is then given as: \[\langle p_{1}^{+}p_{2}^{+}p_{3}^{-}|T^{\mu\nu}(p)|0\rangle^{4d}=\langle\hat{P}_{12}^{+}\hat{p}_{3}^{-}|T^{\mu\nu}(p)|0\rangle\frac{1}{P_{12}^{2}}M_{3}(p_{1}^{+},\hat{p}_{2}^{+},-\hat{P}_{12}^{-})\] (C.3) where the three-gluon on-shell amplitude can be written as, \[M_{3}(p_{1}^{+},\hat{p}_{2}^{+},-\hat{P}_{12}^{-})=g_{s}f^{bcd}\frac{[1\hat{2}]^{3}}{[1\hat{P}_{12}][\hat{2}\hat{P}_{12}]}.\] (C.4) After a little manipulation, eq. 
(C.3) can be written as, \[\langle p_{1}^{+}p_{2}^{+}p_{3}^{-}|T^{\mu\nu}(p)|0\rangle^{4d}=2g_{s}f^{bcd}\,\frac{\langle 3|\langle 3|\,\langle 3|p\,\sigma^{\mu}\,\langle 3|p\,\sigma^{\nu}}{\langle 12\rangle\langle 23\rangle\langle 31\rangle}.\] (C.5) Here we omitted the \(SU(2)\) indices. To go to three dimensions, we use the relation between the 3d and 4d polarizations, i.e., \(\epsilon^{3d}=\frac{\epsilon^{+}+\epsilon^{-}}{2}\). This yields the result in eq. (2.22).

### Phase space integrals

In this section we discuss the phase-space integrals yielding the non-analytic part of the two-loop results in sections 2.2 and 5.3. As discussed in the main text in section 2.2, the only cut diagram contributing to the non-analytic two-loop results is the rightmost diagram in figure 3. So the on-shell form factors needed for the two-loop calculations are \(\langle p_{1}^{g}p_{2}^{g}p_{3}^{g}|O|0\rangle\). We can calculate the discontinuity by gluing the two sides of the diagram in figure 3 together using, \[\text{Disc}\left(\langle OO^{\prime}\rangle\right)=(2f^{abc}g_{s})^{2}\frac{-i}{3!}\int\,\frac{d^{2}p_{1}}{(2\pi)^{2}2E_{1}}\frac{d^{2}p_{2}}{(2\pi)^{2}2E_{2}}\frac{d^{2}p_{3}}{(2\pi)^{2}2E_{3}}\times(2\pi)^{3}\delta^{3}(p-p_{1}-p_{2}-p_{3})\langle 0|O(p)|p_{1}^{g}p_{2}^{g}p_{3}^{g}\rangle\langle p_{1}^{g}p_{2}^{g}p_{3}^{g}|O^{\prime}(p)|0\rangle.\] (C.6) We can then project onto the different spins at this level to obtain the integrands, which are scalar functions of \(p_{1},p_{2}\), and \(p_{3}\), \[\text{Disc}A_{j}^{(1)}=-ig_{s}^{2}C_{A}\frac{16}{3\pi^{3}}\int\frac{d^{2}p_{1}}{E_{1}}\frac{d^{2}p_{2}}{E_{2}}\frac{d^{2}p_{3}}{E_{3}}\delta^{3}(p-p_{1}-p_{2}-p_{3})I_{j}(p_{1},p_{2},p_{3}).\] (C.7) To do the integral, we go to the rest frame of \(p\) and define the usual parameters for the 3-body phase-space calculation, \[p=(p^{0},0,0),\qquad x_{i}=\frac{p_{i}\cdot p}{p^{2}},\qquad x_{1}+x_{2}+x_{3}=1. \tag{111}\] Now we can write \(I_{j}(p_{1},p_{2},p_{3})\) in terms of the \(x_{i}\). Further, using the spatial \(\delta\)-function, we can integrate out \(x_{3}\) trivially and write the remaining integrals as, \[\text{Disc}A^{(1)}_{j}=ig_{s}^{2}C_{A}\frac{16}{3\pi^{3}}\int\frac{d^{2}x_{1}}{x_{1}^{2}}\frac{d^{2}x_{2}}{x_{2}^{2}}\delta(\theta-\theta_{*})\frac{I_{j}(x_{1},x_{2})}{\sin\theta_{*}}, \tag{112}\] where \(\cos\theta_{*}=(1/2+x_{1}x_{2}-x_{2}-x_{1})/x_{1}x_{2}\). Now the angular integrals can be done and we are left with two one-dimensional integrals: \[\text{Disc}A^{(1)}_{j}=ig_{s}^{2}C_{A}\frac{16}{3\pi^{2}}\int_{0}^{\frac{1}{2}}dx_{1}\int_{\frac{1}{2}-x_{1}}^{\frac{1}{2}}dx_{2}\,\frac{x_{1}^{2}x_{2}^{2}I_{j}(x_{1},x_{2})}{(\frac{1}{2}-x_{1})(\frac{1}{2}-x_{2})(x_{1}+x_{2}-\frac{1}{2})}. \tag{113}\] These integrals can then be simply calculated to obtain the non-analytic two-loop results quoted in the paper.
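As a toy illustration of the numeric-reconstruction step described in appendix B (a sketch only; the actual computation truncates the rapidly convergent sums (B.7) to high precision), the analytic \(d=3\) value of \(I_{1}^{(3)}\) in (A.11) can be recovered from a floating-point approximation with PSLQ:

```python
from mpmath import mp, mpf, pi, pslq

mp.dps = 50  # working precision in digits

# Stand-in for a high-precision number obtained by truncating the sums
# in (B.7); here we simply evaluate the known d = 3 coefficient of
# (p^2)^(-7/2) from (A.11): -(2*pi^2 - 39)/(192*pi^2).
x = -(2 * pi**2 - 39) / (192 * pi**2)

# Search for an integer relation c0*x + c1*1 + c2/pi^2 = 0.
print(pslq([x, mpf(1), 1 / pi**2]))
# -> [192, 2, -39] (up to an overall sign),
#    i.e. x = -1/96 + 39/(192*pi^2).
```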
2309.14107
Wav2vec-based Detection and Severity Level Classification of Dysarthria from Speech
Automatic detection and severity level classification of dysarthria directly from acoustic speech signals can be used as a tool in medical diagnosis. In this work, the pre-trained wav2vec 2.0 model is studied as a feature extractor to build detection and severity level classification systems for dysarthric speech. The experiments were carried out with the popularly used UA-speech database. In the detection experiments, the results revealed that the best performance was obtained using the embeddings from the first layer of the wav2vec model that yielded an absolute improvement of 1.23% in accuracy compared to the best performing baseline feature (spectrogram). In the studied severity level classification task, the results revealed that the embeddings from the final layer gave an absolute improvement of 10.62% in accuracy compared to the best baseline features (mel-frequency cepstral coefficients).
Farhad Javanmardi, Saska Tirronen, Manila Kodali, Sudarsana Reddy Kadiri, Paavo Alku
2023-09-25T13:00:33Z
http://arxiv.org/abs/2309.14107v2
# Wav2vec-Based Detection and Severity Level Classification of Dysarthria From Speech

###### Abstract

Automatic detection and severity level classification of dysarthria directly from acoustic speech signals can be used as a tool in medical diagnosis. In this work, the pre-trained wav2vec 2.0 model is studied as a feature extractor to build detection and severity level classification systems for dysarthric speech. The experiments were carried out with the popularly used UA-speech database. In the detection experiments, the results revealed that the best performance was obtained using the embeddings from the first layer of the wav2vec model that yielded an absolute improvement of 1.23% in accuracy compared to the best performing baseline feature (spectrogram). In the studied severity level classification task, the results revealed that the embeddings from the final layer gave an absolute improvement of 10.62% in accuracy compared to the best baseline features (mel-frequency cepstral coefficients). Farhad Javanmardi, Saska Tirronen, Manila Kodali, Sudarsana Reddy Kadiri, and Paavo Alku Department of Signal Processing and Acoustics, Aalto University, Finland. Dysarthria, Severity level classification, Wav2vec 2.0, MFCCs.

## 1 Introduction

Dysarthria is a neuro-motor disorder caused by neurological damage of the motor component of speech production. Dysarthria occurs either due to a neurological injury (e.g., cerebral palsy, stroke) or due to a neurodegenerative disease (e.g., Parkinson's disease, Huntington's disease). Dysarthric speech is often associated with atypical speech prosody, reduced tongue flexibility, and imprecise articulation, all of which impact speech intelligibility [1]. Assessment of speech intelligibility is essential in monitoring the progression of dysarthria. Speech assessment is generally performed in voice clinics by speech-language pathologists who conduct intelligibility tests to identify the potential presence of dysarthria as well as its severity level [2]. Subjective intelligibility tests are costly and laborious, and they are prone to biases of pathologists due to their familiarity and experience with patients. Hence, the design of an objective method for the severity assessment of dysarthric speech is important. The assessment of dysarthric speech is carried out in two phases consisting of (1) the identification of the presence of dysarthria and (2) the estimation of the severity level of the disease. Both of these phases are important diagnostic steps that are needed to make clinical decisions on medication and therapy of the patient. This work focuses on both of the above-mentioned phases by studying speech-based detection and severity level classification of dysarthria. In both of these topics, the current study focuses on the use of the pre-trained wav2vec model to extract features [3]. Automatic detection and severity level classification of dysarthria from speech is enabled using data-driven approaches based on supervised learning. This involves building machine learning models that are trained using speech data which has been collected from patients and which has been labelled by speech-language pathologists. Many automatic dysarthria detection and severity level classification methods are based on acoustic features that characterize the salient aspects in the production of dysarthric speech [4, 5]. Abnormal variations of pronunciation, prosody, and voice quality were investigated using sentence-level features in [5]. 
In [6, 7], the authors investigated features based on single frequency filtering for dysarthric speech detection and 4-class intelligibility classification (very low, low, medium, and high). In [8], auditory distinctive features (based on models of the mid-external ear and basilar membrane) were proposed for the assessment of dysarthria. Their study also showed that the combination of auditory features with mel-frequency cepstral coefficients (MFCCs) improves the intelligibility estimation of dysarthric speech. Short- and long-term temporal measures based on log-energy dynamics and auditory-inspired modulation spectral features were investigated for dysarthric speech intelligibility assessment in [9]. Glottal source features along with the OpenSmile features [10] were investigated both in the detection of dysarthric speech as well as in the classification of the intelligibility of dysarthric speech in [11, 12]. A linear weighted combination of articulation, phonation, and prosody features of speech was used in intelligibility estimation in [13, 14]. Recently, in [15] the authors explored the use of different spectro-temporal representations, such as the spectrogram and mel-spectrogram, together with convolutional neural networks in intelligibility assessment of dysarthric speech. In recent years, pre-trained neural network models have become popular for various speech technology tasks, such as automatic speech recognition (ASR), speaker recognition, and emotion recognition [3, 16, 17, 18]. In the current study, we take advantage of an effective pre-trained model, wav2vec, in speech-based detection and severity level classification of dysarthria. The topic is motivated by recent findings reported in pathological speech recognition that have shown good performance for wav2vec models [16, 19]. These models are generally first pre-trained in an unsupervised manner on large speech datasets and then fine-tuned on a small set to perform the required task. In this work, we experiment with the wav2vec model shared on HuggingFace [20]. The main contributions of the study are: * Layer-by-layer analysis of the utility of the wav2vec features in the detection of dysarthria (healthy \(vs.\) dysarthric) and in the classification of the severity level (very low, low, medium, and high) of dysarthria from speech. * Systematic comparison between popularly used spectral features and the wav2vec features.

## 2 The Detection and Severity Level Classification Systems

This work studied the following two classification problems: (1) a binary classification problem to distinguish dysarthric speech from healthy speech (i.e., the detection problem), and (2) a multi-class classification problem to classify the severity level of dysarthria from speech into 4 classes (very low, low, medium, and high). In both problems, a pre-trained wav2vec 2.0 [3, 16] model is used as a feature extractor, and a support vector machine (SVM) is used as a classifier. A schematic block diagram of the systems built for the two aforementioned problems is shown in Figure 1. The following two sub-sections describe the feature extraction and classification methods in more detail.

### Pre-trained feature extractor

In building the classification systems for both problems, we use the pre-trained wav2vec 2.0 [3] model as a feature extractor. We utilize Facebook's wav2vec 2.0 model that has been trained with 960 hours of audio from the Librispeech corpus [3]. 
The model was originally trained to be used in ASR, which implies that the final layers of the network have learnt to extract embeddings that contain information about the phoneme identity of speech. However, the embeddings from the first layers of the network also contain information about phones, which makes them useful features for many speech-related tasks other than ASR [21]. We make use of the input of the first transformer block of the context network of the wav2vec model, as well as of the output embeddings of each of the 12 transformer blocks. As the embeddings are extracted for each non-overlapping 20 ms frame of the signal, we compute the average over the frames to obtain the final feature vectors. The dimensionality of the embeddings is 768, which is also the dimensionality of the features that we finally use in our classifiers. These features will be referred to as the wav2vec features in this paper. To refer to the individual layers, we use the layer numbers, i.e., "wav2vec-N" refers to the features associated with the \(N\)th layer of the model.

### Classifiers

In this study, a binary SVM classifier is used to distinguish between healthy and dysarthric samples. For the multi-class classification between the four severity levels (very low, low, medium, and high), SVM with the one-vs-one architecture [22] was used. Both in the binary classification and in the 4-class classification, we used the radial basis function as the kernel, and a regularization parameter value of 1. For gamma, we used scaling according to the dimensionality and variance of the data, written as \(\gamma=1/(D\cdot Var(X))\), where \(Var(X)\) is the variance of the training data, and \(D\) is the dimension of the feature vectors.

## 3 Experimental Setup

### Database of dysarthric speech

This study uses the UA-speech database of dysarthric speech [4]. The database consists of 765 isolated words recorded in three blocks (B1, B2, and B3) by 15 dysarthric speakers (four female and eleven male) diagnosed with cerebral palsy and 13 healthy controls (four female and nine male). The overall intelligibility of each of the dysarthric speakers has been assessed using subjective evaluations by 5 native listeners. Based on the intelligibility ratings, the dysarthric speakers have been grouped into four severity categories: very low (4 speakers), low (3 speakers), medium (3 speakers) and high (5 speakers). Each block contains 255 words, of which 155 words are common to all three blocks, and the remaining 100 words differ across the blocks. An eight-microphone array was used for speech recording. Speech was sampled at 16 kHz and each microphone was spaced at intervals of 1.5 in. The current study was carried out using the speech utterances from all three blocks of each speaker, recorded by microphone number 6 of the array. More details of the UA-speech database can be found in [4].

### Baseline features used for comparison

In order to compare the performance of the wav2vec features, three popularly used spectral features (spectrogram, mel-spectrogram and MFCCs) were considered, as they were shown in [7] to provide good discrimination between dysarthric and healthy speech. All these feature representations are derived using the Librosa toolkit [23].

#### 3.2.1 Spectrogram

The spectrogram features were computed by taking the logarithm of the amplitude spectrum that was estimated using a 1024-point fast Fourier transform (FFT) with Hamming windowing in frames of 25 ms with a shift of 5 ms. 
Finally, the features of the spectrogram were averaged over the time axis, yielding a 513-dimensional feature vector per utterance.

#### 3.2.2 Mel-Spectrogram

The mel-spectrogram features were computed by applying a mel-filterbank with 80 filters on the amplitude spectrum. The resulting mel-spectrogram was mapped to a decibel scale through the logarithm. By averaging the mel-spectrogram features over the time axis, an 80-dimensional feature vector was obtained per utterance.

#### 3.2.3 MFCCs

To compute the MFCCs, the discrete cosine transform (DCT) was used to transform the mel-scale spectrum into the mel-cepstrum. The first 13 cepstral coefficients (including the \(0^{th}\) coefficient) were considered and their delta & double-delta coefficients were computed, which yielded a 39-dimensional MFCC feature vector per utterance.

### Training and testing

The binary detection experiments were conducted with leave-one-speaker-out (LOSO) cross-validation, where speech signals of one speaker were considered as test data and speech signals of the remaining 27 speakers were used for training the SVM classifier. Both training and testing data were z-score normalized using the mean and standard deviation of the training data.

Figure 1: The proposed systems for (a) detection of dysarthric speech and for (b) severity level classification of dysarthric speech.

In each iteration, the evaluation metrics were saved. This process was repeated for 28 iterations (equaling the number of speakers), and finally the evaluation metrics were averaged over the 28 iterations. For the severity level classification experiments, three dysarthric speakers were left out (one male speaker from the "very low" level of intelligibility and two male speakers from the "high" level of intelligibility) to have the same number of dysarthric speakers in each class. Experiments were carried out using the 12 remaining dysarthric speakers. In each iteration, speech signals of four speakers (one from each class) were considered as test data and speech signals of the remaining speakers were used to train the classifier. By considering one speaker from each class for testing the SVM classifier, a total of 81 (3×3×3×3) iterations (training and testing processes) were performed.

### Evaluation metrics

In order to evaluate the performance of the dysarthria detection systems, the following four commonly used evaluation metrics were considered in this study: accuracy (ACC), sensitivity (SE), specificity (SP), and F1-score (F1), in addition to confusion matrices. For the severity level classification systems, mean accuracy and class-wise accuracies were used for assessing the performance of the systems.

## 4 Results

This section reports the results obtained using the wav2vec features and the baseline features. The results are reported in Section 4.1 for the detection problem (i.e., the binary classification problem), and in Section 4.2 for the 4-class severity level classification problem.

### Severity level classification of dysarthric speech

The performance in terms of accuracy is shown in Figure 3 for the three baseline features and for all the 13 wav2vec features for the severity level classification experiments. Table 3 shows the overall classification accuracy together with the class-wise accuracies for the baseline features and for the two best wav2vec features. From Figure 3, it can be clearly seen that almost all the wav2vec features (except for wav2vec-1) outperformed the three baseline features in terms of the mean accuracy. 
Interestingly, unlike in the detection problem, the best-performing wav2vec features were the ones obtained from the final layers. In fact, there is a rising trend in the accuracy when moving from the first layer towards the final layer. This result was expected because the severity of dysarthria is associated with the intelligibility of speech (e.g., phoneme identity) and the wav2vec pre-trained model is designed for ASR tasks; therefore, the final layers can effectively learn information related to the linguistic contents of speech. Among the baseline features, it can be observed that the MFCCs and spectrogram performed better than the mel-spectrogram (chance level is 25%). Compared to the best baseline (MFCCs), wav2vec-12 and wav2vec-13 gave absolute improvements of 8.88% and 10.62%, respectively. The results also show that for the wav2vec features, the class-wise accuracies are relatively less biased towards the two extreme ends of the severity scale ("very low" and "high" levels of dysarthria severity) compared to the baseline features.

## 5 Summary and Conclusions

In this study, we explored the state-of-the-art pre-trained wav2vec model to extract features in the context of dysarthric speech detection and in the context of severity level classification of dysarthria. A comparison of the wav2vec features with the popularly used spectral and cepstral baseline features (spectrogram, mel-spectrogram, and MFCCs) was carried out both in the detection problem and in the severity level classification problem. The results of the dysarthric speech detection experiments indicated that the features extracted from the first layer (wav2vec-1) outperformed the baseline and other wav2vec features by showing an absolute improvement of 1.23% in accuracy compared to the best-performing baseline feature (spectrogram). This indicates that the starting layers of the wav2vec model have learned generic speech features that can be effectively used for the detection of dysarthric speech. The results of the severity level classification experiments showed that the features extracted from the final layers (wav2vec-12 and wav2vec-13) performed better than the baseline and the other wav2vec features. Compared to MFCCs (the best baseline feature), absolute improvements of 8.88% and 10.62% were given by wav2vec-12 and wav2vec-13, respectively. We argue that the improved performance that was obtained using the features from the final layers is due to the wav2vec model's capability to extract features that contain information related to the linguistic contents associated with speech intelligibility (which is directly related to the severity level of dysarthria). Taken together, the experimental findings of the study indicate that the classification systems seem to be generalizable to unseen speakers using the features from the starting layers in the detection task, and the features from the final layers in the multi-class classification task. However, further research is required in order to study the generalizability of the wav2vec features for other disorders and to study also the performance of these features in cross-database scenarios.

Figure 3: Severity level classification accuracy of dysarthric speech for the three baseline features (spectrogram, mel-spectrogram and MFCCs) and for all the 13 wav2vec features. The blue bars represent the features derived from the wav2vec model, with the tick labels indicating the index of the corresponding layer. Heights of the bars represent the mean accuracy. 
\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline Feature & ACC [\%] & \(C_{very-low}\) & \(C_{low}\) & \(C_{medium}\) & \(C_{high}\) \\ \hline \hline \multicolumn{6}{|c|}{**Baseline features**} \\ \hline \hline Spectrogram & 33.26 & 44.54 & 16.65 & 14.95 & 56.87 \\ \hline Mel-spectrogram & 26.21 & 32.10 & 13.71 & 16.09 & 42.94 \\ \hline MFCCs & 33.94 & 50.63 & 7.32 & 21.02 & 56.76 \\ \hline \hline \multicolumn{6}{|c|}{**Wav2vec features**} \\ \hline \hline wav2vec-12 & **42.82** & 63.10 & 22.77 & 16.51 & 68.91 \\ \hline wav2vec-13 & **44.56** & 56.09 & 23.21 & 18.77 & 80.16 \\ \hline \end{tabular} \end{table} Table 3: Dysarthria severity level classification accuracies and class-wise accuracies for the three baseline features along with the two best wav2vec features. Here ACC refers to accuracy and C refers to class.
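For concreteness, a minimal sketch of the feature-extraction and classification pipeline described in Sections 2-3 is given below. The checkpoint name and the file-handling helpers are our assumptions (the paper only states that Facebook's 960-hour Librispeech wav2vec 2.0 model from HuggingFace was used); hidden state 0 corresponds to wav2vec-1 in the paper's numbering.

```python
import numpy as np
import torch
import soundfile as sf
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed checkpoint (960 h of Librispeech audio); the exact model card
# used in the paper is not stated beyond this description.
CKPT = "facebook/wav2vec2-base-960h"
extractor = Wav2Vec2FeatureExtractor.from_pretrained(CKPT)
model = Wav2Vec2Model.from_pretrained(CKPT).eval()

def wav2vec_features(wav_path, layer):
    """768-dim utterance vector: hidden states of `layer`, averaged over
    the 20 ms frames. layer 0 = input of the first transformer block
    (wav2vec-1 in the paper); layers 1..12 = the 12 block outputs."""
    speech, sr = sf.read(wav_path)            # UA-Speech audio is 16 kHz
    inputs = extractor(speech, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer][0].mean(dim=0).numpy()

# `train_files`, `test_files`, and `y_train` are hypothetical placeholders.
X_train = np.stack([wav2vec_features(f, layer=0) for f in train_files])
X_test = np.stack([wav2vec_features(f, layer=0) for f in test_files])

scaler = StandardScaler().fit(X_train)         # z-score with training stats
clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # gamma = 1/(D * Var(X))
clf.fit(scaler.transform(X_train), y_train)
y_pred = clf.predict(scaler.transform(X_test))
```

Note that for the 4-class task, `SVC` applies the one-vs-one scheme by default, matching the architecture described in Section 2.2.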
2303.00129
Cell Division and Motility Enable Hexatic Order in Biological Tissues
Biological tissues transform between solid-like and liquid-like states in many fundamental physiological events. Recent experimental observations further suggest that in two-dimensional epithelial tissues these solid-liquid transformations can happen via intermediate states akin to the intermediate hexatic phases observed in equilibrium two-dimensional melting. The hexatic phase is characterized by quasi-long-range (power-law) orientational order but no translational order, thus endowing some structure to an otherwise structureless fluid. While it has been shown that hexatic order in tissue models can be induced by motility and thermal fluctuations, the role of cell division and apoptosis (birth and death) has remained poorly understood, despite its fundamental biological role. Here we study the effect of cell division and apoptosis on global hexatic order within the framework of the self-propelled Voronoi model of tissue. Although cell division naively destroys order and active motility facilitates deformations, we show that their combined action drives a liquid-hexatic-liquid transformation as the motility increases. The hexatic phase is accessed by the delicate balance of dislocation defect generation from cell division and the active binding of disclination-antidisclination pairs from motility. We formulate a mean-field model to elucidate this competition between cell division and motility and the consequent development of hexatic order.
Yiwen Tang, Siyuan Chen, Mark J. Bowick, Dapeng Bi
2023-02-28T23:20:42Z
http://arxiv.org/abs/2303.00129v2
# Cell Division and Motility Enable Hexatic Order in Biological Tissues

###### Abstract

Biological tissues transform between solid-like and liquid-like states in many fundamental physiological events. Recent experimental observations further suggest that in two-dimensional epithelial tissues these solid-liquid transformations can happen via intermediate states akin to the intermediate hexatic phases observed in equilibrium two-dimensional melting. The hexatic phase is characterized by quasi-long-range (power-law) orientational order but no translational order, thus endowing some structure to an otherwise structureless fluid. While it has been shown that hexatic order in tissue models can be induced by motility and thermal fluctuations, the role of cell division and apoptosis (birth and death) has remained poorly understood, despite its fundamental biological role. Here we study the effect of cell division and apoptosis on global hexatic order within the framework of the self-propelled Voronoi model of tissue. Although cell division naively destroys order and active motility facilitates deformations, we show that their combined action drives a liquid-hexatic-liquid transformation as the motility increases. The hexatic phase is accessed by the delicate balance of dislocation defect generation from cell division and the active binding of disclination-antidisclination pairs from motility. We formulate a mean-field model to elucidate this competition between cell division and motility and the consequent development of hexatic order. Organ surfaces are often covered with confluent monolayers of epithelial or endothelial cells which provide functional separation from the surrounding environment. During development these cells grow, divide and move, dynamically reorganizing the entire tissue. Regulated by a complex set of chemical and mechanical signaling pathways [1; 2; 3; 4], tissue frequently undergoes a transition from a structureless fluid-like state to a state capable of supporting a variety of stresses, most notably elastic stresses [5; 6; 7; 8; 9; 10; 11]. Such transformations have recently been analyzed as a crossover from a liquid to an amorphous solid [12; 13]. In two-dimensional (2D) systems in equilibrium, however, liquids can develop rigidity via two consecutive transitions, corresponding to the development of first orientational order, without translational order, and then the subsequent addition of translational order (a solid) [14; 15]. The intermediate phase with (quasi-long-range) orientational order but translational disorder is known as the hexatic phase and has been shown to occur in a very wide variety of physical systems [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. The hexatic is a particular type of structured fluid, since it flows like a fluid but has orientational rigidity. Previous theoretical and computational models of dense tissues have studied the emergence of hexatic order, with focus on the effects of thermal fluctuations [30; 31] and motility [32; 33; 34]. This modeling usually looks at the inverse process of melting from the crystalline state. Realistic tissues, however, are very rarely crystalline, with a few exceptions [35; 36]. Cell division and apoptosis almost always destroy the crystalline state [37], and yet there has been no direct observation of the hexatic phase in _in vitro_ biological tissues, including those undergoing a solid-fluid transition [5; 6; 8; 38]. 
Recent _in vivo_ experiments on Drosophila embryos have uncovered hexatic order during development with cell division [39; 40], along with the associated increase of orientational correlations [41]. The mechanism behind the emergence of this orientational order has remained unclear. Here we analyze whether biological systems can exhibit this rather subtle phase by studying numerically and analytically the self-propelled Voronoi (SPV) model of cellular tissue including cell division and death [13]. We compare a variety of structural properties with and without division, including translational and orientational order parameters, order-field correlation functions in space, order-field susceptibility, and topological defect densities. We find that the interplay of division/apoptosis and cell motility does indeed give rise to a hexatic regime. In the absence of cell division, the model undergoes a crystal-hexatic and a hexatic-liquid transition. With both cell division and motility, the model is driven through distinct liquid-hexatic and hexatic-liquid transitions with a re-entrant state, or phase, diagram. While cell motility is typically thought to promote disorder, we show that the combined effect of cell division and cell motility allows access to the hexatic state. A key role in this process is played by topological defects, both disclinations and dislocations.

## II Results

### Model

We model a 2D cell layer using the Self-Propelled Voronoi (SPV) [13] version of the vertex model [42; 43; 44; 45; 46]. In the SPV model, each cell is identified by its center \(\vec{r}_{i}\), and the positions of all cell centers serve as the degrees of freedom. The cell shapes and the cellular network are determined from the Voronoi tessellation [47; 48] of the cell centers. The mechanical energy of the tissue is given by \[E=\sum_{i=1}^{N}[K_{A}(A_{i}-A_{0})^{2}+K_{P}(P_{i}-P_{0})^{2}] \tag{1}\] where \(A_{i}\) and \(P_{i}\) are the area and perimeter of cell \(i\), with the preferred area \(A_{0}\), the preferred perimeter \(P_{0}\), the area elastic constant \(K_{A}\), and the perimeter elastic constant \(K_{P}\). The term quadratic in \(A_{i}\) results from cell volume incompressibility and the monolayer's resistance to height fluctuations [49]. The term quadratic in \(P_{i}\) originates from the active contractility of the actomyosin subcellular cortex, while the term linear in \(P_{i}\) represents an effective cell membrane tension due to cell-cell adhesion and cortical tension. The target shape index \(p_{0}=P_{0}/\sqrt{A_{0}}\) effectively characterizes the competition between cell-cell adhesion and cortical tension, acting as a signature for the solid-liquid phase transition. Apart from the effective mechanical interaction force \(\mathbf{F}_{i}=-\nabla_{i}E\), cells are self-propelled. A self-propulsion force is exerted along the cell polarity direction \(\mathbf{\hat{n}_{i}}=(\cos\theta_{i},\sin\theta_{i})\), where \(\theta_{i}\) is the polarity angle. The self-propulsion force has a constant magnitude \(v_{0}/\mu\), with \(\mu\) the inverse of a frictional drag. The equation of motion for each cell is given by \[\frac{\mathrm{d}\vec{r}_{i}}{\mathrm{d}t}=\mu\vec{F}_{i}+v_{0}\mathbf{\hat{n}_{i}}. \tag{2}\] The polarity angle obeys rotational diffusion: \(d\theta_{i}/dt=\eta_{i}(t)\), where \(\eta_{i}(t)\) is white noise (\(\langle\eta_{i}(t)\eta_{j}(t^{\prime})\rangle=2D_{r}\delta(t-t^{\prime})\delta_{ij}\)), with \(D_{r}\) the rotational diffusion rate. In addition to the polarized self-propulsion of each cell, cell division and apoptosis serve as another source of active drive in a living tissue [50; 51; 52; 37]. 
Each cell has a probability \(\gamma_{0}\mathrm{d}t\) to divide. For each cell division, a daughter cell is introduced by randomly seeding a point at a distance of \(d=0.1\) (in units of the average cell diameter) near the mother cell. During the relaxation after cell division, the daughter and mother cells are pushed apart by the interaction force. We have checked that the results are independent of the choice of \(d\). In order to study the density-independent effects of cell division, we keep the number density of the tissue constant by implementing apoptosis at the same rate as division. Apoptosis is then performed on randomly chosen cells, which removes the cell from the tissue. This simulation scheme mimics the maintenance of homeostatic balance in a tissue [53; 54]. The model can be nondimensionalized by expressing all lengths in units of \(\sqrt{\bar{A}}\), where \(\bar{A}\) is the average cell area in the tissue, and time in units of \(1/(\mu K_{A}\bar{A})\). Three independent parameters remain: the cell division/apoptosis rate \(\gamma_{0}\), the magnitude of motility \(v_{0}\), and the cell shape index \(p_{0}\). Throughout the simulations, we choose \(D_{r}=1\) without loss of generality. The confluent tissue with \(N\) cells is simulated in a square box (\(\sqrt{N}\times\sqrt{N}\)) under periodic boundary conditions. We numerically simulate the model using the open-source software cellGPU [55]. The simulations start with a crystalline initial state in which cell centers form a triangular lattice. Eq. 2 is numerically integrated for \(2\times 10^{6}\) steps at a step size of \(\Delta t=0.05\). For all data presented, the analysis is based on the steady-state regime of the simulations (final \(5\times 10^{5}\) steps). In the supplementary material (Fig.7), we also simulate the model starting from amorphous states to demonstrate that the results are independent of the initial conditions. We set \(p_{0}=3.6\) in our simulations. During the melting process, the tissues undergo a transition from crystalline to hexatic to liquid as motility increases [32].

### Order Parameter and Correlation

Translational and orientational symmetries distinguish the phases: crystalline solid, hexatic, and liquid. A 2D system in the solid phase has quasi-long-range translational order and long-range orientational order, while the liquid phase has no long-range order of either kind. These two symmetries are related but not completely dependent. The system in the hexatic phase has no long-range translational order but retains quasi-long-range orientational order [14; 15]. The local bond-orientational order for each cell \(\vec{r}_{j}\) is evaluated by \[\psi_{6}(\vec{r}_{j})=\frac{1}{\sum_{i=1}^{z_{j}}l_{ij}}\sum_{i=1}^{z_{j}}l_{ij}\exp{(i6\theta_{i}^{j})}, \tag{3}\] where the sum runs over the \(z_{j}\) Voronoi neighbors of the cell and is weighted by the shared edge lengths \(l_{ij}\) [56]. \(\theta_{i}^{j}\) is the angle of the bond vector \((\vec{r}_{i}-\vec{r}_{j})\) with respect to a reference axis. The translational order is quantified by \(\psi_{T}(\vec{r}_{j})=\exp{(i\vec{G}\cdot\vec{r}_{j})}\), where \(\vec{G}\) is a reciprocal lattice vector. In Fig. 1, we plot the global order parameters \(\Psi_{6}=\frac{1}{N}\sum_{j=1}^{N}\psi_{6}(\vec{r}_{j})\) and \(\Psi_{T}=\frac{1}{N}\sum_{j=1}^{N}\psi_{T}(\vec{r}_{j})\) as a function of \(v_{0}\) to quantify the degree of order in the tissue. 
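A minimal sketch of the edge-length-weighted \(\psi_{6}\) of eq. (3) using SciPy's Voronoi tessellation is shown below; it assumes an open set of points (the simulations here use periodic boundaries, which would require a periodic tessellation instead):

```python
import numpy as np
from scipy.spatial import Voronoi

def local_psi6(points):
    """Edge-length-weighted bond-orientational order psi_6, eq. (3).
    `points` is an (N, 2) array of cell centers; ridges extending to
    infinity (open boundary) are skipped."""
    points = np.asarray(points)
    vor = Voronoi(points)
    num = np.zeros(len(points), dtype=complex)
    den = np.zeros(len(points))
    for (i, j), (v1, v2) in zip(vor.ridge_points, vor.ridge_vertices):
        if v1 == -1 or v2 == -1:
            continue  # unbounded ridge on the open boundary
        l_ij = np.linalg.norm(vor.vertices[v1] - vor.vertices[v2])
        dx, dy = points[j] - points[i]
        w = l_ij * np.exp(6j * np.arctan2(dy, dx))
        num[i] += w
        den[i] += l_ij
        num[j] += w  # exp(i6(theta + pi)) = exp(i6*theta)
        den[j] += l_ij
    keep = den > 0
    return num[keep] / den[keep]

# Global order parameter: Psi6 = np.abs(local_psi6(cell_centers).mean())
```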
With no cell division (black lines), the tissue has a crystalline structure at low \(v_{0}\), where both \(\Psi_{T}\) and \(\Psi_{6}\) are close to 1. The order parameters decrease monotonically with increasing \(v_{0}\). For \(0.35\lesssim v_{0}\lesssim 0.45\), the tissue lacks translational order but retains orientational order, suggesting the existence of a hexatic phase before melting into a liquid phase at higher \(v_{0}\). This result is consistent with the solid-hexatic-liquid melting scenario found in the previous study using a similar model [32]. When cells divide (colored lines in Fig. 1), \(\Psi_{T}\) is always close to zero at any value of \(v_{0}\). This clearly illustrates that activity due to cell cycling (division/death) always destroys the translational order and therefore forbids the formation of permanently frozen structures [37]. Remarkably, while an actively dividing tissue lacks translational order, it retains orientational order for a large range of \(v_{0}\) values. This suggests the emergence of a hexatic phase at intermediate \(v_{0}\) values. A transition from liquid to hexatic to liquid is visualized by the structure factor \(S(\mathbf{q})\) for various \(v_{0}\) at a fixed division rate. In order to locate the transition points between different phases, we next compute the bond-orientational and translational correlation functions. They are given by \[g_{\alpha}(r)=\langle\psi_{\alpha}^{*}(r)\psi_{\alpha}(0)\rangle \tag{4}\] with \(r=|\vec{r}_{i}-\vec{r}_{j}|\) and \(\alpha=6,T\) corresponding to orientational order and translational order, respectively. The peaks of the correlations are fitted to a power-law decay \(g_{\alpha}(r)\sim r^{-\eta_{\alpha}}\) (quasi-long-range order) and an exponential decay \(g_{\alpha}(r)\sim e^{-r/\xi_{\alpha}}\) (short-range order). KTHNY theory [14; 15; 57; 58; 59] predicts \(\eta_{6}=1/4\) at the hexatic-liquid transition point and \(\eta_{T}=1/3\) at the crystal-hexatic transition point [60; 61]. The correlations are plotted and compared with the reference lines (\(\eta_{T}=1/3\) or \(\eta_{6}=1/4\)) in Fig. 2 and Fig. 8. Melting without cell division allows quasi-long-range translational order at low \(v_{0}\), decaying as a power law with \(\eta_{T}\leq 1/3\); with cell division, the translational order decays faster. Cell division also promotes the decay of bond-orientational correlations, but low \(\gamma_{0}\) still allows for quasi-long-range \(g_{6}(r)\) with \(\eta_{6}\leq 1/4\) in the intermediate \(v_{0}\) range. A broken translational symmetry without broken orientational symmetry characterizes the emergence of a hexatic state. The exponential law fits the orientational order better in both the low- and high-\(v_{0}\) liquid phases. The fitting exponents \(\eta_{6}\) and \(\xi_{6}\) at fixed division rate \(\gamma_{0}=2\times 10^{-5}\) are shown in Fig. 3, and in Fig. 9 for the case of no cell division. These results confirm the emergence of two distinct liquid-hexatic and hexatic-liquid transition points when there is cell division. The correlations in the hexatic phase indeed carry quasi-long-range order, well fitted by power-law decays \(g_{6}(r)\sim r^{-\eta_{6}}\), while outside this region the system is short-range correlated and can be fitted by exponential decays \(g_{6}(r)\sim e^{-r/\xi_{6}}\). As the hexatic phase is approached from either side, \(\xi_{6}\) grows rapidly, consistent with a diverging correlation length.
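For concreteness, the orientational correlation of Eq. 4 and the two competing KTHNY decay forms can be estimated from a single configuration as sketched below; `points` and `psi` follow the previous sketch, and the bin count and distance cutoff are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist

def g6_of_r(points, psi, nbins=60, rmax=15.0):
    """Bin Re<psi6* psi6> over pair separations (Eq. 4, alpha = 6)."""
    iu = np.triu_indices(len(points), k=1)      # pair ordering matches pdist
    d = pdist(np.asarray(points))
    corr = (np.conj(psi)[iu[0]] * psi[iu[1]]).real
    bins = np.linspace(0.0, rmax, nbins + 1)
    idx = np.digitize(d, bins) - 1
    g = np.array([corr[idx == b].mean() if np.any(idx == b) else np.nan
                  for b in range(nbins)])
    return 0.5 * (bins[:-1] + bins[1:]), g

power_law = lambda r, a, eta: a * r ** (-eta)        # quasi-long-range form
exponential = lambda r, a, xi: a * np.exp(-r / xi)   # short-range form
# e.g.: (a, eta), _ = curve_fit(power_law, r[mask], g[mask]) on the peaks
```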
Despite excellent agreement with KTHNY theory, the correlation functions and the associated quantities \((\xi_{6},\eta_{6})\) near the onset of hexatic states suffer from large sample-to-sample variations, as shown in Fig. 3(a,b). We have confirmed that this is not due to finite-size effects, since even at large system sizes the behavior of \(g_{6}(r)\) can range from exponential decay to power-law decay (Fig. 10). Consequently, \((\xi_{6},\eta_{6})\) cannot be used to pinpoint the precise location of the liquid-hexatic and hexatic-liquid transitions. Figure 2: **The (a,c) translational and (b,d) bond-orientational correlations.** (a) For intermediate cell motility \(v_{0}=0.35\) there is quasi-long-range translational order only in the absence of cell divisions (red topmost line): the decay is a power law \(g_{T}(r)\sim r^{-\eta_{T}}\) with \(\eta_{T}\leq 1/3\). For any non-zero division rate, there is a lack of translational symmetry, as indicated by an exponential decay of \(g_{T}(r)\). (b) For division rate \(\gamma_{0}\leq 4\times 10^{-5}\), there is quasi-long-range order in the bond orientations, which decay as a power law \(g_{6}(r)\sim r^{-\eta_{6}}\) with \(\eta_{6}\leq 1/4\). At higher \(\gamma_{0}\) values \(g_{6}(r)\) lacks long-range order. (c) For small cell division rate \(\gamma_{0}=2\times 10^{-5}\), all \(g_{T}(r)\) decay exponentially, (d) while in the intermediate \(v_{0}\) range \([0.25,0.45]\), \(g_{6}\) is quasi-long-range. The broken translational but preserved orientational symmetry characterizes the emergence of a hexatic state. Figure 1: **Quantifying (a) the translational order parameter \(\Psi_{T}\) and (b) the orientational order parameter \(\Psi_{6}\) as functions of the cell motility \(v_{0}\) at various division rates \(\gamma_{0}\).** (a,b) In the absence of cell division (black line), the tissue has a crystalline structure where both \(\Psi_{T},\Psi_{6}\) are close to \(1\). With increasing \(v_{0}\) the order parameters decrease monotonically. For \(0.35\lesssim v_{0}\lesssim 0.45\), the tissue lacks translational order but retains orientational order, suggesting the existence of a hexatic phase before melting into a liquid phase at higher \(v_{0}\). When cells divide (colored lines), the activity due to cell cycling (division/death) always destroys the translational order and forbids a crystalline solid at any \(v_{0}\). Interestingly, the orientational order is preserved at intermediate \(v_{0}\) values. (c) The structure factor \(S(\mathbf{q})\) is plotted for various \(v_{0}\) at fixed division rate \(\gamma_{0}=2\times 10^{-5}\). Strikingly, the increased motility combined with cell division drives a transition from liquid to hexatic to liquid. ### Susceptibility We next take advantage of the large fluctuations that arise near critical points by using the order parameter susceptibility to pinpoint the transitions. The susceptibility \[\chi_{\alpha}=N(\langle|\Psi_{\alpha}|^{2}\rangle-\langle|\Psi_{\alpha}| \rangle^{2}) \tag{5}\] characterizes the fluctuations in the translational (\(\chi_{T}\)) and orientational (\(\chi_{6}\)) order parameters. Since \(\chi_{\alpha}\) is essentially an integral of the correlation function, it is expected to be more robust to finite-size or finite-time effects [20; 31].
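Eq. 5 reduces to a one-line estimate given a steady-state time series of the global order parameter, e.g.:

```python
import numpy as np

def susceptibility(psi_series, N):
    """chi = N (<|Psi|^2> - <|Psi|>^2) from steady-state samples (Eq. 5)."""
    m = np.abs(np.asarray(psi_series))
    return N * (np.mean(m**2) - np.mean(m)**2)
```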
In the absence of division, during the melting process (shown in Fig. 11), there is a sharp divergence of \(\chi_{T}\) at \(v_{0}=0.35\), indicating the crystal-hexatic transition. On the other hand, \(\chi_{6}\) diverges at \(v_{0}=0.46\pm 0.01\), which corresponds to the hexatic-liquid transition. By performing the analysis at system sizes ranging from \(N=2430\) to \(38880\), we confirm that these divergences are robust to finite-size effects. In contrast, with cell division (at \(\gamma_{0}=2\times 10^{-5}\)), \(\chi_{6}\) develops two peaks (Fig. 3(c)). The divergence of \(\chi_{6}\) determines two distinct transition points, at \(v_{0}=0.25\pm 0.01\) and at \(v_{0}=0.45\pm 0.01\). The second point is a vestige of the hexatic-liquid transition found in the absence of cell division, whereas the first transition point emerges solely due to cell division. Here, a state that would otherwise be a crystal in the absence of cell divisions becomes hexatic when cells divide. Exploring various cell division rates \(\gamma_{0}\) and active motilities \(v_{0}\) at fixed \(p_{0}=3.6\), we plot the \(v_{0}-\gamma_{0}\) phase diagram (Fig. 4). Colour indicates the mean magnitude of the global orientational order over tens of thousands of frames. Black dots mark the peaks of \(\chi_{6}\) at various division rates. The two transition points approach each other and annihilate as the division rate increases. Figure 3: **Pinpointing the location of the hexatic phase boundary.** (a) The correlation length \(\xi_{6}\) and (b) the power-law decay exponent \(\eta_{6}\) of the orientational correlation function are shown as functions of \(v_{0}\) at constant \(\gamma_{0}=2\times 10^{-5}\). Here, \(\xi_{6}\) is shown in the range where exponential decay is observed, while \(\eta_{6}\) is shown where the orientational order is quasi-long-ranged. Circles represent the fitting exponents for different seeds, and the solid lines average over the seeds. (c) The hexatic order parameter susceptibility \(\chi_{6}\) for the same parameter range as in (a,b) exhibits two distinct peaks (\(v_{0}=0.25\pm 0.01\) and \(v_{0}=0.45\pm 0.01\)), which determine the location of the hexatic boundary. The \(\chi_{6}\) in the vicinity of the transition is calculated based on 50 independent simulations. Figure 4: **Phase diagram as a function of cell division rate \(\gamma_{0}\) and motility \(v_{0}\).** While translational symmetry is always destroyed by cell division, orientational symmetry can be preserved. Colour indicates the mean magnitude of the global orientational order parameter. Black dots are the hexatic-liquid transition points, obtained from the divergence of \(\chi_{6}\). ### Disclinations and Dislocations According to KTHNY theory [57; 58; 59; 14; 15], the distinct crystalline, hexatic and liquid phases are characterized by the distributions of the basic topological defects known as disclinations and dislocations. Whereas the pure crystalline phase is defect free, or equivalently all defects are tightly bound in defect-antidefect pairs, the hexatic phase has a non-vanishing density of free dislocations and the liquid phase has a non-vanishing density of free disclinations. If \(z_{i}\) denotes the coordination number (number of neighbors) of the \(i\)th cell, then we can define an associated disclination charge \(q_{i}=6-z_{i}\) [62]. Hexagonal cells are thus "neutral", pentagonal cells have charge +1, heptagonal cells charge -1, and so on. Dislocations, the defects that disrupt translational order but preserve orientational order, correspond to tightly bound \(5-7\) pairs. They are disclination-neutral but possess a net vectorial charge, the Burgers vector.
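Operationally, the disclination charges follow directly from the coordination numbers of the Delaunay triangulation, the dual of the Voronoi tessellation. A minimal sketch, again without periodic boundaries (so boundary cells are miscounted), is:

```python
import numpy as np
from scipy.spatial import Delaunay

def disclination_charges(points):
    """q_i = 6 - z_i from Delaunay (dual-of-Voronoi) neighbor counts."""
    tri = Delaunay(np.asarray(points))
    indptr, _ = tri.vertex_neighbor_vertices
    return 6 - np.diff(indptr)      # +1: pentagon, -1: heptagon, 0: hexagon

def candidate_dislocations(points):
    """Count +1 charges adjacent to a -1 charge (tightly bound 5-7 pairs)."""
    tri = Delaunay(np.asarray(points))
    indptr, nbrs = tri.vertex_neighbor_vertices
    q = 6 - np.diff(indptr)
    return sum(1 for i in np.flatnonzero(q == 1)
               if np.any(q[nbrs[indptr[i]:indptr[i + 1]]] == -1))
```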
We approximate the Burgers vector by the displacement vector separating the 5 and the connected 7. In general there will be clusters of connected defects, and one must measure the associated disclination and dislocation charges of the entire cluster. The densities of disclinations and dislocations are calculated as their volume fractions averaged over time. As shown in Fig. 5(a), cell division creates dislocations at a rate dependent on motility. Division tends to disorder, favoring a liquid. What about motility? At low motility, the disordering effect of division wins. At high motility both processes disorder, leading again to a liquid. But for a significant range of intermediate motilities, we see that the number density of free disclinations falls to zero whereas the free dislocation density is finite. How is this possible? In this intermediate regime we hypothesize that disclinations are able to explore sufficient configuration space to access local free energy minima at which all disclinations find their anti-disclinations and bind into dislocations, thus leading to a hexatic. Fig. 5(b) shows the dependence of the defect density on cell division rate at a fixed motility in the hexatic regime, showing that a sufficiently high division rate leads to a non-zero density of free disclinations, thus melting the hexatic to a liquid. Fig. 5(c) is a representative snapshot of a hexatic state (\(v_{0}=0.35\), \(\gamma_{0}=2\times 10^{-5}\)). Note the presence of dislocation complexes but no isolated disclinations. A video shows the dynamic evolution of states with various values of cell motility at a fixed division rate. The densities of dislocations and disclinations are indicated by color as a function of cell division rate \(\gamma_{0}\) and motility \(v_{0}\) at fixed \(p_{0}=3.6\) in Fig. 12; black dots mark the same data as in Fig. 4. ### Mean-Field Model To further understand the emergence of hexatic order through cell division, we develop a simple mean-field model (Fig. 6) incorporating the competition between cell division and motility. We simplify the state of a small cell cluster using a mean-field approximation that allows three states: (a) a crystalline state ("ordered"), (b) an isolated single dislocation, and (c) an isolated single disclination. These states are associated with energies \(\epsilon_{0}\), \(\epsilon_{1}\), and \(\epsilon_{2}\), respectively. Assuming a common value \(\epsilon_{up}\) of the energy at the top of the wells, the energy barrier heights are \(\Delta\epsilon_{i}=\epsilon_{up}-\epsilon_{i}\). Since dislocations and disclinations are excitations of the ordered state, we assume \(\Delta\epsilon_{1}=c_{1}\Delta\epsilon_{0}\) and \(\Delta\epsilon_{2}=c_{2}\Delta\epsilon_{0}\) with \(1>c_{1}>c_{2}\). We also assume the motility force is approximately Brownian (this is exact in the limit of \(D_{r}\gg 1\)), thus providing a source of uncorrelated fluctuations. While our simulations use \(D_{r}=1\), the motility-induced fluctuations are approximately white stochastic noise [13]. In this limit an effective temperature \(T_{\text{eff}}=1/\beta\propto v_{0}^{2}\) adequately describes the fluctuations. Transitions between states arise from fluctuations over energy barriers, as illustrated in Fig. 6(a). Figure 5: **Topological defects in the tissue.** (a) The volume densities of dislocations and disclinations are plotted as functions of \(v_{0}\) at constant cell division rate \(\gamma_{0}=2\times 10^{-5}\).
(b) The same quantities are plotted at a constant \(v_{0}=0.35\) and varying \(\gamma_{0}\). Dislocations consistently appear prior to the appearance of disclinations. The disclination density is low in the hexatic range and grows above the dislocation density in the division-dominated range. (c) The snapshot for \(v_{0}=0.35\) and \(\gamma_{0}=2\times 10^{-5}\) has some dislocation clusters but no free disclinations. Blue represents cells with disclination charge \(q_{i}=1\), red represents cells with \(q_{i}=-1\), and dark red represents cells with \(q_{i}=-2\). Figure 6: **A mean-field description for defect dynamics in a tissue.** (a) Here the crystal state is the global energy minimum while the dislocation and disclination states are local minima. Their energies are given by \(\epsilon_{0}\), \(\epsilon_{1}\), and \(\epsilon_{2}\), respectively. In the absence of cell division, the transition rates among the three states are modeled as activated processes given by Eq. 6. Cell divisions provide an additional energy injection for cells to overcome barriers, and the transition rates are modified by an additive \(\gamma_{0}\). (b) Based on the mean-field theory, the disclination density is plotted as a function of the dimensionless parameters \(\gamma_{0}/R\) and \((\beta\Delta\epsilon_{0})^{-1/2}\). In the absence of cell divisions, the states evolve following \[\begin{split}\frac{\mathrm{d}\rho_{0}}{\mathrm{d}t}&= Re^{-\beta\Delta\epsilon_{1}}\rho_{1}-Re^{-\beta\Delta\epsilon_{0}}\rho_{0}\\ \frac{\mathrm{d}\rho_{1}}{\mathrm{d}t}&=Re^{-\beta \Delta\epsilon_{0}}\rho_{0}+Re^{-\beta\Delta\epsilon_{2}}\rho_{2}-2Re^{-\beta \Delta\epsilon_{1}}\rho_{1}\\ \frac{\mathrm{d}\rho_{2}}{\mathrm{d}t}&=Re^{-\beta \Delta\epsilon_{1}}\rho_{1}-Re^{-\beta\Delta\epsilon_{2}}\rho_{2}\end{split} \tag{6}\] Here \(R\) is the attempt frequency between two states, which is assumed to be the same for all transitions. In the high-temperature limit (\(\beta\to 0\)), the steady-state solution of Eq. 6 gives \(\rho_{0}=\rho_{1}=\rho_{2}\approx 1/3\), while in the low-temperature limit (\(\beta\rightarrow\infty\)), \(\rho_{0}=1\), \(\rho_{1}=\rho_{2}=0\). When cell divisions occur, we assume that the barrier crossing between states is facilitated by the rate of division, with the modified transition rates illustrated in Fig. 6(a). Eq. 6 is then modified to be \[\begin{split}\frac{\mathrm{d}\rho_{0}}{\mathrm{d}t}& =(Re^{-\beta\Delta\epsilon_{1}}+\gamma_{0})\rho_{1}-(Re^{-\beta \Delta\epsilon_{0}}+\gamma_{0})\rho_{0}\\ \frac{\mathrm{d}\rho_{1}}{\mathrm{d}t}&=(Re^{-\beta \Delta\epsilon_{0}}+\gamma_{0})\rho_{0}+(Re^{-\beta\Delta\epsilon_{2}}+\gamma_ {0})\rho_{2}\\ &-2(Re^{-\beta\Delta\epsilon_{1}}+\gamma_{0})\rho_{1}\\ \frac{\mathrm{d}\rho_{2}}{\mathrm{d}t}&=(Re^{-\beta \Delta\epsilon_{1}}+\gamma_{0})\rho_{1}-(Re^{-\beta\Delta\epsilon_{2}}+\gamma_ {0})\rho_{2}\end{split} \tag{7}\] In the high temperature/velocity limit (\(\beta\to 0\)), the energy discrepancy of the states is negligible, and the steady state leads to the same result as the no-division case, \(\rho_{0}=\rho_{1}=\rho_{2}\approx 1/3\). However, in the low temperature/velocity limit (\(\beta\rightarrow\infty\)), the dominance of the division terms also leads to \(\rho_{0}:\rho_{1}:\rho_{2}=1:1:1\). In Fig. 6(b), we plot the disclination density as a function of the dimensionless parameters \(\gamma_{0}/R\) and \((\beta\Delta\epsilon_{0})^{-1/2}\).
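Setting the time derivatives in Eq. 7 to zero gives the steady state in closed form, since the stationary condition balances the escape fluxes of neighboring wells. The sketch below evaluates this steady state; the parameter values mirror the choices quoted in the text (\(c_{1}=0.8\), \(c_{2}=0.6\)), with \(\gamma_{0}/R\) and \(\beta\Delta\epsilon_{0}\) as the two dimensionless controls.

```python
import numpy as np

def steady_state(beta_de0, gamma0_over_R, c1=0.8, c2=0.6):
    """Stationary (rho0, rho1, rho2) of Eq. 7, with rates in units of R."""
    k0 = np.exp(-beta_de0) + gamma0_over_R        # escape rate from crystal
    k1 = np.exp(-c1 * beta_de0) + gamma0_over_R   # escape rate from dislocation
    k2 = np.exp(-c2 * beta_de0) + gamma0_over_R   # escape rate from disclination
    rho = np.array([1.0 / k0, 1.0 / k1, 1.0 / k2])  # balance: k_i rho_i equal
    return rho / rho.sum()

# Limits check: steady_state(0.0, 0.0)  -> ~(1/3, 1/3, 1/3)
#               steady_state(50.0, 0.0) -> (1, 0, 0)
#               steady_state(50.0, 10.0)-> ~(1/3, 1/3, 1/3)
```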
We arbitrarily set \(c_{1}=0.8\) and \(c_{2}=0.6\) and mark the contour line \(\rho_{2}=0.05\), where the disclination density is low, as the hexatic boundary. \(\gamma_{0}/R\) represents the division rate and \((\beta\Delta\epsilon_{0})^{-1/2}\) represents the magnitude of the motility. The disclination-density contour based on this simple mean-field model behaves similarly to the hexatic-liquid transition line in Fig. 4/Fig. 12. As more disclinations are generated, the tissue transitions from a hexatic state to a liquid state. Within the liquid range, the disclination density is asymmetric in \(v_{0}\), as in the simulation data (Fig. 12(a)), increasing faster in the low-motility range than in the high-motility range. Moreover, the contour is robust to the choice of the disclination-density threshold. ## III Discussion and Conclusion Combining cell division and apoptosis with the motility built into the self-propelled Voronoi model shows a sizeable region of the (motility, division-rate) parameter space to be in the orientationally ordered hexatic phase. The evidence includes the standard order parameters, correlation functions and susceptibilities, along with the number density of dislocations and disclinations. A simple mean-field theory is developed and shown to explain the broad features of the state diagram. Switching off cell division leads to a low-motility crystalline phase, as known from previous work [32]. Any degree of cell division whatsoever destabilizes the crystalline phase, leading to the re-entrant sequence liquid-hexatic-liquid as motility increases at a fixed division rate. This may explain why most experiments in living tissues [5, 6, 8, 38] observe at best amorphous solid states: cell division is the norm. Our work can be directly applied to understand how hexatic order arises in the early _Drosophila_ embryo, as reported in [39, 40, 41]. In the syncytial embryo, cell nuclei interact via f-actin caps situated near the apical surface. At the same time, each nucleus is enclosed in a microtubule basket. While both actin and microtubule networks can generate active forces, it has been suggested that the microtubules are responsible for the active pushing/pulling forces between nuclei, while the f-actin caps provide a passive mechanical rigidity. At this developmental stage, the hexatic order parameter can reach as high as \(0.45-0.55\) [39]. When cells undergo rounds of syncytial division, the hexatic order parameter initially decreases but can recover back to \(\sim 0.45-0.55\) as a consequence of the active fluctuations from microtubules and passive repulsive interactions from the f-actin network. Such processes are well described by our theoretical model. Along with cell division, active microtubule-driven forces are captured by our random motility force \(v_{0}\). Birth and death, along with motility, generate dislocation arrays and thus new configurational pathways, including those over barriers to local hexatic minima in the free energy landscape. The experimental observation that hexatic order is destabilized when microtubule activity is lowered [39] maps to the transition from hexatic to liquid at low activity and fixed division rate, corresponding to the lower part of a vertical line in Fig. 4. It would be interesting to determine whether the experimental up-regulation of microtubule activity would also liquefy the hexatic state.
The subtle balance required to establish hexatic order in equilibrium means that it is often confined to a rather narrow region of the relevant parameter space. Our findings suggest that cell division provides a new way of exploring the configuration space of physical systems, as noted above. In particular, the dynamics of dislocation defects generated by cell division, both self-propelled and relaxational, promotes fluctuations over the barriers separating the hexatic phase from the crystalline or liquid phases. This phenomenon, which we may call defect-driven structure development, may well have implications beyond biological systems. In terms of the configuration space explored by the vertex model, cell division and apoptosis correspond to adding T2 moves (or interstitial insertion/deletion) to the allowed lattice updates, which yields a more efficient exploration of the space of all Voronoi tessellations and thus better routes to local hexatic minima [63; 64; 65]. It is remarkable that the early work of Swope and Andersen [66] found the hexatic phase by employing a grand canonical ensemble in which particles are added and removed. The mechanism we find here is very different from that found in colloids [20] and models of active particles [67], where packing density plays a crucial role. We have taken cell division to be isotropic. The inclusion of oriented cell divisions, however, would only enhance hexatic order. Recent work [51] has shown that oriented cell divisions can give rise to novel four-fold orientational order _in vivo_ through active defect climb, where defects introduced into the nascent lattice by cell divisions are healed by subsequent divisions along a well-defined global polarity axis. The effect of oriented divisions on hexatic order is a subject for the future. _Acknowledgements_ -- This work was supported in part by NSF DMR-2046683 (Y.T. and D.B.), PHY-1748958 (D.B. and M.J.B.), the Center for Theoretical Biological Physics NSF PHY-2019745 (Y.T. and D.B.), the Alfred P. Sloan Foundation (Y.T. and D.B.), and the Human Frontier Science Program (Y.T. and D.B.).
2310.00011
Joint Self-supervised Depth and Optical Flow Estimation towards Dynamic Objects
Significant attention has been attracted to deep learning-based depth estimation. Dynamic objects pose the hardest problems in inter-frame-supervised depth estimation due to the uncertainty in adjacent frames. Thus, integrating optical flow information with depth estimation is a feasible solution, as optical flow is an essential motion representation. In this work, we construct a joint inter-frame-supervised depth and optical flow estimation framework, which predicts depths under various motions by minimizing pixel wrap errors in bilateral photometric re-projections and optical vectors. For motion segmentation, we adaptively segment the preliminary estimated optical flow map into regions with large areas of connectivity. In self-supervised depth estimation, different motion regions are predicted independently and then composited into a complete depth map. Further, the pose and depth estimations re-synthesize the optical flow maps, serving to compute reconstruction errors with the preliminary predictions. Our proposed joint depth and optical flow estimation outperforms existing depth estimators on the KITTI Depth dataset, both with and without Cityscapes pretraining. Additionally, our optical flow results demonstrate competitive performance on the KITTI Flow 2015 dataset.
Zhengyang Lu, Ying Chen
2023-09-07T04:00:52Z
http://arxiv.org/abs/2310.00011v1
# Joint Self-supervised Depth and Optical Flow Estimation towards Dynamic Objects ###### Abstract Significant attention has been attracted to deep learning-based depth estimation. Dynamic objects pose the hardest problems in inter-frame-supervised depth estimation due to the uncertainty in adjacent frames. Thus, integrating optical flow information with depth estimation is a feasible solution, as optical flow is an essential motion representation. In this work, we construct a joint inter-frame-supervised depth and optical flow estimation framework, which predicts depths under various motions by minimizing pixel wrap errors in bilateral photometric re-projections and optical vectors. For motion segmentation, we adaptively segment the preliminary estimated optical flow map into regions with large areas of connectivity. In self-supervised depth estimation, different motion regions are predicted independently and then composited into a complete depth map. Further, the pose and depth estimations re-synthesize the optical flow maps, serving to compute reconstruction errors with the preliminary predictions. Our proposed joint depth and optical flow estimation outperforms existing depth estimators on the KITTI Depth dataset, both with and without Cityscapes pretraining. Additionally, our optical flow results demonstrate competitive performance on the KITTI Flow 2015 dataset. **Keywords: Self-supervised depth estimation, Optical flow estimation, Bilateral constraint.** ## 1 Introduction With the explosion of deep-learning technologies, depth estimation demonstrates promise for stereoscopic perception in complex scenes, facilitating high-level computer vision tasks involving human-machine understanding [1, 2], stereoscopic perception, scene segmentation, driving assistance and behaviour prediction [3]. In fact, bio-vision systems can perceive real-world scenes without barriers, whereby systems pre-trained with sufficient prior information can measure accurate depth maps. While the binocular mechanism is widespread in bio-vision systems, depth perception remains difficult under monocular conditions. Moreover, inferring depths from single images with deep-learning models remains exceedingly challenging, as it is an ill-posed vision task. Deep learning-based depth estimators have been extensively explored for years, yielding unparalleled accuracies against classic methods. Existing supervised models [4, 5, 6, 7, 8, 9] can predict accurate depths from monocular images by formulating depth estimation as a regression problem. Godard [10] provided a consistent binocular framework, allowing supervision by left-right pairs without labelled depths. Lu [11] leveraged the Fourier perspective to construct a robust depth estimator with a pyramid frequency network. The Mono-Former [12], the first CNN-Transformer for depth estimation, was conceived for multi-scene generalization. Self-supervised depth estimators provide a universal framework with binocular stereo images or continuous frame supervision, which alleviates laborious annotation work [13, 14]. The inter-frame-supervised method was first proposed as Monodepth2 [15], providing a label-free depth framework via joint estimation of camera poses and inverse depths. Johnston [16] leveraged a self-attentive mechanism and discrete disparity reconstruction to learn accurate depths in self-supervision. Guizilini [17] presented a multi-task framework, simultaneously estimating depth, optical flow, and scene flow to integrate multiple tasks via image synthesis and geometric constraints.
Recurrent Multi-Scale Feature Modulation (RMSFM) [18] designed multi-scale modulations with successive depth updates to improve the coarse-to-fine performance. To address the neglect of contextual consistency between multi-scale features, Guizilini [19] introduced the Self-Distilled Feature Aggregation (SDFA) module, which enables simultaneous aggregation of low-scale and high-scale features while maintaining contextual consistency. Two common solutions to the problem of dynamic objects in depth estimation, both incorporating optical flow information, can be found in the literature. The first solution uses optical flow to track the motion of dynamic objects and refine the depth map [20]. The second leverages information from motion segmentation to identify dynamic objects and remove the impact of their dynamic characteristics from the depth map [21]. In stationary scenes with moving viewpoints, the optical flow map carries the same information as the camera transformation and the depth map from the inter-frame-supervised methods. In other words, ideal optical flow maps can be equivalently decomposed into camera transformations and depths without occlusion components. Hence, camera pose estimation in inter-frame supervision can be considered a regression problem, estimating the ego-motion from the static components that dominate the scene. In order to construct a collaborative framework that focuses on dynamic objects, we unite two intrinsically homogeneous tasks, namely inter-frame-supervised depth and optical flow estimation. First, regions with independent motion directions are separated from the optical flow estimation results. Next, each segmented region is fed into the depth module to predict inverse depths and camera transformations, respectively. In addition, the optical flow, depth and pose networks are constrained by a bilateral photometric re-projection loss and an optical flow reconstruction loss, which are derived from the estimated depths and camera transformations. Relative to established self-supervised depth estimation approaches, the novel method exhibits remarkable improvements in accuracy, attributable to its handling of the dynamic-object problem. Simultaneously, the bidirectional re-projection constraint bolsters the robustness of the self-supervised mechanism. Specifically, the multi-task framework focusing on dynamic objects outperforms existing approaches on the KITTI Depth dataset. The contributions of the multi-task framework are outlined: * We construct a joint inter-frame-supervised depth and optical flow estimation framework, which predicts depths under different motions by minimizing pixel wrap errors between the photometric re-projections and optical vectors. * In optical flow-based motion segmentation, we adaptively segment the preliminary estimated optical flow map by connectivity. * For bilateral inter-frame-supervised depth estimation, each motion region is predicted independently before the complete depth map composition. Further, the pose and depth predictions re-synthesize the optical flow maps, serving to compute synthesis errors with the preliminary predictions. * The proposed joint framework outperforms advanced depth and optical flow estimators on the KITTI Depth and Flow datasets. ## 2 Methodology To enforce consistency between the optical flow and ego-motion, we present an inter-frame-supervised depth and optical flow estimation framework, which predicts depths by minimizing pixel wrap errors between the photometric re-projections and optical vectors.
### Overview As indicated in Fig. 1, the joint depth and optical flow framework focusing on dynamic objects comprises three modules: 1) optical flow-based motion segmentation; 2) bilateral inter-frame-supervised depth estimation; and 3) optical flow synthesis. The optical flow-based motion segmentation is intended to separate pixel regions with heterogeneous motion directions. Then, depth and pose estimations are performed independently in dynamic and static regions to compute re-projection errors with bilateral constraints. Finally, the optical flow map can be reconstructed from the predicted depths and camera pose, whose endpoint errors with respect to the raw optical flow optimize the two-stage framework. Optical flow-based motion segmentation serves a critical function in the network. The purpose of this module is to distinguish between pixel regions that exhibit heterogeneous motion directions. Optical flow, essentially the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and the scene, is used to effectively segment the image into regions based on the direction and magnitude of motion. This segmentation process allows the network to handle complex scenes where multiple objects may be moving in different directions. Following this segmentation process, depth and pose estimations are conducted independently in both dynamic and static regions. The aim here is to compute re-projection errors with bilateral constraints. The depth estimation is performed using a bilateral inter-frame-supervised approach, which takes into account both the previous and subsequent frames to make more accurate depth estimations. The pose estimation, on the other hand, is concerned with determining the orientation and position of the camera relative to the scene. The bilateral constraints act as a regulatory mechanism to ensure that these estimations remain consistent and accurate across all frames. Lastly, the optical flow map is reconstructed from the predicted depths and the estimated camera pose. This reconstructed optical flow map provides a detailed representation of the motion within the scene. The endpoint errors, which are the differences between the reconstructed optical flow map and the original optical flow, are then used to optimize the two-stage framework. This process is instrumental in refining the performance of the system, allowing it to improve its accuracy over time and adapt to changing conditions. Figure 1: The overview of the joint optical flow and inter-frame-supervised depth estimation towards dynamic objects. Depth networks for static and motion components share the same weights, as do the pose networks. ### Optical flow-based motion segmentation Following FlowNet [22], a standard U-net [23] is leveraged to predict preliminary optical flow maps, which guide the motion separation. In adjacent frames, ideal perspective-variable regions provide continuous optical vectors. A rigid object in relative motion can be considered a virtual perspective transformation; that is, relative-motion regions exhibit continuous vectors. Therefore, it is feasible to segment relatively moving components in the same scene with the optical flow method. To segment regions with heterogeneous motion directions, the preliminary predicted optical flow requires mean convolution operations to smooth the vectors, owing to the crude raw output. To retrieve sharp outlines, a Sobel operator is applied to filter the smoothed optical flow map.
Finally, the main relative motion regions are selected by filling the approximately enclosed outlines according to the given boundary threshold. These regions are determined by an eight-connected pixel traversal [24]. For further processing, segmented areas are padded with zero pixel values. Furthermore, if large motion components are erroneously segmented as part of the static region, only a single pose is estimated for them. In other words, under a wrong segmentation only the dominant camera transformation is obtained, and the motion of small misplaced regions is omitted. Hence, the error in the inter-frame-supervised depth module arises from the pixel sets whose motion forms are erroneously represented. It is worth noting that these region segmentation errors are penalized in the optical flow reconstruction loss. ### Bilateral Inter-frame-supervised depth estimation As a result of optical flow-based segmentation, components with heterogeneous motion directions are separated. We refer to static components as primary motion direction regions and dynamic ones as minor regions, since motion is absolute in essence. For the primary motion direction regions, a VGG-based PoseNet [25] is applied to estimate the ego-motion between adjacent static frames \(R_{s,t}\) and \(R_{s,t+1}\): \[\begin{split} T_{s,t\to t+1}&=PoseNet\left(R_{s,t},R_ {s,t+1}\right)\\ R_{s,t\to t+1}&=R_{s,t}\langle project(D_{s,t},T_{s,t+1 \to t},K)\rangle\end{split} \tag{1}\] Besides, the corresponding backward re-projection process can be expressed as: \[\begin{split} T_{s,t+1\to t}&=PoseNet\left(R_{s,t+1 },R_{s,t}\right)\\ R_{s,t+1\to t}&=R_{s,t+1}\langle project(D_{s,t+1},T_{s,t \to t+1},K)\rangle\end{split} \tag{2}\] where \(T\) denotes the camera pose transformation between two frames, \(K\) denotes the camera intrinsic parameters, \(\langle\rangle\) denotes the per-pixel sampling [26] and \(project()\) denotes the coordinate re-projection [27]. Similar to the primary motion regions, the forward and backward photometric re-projections for minor motion regions \(R_{m,t\to t+1}\) and \(R_{m,t+1\to t}\) are derived in the same way. Therefore, the photometric error \(L_{pe}\) comprises SmoothL1 and SSIM [28]: \[L_{pe}(I_{1},I_{2})=\alpha(1-\text{SSIM}(I_{1},I_{2}))+(1-2\alpha)\|I_{1}-I_{2} \|_{1}. \tag{3}\] where \(\alpha=0.45\). Following previous inter-frame-supervision works [15], to address scene occlusions, the bilateral photometric re-projection loss \(\mathcal{L}_{ph,s}\) is deployed to the primary motion regions: \[\mathcal{L}_{ph,s}=L_{pe}(R_{s,t+1},R_{s,t\to t+1})+L_{pe}(R_{s,t},R_{s,t+1 \to t}) \tag{4}\] In the same way as \(\mathcal{L}_{ph,s}\), the photometric re-projection loss \(\mathcal{L}_{ph,m}\) for minor motion regions can be derived. Finally, by combining the various motion components, the integral re-projection loss \(\mathcal{L}_{ph}\) is: \[\mathcal{L}_{ph}=\mathcal{L}_{ph,s}+\mathcal{L}_{ph,m} \tag{5}\] The above derivations only consider the case of two motion regions, but real-world scenarios contain multiple motion regions, for example, multiple cars driving in different lanes. Hence, the same re-projection method is adapted to count each heterogeneous motion individually, as an additional \(\mathcal{L}_{ph,m}\). Therefore, the re-projection loss \(L_{ph}\) for multiple motion regions is expressed as: \[L_{ph}=L_{ph,s}+\sum_{m=1}^{k}L_{ph,m} \tag{6}\] where \(k\) denotes the number of dynamic regions and \(m\) indexes the motion regions.
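For illustration, the photometric terms of Eqs. 3-5 can be sketched in PyTorch as below. The `ssim` helper uses the common 3×3 average-pooling formulation found in Monodepth-style codebases, and the warping callables `warp_fwd`/`warp_bwd` (the per-pixel sampling of Eqs. 1-2) are assumed to be given; both are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, C1=0.01**2, C2=0.03**2):
    """3x3 average-pooled SSIM map, as in Monodepth-style implementations."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * cov + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return (num / den).clamp(0, 1)

def photometric_error(i1, i2, alpha=0.45):
    """Eq. 3: alpha * (1 - SSIM) + (1 - 2 * alpha) * L1."""
    return (alpha * (1 - ssim(i1, i2)) +
            (1 - 2 * alpha) * (i1 - i2).abs()).mean()

def bilateral_loss(r_t, r_t1, warp_fwd, warp_bwd):
    """Eq. 4: forward plus backward re-projection errors."""
    return (photometric_error(r_t1, warp_fwd(r_t)) +
            photometric_error(r_t, warp_bwd(r_t1)))
```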
### Optical flow synthesis The optical flow is a composite pixel-level representation of the depth map and the camera transformation, which allows interconversion in the static scene. Thus, the reconstructed optical flow for static components \(\hat{O}_{s,t}\) can be defined as: \[\hat{O}_{s,t}=project(D_{s,t},T_{s,t+1\to t},K) \tag{7}\] Obviously, the motion components' optical flow \(\hat{O}_{m,t}\) has the same form. Then, we combine the static and motion components as: \[\hat{O}_{t}=\hat{O}_{s,t}+\hat{O}_{m,t} \tag{8}\] Following previous works, the optical flow module applies the endpoint error, which is the L2 distance between the synthesized vectors and the preliminary predictions. Hence, the optical flow synthesis loss \(L_{flow}\) can be computed as: \[\mathcal{L}_{flow}=\|\hat{O}_{t}-O_{t}\|_{2} \tag{9}\] In network optimization, the depth network loss function applies the re-projection loss and the optical flow synthesis loss, while the pose network loss function also applies the re-projection loss and flow synthesis loss for joint optimization. Therefore, the loss functions for the depth network \(\mathcal{L}_{depth}\) and pose network \(\mathcal{L}_{pose}\) can be formulated as: \[\mathcal{L}_{depth} =\mathcal{L}_{ph}+\lambda\mathcal{L}_{flow} \tag{10}\] \[\mathcal{L}_{pose} =\mathcal{L}_{ph}+\lambda\mathcal{L}_{flow}\] where \(\mathcal{L}_{ph}\) represents the re-projection loss, \(\mathcal{L}_{flow}\) represents the flow synthesis loss, and \(\lambda\) is the weight coefficient for the loss balance. \(\lambda\) is set to 0.1 based on the experimental results. Meanwhile, the optical flow network is optimized solely using the flow reconstruction loss. The optical flow network loss function \(\mathcal{L}_{optical}\) can be expressed as: \[\mathcal{L}_{optical}=\mathcal{L}_{flow} \tag{11}\] ## 3 Experiments ### Experiment Settings #### 3.1.1 Datasets The KITTI depth prediction dataset [29] is extensively employed for outdoor scene depth estimation, comprising 42,949 training, 1,000 validation and 500 test samples, which have sparse depth pixel annotations. For network processing, images are scaled to 352\(\times\)1216 to fit the convolution interface. Median scaling [27] is implemented to normalize scale values, since previous depth estimators cannot capture absolute scales. #### 3.1.2 Metrics For fairness, relative depths are bounded to a given distance between \(0m\) and \(120m\) and compared with existing depth estimators by standard metrics: Absolute Relative Error (AbsRel), Square Relative Error (SqRel), Root Mean Square Error (RMS), Root Mean Square Error in logarithmic space (RMS(\(log\))) [30] and accuracies at three thresholds. #### 3.1.3 Experiment Details The proposed framework is implemented on the PyTorch [31] platform and executed on 2 Nvidia RTX2080 GPUs. We employ VGG-16 [25] as the PoseNet encoder, initialized with the weights of a model pre-trained on ImageNet classification [32]. For the optical flow and depth networks, standard end-to-end U-net backbones are deployed, which facilitates further deployment. Furthermore, the learning rate for PoseNet is \(10^{-4}\), and for the depth and optical flow networks \(10^{-3}\); each is reduced to 10% of its value every 20 epochs. In the motion segmentation, the smoothing operation is conducted three times with kernels of 3, 5 and 9, followed by a Sobel operation with a threshold of 0.5 and a motion area filter with a minimum of 3000 pixels.
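Using exactly these hyper-parameters (mean kernels of 3, 5 and 9, a Sobel threshold of 0.5, eight-connectivity, and a 3000-pixel minimum area), the segmentation stage can be sketched as follows. This is a plausible reading of the pipeline rather than the authors' released code; in particular, smoothing the flow magnitude is a simplifying choice, and the direction field can be treated analogously.

```python
import numpy as np
from scipy import ndimage

def segment_motion(flow, sobel_thresh=0.5, min_area=3000):
    """flow: (H, W, 2) preliminary optical flow; returns an integer label map."""
    mag = np.linalg.norm(flow, axis=-1)
    for k in (3, 5, 9):                          # three mean-smoothing passes
        mag = ndimage.uniform_filter(mag, size=k)
    edges = np.hypot(ndimage.sobel(mag, axis=0), ndimage.sobel(mag, axis=1))
    interior = edges < sobel_thresh              # fill regions inside outlines
    labels, n = ndimage.label(interior, structure=np.ones((3, 3)))  # 8-connected
    for lab in range(1, n + 1):                  # drop regions below min_area
        if np.count_nonzero(labels == lab) < min_area:
            labels[labels == lab] = 0
    return labels
```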
### Ablation Experiments To determine the hyper-parameters for motion segmentation, ablation experiments with various thresholds are exhibited in Table 1. 'S' represents the segmentation-based method for depth estimation, which is based on optical flow segmentation, 'M' represents Monodepth2 [15], and 'BiE' denotes the bilateral re-projection error. As expected, the bilateral constraint substantially improves each motion region's pose and depth estimation. Meanwhile, the minimum area setting filters out small and incorrect motion regions. Among the above operations, the optical flow-based motion segmentation provides the most crucial improvements, achieving an error of 0.0950 on AbsRel, 0.6180 on SqRel, 3.940 on RMS and 0.1680 on RMS(\(log\)). Compared to the original Monodepth2, the most crucial enhancement in the proposed methodology is attributed to the dynamic object segmentation mechanism, which results in an 8.9% decrease in AbsRel. Concurrently, the bidirectional constraint contributes a significant improvement, approximately a 2.8% decrease in AbsRel. In the multivariate experiments, the model selected the optimal combination, which corresponds to a minimum area filter value of 3,000 pixels and a bilateral error constraint. The ablation experiments reveal that each threshold remarkably improves all 7 metrics. Among the thresholds, the bilateral constraint brings the most potent improvement, which means that most noise in the optical flow map is successfully filtered. Moreover, visual depth and optical flow maps are exemplified in Fig. 2. As illustrated in Fig. 2, the ablation experiments with the optimal combination showcase accurate visual depth maps and optical flow maps, which are visually consistent with the ground-truth depth results. Specifically, the proposed method successfully reconstructs the slender lampposts, although there is an inconsistency in the thickness of the lampposts' upper and lower ends. Compared to the ground-truth optical flow map, the lamppost optical flow estimated by the proposed method appears visually more reasonable. \begin{table} \begin{tabular}{c|c|c c c c|c c c} \hline \hline Method & \(T_{R}\) & AbsRel & SqRel & RMS & RMS(\(log\)) & \(\delta_{1}\) & \(\delta_{2}\) & \(\delta_{3}\) \\ \hline M(w/o BiE) & - & 0.1150 & 0.9030 & 4.8630 & 0.1930 & 0.877 & 0.959 & 0.981 \\ S(w/o BiE) & 1000 & 0.1050 & 0.7820 & 4.5980 & 0.1810 & 0.886 & 0.967 & 0.984 \\ S(w/o BiE) & 3000 & 0.0970 & 0.6470 & 3.9910 & 0.1690 & 0.899 & 0.968 & 0.984 \\ S(w/o BiE) & 5000 & 0.0980 & 0.6450 & 3.9980 & 0.1670 & 0.901 & 0.970 & 0.988 \\ \hline M(with BiE) & - & 0.1120 & 0.7880 & 4.6020 & 0.1900 & 0.873 & 0.961 & 0.981 \\ S(with BiE) & 1000 & 0.1020 & 0.6900 & 4.2180 & 0.1701 & 0.898 & 0.969 & 0.987 \\ S(with BiE) & 3000 & 0.0950 & 0.6180 & 3.9400 & 0.1680 & 0.904 & 0.969 & 0.988 \\ S(with BiE) & 5000 & 0.0960 & 0.6390 & 3.9720 & 0.1689 & 0.900 & 0.968 & 0.985 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative results with multiple settings, bilateral re-projection error (BiE) and minimum area threshold \(T_{R}\), on the KITTI depth dataset. ### Depth Comparison with Existing Methods In this section, we conduct a quantitative and qualitative comparison of existing depth estimation methods on the KITTI dataset. The experimental results analyze the performance of various depth estimation techniques based on inter-frame supervision mechanisms, including multiple non-pretrained self-supervised depth estimation methods.
In Fig. 3, most existing methods successfully estimate the lane scene's depth maps. Among these methods, the proposed method with the joint depth and optical flow estimation framework significantly outperforms existing methods, particularly in predicting the occluded areas of objects, as seen in the car edges and lamppost reconstruction in the upper and lower images, respectively. The primary reason for this performance improvement is that the optical flow estimation can approximate the relative position relationships between occlusions and the scene, thus assisting in depth prediction. As the experimental results in Table 2 show, the proposed method without pre-training considerably outperforms other existing methods. The optimal metrics are denoted in bold, while the second-best results are indicated in italics. The proposed method achieves an AbsRel of 0.0950, a SqRel of 0.6180, an RMS of 3.940, and an RMS(\(log\)) of 0.1680. Without pre-training, the proposed method reaches the highest accuracy across all metrics. Notably, the second-best depth estimation model, DRAFT [17], employs a large amount of ground truth optical flow for supervision, while our method is entirely self-supervised. Therefore, the proposed self-supervised method represents the optimal solution for depth estimation tasks. Figure 2: Visual results and zoomed objects on the Eigen splits. Both depth and optical flow results are provided for comparisons with ground-truths. Following the above evaluations, the experiments also compare our framework with existing methods on the KITTI dataset with Cityscapes pre-training. The visual results are presented in Fig. 4, while the quantitative results are shown in Table 3. Fig. 4 displays the comparison between advanced methods and the proposed method with Cityscapes pre-training. All visualized methods successfully reconstructed the depth maps of the lane scenes. Compared to other advanced methods, our motion segmentation-based joint optical flow and depth estimation method yields more accurate car edges in the upper image and neater road barriers in the lower image. \begin{table} \begin{tabular}{l|c|c c c c|c c c} \hline \hline Models & Dataset & AbsRel & SqRel & RMS & RMS(\(log\)) & \(\delta_{1}\) & \(\delta_{2}\) & \(\delta_{3}\) \\ \hline Monodepth [10] & K & 0.1480 & 1.2550 & 5.7320 & 0.2250 & 0.808 & 0.936 & 0.973 \\ GeoNet [33] & K & 0.1550 & 1.2960 & 5.8570 & 0.2230 & 0.793 & 0.931 & 0.973 \\ StructDepth [34] & K & 0.1410 & 1.0260 & 5.2910 & 0.2150 & 0.816 & 0.945 & 0.979 \\ BiCycDepth [35] & K & 0.1330 & 1.1260 & 5.5150 & 0.2310 & 0.826 & 0.934 & 0.969 \\ Monodepth2 [15] & K & 0.1150 & 0.9030 & 4.8630 & 0.1930 & 0.877 & 0.959 & 0.981 \\ PackNet-SfM [36] & K & 0.1110 & 0.7850 & 4.6010 & 0.1890 & 0.878 & 0.960 & 0.982 \\ SGDepth [37] & K & 0.1170 & 0.9070 & 4.8440 & 0.1960 & 0.875 & 0.958 & 0.980 \\ RMSFM6 [18] & K & 0.1120 & 0.8060 & 4.7040 & 0.1910 & 0.878 & 0.960 & 0.981 \\ Mono-Former [12] & K & 0.1080 & 0.8060 & 4.5940 & 0.1840 & 0.884 & 0.963 & 0.983 \\ DRAFT [17] & K & _0.0970_ & _0.6470_ & _3.9910_ & _0.1690_ & _0.899_ & _0.968_ & _0.984_ \\ Ours & K & **0.0950** & **0.6180** & **3.9400** & **0.1680** & **0.904** & **0.969** & **0.988** \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative depth results on the Eigen split. Extensive depth estimators are trained on KITTI depth (K). Figure 3: Visual results on KITTI with inter-frame-supervised methods. Our method is superior to advanced methods in edge sharpness for occluded objects.
Therefore, in the visual result comparison, the proposed method demonstrates higher accuracy on the depth estimation task. As shown in Table 3, 'K+CS' denotes the depth estimation model tested on the KITTI dataset and pretrained on the Cityscapes dataset. Our method exhibits an AbsRel of 0.0940, a SqRel of 0.6030, an RMS of 3.892, and an RMS(\(log\)) of 0.1640. Similarly, all metrics for the pre-trained depth estimation model achieve the highest accuracy. To further analyze the model, we evaluate the model size and single-frame running time of existing depth estimation methods, with an input at the standard size of 352\(\times\)1216 in the KITTI dataset. As indicated in Table 4, the proposed method has the smallest parameter count, while the second-smallest model, PackNet-SfM [36], is 10.3% larger. Among depth estimators with similar accuracy, the complexity of the proposed method is much lower than that of the other methods. \begin{table} \begin{tabular}{l|c|c c c|c c c} \hline \hline Models & Dataset & AbsRel & SqRel & RMS & RMS(\(log\)) & \(\delta_{1}\) & \(\delta_{2}\) & \(\delta_{3}\) \\ \hline Monodepth [10] & K+CS & 0.1240 & 1.0760 & 5.3110 & 0.2190 & 0.847 & 0.942 & 0.973 \\ GeoNet [33] & K+CS & 0.1530 & 1.3280 & 5.7370 & 0.2320 & 0.802 & 0.934 & 0.972 \\ PackNet-SfM [36] & K+CS & 0.1080 & _0.7270_ & 4.4260 & 0.1840 & 0.885 & _0.963_ & 0.982 \\ BiCycDepth [35] & K+CS & 0.1180 & 0.9960 & 5.1340 & 0.2150 & 0.849 & 0.945 & 0.975 \\ SGDepth [37] & K+CS & 0.1170 & 0.9070 & 4.8440 & 0.1960 & 0.875 & 0.958 & 0.980 \\ Mono-Former [12] & K+CS & 0.1060 & 0.8390 & 4.6270 & 0.1830 & 0.889 & 0.962 & _0.983_ \\ SemanticGuide[19] & K+CS & _0.1000_ & 0.7610 & _4.2700_ & _0.1750_ & _0.902_ & _0.965_ & 0.982 \\ Ours & K+CS & **0.0940** & **0.6030** & **3.8920** & **0.1640** & **0.905** & **0.973** & **0.989** \\ \hline \hline \end{tabular} \end{table} Table 3: Quantitative depth results on the Eigen split. Extensive depth estimators are trained on the KITTI dataset with pre-trained CityScapes [38] (K+CS). Figure 4: Visual results on KITTI pre-trained on CityScapes with inter-frame-supervised methods. In summary, the qualitative and quantitative results of the depth estimation experiments demonstrate that the proposed joint depth and optical flow estimation method, based on optical flow segmentation, successfully reconstructs accurate depth maps of outdoor scenes with moving objects, efficaciously surpassing the most advanced methods. ### Optical Flow Comparison with Existing Methods In the joint task of depth and optical flow estimation, besides comparing the depth estimation experiment results, we conduct quantitative and qualitative comparisons of optical flow prediction results with existing methods. The visual results are provided in Fig. 5, while the quantitative outcomes are shown in Table 5. \begin{table} \begin{tabular}{l r r} \hline \hline Method & Model Size & Running Time \\ \hline PackNet-SfM [36] & 102.8M & 190.024ms \\ Mono-Former [12] & 756.3M & 2302.958ms \\ Ours & 93.2M & 143.130ms \\ \hline \hline \end{tabular} \end{table} Table 4: Complexity comparison of existing depth estimation methods. Figure 5: Visual optical flow results on KITTI pre-trained on CityScapes. As depicted in Fig. 5, the proposed method is visually compared with DRAFT [17]. From the optical flow estimation results, it can be observed that both the DRAFT method and the proposed method are visually accurate and reasonable, reconstructing the optical flow information of large areas with the same motion characteristics.
Notably, compared to the previous best method, DRAFT, our approach offers more accurate reconstruction of moving object boundaries, such as the thin tree trunks in the upper image and the car contours in the lower image. As shown in Table 5, the proposed method achieves the best optical flow accuracy in the EPE metric and the second-best in the F1-all metric. Compared to the second-best optical flow estimation method, DRAFT, although our method's error increases by 5.53% in the F1-all metric, it reduces the error by 4.70% in the EPE metric. Consequently, our approach remains highly competitive in the optical flow estimation task. The above experimental results demonstrate that the proposed method successfully reconstructs optical flow maps of outdoor scenes with various moving objects in the optical flow estimation task, outperforming most advanced methods. ## 4 Conclusion In this work, we jointly constrain inter-frame-supervised depth and optical flow estimation, incorporating motion segmentation to separate heterogeneous motion components. Optical flow maps in a single motion direction can be equivalently decomposed into camera transformations and depths, allowing for independent depth and pose estimations in dynamic and static regions. Additionally, we treat ego-motion estimation in inter-frame supervision as a regression problem. Further, optical flow synthesis derives from the inverse depth and ego-motion re-projections, aiming to penalize the errors between the synthesized and preliminary estimates. Resulting from the joint training of the two modules, the optical flow and inter-frame-supervised depth modules, extensive experiments confirm that the proposed framework yields the most advanced metrics on the KITTI depth dataset, both with and without pre-training on CityScapes. \begin{table} \begin{tabular}{l|c c} \hline \hline Models & EPE & F1-all \\ \hline HDD [39] & 13.70 & 24.00 \\ PWCNet [40] & 10.35 & 33.70 \\ FlowNet2 [41] & 10.10 & 29.90 \\ DFNet [42] & 8.98 & 26.00 \\ RAFT [43] & 5.04 & 17.40 \\ TrianFlow [44] & 3.60 & 18.05 \\ DRAFT [17] & 2.55 & **14.81** \\ \hline Ours & **2.43** & 15.63 \\ \hline \hline \end{tabular} \end{table} Table 5: Quantitative optical flow results on the KITTI Flow dataset.
2309.12576
Understanding Patterns of Deep Learning Model Evolution in Network Architecture Search
Network Architecture Search and specifically Regularized Evolution is a common way to refine the structure of a deep learning model. However, little is known about how models empirically evolve over time, which has design implications for designing caching policies, refining the search algorithm for particular applications, and other important use cases. In this work, we algorithmically analyze and quantitatively characterize the patterns of model evolution for a set of models from the Candle project and the Nasbench-201 search space. We show how the evolution of the model structure is influenced by the regularized evolution algorithm. We describe how evolutionary patterns appear in distributed settings and opportunities for caching and improved scheduling. Lastly, we describe the conditions that affect when particular model architectures rise and fall in popularity based on their frequency of acting as a donor in a sliding window.
Robert Underwood, Meghana Madhastha, Randal Burns, Bogdan Nicolae
2023-09-22T02:12:47Z
http://arxiv.org/abs/2309.12576v1
# Understanding Patterns of Deep Learning Model Evolution in Network Architecture Search ###### Abstract Network Architecture Search and specifically Regularized Evolution is a common way to refine the structure of a deep learning model. However, little is known about how models empirically evolve over time, which has design implications for designing caching policies, refining the search algorithm for particular applications, and other important use cases. In this work, we algorithmically analyze and quantitatively characterize the patterns of model evolution for a set of models from the Candle project and the Nasbench-201 search space. We show how the evolution of the model structure is influenced by the regularized evolution algorithm. We describe how evolutionary patterns appear in distributed settings and opportunities for caching and improved scheduling. Lastly, we describe the conditions that affect when particular model architectures rise and fall in popularity based on their frequency of acting as a donor in a sliding window. Transfer Learning, AI, Network Architecture Search, Regularized Evolution, Characterization Study + Footnote †: Funding was provided by US Department of Energy ## I Introduction Network Architecture Search (NAS) is a foundational method for identifying viable deep learning (DL) model architectures suitable for solving a variety of problems. Unlike trial-and-error approaches that are time-consuming and whose quality depends heavily on the experience of DL experts, NAS explores a large number of candidate models from a search space that is based on a set of rules defining what choices are possible and how they can be combined to obtain valid DL model candidates. With the increasing complexity of the problems solved with DL, NAS is quickly becoming the only viable approach to producing high-quality DL model architectures. It has been used for cancer research [1], to shape the development of large foundation models [2], and to design models that optimize the performance of particular applications [3]. NAS is a computationally and resource-intensive process that often takes hundreds of state-of-the-art GPUs to find good candidates. It is difficult for two reasons: 1) the search space of DL model architectures is huge, in some cases larger than \(10^{57}\) possible candidates, which is far too large to search exhaustively even when distributed over an entire HPC system, and 2) evaluating each candidate is very expensive, in some cases taking multiple minutes to hours to train even a single epoch using state-of-the-art GPUs. Therefore, the problem of how to scale NAS is critical. Frameworks such as DeepHyper [4, 3] adopt a master-worker paradigm to scale NAS on supercomputing resources. Specifically, the master is responsible for generating new DL model candidates and passing them to workers for evaluation. However, a naive strategy that randomly samples the search space to generate new DL model candidates performs poorly, generating DL model candidates of low quality for a given amount of time and resources spent. As a consequence, more informed strategies have been proposed to generate DL model candidates. One such popular strategy is _regularized evolution_ [5], which is inspired by genetic algorithms.
It consists of two stages: 1) generate an initial random population of \(N\) candidates; 2) evolve the population by repeatedly sampling a random subset \(K\) of the population, taking the best performer in \(K\), performing a single mutation of the architecture of that best performer to produce a new candidate, training the new candidate to obtain a score, and replacing the oldest model in the population with the new result. Regularized evolution can be complemented by _transfer learning_ [6] to further increase the quality of the identified DL model candidates and to reduce the duration of the search [7]. Specifically, given two related problems A and B, if we already have a trained DL model \(M_{A}\) to solve A, then, instead of training a new DL model \(M_{B}\) from scratch to solve B, we could start from a variation of \(M_{A}\) that retains some or all of the layers in \(M_{A}\) while initializing the new layers with random weights. In this case, we provide a better starting point for \(M_{B}\) that "transfers" the knowledge from \(M_{A}\), which is likely to make \(M_{B}\) converge faster. In addition, the transferred layers in \(M_{B}\) are often "frozen" during the training, which means they are not updated during back-propagation, thereby accelerating every training iteration. At the same time, the amount of data needed to perform the training can often be reduced without sacrificing accuracy. These aspects combined lead to a significant reduction in training duration [8], assuming there is an efficient DL model repository that hides the I/O overheads of storing/loading the weights of the DL models necessary to perform the transfer learning in a scalable manner [9]. (A minimal sketch of this layer-freezing pattern is shown below.)
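To make the freeze-then-fine-tune pattern above concrete, the following is a minimal sketch written against the Keras API (our own illustration; the helper name, the number of shared layers, and the assumption that the shared layers have matching shapes are ours, not the paper's):

```python
import tensorflow as tf

def transfer_from_donor(donor: tf.keras.Model,
                        recipient: tf.keras.Model,
                        n_shared: int) -> tf.keras.Model:
    """Copy the first n_shared layers' weights from donor to recipient
    and freeze them; the remaining layers stay trainable."""
    for i in range(n_shared):
        recipient.layers[i].set_weights(donor.layers[i].get_weights())
        recipient.layers[i].trainable = False  # frozen during back-propagation
    return recipient

# After the new layers converge, the frozen layers can be unfrozen and the
# whole model fine-tuned at a lower learning rate.
```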
However, techniques to accelerate NAS with transfer learning and to produce DL models of better quality require a deep understanding of how the candidates are generated and evolved. In our paper, we attempt to characterize the candidates that are identified during NAS under regularized evolution to enable future improvements to systems software conducting NAS by answering the following research questions through a combination of empirical study and algorithmic analysis: (RQ1) How does the architecture of the candidates evolve structurally (e.g., do popular mutations appear earlier or later in the candidate architecture) and does it change over time? (RQ2) How do model evolution patterns change in the context of asynchronous distributed workers where there is incomplete knowledge of model performance? (RQ3) When does a candidate become popular (and therefore a frequent source of transfer learning) and when does it become less popular (and therefore less relevant)? (RQ4) How does the DL model candidate quality evolve over time during NAS? Answers to these questions have broad implications for the design of scalable NAS, including caching strategies for popular tensors used for transfer learning by the search strategy (both regularized evolution and other genetic algorithms), the strategy for distributing candidates to workers for evaluation, and improvements to the search strategy itself. Specifically, by understanding when models become popular, we can design efficient caching and prefetching techniques, targeted at groups of tensors used during transfer learning, that leverage the probability that a candidate will actually become a popular transfer donor. By understanding where mutations occur in the graph of a candidate architecture, we can better understand how many layers might need to be retrained after transfer or might differ substantially from the architectures of previous candidates, which we can use to refine the search strategy. Additionally, if top-performing candidates tend to have mutations in the layers close to the end of the architecture, we can bias the search strategy to favor such mutations and need to retrain fewer layers. By understanding when a model falls out of popularity, we can design cache eviction strategies to best utilize effective cache space. Furthermore, answering these questions would allow the refinement of heuristics to optimize transfer learning in the context of NAS. This paper contributes a case study that involves real-world DL models from the Candle Project [1] and test problems from NAS benchmarks [10] to bring qualitative and quantitative answers to these questions. We summarize our key contributions as follows: * We introduce a characterization methodology for the patterns emerging in the evolution of the DL model candidates during NAS. In particular, we insist on general aspects applicable to all research questions, such as: 1) selecting search spaces; 2) how to efficiently evaluate the search spaces; 3) how to instrument the DL tools used for large-scale NAS and how to efficiently collect comprehensive traces without compromising performance (Section IV). * We study how the architecture of DL model candidates evolves structurally over time. In doing so, we aim to answer the question of whether mutations tend to appear earlier or later in the model architecture over time. We show that random selection of both the location and the specific mutation has an impact on the kinds of models observed during NAS (Section V-A). * We study how model evolution patterns occur in a distributed setting. In particular, we show that 1) there are temporal localities of accessing particular tensors, both with respect to a single worker and across workers; 2) we can leverage these temporal localities by delaying scheduling decisions for the purpose of co-scheduling groups of candidate evaluations that share a common structure (Section V-B). * We study how the popularity of DL model candidates evolves during NAS and what conditions trigger the rise and fall in popularity. In particular, we show that models can be classified into popularity tiers and that we can determine thresholds for when a model moves between popularity tiers based on the frequency of it acting as a donor within a sliding window (Sections V-C and V-D). In the subsequent sections, we describe the NAS process, summarize prior characterization studies for genetic algorithms, describe our experimental methodology, present the results of our study, and discuss them in the context of our research questions. ## II Background Deep neural networks have been tremendously successful at learning tasks. However, existing state-of-the-art networks have been manually designed by researchers. Neural Architecture Search enables us to automate architecture engineering by searching through a series of potential architectures, estimating the performance of each architecture on a dataset, and finding the best one. A typical neural architecture search workflow consists of the following steps. First, a search space is constructed.
The search space is a graph, with each edge (in NASLib [10]) or node (in DeepHyper [11]) consisting of a number of operations that we can choose from, thus creating a combinatorial search space. The graph itself is fixed beforehand, with fixed blocks/layers drawn from prior wisdom about building architectures. The next component is the search algorithm itself. Search spaces consist of millions of potential architectures to choose from. The search algorithm determines the order in which these architectures are searched. One such method is to perform a brute-force search at random. However, one can use information about models already evaluated to guide the search. Genetic algorithms and Bayesian optimization are the most commonly used state-of-the-art search techniques. In this work we will focus on regularized evolution, but similar principles apply to other search methods too. The third component is performance prediction. Training each candidate architecture for a large number of epochs is computationally expensive. There exist various techniques to estimate the architecture's performance without fully training it. Here, we train each model for a small number of epochs. Genetic algorithms like regularized evolution are characterized by how they approach initialization, selection, evolution, and retirement. Initialization is how the initial population of candidates is created - often by random selections from the search space. Once the population is initialized, the selection process determines a subset of models from the population to "evolve" via mutation. The mutation process describes how a selected candidate undergoes changes to form a new candidate solution. As new, better performing models are explored, a retirement process determines which candidates are removed from the population and thus excluded from future mutation. We specifically use the parallel implementation of Regularized Evolution from DeepHyper [11]. A listing of the algorithm is provided in Algorithm 1. We discuss its implementation further in Section IV. As we evaluate and mutate architectures, we would like to save these models so they can be referenced in the future. For example, if we would like to transfer weights from an existing model in order to speed up the training process, we need the capability to save models in the repository and retrieve them on demand. To determine the best model to transfer from, we match the metadata graphs and find the model with the longest common metadata [7]. Previous work [9] looks at building a model repository designed to store and access these models efficiently. (A minimal sketch of this longest-common-prefix matching is shown below.)
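As an illustration of the donor-selection rule just described, the following sketch picks the stored model whose architecture sequence shares the longest common prefix with the candidate's (our own simplified illustration; representing `repository` as a list of `(sequence, model)` pairs is an assumption, not the actual repository API of [9]):

```python
from typing import Any, List, Sequence, Tuple

def common_prefix_len(a: Sequence[int], b: Sequence[int]) -> int:
    """Length of the shared prefix of two architecture sequences."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def best_donor(candidate_seq: Sequence[int],
               repository: List[Tuple[Sequence[int], Any]]):
    """Return the stored (sequence, model) entry sharing the longest
    common prefix with the candidate's architecture sequence."""
    return max(repository,
               key=lambda entry: common_prefix_len(candidate_seq, entry[0]))
```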
```
Require: s, p, N ∈ ℕ⁺ with s < p < N
from random import sample
pop ← []
for parallel taskloop i = 1 … p do            ▷ Stage 1
    c ← sample(search_space, k=1)
    quality ← evaluate(c)
    lock()
    pop.append((c, quality))
    unlock()
end for
for parallel taskloop evaluated = p … N do    ▷ Stage 2
    lock()
    sampled ← sample(pop, k=s)
    unlock()
    best ← max(sampled, by=λ i: i.quality)
    c ← mutate(best, search_space)
    quality ← evaluate(c)
    lock()
    pop.pop_left()                            ▷ discard oldest
    pop.append((c, quality))
    unlock()
end for
return pop
```
**Algorithm 1** Parallel Regularized Evolution

## III Related Work To the best of our knowledge, there are no papers that characterize the behavior of regularized evolution to address our specific research questions. There are a few papers that attempt to consider the set of candidates evaluated by genetic algorithms, and none of the papers that we evaluated did this in the context of NAS and the constraints that it presents. One such paper [12] looks at a set of genetic algorithms and evaluates them on their mean time to solution (quality) and number of evaluations (time) on a number of problems designed to be challenging (inseparable, resilient to hill climbing, non-linear, non-symmetric). While these properties are true of NAS problems, this says nothing about the kinds of candidates that appear during the search process. The Regularized Evolution paper [5] does not even evaluate the algorithm in this way, instead preferring to evaluate the candidates found on their quality as measured in accuracy, the model cost in FLOPs, and time to the highest accuracy. In contrast, our paper considers at a fine-grained level what models are selected during regularized evolution and how the model selection and accuracy improvement are related from a probabilistic perspective. ## IV Characterization Methodology First, we will discuss commonalities in the experimental setup across all of our research questions. The goal of these research questions is to start from the structure of the models generated by the search algorithm and then proceed to how these models evolve over the course of the search, to facilitate designing systems or changes that could affect the performance of the search. In Section V, we will then introduce each research question in turn with the specific motivations for each question, analysis from the structure of regularized evolution that informs the answer to the question, empirical observations, and finally takeaways for each question. ### _How to Choose the Search Spaces_ As stated in the introduction, NAS is very expensive, and we are limited in the number of complete NASs that we can perform. To complement the limited ability to perform empirical observations, we will carefully study the structure of the algorithm to inform what we expect to generalize to other NAS traces given sufficient resources and time to run. Our evaluation includes two benchmarks - one from a well-established NAS benchmark (Nasbench201 on the CIFAR-10 dataset) [13], and one from a real-world scientific application (Candle-ATTN) [1, 3].
Given our limited ability to conduct exhaustive evaluations, we choose our two spaces to complement each other. Nasbench201 with CIFAR-10 has only \(\approx 4.8\times 10^{5}\) models, but its developers used nearly 100 TPU1 years to exhaustively evaluate its search space and compiled the results in a queriable database, allowing us to more rapidly explore its search space and consider longer searches. CANDLE-ATTN has a much larger search space, but because we do not have a results database for the \(\approx 3.1\times 10^{57}\) models in the Candle-ATTN search space, we must utilize HPC resources in parallel in order to collect even small but meaningful results in a reasonable amount of time. Footnote 1: Google's proprietary AI accelerator card. CIFAR-10 is a well-known benchmark image classification task with 40k training images, 10k validation images, and 10 image classes. The model archetype for CIFAR-10 is a convolutional network ending with a dense classifier. CANDLE-ATTN is a cancer drug interaction model that attempts the binary classification of whether a particular drug will interact with a particular type of tumor. Candle itself is a larger suite of benchmark problems, with ATTN representing one of the larger models both in terms of its search space and the size of the models being searched. Its search space is defined in [3]. The model archetype for ATTN is a dense fully connected network with skip connections. Together, these search spaces allow us to consider both large search spaces and a larger number of candidates. ### _Evaluating The Search Spaces_ For each search space, we apply the parallel version of regularized evolution from DeepHyper [11] described in Algorithm 1. In the parallel setting, we make all changes to the population set atomically, so that there are always \(p\) candidates in the population in stage 2 when sampling of the population occurs. In each case, we choose a total number of 1000 candidates, with a population size of 100 and a sample size of 5. We use the quality function defined by the application - validation accuracy in both cases. These settings provide a consistent basis from which to study the algorithm's performance across search spaces while utilizing parallel and distributed HPC resources. ### _Instrumentation of AI Tools and Collection of Traces_ As we conduct the search, we collect traces of the execution. To conduct our experiments, we made the following modifications to DeepHyper: 1) We created a variant of the BaseTrainer class to capture at fine granularity the specific tensors that were considered during the trace process, using the method from [9] to track individual tensors. 2) We implemented and integrated a primitive, greedy form of transfer learning into DeepHyper using the model repository from [9]. 3) We integrated NASLib's CIFAR and Candle's ATTN search spaces into DeepHyper in order to be able to evaluate these two models. 4) We modified DeepHyper's built-in trace capability to produce more detailed traces, taking care not to significantly perturb the timing by avoiding additional I/O operations during the search. These traces contain the timestamp at which a model evaluation begins, the timestamp at which it ends, the worker id that conducted the evaluation, the architecture sequence that specifies which choices were taken for each variable node in the network architecture, and the quality of the model when evaluated. (A minimal sketch of such a trace record is shown below.)
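For concreteness, one trace entry with the fields just listed could be represented as follows (a minimal sketch; the field names are our own, not DeepHyper's actual trace schema):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TraceEntry:
    start_ts: float           # timestamp when the model evaluation begins
    end_ts: float             # timestamp when the model evaluation ends
    worker_id: int            # worker that conducted the evaluation
    arch_sequence: List[int]  # choice taken for each variable node
    quality: float            # validation accuracy of the evaluated model
```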
We will use these traces to analyze the search process in order to better understand the model evolution process and address our research questions. To make this process efficient, we gather the traces in memory on each transfer server and then write out the traces at the end of the search, to avoid perturbing the runtime of the search process. To address our research questions, we will consider different aspects of these traces to better understand the structure of searches. We conduct our experiments on the Polaris machine at ALCF. Polaris has hardware and software as summarized in Table I. This hardware and software are both well suited for NAS and representative of leading supercomputing centers.

\begin{table} \begin{tabular}{l c} \hline \hline Hardware & Description \\ \hline CPU & 32 Core AMD EPYC Zen 3 \\ GPU & 4 Nvidia A100 (40GB) with NVLINK (600GB/s) \\ SSD & 2 TB \\ RAM & 512 GB DDR4 (204.8GB/s) \\ PCIe & 64 GB/s \\ NETWORK & HPE Slingshot 10 \\ \hline \hline \end{tabular} \end{table} TABLE I: Hardware and Software

## V Results Here we will analyze the results of the traces described in Section IV. In each of the following subsections, we describe how we parse the trace files described above and how that informs the answers to each of our key research questions. In each subsection, we will present a _motivation_ for asking this particular research question and motivate our choice of _method_ to answer the question. After that, we will proceed to an _algorithmic analysis_ of the problem, based on the structure of the algorithm and the insights we can obtain without running experiments. We then present and discuss the empirical _observations_ that we obtain and their significance to designing systems to accelerate NAS. Finally, we conclude each subsection with a discussion of the key _takeaways_ from both the algorithmic analysis and observations. ### _How do model architectures evolve structurally over time - do mutations tend to be earlier or later in the model architecture over time?_ #### V-A1 Motivation and Method We begin by studying the structural evolution of models over time. Understanding how the structure of the models evolves over time has implications for the effectiveness of transfer learning in accelerating NAS. For example, when transfer learning is performed, all layers that occur after a mutation must be "invalidated" and re-trained to account for differences in the values of their inputs. Thus, understanding where a mutation tends to occur in the structure of a model informs, in expectation, how often transfers occur over the course of a search process. Tries augmented with the percentage of candidates that include a particular prefix in a search are well suited to answering the question of the overall structural evolution of the models, because they concisely encode the path-dependent nature of the structure of architecture sequences visited over the course of a NAS. In a trie, paths through the graph start on the left and proceed to the right. As a path is traversed, it accumulates the various subsequences that occur over the course of the search. For example, the top node in the rightmost column of Figure 1a represents the sequence "0-1-1-2-1-2", and this sequence represents 1.5% of all of the transfers that occur in the course of the search. However, the shorter prefix "0-1-1" in the same subfigure occurs in 4.9% of all of the searches. (The short sketch after this paragraph illustrates this prefix counting.)
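As an illustration of how such prefix percentages can be computed from the traces, here is a minimal sketch (our own illustration, not the paper's analysis code) that counts how often each prefix occurs across the recorded architecture sequences:

```python
from collections import Counter
from typing import Iterable, List

def prefix_frequencies(sequences: Iterable[List[int]]) -> Counter:
    """Count occurrences of every prefix of every architecture sequence."""
    counts: Counter = Counter()
    for seq in sequences:
        for k in range(1, len(seq) + 1):
            counts[tuple(seq[:k])] += 1
    return counts

# Example: share of candidates whose sequence starts with "0-1-1"
seqs = [[0, 1, 1, 2, 1, 2], [0, 1, 1, 0, 3, 2], [1, 2, 0, 0, 1, 1]]
counts = prefix_frequencies(seqs)
share = counts[(0, 1, 1)] / len(seqs)  # 2 of 3 sequences here
```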
To enable comparisons between tries from the same application, each node is labeled with both the variable node choice ID and the percentage of architecture sequences that contain that particular variable node choice at this point in the sequence. A color code is also used: the redder a node, the more likely it was to occur in an architecture sequence in the search, and the bluer, the less likely it was to appear. For clarity, Figure 1 presents a subset of the variable node choices as a trie for the CIFAR search space when each model is trained for 50 or 150 epochs respectively. Specifically, each subfigure has been pruned to remove variable node choices that appeared in fewer than 1% of the models, allowing us to focus on the most prevalent prefixes, which are the ones that we would like to potentially cache in transfer learning. #### V-A2 Algorithmic Analysis There are two key aspects to note about the regularized evolution algorithm that affect the structure of the model architectures it produces and that have implications for transfer learning and the search process overall: 1) where regularized evolution performs its mutation, and 2) how it performs its mutation after the mutation location is selected. Regularized Evolution mutates a uniformly distributed random layer in the architecture sequence. Because the location is uniformly random, in expectation the architecture sequence is mutated in its middle. This has implications for transfer learning: when transfer learning is performed, all of the layers after a mutation need to be retrained. This limits the number of layers that could potentially be transferred, because in expectation fewer than 50% of the sequence is going to be transferred. The fact that, in expectation, mutations happen in the middle of the architecture sequence is something that can potentially be changed by adjusting the probability distribution used to select where to perform the mutation. More work is needed to explore the impacts of this kind of biasing in favor of transfer learning. Next, Regularized Evolution mutates the randomly selected layer into a random value not equal to the current value. Now, if the process were fully random, one would expect that the empirical probability of observing a particular variable node choice would be equal for each variable node. But the process is not fully random - it is guided during stage 2 by the selection of the highest performing model in the sample. We can exploit this lack of complete randomness in stage 2 to effect better caching in this stage, whereas our options are more limited in stage 1. #### V-A3 Observations First, we can consider the expectation from the theory that mutations will tend to occur in the middle of the architecture sequence. The effects of this are clearly seen in the tries in Figure 1. The tries show that the most common prefixes are dense for the first 3 layers and become much sparser as the trie continues to the right. Next, we consider the choice of mutation for a particular variable node. In the figure, for example, we see that variable node choice 2 occurs 42% of the time - disproportionately often among the 5 possible choices for this variable node. Here again, we see selection effects as the search favors high-performing nodes. If we consider the remaining variable nodes in the architecture sequence, we would observe the same behavior for the remaining slots - certain variable node choices are far more popular than others.
Fig. 1: Trie of the architecture sequences visited over the search when training for 50 and 150 epochs respectively. Architecture sequences appearing in fewer than 1% of the searched candidates are omitted for clarity. The color scale corresponds to the number of transfers that a model was included in. Nodes are annotated with the variable node choice and the percentage of candidates that the prefix occurred in.

What can we observe from the differences between the two training lengths? As we can observe, the variable node choices differ substantially between short training and longer training, even for the same model. For example, in the shorter training, variable node choice 2 dominates the first choice and choice 0 comes in a distant second, while for the longer training, variable node choice 0 dominates and choice 2 comes in a distant second. This is interesting in that some configurations may perform comparatively more poorly with a few iterations of training, suggesting that there are possibly grave consequences for naive few-shot evaluation. Transfer learning may offer a possible solution here to the low-quality performance of few-shot learning. In transfer learning, model weights are transferred from one model candidate to another, frozen while the non-transferred layers are trained, and then later unfrozen and "fine-tuned". This process of transferring weights essentially provides additional epochs of training for a particular set of model weights, as they are incrementally improved over the course of model training; each transfer potentially gives the benefits of smaller numbers of epochs while providing some of the same types of quality improvements. We will consider this aspect further later in our study. #### V-A4 Takeaways Both theory and empirical study show that the random selection of both the location and the specific mutation has an impact on the kinds of models observed during NAS. Future work can exploit this to design effective caching systems for NAS using transfer learning and to bias the layer selection to favor transfer learning. We finally observe that transfer learning may help compensate for the smaller epoch counts and the differences in the search process. ### _How do model evolution patterns occur in a distributed context where information is incomplete?_ #### V-B1 Motivation and Method Next, we zoom in further and consider how the search process occurs across nodes by considering the search patterns of particular model tensors as they are accessed by particular worker nodes in a distributed search powered by transfer learning. Studying what tensors each worker gets can inform the search process in three ways when using transfer learning: 1) when a model tensor is repeated between multiple workers within a given point in time, broadcasting can be used to reduce the bandwidth required to distribute them to the workers; 2) since model architecture sequences are approximately known in advance due to the deterministic nature of pseudo-random number generators, the workers that receive particular candidates can be chosen in such a way as to minimize communication if a particular model tensor is common among them; 3) we can observe the temporal locality of particular model tensors recurring, which can inform worker-local caching on the nodes. We can use a scatter plot showing the worker ID, timestamp, and a few of the most popular tensors to study this aspect.
A scatter plot is useful here because it allows us to see the locality of a particular tensor access as it is repeated both on a single client and when multiple clients access a particular sequence across nodes simultaneously. In Figure 2, on the x-axis we show the timestamp of a model search beginning, and on the y-axis we show the worker ID. In the colors and markers, we show with transparency three symbols that represent the 2nd, 3rd, and 4th most popular model tensors. From this, we can observe particular transfer patterns between nodes and the co-occurrence of particular model tensors on various workers. We omit the most popular model tensor for clarity, as it occurs ubiquitously on all workers throughout the search process.

Fig. 2: Which workers have which model layers transferred to them at which points in time. This figure shows the 2nd-4th most popular layers only, for clarity; the most frequently appearing model layer would cover this entire field.

#### V-B2 Algorithmic Analysis Because regularized evolution is implemented on top of a distributed task pool, we know that accesses for particular tensors will be distributed across the system. We further know, based on the structural distribution we studied in Figure 1, that over the course of the search, access patterns for particular tensors between and within workers will recur, especially in stage 2 of Regularized Evolution. However, even with seeded random number generators, the variation in the times to train and evaluate models will introduce non-determinism into the NAS process, because the different orders of model completion will result in differing populations being sampled to determine which models to evolve. Therefore, we will need to study this aspect empirically to draw conclusions about how sampling occurs between nodes. The degree of non-determinism could potentially be decreased by waiting to assign workers to candidates until a certain number of workers have all completed their tasks. Since candidate generation through mutation is fast relative to candidate evaluation through model training, this should not cause task starvation beyond the delay introduced by waiting for a group of tasks to finish. This trades increased determinism and information for the additional wait caused by the delay. We can upper-bound this trade-off by using the formula for the average \(k^{th}\) order statistic drawn from a normal distribution [14], with an appropriate scaling constant [15]. We can upper bound the size of this delay on average by \(E(s,w)-E(w,w)\), where \(E(r,w)=\Phi^{-1}(\frac{r-\pi/8}{w-\pi/4+1})\), \(\Phi^{-1}\) is the inverse cumulative distribution function of the normal distribution of the training times, \(s\) is the number of workers that we wait to complete before assigning new candidates to workers, and \(w\) is the total number of workers, such that \(s\leq w\). We know this is an upper bound because it models the worst case, where all workers begin at the same time. On subsequent iterations, there would be workers that did not finish when we last assigned candidates to workers but that would complete sooner in the next decision relative to other workers that started at the beginning of the new quantum. (A small numerical sketch of this bound follows below.)
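To make the bound concrete, the following sketch evaluates the order-statistic expression from the text with SciPy (our own illustration; the scaling by the standard deviation of the training-time distribution is left as a parameter, and the difference is computed exactly as stated above):

```python
from math import pi
from scipy.stats import norm

def expected_order_stat(r: int, w: int) -> float:
    """Approximate expected r-th of w standard-normal order statistics:
    E(r, w) = Phi^{-1}((r - pi/8) / (w - pi/4 + 1))."""
    return norm.ppf((r - pi / 8) / (w - pi / 4 + 1))

def delay_bound(s: int, w: int, sigma: float = 1.0) -> float:
    """Average-delay bound E(s, w) - E(w, w) as stated in the text,
    scaled by sigma, the standard deviation of the training times."""
    return sigma * (expected_order_stat(s, w) - expected_order_stat(w, w))

# e.g., waiting for 8 of 32 workers to finish before assigning new candidates
print(delay_bound(8, 32))
```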
#### V-B3 Observations We can observe two major trends from Figure 2. First, there are clusters of particular model layers across several workers that are temporally co-located. For example, workers 11-13, 20, and 23-25 access tensor 3 at nearly the same time. This temporal clustering could inform how information about these layers is broadcast - for example, tree-based broadcasting, or job co-location to place related jobs on nearby nodes where they can more easily share information. Additionally, we can observe that a small scheduling quantum could be introduced with minimal effect on the scheduling time, due to the tight vertical clustering seen between workers. Second, we can observe that a particular layer can be frequently accessed on a particular worker. For example, worker 19 accesses tensors 1, 2, and 3 in repeated succession. We can further observe that several workers transfer similar groups of tensors throughout the search, as can be seen from the overlapping of the symbols in the scatter plot. This motivates the use of local caching on the workers of the most popular tensors, and potentially even grouping of commonly co-accessed tensors to perform more efficient bulk data transfers. These two findings confirm our intuition that there will be locality of tensor accesses both between workers and within a worker. #### V-B4 Takeaways There are temporal localities of a particular tensor both on a particular worker and between multiple workers. The former motivates the use of worker-local caching of model layers, and the latter motivates the use of broadcasting and worker-to-task mappings that group similar workers together in the network. We can model the upper bound of the delay introduced by the scheduling quantum required to do this after the initial dispatch of the workers using a formula of the quantum size and the number of workers. ### _What can be known about model quality during NAS over time?_ #### V-C1 Motivation and Method In some senses, this is both the foundational and the most well-known aspect of NAS. It is foundational in that it motivates the use of NAS despite its high computational cost - higher quality models are worth some effort to develop. It is well known in that many studies of NAS consider the quality of the models over time [5]. We include a brief treatment of quality during NAS here to motivate the broader study, but also to point out a key aspect that is hidden by the common formats of figures that describe the quality of a NAS and to highlight a key aspect of search results that will be important as we consider later research questions. To study model quality over time, we plotted a scatter plot of the quality of the model architectures against the end time of the completion of the search in Figure 3. We also include a line that shows the cumulative maximum accuracy of the models. Together, this combination clearly shows the top performers but also does not hide the presence of low performers throughout the search. A band showing the performance of the top k models would have served the same purpose as displaying just the cumulative max, but we can make our point with just the top model performance. This particular plot shows the cumulative accuracy for ATTN, but a similar plot is easily constructed for CIFAR-10. We have trimmed the y-axis to make it easier to differentiate high-quality results from each other. If we extended the y-axis to the minimum quality observed, there would be models with a validation score of less than 0.3 at all times during the search process. This indicates that there are often missteps in the search process which result in low-quality models.
Fig. 3: Cumulative and observed validation accuracy for Candle-ATTN over the course of a search with 1000 candidates without transfer learning. The y-axis is trimmed to focus on high-quality candidates.

This fact is often obscured by the formatting of figures in NAS papers, which either show only the max (or "top-1") accuracy, as it is called in other papers, or show bands for the median, \(90^{th}\), \(95^{th}\), or \(99^{th}\) percentile, or the interquartile range. While it is true that high performers are important, and they are rightly the focus of prior papers, knowing of the presence of low performers is important for the design of caching systems - for example, when performing transfer learning, where caching them is wasted effort. #### V-C2 Algorithmic Analysis First, let us consider the case of low performers. Suppose that a new model has just been evaluated and it has the worst accuracy in the current population; would it ever be transferred from? Not in regularized evolution. As such, one should skip the I/O to store this model if it is only going to be used for transfer learning. While this specific case of the worst performer is obvious, what about the next \(s-2\) worst models? They too will never be selected for transfer. For higher quality models, we can calculate the probability that a model will be selected, which is related to the hypergeometric distribution with a probability density function of \(p(X=k|X\sim H(N,K,n))=\frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}\). From this we can derive the probability that a model will be the basis for a transfer in a given timestep: if \(\lambda\) is the rank of the just-evaluated model, so that there are \(P-\lambda\) models of higher quality in a population of size \(P\), this probability is upper bounded by \(p(X=0|X\sim H(P,P-\lambda,s))\), allowing us to estimate the probability of transfer2. If this probability is suitably small, caching could be skipped. Footnote 2: the full probability would account for the fact that the model with rank \(\lambda\) was sampled, not merely that no models were sampled with higher quality than it, but this simpler form requires less computation, is conceptually simpler, and is likely good enough for a fast caching decision. Determining the exact probability requires careful use of both Bayes' theorem and the hypergeometric probability distribution. (A short sketch of this computation is given below.)
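As an illustration, the bound above can be evaluated directly with SciPy's hypergeometric distribution (a sketch under our reading of the text, where \(\lambda\) is the quality rank counted from worst in a population of size \(P\) and \(s\) is the sample size):

```python
from scipy.stats import hypergeom

def transfer_prob_bound(lam: int, P: int, s: int) -> float:
    """Upper bound on the probability that the model ranked `lam` from
    worst becomes the transfer donor in one step: the probability that
    none of the P - lam higher-quality models land in a sample of size s,
    i.e. p(X = 0) for X ~ H(P, P - lam, s)."""
    return hypergeom.pmf(0, P, P - lam, s)

# With the paper's settings (population 100, sample 5), a model ranked
# 50th from worst has at most ~2.8% probability of acting as a donor:
print(transfer_prob_bound(50, 100, 5))
```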
Next, let us consider the case of high performers. Improvements in maximum quality are often stepwise. This is intuitive: suppose we knew the quality of the models in the search space; each time we observe a new maximum performance, there are fewer and fewer models remaining in the search space that are both improvements over the current best-observed model and unobserved themselves. Additionally, we expect that over time the magnitude of the performance increases will also tend to decrease. This too is intuitive: if the distribution of model performance is approximately normally distributed, as it is in [10], the probability of getting a result with a large increase in quality is subsumed by that of getting a mere incremental improvement in quality. The implication of this for optimizing NAS is that there are diminishing returns the longer the search proceeds. #### V-C3 Observations Now let us turn our attention to the actual results from Candle-ATTN by examining the low- and high-quality performers. While the lowest performers are omitted from Figure 3, we do in fact have low performers throughout the NAS process. As stated in our analysis, these low performers are unlikely to ever be transferred, and this is confirmed in our traces. Next, we turn our attention to high performers. In this search, we observe improvements in the max quality around timestep 50, and again around 175, 250, and 375. In this case, Stage 1 of Algorithm 1 completes near timestep 100. We can observe that there are consistent improvements in the quality of results found in the search in stage 2. We further observe that as the search continues, the size of the improvement tends to decrease. We will explore this aspect of stepwise improvement further in subsection V-D. #### V-C4 Takeaways We find key insights with respect to both low- and high-quality performers that influence the design of transfer learning for NAS. For low performers, we can upper bound the probability that a model will be transferred in the subsequent search process and thus define a threshold for model quality below which it would likely never be transferred, allowing us to skip storing it. Additionally, for high performers, there are diminishing returns to network architecture search. ### _When will a model architecture become popular and possibly subsequently become less popular and relevant?_ #### V-D1 Motivation and Method In this section, we combine aspects of our previous analysis to consider both the structure of transfers made and how that evolves as the population changes over time. Understanding how model structure evolves with the population has clear implications for the design of caching systems. Beyond what we could observe from the structural analysis of Section V-A or Section V-B, looking at how the population evolves over time gives us greater insight into the exact prefixes of popular models that will occur. Beyond what we could observe from the temporal analysis of Section V-C, we can observe how changes in quality influence the choices of particular transfers of models, which is ultimately the most important aspect of designing an efficient system for NAS. To consider this aspect, in Figure 4 we consider histograms of the architecture sequences from sliding windows of the traces over time, coinciding with the population. Histograms are useful in showing the distribution of prefixes; by forcing the sliding window to coincide with the population, we can see how the population's distribution of prefixes evolves over the NAS. We could further improve upon this by utilizing animations, which cannot be included due to the limitations of PDF, to show the continuously evolving architectures; we will make these available as supplementary material upon publication of this work3. Footnote 3: https://www.zenodo.com/ Since we cannot use full animations, we look at 3 distinct snapshots in time. On each x-axis, we display a prefix id for each unique prefix of the architecture sequence of a given length (in this case 3) among the models evaluated in the last population-size number of models (in this case 100). The prefix ids are common within a row of figures (e.g., id=50 on row 1 is the same prefix in all three columns) but are distinct between rows (e.g., id=50 on row 1 refers to a different prefix than on row 2); within a row, prefix ids are sorted based on the lexicographical ordering of their variable node choices and are effectively arbitrary. While it is difficult to tell, there are slightly different numbers of prefix ids in each row.
By making the prefix IDs consistent across the timesteps, we can see how different prefixes become more or less popular over time. On each y-axis, we display the frequency with which that prefix occurred in the last population-size number of models. Note that the y-axis ranges differ between subplots. The columns represent a particular timestep of the search process and, because each search examines 1000 model candidates, are the same across each model space considered, so one can see the evolution of the search process over time. Lastly, the first two rows are from CANDLE-ATTN with two different seeds, and the last represents a run from CIFAR-10. #### V-D2 Algorithmic Analysis Let us first consider the maximum number of duplicates we expect to observe during stage 1 of Algorithm 1. During stage 1, there is no direction to the search process. A generalization of the birthday paradox [16] states that, with probability \(p\), some prefix out of a total number of prefixes \(c\) will occur at least \(k\) times when the sample size is at least \(\left[c^{k-1}k!\log_{c}\left(\frac{1}{1-p}\right)\right]^{1/k}\). However, we can also observe duplicates because of the state space itself - not all possible prefixes are valid, because some would result in an invalid model, and thus prefixes of a given length do not have uniform probability. Next, we can relate the time between when an improvement in the maximum quality is observed and when we expect to see changes in the distributions of model prefixes. Models are selected for mutation whenever they are 1) first selected randomly from the population and then 2) subsequently the highest performing model in the sampled subset. Once a model sets a new cumulative maximum accuracy, it will dominate any set of models it is sampled in, making it a likely source of a prefix. We can compute the expected number of samples until this new maximum is selected and then mutated using the expected value of a geometric distribution. This delay, from when an improvement is found to when the improvement is itself improved upon, explains the gaps between the changes in the relative popularity of the model prefixes we observe with the highest quality. (A short worked version of this expectation is given below.)
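To make the geometric-distribution argument concrete, here is a short worked version under the simplifying assumption (ours) that the new best model remains in the population while waiting, using the paper's settings of population size \(P=100\) and sample size \(s=5\):

\[
q = \Pr[\text{new best is in the sample}] = 1 - \frac{\binom{P-1}{s}}{\binom{P}{s}} = \frac{s}{P}, \qquad \mathbb{E}[\text{steps until selected}] = \frac{1}{q} = \frac{P}{s} = 20 .
\]

Since a new cumulative maximum wins any sample it appears in, it is mutated as soon as it is sampled, so we would expect roughly 20 candidate evaluations to pass before the new best first acts as a donor.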
Fig. 4: Evolution of the models over time with three transfer variants. Progressing from left to right shows three slices of time of the search process. Each row shows a different model training variant. The x-axis shows a unique id for a specified model prefix. Prefix ids are consistent across all 9 subplots, allowing comparisons of popular prefixes over time and search methods. The y-axis shows the frequency of a particular model prefix in the last 100 model candidates considered (the current population).

#### V-D3 Observations In stage 1, we observe for each of the three configurations that no model prefix dominates the population. The most commonly occurring prefixes constitute 6 out of the last 100 candidates. This value, while high, is still consistent with the theory regarding the generalized birthday paradox. Beyond this effect, a specific concentration of the most popular prefixes can be attributed to the few cases of model prefixes that are slightly more popular than others due to the invalidity of some configurations. Next, we can consider the next two columns, which are a reasonable distance into stage 2 of Algorithm 1. In these columns, we observe a few models with many more visits than others - several near or exceeding 30 visits. At this level of popularity, we cannot attribute the popularity of a particular prefix id to the generalized birthday paradox or the state space's subtle biases for particular prefix sequences. Instead, we have a strong intuition that the regularized evolution algorithm is heavily selecting common prefixes. We can see a first "tier" of prefixes that appear in 30% or more of the last considered models. This first tier of models, once it becomes popular, tends to dominate the search for the remaining duration and is seldom dethroned. We can also observe that there is a second "tier" of model prefixes that have more than 1 or 2 instances but do not have more than 25 instances. It is more common that these second-"tier" peaks will move over time as other model alternatives drive performance. A good example of this can be seen between Figures 4e and 4f, where two tiers of two peaks appear between prefix id 150 and the dominant peak at 175. However, very few of the popular models coming out of Stage 1 remain so in Stage 2. Lastly, there is a third tier of model prefixes that seldom get more than 3 transfers; these very seldom get multiple instances. Finally, we can look at how things differ across models and between the transfer and no-transfer cases by comparing the rows of the figure. While we cannot draw inferences from the prefix IDs between these models, we can observe that although individual models may have a slightly greater or smaller number of second-tier peaks, and their relative size may fluctuate over the candidates, the same relative structure persists, with one or at most 2 dominant peaks, a smaller number of second-tier peaks, and many entries with only 3 or fewer transfers. #### V-D4 Takeaways What we describe as a three-tier nature of popularity has implications for the design of caching systems when using transfer learning in the context of NAS. Specifically, in addition to not caching models in the lower levels of quality, it is likely not worth widely caching models until they have more than 5 transfers in the last 100. The empirical probability that a model is a donor for mutation is substantially higher given that it has been a donor at least 5 times in the past. Additionally, the gap between when a model is improved and when it begins being used for transfer implies that, in expectation, there is a timing opportunity to prefetch the model to the clients. ## VI Conclusions and Future Work This work developed a methodology and then performed a characterization study of how models are produced and evaluated by regularized evolution in network architecture search. We answered four key research questions regarding the structure of model candidates selected by this algorithm, how evolution patterns change in the context of NAS powered by asynchronous distributed workers with incomplete knowledge of model performance, how model quality evolves over time, and finally how portions of models become popular and subsequently become unpopular. The answers to these questions set a path towards improving the scalability and performance of network architecture search using regularized evolution and other genetic algorithms. There are three clear directions for future work: 1) I/O and caching for transfer learning - our work proposed caching heuristics based on the popularity of model layers.
Future work should evaluate these proposed heuristics in a model repository such as [9] to alleviate I/O bottlenecks from using transfer learning with network architecture search. 2) Improvements to NAS scheduling - our work identified some of the trade-offs along the continuum between batch scheduling and continuous scheduling, which trade delays for improved accuracy and determinism. Future work should evaluate these trade-offs in the context of a complete system, where the increased quality could be more directly compared to the increased runtime. 3) Improvements to genetic search algorithms for NAS - our work identified that using NAS with transfer learning is hampered by the limited expected number of layers transferred, which could be addressed by weighting later nodes in the architecture more heavily for mutation. Future work should evaluate the impact of these trade-offs on model quality. ## Acknowledgments This material is based upon work supported by the U.S. Department of Energy (DOE), Office of Science, Office of Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357.
2309.06503
Leveraging Large Language Models and Weak Supervision for Social Media data annotation: an evaluation using COVID-19 self-reported vaccination tweets
The COVID-19 pandemic has presented significant challenges to the healthcare industry and society as a whole. With the rapid development of COVID-19 vaccines, social media platforms have become a popular medium for discussions on vaccine-related topics. Identifying vaccine-related tweets and analyzing them can provide valuable insights for public health researchers and policymakers. However, manual annotation of a large number of tweets is time-consuming and expensive. In this study, we evaluate the usage of Large Language Models, in this case GPT-4 (March 23 version), and weak supervision, to identify COVID-19 vaccine-related tweets, with the purpose of comparing performance against human annotators. We leveraged a manually curated gold-standard dataset and used GPT-4 to provide labels without any additional fine-tuning or instructing, in a single-shot mode (no additional prompting).
Ramya Tekumalla, Juan M. Banda
2023-09-12T18:18:23Z
http://arxiv.org/abs/2309.06503v1
# Leveraging Large Language Models and Weak Supervision for Social Media data annotation: an evaluation using COVID-19 self-reported vaccination tweets ###### Abstract The COVID-19 pandemic has presented significant challenges to the healthcare industry and society as a whole. With the rapid development of COVID-19 vaccines, social media platforms have become a popular medium for discussions on vaccine-related topics. Identifying vaccine-related tweets and analyzing them can provide valuable insights for public health researchers and policymakers. However, manual annotation of a large number of tweets is time-consuming and expensive. In this study, we evaluate the usage of Large Language Models, in this case GPT-4 (March 23 version), and weak supervision, to identify COVID-19 vaccine-related tweets, with the purpose of comparing performance against human annotators. We leveraged a manually curated gold-standard dataset and used GPT-4 to provide labels without any additional fine-tuning or instructing, in a single-shot mode (no additional prompting). Keywords: Large language models, GPT, weak supervision, social media data, Twitter. ## 1 Introduction The widespread adoption of social media platforms has led to an explosion of user-generated content, making them valuable sources of real-time information [1]. Social media platforms have become a valuable resource for studying public health issues [2], including the COVID-19 pandemic. Social media platforms like Twitter have a vast user base, representing diverse demographics and geographic locations. Analyzing vaccination sentiment data from such platforms allows for a more comprehensive understanding of public opinion, as it encompasses a wide range of perspectives. Twitter, in particular, has emerged as a platform where individuals share their personal experiences, including vaccination updates [3]. By analyzing the data, public health officials, policymakers, and researchers can gauge the overall sentiment towards vaccines, identify trends, and make informed decisions to address concerns or misconceptions. Analyzing self-reported vaccination tweets can provide valuable insights into vaccine sentiment, vaccine uptake, and vaccine-related concerns among the general population. However, manually annotating large volumes of social media data is labor-intensive and time-consuming, requiring domain experts to label the data accurately. Weak supervision [4] techniques have emerged as a powerful approach for data annotation, offering significant advantages in terms of scalability [5], cost-effectiveness [6], and flexibility [7]. Traditional methods of data annotation often rely on manual labeling, which can be time-consuming, expensive, and limited in terms of the volume of labeled data that can be produced. In contrast, weak supervision techniques leverage various sources of supervision to automatically generate labeled data, reducing the manual effort required while maintaining reasonable accuracy. Scalability is one of the primary advantages of weak supervision. With the exponential growth of data, manually labeling vast amounts of data becomes impractical and expensive. Weak supervision allows for the rapid annotation of large datasets by leveraging existing resources such as heuristics, rules, or readily available weak labels [8]. These weak signals can be automatically applied to unlabeled data, effectively increasing the amount of labeled data available for training and development of robust machine learning models.
Cost-effectiveness is another key benefit of weak supervision techniques. Manual data annotation often requires skilled human annotators, which can be costly and time-consuming. In contrast, weak supervision reduces the reliance on manual annotation efforts, thus reducing costs. Although weakly supervised labels may not be as accurate as manually annotated labels, they can still provide valuable insights and improve the performance of machine learning models. By combining weakly supervised labels with a smaller amount of manually labeled data, comparable results can be achieved at a fraction of the cost [9]. Additionally, traditional data annotation methods often require significant upfront effort to design annotation schemas, guidelines, and quality control processes. These rigid procedures can be challenging to adapt as new data sources or requirements emerge [10]. In contrast, weak supervision provides a more agile and adaptable approach to data annotation. Weakly supervised labels can be easily generated or modified based on changing needs, enabling rapid iteration and refinement of models in response to evolving data or domain-specific requirements. Large language models (LLMs), such as GPT-3 [11], have revolutionized natural language processing and transformed various applications across multiple domains. These models employ deep learning techniques to generate coherent and contextually relevant text, making them invaluable for tasks like language translation, text summarization, and conversational agents. Their effectiveness is attributed to the vast amount of pre-training data and the ability to capture complex linguistic patterns. This work assesses the effectiveness of Large Language Models (LLMs) (GPT-3.5 and GPT-4 (March 23 version)), in conjunction with weak supervision, for the identification of COVID-19 vaccine-related tweets. The primary objective is to compare the performance of LLMs against human annotators. To achieve this, we utilized an expertly curated gold-standard dataset and employed GPT-3.5 and GPT-4 to generate labels in a single-shot mode, without resorting to additional fine-tuning or explicit instructions. ## 2 Related Works In the past, weak supervision has demonstrated successful results in clinical text classification [12], multi-language sentiment classification [13], generating training sets for phenotype models [14], information retrieval [15], identifying drugs from Twitter [16; 17; 18], classifying different kinds of epidemics [19], natural disasters [20; 21], and several health applications [22; 23; 24]. In this aspect, LLMs have been effectively utilized to leverage weak supervision techniques, automating data annotation processes by generating or modifying labels based on the model's pre-trained knowledge and heuristics. LLMs, such as BERT [25] and GPT [26], have shown impressive performance in various natural language processing tasks, including sentiment analysis, named entity recognition, and text classification. These models can be fine-tuned on domain-specific datasets, enabling them to learn specific patterns and characteristics of the data. By leveraging pre-trained LLMs, researchers can automate or assist in the annotation process, significantly reducing the human effort required for data labeling. The evolution of LLMs has been marked by significant milestones, with BERT acting as a groundbreaking advancement. BERT introduced the concept of pretraining and fine-tuning, revolutionizing the field of NLP.
By pretraining models on large corpora of text data and fine-tuning them on specific downstream tasks, BERT achieved state-of-the-art performance on a wide range of NLP benchmarks. BERT served as a foundation and inspiration for the development of numerous pre-trained models like GPT [26], ALBERT [27], RoBERTa [28], DistilBERT [29], ELECTRA [30], XLNet [31], T5 [32], MegatronLM [33], BART [34], and CamemBERT [35]. These models have leveraged the success of BERT's architecture and training techniques and improved upon them by tackling various limitations related to performance, optimization, and the reduction of training size. As a result, several domain-specific pre-trained models like Covid-Twitter-BERT [36], BioBERT [37], SciBERT [38], ClinicalBERT [39], LegalBERT [40], and FinBERT [41; 42; 43] emerged. Building upon the success of BERT, subsequent models such as GPT-2 [44] and GPT-3 [11] further pushed the boundaries of LLM capabilities. GPT-2 demonstrated impressive language generation abilities, while GPT-3 introduced even larger model sizes and showcased the potential for diverse applications. GPT-3.5 is a transitional model, which further refines the capabilities of GPT-3 and is known for nuanced understanding and contextual response generation. GPT-4 introduces a major leap, with significant improvements in model size, training data, and comprehension abilities. GPT-4 is designed to better handle ambiguities and complexities in natural language, generating more coherent, relevant, and detailed responses. In the area of data labeling, LLMs have emerged as a promising solution to address these challenges by automating or assisting in the data annotation process. With their language understanding capabilities, LLMs can be employed to generate annotations or suggest labels for a given input, a technique known as active learning [45]. This approach allows human annotators to focus their efforts on more challenging or uncertain instances, thereby improving the efficiency and quality of the annotation process. Previous research has demonstrated that 35-40% of crowd workers widely use LLMs for text-related data annotation tasks [46]. In a study conducted by Gilardi et al., ChatGPT outperformed crowd workers for text annotation tasks [47]. To improve the precision of ChatGPT, as hallucination is one of the limitations of LLMs, He et al. designed a two-step approach that explains why a text was labeled a certain way [48]. LLMs have demonstrated success in various data annotation tasks [49]: sentiment analysis [50], text categorization, linguistic annotations [51], multi-lingual data annotation [52], and social computing [53]. This work examines the role of LLMs in data annotation, discussing the benefits, limitations, and potential future directions. The advancements in LLMs have not only transformed NLP tasks but have also had a profound impact on human tasks that involve language understanding and generation. LLMs have been integrated into various applications, ranging from chatbots and virtual assistants to language translation and content generation. In human-computer interaction scenarios, LLM-based systems have enabled more natural and effective communication, bridging the gap between machines and humans. However, the increasing reliance on LLMs also raises important ethical and societal considerations, such as potential biases and the responsible deployment of AI technologies [54].
LLMs exhibit non-deterministic behavior, similar to human coders, where identical input can produce varying outputs [55, 56]. Hence, it is crucial to exercise caution when utilizing LLMs to ensure consistent and reliable results. ## 3 Methods ### 3.1 Datasets Used #### 3.1.1 Gold standard dataset We collected a dataset of tweets related to COVID-19 vaccines by filtering for related keywords from one of the largest COVID-19 Twitter datasets available [57]. After filtering, this dataset consists of 2,454 self-reported vaccination confirmation tweets and 19,946 vaccine chatter tweets. The complete dataset was manually curated by two medical students, with an inter-annotator agreement (Cohen's kappa) of 0.82 and a third annotator resolving all conflicts. This dataset was used in the Social Media Mining for Health 2022 shared tasks [58]. With the annotation task consuming over 200 human hours, it is vital to identify additional techniques that can streamline this process. #### 3.1.2 Silver standard dataset While weak supervision has shown promise in the area of social media mining [17, 59], we extracted an additional dataset, not manually curated, which consists of tweets selected by a weak labeling heuristic consisting of expressions like "vaccine causes", "I was vaccinated", "I got Moderna", and similar. This weakly supervised, or 'silver standard', dataset consists of 750,000 tweets randomly sampled (from a larger set of 12 million) with an unidentifiable mixture of both classes. The rationale for doing so is that researchers have shown that data augmentation using weak supervision leads to better and more generalizable models than using only gold-standard data [60, 61]. Note that the 750,000 randomly sampled tweets used in this dataset have no overlap with the gold-standard data. ### 3.2 Additional language models used Besides the previously mentioned GPT-4 and GPT-3.5, we fine-tuned COVID-Twitter-BERT [36] and BERTweet [62] with the GPT-labeled silver-standard data, for downstream tweet classification. Note that the class imbalance in the gold-standard dataset is roughly 1 to 8 between self-reported vaccination tweets and vaccine chatter tweets. A similar imbalance was found in the GPT-labeled silver-standard data, making the fine-tuning and evaluation comparable. ### 3.3 Evaluation set-up #### 3.3.1 LLM performance in annotating data We evaluate the performance of GPT-4 and GPT-3.5 on the labeling of the gold-standard data. This evaluation assesses how well LLMs label data when compared to a set of medical professionals. Since human annotation/labeling is one of the most resource-expensive parts of generating datasets, there is great value in leveraging LLMs to aid or streamline it. Leveraging the OpenAI API for both GPT-3.5 and GPT-4, we used the following prompt: "Categorize the following text: XXXXXXXXXXXX into vaccine self-reports or vaccine chatter". Figure 1 shows a sample output of the GPT-4 prompting on the chat.openai.com website. This evaluation made 22,400 API calls to each of the GPT-4 and GPT-3.5 models. #### 3.3.2 LLM to improve weakly supervised dataset creation In these evaluations, we leverage GPT-4/GPT-3.5 to attempt to 'properly' label the silver-standard data and then fine-tune BERT-like models to classify the gold-standard data. 
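To make the evaluation setup concrete, the sketch below shows how such single-shot labeling calls might look with the legacy (pre-1.0) openai Python client. The prompt wording follows the paper; the temperature setting, response parsing, and example tweets are assumptions, not the paper's exact configuration:

```
import openai

openai.api_key = "YOUR_API_KEY"

def gpt_label(tweet, model="gpt-4"):
    """Single-shot labeling of one tweet, with no fine-tuning or extra instructions."""
    prompt = (f"Categorize the following text: {tweet} "
              "into vaccine self-reports or vaccine chatter")
    resp = openai.ChatCompletion.create(
        model=model,  # or "gpt-3.5-turbo"
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # assumption: near-deterministic output for labeling
    )
    return resp["choices"][0]["message"]["content"].strip()

# Hypothetical usage over a small batch of tweets:
labels = [gpt_label(t) for t in ["I got Moderna yesterday!", "New vaccine study out today"]]
```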
The creation of silver-standard datasets has gained popularity in the field of NLP, with many groups building systems that leverage silver-standard data to enhance their training sets and achieve state-of-the-art results in a variety of NLP shared tasks [58, 63]. Using the same prompt as in the first evaluation, we made a total of 750,000 API calls to each GPT model to label this silver-standard dataset. With these evaluations we aim to answer two questions: a) can GPT-4/GPT-3.5 annotate data with quality similar to a human expert, and b) can we leverage both weak supervision and GPT-4/GPT-3.5 to quickly and scalably annotate large amounts of data with near-expert-level performance? We call these datasets 'electrum datasets': a mixture of gold- and silver-standard data. ## 4 Results Before introducing the actual results of our analysis, we present a cost analysis of running the data annotation tasks leveraging the GPT models and other traditional sources. We sent a total of 1,544,800 API calls, with a total cost of $2,743.40 USD. While this price might seem high, note that we annotated a total of 1,544,800 tweets, which would be time- and cost-prohibitive to do by hiring humans and paying them a fair wage. Even using a service like Amazon SageMaker Ground Truth would cost around $52,896.00 USD for the same task. Leveraging Amazon Mechanical Turk would cost $37,075.20 USD for the same number of text classification tasks [64]. There is clear value in evaluating whether we can leverage such a resource for data annotation; this would particularly help resource-constrained researchers who cannot afford to pay expert annotators. The second aspect is scale: while ~1.5 million API calls complete fairly quickly, to our knowledge nobody has manually annotated a dataset this large. Figure 1: Sample GPT-4 prompts to evaluate the created datasets. #### 4.1.1 Results for LLM performance in annotating data In Table 1 we showcase the annotation performance of both GPT-4 and GPT-3.5. It is not surprising that GPT-4 outperformed GPT-3.5 by nearly 10% for the self-reported vaccination tweets category (the more interesting one), and marginally for vaccine chatter. While vaccine chatter is more easily identified (nearly 90% for both models), GPT-3.5's 71.11% performance on the self-reported class and GPT-4's 80.81% are promising numbers. However, once larger amounts of data are annotated this way, a considerable amount of noise would be added. These results are still promising, as no additional prompting or fine-tuning was performed; the zero-shot results are solid. We examine the inter-annotator agreement between the two GPT models using the Cohen's kappa coefficient [65] and compare it with that of the human annotators. We evaluate this to get insights into how much the correctly labeled tweets diverge between models. The inter-annotator agreement between the GPT models was 0.79 (p-value \(<\) 0.0001), which is considered substantial [66]. In comparison, the human inter-annotator agreement (Cohen's kappa) was 0.82, with a p-value \(<\) 0.0001, which is considered near-perfect agreement. Objectively, the difference of 0.03 is small; however, it does show that humans agree slightly better than the GPT models. Note that our human annotators worked independently and did not know or communicate with each other. 
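The agreement statistics reported here can be reproduced with standard tooling; a minimal sketch with scikit-learn follows (the label vectors are hypothetical):

```
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-tweet labels from two annotators
# (0 = self-reported vaccination, 1 = vaccine chatter).
gpt4_labels = [0, 1, 1, 0, 1, 1, 0, 1]
gpt35_labels = [0, 1, 0, 0, 1, 1, 1, 1]

kappa = cohen_kappa_score(gpt4_labels, gpt35_labels)
print(f"Cohen's kappa: {kappa:.2f}")
# For agreement across more than two annotators, Fleiss' kappa is
# available in statsmodels (statsmodels.stats.inter_rater.fleiss_kappa).
```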
#### 4.1.2 Results for LLM to improve weakly supervised dataset creation In the second evaluation, GPT-4 labeled 68,561 tweets as vaccine self-reports and 681,439 tweets as vaccine chatter. GPT-3.5 labeled 66,288 tweets as vaccine self-reports and 683,712 as vaccine chatter. While it might seem that GPT-4 labels more tweets as self-reports, we cannot be sure they are correctly labeled, as they have not been annotated by a human. For this reason, we make no claims about accuracy here; the idea behind this exercise is to feed this data into the fine-tuning step for the previously identified BERT-like models. After fine-tuning COVID-Twitter-BERT and BERTweet, Table 2 shows the correct tweet labeling results achieved. It is very interesting to see that a fine-tuned COVID-Twitter-BERT performs marginally better than GPT-4 (and GPT-3.5) at labeling both tweet classes. While the improvement is marginal, it goes to show that a properly fine-tuned model does outperform a more complex model, at least in this scenario. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Label** & **GPT-4** & **\%** & **GPT-3.5** & **\%** \\ \hline Self-reported vaccination & 1,983 & 80.81\% & 1,745 & 71.11\% \\ \hline Vaccine chatter & 18,541 & 92.96\% & 17,842 & 89.45\% \\ \hline \end{tabular} \end{table} Table 1: Correct tweet labeling results for GPT models. Another interesting finding is that BERTweet performs slightly worse than GPT-4, but better than GPT-3.5. This is most likely due to the training data for BERTweet not being focused on COVID-related tweets. To assess the actual labeling agreement between our top two models (GPT-4 and COVID-Twitter-BERT), we measured the Cohen's kappa score and were surprised to learn that it was 0.85, with a p-value \(<\) 0.0001. This means that both models have a high level of agreement on which tweets belong to which class, even more so than the humans. Additionally, we calculated the Fleiss' kappa statistic [67] across all annotators, obtaining a score of 0.76 with a p-value \(<\) 0.0001. This shows that both the models and the humans mostly agree on what class the tweets should be assigned. ## 5 Conclusion In conclusion, our study has several important findings: * GPT models perform fairly well, in a zero-shot setting, at the task of properly labeling social media data, tweets in this case. However, at larger scales the number of incorrect classifications might start becoming problematic, particularly depending on the downstream task that said data will be used for. * When leveraging GPT models alongside weak supervision techniques to identify 'silver-standard' data, we can use data augmentation with higher confidence. These resulting 'electrum datasets' could be leveraged for further fine-tuning with potentially considerably less noise than using weak supervision alone. * Fine-tuned BERT models are still not obsolete, as we showed them outperforming GPT-4 for labeling social media data, self-reported vaccine tweets in this case. While this comparison might be unfair, the point we make is that combining approaches leads to better results. * Lastly, we show with our cost analysis that it is very cost-effective to label data using GPT models, and that the resulting data is usable for downstream tasks. While we would continue to use human annotators to label data for our NER tasks, we can consider labeling less data to obtain equally or better performing systems in downstream tasks. 
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Label** & **COVID-Twitter-BERT** & **\%** & **BERTweet** & **\%** \\ \hline Self-reported vaccination & 2,045 & 83.33\% & 1,897 & 77.30\% \\ \hline Vaccine chatter & 19,012 & 95.32\% & 18,457 & 92.53\% \\ \hline \end{tabular} \end{table} Table 2: Correct tweet labeling results for BERT models. While we show that GPT models perform well, this work does not advocate replacing human-labeled data with GPT-annotated data. Our argument is that leveraging multiple approaches together, with fine-tuning, leads to potentially better and more generalizable results. The limitations of our work are clear: we only used one task (self-reported vaccine tweet labeling), we only fine-tuned two different BERT models, and we did not evaluate how large our 'electrum dataset' should be to fine-tune a model to solid performance. All these are future research directions that would greatly inform the community.
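As a companion to the fine-tuning setup described in Section 3.2, the sketch below outlines how a BERT-like model could be fine-tuned on GPT-labeled silver-standard data with the Hugging Face transformers library. The checkpoint names are the public COVID-Twitter-BERT and BERTweet releases; the toy data, column names, and hyperparameters are assumptions, not the paper's exact configuration:

```
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Public checkpoint for COVID-Twitter-BERT (v2); use "vinai/bertweet-base" for BERTweet.
ckpt = "digitalepidemiologylab/covid-twitter-bert-v2"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)

# Hypothetical GPT-labeled silver-standard examples (0 = self-report, 1 = chatter).
data = Dataset.from_dict({
    "text": ["I got Moderna yesterday!", "New vaccine study out today"],
    "label": [0, 1],
})
data = data.map(lambda b: tokenizer(b["text"], truncation=True,
                                    padding="max_length", max_length=128),
                batched=True)

args = TrainingArguments(output_dir="ct-bert-silver",
                         per_device_train_batch_size=32,
                         num_train_epochs=3)  # assumed hyperparameters
Trainer(model=model, args=args, train_dataset=data).train()
```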
2309.00172
Detecting Evidence of Organization in groups by Trajectories
Effective detection of organizations is essential for fighting crime and maintaining public safety, especially considering the limited human resources and tools to deal with each group that exhibits co-movement patterns. This paper focuses on solving the Network Structure Inference (NSI) challenge. Thus, we introduce two new approaches to detect network structure inferences based on agent trajectories. The first approach is based on the evaluation of graph entropy, while the second considers the quality of clustering indices. To evaluate the effectiveness of the new approaches, we conducted experiments using four scenario simulations based on the animal kingdom, available on the NetLogo platform: Ants, Wolf Sheep Predation, Flocking, and Ant Adaptation. Furthermore, we compare the results obtained with those of an approach previously proposed in the literature, applying all methods to simulations of the NetLogo platform. The results demonstrate that our new detection approaches can more clearly identify the inferences of organizations or networks in the simulated scenarios.
T. F. Silva, J. E. B. Maia
2023-08-31T23:57:02Z
http://arxiv.org/abs/2309.00172v1
# Detecting Evidence of Organization in groups by Trajectories ###### Abstract Effective detection of organizations is essential for fighting crime and maintaining public safety, especially considering the limited human resources and tools to deal with each group that exhibits co-movement patterns. This paper focuses on solving the Network Structure Inference (NSI) challenge. Thus, we introduce two new approaches to detect network structure inferences based on agent trajectories. The first approach is based on the evaluation of graph entropy, while the second considers the quality of clustering indices. To evaluate the effectiveness of the new approaches, we conducted experiments using four scenario simulations based on the animal kingdom, available on the NetLogo platform: Ants, Wolf Sheep Predation, Flocking, and Ant Adaptation. Furthermore, we compare the results obtained with those of an approach previously proposed in the literature, applying all methods to simulations of the NetLogo platform. The results demonstrate that our new detection approaches can more clearly identify the inferences of organizations or networks in the simulated scenarios. Network Structure Inference, Graph Entropy, Cluster Quality Index, Multi-Agent ## 1 Introduction A wide range of research has been devoted to detecting anomalous behavior, whether through video analysis or tracking the trajectories of objects [1]. However, the complexity increases considerably when extending this challenge to detecting group anomalous behavior. Identifying abnormal behavior patterns in public spaces, especially when manifested by groups, presents additional challenges. This advanced detection capability has the potential for enhanced monitoring of public safety levels [2]. Scenes obtained from surveillance video are generally of low resolution, suffer from occlusion, and have a limited field of observation [2]. The last few decades have seen steady growth in location-tracking devices (e.g., vehicle navigation systems and smartphones). This has generated a massive amount of trajectory data for co-motion pattern mining [3]. A co-motion pattern indicates a set of moving objects traveling together over some time [3]. Some examples of representation of this pattern include flocks [4], convoys [5], swarms [6], groups [7], platoons [8], and meetings [9]. In analyzing movement and behavior patterns, networks are a natural choice of representation, as they can highlight connections between agents, locations, events, or any entity of interest. Networks, composed of nodes and edges that model the relationships between those nodes, offer an effective way to capture the underlying structure and interconnections in data [10]. Inferred networks, in particular, form a class whose definitions of nodes and/or edges are derived from the data without the need for predefined relationships or a priori knowledge about the network structure [11]. This allows previously unnoticed or hidden information to emerge from data analysis, providing new perspectives and insights. This approach is particularly valuable in situations where the interactions between elements are not well understood or when the data is heterogeneous and cannot be easily translated into a traditional network format. 
In the context of surveillance in public spaces, where the accurate identification of individuals often takes place through tracking devices rather than traditional security cameras, the trajectory-based Network Structure Inference approach emerges as a promising strategy for the surveillance of public spaces and the detection of anomalous co-movement patterns in groups. Consequently, the present study introduces two strategies to identify evidence of organizational (network) structure. To evaluate the effectiveness of these approaches, we conducted tests on four simulations of organizational dynamics in the animal kingdom, which were made available through the NetLogo platform [12]. These simulations encompass several combinations of values for the number of agents and scenario dimensions, providing a comprehensive assessment of the performance of our approaches. The main contributions of this work are summarized as follows: * Two new trajectory-based approaches are proposed. One generates clusters of objects with co-motion, then evaluates a cluster quality index to identify simulations that show evidence of organization (a high quality index). The other uses graph entropy to detect behavioral anomalies; * To evaluate the proposed approaches, we used simulations from the animal kingdom, which are readily accessible in the NetLogo library. These simulations cover scenarios with agents exhibiting organized and disorganized patterns, providing a solid basis for evaluating our methodology. In many contexts, the absence of robust datasets for evaluation is a recurrent issue. However, multi-agent simulations are crucial in filling this gap, allowing a reliable and comprehensive evaluation of proposed approaches. This paper is organized into five more sections. Section II presents the related work. Section III describes essential concepts used in this work, such as the DBSCAN clustering algorithm, the Silhouette Coefficient, Network Structure Inference, and entropy. In Section IV, the scenarios, the organizations, and our approaches are described. Section V shows our results compared with the literature. Finally, Section VI concludes the paper with final observations and future research. ## 2 Related Work [13] and [14] introduced the problem of classifying types of organizations in multi-agent systems in a scenario where a group of target mobile agents is continuously monitored by a smaller group of mobile observer agents in the CTO problem [15]. The approach by [13] considers that the group of target agents can be organized according to eight paradigms (hierarchy, holarchy, team, coalition, congregation, society, federation and matrix organizations), while the approach by [14] considers four paradigms (hierarchy, holarchy, teams and coalition). The approach proposed by [13] consists of collecting the exchange of messages from all agents of the simulation, which is shared with the two rival groups, and, through seven supervised learning classifiers, carrying out the classification of the detected organizational paradigms. The approach proposed by [14] collects images of the simulation scenario at each time step and carries out the classification of organizational paradigms through MobileNetV2. Both approaches achieved satisfactory results in the classification of each organizational paradigm. 
However, the two scenarios used in these approaches are unrealistic: exchanging messages between agents from different rival groups and identifying the type of organization using only the scenario images. [10] presented a generic approach to entropy-based analysis, which uses the combined and automated analysis of the short-term and long-term behavior of entropy values over time to characterize and examine the self-organizing behavior of complex systems. The approach consists of obtaining the x and y coordinates of each evaluated simulation. Using the values of these discretized parameters, a histogram for each parameter is created [10]. From the probability \(p_{i}\), which is the probability of a discretized value in the calculated histogram, entropy is calculated. Finally, for the automated detection of self-organization, the entropy values were calculated using filters, and the time derivative of the entropy values was post-processed as input for the analysis of the short-term and long-term behavior of the systems. [10] used a small window size (25 simulation time steps) and a larger window size (\(10\%\) of all simulation steps) for the analysis of long-term behavior in the evaluation. In addition, two simulation models were used. The first is the chicken simulation model, where negative self-organized behavior occurs if a chicken is injured and the other chickens hunt and surround the injured chicken, trying to peck it to death. The second is the pollination model, which simulates the movements of bees as they collect and spread pollen from flowers. The results of the evaluated systems indicate that the approach proposed by [10] is adequate, since the combined use of short-term and long-term behavior analysis works for identification. However, this approach has not been evaluated in scenarios with a larger number of agents, larger simulation environments, unbounded environments, other forms of organized systems, etc. ## 3 Background ### DBSCAN Clustering DBSCAN was proposed in 1996 as the first Density-Based Clustering Algorithm (DBCLA) [16]. The key idea is that for each cluster point, the neighborhood of a given radius has to contain at least a minimum number of points, i.e., the density in the neighborhood has to exceed some threshold [16]. Therefore, DBSCAN takes two parameters, \(Eps\) and \(Minpts\). \(Minpts\) defines the minimum number of points used to decide whether a point \(p\) is a core point of a cluster, a border point, or noise in dataset \(D\), while for a given point \(p\), \(Eps\) signifies the radius of its surrounding region. The literature denotes the \(Eps\)-neighborhood of \(p\) as \(N_{Eps}(p)\). Let \(D\) denote the dataset; then for any \(p\in D\), its \(Eps\)-neighborhood is given by Equation (1): \[N_{Eps}(p)=\{q\in D\mid dist(p,q)\leq Eps\} \tag{1}\] If the \(N_{Eps}(p)\) of an object \(p\) contains at least a minimum number, \(Minpts\), of objects, then the object \(p\) is called a core point. If \(N_{Eps}(p)\) contains fewer than \(Minpts\) objects but contains some core point, object \(p\) is called a border point. Otherwise, the point is considered noise. Two core points \(p\) and \(q\) have the same cluster membership if they are density-reachable with respect to \(Eps\) and \(Minpts\) in a set \(D\), i.e., if there exists a chain of objects \(p_{1},\ldots,p_{n}\), with \(p_{1}=q\) and \(p_{n}=p\), such that \(p_{i+1}\) is directly density-reachable from \(p_{i}\) with respect to \(Eps\) and \(Minpts\), for \(1\leq i<n\), \(p_{i}\in D\) [16]. 
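Algorithm 1 below presents the full pseudocode; in practice, the scikit-learn implementation can be called directly, and the resulting partition can be scored with the Silhouette Coefficient introduced in the next subsection. The following is a minimal sketch (the toy points and parameter values are illustrative; Table 1 lists the values used in our experiments):

```
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

# Toy 2-D points: two dense groups plus one isolated point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
              [9.0, 0.0]])

# eps plays the role of Eps and min_samples the role of Minpts above.
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(X)
print(labels)                       # e.g. [0 0 0 1 1 1 -1]; -1 marks noise
print(silhouette_score(X, labels))  # partition quality in [-1, 1]
```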
Algorithm 1 presents the pseudocode of the DBSCAN algorithm. ``` 0:\(X,Eps,Minpts\) 1: Mark all points \(x\in X\) as noise 2:for\(unvisited\ point\ x\in X\)do 3: Mark \(x\) as visited 4:\(N\gets GETNEIGHBORS(x,Eps)\) 5:if\(|N|\geq Minpts\)then 6: Mark \(x\) as core 7:for\(unvisited\ point\ y\in N\)do 8:\(M\gets GETNEIGHBORS(y,Eps)\) 9:if\(|M|\geq Minpts\)then 10: Mark \(y\) as core 11: else 12: Mark \(y\) as border 13:endif 14:endfor 15:endif 16:endfor ``` **Algorithm 1** DBSCAN Algorithm Because each simulation has different numbers of agents, scenarios, and objectives, different numbers of clusters can be formed; we therefore selected the DBSCAN clustering algorithm. In addition, DBSCAN allows focusing on the agents involved in the groups, avoiding noise. ### Silhouette Coefficient There are many algorithms for partitioning a set of objects into clusters, but these clustering methods always result in \(k\) clusters, whatever the data. However, it is necessary to assess whether the partitioning reflects a grouping structure in the data or whether the objects were only partitioned into some artificial groups [17]. The Silhouette Coefficient is therefore a metric used to assess the goodness of a clustering technique. Silhouettes are helpful when the proximities are on a ratio scale and when seeking compact and separated clusters [17]. Constructing silhouettes requires only two things: the partition obtained by applying some clustering technique and the collection of all proximities between objects. The Silhouette Coefficient ranges from \(-1\) to \(1\). For values closer to \(1\), the clusters are well separated and clearly distinguished; for values near \(0\), the clusters are indifferent, i.e., the distance between them is insignificant; finally, for values closer to \(-1\), objects have been assigned to the wrong clusters. Equation (2) presents the formula for the Silhouette Coefficient, where \(a_{i}\) is the average intra-cluster distance, i.e., the average distance between point \(i\) and the other points within its cluster, Equation (3), and \(b_{i}\) is the average inter-cluster distance, i.e., the smallest average distance from point \(i\) to the points of another cluster, Equation (4). \[s_{i}=\frac{b_{i}-a_{i}}{\max(a_{i},b_{i})} \tag{2}\] \[a_{i}=\frac{1}{|C_{i}|-1}\sum_{j\in C_{i},i\neq j}d(i,j) \tag{3}\] \[b_{i}=\min_{k\neq i}\frac{1}{|C_{k}|}\sum_{j\in C_{k}}d(i,j) \tag{4}\] ### Network Structure Inference Throughout history, the exploration of networks has predominantly belonged to the domain of discrete mathematics, more specifically, graph theory. This field began in 1735, with Euler's resolution of the problem of the Königsberg bridges, resulting in the formulation of the concept of the Eulerian graph [18]. As defined by [11], a network \(G=(V,E,A)\) is a set composed of a set \(V\) of \(n\) nodes, where each node is represented by \(v_{i}\in V\), a set \(E\) of \(m\) pairs of nodes, represented by \(e_{ij}\in E\), and a set \(A\) that contains information about node and/or edge attributes. Among these attributes, the specific weight of an edge stands out, represented by \(w_{ij}\), a scalar value that obeys the constraint \(|w_{ij}|\leq 1\), with \(|w_{ij}|=0\) indicating the absence of an edge. The network is classified as unweighted when \(w_{ij}\in\{0,1\}\). This last configuration represents a particular case of the weighted network. Furthermore, node and edge attributes assume a significant role as specific types of features derived from functions that measure local properties of the edge or node, such as, for example, the degree of the node. 
In turn, time-varying networks are conceptualized as sequences that contain snapshots of static networks: \(G=(G_{1},\ldots,G_{k},\ldots,G_{t})\). These dynamic networks include attributes and/or edges that transform over time, creating a multifaceted panorama of the underlying interactions [11]. The network topology inference problem emerges at the intersection of two fundamental models. First, a network model, denoted as \(R(D,\alpha)\to G\), builds the network representation \(G\) from input data \(D\) under a variety of parameters \(\alpha\). These models can be exemplified by parametric statistical approaches, such as exponential random graph models, or by non-parametric and bounded-interaction networks. Then, the problem incorporates a task model, expressed by \(T(G,\beta)\to p_{1},p_{2},\ldots\), which acts on the network \(G\) as input under specific parameters \(\beta\), generating task results (such as predictions "\(p_{i}\)"). These results approximate the optimal hypothetical function \(T^{*}(G)\) of a network task, based, for example, on classification or prediction, considering an error \(e(\cdot)\) [11]. In this structure, it is feasible to carry out various tasks. In the context of network prediction tasks, the predictive model demonstrates abilities to generate predictions for (1) edges, (2) attributes, or (3) the data itself [11]. From a set of validation edges \(E^{*}\), an application instance can be evaluated as in Equation (5): \[\arg\min_{G}e(T(R(D,\alpha),\beta),E^{*}) \tag{5}\] This implies the inference of \(G\) based on network model parameters \(\alpha\) and task model parameters \(\beta\). In this scenario, the adequacy of the network and task models and the selection of a suitable error function are critical to determining the inferred network's performance [11]. Network model selection methodologies mitigate some of the biases arising from the offline construction of manually tuned networks by fully exploring various combinations of possible models [11]. This formulation highlights two fundamental challenges in the task of network structure inference. First, the parameter space of \(G\), from \(R(D,\alpha)\), can become substantially vast, and the exploration of these parameters will very possibly not be convex with respect to the performance of the task of interest. Second, models that perform well in this parameter space can result in remarkably distinct network topologies [11]. According to [11], synthesizing and harmonizing these discrepancies can be crucial to understanding the most appropriate network model. The wide variety of plausible tasks and network models generates a scenario that significantly challenges hypothesis generation and evaluation for the researcher; the presence of more credible models requires additional interpretive approaches to understanding the mechanisms underlying the system's behavior [11]. ### Graph Entropy The first researchers to define and explore the concept of graph entropy include [19], [20], [21]. After these pioneering contributions, [22] presented a distinct definition of graph entropy, strongly linked to issues in information theory and coding. This work aimed to solve the problem of evaluating the effectiveness of the ideal encoding of messages coming from an information source, in which the symbols belong to a finite set of vertices \(V\). Another definition, Körner entropy, which first appeared in [23], is based on the well-known stable set problem and is closely related to minimum entropy colorings of graphs [24]. 
Network Entropy is based on the classic "Shannon Entropy" for discrete distributions [25]. [26] proposed a Network Entropy measure based on the probability of a random walker going from node \(i\) to any other node \(j\). According to [27], this probability distribution \(P^{(i)}\) is defined for each node \(i\) as shown in Equation (6), so that \(\sum_{j}p_{i\to j}=1\): \[p_{i\to j}=\left\{\begin{array}{ll}0,&for\ a_{ij}=0\\ \frac{1}{k_{i}},&for\ a_{ij}=1\end{array}\right. \tag{6}\] Based on the probability distribution \(P^{(i)}\), the entropy of each node (with \(\varphi^{(i)}=0\) for isolated nodes) can be defined as in Equation (7): \[\varphi^{(i)}\equiv\varphi[P^{(i)}]=-\sum_{j=1}^{N-1}p_{i\to j}\ln p_{i \to j}=\ln k_{i} \tag{7}\] Next, the normalized entropy of the node is calculated, Equation (8): \[H^{(i)}=\frac{\varphi[P^{(i)}]}{\ln(N-1)}=\frac{\ln k_{i}}{\ln(N-1)} \tag{8}\] Finally, the normalized network entropy is calculated by averaging the normalized node entropy over the entire network, Equation (9) [27]: \[H=\frac{1}{N}\sum_{i=1}^{N}H^{(i)}=\frac{1}{N\ln(N-1)}\sum_{i=1}^{N}\ln k_{i} \tag{9}\] ## 4 Experimental Planning and Approaches ### Scenarios The simulations were performed on the NetLogo [12] platform because this platform provides a library with simulation examples. From this library, we selected four simulations from the animal kingdom that exhibit organization. It was necessary to make some modifications to the simulations. The scenarios and changes are described in the following subsections. #### 4.1.1 Ants In this simulation, there is a colony of ants in which each ant's function is to look for food in the environment and return the food to the anthill. When an ant finds a piece of food, it carries it back to the nest, dropping a chemical as it moves (Figure 1). When other ants "sniff" the chemical, they follow the chemical toward the food. As more ants carry food to the nest, they reinforce the chemical trail. The ant colony generally exploits the food sources in order, starting with the food closest to the nest and finishing with the food most distant from the nest. It is more difficult for the ants to form a stable trail to the more distant food, since the chemical trail has more time to evaporate and diffuse before being reinforced. Once the colony finishes collecting the closest food, the chemical trail to that food naturally disappears, freeing up ants to help collect the other food sources. However, the more distant food sources require a more significant "critical number" of ants to form a stable trail. When the ants "take" all the food available in the scenario to the anthill, the organization is undone, and they walk randomly through the simulation scenario. No single agent or group of agents commands the others. Instead, all the ants work together to reach the objective. Thus, this simulation contains the organizational paradigm of teams. To evaluate organization detection in this scenario, we modified the ants' behavior so that they worked individually. That is, the ants look for food and take it to the anthill, but without informing the others. Thus, we can evaluate our approaches with and without organization. #### 4.1.2 Wolf Sheep Predation This model has two groups of agents: the wolves (black agents) and the sheep (white agents) (Figure 2). This simulation aims to explore the stability of predator-prey ecosystems. In this model, wolves and sheep wander randomly around the landscape while the wolves look for sheep to prey on. 
Each step costs the wolves and sheep energy, and they must eat to replenish their energy. If they run out of energy, they die. To allow the population to continue, each wolf or sheep has a fixed probability of reproducing at each time step. However, in this work, we wanted to evaluate our approach on several organizational paradigms. Knowing how wolves organize themselves to hunt their prey, with an alpha wolf coordinating the attack on a target, we selected this simulation to represent the hierarchical organization. Therefore, some modifications were necessary. In the original scenario, sheep and wolves die if their energy runs out, and the wolves work individually. In our changes, only sheep die, if a wolf eats them. As our approach involves observing the trajectories of agents that may exhibit some organization, in this case, the wolves do not die during the simulation. When the wolves "devour" all the sheep in the scenario, the group is disbanded and the wolves walk randomly. In the hierarchical scenario, we added an alpha wolf agent that selects the prey for the other wolves to help it capture, whereas in the original scenario, all wolves work individually. Thus, we designed a predation scenario with and without wolf organization. Figure 1: Ants Simulation - Teams Figure 2: Wolf Sheep Predation Simulation - Hierarchy #### 4.1.3 Flocking This NetLogo template seeks to mimic flocking birds (Figure 3). The resulting movement can also resemble schools of fish. Flocks spawned in this scenario are not created or led by any leader birds. Instead, each bird follows the same rules, from which flocks arise and break up. Each flock is dynamic. Once together, a flock is not guaranteed to keep all its members. The birds follow three rules: "alignment", "separation", and "cohesion". "Alignment" means that a bird tends to turn to move in the same direction that nearby birds are moving. "Separation" means that a bird will turn to avoid another bird that gets too close. "Cohesion" means that a bird will move towards other nearby birds (unless another bird is too close). When two birds are too close, the "separation" rule overrides the other two, which are deactivated until the minimum separation is achieved. The three rules affect only the bird's heading; each bird always moves forward at the same constant speed. As flocks arise due to a need and disband when the need is fulfilled, the paradigm present in this simulation is the coalition. So, to generate a simulation without this organization to evaluate our approach, we modified the birds to roam randomly around the environment, with no intention of forming flocks to achieve their goals. #### 4.1.4 Ants Adaptation In this simulation, there are two ant colonies (one of red ants and one of black ants) and many flowers are created in the world (Figure 4). Ants are spawned in the ant colony and then wander until they find food, represented as flowers in the model. When they find food, they gain energy by eating nectar. Then they return to the colony while laying down a pheromone. Ants near pheromone trails are attracted to the most potent chemical trail. As the ants exhaust a food source, they once again begin to wander until they find another food source or pheromone trail to follow. When two or more ants of opposing colonies encounter each other, they fight or scare each other away, leaving chemicals that attract more ants. For the winner, this works to protect the food source from competing colonies. 
The ant queen reproduces when the ants in her colony collect enough food, determined by the creation cost set for each colony. Flowers periodically grow around the map, resupplying food. Ants die if they get too old, cannot find food, or sometimes when they lose a fight. Finally, nests die if they have no more ants living in them. Unlike the Ants simulation, ants can die and new ants can be spawned. Therefore, just as in the Wolf Sheep Predation simulation, since our goal is to follow the agents' trajectories to identify patterns of organization, we modified this simulation so that the agents in this scenario are fixed, i.e., they do not die and new ones are not generated. As in the Ants simulation, when there is no more food in the simulation scenario, the organization is undone, and the agents walk randomly in the environment. As the rest remains as in the original scenario, the paradigm of this organization is the congregation, since the colonies are long-lived and not created for a single objective that, once fulfilled, dissolves them. Furthermore, there is no single agent leading the others in each colony. So, to remove this paradigm from this simulation, we performed the same modification as in the Ants scenario: ants look for food and return it to the anthill, but they do not collaborate on the food location; they work individually. Figure 3: Flocking Simulation - Coalition ### Proposed Approaches #### 4.2.1 Adjacency Matrices Initially, our approach obtains the trajectory matrix, \(M_{traj}\), of all agents during the simulation period, \(time\). The design of the algorithm is based on graphs: by collecting \(X\) simulation steps, a trajectory of \(X\) steps is obtained for each agent. A graph is constructed where each trajectory is a vertex; for \(Y\) agents, we have a graph with \(Y\) vertices. Next, the similarity between vertices is evaluated, using both trajectory similarity and the distance between vertices. We use cosine similarity to determine how similar the trajectories are. Distance is also used because two trajectories may be identical in shape yet far from each other, meaning there was no interaction between the agents even though they followed similar paths. The distances between vertices were normalized to a scale from \(0\) to \(1\). Having obtained the \(Ms\) matrices, which are adjacency matrices with the trajectory similarity values at index 0 (\(Ms[0]\)) and the pairwise distances of the agents during the simulation at index 1 (\(Ms[1]\)), we apply the metric described in Equation (10) to these two values to quantify the degree of similarity between the agents. This generates a final \(num\_agents\times num\_agents\) similarity matrix (\(M_{sim}\)), where \(num\_agents\) is the number of agents in the simulation. \[M_{sim}=(1-\mathrm{mod}(Ms[0]))*Ms[1] \tag{10}\] #### 4.2.2 Cluster Quality Indexes The similarity matrix is used as input to the DBSCAN algorithm to generate the clusters. Finally, the Silhouette Coefficient is applied to assess the quality of the partitions in scenarios where there are and where there are no organizations. Summarizing the description made in the previous paragraphs, Algorithm 2 shows the pseudocode of our algorithm. The input parameters are the trajectory matrix, the simulation time, the window size, and the number of agents in the simulation. For each time step, the trajectory similarity of the agents is calculated over a period (the window size) based on the similarity of the trajectories and the distance between the agents. 
After that, clusters of similar agents are generated in the given period of time, and the quality of the clusters is evaluated. ``` 0:\(M_{traj},time,size\_window,num\_agents\) 1:functionCalculateSimilarity(\(start,size\_window,M_{traj},num\_agents\)) 2:for\(k\in\{1,...,num\_agents\}\)do 3:\(windowing[0]=M_{traj}[start,start+size\_window,k,0]\) 4:\(windowing[1]=M_{traj}[start,start+size\_window,k,1]\) 5:\(M_{window}=windowing\) 6:endfor 7:\(Ms[0]\gets cosine\_similarity(M_{window})\) 8:\(Ms[1]\gets pairwise\_distances(M_{window})\) 9:\(Ms[1]\gets normalize(Ms[1])\)return\(Ms\) 10:endfunction 11:for\(k\in\{1,...,time-size\_window\}\)do 12:\(Ms\gets CalculateSimilarity(k,size\_window,M_{traj},num\_agents)\) 13:\(M_{sim}\leftarrow(1-\mathrm{mod}(Ms[0]))\circ Ms[1]\) 14:\(clusters\gets DBSCAN(M_{sim})\) 15:\(IndClusters[k]\gets silhouette\_score(clusters)\) 16:endfor ``` **Algorithm 2** Detecting Organization by Cluster Quality Indexes Figure 4: Ants Adaptation Simulation - Congregation #### 4.2.3 Graph Entropy In the graph entropy approach, after obtaining the adjacency matrix, we set the number of nodes equal to the number of agents in each simulation and calculate the normalized network entropy based on Equations (7), (8) and (9), as shown in Algorithm 3. Algorithm 3 shows the pseudocode of this approach. As in the cluster quality index approach, the input parameters are the trajectory matrix, the simulation time, the window size, and the number of agents in the simulation. For each time step, the trajectory similarity of the agents is calculated over a period (the window size) based on the similarity of the trajectories and the distance between the agents. After that, the entropy of the graph generated by the adjacency matrix is calculated over the time period of the window size. ``` 0:\(M_{traj},time,size\_window,num\_agents\) 1:functionCalculateSimilarity(\(start,size\_window,M_{traj},num\_agents\)) 2:for\(k\in\{1,...,num\_agents\}\)do 3:\(windowing[0]=M_{traj}[start,start+size\_window,k,0]\) 4:\(windowing[1]=M_{traj}[start,start+size\_window,k,1]\) 5:\(M_{window}=windowing\) 6:endfor 7:\(Ms[0]\leftarrow\mathit{cosine\_similarity}(M_{window})\) 8:\(Ms[1]\gets pairwise\_distances(M_{window})\) 9:\(Ms[1]\gets normalize(Ms[1])\)return\(Ms\) 10:endfunction 11:for\(k\in\{1,...,time-size\_window\}\)do 12:\(Ms\leftarrow\mathit{CalculateSimilarity}(k,size\_window,M_{traj},num\_agents)\) 13:\(num\_nodes\gets num\_agents\) 14:\(normalized\_entropy\leftarrow\frac{1}{num\_nodes\times\ln(num\_nodes-1)}\sum_{i}\ln\left(\sum_{j}Ms_{ij}\right)\) 15:endfor ``` **Algorithm 3** Detecting Organization by Graph Entropy #### 4.2.4 Experiment Parameters When evaluating the approach across several paradigms with different numbers of agents, in scenarios with few agents no cluster was generated, i.e., all agents were labeled as noise in the simulation, which caused an error in the Silhouette Coefficient. To work around this problem, missing values (NaN) were added to the quality score vector when this occurred. Table 1 presents all configurations used in the experiments of this work. The dimension of the scenario for each simulation was the default configured in the NetLogo library. The value of \(500\) time steps was selected because, in simulations with a goal, such as those of ants, wolves, and sheep, this was the average time for agents in these simulations to reach their goals. 
Afterward, the agents walk randomly around the environment, matching the simulations without organization. In addition, window sizes of \(25\) and \(50\) were chosen due to the values defined in the study conducted by [10]. ## 5 Results and Analysis This section presents the results obtained with our approaches in the four organization simulation scenarios available in the NetLogo platform. We use smoothed curves in the graphs for better visualization and interpretation of the results. The analysis is based on the configurations of the simulations used in this paper to evaluate the approaches, that is, the number of agents, the size of the agents, a bounded or unbounded world, and the size of the simulation scenario, and on the results observed in the simulations. \begin{table} \begin{tabular}{|l|c|} \hline Parameters & Values \\ \hline Ants (Scenarios) & 150 ants \\ Wolf (Scenarios) & 15 wolves \\ Flocking (Scenarios) & 300 birds \\ Ants Adaptation (Scenarios) & 10 blue ants and 10 red ants \\ Simulation Time (Scenarios) & 500 time steps \\ Eps (DBSCAN) & 0.01 \\ Min. samples (DBSCAN) & 5 \\ Metric (DBSCAN) & precomputed \\ Metric (Silhouette Score) & euclidean \\ \(size\_window\) & 25 and 50 \\ \hline \end{tabular} \end{table} Table 1: Experiment Setup. ### Cluster Quality Indexes Figures 5, 6, 7, and 8 show the results of the quality indices of the clusters generated during the \(500\) time steps, with window sizes equal to \(25\) and \(50\), in the Ants (Teams), Flocking (Coalitions), Wolf Sheep Predation (Hierarchy) and Ants Adaptation (Congregations) simulations, respectively. The figures contain the results of the simulations with and without organizations. In Figure 5 it is possible to notice that the quality indices at most simulation times were higher than \(0\) in the scenarios with organized ants. In this scenario, values close to \(0\) occur when no food is detected and the ants walk randomly to locate it. When food is found, pheromone is released and the ants begin to cluster along the indicated food path. Thus, the value of the clusters' quality index increases. On the other hand, in scenarios with disorganized ants, the quality indices at most simulation times were lower than or close to \(0\), because there is no cooperation/interaction between the ants: they pick up the food they randomly find and take it to the anthill, without releasing pheromones, so there is no grouping along the path to food. Figure 6 presents the results of the quality scores for the Flocking simulation. In this scenario, there are many agents, just like in the Ants simulation, but the environment is unbounded. The agents can move freely in any direction without being impeded by edges or borders; they reappear on the opposite side as if they had continued the movement. Due to this characteristic, the quality index of clusters with organized birds was lower than in the other simulations. However, when compared with the simulations with disorganized agents, it is possible to observe in the window of size \(50\) that the quality index in the organized simulation has higher values, while in the case of disorganized agents the index was lower than \(0\) for most of the simulation time. The simulation with the wolves (Figure 7) is a low-agent scenario compared to the Ants and Flocking simulations. Thus, clusters may not be formed because all the agents were classified as noise due to the low interaction, especially in environments without organization. 
However, with organized wolves, it was still possible to obtain better results than in Flocking, with the quality index reaching \(0.6\) for both window sizes, as the world of this simulation is bounded and the wolves never separate; instead, they always walk in a pack after the next prey. However, when the wolves were disorganized, there were few interactions between them, so the cluster quality index was equal to \(0\) most of the time. Finally, in Figure 8, it is possible to notice that the quality of the clusters was above \(0\) for the organized ants, exceeding \(0.6\) at some moments, even with few agents in the simulations, while for disorganized ants the value was equal to or close to \(0\); at only one instant was the value less than \(0\). This occurs because the environment has smaller dimensions and the agents are larger compared to the other scenarios, allowing greater interaction, especially in organized scenarios. Figure 5: Quality index of Ants simulation clusters with and without organization. The y-axis corresponds to the cluster quality index and the x-axis corresponds to the simulation time period. ### Graph Entropy Figures 9, 10, 11, and 12 show the results of the graph entropy generated during the \(500\) time steps, with window sizes equal to \(25\) and \(50\), in the Ants (Teams), Flocking (Coalitions), Wolf Sheep Predation (Hierarchy) and Ants Adaptation (Congregations) simulations, respectively. The figures contain the results of the simulations with and without organizations. A graph with entropy close to \(0\) suggests that the connections between nodes are highly deterministic, that is, there is a clear trend or pattern in the relationships [28]. In Figure 9, it is possible to notice that in the organized simulations the entropy values were lower when compared to the disorganized simulations. As in the approach with cluster quality indexes, in the simulations with organization the entropy values were higher at the beginning, when no food had yet been located, and at the end, when the food runs out. Figure 10 presents the graph entropy results for the Flocking simulation. Due to the characteristics of this simulation (many agents and an unbounded environment), the disorganized simulation maintained roughly constant entropy values, with more discreet peaks, while the organized simulation shows more visible peaks where entropy is lower than in the disorganized one, but also peaks of higher entropy, for both configured window sizes. Figure 6: Quality index of Flocking simulation clusters with and without organization. The y-axis corresponds to the cluster quality index and the x-axis corresponds to the simulation time period. Figure 7: Quality index of Wolf Sheep Predation simulation clusters with and without organization. The y-axis corresponds to the cluster quality index and the x-axis corresponds to the simulation time period. In the simulation with the wolves (Figure 11), as in the Ants simulation, it is possible to notice that in the scenario with organized agents the entropy values were lower than in the scenario with disorganized agents. This difference is more noticeable with the window equal to \(50\). In Figure 12, even with close values, it is possible to notice that the entropy was lower for the organized agents, especially with the window equal to \(50\). 
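For reference, the per-window computations behind the two approaches evaluated above can be sketched as follows: the similarity matrix of Equation (10) and the normalized network entropy of Equations (7)-(9). The thresholding of \(M_{sim}\) into a binary adjacency matrix and the zero entropy assigned to isolated nodes are assumptions made for this sketch; Algorithm 3 itself works directly on the weighted matrix:

```
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity, pairwise_distances

def window_similarity(M_window):
    """Similarity matrix of Equation (10) for one time window.
    M_window: (num_agents, d) array; each row is an agent's flattened
    (x, y) trajectory over the window."""
    sim = cosine_similarity(M_window)   # Ms[0]
    dist = pairwise_distances(M_window)
    dist = dist / dist.max()            # Ms[1], scaled to [0, 1]
    return (1 - np.abs(sim)) * dist     # Eq. (10), element-wise

def normalized_network_entropy(A):
    """Normalized network entropy, Equations (7)-(9).
    A: (N, N) binary adjacency matrix; isolated nodes contribute 0."""
    N = A.shape[0]
    k = A.sum(axis=1)                                 # node degrees k_i
    H_i = np.log(np.maximum(k, 1)) / np.log(N - 1)    # Eq. (8)
    return H_i.mean()                                 # Eq. (9)

# Two groups of 5 agents with near-identical trajectories within each group.
rng = np.random.default_rng(0)
base = rng.random((2, 50))  # 25 (x, y) steps per agent
M_window = np.vstack([b + 1e-3 * rng.random((5, 50)) for b in base])

M_sim = window_similarity(M_window)
A = (M_sim < 0.01).astype(int)  # assumed link rule: agents with low dissimilarity
np.fill_diagonal(A, 0)
print(normalized_network_entropy(A))  # per-window entropy (~0.63 for this toy case)
```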
### Wolfgang Trumler and Mike Gerdes' Approach Figures 13, 14, 15, and 16 present the results of the simulations when seeking to detect patterns that evidence organized (or disorganized) agents using the approach proposed by [10], which calculates entropy on the x and y coordinates. In Figure 13, a peak occurred at which entropy was lower in the organized scenario than in the disorganized one. However, in general, there is no clear pattern that allows the differentiation of the two cases. Figure 14 presents the results of the approach by [10] in the Flocking simulation. Unlike the Ants simulation, in this simulation it is possible to identify the difference between the scenarios: while the entropy value is constant in the disorganized scenario, with organized agents there are peaks with lower values, especially in the y coordinate. Figure 8: Quality index of Ants Adaptation simulation clusters with and without organization. The y-axis corresponds to the cluster quality index and the x-axis corresponds to the simulation time period. Figure 9: Graph entropy of Ants simulation with and without organization. The y-axis corresponds to the entropy value and the x-axis corresponds to the simulation period. In the simulation with the wolves (Figure 15), just as in the Flocking simulation, the differentiation of the entropy of each scenario is more visible. However, unlike the Flocking simulation, it is possible to observe this in both coordinates, x and y. In Figure 16, even though it is possible to notice the difference in the graph, the entropy values of each coordinate for each scenario are close. The results of our approach using graph entropy showed that in some simulations the patterns were more visible (Figures 9 and 11) than in others, which do not present a constant difference in values but rather peaks where the values are lower than in the scenarios with agents working individually (Figures 10 and 12). The same occurs with the approach of Wolfgang Trumler and Mike Gerdes: in the Flocking and Wolf Sheep Predation simulations it is easier to identify each scenario (Figures 14 and 15), while in the Ants and Ants Adaptation simulations there are either only peaks or the values are very close (Figures 13 and 16). However, our approach with graph entropy obtained values closer to 0 than the approach proposed by [10]. ## 6 Conclusion Compared with the approach proposed by [10] in the literature, the results showed that the approach with the cluster quality index allowed a better identification of each scenario (organized or disorganized) than the other approaches presented in this article, while our approach with graph entropy was still better than the one proposed by [10]. However, it is essential to note that the number of agents in the environment can affect the quality of the clusters, as all agents can be labeled as noise by DBSCAN. Thus, in future work, we intend to improve the algorithm, adding more characteristics of organization to be identified. Furthermore, we intend to evaluate our approach in real scenarios to validate its applicability and improve the detection of organizations. We also plan to apply the approach to scenarios with organized groups interspersed among individuals walking randomly, seeking to simulate real public scenarios to validate the applicability of our approach. ## Acknowledgments This work was supported in part by...... Figure 14: Wolfgang Trumler and Mike Gerdes' approach in Flocking simulation with and without organization. 
The y-axis corresponds to the entropy value and the x-axis corresponds to the simulation period. Figure 15: Wolfgang Trumler and Mike Gerdes' approach in Wolf Sheep Predation simulation with and without organization. The y-axis corresponds to the entropy value and the x-axis corresponds to the simulation period.
2309.10361
Improving CLIP Robustness with Knowledge Distillation and Self-Training
This paper examines the robustness of a multi-modal computer vision model, CLIP (Contrastive Language-Image Pretraining), in the context of unsupervised learning. The main objective is twofold: first, to evaluate the robustness of CLIP, and second, to explore strategies for augmenting its robustness. To achieve this, we introduce a novel approach named LP-CLIP. This technique involves the distillation of CLIP features through the incorporation of a linear probing layer positioned atop its encoding structure. This newly added layer is trained utilizing pseudo-labels produced by CLIP, coupled with a self-training strategy. The LP-CLIP technique offers a promising approach to enhance the robustness of CLIP without the need for annotations. By leveraging a simple linear probing layer, we aim to improve the model's ability to withstand various uncertainties and challenges commonly encountered in real-world scenarios. Importantly, our approach does not rely on annotated data, which makes it particularly valuable in situations where labeled data might be scarce or costly to obtain. Our proposed approach increases the robustness of CLIP with SOTA results compared to supervised technique on various datasets.
Clement Laroudie, Andrei Bursuc, Mai Lan Ha, Gianni Franchi
2023-09-19T06:43:31Z
http://arxiv.org/abs/2309.10361v1
# Improving CLIP Robustness with Knowledge Distillation and Self-Training ###### Abstract This paper examines the robustness of a multi-modal computer vision model, CLIP (Contrastive Language-Image Pretraining), in the context of unsupervised learning. The main objective is twofold: first, to evaluate the robustness of CLIP, and second, to explore strategies for augmenting its robustness. To achieve this, we introduce a novel approach named LP-CLIP. This technique involves the distillation of CLIP features through the incorporation of a linear probing layer positioned atop its encoding structure. This newly added layer is trained utilizing pseudo-labels produced by CLIP, coupled with a self-training strategy. The LP-CLIP technique offers a promising approach to enhance the robustness of CLIP without the need for annotations. By leveraging a simple linear probing layer, we aim to improve the model's ability to withstand various uncertainties and challenges commonly encountered in real-world scenarios. Importantly, our approach does not rely on annotated data, which makes it particularly valuable in situations where labeled data might be scarce or costly to obtain. Our proposed approach increases the robustness of CLIP with SOTA results compared to supervised technique on various datasets. ## 1 Introduction Foundation models have emerged as powerful tools for processing and understanding various data modalities, encompassing text and images. These models, which include GPT-3 [5], LLaMa [53], and CLIP [43], among others, are deep learning models trained on diverse data that can be easily adapted to a wide range of downstream tasks. An intriguing aspect of these foundation models is their versatility in working with different modalities and tasks. By leveraging the fusion of multiple modalities, these models extract rich representations, enabling tasks such as zero-shot classification. Notable examples of multi-modal foundation models include CLIP [43], ALIGN [26], BASIC [41], and LiT [55]. While these models have demonstrated impressive performance across various benchmarks, an important question arises regarding their robustness when faced with uncertainties encountered in real-world applications. Deep Neural Networks often exhibit limitations in terms of robustness [8, 16, 20, 28], necessitating a closer examination of their reliability. Ensuring robustness is essential, as it allows models to maintain reliable performance even when exposed to naturally-induced image corruptions or data distributions that differ from the training distribution. One significant issue pertains to the lack of proper calibration [21], where the confidence scores of the models should align with their accuracy. In this context, our intention is to investigate whether this constraint also affects multi-modal foundation models, with a particular focus on CLIP. To the best of our knowledge, this work is among the first to delve into the robustness aspects of CLIP in unsupervised learning. However, it is worth noting that robustness assumes a pivotal role in real-world scenarios, where multi-modal models are required to adeptly manage a range of uncertainties stemming from data noise, out-of-distribution samples, and adversarial attacks. The capacity to comprehend and quantify these uncertainties is of paramount importance in ensuring robust decision-making and the successful deployment of multi-modal models in critical applications. 
In light of these challenges, our paper addresses the need for enhancing the robustness of multi-modal foundation models, with a particular focus on CLIP [43]. CLIP has garnered attention for its exceptional performance, achieving state-of-the-art zero-shot classification results with 64.18% top-1 accuracy on the ImageNet dataset. Our proposed technique aims to improve the robustness of CLIP through a novel training approach. Specifically, we train a linear layer on top of CLIP using an extensive collection of unlabeled images, thereby incorporating unsupervised learning. By leveraging this training set, we can enhance the model's capability to handle uncertainties and improve its robustness in real-world scenarios. The core contribution of our work lies in developing a methodology that optimizes the training procedure of multi-modal foundation models, specifically targeting CLIP. Through our approach, we aim to bridge the gap between the current limitations of multi-modal text-image models and the robustness required for real-world applications. By effectively capturing uncertainties and improving calibration, our technique strives to enhance the reliability and performance of multi-modal models in challenging scenarios. **Contributions.** Our work presents several key contributions. Firstly, we are the first to extensively investigate the robustness of CLIP, shedding light on its limitations and potential areas for improvement. Secondly, we propose a novel strategy to enhance the robustness of CLIP, offering practical insights into addressing its vulnerabilities. Lastly, we demonstrate competitive or state-of-the-art results on various datasets, particularly in the accuracy of unsupervised classification and in the effectiveness of detecting out-of-distribution samples. ## 2 Related work Uncertainty quantification and DNNs. Bayesian Neural Networks (BNNs) [34, 36] have been a fundamental source of inspiration for uncertainty quantification in deep learning. While variational inference [4, 27] has made progress in scaling BNNs, training large DNN architectures with BNNs remains challenging [13]. Deep Ensembles (DE) [31] emerged as a practical and efficient instance of BNNs. MC dropout [19] is another approach that can be considered to approximate BNNs. Most of these techniques, including ensemble methods, utilize multiple forward passes to quantify the uncertainty of DNNs. Another family of techniques is based on learning the error that the DNN makes [18, 28], yet these techniques mostly model aleatoric uncertainty. Beyond these, there are approaches that aim to model the limits of knowledge of the DNN. For instance, in [17, 39], One-vs-All training enables modeling the lack of knowledge of a DNN. Sensoy et al. [47] utilized evidential deep learning to model the unknowns of DNNs. Uncertainty quantification and Foundation Models. While foundation models have been extensively studied, there is limited research on their uncertainty quantification. Lee's work [33] explores the robustness of GPT, and Pellrine et al. [40] compare the robustness of various foundation models for NLP tasks. Hendrycks et al. [24] investigate out-of-distribution (OOD) generalization and OOD detection performance of BERT for multiple NLP tasks. Fort et al. [15] analyze the reliability of OOD detection in ViT models and demonstrate that fine-tuned ViT pre-trained models significantly enhance near-OOD detection tasks. Shu et al. 
[49] propose retraining CLIP with a beta moving average to improve its generalization power under domain shift. Esmaeilpour et al. [14] use a pretrained BERT model to generate OOD classes to quantify the uncertainty of CLIP. Allingham et al. [2] propose enhancing CLIP ensembling to improve OOD detection. However, most of these works focus solely on OOD uncertainty, whereas our research addresses different types of uncertainty. ## 3 Background ### CLIP background and notations CLIP (Contrastive Language-Image Pretraining) [43] is a multi-modal model developed by OpenAI, specifically designed to capture semantic relationships between images and text. Figure 1: **Reliability diagrams (calibration plots) on CIFAR-10. We compared calibration of CLIP predictions in zero-shot classification (_left_), supervised linear probing on CLIP features (_middle_) and our LP-CLIP trained in an unsupervised manner on CLIP features (_right_). Overall, LP-CLIP exhibits better calibration properties, while achieving better accuracy for in-domain, domain shift and out-of-distribution settings (see results in section §5.2).** By utilizing contrastive pretraining, CLIP learns to align the semantic spaces of text and image through a large corpus of image-text pairs. This pretraining process enables the emergence of strong zero-shot capabilities. Denoting \(\{\mathbf{x}_{i},\mathbf{t}_{i}\}_{i=1}^{N}\) as a set of \(N\) image-text pairs, CLIP consists of two separate Deep Neural Networks (DNNs): one for image processing, typically based on a Vision Transformer (ViT) [12], denoted as \(f^{img}_{\omega^{img}}(\cdot)\), and another for text processing, often based on GPT2 [44], denoted as \(f^{txt}_{\omega^{txt}}(\cdot)\). Here, \(\omega^{txt}\) and \(\omega^{img}\) represent the weights of the text and image DNNs, respectively. By feeding an image \(\mathbf{x}_{i}\) through the image DNN, we obtain a representation \(\mathbf{z}_{i}^{img}=f^{img}_{\omega^{img}}(\mathbf{x}_{i})\), and likewise, for text, \(\mathbf{z}_{i}^{txt}=f^{txt}_{\omega^{txt}}(\mathbf{t}_{i})\). During training, CLIP employs the InfoNCE loss [38] to optimize the representations such that \(\mathbf{z}_{i}^{img}\) and \(\mathbf{z}_{i}^{txt}\) are close, while \(\mathbf{z}_{i}^{img}\) and \(\mathbf{z}_{j}^{txt}\) are distant for \(i\neq j\). The CLIP model has demonstrated impressive generalization properties across various computer vision tasks, including image generation, object detection, and zero-shot image classification [1]. For zero-shot image classification, one uses CLIP with a set of texts called prompts, such as "This is an image of {class name}", built from the names of the classes used for the classification task. Given an image representation \(\mathbf{z}^{img}\) and a set of prompt representations \(\{\mathbf{z}_{c}^{txt}\}_{c=1}^{C}\), where \(c\) corresponds to the class ID and \(C\) is the total number of classes, we can compute the following logits vector \(\boldsymbol{\ell}\): \[\boldsymbol{\ell}=\left[\mathbf{z}^{img}\cdot\mathbf{z}_{1}^{txt}\ldots\mathbf{z}^{img}\cdot\mathbf{z}_{C}^{txt}\right] \tag{1}\] To determine the class, we simply select \(\hat{y}=\arg\max_{c}(\boldsymbol{\ell})\). This process enables zero-shot classification using the learned representations from both the image and text modalities within the CLIP model. ### Uncertainty background Uncertainty in deep learning can primarily stem from three factors, as outlined in a recent survey [20]. Firstly, uncertainty may arise from the data acquisition process. 
During the acquisition of the training dataset, noise can be introduced due to various real-world factors. For instance, recording training data under specific weather conditions that subsequently change during inference can introduce variability and lead to uncertainty. Additionally, measurement systems themselves, such as sensors, can introduce errors in the acquired data, contributing to uncertainty. Figure 2: **Overview of LP-CLIP, an unsupervised fine-tuning strategy for CLIP.** Given a target dataset without annotations, we generate text embeddings for the classes to recognize and train a linear probe on top of CLIP’s image encoder. The linear probe is trained in an unsupervised manner in a teacher-student setting. Our method involves two optimization steps: (1) extracting pseudo-labels using CLIP zero-shot classification and (2) employing the pseudo-labels to train LP-CLIP. At inference, LP-CLIP predicts over the set of classes it was trained upon in the previous stage. Secondly, uncertainty can arise from the process of building and training deep neural networks (DNNs). DNNs are random functions whose parameters, denoted as \(\omega\), are initialized randomly, and the training process relies on stochastic optimization. Consequently, the resulting neural network is a random function that often corresponds to a local minimum of the expected loss function (referred to as the risk). This inherent randomness in the training procedure can introduce errors and uncertainty into the DNN. The third factor contributing to uncertainty is related to the predictions made by the DNN. Uncertainty can arise from the lack of knowledge within the DNN and may be caused by unknown test data. To categorize the predictive uncertainty, it is common to separate it into two types: uncertainty caused by the model (referred to as epistemic or model uncertainty) and uncertainty caused by the data (referred to as aleatoric or data uncertainty). Aleatoric uncertainty can be assessed by evaluating the model's performance under various corruptions or noise. However, evaluating robustness under epistemic uncertainty is more challenging, and often the evaluation is limited to assessing the model's performance against out-of-distribution samples. ## 4 Method In this paper, we introduce a novel unsupervised training approach named LP-CLIP, which leverages consistency learning within a teacher-student framework [51]. This combination forms an optimization component that serves as a consistency constraint in our DNN training system for image classification. The complete process is outlined in Figure 2 and in Algorithm 1. 
``` Input: Image \(\mathbf{x}\), class embeddings \(\mathbf{Z}^{txt}\) computed from \(C\) class names, structure of \(f^{img}\) and \(g=h\circ f^{img}\) 1 \(\mathbf{z}^{img}=f^{img}(\text{WeakAugment}(\mathbf{x}))\) // teacher CLIP features 2 \(\hat{\mathbf{z}}^{img}=f^{img}(\text{StrongAugment}(\mathbf{x}))\) // student CLIP features 3 \(\boldsymbol{\ell}=\mathbf{z}^{img}\cdot(\mathbf{Z}^{txt})^{\top}\) // teacher logits 4 \(\hat{\boldsymbol{\ell}}=h(\hat{\mathbf{z}}^{img})\) // student logits (output of linear probe) 5 \(\hat{y}=\text{argmax}_{c}(\text{softmax}(\boldsymbol{\ell}))\) // teacher predicted class 6 \(\hat{\varphi}=\text{max}(\text{softmax}(\boldsymbol{\ell}))\) // teacher confidence of predicted class 7 \(\mathcal{L}_{\text{cons}}=-\hat{\varphi}\cdot\hat{y}\cdot\text{logsoftmax}(\hat{\boldsymbol{\ell}})\) // confidence-weighted cross-entropy loss, with \(\hat{y}\) one-hot Output: \(\mathcal{L}_{\text{cons}}\) ``` **Algorithm 1** Consistency loss in LP-CLIP. Specifically, our training process involves two simultaneous optimization steps: **(1)** extracting pseudo-labels using CLIP zero-shot classification and **(2)** employing teacher-student optimization with strong data augmentation on images provided to the student DNN. In detail, the teacher DNN consists of the CLIP zero-shot classification model explained in section 3.1, while the student DNN is denoted as \(g_{\omega}(\cdot)=h_{\omega^{h}}\circ f^{img}_{\omega^{img}}(\cdot)\), where \(h_{\omega^{h}}\) represents a fully connected layer. In our framework, we fix the weights of \(f^{img}_{\omega^{img}}\) and solely train the weights of \(h_{\omega^{h}}\). The optimization of \(h_{\omega^{h}}\) is guided by a consistency loss, denoted as \(\mathcal{L}_{\text{cons}}\), which involves a weighted cross-entropy. The weights are determined by the reliability of the corresponding pseudo-label, with the CLIP zero-shot classification confidence score serving as the weight. This confidence score, denoted as \(\hat{\varphi}\), is determined by evaluating the maximum value among the softmax outputs of the CLIP zero-shot classification model for a given sample. The significance of using the CLIP confidence score lies in its ability to shed light on CLIP's uncertainty during its pseudo-annotation process. This information is vital for enhancing the robustness of our new model, as it allows us to gain insights into the areas where CLIP may exhibit uncertainty or potential weaknesses. Access to the unreliability of CLIP's predictions enables us to build a more robust model that takes into account the uncertainties inherent in the underlying CLIP model and improves its performance in challenging real-world scenarios. In order to improve the generalization capabilities of our DNN, we apply strong data augmentation exclusively to the images presented to the student DNN. However, it is important to note that we only employ weak data augmentation for the images provided to the teacher DNN. Drawing inspiration from previous works [3, 42, 50], our strong data augmentation encompasses a combination of random augmentation techniques [9], cutout [11], and common data augmentations. On the other hand, the weak data augmentation is specifically based on the CLIP transformation. By incorporating both weak and strong data augmentations, we aim to strike a balance that prevents overfitting while encouraging the distillation of CLIP's knowledge on augmented data. This mixture of data augmentations plays a crucial role in enhancing the DNN's ability to generalize effectively. 
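To make one training step concrete, here is a minimal PyTorch sketch of the consistency loss of Algorithm 1. The paper's implementation is in PyTorch but not yet released, so all names here (`image_encoder`, `linear_probe`, `class_text_embeds`, and the two augmentation pipelines) are our own placeholders rather than identifiers from the authors' code; the temperature \(\tau=0.01\) is the value used by the pre-trained CLIP models (see Section 5.1).

```
import torch
import torch.nn.functional as F

def lp_clip_step(image_encoder, linear_probe, class_text_embeds,
                 weak_images, strong_images, tau=0.01):
    """One LP-CLIP consistency-loss step (sketch of Algorithm 1).

    image_encoder     : frozen CLIP image encoder f^img
    linear_probe      : trainable linear head h on top of f^img
    class_text_embeds : (C, d) L2-normalized class-prompt embeddings Z^txt
    weak_images       : weakly augmented batch (teacher input)
    strong_images     : strongly augmented batch (student input)
    """
    with torch.no_grad():
        # Teacher branch: zero-shot CLIP on weakly augmented images.
        z_teacher = F.normalize(image_encoder(weak_images), dim=-1)
        teacher_logits = z_teacher @ class_text_embeds.t() / tau
        probs = teacher_logits.softmax(dim=-1)
        conf, pseudo_labels = probs.max(dim=-1)  # confidences and pseudo-labels

        # Student features come from the same frozen encoder, but on
        # strongly augmented images; only the probe below is trained.
        z_student = image_encoder(strong_images)

    student_logits = linear_probe(z_student)

    # Confidence-weighted cross-entropy against the teacher pseudo-labels.
    per_sample = F.cross_entropy(student_logits, pseudo_labels,
                                 reduction="none")
    return (conf * per_sample).mean()
```

A standard optimizer step on the probe's parameters (the paper trains with SGD, see Section 5.1) completes the loop.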
By exposing the student DNN to strong data augmentations, it becomes more resilient to variations and uncertainties encountered in real-world data. Conversely, the teacher DNN is presented with less augmented data to retain a clearer representation of the original information. As a result, this complementary use of weak and strong data augmentations not only aids in preventing overfitting but also ensures that CLIP effectively distills its knowledge and adapts to data variations that may arise in practical scenarios. In summary, our proposed approach involves unsupervised training for CLIP, incorporating consistency learning within a teacher-student framework. This two-step optimization process, along with the utilization of strong data augmentation and the CLIP zero-shot classification confidence scores, contributes to enhancing the performance and generalization of the DNN. ## 5 Experiments ### Experimental setup Datasets. To evaluate the robustness of CLIP, we conduct experiments on multiple datasets, including CIFAR10 and CIFAR100 [29], Tiny ImageNet [32], STL-10 [7], and ImageNet [10]. Additionally, we considered several out-of-distribution (OOD) datasets, namely SVHN [37], Texture [54], and ImageNet-O [25]. Implementation details. We use the pre-trained CLIP models, ViT-B/32 and ViT-L/14@336px, for our experiments. CLIP's backbones were pretrained on a 15M subset of YFCC100M [43, 52]. We use the first model because it is the most commonly used, while the latter, which is fine-tuned on images of size 336×336 pixels, seems to achieve the best performance according to Radford et al. [43]. To compare our performance with supervised methods, we look at the same architectures pretrained on ImageNet21K. To the best of our knowledge, there are no publicly available weights for ViT-L/14 pretrained on ImageNet21K, so we used a ViT-L/16, which is the most similar. The temperature parameter of the softmax function remains consistent with the pre-trained models, set at \(\tau=0.01\). All models have been trained using the SGD optimizer. We trained all the supervised linear probing models with a learning rate of 0.03, and we use a learning rate of 0.1 on the first three datasets with the LP-CLIP ViT-B/32. We use a learning rate of 0.01 for the other experiments with LP-CLIP. Our technique is implemented in PyTorch, and the complete code will be made available after the anonymity period. The performance of CLIP's zero-shot classification heavily relies on the choice of prompt and the amount of engineering invested in tuning [35, 43, 48, 55] or learning it [56, 57]. To mitigate the dependence on a single prompt, an effective approach is to utilize an ensemble of prompts. We do not conduct prompt engineering, and to ensure a fair comparison we use the same prompts as proposed in the original CLIP paper [43] for each dataset. In cases where specific prompts are not mentioned, our experiments were conducted with the best-performing prompt, and we explain how we determine the best prompt in Appendix **B**. By incorporating an ensemble of prompts, we aim to enhance the robustness and reliability of CLIP's zero-shot classification across datasets. Metrics. To assess aleatoric uncertainty, we employ accuracy as the primary criterion. 
Additionally, we consider the expected calibration error (ECE) [21], which measures the relationship between the confidence scores predicted by a DNN and its accuracy. We also evaluate cA (accuracy on the corrupted version of the dataset) and cECE (expected calibration error on the corrupted version of the dataset). \begin{table} \begin{tabular}{c c c c|c c c c} \hline \hline \multirow{2}{*}{ECE \(\downarrow\)} & \multirow{2}{*}{Models} & \multirow{2}{*}{Pretraining datasets} & \multirow{2}{*}{Methods} & \multicolumn{4}{c}{Datasets} \\ & & & & CIFAR10 & CIFAR100 & STL10 & TinyImageNet \\ \hline \multirow{2}{*}{Supervised} & \multirow{2}{*}{ViT-B/32} & Imagenet 21K & \multirow{2}{*}{LP} & 0.0418 & 0.2966 & 0.0515 & 0.0764 \\ & & CLIP & & 0.0159 & 0.1131 & 0.0277 & 0.1051 \\ \hline \multirow{3}{*}{Unsupervised} & \multirow{3}{*}{ViT-B/32} & \multirow{3}{*}{CLIP} & Best Prompt & 0.0587 & 0.1195 & 0.0570 & 0.0637 \\ & & & Prompt Ensemble & 0.0587 & 0.1088 & 0.0183 & 0.0192 \\ & & & LP-CLIP (ours) & **0.0203** & **0.0822** & **0.0041** & **0.0174** \\ \hline \hline \multirow{2}{*}{Supervised} & \multirow{2}{*}{ViT-L/16} & Imagenet 21K & \multirow{2}{*}{LP} & 0.0114 & 0.0185 & 0.0169 & 0.0232 \\ & & CLIP & & 0.0055 & 0.0343 & 0.0113 & 0.0339 \\ \hline \multirow{3}{*}{Unsupervised} & \multirow{3}{*}{ViT-L/14@336px} & \multirow{3}{*}{CLIP} & Best Prompt & 0.0574 & 0.0790 & 0.0328 & 0.0378 \\ & & & Prompt Ensemble & 0.0427 & 0.0952 & 0.0254 & **0.0155** \\ & & & LP-CLIP (ours) & **0.0095** & **0.0140** & **0.0080** & 0.0211 \\ \hline \hline \end{tabular} \end{table} Table 1: **Expected Calibration Error (ECE) on the different (in-distribution) datasets: CIFAR10, CIFAR100, STL10, and TinyImageNet, using ViT-B/32, ViT-L/14 or ViT-L/16 as the backbone. All the results involving training are conducted over three different seeds, and their performances are averaged.** \begin{table} \begin{tabular}{c c c c|c c c c c c} \hline \hline \multirow{2}{*}{Accu \(\uparrow\)} & \multirow{2}{*}{Models} & \multirow{2}{*}{Pretraining datasets} & \multirow{2}{*}{Methods} & \multicolumn{6}{c}{Datasets} \\ & & & & CIFAR10 & CIFAR10-C & CIFAR100 & CIFAR100-C & TinyImageNet & TinyImageNet-C \\ \hline \multirow{2}{*}{Supervised} & \multirow{2}{*}{ViT-B/32} & Imagenet 21K & \multirow{2}{*}{LP} & 0.9597 & 0.8266 & 0.8222 & 0.6283 & 0.8312 & 0.5337 \\ & & CLIP & & 0.9432 & 0.7783 & 0.7649 & 0.5472 & 0.7272 & 0.4427 \\ \hline \multirow{6}{*}{Unsupervised} & \multirow{6}{*}{ViT-B/32} & \multirow{6}{*}{CLIP} & Best Prompt & 0.9001 & 0.7347 & 0.6452 & 0.4490 & 0.6246 & 0.3923 \\ & & & Prompt Ensemble & 0.8986 & 0.7375 & 0.6507 & 0.4571 & 0.6224 & 0.3973 \\ & & & LP w/o WL \& Aug & 0.9141 & 0.7424 & 0.6569 & 0.4554 & 0.6257 & 0.3861 \\ & & & LP w/o Aug & 0.9097 & 0.7407 & 0.6653 & 0.4643 & 0.6267 & 0.3867 \\ & & & LP w/o WL & 0.9257 & **0.7768** & 0.6619 & 0.4861 & **0.6417** & 0.4078 \\ & & & LP-CLIP (ours) & **0.9254** & **0.7767** & **0.6950** & **0.5191** & **0.6412** & **0.4089** \\ \hline \hline \end{tabular} \end{table} Table 2: **Accuracy (Accu) on in-distribution and corrupted datasets: CIFAR10/CIFAR10-C, CIFAR100/CIFAR100-C, and TinyImageNet/TinyImageNet-C, using ViT-B/32 as the backbone. All the results involving training are conducted over three different seeds, and their performances are averaged.** The corrupted datasets introduced by Hendrycks et al. 
[22] include various perturbations such as Gaussian noise, shot noise, impulse noise, defocus blur, frosted glass blur, motion blur, zoom blur, snow, frost, fog, brightness, contrast, elastic, pixelate, and JPEG with five different levels of corruption. Furthermore, to evaluate epistemic uncertainty, we employ the area under the precision-recall curve (AUPR), area under the receiver operating characteristic curve (AUC), and false positive rate at 95% true positive rate (FPR-95-TPR) as defined in [23]. These metrics provide insights into the DNN's ability to detect OOD data. By analyzing results over multiple metrics, we aim for a comprehensive assessment of the DNN's performance in terms of accuracy, calibration error, failure rate, and OOD detection capabilities. ### Main results #### 5.2.1 In-distribution performance. We first assess the impact of LP-CLIP in terms of predictive performance and calibration under the same distribution as used for finetuning. Results in Tables 1, 2, and 3 show consistent improvements over the original CLIP in zero-shot classification with different prompting strategies, even though LP-CLIP uses the same class text embeddings during finetuning of the linear probe. Furthermore, LP-CLIP's performance is not far from the supervised variant that uses ground-truth labels. #### 5.2.2 Robustness to domain shift. To validate the robustness of CLIP under domain shift, we conduct evaluations on variants of the original datasets, namely CIFAR10-C, CIFAR100-C, and TinyImageNet-C. These variants underwent a distribution shift with slight modifications to the images while keeping the labels unchanged. The objective was to observe how CLIP performs when faced with these variations away from the original training distribution. Tables 2, 3, and 4 reveal that CLIP experiences accuracy losses of up to 16% on CIFAR10, 19% on CIFAR100, and 23% on TinyImageNet. These accuracy reductions indicate that CLIP struggles to adapt to changes in the training distribution. However, it is noteworthy that CLIP still maintains good calibration scores on these corrupted datasets, suggesting that despite the performance decline, CLIP remains fairly reliable. The variant employing supervised linear probing exhibits similar accuracy losses, around 16% on CIFAR10, 21% on CIFAR100, and 28% on TinyImageNet. In contrast, our approach, LP-CLIP, shows milder drops in accuracy of 14% on CIFAR10, 17% on CIFAR100, and 23% on TinyImageNet. This demonstrates that our training technique enhances the generalization capabilities of CLIP. 
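Because ECE and cECE underpin the calibration comparisons in Tables 1 and 4, a minimal sketch of the standard binned ECE estimator of [21] may be useful for reproduction. This is our own illustration, not code from the paper, and the 15-bin equal-width binning scheme is an assumption:

```
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Binned ECE: weighted average of |accuracy - confidence| per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by fraction of samples in bin
    return ece
```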
Furthermore, as shown in Tables 2 and 3, our performance surpasses that of CLIP without linear probing and comes close to CLIP with total supervision. Additionally, considering the calibration performance in Table 4, LP-CLIP exhibits excellent calibration. Therefore, LP-CLIP not only proves to be more robust than CLIP or the fully supervised variant of linear probing but also demonstrates competitive results in terms of robustness and predictive performance. \begin{table} \begin{tabular}{c c c|c c c|c c c} \hline \hline \multirow{2}{*}{ECE \(\downarrow\)} & \multirow{2}{*}{Pretraining datasets} & \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{ViT-B/32} & \multicolumn{3}{c}{ViT-L/14@336px – ViT-L/16} \\ & & & CIFAR10-C & CIFAR100-C & TinyImageNet-C & CIFAR10-C & CIFAR100-C & TinyImageNet-C \\ \hline \multirow{2}{*}{Supervised} & Imagenet 21K & \multirow{2}{*}{LP} & 0.0321 & 0.0570 & 0.0770 & 0.0208 & 0.0325 & 0.0668 \\ & CLIP & & 0.0365 & 0.1052 & 0.0984 & 0.0153 & 0.0400 & 0.0847 \\ \hline \multirow{6}{*}{Unsupervised} & \multirow{6}{*}{CLIP} & Best Prompt & 0.0559 & 0.0711 & 0.0694 & 0.0431 & 0.0530 & 0.0578 \\ & & Prompt Ensemble & 0.0451 & 0.0824 & 0.0510 & 0.0359 & 0.0707 & 0.1665 \\ & & LP w/o WL \& Aug & 0.1251 & 0.1759 & 0.0757 & 0.0186 & 0.0399 & 0.0639 \\ & & LP w/o Aug & **0.0352** & **0.0323** & 0.0669 & 0.0331 & **0.0227** & 0.0666 \\ & & LP w/o WL & 0.0552 & 0.0582 & 0.1028 & 0.0413 & 0.0856 & 0.0749 \\ & & LP-CLIP (ours) & 0.0739 & 0.1111 & **0.0650** & **0.0121** & **0.0227** & **0.0602** \\ \hline \hline \end{tabular} \end{table} Table 4: **Expected Calibration Error (ECE) on corrupted datasets**: CIFAR10/CIFAR10-C, CIFAR100/CIFAR100-C, and TinyImageNet/TinyImageNet-C, using ViT-B/32, ViT-L/14 or ViT-L/16 as backbones. All the results that involve training are conducted over three different seeds, and their performances are averaged. \begin{table} \begin{tabular}{c c c c|c c c c c c} \hline \hline \multirow{2}{*}{Accu \(\uparrow\)} & \multirow{2}{*}{Models} & \multirow{2}{*}{Pretraining datasets} & \multirow{2}{*}{Methods} & \multicolumn{6}{c}{Datasets} \\ & & & & CIFAR10 & CIFAR10-C & CIFAR100 & CIFAR100-C & TinyImageNet & TinyImageNet-C \\ \hline \multirow{2}{*}{Supervised} & \multirow{2}{*}{ViT-L/14@336px – ViT-L/16} & Imagenet 21K & \multirow{2}{*}{LP} & 0.9785 & 0.8918 & 0.8862 & 0.7260 & 0.8909 & 0.6174 \\ & & CLIP & & 0.9759 & 0.8895 & 0.8603 & 0.6880 & 0.8469 & 0.6055 \\ \hline \multirow{6}{*}{Unsupervised} & \multirow{6}{*}{ViT-L/14@336px} & \multirow{6}{*}{CLIP} & Best Prompt & 0.9468 & 0.8457 & 0.7628 & 0.5972 & 0.7326 & 0.5054 \\ & & & Prompt Ensemble & 0.9492 & 0.8493 & 0.7703 & 0.6090 & 0.7588 & 0.5326 \\ & & & LP w/o WL \& Aug & 0.9626 & 0.8663 & 0.7916 & 0.6197 & 0.7599 & 0.5329 \\ & & & LP w/o Aug & 0.9681 & 0.8901 & 0.7940 & 0.6448 & 0.7673 & 0.5539 \\ & & & LP w/o WL & 0.9623 & 0.8636 & 0.7837 & 0.6087 & 0.7542 & 0.5184 \\ & & & LP-CLIP (ours) & **0.9705** & **0.8900** & **0.89003** & **0.6584** & **0.7761** & **0.6530** \\ \hline \hline \end{tabular} \end{table} Table 3: **Accuracy (Accu) on in-distribution and corrupted datasets**: CIFAR10/CIFAR10-C, CIFAR100/CIFAR100-C, and TinyImageNet/TinyImageNet-C, using ViT-L/14 as the backbone. All the results involving training are conducted over three different seeds, and their performances are averaged. Figure 1 also indicates that CLIP exhibits relatively good calibration. 
However, CLIP tends to display overconfidence, wherein it assigns high confidence scores to incorrect predictions, thus making its predictions less reliable. Comparatively, we observe that CLIP demonstrates lower overconfidence levels than its supervised counterpart. This observation is noteworthy as it suggests that CLIP performs reasonably well even without extensive training. On the other hand, LP-CLIP exhibits superior performance by being slightly underconfident. Notably, LP-CLIP achieves better results with a lower calibration error in absolute terms, thus making it a more robust model. **Reliability of epistemic uncertainty.** Here, we explore the sensitivity of our method in quantifying epistemic uncertainty, specifically focusing on its ability to provide a confidence value for detecting OOD samples. Figure 3 presents histograms of confidence bins comparing CLIP predictions with the variant using linear probing and supervised training, and LP-CLIP. Figure 3: **Confidence histograms on CIFAR-10.** The confidence histograms depict the probabilities of a DNN making predictions with various confidence scores. The histograms depict (_left_) CLIP zero-shot predictions, (_middle_) supervised linear probing + CLIP, and (_right_) LP-CLIP predictions. The histograms are color-coded for clarity: the blue histogram corresponds to the OOD predictions, the orange histogram represents incorrect predictions, and the green histogram indicates correct predictions. The results show that LP-CLIP exhibits higher confidence levels on well-ranked data, making it more reliable than CLIP. Tables 5 and 6 demonstrate that, in terms of accuracy, LP-CLIP is often comparable to the supervised variant. Notably, when using the ViT-B/32 architecture, CLIP outperforms the supervised variants for TinyImageNet and ImageNet. It is worth noting that when utilizing ViT-B/32, LP-CLIP and CLIP exhibit equivalence in the worst-case scenario for the TinyImageNet and ImageNet datasets. However, in most cases, our approach yields results equivalent to the supervised variant. This highlights the usefulness of our technique in detecting anomalous data. We observed that with ViT-L/14, our approach often outperforms the supervised approach. The difference in performances between ViT-B/32 and ViT-L/14 is intriguing, and we believe this can be attributed to the initially higher accuracy of ViT-L/14. This higher accuracy allows ViT-L/14 to train the linear probing layer more effectively, particularly on challenging datasets. As a result, LP-CLIP benefits from the stronger starting point provided by ViT-L/14, leading to improved performance compared to the supervised approach. An advantage of our approach over others is that we rely solely on CLIP and do not require additional algorithms such as GPT or others to introduce the notion of OOD. This eliminates potential performance distortions and avoids dependence on other black-box models, the robustness of which would also need further investigation. ## 6 Discussions Our study demonstrates that LP-CLIP serves as a powerful tool to enhance the robustness of CLIP and to yield a confidence score that is more faithful to the actual quality of its predictions. Notably, LP-CLIP consistently exhibits superior performance in identifying OOD data in most cases (see Tables 5 and 6). In Appendix **C**, we meticulously demonstrate the robustness of our technique by subjecting it to changes in the CLIP training dataset. The results are strikingly similar to those achieved with OpenAI's CLIP, underscoring that our approach is not confined to just OpenAI's CLIP architecture. Furthermore, in Appendix **D**, we delve into the latent space by visualizing the logits of both CLIP and LP-CLIP. 
Interestingly, the visualization portrays LP-CLIP's latent space as more organized and structured. This observation suggests that LP-CLIP introduces a denoising effect to the latent space, potentially contributing to its improved performance in various tasks. ## 7 Conclusions This paper delves into the inherent robustness issues present in the CLIP model, revealing its tendency to generate unreliable zero-shot classification predictions. Moreover, we note that the pretraining of CLIP can sometimes be less robust compared to the ImageNet21K pretraining. Recognizing the versatility of CLIP, which allows for usage without annotations, we endeavor to bolster its robustness. To this end, we introduce a straightforward yet effective approach aimed at enhancing CLIP's robustness. This technique involves the integration of a linear probing layer, meticulously trained using pseudo-annotations generated through a consistency learning mechanism extracted from CLIP. Through extensive experimentation, we substantiate the capabilities of our proposed LP-CLIP technique. Surpassing the performance of zero-shot CLIP, LP-CLIP even occasionally outperforms the supervised linear probing CLIP, all the while obviating the need for ground-truth labels. However, it is important to acknowledge that, like many existing state-of-the-art methods, LP-CLIP does possess a limitation: it lacks theoretical guarantees that ensure the precision of predicted uncertainty. Looking ahead, our research trajectory will be geared towards exploring the potential integration of LP-CLIP with active learning or incremental learning paradigms. Such an integration could significantly amplify its utility, enabling the provision of robust annotations without requiring human intervention. This future direction aligns with our commitment to continually enhance the practical applicability and adaptability of the LP-CLIP framework. **Acknowledgement:** This work was performed using HPC resources from GENCI-IDRIS (Grant 2021 - AD011011970R1) and (Grant 2022 - AD011011970R2). [Tables 5 and 6, reporting OOD detection results (TinyImageNet vs. Texture, ImageNet vs. Texture, and ImageNet vs. ImageNet-O), are truncated in the source and omitted here.]
2309.09624
A trichotomy for hitting times and escape rates for a class of unimodal maps
We consider local escape rates and hitting time statistics for unimodal interval maps of Misiurewicz-Thurston type. We prove that for any point $z$ in the interval there is a local escape rate and hitting time statistics which is one of three types. While it is key that we cover all points $z$, the particular interest here is when $z$ is periodic and in the postcritical orbit which yields the third part of the trichotomy. We also prove generalised asymptotic escape rates of the form first shown by Bruin, Demers and Todd.
Mark Demers, Mike Todd
2023-09-18T09:55:27Z
http://arxiv.org/abs/2309.09624v1
# A trichotomy for hitting times and escape rates for a class of unimodal maps ###### Abstract. We consider local escape rates and hitting time statistics for unimodal interval maps of Misiurewicz-Thurston type. We prove that for any point \(z\) in the interval there is a local escape rate and hitting time statistics which is one of three types. While it is key that we cover all points \(z\), the particular interest here is when \(z\) is periodic and in the postcritical orbit, which yields the third part of the trichotomy. We also prove generalised asymptotic escape rates of the form first shown by Bruin, Demers and Todd. 2020 Mathematics Subject Classification: 37C30, 37D25, 37E05. Part of this work was completed during visits of MT to Fairfield University in 2022 and 2023. MD was partially supported by NSF grant DMS 2055070. We consider S-unimodal maps \(f:[0,1]\to[0,1]\) in the sense of [NS]. That is * \(f\) is \(C^{2}\) with one critical point \(c\); * in a neighbourhood of \(c\), \(f(x)=f(c)-A(x-c)^{\ell}\) for \(\ell>1\) and \(A>0\); * \(|Df|^{-\frac{1}{2}}\) is convex on each of \([0,c]\) and \([c,1]\). We assume that the map is Misiurewicz-Thurston: there is a minimal \(k_{0}\geqslant 2\) such that \(f^{k_{0}}(c)\) is periodic, i.e., there is a minimal \(p\geqslant 1\) such that \(f^{p}(f^{k_{0}}(c))=f^{k_{0}}(c)\). Moreover, this is a repelling periodic point: \(|Df^{p}(f^{k_{0}}(c))|>1\). This implies that the postcritical orbit of \(c\), \(\operatorname{orb}(f(c))=\{f(c),f^{2}(c),\ldots,f^{k_{0}+p-1}(c)\}\), is finite and that there is an absolutely continuous \(f\)-invariant probability measure (acip) \(\mu\). Then the density of \(\mu\) has a spike at each point of \(\operatorname{orb}(f(c))\) of type \(x^{\frac{1}{\ell}-1}\). We let \(I:=[f^{2}(c),f(c)]\) denote the _dynamical core_: all points except \(0\) and \(1\) eventually map into \(I\) and remain there, so \(\mu\) is supported in \(I\). Define a hole centred at \(z\) by \(H_{\varepsilon}(z)=(z-\varepsilon,z+\varepsilon)\). The _escape rate_ of the open system with hole \(H_{\varepsilon}(z)\) with respect to the measure \(\mu\) is defined by \[\mathfrak{e}(H_{\varepsilon}(z))=-\lim_{n\to\infty}\frac{1}{n}\log\mu\left(\left\{x\in I:f^{j}(x)\notin H_{\varepsilon}(z),\ j=0,\ldots,n-1\right\}\right), \tag{1.1}\] when the limit exists. Define the _local (asymptotic) escape rate at \(z\)_ by \[\operatorname{esc}(z):=\lim_{\varepsilon\to 0}\frac{\mathfrak{e}(H_{\varepsilon}(z))}{\mu(H_{\varepsilon}(z))}\,.\] If \(z\) is a periodic point with prime period \(p\), we write \(\lambda_{z}:=|Df^{p}(z)|\). Our main result is the following trichotomy regarding possible values of the local escape rate. The only model where such a result has appeared previously is in [DT2, Remark 3.11], which is a very special case (the full quadratic map). **Theorem 1.1**.: _Let \(f\) be as defined above. For any \(z\in I\),_ \[\operatorname{esc}(z)=\begin{cases}1&\text{if $z$ is not periodic},\\ 1-\frac{1}{\lambda_{z}}&\text{if $z$ is periodic and not in $\operatorname{orb}(f(c))$},\\ 1-\frac{1}{\lambda_{z}^{1/\ell}}&\text{if $z$ is periodic and in $\operatorname{orb}(f(c))$}.\end{cases}\] The cases when \(z\) is outside the postcritical orbit \(\operatorname{orb}(f(c))\) follow from [DT2, Theorem 3.7], so our focus here is on the cases when \(z\in\operatorname{orb}(f(c))\): when \(z\in\operatorname{orb}(f(c))\) is preperiodic (which falls into the first case of the above theorem), and when \(z\in\operatorname{orb}(f(c))\) is periodic. 
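As a concrete illustration (our own sketch of the classical special case, consistent with the full quadratic setting of [DT2, Remark 3.11], from which we take only the pointer, not the computation): for \(f(x)=4x(1-x)\) we have \(c=\frac{1}{2}\), \(\ell=2\), \(f(c)=1\) and \(f^{2}(c)=0\), with \(f(0)=0\) and \(|Df(0)|=4>1\), so \(k_{0}=2\), \(p=1\) and \(z=0\) is a repelling fixed point lying in \(\operatorname{orb}(f(c))\). The third case of Theorem 1.1 then gives \(\operatorname{esc}(0)=1-\lambda_{z}^{-1/\ell}=1-4^{-1/2}=\frac{1}{2}\), whereas a periodic point with the same multiplier outside the postcritical orbit would give \(1-\frac{1}{4}=\frac{3}{4}\). (Here \(z=0\) is a boundary point of the dynamical core, a simpler situation which, as noted in Section 2, needs only obvious modifications.)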
The techniques needed for the preperiodic case are essentially a subset of those for the periodic case. Our proofs exploit the fact that the finite postcritical orbit allows us to define a finite Markov partition for the map (albeit with unbounded distortion) which we then use to define a first return map to a conveniently chosen set. ### Hitting time statistics Setting \(r_{A}(x):=\inf\{k\geqslant 1:f^{k}(x)\in A\}\), the _first hitting time to \(A\)_, the _Hitting Time Statistics (HTS)_ at \(z\) is given by \[T_{z}(t):=\lim_{\varepsilon\to 0}\mu\left(r_{H_{\varepsilon}(z)}\geqslant\frac{t}{\mu(H_{\varepsilon}(z))}\right)\] for \(t>0\), provided this limit exists. A generalisation of this limit is formulated in [BDT], where the link between HTS and asymptotic escape rates was explored by scaling the hitting time asymptotic as \(t\mu(H_{\varepsilon}(z))^{-\alpha}\) for some \(\alpha>0\). Accordingly, define for \(t,\alpha>0\), \[L_{\alpha,t}(z):=\lim_{\varepsilon\to 0}\frac{-1}{t\mu(H_{\varepsilon}(z))^{1-\alpha}}\log\mu\left(r_{H_{\varepsilon}(z)}\geqslant\frac{t}{\mu(H_{\varepsilon}(z))^{\alpha}}\right)\,.\] Note that \(T_{z}(t)\) corresponds to \(\alpha=1\). Using Theorem 1.1 we prove the following in Section 6. **Theorem 1.2**.: _For any \(z\in I\) and all \(\alpha,t>0\), \(L_{\alpha,t}(z)\) exists and equals_ \[L_{\alpha,t}(z)=\begin{cases}1&\text{ if $z$ is not periodic},\\ 1-\frac{1}{\lambda_{z}}&\text{ if $z$ is periodic and not in $\operatorname{orb}(f(c))$},\\ 1-\frac{1}{\lambda_{z}^{1/\ell}}&\text{ if $z$ is periodic and in $\operatorname{orb}(f(c))$}.\end{cases}\] As with Theorem 1.1, the main focus here is on the case when \(z\in\operatorname{orb}(f(c))\); the other cases follow from adaptations of [BDT], as we describe in Section 6.4. We note that a full description of \(T_{z}(t)\) in uniformly hyperbolic cases can be seen in [11, AFV]. In the non-uniformly hyperbolic case, the only full dichotomy (i.e. the result applies to all \(z\) with no excluded values), and only for \(T_{z}(t)\) rather than \(L_{\alpha,t}(z)\) as above, has been demonstrated for Manneville-Pomeau maps in [11]; see also the recent preprint [BF]. **Remark 1.3**.: _Our techniques actually deal with holes of the form \((z-\varepsilon_{L},z+\varepsilon_{R})\) where \(\frac{\varepsilon_{L}}{\varepsilon_{R}}\) is uniformly bounded away from 0 and \(\infty\), so we are able to handle non-symmetric holes in both Theorems 1.1 and 1.2._ ### Structure of the paper In Section 2 we assume that we are in the case where \(z\in\operatorname{orb}(f(c))\) is periodic and define a domain \(Y\) and a first return map \(F\) to \(Y\) so that the hole at \(z\) is outside \(Y\): \((Y,F)\) has exponential return times and a Markov structure. We outline how domains of \(F\) can map into the hole and then modify \(F\) to \(F_{\boldsymbol{\varepsilon}}\) by introducing extra cuts at the boundary of the hole. In Section 3 we again initially assume that \(z\in\operatorname{orb}(f(c))\) is periodic and show how the density spike at \(z\) affects the scaling of the hole and its preimages. These scalings lead to the three quantities seen in Theorems 1.1 and 1.2. The section ends with Section 3.4, where the case when \(z\in\operatorname{orb}(f(c))\) is preperiodic is dealt with. Section 4 sets up the functional framework for \(F_{\boldsymbol{\varepsilon}}\) and its punctured counterpart \(\mathring{F}_{\boldsymbol{\varepsilon}}\). 
In particular, a spectral gap is proved, along with the relevant perturbation theory for the transfer operator corresponding to \(\mathring{F}_{\boldsymbol{\varepsilon}}\) for \(\boldsymbol{\varepsilon}\) small and for certain nice '\(\beta\)-allowable' holes. The convergence of the relevant spectral properties is proved, which proves a version of Theorem 1.1; see (4.16) and (4.17) for these specific holes. Section 5 then proves Theorem 1.1 for general holes and in all cases of \(z\in\operatorname{orb}(f(c))\). In Section 6 we prove Theorem 1.2. The strategy is to use the induced map to construct a non-Markovian Young Tower and prove a spectral gap for the associated transfer operator in a space of weighted bounded variation. Notation: we use \(g_{1}(\varepsilon)\gtrsim g_{2}(\varepsilon)\) to mean \(\frac{g_{1}(\varepsilon)}{g_{2}(\varepsilon)}\geqslant h(\varepsilon)\) where \(h(\varepsilon)\to 1\) as \(\varepsilon\to 0\). Similarly, \(g_{1}(\varepsilon)\lesssim g_{2}(\varepsilon)\) means \(\frac{g_{1}(\varepsilon)}{g_{2}(\varepsilon)}\leqslant h(\varepsilon)\) where \(h(\varepsilon)\to 1\) as \(\varepsilon\to 0\), and \(g_{1}(\varepsilon)\sim g_{2}(\varepsilon)\) if \(\frac{g_{1}(\varepsilon)}{g_{2}(\varepsilon)}\to 1\) as \(\varepsilon\to 0\). ## 2. First return map structure Recall that \(k_{0}\geqslant 2\) denotes the minimal integer such that \(f^{k_{0}}(c)\) is periodic with prime period \(p\). From here until the end of Section 3.3 we will focus on the case that \(z\in\operatorname{orb}(f(c))\) is periodic; for the preperiodic case we make minor adjustments in Section 3.4. Suppose that for \(k_{1}\geqslant k_{0}\), \(f^{k_{1}}(c)=z\). Then necessarily, \(f^{p}(z)=z\) with prime period \(p\). **Proposition 2.1**.: 1. _There exists_ \(\lambda_{per}>1\) _such that if_ \(x\) _is periodic of period_ \(n\) _then_ \(|Df^{n}(x)|\geqslant\lambda_{per}^{n}\)_._ 2. _For each neighbourhood_ \(U\) _of_ \(c\) _there are_ \(K_{U}>0\) _and_ \(\lambda_{U}>1\) _such that if_ \(J\) _is an interval with_ \(U\cap\left(\cup_{i=0}^{n-1}f^{i}(J)\right)=\emptyset\) _then_ \(\inf_{x\in J}|Df^{n}(x)|\geqslant K_{U}\lambda_{U}^{n}\)_._ 3. _There is a unique acip_ \(\mu\)_; this is supported on a finite union of intervals, which contain_ \(c\) _in the interior._ The first item here is in, for example, [NS, Theorem A], while the second is more generally known as Mañé's Lemma (e.g. [MS, Theorem III.5.1]). The third item can be found, for example, as part of [MS, Theorem V.1.3]. Recall that we call a hyperbolic periodic point \(x\) of (prime) period \(n\) _orientation preserving/reversing_ if \(f^{n}\) preserves/reverses orientation in small neighbourhoods of \(x\) (i.e. \(Df^{n}(x)>0/Df^{n}(x)<0\)). Denote the orbit of the critical point by \(c_{i}=f^{i}(c)\). Then by the definition of \(k_{0}\) and \(p\) above, \(\operatorname{orb}(c)=\{c_{0},\ldots,c_{k_{0}+p-1}\}\). Let \(\mathcal{A}:=\{A_{1},\ldots,A_{M}\}\) for \(M=k_{0}+p-1\) be the set of ordered open intervals in \([f^{2}(c),f(c)]\) with boundary points from \(\operatorname{orb}(c)\). This forms a Markov partition for \(f\) on \([f^{2}(c),f(c)]\).1 The partition induces a dynamical coding on \(I^{\prime}:=[f^{2}(c),f(c)]\setminus\bigcup_{n\geqslant 0}f^{-n}\,(\operatorname{orb}(c))\): to each \(x\in I^{\prime}\) there is a sequence \((x_{0},x_{1},\ldots)\in\{1,\ldots,M\}^{\mathbb{N}_{0}}\) with \(f^{i}(x)\in A_{x_{i}}\). 
For each \(n\in\mathbb{N}\), we refer to \([x_{0},\ldots,x_{n-1}]:=\{(y_{0},y_{1},\ldots)\in\Sigma:y_{0}=x_{0},\ldots,y_{n-1}=x_{n-1}\}\) as an _\(n\)-cylinder_, or _cylinder of depth \(n\)_. There will be a unique topologically transitive component: we then remove any component which does not intersect this. Let \(\Sigma\subset\{1,\ldots,M\}^{\mathbb{N}_{0}}\) be the corresponding subshift of finite type, which is moreover locally eventually onto and thus topologically mixing. Footnote 1: Distortion is unbounded. We will abuse notation and refer to cylinders in \(\Sigma\) and the intervals in \(I\) they represent by \(w=[x_{0},\ldots,x_{n}]\). We will write the word \(w\in\cup_{n\geqslant 1}\cup_{w^{\prime}\in\{1,\ldots,M\}^{n}}w^{\prime}\) and the corresponding cylinder \([w]\) as just \(w\). Given cylinders \(w_{1}=[x_{0},\ldots,x_{n}]\) and \(w_{2}=[y_{0},\ldots,y_{m}]\), the concatenation \([w_{1}w_{2}]\) is the cylinder \([x_{0},\ldots,x_{n},y_{0},\ldots,y_{m}]\). When needed, we denote by \(\pi\) the projection from \(\Sigma\) to \(I\). ### The inducing scheme For \(N\in\mathbb{N}\) to be chosen below, we will take a collection of cylinders \(w_{1},\ldots,w_{K}\in\{1,\ldots,M\}^{N}\) containing \(z\) and consider the first return map \(F=\sigma^{\tau}\) to \(\Sigma\setminus\{w_{1},\ldots,w_{K}\}\), where \(\tau\) is the first return time. We consider the domains of this map to be cylinders \(w\in\cup_{n\geqslant 1}\cup_{w^{\prime}\in\{1,\ldots,M\}^{n}}w^{\prime}\), rather than unions of these. Let \(\Sigma_{N,K}\) denote the set of one-cylinders for \(F\). **Lemma 2.2**.: _Given \(K\in\mathbb{N}\), for any \(\eta>0\) there exists \(N=N(\eta,K)\in\mathbb{N}\) such that for all \(n\in\mathbb{N}\), \(\#\{w\in\Sigma_{N,K}:\tau(w)=n\}=O(e^{\eta n})\)._ Proof.: If \(w\in\Sigma_{N,K}\) has \(\tau(w)=n\), then \(w\) is an \(n\)-cylinder for \(\sigma\) and so must have the form \([x_{0}w_{i_{1}}w_{i_{2}}\cdots w_{i_{j}}x_{n-1}]\) where \(j\leqslant n/N\). So we can estimate the number of such cylinders by \(K^{\frac{n}{N}}\) (note that this estimate comes from considering \((\Sigma,\sigma)\) to be the full shift, which is a substantial over-estimate here). So the lemma is complete if we choose \(N\) large enough that \(\frac{1}{N}\log K\leqslant\eta\). We will use the above partly to ensure our inducing scheme has exponential tails (see Proposition 2.5). We will also choose the scheme to be compatible with the periodic structure at \(z\); see Remark 2.8 below. **Lemma 2.3**.: _For any \(\varepsilon_{0},\eta>0\), there exist \(K,N\in\mathbb{N}\) such that \(Y=\pi(\Sigma_{N,K})\) has the following properties:_ (a) \(\{f^{i}(c)\}_{i=1}^{k_{0}+p-1}\cap Y=\emptyset\)_; in particular,_ \(z\notin Y\)_;_ (b) _if_ \(x\in I\setminus Y\) _and_ \(i\geqslant 1\) _is minimal such that_ \(f^{i}(x)\in Y\)_, then_ \(f^{i}(x)\in(z-\varepsilon_{0},z+\varepsilon_{0})\)_;_ (c) \(f(Y)\supset Y\)_;_ (d) _let_ \(F=\sigma^{\tau}\) _denote the first return map to_ \(\Sigma_{N,K}\)_; then_ \(\#\{w\in\Sigma_{N,K}:\tau(w)=n\}=\mathcal{O}(e^{\eta n})\)_;_ (e) \(F:\Sigma_{N,K}\circlearrowright\) _is topologically mixing._ Proof.: Fix \(\varepsilon_{0},\eta>0\). For \(\tilde{N}\in\mathbb{N}\), we will choose a pair of \(\tilde{N}\)-cylinders \(w_{L},w_{R}\) which are adjacent to \(z\) (to the left and right of this point respectively) and observe that for \(\tilde{N}\) large, \(f^{i}(w_{L}),f^{i}(w_{R})\) are \((\tilde{N}-i)\)-cylinders for \(i=1,\ldots,p-1\). 
If \(i\in\{1,\ldots,p\}\) is minimal with \(z=f^{i+k_{0}}(c)\), then define \(w_{L}^{\prime}=f^{p-i}(w_{L}),w_{R}^{\prime}=f^{p-i}(w_{R})\) and set \(w_{j,L},w_{j,R}\) to be the \((\tilde{N}-p+i+j)\)-cylinders along the orbit segment \(\{f(c),\ldots,f^{k_{0}-1}(c)\}\) mapping to \(w_{L}^{\prime},w_{R}^{\prime}\) by \(f^{j}\) for \(j=0,\ldots,k_{0}-1\). Then define \(Z_{\tilde{N}}:=\{w_{j,L},w_{j,R}:j=0,\ldots,k_{0}-1\}\cup\{f^{i}(w_{L}),f^{i}(w_{R}):i=0,\ldots,p-1\}\) to be the set of cylinders we remove from \(\Sigma\). The deepest cylinders here are either (i) \(w_{k_{0}-1,L},w_{k_{0}-1,R}\); or (ii) \(w_{L},w_{R}\). In case (i), these are \(N\)-cylinders, where \(N=\tilde{N}-p+i+k_{0}-1\). Since, for example, \(w_{k_{0}-1-i,L}\) consists of at most \(M^{i}\) \(N\)-cylinders (a significant over-estimate), we have removed at most \(K=2\frac{M^{k_{0}+p-1}-1}{M-1}\) \(N\)-cylinders. So for \(\eta>0\), let \(N_{0}=N(\eta,K)\in\mathbb{N}\) be as in Lemma 2.2. Set\({}^{2}\) \(\Sigma^{\prime}=\Sigma^{\prime}(N_{0}):=\Sigma\setminus Z_{\tilde{N}_{0}}\) (here \(\tilde{N}_{0}=N_{0}+p-i-k_{0}+1\)) and let \(F=\sigma^{\tau}\) be the first return map to \(\Sigma^{\prime}\). Lemma 2.2 implies that the number of \(n\)-cylinders with first return time \(n\) is \(O(e^{\eta n})\). Case (ii) follows similarly. Footnote 2: Observe that we are removing a neighbourhood of the orbit of the critical _value_: the critical point is not removed. Define \(Y=\pi(\Sigma^{\prime})\). Property (a) of the lemma is obvious. The choice of \(\tilde{N}_{0}\) above guarantees property (d). The definition of \(Z_{\tilde{N}_{0}}\) guarantees that returns to \(Y\) must occur in a neighbourhood of \(z\) and that \(f(Y)\supset Y\), and choosing \(\tilde{N}\) sufficiently large forces this neighbourhood to intersect \((z-\varepsilon_{0},z+\varepsilon_{0})\), so (b) and (c) hold. Finally, (e) follows from the mixing of \(\sigma\). We will make our final choice of \(Y\) once we choose \(\varepsilon_{0}\) before Remark 2.8. We will abuse notation and let \(F=f^{\tau}\) denote the first return map to \(Y\) as well. Let \(\{I_{i}\}_{i}\) be the domains (intervals) of monotonicity of the first return map and, for brevity, write \(\tau_{i}=\tau|_{I_{i}}\). Note that each \(I_{i}\) is associated to a cylinder as in the symbolic model. **Lemma 2.4**.: _There exists \(P\in\mathbb{N}_{0}\) such that each \(I_{i}\) contains a periodic point of prime period \(\tau_{i}+k\) with \(k\leqslant P\)._ Proof.: Since \(I_{i}\) corresponds to some \(n\)-cylinder \([x_{0}w_{i_{1}}w_{i_{2}}\cdots w_{i_{j}}x_{n-1}]\), the lemma is saying that there is a chain of allowable transitions \(x_{n-1}\mapsto y_{1}\mapsto\cdots\mapsto y_{k-1}\mapsto x_{0}\), for \(k\leqslant P\), which follows since the shift is an SFT. **Proposition 2.5**.: (a) \(F\) _has finitely many images, i.e.,_ \(\{F(I_{i})\}_{i}\) _is a finite set;_ (b) _(bounded distortion) there exists_ \(C_{d}>0\) _such that if_ \(x,y\) _belong to the same_ \(n\)_-cylinder for_ \(F\)_, then_ \[\left|\frac{DF^{n}(x)}{DF^{n}(y)}-1\right|\leqslant C_{d}|F^{n}(x)-F^{n}(y)|\,;\] (c) \(|\{\tau=n\}|=O\left(e^{-n(\log\lambda_{per}-\eta)}\right)\)_, where_ \(|\cdot|\) _denotes the Lebesgue measure of the set._ Proof.: Property (a) follows from the SFT structure and Property (b) follows from the Koebe Lemma added to the fact that we removed a neighbourhood of the orbit of the critical value to create \(Y\). To prove (c), by Lemma 2.4, given \(I_{i}\), there is a periodic point \(x\in I_{i}\) of period \(\tau_{i}+k\) for \(0\leqslant k\leqslant P\). 
Thus, by Proposition 2.1(1), \[|Df^{\tau_{i}}(x)|\geqslant\lambda_{per}^{\tau_{i}+k}|Df|_{\infty}^{-k}\geqslant\left(\frac{\lambda_{per}}{|Df|_{\infty}}\right)^{P}\lambda_{per}^{\tau_{i}}=\tilde{K}\lambda_{per}^{\tau_{i}}. \tag{2.1}\] Since we also have bounded distortion, we see that \(|I_{i}|=O(\lambda_{per}^{-\tau_{i}})\) and hence, applying Lemma 2.2, \(|\{\tau=n\}|=O\left(e^{-n(\log\lambda_{per}-\eta)}\right)\), as required. In the light of this result, we will assume \(\eta\in\left(0,\frac{1}{2}\log\lambda_{per}\right)\) from here on (we make a final choice for \(\eta\) before Remark 2.8). **Remark 2.6**.: _The cylinder structure here means that \(\left(\cup_{k\geqslant 1}f^{k}(\partial Y)\right)\cap(Int(Y^{c})\setminus\operatorname{orb}(f(c)))=\emptyset\). In particular, supposing that \(I_{i}\) has \(f^{k}(I_{i})=(x,y)\) in a neighbourhood of \(z\) and \(y<z\), then there will be a set of domains adjacent to \(I_{i}\) such that the closure of the union of their \(f^{k}\) iterates covers \((x,z)\). (Similarly if \(f^{k}(I_{i})\) lies to the right of \(z\).)_ Recall that \(\mu\) is the unique acip for \(f\) according to Proposition 2.1. Define \(\mu_{Y}:=\frac{\mu|_{Y}}{\mu(Y)}\), which by Kac's Lemma is an \(F\)-invariant probability measure. We can recover \(\mu\) from \(\mu_{Y}\) by the well-known formula, \[\mu(A)=\mu(Y)\sum_{i}\sum_{j=0}^{\tau_{i}-1}\mu_{Y}(I_{i}\cap f^{-j}(A)),\ \ \text{for}\ A\subset I. \tag{2.2}\] Note that \(f^{k_{1}}\) in a neighbourhood of \(c\) is a 2-to-1 map composed with a diffeomorphism. The Hartman-Grobman Theorem implies that \(f^{p}\) restricted to any small enough neighbourhood of \(z\), the linearisation domain, is conjugated to the transformation \(x\mapsto\lambda_{z}x\) and is indeed asymptotic to this for \(x\) close to \(z\). Let \(\varepsilon_{0}>0\) be such that this theorem holds in \((z-\varepsilon_{0},z+\varepsilon_{0})\). For \(\varepsilon\leqslant\varepsilon_{0}\), let \(\delta=\delta(\varepsilon)\) be such that \(f^{k_{1}}|_{(c-\delta,c+\delta)}\) is 2-to-1 onto either \((z-\varepsilon,z]\) or \([z,z+\varepsilon)\). Set \(\lambda_{\delta}:=\lambda_{(c-\delta(\varepsilon_{0}),c+\delta(\varepsilon_{0}))}\) and \(K_{\delta}:=K_{(c-\delta(\varepsilon_{0}),c+\delta(\varepsilon_{0}))}\) from Proposition 2.1. Then choose \(\eta>0\) such that \(\eta<\min\{\frac{1}{2}\log\lambda_{per},\log\lambda_{\delta}\}\). Now with \(\varepsilon_{0}\) and \(\eta\) fixed, we choose \(N_{1}\geqslant N(\eta,K)\) so that the cylinders making up \(Z_{\tilde{N}_{1}}\) from the proof of Lemma 2.3 are contained in \(\cup_{i=1}^{p}f^{i}(z-\varepsilon_{0},z+\varepsilon_{0})\) and adjust our inducing domain to \(\Sigma^{\prime}=\Sigma^{\prime}(N_{1})\), thus also adjusting \(Y\). **Definition 2.7**.: _With \(\varepsilon_{0}\), \(\eta\) and \(N_{1}\) fixed as above, we make our final choice of \(Y\), which will remain fixed from here on._ With \(Y\) fixed, we choose \(\varepsilon_{1}\in(0,\varepsilon_{0}]\) so that \(Y\cap\left(\cup_{i=1}^{p}f^{i}(z-\varepsilon_{1},z+\varepsilon_{1})\right)=\emptyset\). **Remark 2.8**.: 1. _Our choice of cylinders around_ \(\operatorname{orb}(z)\) _means that any domains of the first return map to_ \(Y\) _can enter_ \((z-\varepsilon_{1},z+\varepsilon_{1})\) _at most once, since the linear structure of the dynamics in this region means that any domain that does so must be 'spat out' into_ \(Y\)_, and thus have made a return, before it can escape the linearisation domain._ 2. 
_Moreover, due to the periodic way our cylinders_ \(Z_{\tilde{N}}\) _were chosen, if_ \(f^{k}(I_{i})\) _for_ \(k<\tau_{i}\) _is in the neighbourhood of_ \(z\)_, then it cannot return to_ \(Y\) _in less than_ \(p\) _steps. Indeed,_ \(\tau_{i}-k\) _is a multiple of_ \(p\)_._ It will be convenient to treat the left and right neighbourhoods of \(z\) separately. For this purpose, we introduce the following notation for asymmetric holes. Given \(\varepsilon_{L},\varepsilon_{R}>0\), we write \(\boldsymbol{\varepsilon}=\{\varepsilon_{L},\varepsilon_{R}\}\) and define the corresponding hole at \(z\) by \(H_{\boldsymbol{\varepsilon}}(z)=(z-\varepsilon_{L},z+\varepsilon_{R})\). We will abuse notation in the following ways: we write \(\boldsymbol{\varepsilon}=0\) to mean \(\varepsilon_{L}=\varepsilon_{R}=0\), and we write \(\boldsymbol{\varepsilon}<\varepsilon\) (also \(\boldsymbol{\varepsilon}\in(0,\varepsilon)\)) to mean \(\varepsilon_{L},\varepsilon_{R}<\varepsilon\) (and \(\varepsilon_{L},\varepsilon_{R}\in(0,\varepsilon)\)). We extend the definition of \(\delta(\varepsilon)\) to \(\delta(\boldsymbol{\varepsilon})\) in the natural way, noting that, for example, in the orientation preserving case \(\delta(\boldsymbol{\varepsilon})\) will only depend on one of \(\varepsilon_{L}\) and \(\varepsilon_{R}\). For notation, we will denote by \((z-\varepsilon_{L})_{i}\), \((z+\varepsilon_{R})_{i}\) the local inverse of \(f^{ip}\) applied to \(z-\varepsilon_{L}\), \(z+\varepsilon_{R}\). For a hole \(H_{\boldsymbol{\varepsilon}}(z)\), we denote by \(H^{\prime}_{\boldsymbol{\varepsilon}}\) the set of intervals \(J\) in \(Y\) such that \(f^{s}(J)\subset H_{\boldsymbol{\varepsilon}}\) for some \(s<\tau|_{J}\). Thus \(H^{\prime}_{\boldsymbol{\varepsilon}}\) represents the 'induced hole' for the first return map \(F\). In fact, due to the construction here, each such \(J\) will be one of the domains \(I_{i}\). The main expression we must estimate is \(\frac{\mu(H^{\prime}_{\boldsymbol{\varepsilon}})}{\mu(H_{\boldsymbol{\varepsilon}})}\) for appropriately chosen \(\varepsilon_{L},\varepsilon_{R}\): we will show how this allows us to estimate \(\frac{\mu(H^{\prime}_{\varepsilon})}{\mu(H_{\varepsilon})}\) for all sufficiently small \(\varepsilon>0\) in Section 5. ### Chain structure Let the interval of \(I\setminus Y\) containing \(z\) be denoted by \([a,b]\). Now suppose \(I_{i}\) is a domain of \(Y\) with \(f^{s}(I_{i})\subset[a,b]\) with \(1\leqslant s\leqslant\tau_{i}\). Since there is an interval \(U\) closer to \(z\) than \(f^{s}(I_{i})\) with \(f^{p}(U)=f^{s}(I_{i})\), by the first return structure there must be some \(I_{i^{\prime}}\) with \(f^{s}(I_{i^{\prime}})=U\). Iterating this, we call the resulting domains a _chain_. We consider \(I_{i}\) here to have some _depth_ \(k\), and the \(I_{i^{\prime}}\) above to have depth \(k+1\), and write these as \(I_{i}^{k}\) and \(I_{i}^{k+1}\) respectively. For each chain there is an 'outermost' interval which we consider to have depth 1: this is the interval \(I_{i^{\prime\prime}}=I_{i}^{1}\), a domain in \(Y\) where \(f^{s}(I_{i^{\prime\prime}})\subset[a,b]\), but \(f^{s+p}(I_{i^{\prime\prime}})\) is not contained in \([a,b]\). Thus chains are denoted \(\{I_{i}^{k}\}_{k=1}^{\infty}\) and \(f^{s+kp}(I_{i}^{k})=f^{s}(I_{i}^{1})\). Note that the cylinder structure implies \(f^{p}(a),f^{p}(b)\notin(a,b)\). **Lemma 2.9**.: 1. 
_If_ \(f^{p}\) _is orientation preserving around_ \(z\) _then two elements_ \(I_{i}^{k}\) _and_ \(I_{i}^{k^{\prime}}\) _of a chain have depths differing by 1 if and only if they are adjacent to each other. Here_ \(f^{s+p}(I_{i}^{k+1})=f^{s}(I_{i}^{k})\) _for_ \(s\) _as above._ 2. _If_ \(f^{p}\) _is orientation reversing around_ \(z\) _then a chain_ \(\{I_{i}^{k}\}_{k}\) _has a neighbouring chain_ \(\{I_{j}^{k}\}_{k}\) _so that two elements_ \(I_{i}^{k}\) _and_ \(I_{j}^{k^{\prime}}\) _have depths differing by 1 if and only if they are adjacent to each other. Here the_ \(I_{i}^{k},I_{j}^{k+1},I_{i}^{k+2}\) _are adjacent intervals and moreover_ \(f^{s+2p}(I_{i}^{k+2})=f^{s}(I_{i}^{k})\)_._

See Figure 1 for a sketch of case (1) of the lemma.

Figure 1. The chain structure mapped near \(z\). We are assuming that \(f^{p}\) is locally orientation preserving and focussing attention on the left-hand side of \(z\). We sketch some intervals of the images (by \(f^{s}\)) of the chain \(\{I_{i}^{k}\}_{k\geqslant 1}\). Note that here \(\tau|_{I_{i}^{k}}=s+k\).

Proof.: We first assume that \(\{I_{i}^{k}\}_{k}\) is a chain and that \(f^{p}\) is locally orientation preserving around \(z\). Suppose that \(I_{j}^{k}\) is adjacent to \(I_{i}^{1}\), and that for some \(t\geqslant 1\) we have \(f^{t-1}(I_{i}^{1})\cap Y=\emptyset\), but \(f^{t}(I_{i}^{1})\subset Y\). As in Remark 2.8, according to our choice of \(Z_{\tilde{N}_{0}}\) in the proof of Lemma 2.3, \(f^{t}(I_{i}^{1})\) can only reenter \(Y\) in an \(\varepsilon_{0}\)-neighbourhood of \(z\). Note that \(f^{t}(I_{i}^{1})\) must contain either \(a\) or \(b\) in its boundary: we assume this is \(a\). We observe that \(a\) is also a boundary point of \(f^{t}(I_{j}^{k})\) and \(f^{t}(I_{j}^{k})\cap Y=\emptyset\). But then we must have \(f^{t+1}(I_{j}^{k})\subset Y\), and moreover this domain must contain \(a\) in its boundary. Indeed, by the first return structure it must coincide with \(f^{t}(I^{1}_{i})\). But this implies that actually \(I^{k}_{j}=I^{2}_{i}\). This means that the left boundary point of \(f^{t-1}(I^{1}_{i})\) must map by \(f^{p}\) to the other boundary point \(a\). In the same way we see that all the domains \(\{I^{k}_{i}\}_{k}\) are adjacent to each other and have \(f^{p}\) mapping adjacent \(f^{s}\)-images of the chain to each other.

In the orientation reversing case, suppose that \(I^{1}_{i}\) is adjacent to \(I^{k}_{j}\). Then \(f^{s+p}(I^{1}_{i})\subset Y\). The Markov structure implies that \(f^{s+p}(I^{k}_{j})\subset[a,b]\) and that these intervals both contain either \(a\) or \(b\) in their boundary, which means that \(f^{s+2p}(I^{k}_{j})\subset Y\) and so \(k=2\). By the linear structure of \(f^{2p}\) in this region, if \(I^{k}_{\ell}\) is adjacent to \(I^{2}_{j}\), then \(f^{s+2p}(I^{k}_{\ell})=f^{s}(I^{1}_{i})\), so \(I^{k}_{\ell}=I^{3}_{i}\). To prove this starting with \(I^{k}_{i}\) with \(k\geqslant 1\), we iterate the argument by \(f^{(k-1)p}\). The mapping of the \(f^{s}\)-images to each other by \(f^{2p}\) follows as in the orientation preserving case.

In case (2) above we call \(\{I^{k}_{i}\}_{k}\) and \(\{I^{k}_{j}\}_{k}\) _alternating chains_. We also call any \(\{I^{k}_{i}\}_{k\geqslant j}\), for some \(j\geqslant 1\), a _subchain_.
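To fix ideas, here is a back-of-the-envelope illustration of the geometry of a chain in the orientation preserving case, exact only under the simplifying assumption, made here purely for illustration, that \(f^{p}\) is genuinely linear on the linearisation domain (in general it is only asymptotically linear there). Since \(f^{p}\) maps \(f^{s}(I_{i}^{k+1})\) onto \(f^{s}(I_{i}^{k})\) and expands distances to \(z\) by the factor \(\lambda_{z}\), \[f^{p}(x)=z+\lambda_{z}(x-z)\quad\Longrightarrow\quad|f^{s}(I_{i}^{k+1})|=\lambda_{z}^{-1}|f^{s}(I_{i}^{k})|\quad\text{and}\quad\operatorname{dist}(f^{s}(I_{i}^{k}),z)=\lambda_{z}^{-(k-1)}\operatorname{dist}(f^{s}(I_{i}^{1}),z)\,.\] Thus the \(f^{s}\)-images of a chain shrink geometrically and accumulate on \(z\); it is this geometric decay, valid up to multiplicative errors tending to \(1\) with the depth, that is used repeatedly in Sections 3 and 5 below.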
Adding the result of this lemma to the first return structure, we see that the images in \([a,b]\) of any chain must be of the same fixed form: if \(\{I^{k}_{i}\}_{k}\), \(\{I^{k}_{j}\}_{k}\) are two chains which map by \(f^{s}\) and \(f^{t}\) respectively into \([a,b]\), and \(f^{s}(I^{k}_{i})\) and \(f^{t}(I^{k}_{j})\), for some depth \(k\geqslant 1\), are on the same side of \(z\), then these images coincide. Moreover, \(\overline{\cup_{k}f^{s}(I^{k}_{i})}\) will be either \([a,z]\) or \([z,b]\) if \(f^{p}\) is orientation preserving around \(z\), and \(\overline{\cup_{k}f^{s}(I^{k}_{i}\cup I^{k}_{j})}=[a,b]\) if \(f^{p}\) is orientation reversing around \(z\), and the alternating chains are as in the statement of the lemma.

### 2.3. Extra cuts for a 'Markovian' property

Suppose that \(f^{p}\) is orientation preserving around \(z\). Let \(a_{0}=a\) and then inductively let \(f^{p}(a_{i+1})=a_{i}\), so that \(a_{i}\to z\) as \(i\to\infty\). We similarly define \((b_{i})_{i}\) to the right of \(z\), accumulating on \(z\). Observe that domains \(I^{k}_{i}\) of \(F\) which map into \([a,b]\) by some \(f^{s}\) with \(1\leqslant s<\tau_{i}\) must map onto some \((a_{k-1},a_{k})\) or \((b_{k},b_{k-1})\). In the orientation reversing case we define \(a_{0}=a\), and \(a_{1}\) such that \(f^{p}(b)=a_{1}\), and inductively define \(a_{i}\) by \(f^{2p}(a_{i+2})=a_{i}\). Similarly we define \((b_{i})_{i}\). Here if \(f^{s}(I^{k}_{i})\subset[a,b]\) for \(1\leqslant s<\tau_{i}\) then \(f^{s}(I^{k}_{i})\in\{(a_{k-1},a_{k}),(b_{k},b_{k-1})\}\). Note that by bounded distortion, the ratios \(\frac{z-a_{i}}{b_{i}-z}\) will be comparable to \(\frac{z-a}{b-z}\).

The above implies that if we were to take our holes around \(z\) as, say, \((a_{i},b_{j})\), these would be Markov holes in the sense that either an interval of the inducing scheme enters the hole before returning to \(Y\), or it never intersects it. Note that by topological transitivity, the only way \(z\) can fail to be accumulated by domains on both the left and the right as above is if \(z\) is a boundary point of the dynamical core \([f^{2}(c),f(c)]\), a simpler case which needs only obvious modifications.

For generic \(\varepsilon_{L},\varepsilon_{R}\in(0,\varepsilon_{1})\) the corresponding hole will not have this Markov property, so we make an extra cut here: namely, if \(z-\varepsilon_{L}\in f^{s}(I_{i})\) (in the sketch in Figure 1, \(I_{i}=I^{k-2}_{i}\)) then we split \(I_{i}\) into two so that one of the resulting intervals maps by \(f^{s}\) into the hole and one maps outside, and similarly for \(z+\varepsilon_{R}\), thus defining \(F_{\boldsymbol{\varepsilon}}\) from \(F=F_{0}\). For this map, in the generic case, the chains no longer have common boundary points, but now, in the orientation preserving case, they come in pairs \(\{I^{k}_{i}\}_{k},\{\hat{I}^{k}_{i}\}_{k}\), where \(I^{k+1}_{i}\tilde{\cup}\hat{I}^{k}_{i}\) is a domain of \(F\), and \(\tilde{\cup}\) denotes the union of the open intervals along with their common boundary point. In the case where \(f^{p}\) is orientation reversing around \(z\), we may need to cut a domain of \(F\) twice to produce \(F_{\boldsymbol{\varepsilon}}\). For simplicity, we will not address this case here, but all the ideas below go through similarly. From now on, unless indicated otherwise, when we discuss our induced map we mean \(F_{\boldsymbol{\varepsilon}}\).
Remark that due to the extra cuts, although the domains of \(F_{\boldsymbol{\varepsilon}}\) have the property that each one either maps into \(H_{\boldsymbol{\varepsilon}}(z)\) or never intersects it, for generic holes this does not define a countable Markov partition for \(F_{\boldsymbol{\varepsilon}}\). This is because \(F_{\boldsymbol{\varepsilon}}(I_{i})\) may not be a union of \(1\)-cylinders if a boundary point of \(I_{i}\) is created by one of the extra cuts.

We next observe that in the orientation preserving case there is a unique pair of subchains we denote \(\{I_{L}^{k}\}_{k\geqslant k_{L,\boldsymbol{\varepsilon}}},\{\hat{I}_{L}^{k}\}_{k\geqslant k_{L,\boldsymbol{\varepsilon}}}\subset(c-\delta(\boldsymbol{\varepsilon}),c)\) and a unique pair of subchains \(\{I_{R}^{k}\}_{k\geqslant k_{R,\boldsymbol{\varepsilon}}},\{\hat{I}_{R}^{k}\}_{k\geqslant k_{R,\boldsymbol{\varepsilon}}}\subset(c,c+\delta(\boldsymbol{\varepsilon}))\). We call these the _principal subchains_. Note \(\overline{\left(\cup_{k\geqslant k_{L,\boldsymbol{\varepsilon}}}I_{L}^{k}\cup\hat{I}_{L}^{k}\right)\cup\left(\cup_{k\geqslant k_{R,\boldsymbol{\varepsilon}}}I_{R}^{k}\cup\hat{I}_{R}^{k}\right)}=[c-\delta(\boldsymbol{\varepsilon}),c+\delta(\boldsymbol{\varepsilon})]\). We can naturally extend this idea to the orientation reversing case. In either case, \(\delta(\boldsymbol{\varepsilon})\) is determined by only one of \(\varepsilon_{L}\) or \(\varepsilon_{R}\), depending on whether \(f^{k_{1}}(c-\delta(\boldsymbol{\varepsilon}),c+\delta(\boldsymbol{\varepsilon}))=(z-\varepsilon_{L},z]\) or \(f^{k_{1}}(c-\delta(\boldsymbol{\varepsilon}),c+\delta(\boldsymbol{\varepsilon}))=[z,z+\varepsilon_{R})\). Let us call \(\varepsilon_{*}\) this value in \(\{\varepsilon_{L},\varepsilon_{R}\}\).

## 3. Measuring the holes

The aim of this section is to estimate \(\mu(H_{\boldsymbol{\varepsilon}})\), \(\mu(H_{\boldsymbol{\varepsilon}}^{\prime})\) and thus \(\frac{\mu(H_{\boldsymbol{\varepsilon}}^{\prime})}{\mu(H_{\boldsymbol{\varepsilon}})}\).

### 3.1. The contribution from principal subchains

In order to estimate \(\mu(H_{\boldsymbol{\varepsilon}}^{\prime})\) and \(\mu(H_{\boldsymbol{\varepsilon}})\), we first estimate the contribution from the principal subchains.

**Lemma 3.1**.: _For \(x\) close to \(c\),_ \[|x-c|\sim\left(\frac{|f^{k_{1}}(x)-z|}{A|Df^{k_{1}-1}(f(c))|}\right)^{\frac{1}{\ell}}.\]

Proof.: By the Mean Value Theorem there exists \(\theta\in(f(x),f(c))\) such that \(|f^{k_{1}}(x)-f^{k_{1}}(c)|=|Df^{k_{1}-1}(\theta)|\,|f(x)-f(c)|\). Then, \[\frac{|f^{k_{1}}(x)-f^{k_{1}}(c)|}{|x-c|}=\frac{|f(x)-f(c)|}{|x-c|}\frac{|f^{k_{1}}(x)-f^{k_{1}}(c)|}{|f(x)-f(c)|}=\frac{A|x-c|^{\ell}}{|x-c|}|Df^{k_{1}-1}(\theta)|\,,\] where \(A\) is from Section 1. The result follows using \(|Df^{k_{1}-1}(\theta)|\sim|Df^{k_{1}-1}(f(c))|\) by bounded distortion.

We can see from Lemma 3.1 that \(\left|\overline{\cup_{k\geqslant k_{L,\boldsymbol{\varepsilon}}}\left(I_{L}^{k}\cup\hat{I}_{L}^{k}\right)}\right|\sim C_{p}(\varepsilon_{*})^{\frac{1}{\ell}}\) where \(C_{p}:=\frac{1}{(A|Df^{k_{1}-1}(f(c))|)^{\frac{1}{\ell}}}\), i.e. \(\delta(\boldsymbol{\varepsilon})\sim C_{p}(\varepsilon_{*})^{1/\ell}\).
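As a concrete illustration of Lemma 3.1 (under the assumption, which we do not verify here, that this example fits the standing hypotheses), consider the quadratic map \(f(x)=4x(1-x)\): here \(c=\frac{1}{2}\), \(\ell=2\) and \(A=4\) since \(f(x)-f(c)=-4(x-c)^{2}\), while \(f(c)=1\) and \(f^{2}(c)=0\), so \(z=0\) is a fixed postcritical point with \(k_{1}=2\) and \(|Df^{k_{1}-1}(f(c))|=|Df(1)|=4\). Lemma 3.1 then predicts \[|x-c|\sim\left(\frac{|f^{2}(x)|}{16}\right)^{\frac{1}{2}},\] which matches the direct expansion \(f^{2}(x)=16(x-c)^{2}+O((x-c)^{4})\). Likewise \(C_{p}=(A\,|Df(1)|)^{-1/2}=\frac{1}{4}\), so \(\delta(\varepsilon)\sim\frac{1}{4}\sqrt{\varepsilon}\), as one checks directly by solving \(16\delta^{2}=\varepsilon\).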
Since \(c\) is bounded away from the critical orbit, as in [M, Theorem 6.1] (see also [N, Theorem A]) the invariant density is continuous at \(c\) and, defining \(\rho:=\frac{d\mu}{dm}(c)\), the \(\mu_{Y}\)-measure of the principal subchains to the left of \(c\) can be estimated by \[\sum_{k\geqslant k_{L,\boldsymbol{\varepsilon}}}\mu_{Y}(I_{L}^{k}\cup\hat{I}_{L}^{k})\sim\frac{C_{p}\rho}{\mu(Y)}(\varepsilon_{*})^{\frac{1}{\ell}}. \tag{3.1}\] Similarly for \(\sum_{k\geqslant k_{R,\boldsymbol{\varepsilon}}}\mu_{Y}(I_{R}^{k}\cup\hat{I}_{R}^{k})\). As in (2.2), the sum of these two quantities scaled by \(\mu(Y)\) is the relevant contribution to \(\mu(H_{\boldsymbol{\varepsilon}}^{\prime})\).

Now for the contribution to \(\mu(H_{\boldsymbol{\varepsilon}})\), let us suppose that \(f^{p}\) is locally orientation preserving at \(z\) and \(f^{k_{1}-1}\) is also orientation preserving at \(f(c)\) (which implies that \(\varepsilon_{*}=\varepsilon_{L}\)); the other cases follow similarly. Denote \(J^{k}=\cup_{k^{\prime}\geqslant k}I_{L}^{k^{\prime}}\tilde{\cup}\hat{I}_{L}^{k^{\prime}}\). Then \(f^{k_{1}}(J^{k_{L,\boldsymbol{\varepsilon}}+k})=((z-\varepsilon_{L})_{k-1},z)\), so using Lemma 3.1 again, we have \[\mu_{Y}(J^{k_{L,\boldsymbol{\varepsilon}}+k})\sim\frac{\rho C_{p}}{\mu(Y)}|(z-\varepsilon_{L})_{k}-z|^{\frac{1}{\ell}}\sim\frac{\rho C_{p}}{\mu(Y)}\lambda_{z}^{-\frac{k}{\ell}}\varepsilon_{L}^{\frac{1}{\ell}}=\frac{\rho C_{p}}{\mu(Y)}\lambda_{z}^{-\frac{k}{\ell}}\varepsilon_{*}^{\frac{1}{\ell}}\,.\] Then using (2.2), we estimate the (scaled) contribution to \(\mu(H_{\boldsymbol{\varepsilon}})\) from the left of \(c\) by, \[\begin{split}\sum_{k\geqslant k_{L,\boldsymbol{\varepsilon}}}k\mu_{Y}(I_{L}^{k}\cup\hat{I}_{L}^{k})&=\sum_{k\geqslant k_{L,\boldsymbol{\varepsilon}}}\sum_{k^{\prime}\geqslant k}\mu_{Y}(I_{L}^{k^{\prime}}\cup\hat{I}_{L}^{k^{\prime}})=\sum_{k\geqslant k_{L,\boldsymbol{\varepsilon}}}\mu_{Y}(J^{k})\\ &\sim\frac{\rho C_{p}}{\mu(Y)}\varepsilon_{*}^{\frac{1}{\ell}}\sum_{k\geqslant 0}\lambda_{z}^{-\frac{k}{\ell}}=\frac{\rho C_{p}\varepsilon_{*}^{\frac{1}{\ell}}}{\mu(Y)(1-\lambda_{z}^{-\frac{1}{\ell}})}\,.\end{split} \tag{3.2}\] An identical estimate follows for the domains \(\{I_{R}^{k^{\prime}},\hat{I}_{R}^{k^{\prime}}\}_{k^{\prime}\geqslant k}\) from the right of \(c\).

### 3.2. The contribution from non-principal subchains

Now for the subchains that are not in \((c-\delta(\boldsymbol{\varepsilon}),c+\delta(\boldsymbol{\varepsilon}))\), but which map into \(H_{\boldsymbol{\varepsilon}}\) before returning to \(Y\), we will use a different, rougher, type of estimate. First notice that such subchains cannot be contained in \((c-\delta(\varepsilon_{0}),c+\delta(\varepsilon_{0}))\), since if \(I_{i}\subset(c-\delta(\varepsilon_{0}),c+\delta(\varepsilon_{0}))\setminus(c-\delta(\boldsymbol{\varepsilon}),c+\delta(\boldsymbol{\varepsilon}))\), then the repelling structure of our map around \(z\) means that \(I_{i}\) will return to \(Y\) before mapping into \(H_{\boldsymbol{\varepsilon}}\). With this in mind, recall \(\lambda_{\delta}:=\lambda_{(c-\delta(\varepsilon_{0}),c+\delta(\varepsilon_{0}))}\) and \(K_{\delta}:=K_{(c-\delta(\varepsilon_{0}),c+\delta(\varepsilon_{0}))}\) defined after Remark 2.6 when we fixed the definition of \(Y\). We will deal with the orientation preserving case here; the orientation reversing case is similar.
Suppose that \(\{I_{i}^{k}\}_{k\geqslant k_{i}}\), \(\{\hat{I}_{i}^{k}\}_{k\geqslant k_{i}}\) is a pair of non-principal subchains such that \(f^{s}(I_{i}^{k}\cup\hat{I}_{i}^{k})\subset H_{\boldsymbol{\varepsilon}}\) for all \(k\geqslant k_{i}\) (so this is the first time that any element of the subchain enters \(H_{\boldsymbol{\varepsilon}}\)). By the Mean Value Theorem, there is \(x\in I_{i}^{k}\) such that \[|I_{i}^{k}|=|Df^{s}(x)|^{-1}|f^{s}(I_{i}^{k})|\leqslant K_{\delta}^{-1}\lambda_{\delta}^{-s}|f^{s}(I_{i}^{k})|\] by Proposition 2.1. So since \(f^{s}\left(\cup_{k\geqslant k_{i}}(I_{i}^{k}\cup\hat{I}_{i}^{k})\right)\) covers either \((z-\varepsilon_{L},z)\) or \((z,z+\varepsilon_{R})\), we thus obtain an analogue of (3.1): for \(\varepsilon^{\prime}=\max\{\varepsilon_{L},\varepsilon_{R}\}\), \[\mu(Y)\sum_{k\geqslant k_{i}}\mu_{Y}(I_{i}^{k}\cup\hat{I}_{i}^{k})\leqslant K_{\delta}^{-1}\lambda_{\delta}^{-s}\varepsilon^{\prime}.\] We know that there are \(O(e^{s\eta})\) domains \(I_{j}\) which have \(f^{s}(I_{j})=f^{s}(I_{i})\), so, also accounting for the intervals which map to the other side of \(z\), the total measure of non-principal chains contributing to \(\mu(H_{\boldsymbol{\varepsilon}}^{\prime})\) can be estimated by \[2\sum_{s\geqslant 1}K_{\delta}^{-1}e^{s(\eta-\log\lambda_{\delta})}\varepsilon^{\prime}=O(1)\varepsilon^{\prime}, \tag{3.3}\] and recall we have chosen \(\eta\in(0,\log\lambda_{\delta})\) in our definition of \(Y\). Similarly to (3.2) we can obtain an analogous estimate, also \(O(1)\varepsilon^{\prime}\), for the contribution to \(\mu(H_{\boldsymbol{\varepsilon}})\).

### 3.3. The limiting ratio in the periodic postcritical case

We will assume that \(\frac{\varepsilon_{L}}{\varepsilon_{R}}\) is uniformly bounded away from \(0\) and infinity (so in particular \(\varepsilon_{L},\varepsilon_{R}=O(\varepsilon_{*})\)), so that the density spike dominates the measure of our holes. Now recalling (2.2), (3.1) together with (3.3) imply \[\mu(H_{\boldsymbol{\varepsilon}}^{\prime})\sim 2C_{p}\rho(\varepsilon_{*})^{\frac{1}{\ell}}+O(1)\varepsilon_{*}\] while (3.2) and the analogue of (3.3) imply \[\mu(H_{\boldsymbol{\varepsilon}})\sim 2C_{p}\frac{\rho(\varepsilon_{*})^{\frac{1}{\ell}}}{1-\lambda_{z}^{-\frac{1}{\ell}}}+O(1)\varepsilon_{*},\] so the two principal chains dominate this estimate and the limiting ratio is \[\lim_{\boldsymbol{\varepsilon}\to 0}\frac{\mu(H^{\prime}_{\boldsymbol{\varepsilon}})}{\mu(H_{\boldsymbol{\varepsilon}})}=1-\lambda_{z}^{-\frac{1}{\ell}}\,. \tag{3.4}\]

### 3.4. The preperiodic postcritical case

For the case that \(z=f^{k_{1}}(c)\) for \(1\leqslant k_{1}<k_{0}\), we choose the inducing scheme \(Y\) via a minor adaptation of the method described in the proof of Lemma 2.3: properties (a) and (d) of that lemma will follow here along with the condition (b'): if \(x\in I\setminus Y\) and \(i\) is minimal such that \(f^{i}(x)\in Y\), then \(f^{i}(x)\in(f^{k_{0}+p-1}(c)-\varepsilon_{0},f^{k_{0}+p-1}(c)+\varepsilon_{0})\) for some small \(\varepsilon_{0}>0\). In the notation of the proof of that lemma, we choose \(w_{L}\) and \(w_{R}\) adjacent to \(z\) (or just one of these if \(z\in\{f(c),f^{2}(c)\}\) is one of the endpoints of \(I\)) and then, choosing the rest of the cylinders around \(\operatorname{orb}(f(c))\), we let \(Z_{\tilde{N}}:=\{w_{j,L},w_{j,R}:j=0,\ldots,k_{1}-1\}\cup\{f^{i}(w_{L}),f^{i}(w_{R}):i=0,\ldots,k_{0}+p-k_{1}-1\}\) be the sets we remove from \(\Sigma\), where \(f^{k_{1}-j}(w_{j,L})=w_{L}\) and \(f^{k_{1}-j}(w_{j,R})=w_{R}\).
If \(w_{L}\) and \(w_{R}\) are chosen small enough then properties (a), (c) and (d) of Lemma 2.3 hold, as does Proposition 2.5. This construction also guarantees that for small enough \(\boldsymbol{\varepsilon}>0\), each \(I_{i}\) passes at most once through \((z-\varepsilon_{L},z+\varepsilon_{R})\) before returning to \(Y\), ensuring that \(\mu(H^{\prime}_{\boldsymbol{\varepsilon}})=\mu(H_{\boldsymbol{\varepsilon}})\). Moreover, note that an \(I_{i}\) which does pass through \((z-\varepsilon_{L},z+\varepsilon_{R})\) must also pass through a neighbourhood of \(\{f^{k_{1}+1}(c),\ldots,f^{k_{0}+p-1}(c)\}\) before returning to \(Y\).

## 4. Functional framework for \(\beta\)-allowable holes

Our goal in this section is to formalise a functional framework for the transfer operator corresponding to the induced map \(F_{\boldsymbol{\varepsilon}}\) and its punctured counterpart \(\mathring{F}_{\boldsymbol{\varepsilon}}\). In order to do this, we will work with a fixed higher iterate \(n_{0}\) of the induced map and formulate a classification of holes depending on the minimal length of images of \(n_{0}\)-cylinders under \(\mathring{F}_{\boldsymbol{\varepsilon}}^{n_{0}}\) (see the definition of \(\beta\)-allowable in Definition 4.2). Using this control, we prove that the punctured transfer operator enjoys a spectral gap on a space of functions of bounded variation (Theorem 4.7) and use it to prove a local escape rate for \(\mathring{F}_{\boldsymbol{\varepsilon}}\) (Lemma 4.9). We then use this to prove the local escape rate for \(f\) needed for Theorem 1.1 via (4.16), by computing the limits in (4.17).

We begin by formally defining the induced open system. Recall that given a hole \(H_{\boldsymbol{\varepsilon}}(z)=(z-\varepsilon_{L},z+\varepsilon_{R})\), the induced hole \(H^{\prime}_{\boldsymbol{\varepsilon}}\) for \(F_{\boldsymbol{\varepsilon}}\) is the collection of intervals that enter \(H_{\boldsymbol{\varepsilon}}(z)\) before returning to \(Y\). Define the open system for \(n\geqslant 1\) by \[\mathring{F}_{\boldsymbol{\varepsilon}}^{n}=F_{\boldsymbol{\varepsilon}}^{n}|_{\mathring{Y}_{\boldsymbol{\varepsilon}}^{n}},\text{ where }\mathring{Y}_{\boldsymbol{\varepsilon}}^{n}:=\cap_{i=0}^{n-1}F_{\boldsymbol{\varepsilon}}^{-i}(Y\setminus H^{\prime}_{\boldsymbol{\varepsilon}})\,, \tag{4.1}\] i.e. the open system at time \(n\) is the induced map restricted to the set of points that have not entered \(H^{\prime}_{\boldsymbol{\varepsilon}}\) before time \(n\). Note that \(\mathring{Y}_{\boldsymbol{\varepsilon}}^{0}=Y\) and \(\mathring{Y}_{\boldsymbol{\varepsilon}}^{1}=Y\setminus H^{\prime}_{\boldsymbol{\varepsilon}}\).

We first prove the following fact about the map \(F_{\boldsymbol{\varepsilon}}\).

**Lemma 4.1**.: _For all \(n\geqslant 0\), \(|DF_{\boldsymbol{\varepsilon}}^{n}(x)|\geqslant\tilde{K}\lambda_{per}^{\tau^{n}(x)}\geqslant\tilde{K}\lambda_{per}^{n}\), where \(\tilde{K}>0\) is from (2.1)._

Proof.: This follows as in the proof of Proposition 2.5, using Lemma 2.4 to find a point of the right period.

Using Lemma 4.1, we choose \(n_{0}\geqslant 1\) so that \[\inf_{x}|DF_{\boldsymbol{\varepsilon}}^{n_{0}}(x)|>3. \tag{4.2}\] Next we define a parameter to keep track of the minimum size of images of \(n_{0}\)-cylinders under the punctured map \(\mathring{F}_{\boldsymbol{\varepsilon}}^{n_{0}}\).
**Definition 4.2**.: _Let \(\beta\in(0,1/2)\) and let \(\{J_{i}\}_{i}\) denote the set of one-cylinders for \(F_{\boldsymbol{\varepsilon}}^{n_{0}}\) (\(n_{0}\)-cylinders for \(F_{\boldsymbol{\varepsilon}}\))._

* _For an interval_ \(J=(x,y)\)_, we say that_ \(t\in J\) _is_ \(\beta\)-deep _in_ \(J\) _if_ \(t\in[x+\beta|J|,y-\beta|J|]\)_._
* _For our holes, we say that_ \(\boldsymbol{\varepsilon}\) _is_ \(\beta\)-left-allowable _if there is a domain_ \(J_{i}\) _of_ \(F_{\boldsymbol{\varepsilon}}^{n_{0}}\) _with_ \(f^{s}(J_{i})\subset(z-\varepsilon_{1},z)\) _with_ \(1\leqslant s<\tau_{i}\) _and_ \(z-\varepsilon_{L}\) \(\beta\)_-deep in_ \(f^{s}(J_{i})\)_. We similarly define_ \(\beta\)-right-allowable _with respect to_ \((z,z+\varepsilon_{1})\)_. In case_ \(\boldsymbol{\varepsilon}\) _satisfies both of these conditions we call it_ \(\beta\)-allowable._

The property of being \(\beta\)-allowable is important for the following reason.

**Lemma 4.3**.: _(Large images depending on \(\beta\)). Let \(n_{0}\) be chosen as in (4.2). For each \(\beta\in(0,1/2)\), there exists \(C_{\beta}>0\) such that if \(H_{\boldsymbol{\varepsilon}}\) is \(\beta\)-allowable, then \(|F_{\boldsymbol{\varepsilon}}^{n_{0}}(J_{i})|,|\mathring{F}_{\boldsymbol{\varepsilon}}^{n_{0}}(J_{i})|\geqslant C_{\beta}\) for all \(n_{0}\)-cylinders \(J_{i}\)._

Proof.: The property follows immediately from the Markov structure of \(F_{\boldsymbol{\varepsilon}}^{n_{0}}\) together with the assumption that \(H_{\boldsymbol{\varepsilon}}\) is \(\beta\)-allowable. Note that without extra cuts due to the boundary of the hole, the minimum length of the image of any one-cylinder for \(F_{0}^{n_{0}}\) is bounded below away from \(0\) by a number depending only on \(\Sigma^{\prime}(N_{0})\). Next, by definition of \(F_{\boldsymbol{\varepsilon}}^{n_{0}}\), the intervals that are cut by the boundary of the hole are such that \(z-\varepsilon_{L}\) and \(z+\varepsilon_{R}\) are \(\beta\)-deep, by assumption. Thus the length of the image under \(F_{\boldsymbol{\varepsilon}}^{n_{0}}\) is determined by the parameter \(\beta\), together with the distortion constant for \(F_{\boldsymbol{\varepsilon}}\). Given our choice of cuts depending on the boundary of the hole, the set of images for \(\mathring{F}_{\boldsymbol{\varepsilon}}^{n_{0}}\) is simply a subset of the set of images for \(F_{\boldsymbol{\varepsilon}}^{n_{0}}\), so the property holds equally for \(\mathring{F}_{\boldsymbol{\varepsilon}}^{n_{0}}\).

### 4.1. A uniform spectral gap for \(\beta\)-allowable holes

In this section we show that for each fixed \(\beta>0\) and any set of \(\varepsilon_{L},\varepsilon_{R}>0\) that are \(\beta\)-allowable, the transfer operators associated to \(F_{\boldsymbol{\varepsilon}}\) and its punctured counterpart \(\mathring{F}_{\boldsymbol{\varepsilon}}\) have a uniform spectral gap when acting on functions of bounded variation in \(Y\).

Given a measurable function \(\psi:Y\to\mathbb{R}\), define the _variation_ of \(\psi\) on an interval \(J\subset Y\) (or a finite collection of intervals \(J\subset Y\)) by \[\bigvee_{J}\psi=\sup_{x_{0}<x_{1}<\cdots<x_{N}}\sum_{k=1}^{N}|\psi(x_{k})-\psi(x_{k-1})|\,, \tag{4.3}\] where \(\{x_{k}\}_{k=0}^{N}\) is the set of endpoints of a partition of \(J\) into \(N\) intervals, and the supremum ranges over all finite partitions of \(J\). Define the BV norm of \(\psi\) by, \[\|\psi\|_{BV}=\bigvee_{Y}\psi+|\psi|_{L^{1}(m)}\,,\] where \(m\) denotes Lebesgue measure on \(Y\). Let\({}^{3}\) \(BV(Y)\) denote the set of functions \(\{\psi\in L^{1}(m):\|\psi\|_{BV}<\infty\}\).
Footnote 3: By the variation of \(\psi\in L^{1}(m)\), we mean the essential variation, i.e. \(\bigvee_{Y}\psi=\inf_{g}\bigvee_{Y}g\), where the infimum ranges over all functions \(g\) in the equivalence class of \(\psi\).

We shall study the action of the transfer operators associated with \(F_{\boldsymbol{\varepsilon}}\) and \(\mathring{F}_{\boldsymbol{\varepsilon}}\) acting on \(BV(Y)\). For \(\psi\in L^{1}(m)\) define \[\mathcal{L}_{\boldsymbol{\varepsilon}}\psi(x)=\sum_{y\in F_{\boldsymbol{\varepsilon}}^{-1}x}\frac{\psi(y)}{|DF_{\boldsymbol{\varepsilon}}(y)|},\ \text{and for each}\ n\geqslant 0,\ \hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n}\psi=\mathcal{L}_{\boldsymbol{\varepsilon}}^{n}(1_{\mathring{Y}_{\boldsymbol{\varepsilon}}^{n}}\psi)\,.\] We do not claim, or need, that \(1_{\mathring{Y}_{\boldsymbol{\varepsilon}}^{n}}\in BV(Y)\).

**Remark 4.4**.: _Note that for each \(x\in Y\), \(F_{\boldsymbol{\varepsilon}}(x)=F_{0}(x)\). This is easy to see since \(F_{\boldsymbol{\varepsilon}}\) simply introduces extra cuts at the boundary of \(H_{\boldsymbol{\varepsilon}}(z)\), but does not change the orbit of \(x\), while \(F_{0}\) introduces no extra cuts apart from those introduced in the original definition of \(Y\). Thus the 1-cylinders for \(F_{\boldsymbol{\varepsilon}}\) and \(F_{0}\) differ slightly (those for \(F_{\boldsymbol{\varepsilon}}\) can only be smaller), but pointwise the definition of the maps is the same._

Our first result proves a uniform set of Lasota-Yorke inequalities for \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n_{0}}\), depending only on \(\beta\). Let \(\hat{\mathcal{I}}_{\boldsymbol{\varepsilon}}\) denote the (countable) collection of one-cylinders for \(\mathring{F}_{\boldsymbol{\varepsilon}}^{n_{0}}\), and let \(\hat{\mathcal{J}}_{\boldsymbol{\varepsilon}}\) denote the finite set of images of elements of \(\hat{\mathcal{I}}_{\boldsymbol{\varepsilon}}\).

**Proposition 4.5**.: _For any \(\beta>0\) and any \(\beta\)-allowable hole \(H_{\boldsymbol{\varepsilon}}(z)\), for all \(\psi\in BV(Y)\) and all \(k\geqslant 0\),_ \[\bigvee_{Y}\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{kn_{0}}\psi \leqslant (\tfrac{2}{3})^{k}\bigvee_{Y}\psi+(1+C_{d})\big{(}C_{d}+2C_{\beta}^{-1}\big{)}\sum_{j=0}^{k-1}(\tfrac{2}{3})^{j}\int_{\mathring{Y}_{\boldsymbol{\varepsilon}}^{n_{0}(k-j)}}|\psi|\,dm\,, \tag{4.4}\] \[\int_{Y}|\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{k}\psi|\,dm \leqslant \int_{\mathring{Y}_{\boldsymbol{\varepsilon}}^{k}}|\psi|\,dm\,. \tag{4.5}\]

Proof.: The second inequality follows immediately from the definition of \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\), so we will prove the first. In fact, we will prove the inequality for \(k=1\), which can then be iterated trivially to produce (4.4).
For \(\psi\in BV(Y)\), letting \(\{\bar{u}_{j},\bar{v}_{j}\}_{j}\) denote the endpoints of elements of \(\hat{\mathcal{J}}_{\boldsymbol{\varepsilon}}\) and \(\{u_{i},v_{i}\}_{i}\) denote the endpoints of elements of \(\hat{\mathcal{I}}_{\boldsymbol{\varepsilon}}\), we estimate, \[\begin{split}\bigvee_{Y}\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n_{0}}\psi&\leqslant\sum_{J_{j}\in\hat{\mathcal{J}}_{\boldsymbol{\varepsilon}}}\left(\bigvee_{J_{j}}\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n_{0}}\psi+\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n_{0}}\psi(\bar{u}_{j})+\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n_{0}}\psi(\bar{v}_{j})\right)\\ &\leqslant\sum_{I_{i}\in\hat{\mathcal{I}}_{\boldsymbol{\varepsilon}}}\bigvee_{I_{i}}\frac{\psi}{|DF_{\boldsymbol{\varepsilon}}^{n_{0}}|}+\sum_{I_{i}\in\hat{\mathcal{I}}_{\boldsymbol{\varepsilon}}}\left(\frac{|\psi(u_{i})|}{|DF_{\boldsymbol{\varepsilon}}^{n_{0}}(u_{i})|}+\frac{|\psi(v_{i})|}{|DF_{\boldsymbol{\varepsilon}}^{n_{0}}(v_{i})|}\right)\,.\end{split} \tag{4.6}\]

For the first term above, given a finite partition \(\{x_{k}\}_{k=0}^{N}\) of \(I_{i}\), we split the relevant expression into two terms, \[\sum_{k}\left|\frac{\psi(x_{k})}{|DF_{\boldsymbol{\varepsilon}}^{n_{0}}(x_{k})|}-\frac{\psi(x_{k-1})}{|DF_{\boldsymbol{\varepsilon}}^{n_{0}}(x_{k-1})|}\right|\leqslant\frac{1}{3}\bigvee_{I_{i}}\psi+\sum_{k}|\psi(x_{k})|\left|\frac{1}{|DF_{\boldsymbol{\varepsilon}}^{n_{0}}(x_{k})|}-\frac{1}{|DF_{\boldsymbol{\varepsilon}}^{n_{0}}(x_{k-1})|}\right|\,,\] where we have used (4.2) in the first term. For the second term, we use bounded distortion, Proposition 2.5(b), to estimate, \[\left|\frac{1}{|DF_{\boldsymbol{\varepsilon}}^{n_{0}}(x_{k})|}-\frac{1}{|DF_{\boldsymbol{\varepsilon}}^{n_{0}}(x_{k-1})|}\right|\leqslant C_{d}\frac{|F_{\boldsymbol{\varepsilon}}^{n_{0}}(x_{k})-F_{\boldsymbol{\varepsilon}}^{n_{0}}(x_{k-1})|}{|DF_{\boldsymbol{\varepsilon}}^{n_{0}}(x_{k})|}\leqslant(1+C_{d})C_{d}|x_{k}-x_{k-1}|\,,\] where we have applied the mean value theorem to \(F_{\boldsymbol{\varepsilon}}^{n_{0}}\) on \([x_{k-1},x_{k}]\). Putting these estimates together yields, \[\sum_{k}\left|\frac{\psi(x_{k})}{|DF_{\boldsymbol{\varepsilon}}^{n_{0}}(x_{k})|}-\frac{\psi(x_{k-1})}{|DF_{\boldsymbol{\varepsilon}}^{n_{0}}(x_{k-1})|}\right|\leqslant\frac{1}{3}\bigvee_{I_{i}}\psi+C_{d}(1+C_{d})\sum_{k=1}^{N}|\psi(x_{k})|(x_{k}-x_{k-1})\leqslant\frac{1}{3}\bigvee_{I_{i}}\psi+C_{d}(1+C_{d})\int_{I_{i}}|\psi|+\kappa_{N}(\psi)\,,\] where we have recognised the second term as a Riemann sum, and the error term \(\kappa_{N}(\psi)\to 0\) as \(N\to\infty\). Since the variation is attained in the limit of partitions as \(N\to\infty\), we have the following bound on the first term from (4.6), \[\bigvee_{I_{i}}\frac{\psi}{|DF^{n_{0}}_{\boldsymbol{\varepsilon}}|}\leqslant\frac{1}{3}\bigvee_{I_{i}}\psi+C_{d}(1+C_{d})\int_{I_{i}}|\psi|\,dm\,. \tag{4.7}\] Next, for the second term in (4.6), we use the bound, \[|\psi(u_{i})|+|\psi(v_{i})|\leqslant 2\inf_{I_{i}}|\psi|+\bigvee_{I_{i}}\psi\leqslant\frac{2}{|I_{i}|}\int_{I_{i}}|\psi|+\bigvee_{I_{i}}\psi\,. \tag{4.8}\] Then using again bounded distortion together with Lemma 4.3, we have \[|I_{i}|\inf_{I_{i}}|DF^{n_{0}}_{\boldsymbol{\varepsilon}}|\geqslant(1+C_{d})^{-1}|F^{n_{0}}_{\boldsymbol{\varepsilon}}(I_{i})|\geqslant(1+C_{d})^{-1}C_{\beta}\,. \tag{4.9}\]
Putting these estimates together with (4.7) into (4.6), and using again (4.2), we conclude, \[\bigvee_{Y}\hat{\mathcal{L}}^{n_{0}}_{\boldsymbol{\varepsilon}}\psi\leqslant\frac{2}{3}\sum_{i}\bigvee_{I_{i}}\psi+\big{(}C_{d}(1+C_{d})+2(1+C_{d})C_{\beta}^{-1}\big{)}\sum_{i}\int_{I_{i}}|\psi|\leqslant\frac{2}{3}\bigvee_{Y}\psi+(1+C_{d})(C_{d}+2C_{\beta}^{-1})\int_{\mathring{Y}^{n_{0}}_{\boldsymbol{\varepsilon}}}|\psi|,\] which is the required inequality for \(k=1\).

Next, in order to show that \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\) has a uniform spectral gap (depending on \(\beta\), and for \(\boldsymbol{\varepsilon}\) sufficiently small), we will apply the perturbative framework of Keller and Liverani [14]. To this end, define the norm of an operator \(\mathcal{P}:BV(Y)\to L^{1}(m)\) by, \[|||\mathcal{P}|||=\sup\{|\mathcal{P}\psi|_{L^{1}(m)}:\|\psi\|_{BV}\leqslant 1\}\,. \tag{4.10}\]

Our next lemma is standard: \(|||\mathcal{L}_{0}-\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}|||\) is small as a function of \(m(H^{\prime}_{\boldsymbol{\varepsilon}})\).

**Lemma 4.6**.: _For any \(\boldsymbol{\varepsilon}>0\) and \(\boldsymbol{\varepsilon}^{\prime}\in[0,\boldsymbol{\varepsilon})\), \(|||\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}^{\prime}}-\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}|||\leqslant m(H^{\prime}_{\boldsymbol{\varepsilon}}\setminus H^{\prime}_{\boldsymbol{\varepsilon}^{\prime}})\). This holds in particular for \(\boldsymbol{\varepsilon}^{\prime}=0\), in which case \(\hat{\mathcal{L}}_{0}=\mathcal{L}_{0}\) is the unpunctured operator._

Proof.: Let \(\psi\in BV(Y)\). Then, \[\int_{Y}|(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}^{\prime}}-\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}})\psi|\,dm\leqslant\int_{Y}|\psi 1_{H^{\prime}_{\boldsymbol{\varepsilon}}\setminus H^{\prime}_{\boldsymbol{\varepsilon}^{\prime}}}|\,dm\leqslant\|\psi\|_{BV}\,m(H^{\prime}_{\boldsymbol{\varepsilon}}\setminus H^{\prime}_{\boldsymbol{\varepsilon}^{\prime}})\,.\]

**Theorem 4.7**.: _For any \(\beta>0\), there exists \(\varepsilon_{\beta}>0\) such that for any \(\beta\)-allowable hole \(H_{\boldsymbol{\varepsilon}}(z)\) with \(\boldsymbol{\varepsilon}<\varepsilon_{\beta}\), \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\) is a continuous perturbation of \(\mathcal{L}_{0}\). Indeed, it is Hölder continuous in \(m(H^{\prime}_{\boldsymbol{\varepsilon}})\)._

_As a consequence, \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\) has a spectral gap on \(BV(Y)\). In particular, there exist \(\eta_{\beta},B_{\beta}>0\) such that for all \(\boldsymbol{\varepsilon}<\varepsilon_{\beta}\), the spectral radius of \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\) is \(\Lambda_{\boldsymbol{\varepsilon}}<1\) and there exist operators \(\Pi_{\boldsymbol{\varepsilon}},\mathcal{R}_{\boldsymbol{\varepsilon}}:BV(Y)\to BV(Y)\) satisfying \(\Pi_{\boldsymbol{\varepsilon}}^{2}=\Pi_{\boldsymbol{\varepsilon}}\), \(\Pi_{\boldsymbol{\varepsilon}}\mathcal{R}_{\boldsymbol{\varepsilon}}=\mathcal{R}_{\boldsymbol{\varepsilon}}\Pi_{\boldsymbol{\varepsilon}}=0\), and \(\|\mathcal{R}_{\boldsymbol{\varepsilon}}^{n}\|_{BV}\leqslant B_{\beta}\Lambda_{\boldsymbol{\varepsilon}}^{n}e^{-\eta_{\beta}n}\) such that_ \[\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\psi=\Lambda_{\boldsymbol{\varepsilon}}\Pi_{\boldsymbol{\varepsilon}}\psi+\mathcal{R}_{\boldsymbol{\varepsilon}}\psi\,. \tag{4.11}\]
_Moreover, \(\Pi_{\boldsymbol{\varepsilon}}=\mathring{e}_{\boldsymbol{\varepsilon}}\otimes\mathring{g}_{\boldsymbol{\varepsilon}}\) for some \(\mathring{e}_{\boldsymbol{\varepsilon}}\in BV(Y)^{*}\) and \(\mathring{g}_{\boldsymbol{\varepsilon}}\in BV(Y)\) satisfying \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\mathring{g}_{\boldsymbol{\varepsilon}}=\Lambda_{\boldsymbol{\varepsilon}}\mathring{g}_{\boldsymbol{\varepsilon}}\) with \(\int_{Y}\mathring{g}_{\boldsymbol{\varepsilon}}\,dm=1\)._

_Lastly, the spectra and spectral projectors vary (Hölder) continuously as functions of \(\boldsymbol{\varepsilon}\) in the \(|||\cdot|||\)-norm, i.e. as operators from \(BV(Y)\) to \(L^{1}(m)\)._

Proof.: The Lasota-Yorke inequalities of Proposition 4.5 apply also to the unpunctured operator \(\mathcal{L}_{\boldsymbol{\varepsilon}}=\mathcal{L}_{0}\) with \(\mathring{Y}_{\boldsymbol{\varepsilon}}^{n}\) replaced by \(Y\). Thus \(\mathcal{L}_{0}^{n_{0}}\) is quasi-compact on \(BV(Y)\), and since \(\mathcal{L}_{0}\) is also bounded as an operator on \(BV(Y)\) (although we do not obtain the same contraction for one iterate of \(\mathcal{L}_{0}\), the norm estimate as in the proof of Proposition 4.5 is finite), \(\mathcal{L}_{0}\) is also quasi-compact on \(BV(Y)\). Since \(F_{0}\) is mixing by Lemma 2.3(e) and has finite images by Proposition 2.5(a), \(F_{0}\) is covering in the sense of [LSV]. It follows that \(\mathcal{L}_{0}\) has a spectral gap, and then so does \(\mathcal{L}_{0}^{n_{0}}\). Moreover, if \(g_{0}\) is the unique element of \(BV(Y)\) such that \(\mathcal{L}_{0}g_{0}=g_{0}\) and \(\int_{Y}g_{0}\,dm=1\), then [LSV, Theorem 3.1] implies \[c_{g}:=\inf_{Y}g_{0}>0\,. \tag{4.12}\]

Next, due to the uniform Lasota-Yorke inequalities (for fixed \(\beta>0\)) of Proposition 4.5 together with Lemma 4.6 for \(\boldsymbol{\varepsilon}^{\prime}=0\), [KL1, Corollary 1] implies that the spectra and spectral projectors of \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n_{0}}\) outside the disk of radius \(2/3\) vary continuously in \(\boldsymbol{\varepsilon}\) for \(\boldsymbol{\varepsilon}\) sufficiently small (depending on \(\beta\)). The spectral gap and the rest of the spectral decomposition for \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\) then follow from the analogous decomposition for \(\mathcal{L}_{0}\). Lastly, fixing \(\boldsymbol{\varepsilon}<\varepsilon_{\beta}\) and using Lemma 4.6, we apply again [KL1, Corollary 1] to \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\) to conclude that the spectra and spectral projectors of \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}^{\prime}}\) vary Hölder continuously as functions of \(|\boldsymbol{\varepsilon}^{\prime}-\boldsymbol{\varepsilon}|\) whenever \(\boldsymbol{\varepsilon}^{\prime}<\varepsilon_{\beta}\).

The above theorem implies in particular that the size of the spectral gap is at least \(\eta_{\beta}\) for all \(\beta\)-allowable holes \(H_{\boldsymbol{\varepsilon}}(z)\) with \(\boldsymbol{\varepsilon}<\varepsilon_{\beta}\).

### 4.2. Local escape rate

In this section, we will set up the estimates necessary to prove Theorem 1.1 via the induced map \(F_{\boldsymbol{\varepsilon}}\). The strategy is essentially the same as that carried out in [DT2, Section 7], but of course now we are interested in the case in which \(z\) lies in the critical orbit, which was not allowed for geometric potentials in [DT2]. We fix \(\beta>0\) and consider the zero-hole limit as \(\varepsilon\to 0\) for \(\beta\)-allowable holes only.
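Although the arguments here are purely functional-analytic, the leading eigenvalue of a punctured transfer operator is easy to approximate numerically, which gives a useful sanity check on the limits computed below. The following minimal sketch is our own illustration and plays no role in the proofs: it applies Ulam's method to the punctured operator of \(f\) itself (rather than of the induced map \(F_{\boldsymbol{\varepsilon}}\)), for the quadratic map \(f(x)=4x(1-x)\) with a hole at the postcritical fixed point \(z=0\), where \(\lambda_{z}=4\) and \(\ell=2\), assuming this example fits the standing hypotheses.

```python
import numpy as np

# Ulam discretization of the *punctured* transfer operator for
# f(x) = 4x(1-x) with hole H_eps = [0, eps) at the fixed point z = 0.
# Illustration only: the proofs above work with the induced operator
# on Y, not with Ulam's method.

def punctured_ulam(n_bins, eps, n_samples=200):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    P = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        # Push sample points of bin i forward by one step of f.
        x = np.linspace(edges[i], edges[i + 1], n_samples + 2)[1:-1]
        j = np.minimum((4.0 * x * (1.0 - x) * n_bins).astype(int), n_bins - 1)
        np.add.at(P[i], j, 1.0 / n_samples)
    P[:, edges[1:] <= eps] = 0.0  # puncture: kill mass landing in the hole
    return P

eps = 1e-3
P = punctured_ulam(2000, eps)
v = np.ones(P.shape[0])
for _ in range(2000):          # power iteration for the leading eigenvalue
    v = v @ P
    lam = v.sum()              # one-step survival factor
    v /= lam
mu_H = (2.0 / np.pi) * np.arcsin(np.sqrt(eps))  # mu([0,eps)) for the acip
print(-np.log(lam) / mu_H)
```

Here \(-\log\) of the leading eigenvalue approximates the escape rate \(\mathfrak{e}(H_{\varepsilon}(z))\), and the printed ratio should approach \(1-\lambda_{z}^{-1/\ell}=\frac{1}{2}\) as \(\varepsilon\to 0\) and the grid is refined.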
As a first step, we use the spectral gap for \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\) given by Theorem 4.7 to construct an invariant measure \(\nu_{\boldsymbol{\varepsilon}}\) for the induced open map \(\mathring{F}_{\boldsymbol{\varepsilon}}\) supported on the survivor set \(\mathring{Y}_{\boldsymbol{\varepsilon}}^{\infty}:=\cap_{n=1}^{\infty}\mathring{Y}_{\boldsymbol{\varepsilon}}^{n}=\cap_{n=0}^{\infty}\mathring{F}_{\boldsymbol{\varepsilon}}^{-n}(Y)\). Define for \(\psi\in BV(Y)\), \[\nu_{\boldsymbol{\varepsilon}}(\psi):=\lim_{n\to\infty}\Lambda_{\boldsymbol{\varepsilon}}^{-n}\int_{\mathring{Y}_{\boldsymbol{\varepsilon}}^{n}}\psi\,\mathring{g}_{\boldsymbol{\varepsilon}}\,dm\,. \tag{4.13}\]

**Lemma 4.8**.: _Fix \(\beta>0\) and let \(\boldsymbol{\varepsilon}<\varepsilon_{\beta}\) be \(\beta\)-allowable. The limit in (4.13) exists and defines a Borel probability measure, supported on \(\mathring{Y}_{\boldsymbol{\varepsilon}}^{\infty}\), and invariant for \(\mathring{F}_{\boldsymbol{\varepsilon}}\). Moreover, \(\nu_{\boldsymbol{\varepsilon}}\) varies continuously as a function of \(\boldsymbol{\varepsilon}\) (for fixed \(\beta\)) and\({}^{4}\)_ \[-\log\Lambda_{\boldsymbol{\varepsilon}}=\left(\int\tau\,d\nu_{\boldsymbol{\varepsilon}}\right)\mathfrak{e}(H_{\boldsymbol{\varepsilon}}(z))\,, \tag{4.14}\] _where \(\tau\) is the inducing time for \(F_{\boldsymbol{\varepsilon}}\) and \(\mathfrak{e}(H_{\boldsymbol{\varepsilon}}(z))\) is the escape rate for \(f\) from (1.1)._

Footnote 4: Indeed, we show that \(\nu_{\boldsymbol{\varepsilon}}(\tau)\) is continuous in \(\boldsymbol{\varepsilon}\) although \(\tau\notin BV(Y)\).

Proof.: The limit in (4.13) exists due to the spectral decomposition from Theorem 4.7 and the conformality of \(m\): \[\lim_{n\to\infty}\Lambda_{\boldsymbol{\varepsilon}}^{-n}\int_{\mathring{Y}_{\boldsymbol{\varepsilon}}^{n}}\psi\,\mathring{g}_{\boldsymbol{\varepsilon}}\,dm=\lim_{n\to\infty}\int_{Y}\Lambda_{\boldsymbol{\varepsilon}}^{-n}\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n}(\psi\mathring{g}_{\boldsymbol{\varepsilon}})\,dm=\mathring{e}_{\boldsymbol{\varepsilon}}(\psi\mathring{g}_{\boldsymbol{\varepsilon}})\,,\] for any \(\psi\in BV(Y)\), since then also \(\psi\mathring{g}_{\boldsymbol{\varepsilon}}\in BV(Y)\). From (4.13), \(|\nu_{\boldsymbol{\varepsilon}}(\psi)|\leqslant\nu_{\boldsymbol{\varepsilon}}(1)|\psi|_{\infty}\), so that \(\nu_{\boldsymbol{\varepsilon}}\) extends to a bounded linear functional on \(C^{0}(Y)\), i.e. \(\nu_{\boldsymbol{\varepsilon}}\) is a Borel measure, clearly supported on \(\mathring{Y}_{\boldsymbol{\varepsilon}}^{\infty}\). Since \(\nu_{\boldsymbol{\varepsilon}}(1)=1\), \(\nu_{\boldsymbol{\varepsilon}}\) is a probability measure.

Next, we prove that \(\nu_{\boldsymbol{\varepsilon}}\) is continuous as an element of \(BV(Y)^{*}\). Remark that by the above calculation, \(\nu_{\boldsymbol{\varepsilon}}(\psi)=\mathring{e}_{\boldsymbol{\varepsilon}}(\mathring{g}_{\boldsymbol{\varepsilon}}\psi)\) for \(\psi\in BV(Y)\), and when \(\boldsymbol{\varepsilon}=0\), \(\mu_{Y}(\psi)=e_{0}(g_{0}\psi)=\int g_{0}\psi\,dm\), since \(m\) is conformal for \(\mathcal{L}_{0}\). Indeed, \(\mathring{e}_{\boldsymbol{\varepsilon}}\) defines a conformal measure \(\mathring{m}_{\boldsymbol{\varepsilon}}\) for \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\), so that \(\mathring{e}_{\boldsymbol{\varepsilon}}(\psi)=\int_{Y}\psi\,d\mathring{m}_{\boldsymbol{\varepsilon}}\) and \(d\nu_{\boldsymbol{\varepsilon}}=\mathring{g}_{\boldsymbol{\varepsilon}}\,d\mathring{m}_{\boldsymbol{\varepsilon}}\).
Thus, for \(\psi\in BV(Y)\) and \(\boldsymbol{\varepsilon},\boldsymbol{\varepsilon}^{\prime}<\varepsilon_{\beta}\), \[\begin{split}|\nu_{\boldsymbol{\varepsilon}}(\psi)-\nu_{\boldsymbol{\varepsilon}^{\prime}}(\psi)|&\leqslant|\mathring{e}_{\boldsymbol{\varepsilon}}(\mathring{g}_{\boldsymbol{\varepsilon}}\psi-\mathring{g}_{\boldsymbol{\varepsilon}^{\prime}}\psi)|+|\mathring{e}_{\boldsymbol{\varepsilon}}(\mathring{g}_{\boldsymbol{\varepsilon}^{\prime}}\psi)-\mathring{e}_{\boldsymbol{\varepsilon}^{\prime}}(\mathring{g}_{\boldsymbol{\varepsilon}^{\prime}}\psi)|\\ &\leqslant|\psi|_{\infty}|\mathring{g}_{\boldsymbol{\varepsilon}}-\mathring{g}_{\boldsymbol{\varepsilon}^{\prime}}|_{L^{1}(\mathring{m}_{\boldsymbol{\varepsilon}})}+|||\Pi_{\boldsymbol{\varepsilon}}-\Pi_{\boldsymbol{\varepsilon}^{\prime}}|||\,\|\mathring{g}_{\boldsymbol{\varepsilon}^{\prime}}\psi\|_{BV}\,.\end{split} \tag{4.15}\] Both differences go to \(0\) as \(\boldsymbol{\varepsilon}^{\prime}\to\boldsymbol{\varepsilon}\) by Theorem 4.7, while \(\|\mathring{g}_{\boldsymbol{\varepsilon}^{\prime}}\|_{BV}\) is uniformly bounded in \(\boldsymbol{\varepsilon}^{\prime}\) by Proposition 4.5. We conclude that \(\nu_{\boldsymbol{\varepsilon}}\) is continuous in \(\boldsymbol{\varepsilon}\) for fixed \(\beta\) when acting on \(BV\) functions.

It remains to prove (4.14). Unfortunately, \(\tau\notin BV(Y)\), so first we must show that \(\nu_{\boldsymbol{\varepsilon}}(\tau)\) is well-defined. Indeed, it is easy to check that \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\tau\in BV(Y)\). This holds since \(\tau\) is constant on each \(1\)-cylinder \(Y_{i,\boldsymbol{\varepsilon}}\) for \(F_{\boldsymbol{\varepsilon}}\). Thus \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\tau\) has discontinuities only at the endpoints of \(\{\mathring{F}_{\boldsymbol{\varepsilon}}(Y_{i,\boldsymbol{\varepsilon}})\}_{i}\), which is a finite collection of intervals. It follows also that \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}(\tau\mathring{g}_{\boldsymbol{\varepsilon}})\in BV(Y)\). Thus using (4.11), \[\begin{split}\lim_{n\to\infty}\Lambda_{\boldsymbol{\varepsilon}}^{-n}\int_{\mathring{Y}_{\boldsymbol{\varepsilon}}^{n}}\tau\mathring{g}_{\boldsymbol{\varepsilon}}\,dm&=\lim_{n\to\infty}\Lambda_{\boldsymbol{\varepsilon}}^{-n}\int_{Y}\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n-1}(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}(\tau\mathring{g}_{\boldsymbol{\varepsilon}}))\,dm\\ &=\Lambda_{\boldsymbol{\varepsilon}}^{-1}\int\Pi_{\boldsymbol{\varepsilon}}(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}(\tau\mathring{g}_{\boldsymbol{\varepsilon}}))\,dm+\lim_{n\to\infty}\int\Lambda_{\boldsymbol{\varepsilon}}^{-n}\mathcal{R}_{\boldsymbol{\varepsilon}}^{n-1}(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}(\tau\mathring{g}_{\boldsymbol{\varepsilon}}))\,dm\,,\end{split}\] and the second term converges to \(0\) by Theorem 4.7. Thus the limit defining \(\nu_{\boldsymbol{\varepsilon}}(\tau)\) exists and is uniformly bounded in \(\boldsymbol{\varepsilon}\) for fixed \(\beta\). More than this, the above calculation can be improved to show that \(\tau\) is uniformly (in \(\boldsymbol{\varepsilon}\)) integrable with respect to \(\nu_{\boldsymbol{\varepsilon}}\), as follows.
For each \(N>0\), we use the above to estimate, \[\nu_{\boldsymbol{\varepsilon}}(1_{\tau>N}\cdot\tau)=\lim_{n\to\infty}\Lambda_{\boldsymbol{\varepsilon}}^{-n}\int\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n-1}(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}(1_{\tau>N}\cdot\tau\mathring{g}_{\boldsymbol{\varepsilon}}))\,dm\leqslant\Lambda_{\boldsymbol{\varepsilon}}^{-1}|\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}(1_{\tau>N}\cdot\tau\mathring{g}_{\boldsymbol{\varepsilon}})|_{\infty}\,.\] Then using bounded distortion as in (4.19), one has for \(x\in Y\), \[\begin{split}|\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}(1_{\tau>N}\cdot\tau\mathring{g}_{\boldsymbol{\varepsilon}})(x)|&\leqslant\frac{(1+C_{d})|\mathring{g}_{\boldsymbol{\varepsilon}}|_{\infty}}{C_{\beta}}\sum_{\begin{subarray}{c}y\in\mathring{F}_{\boldsymbol{\varepsilon}}^{-1}x\\ \tau(y)>N\end{subarray}}\tau(y)m(Y_{i,\boldsymbol{\varepsilon}}(y))\\ &\leqslant C\sum_{k>N}k\,m(\tau=k)\leqslant C^{\prime}\sum_{k>N}ke^{-k(\log\lambda_{per}-\eta)}\,,\end{split}\] where we have used Proposition 2.5(c). It follows that for each \(\kappa>0\), there exists \(N>0\) such that \(\sup_{\boldsymbol{\varepsilon}\in[0,\varepsilon_{\beta})}\{\nu_{\boldsymbol{\varepsilon}}(1_{\tau>N}\cdot\tau)\}<\kappa\), where the \(\sup\) is restricted to \(\beta\)-allowable values of \(\boldsymbol{\varepsilon}\). Let \(\tau^{(N)}=\min\{\tau,N\}\) and note that \(\tau^{(N)}\in BV(Y)\). Then taking a limit along \(\beta\)-allowable \(\boldsymbol{\varepsilon}^{\prime}\) yields, \[\lim_{\boldsymbol{\varepsilon}^{\prime}\to\boldsymbol{\varepsilon}}|\nu_{\boldsymbol{\varepsilon}}(\tau)-\nu_{\boldsymbol{\varepsilon}^{\prime}}(\tau)|\leqslant\lim_{\boldsymbol{\varepsilon}^{\prime}\to\boldsymbol{\varepsilon}}|\nu_{\boldsymbol{\varepsilon}}(\tau^{(N)})-\nu_{\boldsymbol{\varepsilon}^{\prime}}(\tau^{(N)})|+|\nu_{\boldsymbol{\varepsilon}}(1_{\tau>N}\cdot\tau)|+|\nu_{\boldsymbol{\varepsilon}^{\prime}}(1_{\tau>N}\cdot\tau)|\leqslant 2\kappa\,,\] since we have shown that \(\nu_{\boldsymbol{\varepsilon}^{\prime}}\to\nu_{\boldsymbol{\varepsilon}}\) as elements of \(BV(Y)^{*}\). Since \(\kappa>0\) was arbitrary, this proves that \(\nu_{\boldsymbol{\varepsilon}}(\tau)\) varies continuously in \(\boldsymbol{\varepsilon}\).

Recall that \(\{Y_{i}\}_{i}\) denotes the set of \(1\)-cylinders for \(F_{0}\) and let \(\mathcal{J}_{0}=\{F_{0}(Y_{i})\}_{i}\) denote the finite set of images. The covering property of \(f\) implies that the set of preimages of endpoints of elements of \(\mathcal{J}_{0}\) is dense in \(I\). Thus there is a dense set of \(\boldsymbol{\varepsilon}<\varepsilon_{\beta}\) such that \(F_{\boldsymbol{\varepsilon}}\) admits a countable Markov partition with finite images. For such \(\boldsymbol{\varepsilon}\), [17, Section 6.4.1] implies that \(\nu_{\boldsymbol{\varepsilon}}\) is an equilibrium state for the potential \(-\Xi_{\boldsymbol{\varepsilon}}\cdot\log|DF_{\boldsymbol{\varepsilon}}|-\log\Lambda_{\boldsymbol{\varepsilon}}\), where \(\Xi_{\boldsymbol{\varepsilon}}(x)=1\) if \(x\in Y\setminus H^{\prime}_{\boldsymbol{\varepsilon}}\) and \(\Xi_{\boldsymbol{\varepsilon}}(x)=\infty\) if \(x\in H^{\prime}_{\boldsymbol{\varepsilon}}\). Similarly, [BDM, Lemma 5.3] implies that \(\nu_{\boldsymbol{\varepsilon}}\) is a Gibbs measure for the potential \(-\Xi_{\boldsymbol{\varepsilon}}\cdot\log|DF_{\boldsymbol{\varepsilon}}|-\tau\,\mathfrak{e}(H_{\boldsymbol{\varepsilon}}(z))\) with pressure equal to \(0\). Putting these together yields (4.14) for such 'Markov holes.'
Finally, we extend the relation to all \(\boldsymbol{\varepsilon}\) via the continuity of \(\Lambda_{\boldsymbol{\varepsilon}}\) and \(\nu_{\boldsymbol{\varepsilon}}(\tau)\). Since \(\mathfrak{e}(H_{\boldsymbol{\varepsilon}}(z))\) is monotonic in \(\boldsymbol{\varepsilon}\) and equals \(-\log\Lambda_{\boldsymbol{\varepsilon}}/\nu_{\boldsymbol{\varepsilon}}(\tau)\) on a dense set of \(\boldsymbol{\varepsilon}\), it must also be continuous in \(\boldsymbol{\varepsilon}\). Thus the relation (4.14) holds for all \(\boldsymbol{\varepsilon}<\varepsilon_{\beta}\) which are \(\beta\)-allowable.

With Lemma 4.8 in hand, we see that the limit we would like to compute to prove Theorem 1.1 can be expressed as follows, \[\frac{\mathfrak{e}(H_{\boldsymbol{\varepsilon}}(z))}{\mu(H_{\boldsymbol{\varepsilon}}(z))}=\frac{-\log\Lambda_{\boldsymbol{\varepsilon}}}{\mu_{Y}(H^{\prime}_{\boldsymbol{\varepsilon}})}\cdot\frac{\int\tau\,d\mu_{Y}}{\int\tau\,d\nu_{\boldsymbol{\varepsilon}}}\cdot\frac{\mu(H^{\prime}_{\boldsymbol{\varepsilon}})}{\mu(H_{\boldsymbol{\varepsilon}}(z))}\,, \tag{4.16}\] where as before \(\mu_{Y}=\frac{\mu|_{Y}}{\mu(Y)}\), and \(1/\mu(Y)=\int_{Y}\tau\,d\mu_{Y}\) by Kac's Lemma since \(F_{\boldsymbol{\varepsilon}}\) is a first-return map to \(Y\). Theorem 1.1 will follow once we show that as \(\boldsymbol{\varepsilon}\to 0\), \[\frac{-\log\Lambda_{\boldsymbol{\varepsilon}}}{\mu_{Y}(H^{\prime}_{\boldsymbol{\varepsilon}})}\to 1\,,\quad\frac{\int\tau\,d\mu_{Y}}{\int\tau\,d\nu_{\boldsymbol{\varepsilon}}}\to 1\,,\quad\frac{\mu(H^{\prime}_{\boldsymbol{\varepsilon}})}{\mu(H_{\boldsymbol{\varepsilon}}(z))}\to 1-\lambda_{z}^{-1/\ell}\,. \tag{4.17}\] The third limit above is precisely (3.4) when \(z\in\operatorname{orb}(f(c))\) is periodic (and is simply \(1\) by Section 3.4 when \(z\in\operatorname{orb}(f(c))\) is preperiodic), so we proceed to prove the first two. We first prove these limits for \(\beta\)-allowable holes with a fixed \(\beta>0\) in Lemmas 4.9 and 4.10. Then in Section 5 we show how to obtain the general limit as \(\boldsymbol{\varepsilon}\to 0\).

**Lemma 4.9**.: _For fixed \(\beta>0\) and any sequence of \(\beta\)-allowable holes \(H_{\boldsymbol{\varepsilon}}(z)\),_ \[\lim_{\boldsymbol{\varepsilon}\to 0}\frac{-\log\Lambda_{\boldsymbol{\varepsilon}}}{\mu_{Y}(H^{\prime}_{\boldsymbol{\varepsilon}})}=1\,.\]

Proof.: The lemma could follow using the results of [KL2], yet since \(z\notin Y\) in our setting, it is not clear how to verify the aperiodicity of \(H^{\prime}_{\boldsymbol{\varepsilon}}\) without imposing an additional condition on the reentry of points to \(Y\) which have spent some time in a neighbourhood of \(z\). To avoid this, we will argue directly, as in [17, Proof of Theorem 7.2], yet our argument is simpler since our operators \(\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\) approach a fixed operator \(\mathcal{L}_{0}\) via Lemma 4.6, in contrast to the situation in [17]. We assume that \(\boldsymbol{\varepsilon}<\varepsilon_{\beta}\) so that we are in the setting of Theorem 4.7.
Then since \(g_{0}\in BV(Y)\), iterating (4.11) yields for any \(n\geqslant 1\), \[\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n}g_{0}=\Lambda_{\boldsymbol{\varepsilon}}^{n}\mathring{e}_{\boldsymbol{\varepsilon}}(g_{0})\mathring{g}_{\boldsymbol{\varepsilon}}+\mathcal{R}_{\boldsymbol{\varepsilon}}^{n}g_{0}\implies\mathring{g}_{\boldsymbol{\varepsilon}}=\frac{1}{\mathring{e}_{\boldsymbol{\varepsilon}}(g_{0})}(\Lambda_{\boldsymbol{\varepsilon}}^{-n}\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n}g_{0}-\Lambda_{\boldsymbol{\varepsilon}}^{-n}\mathcal{R}_{\boldsymbol{\varepsilon}}^{n}g_{0})\,.\] Using this relation and Lemma 4.6 yields, \[\begin{split}1-\Lambda_{\boldsymbol{\varepsilon}}&=\int\mathring{g}_{\boldsymbol{\varepsilon}}\,dm-\int\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}\mathring{g}_{\boldsymbol{\varepsilon}}\,dm=\int(\mathcal{L}_{0}-\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}})\mathring{g}_{\boldsymbol{\varepsilon}}\,dm=\int_{H^{\prime}_{\boldsymbol{\varepsilon}}}\mathring{g}_{\boldsymbol{\varepsilon}}\,dm\\ &=\frac{1}{\mathring{e}_{\boldsymbol{\varepsilon}}(g_{0})}\int_{H^{\prime}_{\boldsymbol{\varepsilon}}}(\Lambda_{\boldsymbol{\varepsilon}}^{-n}\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n}g_{0}-\Lambda_{\boldsymbol{\varepsilon}}^{-n}\mathcal{R}_{\boldsymbol{\varepsilon}}^{n}g_{0})\,dm\\ &=\frac{1}{\mathring{e}_{\boldsymbol{\varepsilon}}(g_{0})}\left(\int_{H^{\prime}_{\boldsymbol{\varepsilon}}}g_{0}\,dm-\int_{H^{\prime}_{\boldsymbol{\varepsilon}}}(1-\Lambda_{\boldsymbol{\varepsilon}}^{-n}\hat{\mathcal{L}}_{\boldsymbol{\varepsilon}}^{n})g_{0}\,dm-\int_{H^{\prime}_{\boldsymbol{\varepsilon}}}\Lambda_{\boldsymbol{\varepsilon}}^{-n}\mathcal{R}_{\boldsymbol{\varepsilon}}^{n}g_{0}\,dm\right)\,.\end{split} \tag{4.18}\] By Theorem 4.7, the third term on the right side of (4.18) is bounded by
Next, if \(Y_{i,j,\varepsilon}(y)\) is the \(jn_{0}\)-cylinder for \(F^{jn_{0}}_{\mathbf{\varepsilon}}\) containing \(y\), then by bounded distortion and Lemma 4.3, \[|DF^{jn_{0}}(y)|\geqslant\frac{m(F^{jn_{0}}(Y_{i,j,\varepsilon}(y))}{(1+C_{d} )m(Y_{i,j,\varepsilon}(y))}\geqslant\frac{C_{\beta}}{(1+C_{d})m(Y_{i,j, \varepsilon}(y))}\,. \tag{4.19}\] Then, applying also (4.2), we have \[(\mathcal{L}^{n}_{0}-\hat{\mathcal{L}}^{n}_{\mathbf{\varepsilon}})g_{0}(x) \leqslant\frac{(1+C_{d})|g_{0}|_{\infty}}{C_{\beta}}\sum_{j=0}^{k-1}m( \mathring{Y}^{jn_{0}}_{\mathbf{\varepsilon}}\setminus\mathring{Y}^{(j+1)n_{0}}_{ \mathbf{\varepsilon}})3^{k-j}=:\rho_{n}(\mathbf{\varepsilon})\,,\] and note that \(\rho_{n}(\mathbf{\varepsilon})\to 0\) as \(\mathbf{\varepsilon}\to 0\) for fixed \(n=kn_{0}\). Thus, the second term on the right side of (4.18) is estimated by, \[\int_{H^{\prime}_{\mathbf{\varepsilon}}}(1-\Lambda^{-n}_{\mathbf{\varepsilon}}\hat{ \mathcal{L}}_{\mathbf{\varepsilon}})g_{0}\,dm\leqslant\big{(}1-\Lambda^{-n}_{\bm {\varepsilon}}+c_{g}^{-1}\Lambda^{-n}_{\mathbf{\varepsilon}}\rho_{n}(\mathbf{ \varepsilon})\big{)}\mu_{Y}(H^{\prime}_{\mathbf{\varepsilon}}).\] Putting these estimates into (4.18) yields, \[\frac{1-\Lambda_{\mathbf{\varepsilon}}}{\mu_{Y}(H^{\prime}_{\mathbf{\varepsilon}})}= \frac{1}{\mathring{e}_{\mathbf{\varepsilon}}(g_{0})}\left(1+\mathcal{O}(e^{-\eta_{ \beta}n}+(1-\Lambda^{-n}_{\mathbf{\varepsilon}})+\Lambda^{-n}_{\mathbf{\varepsilon}} \rho_{n}(\mathbf{\varepsilon})\right).\] Fixing \(\kappa>0\), first choose \(n=kn_{0}\) so that \(e^{-\eta_{\beta}n}<\kappa\). Next, choose \(\mathbf{\varepsilon}\) so small that by the continuity of the spectral data from the proof of Theorem 4.7, \(|1-\Lambda^{-n}_{\mathbf{\varepsilon}}|<\kappa\), \(\Lambda^{-n}_{\mathbf{\varepsilon}}\leqslant 2\), \(\rho_{n}(\mathbf{\varepsilon})<\kappa\) and \(|\mathring{e}_{\mathbf{\varepsilon}}(g_{0})-1|<\kappa\). This last bound is possible since \(e_{0}(g_{0})=\int g_{0}\,dm=1\). Thus the relevant expression is \(1+\mathcal{O}(\kappa)\) for \(\mathbf{\varepsilon}\) sufficiently small and \(\beta\)-allowable. Since \(\kappa\) is arbitrary, the lemma is proved. Recall that the inducing time \(\tau\) for \(F_{\mathbf{\varepsilon}}\) does not depend on \(\mathbf{\varepsilon}\). **Lemma 4.10**.: _For fixed \(\beta>0\) and any sequence of \(\beta\)-allowable holes \(H_{\mathbf{\varepsilon}}(z)\),_ \[\lim_{\mathbf{\varepsilon}\to 0}\frac{\int\tau\,d\mu_{Y}}{\int\tau\,d\nu_{\mathbf{ \varepsilon}}}=1\,.\] Proof.: As above, we assume \(\boldsymbol{\varepsilon}<\varepsilon_{\beta}\). Recall that \(\nu_{0}=\mu_{Y}\) and \(e_{0}(\psi)=\int\psi\,dm\) since \(m\) is conformal for \(\mathcal{L}_{0}\). Thus the estimates (4.15) and following in the proof of Lemma 4.8 hold equally well with \(\boldsymbol{\varepsilon}^{\prime}=0\) throughout. Thus the continuity of \(\nu_{\boldsymbol{\varepsilon}}(\tau)\) extends to \(\boldsymbol{\varepsilon}=0\). This, plus the fact that \(\nu_{\boldsymbol{\varepsilon}}(\tau)\geqslant 1\) since \(\tau\geqslant 1\) implies the required limit. ## 5. Completion of the proof of Theorem 1.1 via approximation Putting together Section 3.3 and Lemmas 4.9 and 4.10, we have proved Theorem 1.1 for each \(\beta>0\) along sequences \((\boldsymbol{\varepsilon}_{n})_{n}\) where each \(\boldsymbol{\varepsilon}_{n}\) is \(\beta\)-allowable. 
It remains to consider the alternative case where we have to approximate a given sequence \(\varepsilon_{n}\) by \(\beta\)-allowable \(\boldsymbol{\varepsilon}_{n}^{\prime}\). We will focus our estimates on approximating on the left, with the right-hand side following similarly. We also start by assuming \(z\in\operatorname{orb}(f(c))\) is periodic. We remark that if \(H_{\boldsymbol{\varepsilon}}\) is \(\beta\)-allowable, then it is also \(\beta^{\prime}\)-allowable for any \(\beta^{\prime}<\beta\), so we will take our approximating sequence with \(\beta\) tending to \(0\). Without loss of generality and for convenience, we assume \(\beta<(2\lambda_{z})^{-1}\). Recall the notation \((a_{i})_{i},(b_{i})_{i}\) from Section 2.3, i.e. \((a_{i},a_{i+1})\) are the \(f^{s}\)-images of subchains of intervals accumulating on \(z\) from the left, while \((b_{i+1},b_{i})\) accumulate on \(z\) from the right. Recall that \(\frac{z-a_{i}}{b_{i}-z}\) are uniformly bounded away from \(0\) and \(\infty\). As in Lemma 2.9, we have \(f^{p}(a_{i},a_{i+1})=(a_{i-1},a_{i})\) if \(f^{p}\) is orientation preserving at \(z\), and \(f^{2p}(a_{i},a_{i+1})=(a_{i-2},a_{i-1})\) if \(f^{p}\) is orientation reversing. We will assume the orientation preserving case in the following, the orientation reversing case being similar. Each interval \((a_{i},a_{i+1})\) is \(f^{s}(I_{j})\) for some one-cylinder \(I_{j}\) for \(F_{\boldsymbol{\varepsilon}}\). Each \(I_{j}\) in turn is subdivided into a countable union of \(n_{0}\)-cylinders for \(F_{\boldsymbol{\varepsilon}}\) and so \((a_{i},a_{i+1})\) is subdivided into a countable union of \(f^{s}\) images of these \(n_{0}\)-cylinders. Indeed, given the chain structure, the arrangement of \(f^{s}\) images of the \(n_{0}\)-cylinders in \((a_{i},a_{i+1})\) maps precisely under \(f^{p}\) onto the \(f^{s}\) images of the \(n_{0}\)-cylinders in \((a_{i-1},a_{i})\). We describe this structure as follows: each \((a_{i},a_{i+1})\) is subdivided into finitely many intervals (depending on \(n_{0}\)), which we index by \(j\), \(j=1,\ldots,J_{i}\). The end points of these intervals are preimages of the boundaries of one-cylinders in \(Y\), some of which may map onto \(c\) before \(n_{0}\) iterates of \(F_{\varepsilon}\), in which case they are in fact accumulation points of preimages of the chains recalled above. We label these chains of intervals \((c_{i,j,k},c_{i,j,k+1})\) which accumulate on the \(j\)th point in \((a_{i},a_{i+1})\) from the left, and similarly \((d_{i,j,k+1},d_{i,j,k})\) accumulate from the right. Set \(v_{i,j,k}=c_{i,j,k+1}-c_{i,j,k}\). According to our definition, if \(\varepsilon\) is \(\beta\)-left-allowable, then \[z-\varepsilon\in A_{L,\beta}:=\cup_{i,j,k}[c_{i,j,k}+\beta v_{i},c_{i,j,k+1}- \beta v_{i}]\,.\] So if \(\varepsilon\) is not \(\beta\)-left-allowable, then \(z-\varepsilon\in(c_{i,j,k+1}-\beta v_{i,j,k},c_{i,j,k+1}+\beta v_{i,j,k})\) for some \(i,j,k\). Remark that if there is no accumulation of chains at the \(j\)th point, then the index \(k\) is redundant. Also, there is some duplication since \(c_{i,0}=a_{i}\), while \(c_{i,J_{i}}=a_{i+1}\), yet for uniformity of notation we denote each range of non-\(\beta\)-left-allowable values of \(z-\varepsilon\) as above. We approximate \(\varepsilon\) from above by \(\varepsilon_{o}^{L}:=z-(c_{i,j,k+1}-\beta v_{i,j,k})\) and from below by \(\varepsilon_{u}^{L}:=z-(c_{i,j,k+1}+\beta v_{i,j,k})\). 
Both \(\varepsilon_{o}^{L}\) and \(\varepsilon_{u}^{L}\) are \(\beta\)-left-allowable and \(\varepsilon\in(\varepsilon_{u}^{L},\varepsilon_{o}^{L})\). Thus, \[\frac{\varepsilon_{u}^{L}}{\varepsilon_{o}^{L}}\leqslant\frac{\varepsilon_{u}^{L}}{\varepsilon}\leqslant\frac{\varepsilon_{o}^{L}}{\varepsilon}\leqslant\frac{\varepsilon_{o}^{L}}{\varepsilon_{u}^{L}}\,,\] so we focus on estimating the outer two quantities. Notice also that the above implies \(\frac{\varepsilon_{u}^{L}}{\varepsilon_{u}^{R}},\frac{\varepsilon_{u}^{L}}{\varepsilon_{o}^{R}}\) are uniformly bounded away from \(0\) and \(\infty\), due to the uniform bound on \(\frac{z-a_{i}}{b_{i}-z}\), so we may apply (3.4). Now, \[\frac{\varepsilon_{u}^{L}}{\varepsilon_{o}^{L}}=\frac{z-(c_{i,j,k+1}+\beta v_{i,j,k})}{z-(c_{i,j,k+1}-\beta v_{i,j,k})}=1-\frac{2\beta v_{i,j,k}}{z-(c_{i,j,k+1}-\beta v_{i,j,k})}\,.\] Note that \(z-(c_{i,j,k+1}-\beta v_{i,j,k})\geqslant z-a_{i+1}\) while \(v_{i,j,k}\leqslant e_{i}:=a_{i+1}-a_{i}\). But due to the chain structure around \(z\), we have \(z-a_{i+1}>e_{i+1}\sim\lambda_{z}^{-1}e_{i}\), so that \[\frac{\varepsilon_{u}^{L}}{\varepsilon_{o}^{L}}\geqslant 1-\frac{2\beta e_{i}}{e_{i+1}}\gtrsim 1-2\beta\lambda_{z}\,. \tag{5.1}\] In the same way, \(\frac{\varepsilon_{o}^{L}}{\varepsilon_{u}^{L}}\lesssim 1+2\beta\lambda_{z}\). The right hand estimates for the analogous \(\varepsilon_{u}^{R}\) and \(\varepsilon_{o}^{R}\) are similar. To complete the proof of Theorem 1.1, we must evaluate the following limit as \(\varepsilon\to 0\), \[\frac{1}{\mu(z-\varepsilon,z+\varepsilon)}\frac{-1}{n}\log\mu\left(\left\{x\in I:f^{i}(x)\notin(z-\varepsilon,z+\varepsilon),\ i=1,\ldots,n\right\}\right)\,. \tag{5.2}\] We estimate this from below by \[\frac{\mu(z-\varepsilon_{u}^{L},z+\varepsilon_{u}^{R})}{\mu(z-\varepsilon,z+\varepsilon)}\frac{1}{\mu(z-\varepsilon_{u}^{L},z+\varepsilon_{u}^{R})}\frac{-1}{n}\log\mu\left(\left\{x\in I:f^{i}(x)\notin(z-\varepsilon_{u}^{L},z+\varepsilon_{u}^{R}),\ i=1,\ldots,n\right\}\right).\] By the above estimates, \(\frac{\mu(z-\varepsilon_{u}^{L},z+\varepsilon_{u}^{R})}{\mu(z-\varepsilon,z+\varepsilon)}\sim(1-2\beta\lambda_{z})^{\frac{1}{\ell}}\), so taking the limit as \(\varepsilon\to 0\) yields a lower bound of \[(1-2\beta\lambda_{z})^{\frac{1}{\ell}}\cdot\left(1-\lambda_{z}^{-1/\ell}\right)\,,\] where the second factor comes from the application of Theorem 1.1 to \(\beta\)-allowable holes via (4.17). Similarly, one obtains an upper bound for (5.2) of \((1+2\beta\lambda_{z})^{\frac{1}{\ell}}\cdot\left(1-\lambda_{z}^{-1/\ell}\right)\). Since these bounds hold for all sufficiently small \(\beta\), we take \(\beta\to 0\) to obtain the required limit for Theorem 1.1 along an arbitrary sequence \((\varepsilon_{n})_{n}\) in the case that \(z\in\operatorname{orb}(f(c))\) is periodic. Finally notice that if \(z\in\operatorname{orb}(f(c))\) is preperiodic then the above calculations all go through similarly: from the construction of \((Y,F)\) in Section 3.4, the periodic structure of the postcritical orbit can be pulled back to \(z\) to generate the \((a_{i})_{i},\ (b_{i})_{i}\) required, but our bounds are of the form \((1\pm 2\beta\lambda)\) where \(\lambda=|Df^{p}(f^{k_{0}}(z))|\). These tend to \(1\) as \(\beta\to 0\), yielding the required limit in the case that \(z\) is preperiodic, but not periodic. ## 6. Proof of Theorem 1.2 In this section, we prove the cases of Theorem 1.2 in several steps. First, we address the case where \(z\in\operatorname{orb}(f(c))\).
As in the proof of Theorem 1.1, we first fix \(\beta>0\) and consider sequences of holes \(H_{\boldsymbol{\varepsilon}}\) that are \(\beta\)-allowable. Leveraging the existence of the induced map \(F_{0}\) and the local escape rate proved in Theorem 1.1, we prove Theorem 1.2 for sequences of such holes in Section 6.2. Then in Section 6.3, we will let \(\beta\to 0\) to prove the required hitting time statistics for all sequences \(\varepsilon\to 0\). Finally, in Section 6.4 we show how to adapt the results of [BDT] to prove the hitting time statistics when \(z\notin\operatorname{orb}(f(c))\). ### 6.1. A non-Markovian tower with a hole Throughout Sections 6.1 - 6.3 we will assume \(z\in\operatorname{orb}(f(c))\). Recall the induced map \(F_{0}^{n_{0}}\), where \(n_{0}\) is chosen so that (4.2) holds. Indeed, by the proof of Proposition 2.5 (see also Lemma 4.1), there exists \(\gamma>0\) such that \[\inf_{x\in Y}|Df^{\tau^{n_{0}}(x)}(x)|\,e^{-\gamma\tau^{n_{0}}(x)}>2.5\,, \tag{6.1}\] where \(\tau^{n_{0}}=\sum_{k=0}^{n_{0}-1}\tau\circ F_{0}^{k}\). Using \(F_{0}^{n_{0}}\), we define an extended system that resembles a Young tower as follows. Define \[\Delta=\{(y,k)\in Y\times\mathbb{N}:k\leqslant\tau^{n_{0}}(y)-1\}.\] We refer to the \(k\)th level of the tower as \(\Delta_{k}=\{(y,n)\in\Delta:n=k\}\). Sometimes we refer to \((x,k)\in\Delta_{k}\) as \(x\) if \(k\) is clear from context. The dynamics on the tower is defined by \(f_{\Delta}(y,k)=(y,k+1)\) if \(k<\tau^{n_{0}}(y)-1\) and \(f_{\Delta}(y,\tau^{n_{0}}(y)-1)=(F_{0}^{n_{0}}(y),0)\). There is a natural projection \(\pi_{\Delta}:\Delta\to I\) which satisfies \(f\circ\pi_{\Delta}=\pi_{\Delta}\circ f_{\Delta}\). Clearly, \(\pi_{\Delta}(\Delta_{0})=Y\). Remark that \(F_{0}\), \(f_{\Delta}\) and \(\Delta\) depend on \(z\), but not on \(\boldsymbol{\varepsilon}\). It follows from Lemma 2.3(c) and the fact that \(f\) is locally eventually onto that \(f_{\Delta}\) is mixing. Indeed, \(f(Y)\supset Y\) and the chain structure around \(z\) implies \(f^{k_{0}+p}(Y)\supset I\). Then since \(f\) is locally eventually onto, for any interval \(A\subset Y\), there exists \(n_{A}\in\mathbb{N}\) such that \(f^{n_{A}}(A)\supset Y\), and again by Lemma 2.3(c), \(f^{n_{A}+k}(A)\supset Y\) for all \(k\geqslant 0\). Setting \(A_{0}=(\pi_{\Delta}|_{\Delta_{0}})^{-1}(A)\), this implies \(f_{\Delta}^{n_{A}+k}(A_{0})\supset\cup_{j=0}^{k}\Delta_{j}\) for each \(k\geqslant 0\). Thus \(f_{\Delta}\) is topologically mixing.5 Footnote 5: Indeed, the above argument also implies that there exist \(x,y\in A_{0}\) such that \(\tau^{n_{0}}(x)=n_{A}\) and \(\tau^{n_{0}}(y)=n_{A}+1\) so \(\operatorname{g.c.d.}(\tau^{n_{0}})=1\), yet this is not a sufficient condition for \(f_{\Delta}\) to be mixing since our tower has multiple bases. We lift Lebesgue measure to \(\Delta\) simply by defining \(m_{\Delta}|_{\Delta_{0}}=m|_{Y}\) and \(m_{\Delta}|_{\Delta_{k}}=(f_{\Delta}^{k})_{*}(m_{\Delta}|_{\Delta_{0}})\). Note that by Proposition 2.5, \(m_{\Delta}(\tau^{n_{0}}>n)\leqslant Ce^{-\zeta n}\), where \(\zeta=\log\lambda_{per}-\eta>0\) and \(\eta\) is chosen before Remark 2.6. Thus our tower has exponential tails. Now recall \(\alpha\in(0,\infty)\) from the statement of Theorem 1.2. This will determine the scaling at which we consider the hitting time to \(H_{\boldsymbol{\varepsilon}}(z)\). We reduce \(\gamma>0\) in (6.1) if necessary so that \[\gamma<\zeta\ \ \text{and}\ \ \gamma/\zeta<\alpha\,. \tag{6.2}\] Note that the second condition is relevant only if we consider \(\alpha<1\).
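For intuition, the tower dynamics \(f_{\Delta}\) is easy to prototype numerically. The following is a minimal sketch assuming a toy first-return induced map on \(Y=[0,1/2)\) for the doubling map (with \(n_{0}=1\)); it is purely illustrative and is not the induced map constructed in Section 3.4.

```python
# For intuition only: a toy numerical prototype of the tower map f_Delta.
# The induced map below (first return of the doubling map to Y = [0, 1/2))
# is an illustrative assumption, not the induced maps of Section 3.4.
from typing import Tuple

def induced_return(y: float) -> Tuple[float, int]:
    """Return (F(y), tau(y)): the first return of x -> 2x mod 1 to Y."""
    x, t = (2.0 * y) % 1.0, 1
    while x >= 0.5:              # iterate until the orbit re-enters Y
        x, t = (2.0 * x) % 1.0, t + 1
    return x, t

def f_delta(state: Tuple[float, int]) -> Tuple[float, int]:
    """One step on the tower: climb the column, or return to the base."""
    y, k = state
    Fy, tau = induced_return(y)
    if k < tau - 1:
        return (y, k + 1)        # f_Delta(y, k) = (y, k + 1)
    return (Fy, 0)               # f_Delta(y, tau(y) - 1) = (F(y), 0)

state = (0.123, 0)
for _ in range(8):
    state = f_delta(state)
print(state)  # projects via pi_Delta to the doubling-map orbit of 0.123
```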
Given a hole \(H_{\boldsymbol{\varepsilon}}(z)\), define \(H_{\Delta_{k}}(\boldsymbol{\varepsilon})=(\pi_{\Delta}|_{\Delta_{k}})^{-1}(H_{\boldsymbol{\varepsilon}}(z))\) and \(H_{\Delta}(\boldsymbol{\varepsilon})=\cup_{k\geqslant 1}H_{\Delta_{k}}\). The corresponding punctured tower is defined analogously to (4.1) for \(n\geqslant 1\), \[\mathring{f}_{\Delta,\boldsymbol{\varepsilon}}^{n}=f_{\Delta}^{n}|_{\mathring{\Delta}_{\boldsymbol{\varepsilon}}^{n}},\ \text{where}\ \mathring{\Delta}_{\boldsymbol{\varepsilon}}^{n}:=\cap_{i=0}^{n-1}f_{\Delta}^{-i}(\Delta\setminus H_{\Delta}(\boldsymbol{\varepsilon}))\,.\] The main difference between \(f_{\Delta}:\Delta\circlearrowleft\) and the usual notion of a Young tower is that we do not define a Markov structure on \(\Delta\). Yet as demonstrated above, the tower has the following key properties: * Exponential tails: \(m_{\Delta}(\tau^{n_{0}}>n)\leqslant Ce^{-\zeta n}\); * \(f_{\Delta}\) is topologically mixing; * Large images at returns to \(\Delta_{0}\): if \(H_{\boldsymbol{\varepsilon}}(z)\) is \(\beta\)-allowable, then \(C_{\beta}>0\) from Lemma 4.3 provides a lower bound on the length of images in returns to \(\Delta_{0}\) under both \(f_{\Delta}\) and \(\mathring{f}_{\Delta,\boldsymbol{\varepsilon}}\). We will use these properties to prove the existence of a uniform spectral gap for the punctured transfer operators on the tower acting on a space of functions of weighted bounded variation, as follows. The inducing domain \(Y\) is comprised of finitely many maximal connected components that are intervals in \(I\). Let \(\widetilde{\mathcal{D}}_{0}\) denote this collection of intervals. Similarly, for each \(k\geqslant 1\), \(f^{k}(Y\cap\{\tau^{n_{0}}\geqslant k+1\})\) can be decomposed into a set of maximal connected components. We further subdivide these intervals at the first time \(j\leqslant k\) that they contain a point in \(\operatorname{orb}(f(c))\), \(z-\varepsilon_{L}\) or \(z+\varepsilon_{R}\). These are the cuts introduced in the definition of \(F_{\varepsilon}\) in Section 2.3. Let \(\widetilde{\mathcal{D}}_{k}\) denote this collection of intervals and define the collection of lifts by, \[\mathcal{D}_{k}=\{J=(\pi_{\Delta}|_{\Delta_{k}})^{-1}(\tilde{J}):\tilde{J}\in\widetilde{\mathcal{D}}_{k}\}\,,\text{ for all }k\geqslant 0.\] For an interval \(J\in\mathcal{D}_{k}\) and a measurable function \(\psi:\Delta\to\mathbb{R}\), define \(\bigvee_{J}\psi\), the variation of \(\psi\) on \(J\), as in (4.3). On each level \(\Delta_{k}\), \(k\geqslant 1\), define \[\|\psi\|_{\Delta_{k}}=e^{-\gamma k}\sum_{J\in\mathcal{D}_{k}}\bigvee_{J}\psi+e^{-\gamma k}\sup_{\Delta_{k}}|\psi|\,.\] Finally, define the weighted variation norm on \(\Delta\) by, \[\|\psi\|_{WV}=\sum_{k\geqslant 0}\|\psi\|_{\Delta_{k}}\,.\] If \(\|\psi\|_{WV}<\infty\) then \(|\psi|_{L^{1}(m_{\Delta})}<\infty\) since \(\gamma<\zeta\) by (6.2). So we denote by \(WV(\Delta)\) the set of functions \(\psi\in L^{1}(m_{\Delta})\) such that \(\|\psi\|_{WV}<\infty\). Since we consider \(WV(\Delta)\) as a subset of \(L^{1}(m_{\Delta})\), we define the variation norm of the equivalence class to be the infimum of variation norms of functions in the equivalence class. We define the transfer operator \(\mathcal{L}_{\Delta}\) corresponding to \(f_{\Delta}\) acting on \(L^{1}(m_{\Delta})\) in the natural way, \[\mathcal{L}_{\Delta}\psi(x)=\sum_{y\in f_{\Delta}^{-1}(x)}\frac{\psi(y)}{Jf_{\Delta}(y)}\,,\] where \(Jf_{\Delta}\) is the Jacobian of \(f_{\Delta}\) with respect to \(m_{\Delta}\).
Then the transfer operator for the punctured tower is defined for each \(n\geqslant 1\) by, \[\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}^{n}\psi=\mathcal{L}_{\Delta}^{n}(1_{\mathring{\Delta}_{\boldsymbol{\varepsilon}}^{n}}\psi)\,.\] Our goal is to prove the following proposition, which is the analogue of Theorem 4.7, but for the tower map rather than the induced map. **Proposition 6.1**.: _For any \(\beta>0\), there exists \(\varepsilon_{\beta}(\Delta)\) such that for any \(\beta\)-allowable hole \(H_{\boldsymbol{\varepsilon}}(z)\) with \(\boldsymbol{\varepsilon}<\varepsilon_{\beta}(\Delta)\), both \(\mathcal{L}_{\Delta}\) and \(\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\) have a spectral gap on \(WV(\Delta)\)._ _Specifically, there exist \(\sigma_{\beta},A_{\beta}>0\) such that for all \(\beta\)-allowable \(\boldsymbol{\varepsilon}<\varepsilon_{\beta}(\Delta)\), the spectral radius of \(\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\) is \(\lambda_{\Delta,\boldsymbol{\varepsilon}}<1\) and there exist operators \(\Pi_{\Delta,\boldsymbol{\varepsilon}},\mathcal{R}_{\Delta,\boldsymbol{\varepsilon}}:WV(\Delta)\circlearrowleft\) satisfying \(\Pi_{\Delta,\boldsymbol{\varepsilon}}^{2}=\Pi_{\Delta,\boldsymbol{\varepsilon}}\), \(\Pi_{\Delta,\boldsymbol{\varepsilon}}\mathcal{R}_{\Delta,\boldsymbol{\varepsilon}}=\mathcal{R}_{\Delta,\boldsymbol{\varepsilon}}\Pi_{\Delta,\boldsymbol{\varepsilon}}=0\), and \(\|\mathcal{R}_{\Delta,\boldsymbol{\varepsilon}}^{n}\|_{WV}\leqslant A_{\beta}\lambda_{\Delta,\boldsymbol{\varepsilon}}^{n}e^{-\sigma_{\beta}n}\) such that_ \[\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\psi=\lambda_{\Delta,\boldsymbol{\varepsilon}}\Pi_{\Delta,\boldsymbol{\varepsilon}}\psi+\mathcal{R}_{\Delta,\boldsymbol{\varepsilon}}\psi\,,\text{ for all }\psi\in WV(\Delta).\] _Indeed, \(\Pi_{\Delta,\boldsymbol{\varepsilon}}=\mathring{e}_{\Delta,\boldsymbol{\varepsilon}}\otimes\mathring{g}_{\Delta,\boldsymbol{\varepsilon}}\) for some \(\mathring{e}_{\Delta,\boldsymbol{\varepsilon}}\in WV(\Delta)^{*}\) and \(\mathring{g}_{\Delta,\boldsymbol{\varepsilon}}\in WV(\Delta)\) satisfying \(\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\mathring{g}_{\Delta,\boldsymbol{\varepsilon}}=\lambda_{\Delta,\boldsymbol{\varepsilon}}\mathring{g}_{\Delta,\boldsymbol{\varepsilon}}\) and \(\int_{\Delta}\mathring{g}_{\Delta,\boldsymbol{\varepsilon}}\,dm_{\Delta}=1\)._ The proof of this proposition is based on the following sequence of lemmas, which prove the compactness of \(WV(\Delta)\) in \(L^{1}(m_{\Delta})\), uniform Lasota-Yorke inequalities for \(\mathring{\mathcal{L}}_{\Delta,\mathbf{\varepsilon}}\), and the smallness of the perturbation when viewing \(\mathcal{L}_{\Delta}-\mathring{\mathcal{L}}_{\Delta,\mathbf{\varepsilon}}\) as an operator from \(WV(\Delta)\) to \(L^{1}(m_{\Delta})\). **Lemma 6.2**.: _The unit ball of \(WV(\Delta)\) is compactly embedded in \(L^{1}(m_{\Delta})\)._ Proof.: If \(\|\psi\|_{WV}\leqslant 1\), then restricted to each \(J\in\mathcal{D}_{k}\), the variation of \(\psi\) is at most \(e^{\gamma k}\) and \(\sup_{J}|\psi|\leqslant e^{\gamma k}\). Thus if \(B_{1}\) is the ball of radius \(1\) in \(WV(\Delta)\), then \(B_{1}|_{J}\) is compactly embedded in \(L^{1}(m_{\Delta}|_{J})\).
Taking a sequence \((\psi_{n})_{n}\subset B_{1}\), we first enumerate the elements of \(\cup_{k\geqslant 0}\mathcal{D}_{k}\) and then use compactness on each \(J\) and a diagonalization procedure to extract a subsequence \((\psi_{n_{k}})_{k}\) which converges on every \(J\in\cup_{k\geqslant 0}\mathcal{D}_{k}\) to a function \(\psi\). \(\psi\) necessarily belongs to \(L^{1}(m_{\Delta})\) due to dominated convergence since \(\sup_{\Delta_{k}}|\psi_{n}|\leqslant e^{\gamma k}\) and the function which is equal to \(e^{\gamma k}\) on \(\Delta_{k}\) for each \(k\) is integrable since \(\gamma<\zeta\). **Lemma 6.3**.: _Assume \(H_{\mathbf{\varepsilon}}(z)\) is \(\beta\)-allowable. Let \(C=(1+C_{d})(C_{d}+2C_{\beta}^{-1})\). For \(k\geqslant 1\), and all \(\psi\in WV(\Delta)\),_ \[\|\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\psi\|_{\Delta_{k}}\leqslant e^{-\gamma}\|\psi\|_{\Delta_{k-1}}\,, \tag{6.3}\] \[\bigvee_{\Delta_{0}}\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\psi\leqslant\tfrac{4}{5}\|\psi\|_{WV}+C\int_{\mathring{\Delta}_{\boldsymbol{\varepsilon}}^{1}}|\psi|\,dm_{\Delta}\,, \tag{6.4}\] \[|\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\psi|_{L^{\infty}(\Delta_{0})}\leqslant\tfrac{2}{5}\|\psi\|_{WV}+C_{\beta}^{-1}(1+C_{d})\int_{\mathring{\Delta}_{\boldsymbol{\varepsilon}}^{1}}|\psi|\,dm_{\Delta}\,, \tag{6.5}\] \[\int_{\Delta}|\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}^{n}\psi|\,dm_{\Delta}\leqslant\int_{\mathring{\Delta}_{\boldsymbol{\varepsilon}}^{n}}|\psi|\,dm_{\Delta}\ \text{ for all }n\geqslant 1\,. \tag{6.6}\] The same estimates hold for \(\mathcal{L}_{\Delta}\) with \(\mathring{\Delta}_{\mathbf{\varepsilon}}^{n}\) replaced by \(\Delta\). Proof.: Note that \(Jf_{\Delta}(x,k)=1\) if \(k<\tau^{n_{0}}(x)-1\). Thus if \(k\geqslant 1\) and \(x\in\Delta_{k}\), then \(\mathring{\mathcal{L}}_{\Delta,\mathbf{\varepsilon}}\psi(x)=(1_{\mathring{\Delta}_{\mathbf{\varepsilon}}^{1}}\psi)\circ f_{\Delta}^{-1}(x)\). Moreover, for \(J\in\mathcal{D}_{k}\) and \(k\geqslant 1\), we have \(f_{\Delta}^{-1}(J)\subset J^{\prime}\in\mathcal{D}_{k-1}\), and by definition of \(\mathcal{D}_{k-1}\), \(J^{\prime}\) is either disjoint from \(H_{\Delta}\) or contained in it. Thus \(1_{\mathring{\Delta}_{\mathbf{\varepsilon}}^{1}}\) is either identically \(0\) or \(1\) on \(J^{\prime}\) and so does not affect the variation. With these points noted, (6.3) holds trivially. Similarly, (6.6) follows immediately from the definition of \(\mathring{\mathcal{L}}_{\Delta,\mathbf{\varepsilon}}\). We proceed to prove (6.4). The proof follows closely the proof of Proposition 4.5. As in Section 4.1, denote by \(\mathcal{J}_{\mathbf{\varepsilon}}\) the finite set of intervals in \(Y\) that are the images of the one-cylinders for \(F_{\mathbf{\varepsilon}}^{n_{0}}\). Since we identify \(\Delta_{0}\) with \(Y\), we will abuse notation and again denote the finite set of images \((\pi_{\Delta}|_{\Delta_{0}})^{-1}(\mathcal{J}_{\mathbf{\varepsilon}})\) in \(\Delta_{0}\) by simply \(\mathcal{J}_{\mathbf{\varepsilon}}\). Let \(\mathcal{I}_{\mathbf{\varepsilon}}\) denote the countable collection of intervals in \(f_{\Delta}^{-1}(\Delta_{0})\) so that for each \(I_{i}\in\mathcal{I}_{\mathbf{\varepsilon}}\), \(f_{\Delta}(I_{i})=J\) for some \(J\in\mathcal{J}_{\mathbf{\varepsilon}}\). We use the notation \(I_{i}=(u_{i},v_{i})\), and \(I_{i,k}\) denotes the intervals in \(\mathcal{I}_{\mathbf{\varepsilon}}\) that lie in \(\Delta_{k}\).
Remark that each \(I_{i,k}\subset J^{\prime}\in\mathcal{D}_{k}\) so that \(I_{i,k}\) is either in \(H_{\Delta}\) or is disjoint from it.6 Also, for \((x,k)\in I_{i,k}\), by definition \(\tau^{n_{0}}(\pi_{\Delta}(x,0))=k\), so that \[Jf_{\Delta}(x,k)=|DF^{n_{0}}(\pi_{\Delta}(x,0))|=|Df^{\tau^{n_{0}}(x)}(x)|=|Df^{k}(x)|\,, \tag{6.7}\] where for brevity, we denote \(\pi_{\Delta}(x,0)=x\). Footnote 6: Indeed, since a neighbourhood of \(z\) cannot enter \(Y\) in one step, every \(I_{i}\in\mathcal{I}_{\mathbf{\varepsilon}}\) is disjoint from \(H_{\Delta}\) in this particular tower. Now for \(\psi\in WV(\Delta)\), following (4.6), we obtain, \[\bigvee_{\Delta_{0}}\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\psi\leqslant\sum_{I_{i}\in\mathcal{I}_{\boldsymbol{\varepsilon}}}\bigvee_{I_{i}}\frac{\psi}{Jf_{\Delta}}+\sum_{I_{i}\in\mathcal{I}_{\boldsymbol{\varepsilon}}}\Big{(}\frac{|\psi(u_{i})|}{Jf_{\Delta}(u_{i})}+\frac{|\psi(v_{i})|}{Jf_{\Delta}(v_{i})}\Big{)}\,. \tag{6.8}\] For \(I_{i}=I_{i,k}\subset\Delta_{k}\), using (6.1) and (6.7), we estimate the first term above as in (4.7), \[\bigvee_{I_{i,k}}\frac{\psi}{Jf_{\Delta}}\leqslant\frac{1}{2.5}e^{-\gamma k}\bigvee_{I_{i,k}}\psi+C_{d}(1+C_{d})\int_{I_{i,k}}|\psi|\,.\] We estimate the second term in (6.8) using (4.8) and the large images property as in (4.9), \[\frac{|\psi(u_{i})|}{Jf_{\Delta}(u_{i})}+\frac{|\psi(v_{i})|}{Jf_{\Delta}(v_{i})}\leqslant\frac{1}{2.5}e^{-\gamma k}\bigvee_{I_{i,k}}\psi+2C_{\beta}^{-1}(1+C_{d})\int_{I_{i,k}}|\psi|\,dm_{\Delta}\,. \tag{6.9}\] Putting these estimates together in (6.8) completes the proof of (6.4), \[\bigvee_{\Delta_{0}}\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\psi\leqslant\sum_{k\geqslant 1}\sum_{I_{i,k}}\Big{(}\frac{2}{2.5}e^{-\gamma k}\bigvee_{I_{i,k}}\psi+(1+C_{d})(C_{d}+2C_{\beta}^{-1})\int_{I_{i,k}}|\psi|\Big{)}\leqslant\frac{4}{5}\|\psi\|_{WV}+(1+C_{d})(C_{d}+2C_{\beta}^{-1})\int_{\mathring{\Delta}_{\boldsymbol{\varepsilon}}^{1}}|\psi|\,.\] Finally, (6.5) follows immediately from (6.9) since for \(x\in\Delta_{0}\), \[|\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\psi(x)|\leqslant\sum_{I_{i}\in\mathcal{I}_{\boldsymbol{\varepsilon}},y\in I_{i}}\frac{|\psi(y)|}{Jf_{\Delta}(y)}\leqslant\sum_{i,k}\Big{(}\frac{1}{2.5}e^{-\gamma k}\bigvee_{I_{i,k}}\psi+C_{\beta}^{-1}(1+C_{d})\int_{I_{i,k}}|\psi|\,dm_{\Delta}\Big{)}\leqslant\frac{2}{5}\|\psi\|_{WV}+C_{\beta}^{-1}(1+C_{d})\int_{\mathring{\Delta}_{\boldsymbol{\varepsilon}}^{1}}|\psi|\,dm_{\Delta}\,.\] **Lemma 6.4**.: _The operator \(\mathcal{L}_{\Delta}\) has a spectral gap on \(WV(\Delta)\)._ Proof.: Lemma 6.3 applied to \(\mathcal{L}_{\Delta}\) implies that as an operator on \(WV(\Delta)\), \(\mathcal{L}_{\Delta}\) has spectral radius at most \(1\) and essential spectral radius at most \(\max\{e^{-\gamma},4/5\}\). Since \(\mathcal{L}_{\Delta}^{*}m_{\Delta}=m_{\Delta}\), \(1\) is in the spectrum of \(\mathcal{L}_{\Delta}\) so that \(\mathcal{L}_{\Delta}\) is quasi-compact. Indeed, (4.5) implies that the peripheral spectrum has no Jordan blocks. Thus by [Ka, III.6.5], \(\mathcal{L}_{\Delta}\) admits the following representation: there exist finitely many eigenvalues \(e^{i\theta_{j}}\), \(j=1,\ldots,N\) and finite-dimensional eigenprojectors \(\Pi_{j}\) corresponding to \(\theta_{j}\) such that \[\mathcal{L}_{\Delta}=\sum_{j=1}^{N}e^{i\theta_{j}}\Pi_{j}+\mathcal{R}\,, \tag{6.10}\] where \(\mathcal{R}\) has spectral radius strictly less than \(1\) and \(\Pi_{j}\Pi_{k}=\Pi_{j}\mathcal{R}=\mathcal{R}\Pi_{j}=0\) for all \(j\neq k\).
Note that if \(g\in WV(\Delta)\) satisfies \(\mathcal{L}_{\Delta}g=g\), then \(g_{0}:=g\circ(\pi_{\Delta}|_{\Delta_{0}})^{-1}\) is in \(BV(Y)\) and \(\mathcal{L}_{0}g_{0}=g_{0}\). Since \(\mathcal{L}_{0}\) has a spectral gap (see the proof of Theorem 4.7), this implies that there can be at most one (normalised) fixed point for \(\mathcal{L}_{\Delta}\). Thus the eigenvalue \(1\) is simple. Conversely, if \(g_{0}\) denotes the unique element of \(BV(Y)\) with \(\int g_{0}\,dm=1\) such that \(\mathcal{L}_{0}g_{0}=g_{0}\), then we may define \(g_{\Delta}\) such that \(\mathcal{L}_{\Delta}g_{\Delta}=g_{\Delta}\) by \[g_{\Delta}(x,k)=c_{0}g_{0}\circ\pi_{\Delta}(x,0),\ \ \text{for each}\ (x,k)\in\Delta_{k}\ \text{and}\ k\geqslant 0,\] where \(c_{0}\) is chosen so that \(\int_{\Delta}g_{\Delta}\,dm_{\Delta}=1\). In particular, for each \((x,k)\) and \(k\geqslant 0\), \[c_{0}c_{g}\leqslant g_{\Delta}(x,k)=g_{\Delta}(x,0)\leqslant c_{0}C_{g}\,, \tag{6.11}\] where \(C_{g}=|g_{0}|_{\infty}\) and \(c_{g}>0\) is from (4.12). We will use this fact to eliminate the possibility of other eigenvalues of modulus \(1\). It is convenient to first establish the following claim. **Claim**.: _The peripheral spectrum of \(\mathcal{L}_{\Delta}\) on \(WV(\Delta)\) is cyclic: if \(e^{i\theta}\) is an eigenvalue, then so is \(e^{i\theta n}\) for each \(n\in\mathbb{N}\)._ Proof of Claim.: Since \(\mathcal{L}_{\Delta}\) is a positive operator and \(\mathcal{L}_{\Delta}^{*}m_{\Delta}=m_{\Delta}\), it follows from Rota's Theorem [BG, Theorem 2.2.9] (see also [S, Theorem 1]) that the peripheral spectrum of \(\mathcal{L}_{\Delta}\) on \(L^{1}(m_{\Delta})\) is cyclic. It remains to show that this property holds as well in \(WV(\Delta)\). We follow the strategy in [K, proof of Theorem 6.1]. For \(\theta\in[0,2\pi)\), define \[S_{n,\theta}=\frac{1}{n}\sum_{k=0}^{n-1}e^{-i\theta k}\mathcal{L}_{\Delta}^{k}\,.\] It follows from (6.10) that \(\lim_{n\to\infty}S_{n,\theta}=\Pi_{j}\) if \(\theta=\theta_{j}\) and \(\lim_{n\to\infty}S_{n,\theta}=0\) otherwise. Indeed, the convergence in both cases is uniform since \(\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}e^{i(\theta_{j}-\theta)k}=0\) whenever \(\theta\neq\theta_{j}\). Then \(\bigvee\,c\psi=|c|\bigvee\psi\) for any constant \(c\) implies the sequence converges in \(\|\cdot\|_{WV}\). Since \(\mathcal{L}_{\Delta}\) is an \(L^{1}(m_{\Delta})\) contraction (i.e. \(\int|\mathcal{L}_{\Delta}\psi|\,dm_{\Delta}\leqslant\int|\psi|\,dm_{\Delta}\)), so is \(S_{n,\theta}\), and since \(WV(\Delta)\) is dense in \(L^{1}(m_{\Delta})\), \(\lim_{n\to\infty}S_{n,\theta}\) extends to a bounded linear operator on \(L^{1}(m_{\Delta})\), with convergence in \(\|\cdot\|_{L^{1}(m_{\Delta})}\). Taking \(\theta=\theta_{j}\), we may view \(\Pi_{j}:L^{1}(m_{\Delta})\to\Pi_{j}(WV(\Delta))\). Now if \(0\neq f\in L^{1}(m_{\Delta})\) satisfies \(\mathcal{L}_{\Delta}f=e^{i\theta}f\), then \(0\neq f=\lim_{n\to\infty}S_{n,\theta}f\), which implies that \(\theta=\theta_{j}\) for some \(j\) and \(f\in\Pi_{j}(WV(\Delta))\) is an element of \(WV(\Delta)\). Thus the peripheral spectra of \(\mathcal{L}_{\Delta}\) on \(WV(\Delta)\) and \(L^{1}(m_{\Delta})\) coincide. Returning to the proof of the lemma, suppose for the sake of contradiction that \(1\) is not the only eigenvalue of modulus \(1\). Then according to the Claim, there exists \(h\in WV(\Delta)\) and \(p,q\in\mathbb{N}\setminus\{0\}\) such that \(\mathcal{L}_{\Delta}h=e^{i\pi p/q}h\).
It follows that \(h\) is complex-valued and that \(\int_{\Delta}\mathrm{Re}(h)\,dm_{\Delta}=\int_{\Delta}\mathrm{Im}(h)\,dm_{\Delta}=0\). Since \(\mathcal{L}_{\Delta}^{q}h=h\), \(h\) takes on all its possible values in the first \(q\) levels, \(\cup_{k=0}^{q-1}\Delta_{k}\). In particular, \(\sup_{\Delta}|\mathrm{Re}(h)|=\sup_{\cup_{k=0}^{q-1}\Delta_{k}}|\mathrm{Re}(h)|\), and similarly for \(\mathrm{Im}(h)\). Thus by (6.11), we may choose \(\kappa>0\) such that \[\psi:=\kappa\mathrm{Re}(h)+g_{\Delta}\ \ \text{satisfies}\ \ \inf_{\Delta}\psi>0.\] Note \(\int_{\Delta}\psi\,dm_{\Delta}=1\). Next, for \(s\in\mathbb{R}\), define \[\psi_{s}=s\psi+(1-s)g_{\Delta}=s\kappa\mathrm{Re}(h)+g_{\Delta}\,. \tag{6.12}\] Note that \(\psi_{s}\) also takes on all its possible values in \(\cup_{k=0}^{q-1}\Delta_{k}\) and \(\mathcal{L}_{\Delta}^{q}\psi_{s}=\psi_{s}\). Let \(\mathcal{S}=\{s\in\mathbb{R}:\operatorname{ess}\inf_{\Delta}\psi_{s}>0\}\). By construction of \(\psi\) and the compactness of \(\cup_{k=0}^{q-1}\Delta_{k}\), \(\mathcal{S}\) contains \([0,1]\) and is open. We will show that \(\mathcal{S}\) contains \(\mathbb{R}^{+}\). Suppose not. Let \(t>1\) be an endpoint of \(\mathcal{S}\) that is not in \(\mathcal{S}\). Then \(\operatorname{ess}\inf_{\Delta}\psi_{t}=\operatorname{ess}\inf_{\cup_{k=0}^{q-1}\Delta_{k}}\psi_{t}=0\). Without loss of generality, we may work with a representative of \(\psi_{t}\) that is lower semicontinuous.7 Since \(\int_{\Delta}\psi_{t}\,dm_{\Delta}=1\), there must exist \((y,j)\in\cup_{k=0}^{q-1}\Delta_{k}\) such that \(\psi_{t}(y)>0\). By lower semicontinuity, there exists an interval \(A\subset\Delta_{j}\) such that \(\inf_{A}\psi_{t}:=a>0\). Footnote 7: We use here that any function of bounded variation can be written as the difference of two monotonic functions so that one-sided limits exist at each point [R, Theorem 5, Section 5.2]. Since \(f_{\Delta}\) is topologically mixing, we can find \(N\in\mathbb{N}\) such that \(f_{\Delta}^{N+i}(A)\supseteq\cup_{k=0}^{q-1}\Delta_{k}\), for \(i=0,\dots,q-1\). One of these iterates must equal \(nq\) for some \(n\in\mathbb{N}\). Thus for any \(x\in\cup_{k=0}^{q-1}\Delta_{k}\), \[\psi_{t}(x)=\mathcal{L}_{\Delta}^{nq}\psi_{t}(x)\geqslant\frac{a}{\sup_{A}|Df^{nq}\circ\pi_{\Delta}|}>0\,,\] since \(\sup|Df|<\infty\) and \(n\) is fixed. This proves that \(t\in\mathcal{S}\) so in fact \(\mathbb{R}^{+}\subset\mathcal{S}\). By (6.12), this implies that \(\operatorname{Re}(h)\geqslant 0\), but since \(\int_{\Delta}\operatorname{Re}(h)\,dm_{\Delta}=0\), it must be that \(\operatorname{Re}(h)\equiv 0\). A similar argument forces \(\operatorname{Im}(h)\equiv 0\), providing the needed contradiction. As in (4.10), denote by \(|||\cdot|||\) the norm which views \(\mathcal{L}_{\Delta}\) as an operator from \(WV(\Delta)\) to \(L^{1}(m_{\Delta})\). **Lemma 6.5**.: _There exists \(C>0\) such that for any \(\boldsymbol{\varepsilon}>0\), \(|||\mathcal{L}_{\Delta}-\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}|||\leqslant C\mu(H_{\boldsymbol{\varepsilon}}(z))^{1-\gamma/\zeta}\)._ Proof.: Let \(\psi\in WV(\Delta)\). Then, \[\int_{\Delta}|(\mathcal{L}_{\Delta}-\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}})\psi|\,dm_{\Delta}\leqslant\int_{H_{\Delta}}|\psi|\,dm_{\Delta}\leqslant\|\psi\|_{WV}\sum_{k\geqslant 1}e^{\gamma k}m_{\Delta}(H_{\Delta}\cap\Delta_{k})\,.\] This expression can be made small in \(\mu(H_{\boldsymbol{\varepsilon}}(z))\) as follows. Let \(d\mu_{\Delta}=g_{\Delta}dm_{\Delta}\).
Then \((\pi_{\Delta})_{*}\mu_{\Delta}=\mu\), so that using (6.11), \[\sum_{k\geqslant 1}e^{\gamma k}m_{\Delta}(H_{\Delta}\cap\Delta_{k})=\sum_{k=1}^{-\zeta^{-1}\log\mu(H_{\boldsymbol{\varepsilon}}(z))}e^{\gamma k}m_{\Delta}(H_{\Delta}\cap\Delta_{k})+\sum_{k\geqslant-\zeta^{-1}\log\mu(H_{\boldsymbol{\varepsilon}}(z))}e^{\gamma k}m_{\Delta}(H_{\Delta}\cap\Delta_{k})\] \[\leqslant\mu(H_{\boldsymbol{\varepsilon}}(z))^{-\gamma/\zeta}(c_{0}c_{g})^{-1}\sum_{k=1}^{-\zeta^{-1}\log\mu(H_{\boldsymbol{\varepsilon}}(z))}\mu_{\Delta}(H_{\Delta}\cap\Delta_{k})+\sum_{k\geqslant-\zeta^{-1}\log\mu(H_{\boldsymbol{\varepsilon}}(z))}Ce^{(\gamma-\zeta)k}\] \[\leqslant C\mu(H_{\boldsymbol{\varepsilon}}(z))^{1-\gamma/\zeta}\,.\] With these elements in place, we are ready to prove Proposition 6.1. Proof of Proposition 6.1.: For fixed \(\beta>0\), the Lasota-Yorke inequalities in Lemma 6.3 have uniform constants. Thus \(\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\) has essential spectral radius at most \(\max\{e^{-\gamma},\frac{4}{5}\}\) for all \(\beta\)-allowable holes. This, together with Lemma 6.5, implies by [KL1, Corollary 1] that the spectra and spectral projectors of \(\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\) outside the disk of radius \(\max\{e^{-\gamma},\frac{4}{5}\}\) vary Hölder continuously in \(\mu(H_{\boldsymbol{\varepsilon}}(z))\). Thus there exists \(\varepsilon_{\beta}(\Delta)>0\) such that for all \(\beta\)-allowable holes with \(\boldsymbol{\varepsilon}<\varepsilon_{\beta}(\Delta)\), the operators \(\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\) enjoy a uniform spectral gap and can be decomposed as in the statement of the proposition. Our final lemma of this section demonstrates that the spectral radius of \(\mathring{\mathcal{L}}_{\Delta,\boldsymbol{\varepsilon}}\) yields the escape rate from both \(\Delta\) and \(I\). **Lemma 6.6**.: _Under the hypotheses of Proposition 6.1, \(-\log\lambda_{\Delta,\boldsymbol{\varepsilon}}=\mathfrak{e}(H_{\boldsymbol{\varepsilon}}(z))\), where \(\mathfrak{e}(H_{\boldsymbol{\varepsilon}}(z))\) is from (1.1)._ Proof.: Using Proposition 6.1, we compute \[-\mathfrak{e}(H_{\mathbf{\varepsilon}}(z)) =\lim_{n\to\infty}\frac{1}{n}\log\mu(\cap_{i=0}^{n-1}f^{-i}(I\setminus H_{\mathbf{\varepsilon}}))=\lim_{n\to\infty}\frac{1}{n}\log\mu_{\Delta}(\mathring{\Delta}_{\mathbf{\varepsilon}}^{n})\] \[=\lim_{n\to\infty}\frac{1}{n}\log\int_{\Delta}\mathring{\mathcal{L}}_{\Delta,\mathbf{\varepsilon}}^{n}(g_{\Delta})\,dm_{\Delta}=\lim_{n\to\infty}\frac{1}{n}\log\left(\lambda_{\Delta,\mathbf{\varepsilon}}^{n}\mathring{e}_{\Delta,\mathbf{\varepsilon}}(g_{\Delta})+\int_{\Delta}\mathcal{R}_{\Delta,\mathbf{\varepsilon}}^{n}(g_{\Delta})\,dm_{\Delta}\right)\] \[=\log\lambda_{\Delta,\mathbf{\varepsilon}}\,,\] since \(\mathring{e}_{\Delta,\mathbf{\varepsilon}}(g_{\Delta})>0\) due to (6.11). ### 6.2. Hitting time statistics for \(\beta\)-allowable holes To prove Theorem 1.2, we will compute the following limit for fixed \(\alpha>0\) and \(t>0\), \[\lim_{\mathbf{\varepsilon}\to 0}\frac{-1}{t\mu(H_{\mathbf{\varepsilon}}(z))^{1-\alpha}}\log\mu\left(r_{H_{\mathbf{\varepsilon}}(z)}>\frac{t}{\mu(H_{\mathbf{\varepsilon}}(z))^{\alpha}}\right)\,.\] Recall that \(\alpha>0\) was fixed at the beginning of Section 6.1 and affected the chosen value of \(\gamma\) via (6.2).
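As a numerical illustration of the two objects just introduced, the escape rate of Lemma 6.6 and the hitting-time tail at the scale \(t\mu(H_{\boldsymbol{\varepsilon}}(z))^{-\alpha}\), here is a rough Monte-Carlo sketch for the toy doubling map with an interval hole; the map, the point \(z\), and the sample sizes are illustrative assumptions, not the maps treated in this paper.

```python
# Rough Monte-Carlo sketch: escape rate and hitting-time tail for the toy
# doubling map f(x) = 2x mod 1 with hole H = (z - eps, z + eps). Lebesgue
# measure is f-invariant here; all parameters are illustrative assumptions.
import math
import random

def survival_fraction(z: float, eps: float, n: int, samples: int = 100_000) -> float:
    """Estimate m({x : f^i(x) not in (z - eps, z + eps), i = 1,...,n})."""
    survivors = 0
    for _ in range(samples):
        x = random.random()
        for _ in range(n):
            x = (2.0 * x) % 1.0
            if z - eps < x < z + eps:
                break
        else:
            survivors += 1       # the orbit avoided the hole for n steps
    return survivors / samples

z, eps = 0.3, 0.01               # z = 0.3 is preperiodic, not periodic
mu_H, n = 2.0 * eps, 60
p = survival_fraction(z, eps, n)
print("escape rate / mu(H) ~", -math.log(p) / (n * mu_H))  # ~ 1 = esc(z)

# Hitting-time tail at alpha = 1, t = 1: expect mu(r_H > 1/mu(H)) ~ e^{-1}.
tail = survival_fraction(z, eps, int(1.0 / mu_H))
print("mu(r_H > 1/mu(H)) ~", tail, "vs e^-1 ~", round(math.exp(-1), 3))
```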
Since \(\pi_{\Delta}\circ f_{\Delta}=f\circ\pi_{\Delta}\) and \(\pi_{\Delta}(H_{\Delta})=H_{\mathbf{\varepsilon}}(z)\), the function \(r_{\Delta}:=r_{H_{\mathbf{\varepsilon}}(z)}\circ\pi_{\Delta}\) defines the first hitting time to \(H_{\Delta}\). Then since \((\pi_{\Delta})_{*}\mu_{\Delta}=\mu\), it is equivalent to estimate, \[\lim_{\mathbf{\varepsilon}\to 0}\frac{-1}{t\mu_{\Delta}(H_{\Delta})^{1-\alpha}}\log\mu_{\Delta}\left(r_{\Delta}>\frac{t}{\mu_{\Delta}(H_{\Delta})^{\alpha}}\right)\,.\] Setting \(n_{\mathbf{\varepsilon}}=\lfloor t\mu_{\Delta}(H_{\Delta})^{-\alpha}\rfloor=\lfloor t\mu(H_{\mathbf{\varepsilon}}(z))^{-\alpha}\rfloor\), we estimate as in [BDT, Section 2.5], \[\mu_{\Delta}(r_{\Delta}>n_{\mathbf{\varepsilon}}) =\int_{\mathring{\Delta}_{\mathbf{\varepsilon}}^{n_{\mathbf{\varepsilon}}+1}}g_{\Delta}\,dm_{\Delta}=\int_{\Delta}\mathring{\mathcal{L}}_{\Delta,\mathbf{\varepsilon}}^{n_{\mathbf{\varepsilon}}+1}g_{\Delta}\,dm_{\Delta}\] \[=\lambda_{\Delta,\mathbf{\varepsilon}}^{n_{\mathbf{\varepsilon}}+1}\int_{\Delta}\lambda_{\Delta,\mathbf{\varepsilon}}^{-n_{\mathbf{\varepsilon}}-1}\mathring{\mathcal{L}}_{\Delta,\mathbf{\varepsilon}}^{n_{\mathbf{\varepsilon}}+1}(g_{\Delta}-\mathring{g}_{\Delta,\mathbf{\varepsilon}})\,dm_{\Delta}+\lambda_{\Delta,\mathbf{\varepsilon}}^{n_{\mathbf{\varepsilon}}+1}\int_{\Delta}\mathring{g}_{\Delta,\mathbf{\varepsilon}}\,dm_{\Delta}\,,\] where \(\mathring{g}_{\Delta,\mathbf{\varepsilon}}\) is from Proposition 6.1. Thus, \[\log\mu_{\Delta}(r_{\Delta}>n_{\mathbf{\varepsilon}})=(n_{\mathbf{\varepsilon}}+1)\log\lambda_{\Delta,\mathbf{\varepsilon}}+\log\left(1+\int_{\Delta}\lambda_{\Delta,\mathbf{\varepsilon}}^{-n_{\mathbf{\varepsilon}}-1}\mathring{\mathcal{L}}_{\Delta,\mathbf{\varepsilon}}^{n_{\mathbf{\varepsilon}}+1}(g_{\Delta}-\mathring{g}_{\Delta,\mathbf{\varepsilon}})\,dm_{\Delta}\right)\,.\] Dividing by \(-t\mu_{\Delta}(H_{\Delta})^{1-\alpha}\), we see that the first term becomes, up to the error from taking the integer part, \(-\frac{\log\lambda_{\Delta,\mathbf{\varepsilon}}}{\mu_{\Delta}(H_{\Delta})}\). Since \(-\log\lambda_{\Delta,\mathbf{\varepsilon}}=\mathfrak{e}(H_{\mathbf{\varepsilon}}(z))\) by Lemma 6.6, the first term yields \(\operatorname{esc}(z)\), which is either \(1\) or \(1-\lambda_{z}^{-1/\ell}\) as needed, as \(\mathbf{\varepsilon}\to 0\) according to Theorem 1.1. It remains to show that the second term tends to \(0\) as \(\mathbf{\varepsilon}\to 0\). Using the spectral decomposition in Proposition 6.1, we define \(c_{\varepsilon}=\mathring{e}_{\Delta,\mathbf{\varepsilon}}(g_{\Delta})\) and write \[\lambda_{\Delta,\mathbf{\varepsilon}}^{-n_{\mathbf{\varepsilon}}-1}\mathring{\mathcal{L}}_{\Delta,\mathbf{\varepsilon}}^{n_{\mathbf{\varepsilon}}+1}(g_{\Delta}-\mathring{g}_{\Delta,\mathbf{\varepsilon}})=(c_{\varepsilon}-1)\mathring{g}_{\Delta,\mathbf{\varepsilon}}+\lambda_{\Delta,\mathbf{\varepsilon}}^{-n_{\mathbf{\varepsilon}}-1}\mathcal{R}_{\Delta,\mathbf{\varepsilon}}^{n_{\mathbf{\varepsilon}}+1}g_{\Delta}\,.\] Integrating this equation, we see that we must estimate, \[\log\left(1+\int_{\Delta}\lambda_{\Delta,\mathbf{\varepsilon}}^{-n_{\mathbf{\varepsilon}}-1}\mathring{\mathcal{L}}_{\Delta,\mathbf{\varepsilon}}^{n_{\mathbf{\varepsilon}}+1}(g_{\Delta}-\mathring{g}_{\Delta,\mathbf{\varepsilon}})\,dm_{\Delta}\right)=\log\left(c_{\varepsilon}+\int_{\Delta}\lambda_{\Delta,\mathbf{\varepsilon}}^{-n_{\mathbf{\varepsilon}}-1}\mathcal{R}_{\Delta,\mathbf{\varepsilon}}^{n_{\mathbf{\varepsilon}}+1}g_{\Delta}\,dm_{\Delta}\right)\,. \tag{6.13}\]
Again using Proposition 6.1, we bound the integral by, \[\left|\int_{\Delta}\lambda_{\Delta,\mathbf{\varepsilon}}^{-n_{\mathbf{\varepsilon}}-1}\mathcal{R}_{\Delta,\mathbf{\varepsilon}}^{n_{\mathbf{\varepsilon}}+1}g_{\Delta}\,dm_{\Delta}\right|\leqslant A_{\beta}e^{-\sigma_{\beta}(n_{\mathbf{\varepsilon}}+1)}\|g_{\Delta}\|_{WV}\leqslant Ce^{-\sigma_{\beta}t\mu(H_{\mathbf{\varepsilon}}(z))^{-\alpha}}\,,\] and this quantity is super-exponentially small in \(\mu(H_{\mathbf{\varepsilon}}(z))\). By Lemma 6.5 and [KL1, Corollary 1], \[|c_{\varepsilon}-1|=|\mathring{e}_{\Delta,\mathbf{\varepsilon}}(g_{\Delta})-e_{\Delta}(g_{\Delta})|\leqslant C\mu(H_{\mathbf{\varepsilon}}(z))^{1-\gamma/\zeta}\log\mu(H_{\mathbf{\varepsilon}}(z))^{-1}\,.\] Putting these estimates together in (6.13) and dividing by \(t\mu(H_{\mathbf{\varepsilon}}(z))^{1-\alpha}\), we obtain \[\lim_{\mathbf{\varepsilon}\to 0}\frac{1}{t\mu(H_{\mathbf{\varepsilon}}(z))^{1-\alpha}}\log\Big{(}1+\mathcal{O}\big{(}-\mu(H_{\mathbf{\varepsilon}}(z))^{1-\gamma/\zeta}\log\mu(H_{\mathbf{\varepsilon}}(z))\big{)}\Big{)}=\lim_{\mathbf{\varepsilon}\to 0}\frac{1}{t}\mathcal{O}\Big{(}-\mu(H_{\mathbf{\varepsilon}}(z))^{\alpha-\gamma/\zeta}\log\mu(H_{\mathbf{\varepsilon}}(z))\Big{)}\,,\] which tends to \(0\) since \(\alpha>\gamma/\zeta\) by (6.2). The above limit \(\mathbf{\varepsilon}\to 0\) is understood to be taken along sequences of \(\beta\)-allowable holes. ### 6.3. Proof of Theorem 1.2 via approximation when \(z\in\operatorname{orb}(f(c))\) Section 6.2 proves Theorem 1.2 when \(z\in\operatorname{orb}(f(c))\) for each \(\alpha>0\) and \(\beta>0\) along sequences \((\mathbf{\varepsilon}_{n})_{n}\) where each \(\mathbf{\varepsilon}_{n}\) is \(\beta\)-allowable. It remains to consider the alternative case when \(\alpha>0\) is still fixed and we have to approximate a given sequence \(\varepsilon_{n}\) by \(\beta\)-allowable \(\mathbf{\varepsilon}_{n}^{\prime}\). The approximation follows closely the strategy in Section 5. As in that section, we first present the argument in the case that \(z\in\operatorname{orb}(f(c))\) is periodic. Recall that if \(H_{\mathbf{\varepsilon}}\) is \(\beta\)-allowable, then it is also \(\beta^{\prime}\)-allowable for any \(\beta^{\prime}<\beta\), so as in Section 5, we take our approximating sequence with \(\beta\) tending to \(0\). As before, we assume \(\beta<(2\lambda_{z})^{-1}\). Using precisely the same discussion and notation as in Section 5, we suppose that each non-\(\beta\)-left-allowable value of \(\varepsilon\) satisfies \(z-\varepsilon\in(c_{i,j,k+1}-\beta v_{i,j,k},c_{i,j,k+1}+\beta v_{i,j,k})\) for some \(i,j,k\). We approximate \(\varepsilon\) from above by \(\varepsilon_{o}^{L}:=z-(c_{i,j,k+1}-\beta v_{i,j,k})\) and from below by \(\varepsilon_{u}^{L}:=z-(c_{i,j,k+1}+\beta v_{i,j,k})\). Both \(\varepsilon_{o}^{L}\) and \(\varepsilon_{u}^{L}\) are \(\beta\)-left-allowable and \(\varepsilon\in(\varepsilon_{u}^{L},\varepsilon_{o}^{L})\). Applying (5.1), we have \[\frac{\varepsilon_{u}^{L}}{\varepsilon_{o}^{L}}\gtrsim 1-2\beta\lambda_{z}\ \text{ and }\ \frac{\varepsilon_{o}^{L}}{\varepsilon_{u}^{L}}\lesssim 1+2\beta\lambda_{z}\,.\] The right hand estimates for the analogous \(\varepsilon_{u}^{R}\) and \(\varepsilon_{o}^{R}\) enjoy similar bounds.
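For orientation, we record why ratios of hole measures convert into the \(1/\ell\)-powers appearing below. This is only a sketch, under the standard fact (for maps of this type) that the invariant density has a spike of order \(|x-z|^{1/\ell-1}\) at a point \(z\) of the critical orbit, \(\ell\) being the critical order: \[\mu(z-\varepsilon,z+\varepsilon)\asymp\int_{0}^{\varepsilon}u^{\frac{1}{\ell}-1}\,du\asymp\varepsilon^{\frac{1}{\ell}}\,,\qquad\text{so that}\qquad\frac{\mu(z-\varepsilon_{u}^{L},z+\varepsilon_{u}^{R})}{\mu(z-\varepsilon,z+\varepsilon)}\gtrsim\Big{(}\frac{\varepsilon_{u}^{L}}{\varepsilon}\wedge\frac{\varepsilon_{u}^{R}}{\varepsilon}\Big{)}^{\frac{1}{\ell}}\gtrsim(1-2\beta\lambda_{z})^{\frac{1}{\ell}}\,.\]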
To complete the proof of Theorem 1.2, we consider the following limit as \(\varepsilon\to 0\) for fixed \(t,\alpha>0\), \[\frac{-1}{t\mu(H_{\varepsilon}(z))^{1-\alpha}}\log\mu\left(r_{H_{\varepsilon}(z)}>\frac{t}{\mu(H_{\varepsilon}(z))^{\alpha}}\right)\,. \tag{6.14}\] We first estimate this from below. Let \(r_{u}\) denote the first hitting time to the smaller set \((z-\varepsilon_{u}^{L},z+\varepsilon_{u}^{R})\subset H_{\varepsilon}(z)\). Note that \(r_{u}\geqslant r_{H_{\varepsilon}(z)}\), and \(\frac{\mu(z-\varepsilon_{u}^{L},z+\varepsilon_{u}^{R})}{\mu(z-\varepsilon,z+\varepsilon)}\geqslant C_{\varepsilon}(1-2\beta\lambda_{z})^{\frac{1}{\ell}}\) where \(C_{\varepsilon}\to 1\) as \(\varepsilon\to 0\). Setting \(s=t(C_{\varepsilon}(1-2\beta\lambda_{z})^{\frac{1}{\ell}})^{\alpha}\) we estimate (6.14) from below by, \[\frac{\mu(z-\varepsilon_{u}^{L},z+\varepsilon_{u}^{R})^{1-\alpha}}{\mu(z-\varepsilon,z+\varepsilon)^{1-\alpha}}\frac{-(C_{\varepsilon}(1-2\beta\lambda_{z})^{1/\ell})^{\alpha}}{s\mu(z-\varepsilon_{u}^{L},z+\varepsilon_{u}^{R})^{1-\alpha}}\log\mu\left(r_{u}>\frac{t}{\mu(H_{\varepsilon}(z))^{\alpha}}\right)\] \[\geqslant(C_{\varepsilon}(1-2\beta\lambda_{z})^{1/\ell})\frac{-1}{s\mu(z-\varepsilon_{u}^{L},z+\varepsilon_{u}^{R})^{1-\alpha}}\log\mu\left(r_{u}>\frac{s}{\mu(z-\varepsilon_{u}^{L},z+\varepsilon_{u}^{R})^{\alpha}}\right)\,.\] Taking the limit as \(\varepsilon\to 0\) yields a lower bound of \[(1-2\beta\lambda_{z})^{\frac{1}{\ell}}\cdot\left(1-\lambda_{z}^{-1/\ell}\right)\,,\] where the second factor comes from the application of Theorem 1.2 to \(\beta\)-allowable holes in the case that \(z\in\operatorname{orb}(f(c))\) is periodic. Similarly, one obtains an upper bound for (6.14) of \((1+2\beta\lambda_{z})^{\frac{1}{\ell}}\cdot\left(1-\lambda_{z}^{-1/\ell}\right)\). Since these bounds hold for all sufficiently small \(\beta\), we take \(\beta\to 0\) to obtain the required limit for Theorem 1.2 along an arbitrary sequence \((\varepsilon_{n})_{n}\). Finally, if \(z\in\operatorname{orb}(f(c))\) is preperiodic then the above calculations all go through similarly: as in Section 5, from the construction of \((Y,F)\) in Section 3.4, the periodic structure of the postcritical orbit can be pulled back to \(z\) to generate the \((a_{i})_{i},\ (b_{i})_{i}\) required, but the resulting bounds are of the form \((1\pm 2\beta\lambda)\) where \(\lambda=|Df^{p}(f^{k_{0}}(z))|\). Thus they tend to \(1\) as \(\beta\to 0\), as required. ### 6.4. Proof of Theorem 1.2 when \(z\notin\operatorname{orb}(f(c))\) We explain here how to adapt the results of [BDT, Section 4.2.1] to achieve the required limit in Theorem 1.2 for any \(z\notin\operatorname{orb}(f(c))\). Since \(f\) is Misiurewicz, one can choose an interval \(Y\) containing \(z\) whose endpoints are two points of a periodic orbit \(\operatorname{orb}(p)\), where \(\operatorname{orb}(p)\) is disjoint from \(\operatorname{orb}(z)\) and the interior of \(Y\). If we define \(F\) to be the induced map with first return time \(\tau\), then \(F\) is a full-branched Gibbs-Markov map, which satisfies the conditions of [BDT, Theorem 2.1]. In particular, we consider the parameter \(n_{1}\) from [BDT, eq. (2.1)], chosen so that \(F^{n_{1}}\) has sufficient expansion. At this point, we find it convenient to consider separately two cases: \(z\) is a recurrent point (every \(\varepsilon\)-neighbourhood of \(z\) contains a point in \(\operatorname{orb}(f(z))\)); or \(z\) is a nonrecurrent point. _Case 1:_ \(z\) is a recurrent point.
By choice of \(\partial Y\), \(z\) is necessarily contained in the interior of a domain \(Y_{i}^{k}\) of \(F^{k}\) for each \(k\geqslant 1\). Thus \(z\in Y_{\operatorname{cont}}:=\{y\in Y:F^{k}\text{ is continuous at }y\text{ for all }k\geqslant 1\}\). Moreover, for any sufficiently small \(\varepsilon\), \((z-\varepsilon,z+\varepsilon)\subset Y_{i}^{n_{1}}\) and so the lengths of images of intervals of monotonicity for \(\mathring{F}_{\varepsilon^{\prime}}^{n_{1}}\), where \(\mathring{F}_{\varepsilon^{\prime}}:=F|_{Y\setminus(z-\varepsilon^{\prime},z+\varepsilon^{\prime})}\), have a positive uniform lower bound for all \(\varepsilon^{\prime}<\varepsilon\). This ensures that condition **(U)** of [BDT] is satisfied. Thus we may apply [BDT, Theorem 3.2] to conclude that \(L_{\alpha,t}(z)=1\). _Case 2:_ \(z\) is not a recurrent point. If an accumulation point of \(\operatorname{orb}(z)\) lies in \(Y\), then \(z\) lies in the interior of a domain of \(F^{k}\) for each \(k\) and by the above argument, [BDT, Theorem 3.2] applies so that Theorem 1.2 follows. If, on the other hand, no accumulation points of \(\operatorname{orb}(z)\) lie in \(Y\), then since \(\partial Y\) is periodic, \(z\) is necessarily an accumulation point of domains \(\{Y_{i}\}_{i}\) of \(F\). In this case, a modification of the approach of [BDT] is needed in two respects. First, fixing \(\beta>0\), we only consider values of \(\varepsilon_{L}\) and \(\varepsilon_{R}\) so that \(z-\varepsilon_{L}\) and \(z+\varepsilon_{R}\) are \(\beta\)-deep in intervals of monotonicity for \(F^{n_{1}}\) around \(z\). These constitute \(\beta\)-allowable holes \((z-\varepsilon_{L},z+\varepsilon_{R})\) so that the uniformly large images property **(U)** of [BDT] applies to the punctured induced map, \(\mathring{F}_{\boldsymbol{\varepsilon}}^{n_{1}}\). In particular, under these conditions, the associated punctured transfer operators enjoy a uniform spectral gap in \(BV(Y)\) for all sufficiently small \(\beta\)-allowable holes. Applying the results of [KL2] as in [BDT, Section 2.3], we see that \(\operatorname{esc}(z)=1\) as long as \[\lim_{\boldsymbol{\varepsilon}\to 0}\frac{\mu(E_{\boldsymbol{\varepsilon}}^{k})}{\mu(H_{\varepsilon}(z))}=0\text{ for each }k\geqslant 0,\] where \[E_{\boldsymbol{\varepsilon}}^{k}=\{y\in H_{\boldsymbol{\varepsilon}}(z):F^{i}(y)\notin H_{\boldsymbol{\varepsilon}}(z),i=1,\ldots,k,\text{ and }F^{k+1}(y)\in H_{\boldsymbol{\varepsilon}}(z)\}\,.\] Since \(F\) is full branched, each domain \(Y_{i}^{k}\) of \(F^{k}\) has an interval which maps onto \(H_{\boldsymbol{\varepsilon}}(z)\). However, if \(Y_{i}^{k}\subset H_{\boldsymbol{\varepsilon}}(z)\), then \(\tau(Y_{i}^{k})\geqslant\log(|Y|\varepsilon^{-1})/\log|Df|_{\infty}\geqslant C_{0}\log\varepsilon^{-1}\), where \(\varepsilon=\max\{\varepsilon_{L},\varepsilon_{R}\}\). This implies that the contribution to \(E_{\boldsymbol{\varepsilon}}^{k}\) from the collection of such intervals is dominated by \[\sum_{j\geqslant C_{0}\log\varepsilon^{-1}}C_{k}|H_{\boldsymbol{\varepsilon}}(z)|\lambda_{\operatorname{per}}^{-j}\leqslant C_{k}^{\prime}\varepsilon^{C_{0}\log\lambda_{\operatorname{per}}}|H_{\varepsilon}(z)|\,,\] where we have applied Proposition 2.1 since \(F\) is full branched.
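For the reader's convenience, the geometric-sum step behind the last display is the routine computation (a sketch, using only that \(\lambda_{\operatorname{per}}>1\)): \[\sum_{j\geqslant C_{0}\log\varepsilon^{-1}}\lambda_{\operatorname{per}}^{-j}=\frac{\lambda_{\operatorname{per}}^{-C_{0}\log\varepsilon^{-1}}}{1-\lambda_{\operatorname{per}}^{-1}}=\frac{e^{-C_{0}(\log\varepsilon^{-1})\log\lambda_{\operatorname{per}}}}{1-\lambda_{\operatorname{per}}^{-1}}=\frac{\varepsilon^{C_{0}\log\lambda_{\operatorname{per}}}}{1-\lambda_{\operatorname{per}}^{-1}}\,,\] which is the source of the exponent \(C_{0}\log\lambda_{\operatorname{per}}\) above.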
Since the invariant measure \(\mu\) has density at \(z\) bounded away from \(0\) and \(\infty\), we estimate, \[\frac{\mu(E_{\boldsymbol{\varepsilon}}^{k})}{\mu(H_{\varepsilon}(z))}\leqslant C_{k}^{\prime\prime}\varepsilon^{C_{0}\log\lambda_{\text{per}}}\to 0\ \text{ as }\varepsilon\to 0\text{ for each }k.\] With these modifications, \(\operatorname{esc}(z)=1\) and [BDT, Theorem 3.2] implies the desired limit \(L_{\alpha,t}(z)=1\) along sequences of \(\beta\)-allowable holes as well. The approximation of more general holes \((z-\varepsilon,z+\varepsilon)\) by \(\beta\)-allowable holes in order to prove the required limit \(1\) for Theorem 1.2 proceeds as in Section 6.3. The case here is simpler since there is no density spike, so we do not need to maintain bounded ratios \(\frac{\varepsilon_{u}^{L}}{\varepsilon_{u}^{R}}\), \(\frac{\varepsilon_{u}^{L}}{\varepsilon_{o}^{R}}\) for the approximating holes.
2309.07609
Learning Quasi-Static 3D Models of Markerless Deformable Linear Objects for Bimanual Robotic Manipulation
The robotic manipulation of Deformable Linear Objects (DLOs) is a vital and challenging task that is important in many practical applications. Classical model-based approaches to this problem require an accurate model to capture how robot motions affect the deformation of the DLO. Nowadays, data-driven models offer the best tradeoff between quality and computation time. This paper analyzes several learning-based 3D models of the DLO and proposes a new one based on the Transformer architecture that achieves superior accuracy, even on the DLOs of different lengths, thanks to the proposed scaling method. Moreover, we introduce a data augmentation technique, which improves the prediction performance of almost all considered DLO data-driven models. Thanks to this technique, even a simple Multilayer Perceptron (MLP) achieves close to state-of-the-art performance while being significantly faster to evaluate. In the experiments, we compare the performance of the learning-based 3D models of the DLO on several challenging datasets quantitatively and demonstrate their applicability in the task of shaping a DLO.
Piotr Kicki, Michał Bidziński, Krzysztof Walas
2023-09-14T11:17:43Z
http://arxiv.org/abs/2309.07609v1
Learning Quasi-Static 3D Models of Markerless Deformable Linear Objects for Bimanual Robotic Manipulation ###### Abstract The robotic manipulation of Deformable Linear Objects (DLOs) is a vital and challenging task that is important in many practical applications. Classical model-based approaches to this problem require an accurate model to capture how robot motions affect the deformation of the DLO. Nowadays, data-driven models offer the best tradeoff between quality and computation time. This paper analyzes several learning-based 3D models of the DLO and proposes a new one based on the Transformer architecture that achieves superior accuracy, even on the DLOs of different lengths, thanks to the proposed scaling method. Moreover, we introduce a data augmentation technique, which improves the prediction performance of almost all considered DLO data-driven models. Thanks to this technique, even a simple Multilayer Perceptron (MLP) achieves close to state-of-the-art performance while being significantly faster to evaluate. In the experiments, we compare the performance of the learning-based 3D models of the DLO on several challenging datasets quantitatively and demonstrate their applicability in the task of shaping a DLO. ## I Introduction People encounter and skillfully manipulate DLOs, such as cables, ropes, threads, strings, and hoses. It would be beneficial to give robots similar skills to enable them to perform cable routing [1, 2], knot tying [3], rope untangling [4], belt-drive unit assembly [5], wiring harness assembly in the automotive sector [6, 7], threading a lace through a narrow hole [8], or surgical suturing [9]. A typical approach to manipulating DLOs is to use their models to plan the motion and control commands necessary to rearrange the DLO to the desired state. In recent years, there have been many attempts to develop DLO models, such as FEM-based ones [10, 11, 12, 13], models using Cosserat rod theory [14, 15, 16], the Kirchhoff elastic rod [17, 18], or dynamic B-splines [19, 20]. However, these models make strong assumptions about the properties of the objects or require significant amounts of computation to evaluate, which limits their applicability in real-time manipulation or makes their relevance to real DLOs questionable. In response to these problems, there have been attempts to develop neural network-based models [21, 22, 23, 24, 25, 26]. However, most of these methods are limited to manipulating DLOs on the plane [24, 25, 26]. In contrast, the ones that work in 3D typically utilize purposefully placed markers attached to the DLO to get its exact state [21, 23], which is an unrealistic assumption for DLO manipulation in the real world. Moreover, the inference times of the architectures proposed in [21] and [24] are prohibitively long for real-time manipulation planning. In this paper, we want to overcome the limitations mentioned above and present a neural network-based approach for a fast quasi-static model of a real DLO being manipulated by a pair of robots in 3D space without the use of any markers (see Figure 1). To do so, the DLO shape is tracked with an RGBD camera using an improved version of our approach proposed in [27]. Based on the tracking output, we train a machine learning model to predict the state of a DLO after the robot's move, given the actual DLO state and the initial and final poses of the end-effectors (EEs) holding the DLO. Our four contributions that pertain to achieving this goal are the following.

Fig. 1: The proposed DLO model's prediction of the markerless DLO shape after a virtual move of the two UR3 robotic arms.
First, we propose to model the DLO using a neural network with the Transformer architecture [28]. The attention mechanism yields superior prediction accuracy compared with state-of-the-art learning-based DLO models. In the literature, three main neural network architectures are reported: the MLP, the interaction network [29], and the radial basis function network [30]. Our approach with the Transformer is new in this field. Second, we introduce a simple yet effective data augmentation procedure that significantly improves the accuracy of almost all considered architectures and enables the MLP to achieve accuracy similar to much more complex models, like the interaction network with bidirectional LSTM (IN-biLSTM) [21] or the Transformer, while being substantially more computationally efficient. Third, we analyze the impact of different data representations on the prediction accuracy of neural network-based DLO models. Finally, we show that using the MLP with the proposed data augmentation and Cross Entropy Method (CEM) for motion prediction, one can plan the coordinated movement of two arms that leads to a desired movement of the DLO. Furthermore, it is essential to stress that the knowledge gained for one DLO is transferable to the same DLO type of a different length, thanks to the proposed data scaling, and to DLOs with different physical parameters. Moreover, we evaluated the possibility of retraining a model pre-trained on a different DLO setup. To the best of the authors' knowledge, this is the first analysis of the transferability of neural DLO models. To facilitate the research on DLO modeling, we make our code and datasets publicly available at [https://github.com/PPI-PUT/neural_dlo_model](https://github.com/PPI-PUT/neural_dlo_model). ## II Related work ### _Physics-based DLO models_ Early models of deformable linear objects were designed based on an analysis of the physics of the phenomena that govern the motion of the DLO. In the literature, there are several approaches to physics-based DLO modeling, like (i) the Finite Element Method (FEM) [31, 12], (ii) continuous elastic rod models [32, 17], (iii) multi-body models [33, 34], or (iv) Jacobian-based models [35, 8], each with its pros and cons [36, 37]. Starting with FEM, the main advantage of mesh-based models is their ability to simulate even large deformations accurately. In [31], a FEM-based model was used for the simulation and planning of ring-shaped object deformation, while in [12] a FEM-based model with sensitivity analysis was a key component of bimanual robotic shape control of deformable objects. The high accuracy of these methods comes at the cost of long simulation times caused by the heavy computational load. To address that, the authors of [38] proposed a method for shape control based on FEM, which does not require real-time simulation of the deformable object in the control algorithm. To achieve real-time performance, they simplified the model of the manipulated deformable object to a 1D case, i.e., a chain of nodes. A chain of nodes is also at the core of Position-Based Dynamics (PBD) [33, 16], constrained rigid body [39], and mass-spring [40] approaches to DLO modeling. However, these approaches produce visually plausible animations rather than physically accurate simulations. To address this issue, in [34], a compliant version of PBD was used to model real rope-like objects.
Nevertheless, these approaches are characterized by limited accuracy, especially for stiffer DLOs, like cables. An alternative approach is to represent the DLO as a curve, which better resembles the nature of the DLO and, at the same time, is less complex than volumetric mesh models. The key concept in DLO modeling using elastic rods is that stable configurations of the DLO correspond to the minimal-energy curves that represent them [41]. This allows for more efficient DLO path planning [41], perception [42], and manipulation [17]. The most popular approaches exploit models based on Kirchhoff rods [17], Cosserat rods [15], and geometrically exact dynamic splines [32]. Despite the reduced complexity, these methods still require significant amounts of time to simulate the behavior of the DLO, especially for more prolonged movements, which are crucial for efficient motion planning. Moreover, all of the above-mentioned methods require notable knowledge about the manipulated object, like its mesh, Young's and shear moduli, or mass, but also accurate perception systems able to identify all elements of the complex state of the object representation, like the twist along the DLO. Finally, due to idealistic assumptions about the DLO model, environmental conditions, and perception system, they may fail to accurately predict the DLO behavior due to the reality gap. ### _Learning-based DLO models_ To address the issues of the model-based approaches mentioned above, data-driven DLO models were introduced. They have received much attention in recent years thanks to their flexibility and relatively low computational effort. One of the first approaches to learning-based DLO modeling focuses on learning implicit models. In [43], a Causal InfoGAN is trained to generate plausible transitions in the observation space based on the transitions between latent states, which makes it possible to guide visual servoing. In turn, the authors of [26] propose to learn a linear dynamics model in the latent space jointly with an encoder-decoder that encodes the DLO states and robot actions. In contrast to [43], this approach allows directly generating actions that should lead to the goal state. However, it still operates in the image space, which seems suboptimal in terms of generalization. The learning of a DLO model based on a low-dimensional DLO state representation is presented in [22]. This work uses a one-step transition model based on the initial and end states of the DLO, represented by a sequence of points, and the applied change of the grippers' positions. In [22], the transition model is based on an MLP, but without taking advantage of the known structure of the DLO. To exploit this structure, the authors of [25] used a bidirectional LSTM neural network to model the evolution of the DLO state given the applied action. Another approach that takes advantage of representing a DLO as a sequence of points is presented in [24], where a graph neural network-based DLO model was proposed. However, the best performance in DLO modeling was achieved by a fusion of these approaches, IN-biLSTM [21]. Inspired by the Jacobian-based DLO model [35], the authors of [23] trained a neural network that, based on the actual DLO state, generates a Jacobian between the gripper and DLO velocities that locally approximates the DLO behavior. By doing so, they imposed a strong locally linear prior on the DLO model, which led to state-of-the-art performance in predicting the movement of a simulated DLO. Our paper considers a task similar to the one described in [23].
However, while [23] and [24] proposed an online adaptation of the trained DLO model, in our work we focus only on the performance of so-called _offline models_, learned solely from already collected data. We propose an alternative architecture for the DLO model, the Transformer, which can learn the structure of the interactions between the points representing the DLO that are induced by the movement of the robot's grippers. Finally, we propose a data augmentation technique that induces a bias on the trained model similar to the one introduced by the Jacobian in [23], but which is not restricted by the assumption of local deformations.

## III Neural network-based quasi-static model of a DLO

The goal of this work is to learn from data a model \(f\) for predicting the next DLO state \(\mathbf{s}_{n+1}=f(\mathbf{s}_{n},\mathbf{p}_{n},\mathbf{a}_{n})\), given the actual DLO state \(\mathbf{s}_{n}\), the actual pose \(\mathbf{p}_{n}\) of the robots' EEs holding the DLO, and a movement of the robots \(\mathbf{a}_{n}\). We define the state of the DLO as a sequence of \(n_{s}\) points in 3D space \(\{(x_{1},y_{1},z_{1}),(x_{2},y_{2},z_{2}),\ldots,(x_{n_{s}},y_{n_{s}},z_{n_{s}})\}\in\mathbb{R}^{n_{s}\times 3}\). The pose of the robots' EEs consists of the positions \(\mathbf{t}\) and orientations \(\mathbf{R}\) of the left and right robotic arm TCPs, i.e., \(\mathbf{p}=(\mathbf{t}^{L},\mathbf{R}^{L},\mathbf{t}^{R},\mathbf{R}^{R})\), where the \(L\) and \(R\) superscripts denote the arm. We consider our approach quasi-static, as we limit our analysis to states in which the DLO is not moving and consider only the changes of the steady states, i.e., states with minimal energy of the DLO, due to the reconfiguration of the end-effectors that hold it. While it is possible to analyze the dynamics of this reconfiguration, we observe that in typical bimanual manipulation of a DLO it is often enough to consider the steady states [17], as the performed moves are typically slow. Thus, the DLO can reach a steady state almost immediately, especially in the case of stiff DLOs. A crucial aspect of the problem we consider is that we limit the perception system to a single RGBD camera, and we assume a markerless and textureless DLO, which distinguishes our setting from most state-of-the-art approaches [21, 23, 44]. This assumption is motivated by the fact that in typical bimanual robotic manipulation settings we cannot attach markers to the object we want to manipulate, and most DLOs have no detectable visual or geometrical features that would enable us to track the twist along the DLO.

### _Data representations_

To efficiently learn a model, appropriate data representations are essential. We must define how to represent the state of a DLO and the orientation of the EEs. Regarding the representation of orientation, we analyzed three alternatives: quaternions, rotation matrices, and axis-angle (a sketch of these conversions is given below). A more challenging problem is how to represent the state of a DLO. As mentioned before, we represent the shape of a DLO as a sequence of 3D points. However, this does not fully describe the DLO state: besides the shape, we should also include the twist along the DLO. In general settings we would consider it, but in a markerless and textureless setting it is impossible to detect the twist. Therefore, the system we observe is, in general, partially observable. However, we expect that some notion of the twist can be inferred from the geometry of a given DLO, given enough data to differentiate between the deformations innate to a given DLO instance and the twist along it.
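As a concrete illustration of the three orientation encodings discussed above, the following minimal Python sketch converts a TCP rotation matrix into each of them using SciPy; the helper name `orientation_features` is ours and not part of the paper's code.

```python
# Minimal sketch: the three EE orientation encodings considered in this work.
import numpy as np
from scipy.spatial.transform import Rotation


def orientation_features(R_mat: np.ndarray, kind: str) -> np.ndarray:
    """Flatten one gripper orientation into the chosen representation."""
    rot = Rotation.from_matrix(R_mat)
    if kind == "quaternion":
        return rot.as_quat()            # shape (4,): x, y, z, w
    if kind == "rotation_matrix":
        return rot.as_matrix().ravel()  # shape (9,)
    if kind == "axis_angle":
        return rot.as_rotvec()          # shape (3,): axis scaled by angle
    raise ValueError(f"unknown representation: {kind}")


# The same rotation yields feature vectors of size 4, 9, and 3, respectively.
R = Rotation.from_euler("xyz", [0.1, -0.2, 0.3]).as_matrix()
for kind in ("quaternion", "rotation_matrix", "axis_angle"):
    print(kind, orientation_features(R, kind).shape)
```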
The representation based on 3D points has a significant flaw: the lack of translation invariance, i.e., the representations of a DLO of the same shape but located at different positions will differ significantly. To avoid this, we represent the DLO as a sequence of points described in a coordinate system located in the middle of the right gripper pad, with its orientation aligned with the coordinate system of the right manipulator base. This introduces translation invariance and, at the same time, maintains the direction of the gravity vector w.r.t. the DLO. Another way to achieve translation invariance of the DLO state is to represent it as a sequence of difference vectors between consecutive points, i.e., a sequence of edges. A further decision one has to make is how to represent the actual positions of the EEs \(\mathbf{t}_{i}^{L},\mathbf{t}_{i}^{R}\) and their movements \(\mathbf{a}_{i}\). A transformation similar to the one used for the DLO state can be applied to the gripper positions: we can omit the position of the right gripper and express the position of the left one in the right gripper's coordinate system. The move of the grippers \(\mathbf{a}_{i}\) can be represented either by the end pose of the left arm and the end orientation of the right one, \(\mathbf{a}_{i}=(\mathbf{t}_{i+1}^{L},\mathbf{R}_{i+1}^{L},\mathbf{R}_{i+1}^{R})\), or by the difference between the end and initial states of the grippers, \(\mathbf{a}_{i}=\Big{(}\mathbf{t}_{i+1}^{L}-\mathbf{t}_{i}^{L},\big{(}\mathbf{R}_{i}^{L}\big{)}^{-1}\,\mathbf{R}_{i+1}^{L},\big{(}\mathbf{R}_{i}^{R}\big{)}^{-1}\,\mathbf{R}_{i+1}^{R}\Big{)}\).

### _Neural network architectures_

Having defined the data representations, we can focus on how to process them to predict the DLO motion accurately. We propose not to approximate the function \(f\) directly, but instead to use a neural network to approximate the change of the DLO state \(\Delta\mathbf{s}_{n}=f(\mathbf{s}_{n},\mathbf{p}_{n},\mathbf{a}_{n})-\mathbf{s}_{n}\) after applying a move of the grippers \(\mathbf{a}_{n}\), given the actual state of the DLO \(\mathbf{s}_{n}\) and of the grippers \(\mathbf{p}_{n}\). In this paper, we consider four neural network architectures: (i) a simple MLP, (ii) our proposed Transformer-based network [28], and two baselines: (iii) an adjusted IN-biLSTM architecture [21], and (iv) an MLP inspired by [23] that predicts the Jacobian of the DLO w.r.t. the grippers. All considered architectures are shown in Figure 2. In Figure 2a, the simple MLP is shown. It generates embeddings of the DLO state \(\mathbf{s}\), the positional information about the left gripper \((\mathbf{t}^{L},\mathbf{a}_{\mathbf{t}}^{L})\), where \(\mathbf{a}_{\mathbf{t}}^{L}\) denotes the translational part of the left gripper movement, and the rotational components of the initial state of the grippers \(\mathbf{R}^{L},\mathbf{R}^{R}\) as well as the rotational components of the movements of both arms \(\mathbf{a}_{\mathbf{R}}^{L},\mathbf{a}_{\mathbf{R}}^{R}\). The concatenated embeddings are then processed with a sequence of fully-connected layers, and finally the estimate of \(\Delta\mathbf{s}\) is generated; a minimal sketch of this residual scheme follows.
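To make the MLP variant concrete, a minimal PyTorch sketch of the residual scheme is given below; the layer sizes, activation choices, and module names are illustrative assumptions, not the exact configuration used in the paper.

```python
# Sketch of the residual MLP from Fig. 2a: embed the DLO state, the left-arm
# positional information, and the rotational components, then regress Delta s.
import torch
import torch.nn as nn


class ResidualMLP(nn.Module):
    def __init__(self, n_points: int = 16, rot_dim: int = 3, hidden: int = 256):
        super().__init__()
        self.state_emb = nn.Sequential(nn.Linear(3 * n_points, hidden), nn.Tanh())
        # t_L and the translational part of the move a_t_L (3 + 3 values).
        self.pos_emb = nn.Sequential(nn.Linear(6, hidden), nn.Tanh())
        # R_L, R_R and the rotational move components a_R_L, a_R_R.
        self.rot_emb = nn.Sequential(nn.Linear(4 * rot_dim, hidden), nn.Tanh())
        self.head = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3 * n_points),
        )

    def forward(self, s, pos, rot):
        z = torch.cat([self.state_emb(s), self.pos_emb(pos), self.rot_emb(rot)], dim=-1)
        return self.head(z)  # predicted change of the DLO state, Delta s

    def predict_next_state(self, s, pos, rot):
        # The network predicts a state change, so the next state is s + Delta s.
        return s + self.forward(s, pos, rot)
```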
Figure 2b illustrates our proposed approach, a Transformer-based architecture. It is inspired by the attention mechanism [28], which is intended to better capture the interactions between the pose and movement of the grippers, which serve as the context, and the parts of the DLO, to which we apply the attention mechanism. While the whole architecture is based on the Transformer from [28], we (i) did not introduce any masking of the cable state signal, (ii) simplified the processing of the context, which in our case is not a sequence, and (iii) did not use a softmax activation at the final processing step, because our considered task is regression, not classification. In Figure 2c, the first baseline model is shown, denoted JacMLP, whose architecture is similar to the MLP. However, drawing inspiration from the recent work [23], it predicts not the DLO displacement but the Jacobian matrix \(J\), representing the local linear transformation \(\Delta\mathbf{s}=J\mathbf{a}\) between the change of the grippers' poses, denoted by \(\mathbf{a}\), and the change of the DLO state \(\Delta\mathbf{s}\). We also tested the RBFN model proposed in [23] but obtained inferior performance. Figure 2d illustrates the second baseline, the IN-biLSTM architecture proposed in [21]. We adjusted it to comply with the considered task: we preprocessed the poses and movements of both EEs and added their latent representations to the latent representations of the DLO's outermost edges. The rest of the architecture remained the same as in [21], with the dimensionality of the latent space slightly reduced from 150 to 128.

## IV Dataset

### _Dataset collection_

To collect the dataset, we used two UR3 robots with custom 3D-printed grippers, a Microsoft Azure Kinect camera, and our DLOFTBs algorithm [27], based on B-splines, for fast tracking of DLOs. Data were collected in sequences of 20 random arm moves, with constraints enforced to prevent tearing the DLO apart. For each sequence, the initial states of the robots and the DLO were chosen manually to cover a broad spectrum of system configurations. Next, we removed samples with visible depth-measurement issues, which can occur for very thin objects. Finally, we generated data points by merging every pair of system configurations from the same sequence. As a result, each sample in the dataset consists of the initial and end EE poses \(\mathbf{p}_{n-1},\mathbf{p}_{n}\) and DLO states \(\mathbf{s}_{n-1},\mathbf{s}_{n}\). For detecting the DLO shape in the RGBD image, we used the DLOFTBs algorithm [27], acting on DLO masks extracted by hue-based segmentation. The output of this method is a 3D B-spline curve representing the shape of the DLO. However, we observed that in a bimanual DLO manipulation setting, the ends of the DLO are often not clearly visible to the camera; thus, the length of the detected DLO varies, which makes it harder to track points on the DLO stably. Therefore, we propose utilizing the information about the grippers handling the DLO and including their TCPs in the list of points on the DLO. Nevertheless, to achieve a stable representation, we want to use a sequence of points that are equally distant from each other. Thus, we fit a B-spline to the points on the DLO and the grippers' TCPs, and then compute \(N\) equally distant points on the DLO, where the distance is computed along the B-spline curve, as sketched below.
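A minimal sketch of this resampling step is shown below, assuming SciPy's B-spline routines and an ordered array of points with the two TCPs appended at the ends; the function name is ours.

```python
# Fit a B-spline through the detected DLO points (TCPs included) and return
# n_out points equally spaced along the arc length of the curve.
import numpy as np
from scipy.interpolate import splprep, splev


def equidistant_points(points_3d: np.ndarray, n_out: int, n_dense: int = 1000):
    """points_3d: (M, 3) ordered points along the DLO."""
    tck, _ = splprep(points_3d.T, s=0.0)             # interpolating B-spline
    u_dense = np.linspace(0.0, 1.0, n_dense)
    dense = np.stack(splev(u_dense, tck), axis=1)    # (n_dense, 3) samples
    seg = np.linalg.norm(np.diff(dense, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])    # cumulative arc length
    targets = np.linspace(0.0, arc[-1], n_out)       # equally spaced lengths
    u_eq = np.interp(targets, arc, u_dense)          # invert arc length -> u
    return np.stack(splev(u_eq, tck), axis=1)        # (n_out, 3)
```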
Moreover, we observed that with the Kinect Azure sensor, the quality of the depth estimation of the DLO degrades when the depth of the background is close to that of the DLO, which often happens near the ends of the DLO, as they are close to the grippers. This might be caused by the Kinect's internal surface-estimation smoothing: the few depth points on the DLO are treated as noise or outliers against the majority of points constituting the background surface. To address this issue, we decided to neglect the depth of the points on the ends of a DLO that lie too close to the grippers and to rely on the B-spline interpolation instead. In this way, we collected several datasets:
* \(50\,\mathrm{cm}\) of two-wire cable (38698 training / 9182 validation / 6200 test samples),
* \(45\,\mathrm{cm}\) of two-wire cable (8848/3284/2474),
* \(40\,\mathrm{cm}\) of two-wire cable (7414/2824/3026),
* \(50\,\mathrm{cm}\) of solar cable (3378/708/1258),
* \(50\,\mathrm{cm}\) of braided cable (3674/1042/1420).

To collect them, we used three different cables: two-wire, solar (used in photovoltaic installations), and braided. They differ notably in terms of stiffness and plasticity and have comparable diameters of 6.9 mm, 5.8 mm, and 5.8 mm, respectively. The stiffest is the solar cable, the two-wire cable is intermediate, and the least stiff is the braided one. The braided cable resembles a slightly stiffened rope and is characterized by the highest plasticity. In turn, the plasticity of the solar and two-wire cables is comparable, although the solar one retains deformation more strongly.

Fig. 2: Architectures of the neural network-based DLO models used in the experiments: a) MLP, b) Transformer, c) JacMLP [23] and d) IN-biLSTM [21].

### _Data augmentation_

In this paper, we propose a simple yet efficient data augmentation technique that can be applied to introduce an essential inductive bias into the predictions of the neural network-based DLO model. We assumed that the considered DLO motions are quasi-static and that we model them using a function \(f(\mathbf{s}_{n},\mathbf{p}_{n},\mathbf{a}_{n})\). In this setting, if the grippers do not move and no variable external forces act on the DLO, we expect the DLO to remain in the same state. Therefore, we would like a neural network-based model to exhibit the same property, i.e., \(f(\mathbf{s}_{n},\mathbf{p}_{n},\mathbf{a}_{n})=0\) if \(\mathbf{a}_{n}\) is equal to the actual EE pose \(\mathbf{p}_{n}\) in the case of the positional representation, or if \(\mathbf{a}_{n}\) is equal to the identity in the case of the difference representation. Because this property is not automatically satisfied by the architecture of our models (except for JacMLP, which can satisfy it when the action representation for no move is the zero vector), we propose to add to the dataset samples that represent the case of no arm motion at every DLO configuration in the dataset, as sketched below. In the experimental section, we show that this simple trick substantially improves the prediction accuracy of all considered models except JacMLP.
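A minimal sketch of this augmentation is given below, assuming the dataset is stored as NumPy arrays; the field and function names are illustrative.

```python
# For every configuration in the dataset, append a sample whose action encodes
# "no move" and whose target state change is zero.
import numpy as np


def augment_with_no_motion(states, poses, actions, deltas, identity_action):
    """Append one zero-displacement sample per original configuration."""
    n = len(states)
    # The "no move" action: the end pose equals the current pose (positional
    # representation) or the relative transform is the identity (difference
    # representation); `identity_action` encodes whichever is in use.
    no_move = np.repeat(identity_action[None], n, axis=0)
    return (np.concatenate([states, states]),
            np.concatenate([poses, poses]),
            np.concatenate([actions, no_move]),
            np.concatenate([deltas, np.zeros_like(deltas)]))
```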
### _Test-time DLO scaling_

One of the typical limitations of neural network-based DLO models is their lack of adaptation to easy-to-obtain DLO parameters such as length. Typically, it is assumed that the network is trained to work only for a given DLO type and length. This problem was already addressed in [23], however only in the context of predicting the local linear model of the DLO, by scaling parts of the Jacobian matrix. Instead, we propose to include the information about length more generally, i.e., in the input to the neural network. In particular, consider a neural network trained on a DLO of length \(l_{1}\). To make the test data distribution of the same DLO but of length \(l_{2}\) similar to the one on which the network was trained, we propose to scale the DLO representation and the positions of the left arm by the factor \(\frac{l_{1}}{l_{2}}\), as sketched below. Of course, this type of scaling does not correspond to the actual changes in cable behavior caused by the length change; it is a simple heuristic meant to approximate them. We conjecture that this scaling may be more effective for small changes in the length of the DLO, as for bigger ones the impact of the grippers' rotations, which does not scale linearly with length, may be predominant.
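A minimal sketch of this heuristic (function and argument names are ours) is:

```python
# Query a model trained on a DLO of length l_train with data from a DLO of
# length l_test by rescaling points and left-arm positions by l_train / l_test.
def predict_scaled(model, s, t_left, rot_inputs, l_train, l_test):
    k = l_train / l_test
    delta_s_scaled = model(k * s, k * t_left, rot_inputs)  # rotations unscaled
    return delta_s_scaled / k  # map the prediction back to the test length
```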
## V Experimental evaluation

The results section first focuses on our proposed Transformer-based model. Then, we show the gains stemming from the proposed data augmentation method and compare our results against state-of-the-art approaches. Further, we elaborate on the properties of the proposed approaches, namely generalization and inference time. Finally, we show the performance of the proposed method in model-based DLO bimanual manipulation.

### _DLO shape prediction_

#### V-A1 Baseline experiment

We start our experiments by comparing all considered learning-based DLO models trained on the dataset of the 50 cm long two-wire cable. The results obtained by these models on the test subset of the two-wire cable dataset are shown in the left part of Figure 3, denoted as _baseline_. As a metric, we use the ratio of the \(\mathcal{L}_{3}\) error introduced in [27] to the \(\mathcal{L}_{3}\) error computed between the initial and ground-truth cable poses after the move, to account for the variable scale of deformations in the dataset. One can see that the proposed Transformer architecture obtains the best result, although almost the same prediction accuracy is achieved by the IN-biLSTM. Significantly higher errors are obtained by the MLP and by JacMLP, which fared the worst. While we performed the tests with all the data representations introduced in Section III-A, we found that the impact of the representation type is minor. Thus, for clarity of presentation, we report the test set statistics only for the best representation of each model. In particular, for the Transformer architecture, the best results were obtained with the axis-angle orientation representation, the move represented by the difference of poses, and the DLO represented by edges; for IN-biLSTM, with axis-angle orientation, end-pose move, and the DLO as a sequence of points; for the MLP, with rotation-matrix orientation, end-pose move, and the DLO as a sequence of points; and for JacMLP, with rotation-matrix orientation, the move as a difference of poses, and the DLO as a sequence of edges.

#### V-A2 Data augmentation

In this part of the experiment, we assess the impact of the proposed data augmentation technique on the results achieved by the DLO models in the previous experiment; the results are presented in the right part of Figure 3. One can observe that the simple MLP, whose performance was significantly dominated in the previous experiment, achieves accuracy comparable to the best models thanks to the data augmentation. Moreover, augmentation improved the performance of all considered models except JacMLP: on average by 15% for the MLP and by 6% for IN-biLSTM and the Transformer. Nevertheless, the median errors, at the level of 15.31% for the Transformer, 15.56% for IN-biLSTM, 17.08% for the MLP, and 20.33% for JacMLP, may seem relatively big. To better visualize what kind of errors these values represent, we show in Figure 4 examples of the DLO behavior predictions made by the MLP model with augmentation that lie in the 5th, 50th, and 95th error percentiles.

Fig. 3: Relative prediction error [%] of the neural DLO models on the 50 cm long two-wire cable test set. The left part of the plot shows the baseline experiment results, while the right part presents the effects of applying our proposed data augmentation technique. Improvement is visible for all models except JacMLP.

### _Generalization_

#### V-B1 DLO length

One of the commonly overlooked aspects of generalization in neural network-based DLO modeling is the length of the cable being modeled. In this experiment, we assess the performance of all architectures that achieved similar accuracy in the previous experiment on the task of predicting the movement of the same DLO but shorter, i.e., 45 cm and 40 cm. As a reference point, we compare these results with those obtained by models trained directly on the shorter-DLO datasets. Moreover, we test the impact of our proposed scaling procedure (see Section IV-C). The results of this comparison are presented in Figure 5. One can see that if the difference in length is relatively small (10%), all of the considered architectures achieve results comparable to the baseline. However, increasing the length difference to 10 cm causes a notable degradation of the performance of the models trained on the dataset of the 50 cm long cable. While in both cases the worst-performing model was the Transformer, applying the proposed scaling procedure yields a significant improvement and performance at the same level as the same model trained on the target dataset. Interestingly, the same scaling procedure results in no improvement, or even a decrease, in the performance of the MLP and IN-biLSTM models, which may suggest that only the Transformer was able to capture the right intuition about the nature of the modeled system.

#### V-B2 Different DLOs

Another very important ability of the model is to generalize to previously unseen DLOs, or at least to efficiently learn a model of a new one by exploiting already gathered knowledge through pretraining. We evaluated these capabilities on two different DLOs: (i) the braided cable and (ii) the solar cable. We compared the performance of the models trained on the two-wire cable dataset with those trained from scratch on the braided and solar cable datasets, and with those that were first pretrained on the two-wire cable and then trained on the braided and solar cables. To check how much data from a new cable is needed to train a new model efficiently, we performed these tests using 100%, 10%, 1%, and 0.1% of the target dataset. The results of these experiments are presented in Figure 6. The baseline models trained only on the two-wire cable data achieved decent prediction performance. To achieve an improvement w.r.t. them, one needs to start from a pretrained model and use at least hundreds of data points with the target DLO. It is clearly visible that in almost all cases, knowledge of the behavior of one cable gives an advantage in learning the behavior of a new one. However, for the braided cable it turned out that, given enough training data, pretraining may limit the capability to learn.
Nevertheless, pretraining not only improves prediction accuracy in general, but also enables models to be trained more than 100 times faster. Finally, we do not see any significant evidence that one of the compared architectures is better at generalizing to new DLO types; however, the Transformer was in both cases slightly worse when it did not have access to the target dataset.

Fig. 4: Samples from the test set with predictions made with the MLP model, which are in the 5th, 50th, and 95th percentile of errors. The top row presents the front view from the camera, while the bottom one is the view from the top.

Fig. 5: Relative prediction errors [%] for the task of predicting the movement of a DLO of a different length. The Transformer-based model is able to achieve the same performance as the baseline (trained on the test-length DLO) if the appropriate scaling is applied (x-axis: training set).

Fig. 6: Analysis of the generalization to different DLO types based on the relative error [%] as a function of the portion of the target-DLO dataset [%]. The solid line represents the performance of the models trained from scratch on the target dataset, the dashed line the models pretrained on the two-wire cable and then trained on the target, and the dash-dot line the baseline model trained only on the two-wire cable dataset.

### _Inference time_

Another important property of machine learning models, besides their accuracy, is the inference time. This is particularly important because these models may become part of a planning or control pipeline, which typically has a limited time budget. We compared the timings of the best MLP, IN-biLSTM, and Transformer models using an Intel(R) Core(TM) i7-9750H CPU with 32 GB RAM and present the results in Figure 7. One can see that the Transformer is faster than IN-biLSTM. However, with growing batch size the difference shrinks, and starting from a batch size of 256, IN-biLSTM becomes faster. Nevertheless, the best performance is achieved by the simple MLP, which outperforms both the Transformer and IN-biLSTM models by a large margin, as it is more than 50 times faster than IN-biLSTM and 7 times faster than the Transformer.

### _Model-based DLO bimanual manipulation_

In the last experiment, our goal is to evaluate the possibility of using the trained forward DLO model to compute its inverse, i.e., to find a movement of the grippers that results in a desired deformation and movement of the DLO. While this may be achieved in several ways, we used one of the simplest yet efficient approaches: CEM [45]. For this experiment, we chose the MLP model because, thanks to the proposed data augmentation method, it achieved decent prediction accuracy and can be evaluated very fast. The short inference time of the model is crucial for fast planning of the gripper movements, which is desirable in potential industrial applications. The core idea of CEM in our setting is to: (i) draw random gripper movements from a randomly initialized normal distribution, (ii) predict the state of the DLO after applying these movements using our neural DLO model, (iii) assess the predicted DLO shapes with respect to the desired one using the mean absolute difference between points on the DLO, (iv) update the normal distribution parameters based on the several actions that resulted in the smallest shape differences (the elite actions), and (v) repeat the process until convergence or until the maximum number of iterations is exceeded; a minimal sketch of this loop follows.
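In the sketch below, `forward_model` and the cost stand in for the trained MLP and the mean absolute point-wise difference to the goal shape; names and the flat action encoding are our assumptions.

```python
# Cross-Entropy Method over gripper movements, using the learned forward model.
import numpy as np


def cem_plan(forward_model, s0, p0, goal, act_dim,
             n_samples=64, n_elite=8, n_iters=10):
    mu, sigma = np.zeros(act_dim), np.ones(act_dim)
    for _ in range(n_iters):
        acts = mu + sigma * np.random.randn(n_samples, act_dim)    # (i) sample
        pred = np.stack([forward_model(s0, p0, a) for a in acts])  # (ii) predict
        costs = np.abs(pred - goal).mean(axis=(1, 2))              # (iii) assess
        elite = acts[np.argsort(costs)[:n_elite]]                  # (iv) elites
        mu, sigma = elite.mean(axis=0), elite.std(axis=0)          # refit
    return mu  # (v) best-guess gripper movement after the final iteration
```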
In our experiments, we set the number of drawn samples to 64, the number of elite actions to 8, and the maximum number of iterations to 10. With these settings, the whole planning process takes less than 100 ms on the same machine as in the previous section. Sample results of this experiment are presented in the frames of Figure 8, while the results of all performed manipulation trials are presented in the video attachment. One can see that the proposed model allows for a fairly accurate prediction of the grippers' positions and orientations that result in a 3D shape of the DLO similar to the desired one. These results were obtained on a wide range of reshaping problems, such as bending, unbending, and large deformations in the direction parallel to the camera's optical axis. The obtained shape reconstruction errors are, on average, about 1.5 cm according to the \(\mathcal{L}_{3}\) metric proposed in [27].

## VI Conclusions

In this paper, we analyzed the problem of learning a quasi-static 3D model of a markerless DLO manipulated with two robotic arms. We proposed a Transformer architecture that outperforms the state of the art in terms of prediction accuracy, and we introduced a data augmentation technique that further improves the prediction performance of almost all considered neural network architectures. Using this method, a simple MLP was able to achieve accuracy comparable to the Transformer while being more than 7 times faster. Moreover, we analyzed the ability of the considered models to generalize to different DLO types and lengths. We showed that a model pretrained on a different cable type was able to predict the behavior of a new cable quite accurately, even without training on the target one; moreover, a pretrained model subsequently trained on the target-cable dataset outperforms, in most cases, a solution trained from scratch. Regarding the DLO length, we showed that if the length of the DLO changes notably between training and test time, the performance of the learning-based models decreases. However, if the proposed scaling procedure is applied to the Transformer model, the achieved performance is nearly the same as that of a baseline model trained on the target dataset. Finally, we evaluated the fast MLP-based DLO model trained on the augmented data in the task of predicting the movement of the grippers needed to obtain a desired DLO shape, and demonstrated its speed and accuracy.
2309.04300
Trapped Imbalanced Quantum Droplets
A two-component quantum droplet is an attractive mixture of ultracold bosons stabilised against collapse by quantum fluctuations. Commonly, two-component quantum droplets are studied within a balanced mixture. However, the mixture can be imbalanced resulting in a lower energy but less stably bound droplet, or even a droplet submerged in a gas. This work focuses on the experimentally relevant question: how are imbalanced droplets modified by harmonic trap potentials? Droplet ground states and breathing modes are analysed across the two-dimensional parameter space of imbalance and trap strength. The robustness of the droplet imbalance is also studied by releasing the droplet from the trap, demonstrating that this can lead to the creation of free-space, imbalanced droplets.
Thomas A. Flynn, Nick A. Keepfer, Nick G. Parker, Thomas P. Billam
2023-09-08T12:56:12Z
http://arxiv.org/abs/2309.04300v1
# Trapped Imbalanced Quantum Droplets

###### Abstract

A two-component quantum droplet is an attractive mixture of ultracold bosons stabilised against collapse by quantum fluctuations. Commonly, two-component quantum droplets are studied within a balanced mixture. However, the mixture can be imbalanced, resulting in a lower-energy but less stably bound droplet, or even a droplet submerged in a gas. This work focuses on the experimentally relevant question: how are imbalanced droplets modified by harmonic trap potentials? Droplet ground states and breathing modes are analysed across the two-dimensional parameter space of imbalance and trap strength. The robustness of the droplet imbalance is also studied by releasing the droplet from the trap, demonstrating that this can lead to the creation of free-space, imbalanced droplets.

## I Introduction

Quantum gases have developed into a rich platform to study a variety of physics, from analogues of condensed matter and many-body systems [1; 2; 3] to simulators of cosmological processes [4; 5; 6]. Many of the theoretical and experimental results of these studies are dominated by Mean-Field (MF) contributions. At ultracold temperatures quantum mechanical effects are pronounced, enabling the study of Beyond-Mean-Field (BMF) contributions, i.e., quantum fluctuations. Two-component quantum droplets are one such quantum gas system in which quantum fluctuations are highly significant [7; 8]. Quantum droplets can be formed in ultracold mixtures of atomic Bose gases, in which the interactions between the two species are tuned to be dominantly attractive. This highlights another advantage of quantum gases: the precise control of interactions. Two-body interactions -- characterised by the scattering length, \(a_{s}\) -- can be tuned via a Feshbach resonance [9; 10; 11]. Taking into account only MF physics, these attractive mixtures would be unstable to collapse. The collapse of the cloud leads to an increased density and, consequently, an increased contribution of the BMF corrections. Quantum fluctuations lead to an effective repulsion between the atoms; this repulsion -- described to first order by the Lee-Huang-Yang (LHY) correction [12] -- balances the attractive collapse, forming a self-bound, dilute liquid droplet [7; 8]. Therefore, quantum droplets are an experimentally observable state of matter in which quantum fluctuations not only contribute, but are integral. It should be noted that the arrest of collapse by BMF corrections does not carry over to the single-component Bose gas, which has been experimentally demonstrated to be unstable to collapse under attractive two-body atomic interactions [13; 14]. As indicated above, quantum gases are a platform for probing physics from areas that appear disparate. One field that has benefited from a close connection with quantum gases is fluid dynamics. Many quantum gases exhibit superfluidity, which has been used as an analogue of the dynamics of classical fluids, such as vortex dynamics [15] or turbulence [16; 17]. Quantum droplets are a further extension of this tradition, as they can be used to probe liquid properties such as surface tension [7; 18; 19] and incompressibility [20; 21]. Two-component quantum droplets have been experimentally observed in both homonuclear \({}^{39}\)K [21; 22; 23; 24] and heteronuclear, \({}^{41}\)K-\({}^{87}\)Rb and \({}^{23}\)Na-\({}^{87}\)Rb [25; 26], mixtures.
The benefit of using homonuclear mixtures is the precise control over the population numbers of each component. Homonuclear mixtures are made of atoms prepared in different hyperfine states. Experiments begin with all atoms in one component; a radio-frequency pulse is then used to controllably transition a proportion of the atoms to the second component. This control allows for probing one of the predictions of two-component droplets: density balancing. The original droplet prediction of Ref. [7] argues that a density balance is preserved during the droplet formation. This density balance is due to an energetic favourability for the two component densities to maintain a fixed ratio \(n_{2}/n_{1}=\text{const.}\), where \(n_{i}\) is the number density of the \(i\)th component. The majority of theoretical studies of two-component quantum droplets assume this density balance. However, recent works [27; 28; 29; 30; 31] have explored the impact of removing this assumption and the properties of such imbalanced droplets. Imbalanced droplets fall into two main regimes [7; 29]: (1) bound, imbalanced droplets, in which there is a population imbalance in the droplet core; (2) saturated, imbalanced droplets, corresponding to a droplet core that is saturated with majority-component atoms, with any further majority component surrounding the droplet as an unbound gas. Ref. [29] studied the ground states and breathing modes of imbalanced quantum droplets in free space. A saturated droplet will lose any unbound atoms in free space; in a trap, the surrounding gas will be retained, as it is energetically favourable for the gas to sit at the trap minimum. The key focus of this paper is to investigate how the ground states and breathing modes are modified by the application of an isotropic harmonic trap. These investigations are motivated by the feasibility of creating and probing imbalanced quantum droplets experimentally. Additionally, this work explores the stability of these droplets when released into free space. This work starts by defining the theory used to model quantum droplets in Section II. This model is first implemented in Section III to explore how the imbalanced droplet ground states are modified by isotropic, harmonic trapping potentials. Section IV looks at propagating these ground states in time, subject to an initial perturbation, to analyse the droplet breathing modes, both for varying trap strength and for varying size of imbalance. Section V investigates the stability of imbalanced droplets under an instantaneous removal of the trapping potential, as this is a method widely used in droplet experiments. Finally, the main conclusions and future work are discussed in Section VI.

## II The model

A zero-temperature mixture of two weakly-interacting, dilute, homonuclear Bose gases can be described by the energy functional [7; 32] \[\begin{split} E=\int&\bigg{[}\frac{\hbar^{2}}{2m}|\boldsymbol{\nabla}\Psi_{1}|^{2}+\frac{\hbar^{2}}{2m}|\boldsymbol{\nabla}\Psi_{2}|^{2}+V_{1}|\Psi_{1}|^{2}+V_{2}|\Psi_{2}|^{2}\\ &+\mathcal{E}_{\text{MF}}+\mathcal{E}_{\text{LHY}}\bigg{]}\text{d}^{3}\mathbf{r},\end{split} \tag{1}\] in which \(m\) is the atomic mass of both components and \(V_{i}\) is the trapping potential applied to the \(i\)th component.
The first two terms of Equation (1) are the kinetic energy contributions, whilst \(\mathcal{E}_{\text{MF}}\) is the MF energy density term, given by \[\mathcal{E}_{\text{MF}}=\frac{2\pi\hbar^{2}a_{11}}{m}|\Psi_{1}|^{4}+\frac{2\pi\hbar^{2}a_{22}}{m}|\Psi_{2}|^{4}+\frac{4\pi\hbar^{2}a_{12}}{m}|\Psi_{1}|^{2}|\Psi_{2}|^{2},\] where \(a_{ii}\) and \(a_{12}\) are the intra- and inter-species scattering lengths. The final term, \(\mathcal{E}_{\text{LHY}}\), is the energy density of the LHY correction which, to first order, describes the effects of quantum fluctuations on the condensate [12]. For a homonuclear bosonic mixture the LHY correction takes the analytic form [7] \[\mathcal{E}_{\text{LHY}}=\frac{256\sqrt{\pi}\hbar^{2}}{15m}\left(a_{11}|\Psi_{1}|^{2}+a_{22}|\Psi_{2}|^{2}\right)^{5/2}. \tag{2}\] The LHY energy density does not depend on \(a_{12}\) due to the assumption that the mixture lies at the critical point of attractive instability, i.e., \(a_{12}^{2}=a_{11}a_{22}\), removing the issue of complex contributions resulting from an unstable phonon mode [7; 33; 34]. It should be noted that this approximation is made only in the derivation of Equation (2), and does not imply any parameter choice in later sections. The energy functional in Equation (1) can be minimised via the variational relation \(i\hbar(\partial\Psi_{i}/\partial t)=\delta E/\delta\Psi_{i}\), giving the equal-mass, coupled extended GP equations [7] \[\begin{split} i\hbar\frac{\partial\Psi_{1}}{\partial t}=&\bigg{[}-\frac{\hbar^{2}}{2m}\nabla^{2}+V_{1}+\frac{4\pi\hbar^{2}}{m}\left(a_{11}|\Psi_{1}|^{2}+a_{12}|\Psi_{2}|^{2}\right)\\ &+\frac{128\sqrt{\pi}\hbar^{2}a_{11}}{3m}\left(a_{11}|\Psi_{1}|^{2}+a_{22}|\Psi_{2}|^{2}\right)^{3/2}\bigg{]}\Psi_{1},\\ i\hbar\frac{\partial\Psi_{2}}{\partial t}=&\bigg{[}-\frac{\hbar^{2}}{2m}\nabla^{2}+V_{2}+\frac{4\pi\hbar^{2}}{m}\left(a_{22}|\Psi_{2}|^{2}+a_{12}|\Psi_{1}|^{2}\right)\\ &+\frac{128\sqrt{\pi}\hbar^{2}a_{22}}{3m}\left(a_{11}|\Psi_{1}|^{2}+a_{22}|\Psi_{2}|^{2}\right)^{3/2}\bigg{]}\Psi_{2}.\end{split} \tag{3}\] The dimensional scalings \(\mathbf{r}=\xi\mathbf{\tilde{r}}\), \(t=\tau\tilde{t}\) and \(\Psi_{i}=\rho_{i}^{1/2}\tilde{\Psi}_{i}\) result in the dimensionless, equal-mass, coupled extended GP equations \[\begin{split} i\frac{\partial\Psi_{1}}{\partial t}=&\bigg{[}-\frac{1}{2}\nabla^{2}+V_{1}+|\Psi_{1}|^{2}+\eta|\Psi_{2}|^{2}\\ &+\alpha\left(|\Psi_{1}|^{2}+\beta|\Psi_{2}|^{2}\right)^{3/2}\bigg{]}\Psi_{1},\\ i\frac{\partial\Psi_{2}}{\partial t}=&\bigg{[}-\frac{1}{2}\nabla^{2}+V_{2}+\beta|\Psi_{2}|^{2}+\eta\beta|\Psi_{1}|^{2}\\ &+\alpha\beta^{2}\left(|\Psi_{1}|^{2}+\beta|\Psi_{2}|^{2}\right)^{3/2}\bigg{]}\Psi_{2},\end{split} \tag{4}\] in which all tildes have been dropped, and the dimensionless parameters are \[\eta=\frac{a_{12}}{\sqrt{a_{11}a_{22}}},\] \[\alpha=\frac{32}{3}\left[\frac{2}{3\pi}\frac{|\delta a|a_{11}^{5/2}n_{1}^{(0)}}{\sqrt{a_{11}}+\sqrt{a_{22}}}\right]^{1/2},\quad\beta=\left(\frac{a_{22}}{a_{11}}\right)^{1/2},\] with dimensional parameters \[\begin{split}\xi=\sqrt{\frac{3}{8\pi}\frac{(\sqrt{a_{11}}+\sqrt{a_{22}})}{|\delta a|\sqrt{a_{11}}n_{1}^{(0)}}},&\tau=\frac{3m}{8\pi\hbar}\frac{(\sqrt{a_{11}}+\sqrt{a_{22}})}{|\delta a|\sqrt{a_{11}}n_{1}^{(0)}},\\ \rho_{1}=\frac{2}{3}\frac{|\delta a|n_{1}^{(0)}}{\sqrt{a_{11}}(\sqrt{a_{11}}+\sqrt{a_{22}})},&\rho_{2}=\frac{2}{3}\frac{|\delta a|n_{1}^{(0)}}{\sqrt{a_{22}}(\sqrt{a_{11}}+\sqrt{a_{22}})},\end{split}\] where \(\delta a=a_{12}+\sqrt{a_{11}a_{22}}\) and \(n_{1}^{(0)}\) is the equilibrium density
of component-1 for the balanced mixture [7]. The expression for the equilibrium density is calculated in a homogeneous infinite system under the criterion of a vanishing pressure -- i.e., the droplet in equilibrium with the vacuum -- and takes the form [7] \[n_{1}^{(0)}=\frac{25\pi}{1024}\frac{(a_{12}+\sqrt{a_{11}a_{22}})^{2}}{a_{11}^{3/2}a_{22}(\sqrt{a_{11}}+\sqrt{a_{22}})^{5}}.\] The density scalings \(\rho_{i}\) correspond to rescaled normalisation constants, \(\tilde{N}_{i}=N_{i}/(\rho_{i}\xi^{3})\), in which \(N_{i}\) is the population of the \(i\)th-component wavefunction. By breaking the assumption of density-locking, it is possible to imbalance the population numbers such that \(N_{2}/N_{1}\neq\sqrt{a_{11}/a_{22}}\). The trapping potentials are non-dimensionalised by \(V_{i}=\left(m\xi^{2}/\tau^{2}\right)\tilde{V}_{i}\). This work only considers isotropic harmonic trapping with equal traps applied to each component, i.e., \(V_{i}=V=\frac{1}{2}m\omega_{r}^{2}r^{2}\). Under non-dimensionalisation this becomes \(\tilde{V}=\frac{1}{2}\tilde{\omega}_{r}^{2}\tilde{r}^{2}\), in which \(\tilde{\omega}_{r}^{2}=\left(\tau\xi^{2}m/\hbar\right)\omega_{r}^{2}\). In subsequent sections the dimensionless population numbers and trapping potentials are presented without tildes, as only dimensionless parameters are used hereafter. The results in this paper will be contrasted with the density-locked model, used widely in modelling quantum droplet experiments [21; 22; 24]. The density-locked model assumes a constant density ratio, \(n_{2}/n_{1}=\sqrt{a_{11}/a_{22}}\), such that the two component wavefunctions can be expressed in terms of a single wavefunction, \(\Psi_{i}=\sqrt{n_{i}}\phi\), neglecting any out-of-phase motion between the components [7; 35; 36]. Under these assumptions, Equations (3) can be non-dimensionalised and reduced to a single equation, \[i\frac{\partial\phi}{\partial t}=\left[-\frac{1}{2}\nabla^{2}-3|\phi|^{2}+\frac{5}{2}|\phi|^{3}\right]\phi,\] with the system described by a single parameter, an effective atom number \(\tilde{N}\), given by [7] \[\tilde{N}=\left(\frac{\sqrt{a_{22}}}{n_{1}^{(0)}(\sqrt{a_{11}}+\sqrt{a_{22}})}\right)\frac{N}{\xi^{3}}, \tag{5}\] in which \(N\) here is the total atom number, \(N=N_{1}+N_{2}\). Within this work, balanced and imbalanced droplets are both modelled by Equations (4), though it should be noted that for a balanced droplet the dimensionless parameters \((N_{1},N_{2},\alpha,\beta,\eta)\) can be recast to \(\tilde{N}\). In the density-locked model, a given set of scattering lengths, \(a_{ii}\) and \(a_{12}\), corresponds to a fixed population number ratio, \(N_{2}/N_{1}=\sqrt{a_{11}/a_{22}}\).
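For concreteness, a minimal Python sketch evaluating these dimensionless parameters from the scattering lengths is given below (SI units; the variable names are ours, and the expressions simply transcribe the definitions above).

```python
# Dimensionless parameters of Equations (4) from the scattering lengths.
import numpy as np

hbar = 1.054571817e-34  # J s


def dimensionless_parameters(a11, a22, a12, m):
    delta_a = a12 + np.sqrt(a11 * a22)
    root_sum = np.sqrt(a11) + np.sqrt(a22)
    n1_0 = (25 * np.pi / 1024) * delta_a**2 / (a11**1.5 * a22 * root_sum**5)
    xi = np.sqrt(3 * root_sum / (8 * np.pi * abs(delta_a) * np.sqrt(a11) * n1_0))
    tau = 3 * m * root_sum / (8 * np.pi * hbar * abs(delta_a) * np.sqrt(a11) * n1_0)
    eta = a12 / np.sqrt(a11 * a22)
    beta = np.sqrt(a22 / a11)
    alpha = (32 / 3) * np.sqrt(
        2 * abs(delta_a) * a11**2.5 * n1_0 / (3 * np.pi * root_sum))
    return dict(n1_0=n1_0, xi=xi, tau=tau, alpha=alpha, beta=beta, eta=eta)
```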
## III Ground states

How the density of a spherically-symmetric balanced droplet varies with harmonic trap frequency has been studied in Ref. [37]. The trap frequency can be considered low if there is no significant change from the free-space droplet density, whereas a higher-frequency trap eventually leads to the flat-top density of large droplets being lost. Furthermore, in free space the negative chemical potential, \(-\mu\), is described as the particle emission threshold of the droplet; this description breaks down in a trap, however [37]. It can therefore be argued that in the trap-dominated regime the idea of a droplet begins to be less well defined, and the mixture transitions to a trapped gas. Ref. [37] approximates the transition to the trap-dominated regime as the point at which the potential energy at the droplet surface becomes comparable to the binding energy of the droplet, resulting in \(\omega_{r}^{(c)}\sim(4\pi/3\tilde{N})^{1/3}\), where \(\tilde{N}\) is the effective atom number of the density-locked model in Equation (5) [7]. One main assumption of this work is spherical symmetry: the density is assumed to be a function of radius only, reducing the computational problem to an effective 1D system, with the kinetic term becoming \(\nabla^{2}\Psi_{i}\rightarrow[\partial^{2}(r\Psi_{i})/\partial r^{2}]/r\). Another assumption of this work is balanced intraspecies scattering lengths (\(a_{11}=a_{22}\implies\beta=1\)). Thus, the only possible difference between the components is an imposed population number imbalance of \(N_{1}=N_{2}+\delta N_{1}\). To find ground states, Equations (4) are evaluated numerically in imaginary time until the energy of the mixture is adequately converged [38]; a sketch of this procedure is given below. The numerical scheme is a \(4^{\rm th}\)-order Runge-Kutta method, using a \(2^{\rm nd}\)-order centred finite-difference scheme for the spatial derivatives. Neumann boundary conditions (\(\partial\Psi_{i}/\partial r=0\)) are applied at \(r=0\) and at \(r=L_{r}\), where \(L_{r}\) is the radial computational box size. Note that for all simulations presented in this work, \(\omega_{r}^{(c)}\approx 0.186\).
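A minimal Python sketch of one imaginary-time step for Equations (4) with \(\beta=1\) is given below; for brevity it uses a forward-Euler update and crude endpoint handling in place of the 4th-order Runge-Kutta scheme described above, and it renormalises each component to its fixed particle number after every step.

```python
# One imaginary-time step for the coupled equations (4) in spherical symmetry
# (beta = 1), using u_i = r * Psi_i so the Laplacian becomes a 1D derivative.
import numpy as np


def imaginary_time_step(psi1, psi2, r, V, alpha, eta, dtau, N1, N2):
    dr = r[1] - r[0]

    def h_psi(psi_a, psi_b):
        u = r * psi_a
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dr**2  # d^2(u)/dr^2
        kinetic = -0.5 * lap / np.maximum(r, dr)  # crude treatment at r = 0
        dens = np.abs(psi_a)**2 + np.abs(psi_b)**2
        nonlin = np.abs(psi_a)**2 + eta * np.abs(psi_b)**2 + alpha * dens**1.5
        return kinetic + (V + nonlin) * psi_a

    dpsi1, dpsi2 = h_psi(psi1, psi2), h_psi(psi2, psi1)
    psi1, psi2 = psi1 - dtau * dpsi1, psi2 - dtau * dpsi2

    # Renormalise each component to its (dimensionless) particle number.
    for psi, N in ((psi1, N1), (psi2, N2)):
        norm = 4.0 * np.pi * np.trapz(r**2 * np.abs(psi)**2, r)
        psi *= np.sqrt(N / norm)
    return psi1, psi2
```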
Figure 1(a) shows balanced (purple) and imbalanced (orange) droplet density profiles in a trap with frequency \(\omega_{r}\approx 0.0442\). Figure 1(b) presents an example of the imbalanced atoms forming a significant gas density around the surface of the droplet in a trap with higher frequency \(\omega_{r}\approx 0.353\). Figure 1(a) shows that the imbalanced and balanced droplets have comparable density profiles, whereas Figure 1(b) shows a more considerable deviation between the balanced and imbalanced droplets in a higher-frequency trap. The central density splitting becomes more pronounced in the higher-frequency trap, showing a more suppressed minority-component central density. The divergence of the two chemical potentials with increasing imbalance is a key observation of Ref. [29]. For increasing trap frequency the chemical potential of a balanced droplet increases, eventually becoming positive as the density of the mixture significantly deviates from the free-space droplet density. Figure 1(c) presents chemical potential data over a 2D parameter space of imbalance and trap frequency \((\delta N_{1},\omega_{r})\), where \(\omega_{r}\approx\{0.00441,0.0662,0.128,0.190\}\), i.e., showing trap frequencies up to approximately \(\omega_{r}^{(c)}\). Figure 1(c) shows that the two-component chemical potentials of balanced droplets are equal and increase with trap frequency, in agreement with Ref. [37]. Beyond the balanced case, the lowest-frequency trap (\(\omega_{r}\approx 0.00441\)) shows behaviour similar to the free-space chemical potentials presented in Ref. [29], in that the chemical potentials appear to reach a saturation limit, though there will be effects from the unbound gas such that these curves are only approximate to the saturated limit. For the higher trap frequencies of \(\omega_{r}\approx\{0.0662,0.128,0.190\}\), the two chemical potentials diverge, with the majority-component chemical potential becoming large and positive, whilst the minority-component chemical potential becomes large and negative.

The diverging chemical potentials represent a clear distinction between balanced and imbalanced trapped droplets, and an excerpt of the 2D parameter space is included in the inset of Figure 1(c), with the majority and minority chemical potentials plotted as orange and purple surfaces, respectively, demonstrating the chemical potentials diverging for increased trap strength and imbalance. Adding harmonic traps to both balanced and imbalanced droplets causes the flat-topped density profile to eventually be lost with increasing trap frequency. One key difference between balanced and imbalanced droplets is that the trap causes any unbound atoms to form a trapped gas at the droplet surface. For balanced droplets the chemical potential increases with trap frequency until it eventually becomes positive, whereas for imbalanced droplets it is always possible for the minority-component chemical potential to be made negative in isotropic harmonic traps by tuning the imbalance. One way to understand the effect that this squeezed external gas cloud has on the droplet is to analyse the breathing modes of the droplets.

## IV Breathing modes

To initiate the breathing mode dynamics of trapped droplets, a perturbation is made by imprinting a harmonic phase of the form \(e^{i\epsilon r^{2}}\), where \(\epsilon\) is small (here \(\epsilon=10^{-5}\)), onto the minority-component ground state wavefunction [39, 40]. This perturbed ground state is then propagated in real time. The breathing modes of balanced droplets in free space have two regimes, self-evaporative and non-self-evaporative [7, 41]. In the self-evaporative regime the breathing mode is unstable, because the mode frequency exceeds the particle emission threshold, \(-\mu\). Hence, the droplet will emit atoms to lower its energy, corresponding to a decaying sinusoidal oscillation with a frequency that asymptotes to the particle emission threshold, \(-\mu\). In the non-self-evaporative regime, the breathing mode frequency does not exceed the particle emission threshold, and therefore the mode is stable and non-decaying. Additionally, the frequency of the balanced droplet breathing mode varies with droplet size only [7]. Breathing modes in imbalanced droplets are instead dominated by unstable regions. For both self-evaporative and non-self-evaporative droplets, an imbalance implies an unstable, decaying breathing mode, except for small imbalances in the non-self-evaporative regime [29]. In this analysis, the focus will be on decaying breathing modes (i.e., excluding non-self-evaporative droplets that are either balanced or have sufficiently small imbalances). Trapped balanced droplets exhibiting non-decaying breathing modes have already been studied in Ref. [37]. To examine the breathing modes of trapped, imbalanced droplets, this section will use one example droplet size, \(\tilde{N}\approx 649\) (as in Figure 1).
Figure 1: Balanced and imbalanced quantum droplets in isotropic harmonic traps (with \(N_{2}\approx 17027\), \(\alpha\approx 0.00657\), and \(\eta\approx-1.11\)). (a) Droplet ground state density profiles of size \(\tilde{N}\approx 649\), with balanced components, \(\delta N_{1}=0\) (the light and dark purple dashed lines), and with an imbalance of \(\delta N_{1}\approx 8513\) (majority component: dark orange; minority component: light orange), in a trap of frequency \(\omega_{r}\approx 0.0442\). (b) The equivalent balanced and imbalanced droplets to (a), but with a trapping frequency of \(\omega_{r}\approx 0.353\). (c) Majority (orange) and minority (purple) chemical potentials across the 2D parameter space of imbalance and trap frequency, \((\delta N_{1},\omega_{r})\), for the fixed droplet size considered in (a) and (b), at trap frequencies of \(\omega_{r}\approx\{0.00441,0.0662,0.128,0.190\}\), with \(0\leq\delta N_{1}\lesssim 4257\). The inset shows the surfaces of the majority (orange) and minority (purple) component chemical potentials across the 2D parameter space, of which the curves in (c) are 1D slices at set \(\omega_{r}\) values. Note that the two surfaces are equal at \(\delta N_{1}=0\) imbalance, but diverge for increasing imbalance.

Different droplet sizes yield qualitatively the same behaviour, with only a difference in mode frequency. By fixing the droplet size, the system is again reduced to a 2D parameter space in imbalance and trap frequency, \((\delta N_{1},\omega_{r})\). The remainder of this section is split into two subsections: firstly, breathing modes are observed for varying trap strengths with small imbalances, of a similar magnitude to those in Figure 1(c); secondly, the trap frequency is fixed, allowing for breathing modes to be observed with imbalances much larger than those in Figure 1(c).

### Varying trap strength

The upper panel of Figure 2(a) shows an example of a self-evaporative, balanced droplet in a trap of frequency \(\omega_{r}\approx 0.00883\). The data shown is a measure of the droplet central density, \(\bar{n}_{i}(t)=n_{i}(r=0,t)-\langle n_{i}(r=0)\rangle_{t}\), where \(\langle\cdots\rangle_{t}\) represents time averaging. The droplet exhibits a decaying oscillation due to the emission of particles, causing the droplet to asymptotically relax to a lower-energy state. However, the emitted particles are refocused by the trap back toward the droplet, resulting in the short-lived, high-amplitude oscillations, which are the result of a recombination event between the droplet and the reabsorbed wavepacket. This then leads to the self-evaporation reoccurring at set intervals of approximately half the associated trap period, \(T=2\pi/\omega_{r}\approx 712\); here \(t\approx 356\) is the approximate time of the reinitialised decay in Figure 2(a). The balanced droplet recombination can be thought of as 'clean', as there is little noise produced and the reinitialised oscillation is approximately equivalent to the initial oscillations. In the presence of an imbalance (\(\delta N_{1}\approx 4257\)), shown in the lower panel of Figure 2(a), the recombination events are not 'clean', as each separate repetition of the decaying oscillation is not equivalent to the previous one. The trapped, unbound atoms alter the recombination of the emitted particles. Eventually these recombination events lead to significant noise, and thus the remainder of the analysis presented here focuses on the dynamics prior to the first recombination event. Figures 2(b), (c) and (d) show the same self-evaporative droplet given in Figure 2(a) with an imbalance of \(\delta N_{1}\approx 4257\), for three trap frequencies \(\omega_{r}\approx\{0.00662,0.0106,0.0309\}\), with each figure showing times before the first recombination event.
The data presented is the same measure of central density as in Figure 2(a), with insets of the associated power spectra \(|\mathcal{F}^{\prime}[\bar{n}_{i}]|^{2}\), in which \(\mathcal{F}^{\prime}[\cdot]\) denotes the power spectrum rescaled by the mean, with all negative frequencies set to zero purely for better data visualisation. The time periods are chosen to highlight the behaviour of the modes, largely after the decay of the initial mode, except in Figure 2(d), discussed further below.

Figure 2: Breathing modes of trapped imbalanced droplets. (a) Droplet central densities in time for both a self-evaporative, balanced (upper) and imbalanced (lower) droplet (with imbalance \(\delta N_{1}\approx 4257\)) in a trap with frequency \(\omega_{r}\approx 0.00883\). The droplet is defined with the same parameters as in Figure 1. Note the recombination events that cause the self-evaporative dynamics to be reinitiated. (b) Droplet central density for the \(\delta N_{1}\approx 4257\) imbalanced droplet in the lower panel of (a), in a trap with \(\omega_{r}\approx 0.00662\). The trap frequency is sufficiently low such that all three modes (i.e., the intrinsic mode and the two modes corresponding to the two chemical potentials) can be observed before the first recombination event. (c) A higher trap frequency of \(\omega_{r}\approx 0.0106\), showing that the shorter period between recombination events no longer allows for the long-wavelength oscillation of the majority component. (d) An increased trap frequency of \(\omega_{r}\approx 0.0309\). The recombination events occur within such short intervals that the dynamics are dominated by the intrinsic droplet breathing mode, as there is not sufficient time for this mode to decay.

The breathing mode dynamics of an imbalanced droplet in a trap of frequency \(\omega_{r}\approx 0.00662\) are given in Figure 2(b). These dynamics exhibit the three distinct modes of the equivalent free-space droplet (with the associated three peaks given in the inset power spectrum) [29]. The near-zero-frequency peak in the power spectrum of the majority component corresponds to the free-space majority-component chemical potential and is the highest-amplitude mode. The effect of this mode can be seen in the relative difference in oscillation between the two components in Figure 2(b). There is also the superposition of two other modes, which are of comparable amplitude in both the majority and minority components, corresponding to the intrinsic droplet breathing mode (the central peak of the inset) -- i.e., the initial, high-amplitude mode -- and the minority-component chemical potential (the highest-frequency mode in the inset). For a balanced droplet, increasing the trap frequency corresponds to a relatively small increase in breathing mode frequency [37], and this effect appears to carry over to the trapped, imbalanced droplet. The inset power spectrum of Figure 2(b) includes the three free-space breathing modes, given by the vertical dashed lines. All three of these modes have an upshifted frequency due to the trap, though this increase is small due to the relatively low trap frequency. Figure 2(c) shows that if the trap frequency is increased, eventually the highest-amplitude mode is lost. By increasing the trap frequency to \(\omega_{r}\approx 0.0106\), the two component central densities oscillate in phase with one another, i.e., there is no long-wavelength oscillation between the two components as given in (b).
The low-frequency mode cannot oscillate within the reduced period between recombination events caused by the increased trap frequency. Figure 2(d) shows the highest trap frequency, \(\omega_{r}\approx 0.0309\), considered in this section. At this trap frequency, the period between recombination events is considerably shortened. Hence the time window in focus is dominated by the initial, high-amplitude mode of the droplet, shown by the single peak in the inset power spectrum. There are some interactions with other modes at the later times shown in Figure 2(d), but there is not sufficient time for the initial mode to decay. No higher trap frequencies are studied here because the high recombination rate implies that only the intrinsic mode is observable. Note too that this highest trapping frequency is still an order of magnitude smaller than \(\omega_{r}^{(c)}\approx 0.188\). In summary, there is a close relationship between the dynamics of imbalanced droplets in free space and in harmonic traps. For low trap frequencies the three breathing modes of the free-space, imbalanced droplet are visible. However, increasing the trap frequency leads to the loss of the majority-component mode. Eventually, for higher trap frequencies, the oscillations are dominated by the intrinsic droplet mode, as there is not sufficient time between recombination events for the initial mode to decay, resulting in the loss of the minority-component mode. The recombination events in higher-frequency traps lead to dynamics rapidly dominated by excitations. Therefore, if the multiple breathing modes of trapped imbalanced droplets were to be experimentally observed, it is advisable to use low trap frequencies, such that the initial intrinsic mode can sufficiently decay.

### Varying Imbalance

Having established how the breathing modes of imbalanced droplets vary with trap frequency, this section focuses on how these modes vary with increasing imbalance. To analyse the breathing modes as a function of imbalance, the weakest trap strength studied in Figure 2 is used, because all three imbalanced droplet breathing modes are observable. Figure 3(a) shows an example ground state density profile of a weakly-trapped, highly imbalanced droplet, with the majority and minority components shown in dark and light orange, respectively. The inset shows the density difference, \(\delta n(r)=n_{1}(r)-n_{2}(r)\), between the majority and minority components. The density structure within the droplet core is comparable to that for the small imbalances shown in Figure 1(a). However, the key difference in the highly imbalanced mixture is the large-radius gas surrounding the droplet. Figure 3(b) highlights two examples of breathing mode oscillations for droplets with high population imbalances. The upper panel shows a mixture with \(\delta N_{1}\approx 170267\), whilst the lower panel shows a mixture with \(\delta N_{1}\approx 16856418\). These population imbalances are so large due to the weak trap geometry used, i.e., to achieve significant gas densities in these low-frequency traps, substantial imbalances are needed. The same measure of central density as used in Figure 2 is shown, prior to the first recombination event. In both panels of Figure 3(b) there do not appear to be any out-of-phase oscillations between the two components from the majority-component chemical potential mode, as seen in Figure 2(b). The surrounding gas therefore seems to have frozen out this long-wavelength mode, similar to the tighter trap in Figure 2(c). The main difference between the upper and lower panels of Figure 3(b) is the lower decay rate of the initial, high-amplitude mode in the more imbalanced mixture. As shown in Figure 2(b), at later times the initial mode in the upper panel has decayed sufficiently such that the minority-component chemical potential mode is visible. By driving the imbalance even higher, the decay rate of the initial mode is greatly reduced, such that the minority-component chemical potential mode cannot be observed. In Figure 3(c), the droplet central density is fitted to a decaying sinusoidal curve of the form \(\bar{n}_{i}=Ae^{-\gamma t}\sin(\omega t+\phi)+c\), using the optimize.curve_fit function from the SciPy library for Python [42] (a minimal sketch of this fit is given below). With increasing imbalance there is a corresponding decrease in the fitted decay rate, \(\gamma\), implying that a higher surrounding gas density resists the particle emission from the droplet. This could be of potential benefit to experiments, as it implies that larger imbalances would give more time to observe the decaying breathing mode oscillations.
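A minimal sketch of this fitting step is:

```python
# Fit the central-density signal to A * exp(-gamma t) * sin(omega t + phi) + c
# and extract the decay rate gamma; the initial guess p0 is illustrative.
import numpy as np
from scipy.optimize import curve_fit


def damped_sine(t, A, gamma, omega, phi, c):
    return A * np.exp(-gamma * t) * np.sin(omega * t + phi) + c


def fit_decay_rate(t, n_central, p0=(1.0, 0.01, 0.1, 0.0, 0.0)):
    popt, _ = curve_fit(damped_sine, t, n_central, p0=p0, maxfev=10000)
    A, gamma, omega, phi, c = popt
    return gamma, omega
```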
The main difference between the upper and lower panels of Figure 3(b) is the lower decay rate of the initial, high-amplitude mode in the more imbalanced mixture. As in Figure 2(b), at later times the initial mode in the upper panel has decayed sufficiently that the minority-component chemical potential mode is visible. By driving the imbalance even higher, the decay rate of the initial mode is greatly reduced, such that the minority-component chemical potential mode cannot be observed. In Figure 3(c), the droplet central density is fitted to a decaying sinusoidal curve, of the form \(\bar{n}_{i}=Ae^{-\gamma t}\sin(\omega t+\phi)+c\), using the optimize.curve_fit function from the SciPy library for Python [42]. With increasing imbalance there is a corresponding decrease in the fitted decay rate, \(\gamma\). This implies that a higher surrounding gas density resists the particle emission from the droplet. This could be of potential benefit to experiments, as it implies that larger imbalances would give more time to observe the decaying breathing mode oscillations.
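For reference, the fit just described can be reproduced with scipy.optimize.curve_fit. The placeholder trace and initial guesses below are illustrative assumptions, not values from the simulations; in practice the fit window should exclude the initial transient and stop before the first recombination event.

```python
import numpy as np
from scipy import optimize

def decaying_sine(t, A, gamma, omega, phi, c):
    # Model used in the text: A * exp(-gamma * t) * sin(omega * t + phi) + c
    return A * np.exp(-gamma * t) * np.sin(omega * t + phi) + c

# Placeholder trace standing in for a measured central density n_bar(t).
t = np.linspace(0.0, 4000.0, 4000)
n_bar = decaying_sine(t, 0.05, 1.5e-3, 0.02, 0.3, 1.0)

p0 = [np.ptp(n_bar) / 2, 1e-3, 0.02, 0.0, np.mean(n_bar)]  # rough initial guess
popt, pcov = optimize.curve_fit(decaying_sine, t, n_bar, p0=p0)
A, gamma, omega, phi, c = popt  # gamma is the decay rate reported in Figure 3(c)
```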
Beyond the observation of trapped imbalanced droplets, there are also questions to be asked of the experimental realisability of a free-space imbalanced droplet. For example, how stable is the imbalance under the transition from the trap (used to prepare the mixture) to the free-space droplet?

## V Release into free space

This section studies how imbalanced droplets behave when the trap is turned off, releasing the droplet into free space. This is similar to a Time-Of-Flight (TOF) expansion, a method used in experiments in which trap potentials are switched off and the atomic cloud expands, often used for imaging. Imbalanced droplets are less stably bound than balanced droplets [29], and thus the motivating question is whether it is possible to preserve the imbalance when released from a trap. This is a crucial question for the feasibility of experimentally creating a free-space, imbalanced droplet. TOF expansion is a widely used technique in quantum gas experiments [43], dating from the very first experimental observation of Bose-Einstein condensation [44, 45]. Typically, removing the trap causes an expansion that increases the scale of defects, such as vortices, compensating for the low resolution of imaging apparatus [46, 47, 48]. Measurements of condensate density from TOF images can also be used to compute approximate temperatures and population numbers of the cloud [49, 50]. Whilst most quantum gas experiments are inherently in the gas phase, droplets are by definition self-bound, liquid states [7], and hence must retain an approximately fixed size when released into free space. This property has proved popular for experiments as evidence for the production of quantum droplets [22, 23, 24, 25, 26, 51], though relatively high-resolution imaging is necessary.

The two observables used here to measure the dynamics resulting from the release into free space are the population numbers contained within the droplet and the central droplet density difference, \(\delta n(r=0)=n_{1}(r=0)-n_{2}(r=0)\). The population numbers are used to measure the particle loss from each component, while the central density difference is used as a measure of how the droplet core evolves after being released from the trap. The population numbers of the droplet are computed by

\[N_{i}^{\mathrm{drop}}(t)=4\pi\int_{0}^{R^{\mathrm{drop}}(t)}r^{2}|\Psi_{i}(r,t)|^{2}\,\mathrm{d}r\]

in which \(R^{\mathrm{drop}}(t)\) is defined as the radius at which the component density equals \(0.1\%\) of the maximum component density, i.e., giving an approximate droplet radius. The population numbers are extracted in time, and \(R^{\mathrm{drop}}(t)\) is allowed to vary dynamically.
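As a sketch of how this observable can be evaluated numerically, assuming the wavefunction is stored on a radial grid; the edge-finding convention (first radius where the density crosses the threshold) is our interpretation of the 0.1% criterion:

```python
import numpy as np

def droplet_population(r, psi, threshold=1e-3):
    """N_i^drop(t) = 4*pi * integral_0^{R_drop} r^2 |psi_i|^2 dr on a radial grid.

    R_drop is taken as the first radius where the component density drops
    below `threshold` times its maximum value.
    """
    density = np.abs(psi) ** 2
    below = np.where(density < threshold * density.max())[0]
    edge = below[0] if below.size else len(r)  # index of the droplet edge
    return 4.0 * np.pi * np.trapz(r[:edge] ** 2 * density[:edge], r[:edge])
```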
To simulate the release into free space, ground states are computed as in Section III; the traps are then instantaneously turned off, and the mixture is evolved in real time. The instantaneous trap turn-off can be quite a violent excitation of the droplet, particularly for higher trap frequencies. Figure 4(a) shows the two-component population numbers for two different initial trap frequencies: a lower frequency of \(\omega_{r}\approx 0.00442\), given by the orange curves, and a higher frequency of \(\omega_{r}\approx 0.269\), given by the purple curves. The droplet size is the same as in Sections III and IV, and is within the bound, imbalanced regime (\(\delta N_{1}\approx 596\)), i.e., there is no surrounding gas. The population numbers of both the minority (light orange) and majority (dark orange) components in the lower frequency trap remain relatively constant, as does the central density difference given in the left inset. There are some small oscillations present in the central density difference, which appear to be excitations from the instantaneous release into free space. The droplet in the higher frequency trap has a relatively constant minority-component (dark purple) population number, whereas the majority component (light purple) undergoes significant losses. The losses are so high that the imbalance of the droplet is reversed, as can be seen by the sign reversal of \(\delta n(r=0)\). Following the initial transient stage of heavy majority component losses, the droplet equilibrates with some long-lived small oscillations. Therefore, Figure 4(a) demonstrates that the equilibrated imbalance following the release into free space depends on the initial trap frequency. Figure 4(b) shows the trap frequency dependence of the late-time equilibrated population numbers, \(N_{i,f}\). Presented are the majority (dark orange) and minority (light orange) components of a bound, imbalanced droplet (upper panel) and a saturated, imbalanced droplet surrounded by an unbound gas (lower panel), with the late-time central density difference, \(\delta n(r=0)_{f}\), inset. The upper panel of Figure 4(b) demonstrates that the imbalance can be approximately conserved following the release from an initial low frequency trap. However, this imbalance is lost or even partially reversed with higher trap frequencies. A larger imbalance of \(\delta N_{1}\approx 4257\) is shown in the lower panel. The main difference between these two panels is that higher initial imbalances push the imbalance reversal to higher initial trap frequencies.

Following the release into free space, the density profile of an equilibrated, imbalanced droplet can differ starkly from an identically imbalanced ground state droplet. The reversal of the central density difference occurs at smaller values of \(\omega_{r}\) than the reversal of the population numbers. This suggests that these equilibrated droplets are stable excited states that exhibit a higher minority component density in the droplet core. An example of this is shown in the real-time density profiles of an initially saturated droplet, with \(\delta N_{1}\approx 4257\), shown in Figure 4(c) [corresponding to the grey, vertical line in the lower panel of (b)]. This real-time snapshot shows that the droplet can stabilise to \(\delta n(r)<0\) in the droplet core, with a small region of \(\delta n(r)>0\) at the droplet surface, i.e., some majority component density can stably reside on the droplet surface.

To summarise, an imbalanced droplet prepared in a relatively low frequency trap can retain the majority of the initial imbalance after the release into free space. However, for increasing trap frequency, the initial majority component begins to lose atoms until the imbalance is either negligible or reversed. The core density reversal can occur even if the original majority component still contains more atoms. This results in a stable, excited state in which some of the majority component atoms sit at the droplet surface. The imbalance reversal can be suppressed by preparing the droplet with a higher imbalance, i.e., a higher density surrounding gas. These results show that imbalanced droplets can be robust to a release into free space, suggesting that free-space, imbalanced droplets are feasible using modern experimental techniques.

## VI Discussion

This work has investigated ground states, breathing modes, and the release into free space of imbalanced droplets confined in isotropic harmonic traps. First, Section III demonstrates that the trapping potential squeezes any unbound gas up to the droplet, forming a significant gas density at the droplet surface. The imbalance-dependent divergence in the majority and minority component chemical potentials increases further with higher trap frequencies. Section IV focused on the breathing modes of imbalanced droplets, and contrasted the trapped geometry with the free-space results of Ref. [29]. This section highlighted that the presence of a trap causes recombination events from reflected particles. For a free-space imbalanced droplet there are three breathing mode frequencies [29]. These three modes can be observed in the trapped, imbalanced droplet, though Section IV shows that with increasing trap frequency these modes are lost. Similarly, the decay rate of the imbalanced droplet breathing mode can be reduced by the presence of a significant majority-component gas. The final results presented are the dynamics from releasing the imbalanced droplets into free space, given in Section V. The results show primarily that with a low frequency initial trap, the droplet imbalance can be preserved under trap release; however, higher trap frequencies lead to a loss or inversion of the imbalance. This gives promise for the experimental realisation of free-space, imbalanced quantum droplets. The stability of the imbalance under release from a trap may be significant in the experimental results of Refs.
[23; 21], in which the mixture is prepared with \(N_{2}/N_{1}=1\neq\sqrt{a_{11}/a_{22}}\). These works assume that the droplet will dynamically balance, after the release into free space, to \(N_{2}/N_{1}=\sqrt{a_{11}/a_{22}}\). This could explain why the data points of \(N_{1}/N_{2}\) in Fig. 4(c) of Ref. [23] are upshifted from the balanced line of \(N_{2}/N_{1}=\sqrt{a_{11}/a_{22}}\), as component-1 is set up to be the majority component. Likewise, some of the results in Ref. [24] are speculated to be sensitive to imbalance. This work suggests that balanced droplets could be a special case, and that imbalanced droplets are more common.

The analysis of spherically symmetric ground states and breathing mode dynamics presented could be extended to explore heteronuclear mixtures. The different kinetic energy contributions of the two components may lead to novel physics, as adding an imbalance to either component is no longer symmetric. This is, however, a non-trivial extension due to the form of the two-component LHY correction of a heteronuclear mixture [7; 18]. The recombination events from the trap limit the time for observing collective modes. This restriction implies that smaller computational boxes could be used to probe collective modes in more general 3D simulations, allowing for observation of non-zero angular momentum modes such as dipole [36; 51] and quadrupole modes in both balanced and imbalanced droplets. The potential of new mixtures for probing droplet physics is exciting, but some experiments use highly anisotropic trap potentials with significant population number imbalances [52]. Therefore, it is vital to understand how droplets are modified from the prototypical balanced, free-space profile by population imbalances and trap potentials. The data presented in this paper are available [53].

###### Acknowledgements.

The authors acknowledge support from the UK Engineering and Physical Sciences Research Council (Grants No. EP/T015241/1 and No. EP/T01573X/1). T. A. F. also acknowledges support from the UK Engineering and Physical Sciences Research Council (Grant No. EP/T517914/1). This research made use of the Rocket High Performance Computing service at Newcastle University.

Figure 4: Dynamics of imbalanced droplets, after release into free space, with the same parameters as in Figures 1, 2 and 3. (a) Population numbers, with an imbalance of \(\delta N_{1}\approx 596\), from dynamics resulting from a release into free space, with orange colours corresponding to an initial low frequency trap, with \(\omega_{r}\approx 0.00442\) (dark and light orange corresponding to majority and minority components, respectively), and purple colours corresponding to an initial tight trap, with \(\omega_{r}\approx 0.269\) (light and dark purple corresponding to majority and minority components, respectively). The insets show the differences in central densities through time, with the left and right panels corresponding to the low and high frequency trap cases, respectively. These two example simulations are highlighted in the upper panel of (b) by the vertical, grey, dashed lines. (b) The late-time population numbers and central density difference (inset) for varying initial trap frequencies, in the range \(0.00442\lesssim\omega_{r}\lesssim 0.269\). The upper panel corresponds to a bound, imbalanced droplet (\(\delta N_{1}\approx 596\)) whilst the lower panel corresponds to a saturated, imbalanced droplet with an external unbound gas (\(\delta N_{1}\approx 4257\)).
(c) A real-time density profile example (with an inset of the density difference) from the late-time free-space release dynamics (with initial imbalance \(\delta N_{1}\approx 4257\) and high trap frequency \(\omega_{r}\approx 0.269\)) [corresponding to the right-hand, vertical line in the lower panel of (b)]. The light and dark orange curves correspond to the minority and majority components, respectively. Note the negative central density difference showing the reversal of the imbalance within the droplet core.
2309.12562
Cognitive Approach to Hierarchical Task Selection for Human-Robot Interaction in Dynamic Environments
In an efficient and flexible human-robot collaborative work environment, a robot team member must be able to recognize both explicit requests and implied actions from human users. Identifying "what to do" in such cases requires an agent to have the ability to construct associations between objects, their actions, and the effect of actions on the environment. In this regard, semantic memory is being introduced to understand the explicit cues and their relationships with available objects and required skills to make "tea" and "sandwich". We have extended our previous hierarchical robot control architecture to add the capability to execute the most appropriate task based on both feedback from the user and the environmental context. To validate this system, two types of skills were implemented in the hierarchical task tree: 1) Tea making skills and 2) Sandwich making skills. During the conversation between the robot and the human, the robot was able to determine the hidden context using ontology and began to act accordingly. For instance, if the person says "I am thirsty" or "It is cold outside" the robot will start to perform the tea-making skill. In contrast, if the person says, "I am hungry" or "I need something to eat", the robot will make the sandwich. A humanoid robot Baxter was used for this experiment. We tested three scenarios with objects at different positions on the table for each skill. We observed that in all cases, the robot used only objects that were relevant to the skill.
Syed T. Bukhari, Bashira Akter Anima, David Feil-Seifer, Wajahat M. Qazi
2023-09-22T01:22:51Z
http://arxiv.org/abs/2309.12562v1
Cognitive Approach to Hierarchical Task Selection for Human-Robot Interaction in Dynamic Environments ###### Abstract In an efficient and flexible human-robot collaborative work environment, a robot team member must be able to recognize both explicit requests and implied actions from human users. Identifying "what to do" in such cases requires an agent to have the ability to construct associations between objects, their actions, and the effect of actions on the environment. In this regard, semantic memory is being introduced to understand the explicit cues and their relationships with available objects and required skills to make "tea" and "sandwich". We have extended our previous hierarchical robot control architecture to add the capability to execute the most appropriate task based on both feedback from the user and the environmental context. To validate this system, two types of skills were implemented in the hierarchical task tree: 1) Tea making skills and 2) Sandwich making skills. During the conversation between the robot and the human, the robot was able to determine the hidden context using ontology and began to act accordingly. For instance, if the person says "I am thirsty" or "It is cold outside" the robot will start to perform the tea-making skill. In contrast, if the person says, "I am hungry" or "I need something to eat", the robot will make the sandwich. A humanoid robot Baxter was used for this experiment. We tested three scenarios with objects at different positions on the table for each skill. We observed that in all cases, the robot used only objects that were relevant to the skill. ## I Introduction Recent developments in the creation of intelligent robots have created possibilities for collaborative work between people and robots in dynamic settings [1, 2]. As a result, it becomes more important for robots and other agents to comprehend teammates' implicit and explicit cues and translate those cues into suitable actions [1, 3]. If we can tell our teammate (a human) that "It is getting cold outside" or "I am feeling thirsty" rather than "I want to drink cold tea using a yellow cup," and in other situations, "I am hungry" or "I need something to eat" rather than "I want to eat burger placed at the right side," we can demonstrate the importance of understanding the environment. The teammate will infer the connection between "cold weather", "thirst" and "drink, and "hunger", "food" and "eat". The relationship between the two is that "cold weather" induces "thirst" and a desire to "drink," whereas "hunger" elicits a desire to "eat" or "consume" something. A colleague will therefore offer something to "drink" and another person will offer something to "eat" as a result. A robot would be expected to behave similarly to a human teammate when collaborating in a team with a person [1][4, 5][6]. Although there have been several contributions in this area, this kind of cooperation is still difficult in human-robot interaction [1, 7, 8, 9]. Semantic association between words, items, and abilities can be a useful method to understand partial or incomplete information. The human-robot interaction (HRI) experience can be enhanced by the robot's capability to determine what the user wishes it to do next based on a hazy or imprecise command given a knowledge model of the activities and objects in the environment [1]. 
To meet these requirements, we have created a technique based on our current hierarchical control [3] and cognitive [1] designs that enables people and machines to collaborate on activities like making tea, sandwiches, burgers, and coffee together. In this respect, we have taken into account cognitive modalities such as actuators (Robot: Baxter, verbal response), working memory (semantic analysis, _Moveit_ module, and _Rasa_ chatbot), semantic memory, perception (lingual and verbal), and sensory memory. A robot is needed to track activities, understand the commands and cues of teammates, and execute the required task(s) [10][3]. In the past, researchers have looked at task coordination to motivate users to complete various subtasks carried out by a robot [3], communicate about task failures [7], and create new tasks from vocal instructions [10]. If people and machines can communicate vocally to discuss how to carry out challenging jobs, it will resemble a human-human interaction approach. However, such interaction has the added difficulty of teammates communicating with incomplete information or requests that leverage knowledge of the task and the environment. In this study, we present a system where the robot can complete the intended job by selecting hierarchical sub-tasks stored in procedural memory and can grasp the context of the environment in working memory using semantic similarity and a RASA-based natural language understanding (NLU) engine. For task execution in a dynamic environment based on perceptual and semantic connections, we used a cognitive architecture (see Fig 1). We use three scenarios to test our research, positioning the task items for various skills at various positions in front of the robot. By speaking with the human in each situation, the robot may use ontology to comprehend the context of the environment. The robot selects the skill it needs to perform based on the semantic similarity score, performs the skill following the hierarchical task design, and uses the objects that fall within the performed skill. The knowledge representation (KR) based on the existing study [1] is further tailored to accommodate the development of new skills in robot training/teaching mode [1].

## II Related Work

In human-robot interaction (HRI), robots with similar task representations can show effective results in collaborating with human teammates [5]. In a heterogeneous environment, communication is likely necessary for successful cooperation between robot and human teammates to complete tasks [8]. The clear sharing of information might serve as the foundation for communication. For instance, if the robot is instructed to "choose me a red bottle," it will be able to examine its surroundings, look for the object, and do the necessary actions to resolve the issue [1]. Tasks like route planning [11], human navigation guidance [12], learning [13], and task execution [4] may all be taught or created using explicit signals. In a related contribution, a vocal command-based interactive method was used to let people teach tasks to a mobile service robot [13]. Nicolescu explored how robots may learn tasks from language-based commands and advanced a creative strategy [10]. For socially conscious navigation in public settings, context identification, object detection, and scene data were used to generate context-appropriate rules [14].
It is necessary to create linkages between items, their effects, and the actions performed by robots to comprehend their environment and verbal signals from a human teammate [15]. Ontologies have been employed in addition to verbal cues to establish a connection between objects and their attributes [1, 16, 17]. Although this slightly enhanced the HRI experience, only a few relationship types (namely, isA, hasA, prop, usedFor, on, linked-to, and homonym) were available to extract information from implicit signals [16]. Ontology in the form of semantic memory was also described [17, 18], but it was unable to analyze scenarios like "I'm feeling hungry," in which the robot should understand the necessity of making a sandwich. For interpreting explicit cues, we have developed semantic memory from WordNet and ConceptNet. This memory is further utilized to compute a similarity score between verbal cues, readily available objects (teapot, lettuce, meat, bread, etc.) on the table, and skills learned by the robot (i.e., tea and sandwich making). As a baseline control structure, we adopted a modified version of the Nature-inspired Humanoid Cognitive Architecture for Self-awareness and Consciousness (NiHA) [1] (see Fig 1) and a hierarchical control architecture [7, 3] as part of procedural memory. Our previous hierarchical architecture [7, 3] involved humans and robots executing the entire tree to accomplish a specific task. We have revised the hierarchical architecture [7, 3] to accommodate the learning of new skills using the knowledge representation module. Upon receiving the highest similarity score among the available task objects, the architecture performs the skill associated with that object.

## III Methodology

### _Sensory Memory_

Sensory memory is part of short-term memory, which is further classified into iconic and echoic memory. Iconic memory involves the processing of brief images from a video stream, whereas echoic memory processes the auditory stream.

### _Perception Layer_

#### III-B1 _Lingual Perception_

Lingual perception has two parts. The first is based on the Natural Language Processing (NLP) layer, which is composed of a Part-Of-Speech (POS) tagger and a Tokenization module; the Tokenization module tokenizes the spoken commands into words such as nouns, verbs, and adjectives. The second part is based on a knowledge representation module specially tailored from existing work [1] to generalize the procedural memory to accommodate new skills in the form of recipes. It contains SkillNode, ObjectNode, ActionEdge(PicknPlace), SequentialNode(THEN), NonOrderingNode(AND), and AlternativePathNode(OR). Further details related to POS tags represented in words (nouns, verbs, and adjectives) can be accessed at [19].

#### III-B2 Visual Perception

The visual perception module can be developed with various deep learning modules. To simplify the process and to test various robot skills and cognitive capabilities, we have opted for the ROS (Robot Operating System) defined Augmented Reality (AR) tags [20] to detect the objects on the table. AR tags help to identify and track the pose of an object to determine where the object is.

Fig 1: NiHA's Modified Cognitive Architecture with Upgraded Perception Layer, Working Memory, and Procedural Memory [1]

### _Working Memory_

Working Memory (WM) functions as an executive control that is aware of the current situation and can recall earlier events. The basic goal of WM is semantic processing, object grounding, motion planning, and motor command manipulation.
#### III-C1 Semantic Analysis

The algorithm assesses the semantic similarity between spoken words and the item categories present in the table-top scenario at the time. Let \(Word_{\mathrm{auditory}}=(word_{1},word_{2},word_{3},\ldots,word_{m})\) denote the tagged words decoded from speech, and let the semantic function be \(\mathfrak{S}:Word_{\mathrm{auditory}}\rightarrow Item\). The Similarity Index is evaluated as

\[\mathfrak{S}(Word_{\mathrm{auditory}},Item)=\frac{\left|Word_{\mathrm{auditory}}\cap Item\right|}{\left|Word_{\mathrm{auditory}}\cup Item\right|} \tag{1}\]
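A direct implementation of this Similarity Index (the Jaccard index of Eq. (1)) is straightforward; representing each item as a set of related terms drawn from semantic memory is our assumption about the data structures involved.

```python
def similarity_index(auditory_words, item_terms):
    """Jaccard index of Eq. (1): |A intersect B| / |A union B| over word sets."""
    a, b = set(auditory_words), set(item_terms)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def most_related_item(auditory_words, item_vocab):
    """Return the item whose term set best matches the spoken words.
    item_vocab: dict mapping item name -> terms from semantic memory."""
    return max(item_vocab, key=lambda item: similarity_index(auditory_words, item_vocab[item]))
```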
#### III-C2 Moveit

To execute our experiment, Moveit [21] was used to plan and manipulate the robot's hand movement to pick and place objects from the surrounding environment.

#### III-C3 Rasa Chatbot

The RASA module has three components: natural language understanding (NLU), natural language generation (NLG), and the RASA core. The RASA core acts as executive control of the RASA environment. The NLU unit handles intent management, whereas NLG is responsible for generating sentences based on predefined templates. We have used RASA as an intermediary between robot and human teammates.

### _Semantic Memory_

Semantic memory is developed from WordNet and ConceptNet, having 117,659 Synsets (WordNet nodes), 157,300 Lemma nodes, and 1,653,804 Concept (ConceptNet) nodes. There are 54 categories with 3,730,567 relationships [22]. Lemma nodes are the "root words" retrieved from the Concept nodes that can correlate Concept nodes completely or partially with Synsets, whereas an assertion is considered an atom of knowledge in the Semantic Network [23]. The semantic memory is constructed as concept-relationship-feature or concept-relationship-concept, i.e., Concept(Apple)-Relationship(IS_A)-Concept(Fruit) and Concept(Apple)-Relationship(isUsedFor)-Feature(Eating). Complete details about semantic memory can be accessed at [1]. Semantic memory is used during the processing of cues and the local association between available items and user commands (see Fig 2 for various examples).

Fig 2: Semantic Graphs extracted from Semantic Memory based on verbal cues.

### _Procedural Memory_

Procedural memory is what controls our actions and abilities. This recollection is wholly dependent on the kind of agent or robot being used. For the execution of skills, such as making tea and sandwiches, we have chosen the Human-Robot Collaborative Architecture. The skills detail the actions to be taken along with their hierarchical constraints.

#### Hierarchical Task Representation

The hierarchical task architecture's goal is to make it possible for complicated tasks to be executed realistically by humans and robots. This task design is built on a complex hierarchical task network that enables simultaneous human and robot work in the same environment. Nearly every task in the real world can be divided into more manageable tasks and set up as a hierarchical task network. In the real world, a task can be made up of a set of sequential, non-sequential, and parallel sub-tasks. Our robot control architecture lets the system encode tasks with different kinds of constraints, such as SequentialNode(THEN), NonOrderingNode(AND), and AlternativePathNode(OR) [24]. Tasks are shown in a tree structure of ObjectNode and ActionEdge(PicknPlace) elements: the objects that tasks operate on are represented by the ObjectNode, and the actions to be taken on objects are represented by the ActionEdge(PicknPlace).

For a task with so many tiers, each node in the architecture keeps track of a state made up of the following: 1) Activation Level: a number that shows how important its parent thinks it is to run and finish a certain node; 2) Activation Potential: a number that shows how useful the node is thought to be, which is sent to the node's parent; 3) Active: a Boolean variable that indicates the behavior is active when the node's activation level is higher than a threshold; 4) Done: a Boolean variable that is set to true when the node has done its job. Each node always keeps track of the above state information. By doing both top-down and bottom-up spreading, the activation spreading technique makes sure that the task is done correctly based on the constraints. To complete a task, activation-spreading messages are sent from the root node to its children to spread activation levels across the task tree. A bottom-up mechanism sends activation potential up the tree by having nodes send status messages to their parents about their current state. In each cycle, a loop keeps the state of each node in the task structure up to date by checking the different parts of the node's state and adjusting them as required. The controller architecture can handle more than one robot because it keeps a copy of the task tree for each robot. This includes which behavior that robot is currently working on, when a robot has completed one, and the activation potential and level for each robot and each behavior.

**Choosing Skill**: First, the system finds the object from the list that has the highest semantic score, then finds which skill contains this object. By doing this, it identifies the skill that it wants to perform.

```
Algorithm 1: Choosing Skill
for object in object_list do
  if object has the highest semantic similarity score then
    skill_object <- object
  end if
end for
for skill in skill_list do
  if skill_object is in skill then
    chosen_skill <- skill
  end if
end for
```

After choosing the skill, the hierarchical architecture updates its activation potential and activation level. For this, another behavior called skill_behavior was added to the previous hierarchical architecture [24][25]. It selects the skill to execute from the hierarchical design. To do this, it only spreads its activation level value to the child nodes belonging to the chosen skill, which allows the child nodes under the chosen skill to activate. In the case of updating the activation potential, the skill_behavior node spreads the activation potential of the single child with the chosen skill.

```
Algorithm 2: Skill_behavior - Spread Activation
msg <- {activation_level = 1.0}
for child in children do
  if child.skill is chosen_skill then
    SendToChild(child, msg)
  end if
end for
```
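A compact Python rendering of Algorithms 1 and 2 is given below; the Node class and the activation threshold are simplified stand-ins for the actual architecture's types, not the implementation used on the robot.

```python
class Node:
    """Simplified task-tree node carrying the state described above."""
    def __init__(self, name, skill=None, children=None):
        self.name, self.skill = name, skill
        self.children = children or []
        self.activation_level = 0.0
        self.activation_potential = 0.0
        self.active = False
        self.done = False

def choose_skill(semantic_scores, skills):
    """Algorithm 1: the highest-scoring object determines the skill."""
    skill_object = max(semantic_scores, key=semantic_scores.get)
    for skill_name, objects in skills.items():
        if skill_object in objects:
            return skill_name

def spread_activation(skill_node, chosen_skill, threshold=0.5):
    """Algorithm 2: spread the activation level only to children of the chosen skill."""
    for child in skill_node.children:
        if child.skill == chosen_skill:
            child.activation_level = 1.0
            child.active = child.activation_level > threshold

# Example using scores from the "Cold" row of Table I.
scores = {"Tea": 0.0115401, "Cup": 0.0035714, "Sugar": 0.0026882, "Meat": 0.0061406}
skills = {"TeaMaking": {"Cup", "Tea", "Sugar"}, "SandwichMaking": {"Bread", "Meat", "Lettuce"}}
assert choose_skill(scores, skills) == "TeaMaking"
```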
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
 & **Bread** & **Cheese** & **Cup** & **Lettuce** & **Meat** & **Sugar** & **Tea** & **Teapot** \\
\hline
**Hot** & 0.0080249 & 0.0043135 & 0.0049332 & 0.0023202 & 0.0069543 & 0.0065621 & **0.0116331** & 0.0011587 \\
\hline
**Hungry** & 0.0006277 & 0.0000000 & 0.0000000 & 0.0000000 & **0.0034459** & 0.0000000 & 0.0000000 & 0.0000000 \\
\hline
**Thirst** & 0.0012563 & 0.0000000 & **0.0057803** & 0.0052356 & 0.0006873 & 0.0000000 & 0.0051282 & 0.0000000 \\
\hline
**Sandwich** & **0.0224423** & 0.0160966 & 0.0065459 & 0.0178571 & 0.0176075 & 0.0026882 & 0.0108120 & 0.0021978 \\
\hline
**Drink** & 0.0100839 & 0.0046816 & **0.0175440** & 0.0032726 & 0.0074370 & 0.0154660 & 0.0146541 & 0.0013999 \\
\hline
**Food** & **0.0287881** & 0.0090561 & 0.0068393 & 0.0048706 & 0.0253697 & 0.0132474 & 0.0080704 & 0.0002427 \\
\hline
**Burger** & 0.0044108 & 0.0029791 & 0.0000000 & 0.0068027 & **0.011111** & 0.0004470 & 0.0000000 & 0.0000000 \\
\hline
**Coffee** & 0.0025157 & 0.0033104 & 0.0233112 & 0.0053050 & 0.0048309 & 0.0062926 & **0.0588923** & 0.0066401 \\
\hline
**Cold** & 0.0055744 & 0.0054682 & 0.0035714 & 0.0026762 & 0.0061406 & 0.0026882 & **0.0115401** & 0.0000000 \\
\hline
\end{tabular}
\end{table}
Table 1: Semantic Similarity Score between Tagged Words (rows) and Available Items (columns). This information is used to select which objects are most semantically related to words that the partner might say.

#### Adding Skill Component in Hierarchical Architecture

We added a new component to the prior task architecture to expand it: a skill component that evaluates the surroundings and interactions to determine which of a variety of tasks should be carried out. Following the interaction between the person and the robot, the semantic knowledge module decides which task the robot should complete. The skill node receives a ROS message in string form; it can then decide which task needs to be executed. We can give the robot a variety of skill tasks under the SkillNode. Whenever the robot chooses a task to perform, it will perform the task accordingly. These skills are designed with nodes like SkillNode (i.e., Tea and Sandwich making), THEN, AND, OR, ObjectNode, and ActionEdge(PicknPlace). As shown in Fig 3, there are two skills listed under the SkillNode: 1) Tea Making Skill and 2) Sandwich Making Skill. The Skill component determines which task to run based on semantic information and the objects that are available in the environment; the semantic relevance of various objects to words that a user might speak is shown in Table I.

## IV Experiment Design

We created a speech conversation between a human participant and a robot to demonstrate the capabilities of the system we created and to verify the functionalities of the cognitive and hierarchical architecture. The robot can understand the hidden context and carry out a skill task using items from the nearby surroundings based on the participant's input. We experimented in a lab setting with a human participant and a Baxter humanoid robot that was positioned in front of a table with items. This experiment involves using the robot to make tea and sandwiches. A Kinect v2 camera on top of Baxter's head and Baxter's right-hand camera were used to detect the objects' AR tags. The robot will decode the tagged word from the human's speech in this human-robot interaction and assess the items' semantic similarity scores (see Table I) with respect to the decoded tagged word.
The architecture will use the score to pick the most suitable skill task to execute. If the human says a statement like "I am thirsty" or "It is cold outside," the tagged words will be "thirsty" and "cold," respectively. Based on the similarity score, in both cases it is observed that the objects under the Tea Making Skill have the highest scores. As a result, the robot will decide to perform the Tea Making Skill. Based on the task tree (see Fig 3), the task will be ((PicknPlace Cup) THEN ((PicknPlace Tea) AND (PicknPlace Sugar))). According to this task statement, the robot will first pick and place the Cup, then pick and place the Tea and Sugar in a non-ordered fashion (see Fig 4).

Fig 3: A new component SkillNode was added to the hierarchical task tree, which allows the system to choose the skill based on the similarity score. Two types of Skills, 1) Tea Making Skill and 2) Sandwich Making Skill, were added under the SkillNode.

Fig 4: The robot is making a cup of tea after the human said, "It is cold outside." The robot determines to execute the Tea Making Skill after analyzing the semantic scores of the available table objects.

Fig. 5 displays the hierarchical state depiction of each stage involved in using the Tea Making Skill. In comparison, the Meat object from the object collection has the greatest semantic score connected to the tagged word "hungry" if the person states something like "I am hungry" (see Table I). The robot will begin making a sandwich because Meat is represented under the Sandwich Making Skill. The task will be ((PicknPlace Bread1) THEN ((PicknPlace Meat) OR (PicknPlace Lettuce)) THEN (PicknPlace Bread2)), again based on the tree. Therefore, the robot would pick and place Bread1 before picking and placing either Meat or Lettuce. The task will then be finished by the robot by picking up and placing Bread2.

## V Results

In our experiment, when the person says, "It is cold outside," speech recognition provides the ontology with the word string spoken by the user. The Jaccard Similarity measure is used to determine the lexical similarity from the decoded speech between the tagged words and the accessible items, such as "tea," "sugar," "cup," "bread," "meat," "cheese," "lettuce," and "teapot" (see Table I). The ontology was able to identify the statement's inferred context based on the score, which shows that the spoken phrase is connected to "tea". The Tea Making task was chosen by the ontology using the list of objects that are both readily accessible and most closely related to the user's speech statement. This reflects a connection between the user's statement, the available objects on the table, and the available tasks that the robot can complete. Fig 5 illustrates the step-by-step state of the tree nodes in our robot architecture for executing the Tea Making Skill. In the first phase, the skill node received the object name "Tea", based on the highest similarity score between the available items and "Cold" (see Fig 2b for the related graph). "Tea" has the highest semantic similarity score (0.0115401 in Table I), i.e., 30.64%, among the other items, i.e., Cup (9.48%), Sugar (7.14%), Lettuce (7.11%), Bread1 (14.80%), Bread2 (14.80%), and Meat (16.31%); the task tree then decided to execute the Tea Making Skill. The THEN node was activated for this skill (see Fig 5a), and the robot proceeded to pick and place the Cup (see Fig 4a and Fig 4b, respectively). When the robot placed the Cup on the table, the status of the Cup node was changed from Active to Done.
From the task tree, the robot would then activate the AND node (Fig 5b) and start picking the tea to pour into the cup (Fig 4c). After pouring the tea into the cup, the Tea was set on the table (Fig 4d), which changed the Tea node in the task tree from Active to Done. Then, the robot moved to the next step according to the task tree, activated the Sugar node (Fig 5c), and started to put sugar in the cup (Fig 4e). In the end, when the Sugar was placed on the table (Fig 4f), all the nodes' statuses were changed to Done, and the whole skill task was completed based on the tree design. We had different table setups for the experiments, but the robot was still able to figure out the context of the surroundings and performed the skill from the hierarchical task tree. Our observations indicated that the robot does not go for objects under different skill sets. Additionally, we provided two statements for each skill test to validate the case scenarios. For instance, we used statements like "I am thirsty" (see Fig 2c for the graph) and "It is cold outside" (see Fig 2b for the graph) for the Tea Making Skill. Likewise, for the Sandwich Making Skill, we used statements like "I am hungry" (see Fig 2a for the graph) and "I want to make a sandwich" (see Fig 2d for the graph). Furthermore, we have also tried the queries "I need some food" (see Fig 2e for the graph) and "I need something to drink" (see Fig 2f for the graph); the respective similarity scores for the extracted action verbs, nouns, and adjectives can be found in Table I. Therefore, we can observe that, based on the ontology approach, the system was able to understand the context behind the user statement "It is cold outside" and choose to perform a hierarchical skill task (the Tea Making Skill) by identifying the relationship between the context and the objects nearby.

Fig 5: Order of execution for the Tea Making Skill: ((PicknPlace Cup) THEN ((PicknPlace Tea) AND (PicknPlace Sugar))). (a) The Tea Making Skill is invoked, which initiates the PickAndPlace process for the Cup object. (b) The PickAndPlace action for the Tea makes the robot start pouring tea into the cup. (c) The PickAndPlace action for the Sugar under the AND node is activated, which makes the robot add sugar to the cup.

## VI Discussion And Future Work

This paper proposes a way to offer an efficient and flexible human-robot collaboration environment in which the robot teammate can perform the user's desired task by deciphering both vague and clear requests in natural language form from a human teammate. The ontology played a vital role in the understanding of user commands due to the semantic relationships between various concepts and features. This architecture has the following contributions:

* The system can find an implied link between the context of the situation and the surrounding environment using the ontology approach after interacting with a human user.
* In our extended hierarchical task architecture, the robot will only select the hierarchical sub-tasks that are most relevant to the specific task derived from the ontology approach.

Currently, the robot performs the skill task after interacting once with the human. However, in the future, we are planning to add more scope for holding a conversation to make the system more dynamic and diverse. Additionally, we hope to apply this ontology approach in a multi-human robot environment for more robust and diverse collaboration.
## Acknowledgment

The authors would like to acknowledge the financial support of this work by the National Science Foundation (NSF #SES-2121387, #IIS-2150394). We acknowledge the funding support from the IEEE Robotics & Automation Society (RAS) under the 2021 Developing Country Faculty Engagement Program.
2309.12204
PrNet: A Neural Network for Correcting Pseudoranges to Improve Positioning with Android Raw GNSS Measurements
We present a neural network for mitigating biased errors in pseudoranges to improve localization performance with data collected from mobile phones. A satellite-wise Multilayer Perceptron (MLP) is designed to regress the pseudorange bias correction from six satellite, receiver, context-related features derived from Android raw Global Navigation Satellite System (GNSS) measurements. To train the MLP, we carefully calculate the target values of pseudorange bias using location ground truth and smoothing techniques and optimize a loss function involving the estimation residuals of smartphone clock bias. The corrected pseudoranges are then used by a model-based localization engine to compute locations. The Google Smartphone Decimeter Challenge (GSDC) dataset, which contains Android smartphone data collected from both rural and urban areas, is utilized for evaluation. Both fingerprinting and cross-trace localization results demonstrate that our proposed method outperforms model-based and state-of-the-art data-driven approaches.
Xu Weng, Keck Voon Ling, Haochen Liu
2023-09-16T10:43:59Z
http://arxiv.org/abs/2309.12204v2
PrNet: A Neural Network for Correcting Pseudoranges to Improve Positioning with Android Raw GNSS Measurements

###### Abstract

We present a neural network for mitigating pseudorange bias to improve localization performance with data collected from Android smartphones. We represent pseudorange bias using a pragmatic satellite-wise Multilayer Perceptron (MLP), the inputs of which are six satellite-receiver-context-related features derived from Android raw Global Navigation Satellite System (GNSS) measurements. To supervise the training process, we carefully calculate the target values of pseudorange bias using location ground truth and smoothing techniques and optimize a loss function containing the estimation residuals of smartphone clock bias. During the inference process, we employ model-based localization engines to compute locations with pseudoranges corrected by the neural network. Consequently, this hybrid pipeline can attend to both pseudorange bias and noise. We evaluate the framework on an open dataset and consider four application scenarios for investigating fingerprinting and cross-trace localization in rural and urban areas. Extensive experiments demonstrate that the proposed framework outperforms model-based and state-of-the-art data-driven approaches.

Android smartphones, deep learning, localization, GPS, pseudoranges.

## I Introduction

Since the release of Android raw Global Navigation Satellite System (GNSS) measurements, precise localization using ubiquitous and portable Android smartphones has been expected to enable various exciting localization-based applications, such as precise vehicle navigation, smart management of city assets, outdoor augmented reality, and mobile health monitoring [1]. However, it is difficult to keep such a promise because the inferior GNSS chips and antennas mounted in mass-market smartphones lead to large pseudorange noise and bias [2, 3, 4]. While filtering or smoothing can reduce pseudorange noise, it remains challenging to mitigate pseudorange bias that might be caused by multipath, non-line-of-sight (NLOS) propagation, modeling residuals of atmospheric delays, smartphone hardware delays, etc. [5]. To address this issue, we propose PrNet, a neural network for correcting pseudoranges to improve positioning with Android raw GNSS measurements. As illustrated in Fig. 1, the underlying idea is to train a neural network by regressing from six satellite-receiver-context-related features to pseudorange bias. The squared loss is optimized for all visible satellites at each time step. After training the neural network, we can predict the biased errors, eliminate them from pseudoranges, and then feed the corrected pseudoranges into a classical localization engine to compute locations.

Fig. 1: An overview of our PrNet-based localization pipeline. The blue solid and dashed lines represent the biased pseudoranges and unbiased pseudoranges (we assume all bias is positive here for easy illustration), respectively. The red, purple, yellow, and green segments denote the pseudorange bias of satellites SV1-SV4.

However, we have to overcome two stumbling obstacles to implement the above idea. 1) _Feature Selection_: over 30 raw GNSS measurements are logged from Android smartphones [1], and relevant features must be carefully chosen to enhance the performance of the neural network while minimizing computational costs [6]. 2) _Data Labeling_: the training data should be labeled with pseudorange bias, while we can only obtain the ground truth of smartphones' locations [1]. Previous research has proposed various methods to estimate the pseudorange bias for geodetic GNSS receivers but ignored the pseudorange noise [7, 8, 9, 10]. These methods degrade in performance when transferred to Android measurements, which are about one order of magnitude noisier than geodetic-quality ones. To this end, we nominate two new features by visualizing Android measurements across various dimensions and derive the target values of pseudorange bias using location ground truth and the Rauch-Tung-Striebel (RTS) smoothing algorithm [11]. Besides, our experiments show that incorporating estimation residuals of smartphone clock bias into the loss function can enhance the inference ability of the neural network.
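To make the regression idea concrete, a minimal PyTorch sketch of a satellite-wise MLP with a masked squared loss is shown below. The hidden sizes, activation, and masking scheme are our assumptions for illustration; the actual PrNet architecture is specified later in the paper and in the released code.

```python
import torch
import torch.nn as nn

class SatelliteWiseMLP(nn.Module):
    """Shared MLP applied per satellite: 6 features in, 1 bias correction out."""
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, features, mask):
        # features: (batch, max_sats, 6); mask: (batch, max_sats), 1 for visible
        # satellites and 0 for padding, so padded slots contribute nothing.
        bias = self.net(features).squeeze(-1)
        return bias * mask

# One training step with a squared loss over all visible satellites:
model = SatelliteWiseMLP()
features = torch.randn(8, 12, 6)   # placeholder batch of epochs
mask = torch.ones(8, 12)           # all 12 satellite slots visible here
target = torch.randn(8, 12)        # placeholder pseudorange-bias labels
loss = (((model(features, mask) - target) * mask) ** 2).sum() / mask.sum()
loss.backward()
```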
To this end, we nominate two new features by visualizing Android measurements across various dimensions and derive the target values of pseudorange bias using location ground Fig. 1: An overview of our PrNet-based localization pipeline. The blue solid and dashed lines represent the biased pseudoranges and unbiased pseudoranges (we assume all bias is positive here for easy illustration), respectively. The red, purple, yellow, and green segments denote the pseudorange bias of satellites SV1-SV4. truth and the Rauch-Tung-Striebel (RTS) smoothing algorithm [11]. Besides, our experiments show that incorporating estimation residuals of smartphone clock bias into the loss function can enhance the inference ability of the neural network. To recapitulate briefly, our contributions are: * A pipeline for learning the mapping from six satellite-receiver-context-related inputs to pseudorange bias representation, which is parameterized by a pragmatic satellite-wise MLP. This includes two new proposed inputs--unit geometry vectors and smartphone headings--and a visible satellite mask layer to enable parallel computation. * Computation methods to derive labels for pseudorange bias and a differentiable loss function involving estimation residuals of smartphone clock bias. * A comprehensive evaluation from perspectives of horizontal positioning errors, generalization ability, computational load, and ablation analysis. Our codes are available at [https://github.com/ailocar/prnet](https://github.com/ailocar/prnet). We demonstrate that our proposed PrNet-based localization framework outperforms both the model-based and data-driven state-of-the-art pseudorange-based localization approaches. To the best of our knowledge, we present the first data-driven framework for pseudorange correction and localization over Android smartphone measurements. ## II Motivation The primary motivation behind this paper is to remove the biased errors present in pseudoranges collected from Android smartphones. To investigate the pseudorange bias and its impact on horizontal positioning accuracy, we derive the pseudorange bias as detailed in Section V and adopt Weighted Least Squares (WLS), Moving Horizon Estimation (MHE), Extended Kalman Filter (EKF), and Rauch-Tung-Striebel (RTS) smoother to compute locations with the biased pseudoranges. We choose a trace of data in the Google Smartphone Decimeter Challenge (GSDC) dataset [1] and display our findings in Fig. 2. As depicted in Fig. 2(a), biased errors are pervasive across all visible satellites and can reach magnitudes up to 10 meters. Such biased errors might be attributed to multipath, NLOS, residuals in atmospheric delay modeling, smartphone hardware interference, etc. Importantly, they are hardly modeled mathematically. Furthermore, Fig. 2(b) clearly shows how the biased pseudoranges directly translate into localization errors that are challenging to mitigate using conventional filtering or smoothing techniques. ## III Related Work This paper focuses on artificial intelligence (AI) for GNSS localization using pseudoranges, which can be categorized into five types. **(i)**_AI for Pseudorange Correction_: Recently, several learning-based methods have been proposed to predict and correct pseudorange errors using raw GNSS measurements as inputs [8, 9, 10, 12]. However, all these methods use pseudorange errors (comprising noise and bias) to label training data and cannot be transferred to Android raw GNSS measurements directly. 
**(ii)**_AI for Position Correction_: Various machine learning methods have been proposed to predict offsets between model-based position estimates and ground truth locations [13, 14]. Then, the model-based position estimates are compensated with the learned location offsets. **(iii)**_End-to-end Neural Networks for GNSS_: This type of work directly replaces the model-based positioning engines with deep neural networks [15, 16]. For example, the authors in [15] leveraged a set transformer to replace the WLS engine in solving the pseudorange equations linearized about an initial location guess. Test results showed that the set transformer is sensitive to the initial location guess to a certain extent. Compared with the first type, these two kinds of approaches need to learn how to compute locations, even though this computation has already been robustly and mathematically well established. **(iv)**_AI Enhanced Localization Engine_: AI has been utilized to improve the physical models employed in conventional localization engines [17, 18, 19]. For example, a parameter of a filter-based localization engine can be learned instead of being tweaked empirically [19]. **(v)**_AI for Signal Classification_: The final category involves using AI to detect and classify multipath and NLOS propagations. By identifying pseudoranges containing multipath or NLOS errors, these methods can exclude them from the further localization process [20, 21, 22].

## IV Preliminaries of GNSS

After the corrections of atmospheric delays as well as satellite clock offsets [23], the pseudorange measurement \(\rho_{c_{k}}^{(n)}\) from the \(n^{th}\) satellite to a smartphone at the \(k^{th}\) time step is shown below:

\[\rho_{c_{k}}^{(n)}=r_{k}^{(n)}+\delta t_{u_{k}}+\varepsilon_{k}^{(n)} \tag{1}\]

where the subscript \(k\) represents the \(k^{th}\) time step, and \(r_{k}^{(n)}\) denotes the geometric distance from the \(n^{th}\) satellite to the smartphone. \(\delta t_{u_{k}}\) represents the clock offset of the smartphone relative to the GNSS reference time. We wrap up multipath delays, hardware delays, pseudorange noise, and other potential errors in one item \(\varepsilon_{k}^{(n)}\) called the pseudorange error. Then, we can estimate the smartphone's location \(\mathbf{x}_{k}=\left[x_{k},y_{k},z_{k}\right]^{T}\) in the Earth-centered, Earth-fixed (ECEF) coordinate system and its clock offset \(\delta t_{u_{k}}\) by solving the following linear equation system [23] established by \(M\) visible satellites:

\[\mathbf{W}_{k}\mathbf{G}_{k}\left(\mathbf{X}_{k}-\tilde{\mathbf{X}}_{k}\right)=\mathbf{W}_{k}\Delta\boldsymbol{\rho}_{c_{k}} \tag{2}\]

where \(\mathbf{X}_{k}=\left[x_{k},y_{k},z_{k},\delta t_{u_{k}}\right]^{T}\) is the unknown user state while \(\tilde{\mathbf{X}}_{k}=\left[\tilde{x}_{k},\tilde{y}_{k},\tilde{z}_{k},\delta\tilde{t}_{u_{k}}\right]^{T}\) is an approximation of the user state. \(\mathbf{W}_{k}\) is a diagonal matrix with the reciprocals of the 1-\(\sigma\) pseudorange uncertainties of all visible satellites as its main diagonal to weight the pseudoranges.
The geometry matrix \(\mathbf{G}_{k}\) is calculated with the satellite location \(\mathbf{x}_{k}^{(n)}=\left[x_{k}^{(n)},y_{k}^{(n)},z_{k}^{(n)}\right]^{T}\) and the approximate user location \(\tilde{\mathbf{X}}_{k}\)[23]: \[\mathbf{G}_{k}=\left[\begin{array}{cccc}a_{x_{k}}^{(1)}&a_{y_{k}}^{(1)}&a_{ z_{k}}^{(1)}&1\\ a_{x_{k}}^{(2)}&a_{y_{k}}^{(2)}&a_{z_{k}}^{(2)}&1\\ \cdots\\ a_{x_{k}}^{(M)}&a_{y_{k}}^{(M)}&a_{z_{k}}^{(M)}&1\end{array}\right] \tag{3}\] where, \[a_{x_{k}}^{(n)} =\frac{\tilde{x}_{k}-x_{k}^{(n)}}{\tilde{r}_{k}^{(n)}},\text{ }a_{y_{k}}^{(n)}=\frac{\tilde{y}_{k}-y_{k}^{(n)}}{\tilde{r}_{k}^{(n)}},\text{ }a_{z_{k}}^{(n)}=\frac{\tilde{z}_{k}-z_{k}^{(n)}}{\tilde{r}_{k}^{(n)}}\] \[\tilde{r}_{k}^{(n)} =\sqrt{\left(\tilde{x}_{k}-x_{k}^{(n)}\right)^{2}+\left(\tilde{y} _{k}-y_{k}^{(n)}\right)^{2}+\left(\tilde{z}_{k}-z_{k}^{(n)}\right)^{2}}\] \[n=1,2,3,\cdots,M\] The pseudorange residual \(\Delta\boldsymbol{\rho}_{c_{k}}\) for the \(M\) visible satellites at the \(k^{th}\) time step are shown as follows. \[\Delta\boldsymbol{\rho}_{c_{k}}=\left[\Delta\rho_{c_{k}}^{(1)},\Delta\rho_{c_ {k}}^{(2)},...,\Delta\rho_{c_{k}}^{(M)}\right]^{T}\] where \[\Delta\rho_{c_{k}}^{(n)} =\rho_{c_{k}}^{(n)}-\tilde{r}_{k}^{(n)}-\delta\tilde{t}_{u_{k}}- \varepsilon_{k}^{(n)}\] \[n=1,2,3,\cdots,M\] The WLS-based solution to (2) is shown below [23]. \[\mathbf{X}_{k} =\tilde{\mathbf{X}}_{k}+\Delta\mathbf{X}_{k}\] \[=\tilde{\mathbf{X}}_{k}+\left(\mathbf{W}_{k}\mathbf{G}_{k}\right) ^{+}\mathbf{W}_{k}\Delta\boldsymbol{\rho}_{c_{k}} \tag{4}\] where \(\Delta\mathbf{X}_{k}=\left(\mathbf{W}_{k}\mathbf{G}_{k}\right)^{+}\mathbf{W} _{k}\Delta\boldsymbol{\rho}_{c_{k}}\) is the displacement from the approximate user state \(\tilde{\mathbf{X}}_{k}\) to the true one. The approximate user state \(\tilde{\mathbf{X}}_{k}\) will be updated with the result of (4), and the computation in (4) will be iterated until the accuracy requirement is satisfied. Note that the pseudorange error \(\varepsilon_{k}^{(n)}\) is unknown in practice. The estimated user state \(\hat{\mathbf{X}}_{k}=\left[\hat{x}_{k},\hat{y}_{k},\hat{z}_{k},\delta\hat{t}_{ u_{k}}\right]^{T}\) in the presence of pseudorange errors is: \[\hat{\mathbf{X}}_{k}=\tilde{\mathbf{X}}_{k}+\Delta\mathbf{X}_{k}+\left( \mathbf{W}_{k}\mathbf{G}_{k}\right)^{+}\mathbf{W}_{k}\boldsymbol{\varepsilon }_{k}\] where, \[\boldsymbol{\varepsilon}_{k}=\left[\varepsilon_{k}^{(1)},\varepsilon_{k}^{(2)},...,\varepsilon_{k}^{(M)}\right]^{T}\] The resulting state estimation error \(\epsilon_{\mathbf{X}_{k}}\) is: \[\epsilon_{\mathbf{X}_{k}} =\mathbf{X}_{k}-\hat{\mathbf{X}}_{k}=-\left(\mathbf{W}_{k} \mathbf{G}_{k}\right)^{+}\mathbf{W}_{k}\boldsymbol{\varepsilon}_{k}\] \[=\left[\epsilon_{x_{k}},\epsilon_{y_{k}},\epsilon_{z_{k}}, \epsilon_{\delta t_{u_{k}}}\right]^{T} \tag{5}\] ## V Data Labeling In this section, we set forth how to compute the target values of pseudorange bias using location ground truth and RTS smoother. ### _Estimating Pseudorange Errors_ According to (1), pseudorange errors can be calculated as: \[\varepsilon_{k}^{(n)}=\rho_{c_{k}}^{(n)}-r_{k}^{(n)}-\delta t_{u_{k}}\] However, we do not have the ground truth of the smartphone's clock bias \(\delta t_{u_{k}}\). 
Thus, we employ its WLS-based estimation \(\delta\hat{t}_{u_{k}}\) to calculate pseudorange errors: \[\hat{\varepsilon}_{k}^{(n)}=\rho_{c_{k}}^{(n)}-r_{k}^{(n)}-\delta\hat{t}_{u_{k}} \tag{6}\] Substituting (1) and (5) into (6) yields: \[\hat{\varepsilon}_{k}^{(n)} =\varepsilon_{k}^{(n)}+\epsilon_{\delta t_{u_{k}}}\] \[=\varepsilon_{k}^{(n)}-\mathbf{h}_{k}^{T}\boldsymbol{\varepsilon}_{k} \tag{7}\] where \(\mathbf{h}_{k}^{T}\) is the last row vector of \(\left(\mathbf{W}_{k}\mathbf{G}_{k}\right)^{+}\mathbf{W}_{k}\). Theoretically, we can calculate the closed form of the real pseudorange error \(\boldsymbol{\varepsilon}_{k}\) with the \(M\) equations defined by (7) using the least squares (LS) algorithm. However, the coefficient matrix of the system of \(M\) equations is close to singular because only two elements are different between any two rows. Consequently, this might cause large errors in numerical computation. Therefore, we use gradient descent to estimate pseudorange errors instead, which will be detailed in Section VII.

### _Smoothing Pseudorange Errors_

The pseudorange error \(\varepsilon_{k}^{(n)}\) couples the biased error \(\mu_{k}^{(n)}\) and the unbiased noise \(v_{k}^{(n)}\) together: \[\varepsilon_{k}^{(n)}=\mu_{k}^{(n)}+v_{k}^{(n)} \tag{8}\] where, \[\mu_{k}^{(n)} =\mathrm{E}\left(\varepsilon_{k}^{(n)}\right)\] \[v_{k}^{(n)} =\varepsilon_{k}^{(n)}-\mu_{k}^{(n)}\] Substituting (8) into (7) yields: \[\hat{\varepsilon}_{k}^{(n)}=\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}+v_{k}^{(n)}-\mathbf{h}_{k}^{T}\boldsymbol{v}_{k} \tag{9}\] where, \[\mathbf{M}_{k}=\left[\mu_{k}^{(1)},\mu_{k}^{(2)},\cdots,\mu_{k}^{(M)}\right]^{T} \tag{10}\] \[\boldsymbol{v}_{k}=\boldsymbol{\varepsilon}_{k}-\mathbf{M}_{k}\] Next, we try to extract the biased error terms \(\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}\) from the pseudorange error estimation \(\hat{\varepsilon}_{k}^{(n)}\) to filter out the smartphone's pseudorange noise, which has been proven to be much larger than that of geodetic receivers [4].

**Theorem 1**: _The biased error terms \(\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}\) are given by \(\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}=\mathbf{g}_{k}^{(n)T}\left(\tilde{\mathbf{x}}_{k}-\mathbf{x}_{k}\right)\), where \(\tilde{\mathbf{x}}_{k}=\left[\tilde{x}_{k},\tilde{y}_{k},\tilde{z}_{k}\right]^{T}\) represents the mean of the WLS-based location estimation \(\hat{\mathbf{x}}_{k}=\left[\hat{x}_{k},\hat{y}_{k},\hat{z}_{k}\right]^{T}\), and \(\mathbf{g}_{k}^{(n)}\) is the unit geometry vector:_ \[\mathbf{g}_{k}^{(n)}=\left[a_{x_{k}}^{(n)},a_{y_{k}}^{(n)},a_{z_{k}}^{(n)}\right]^{T} \tag{11}\]

Proof:: Replace the approximate state \(\tilde{\mathbf{X}}_{k}\) with the real state \(\mathbf{X}_{k}\) in (2), i.e., set \(\tilde{\mathbf{X}}_{k}=\mathbf{X}_{k}\).
Thus, the WLS-based state estimation \(\hat{\mathbf{X}}_{k}\) is the optimal solution to the following optimization problem: \[\min_{\hat{\mathbf{X}}_{k}}||\mathbf{W}_{k}\mathbf{G}_{k}\left(\hat{\mathbf{X}}_{k}-\mathbf{X}_{k}\right)-\mathbf{W}_{k}\Delta\boldsymbol{\rho}_{c_{k}}||^{2}\] The optimal \(\hat{\mathbf{X}}_{k}\) satisfies \[\mathbf{G}_{k}\left(\hat{\mathbf{X}}_{k}-\mathbf{X}_{k}\right)\approx\Delta\boldsymbol{\rho}_{c_{k}}=\boldsymbol{\rho}_{c_{k}}-\mathbf{r}_{k}-\delta t_{u_{k}}=\boldsymbol{\varepsilon}_{k} \tag{12}\] where, \[\mathbf{r}_{k}=\left[r_{k}^{(1)},r_{k}^{(2)},\cdots,r_{k}^{(M)}\right]^{T}\] \[\boldsymbol{\rho}_{c_{k}}=\left[\rho_{c_{k}}^{(1)},\rho_{c_{k}}^{(2)},\cdots,\rho_{c_{k}}^{(M)}\right]^{T}\] For the \(n^{th}\) satellite, the following equation can be obtained according to (3), (5) and (12): \[\mathbf{g}_{k}^{(n)T}\left(\hat{\mathbf{x}}_{k}-\mathbf{x}_{k}\right)+\mathbf{h}_{k}^{T}\boldsymbol{\varepsilon}_{k}=\varepsilon_{k}^{(n)} \tag{13}\] After calculating the expected values of both sides of (13) and rearranging the result, we have \[\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}=\mathbf{g}_{k}^{(n)T}\left(\tilde{\mathbf{x}}_{k}-\mathbf{x}_{k}\right)\]

**Corollary 1.1**: _The biased error terms \(\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}\) are given by \(\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}=\bar{r}_{k}^{(n)}-r_{k}^{(n)}\), where \(\bar{r}_{k}^{(n)}=||\tilde{\mathbf{x}}_{k}-\mathbf{x}_{k}^{(n)}||_{2}\)._

Proof:: Using a Taylor expansion at \(\mathbf{x}_{k}\), we have \[\bar{r}_{k}^{(n)} =||\tilde{\mathbf{x}}_{k}-\mathbf{x}_{k}^{(n)}||_{2}\] \[\approx||\mathbf{x}_{k}-\mathbf{x}_{k}^{(n)}||_{2}+\mathbf{g}_{k}^{(n)T}\left(\tilde{\mathbf{x}}_{k}-\mathbf{x}_{k}\right)\] \[=r_{k}^{(n)}+\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}\]

This paper uses \(\bar{\varepsilon}_{k}^{(n)}=\bar{r}_{k}^{(n)}-r_{k}^{(n)}\) to label training data. We consider the RTS smoother-based positioning solution as \(\tilde{\mathbf{x}}_{k}\). The smoothing process of the RTS smoother for Android raw GNSS measurements is detailed in [24]. Fig. 3 displays the pseudorange errors of four visible satellites before and after smoothing, indicating that the smoothed pseudorange errors are much smoother than the original ones and can represent the biased components. Note that the RTS smoother needs about 120 epochs for convergence. Therefore, during the training process, the data of the first 120 observation epochs are discarded.

## VI Feature Selection

To enable better representation and higher efficiency of the neural network, we meticulously select input features, including the commonly utilized ones in the community and some novel features tailored specifically for Android smartphones.

### _Carrier-to-noise Density Ratio \(C/N_{0}\)_

\(C/N_{0}\) has been proven to be related to pseudorange errors [8, 9, 10]. The \(C/N_{0}\) of an Android smartphone is generally smaller than 50 dB-Hz and can be normalized accordingly: \[F_{1_{k}}^{(n)}=\frac{\mathrm{Cn0DbHz}}{50}\] where "Cn0DbHz" is one of the Android raw GNSS measurements.

### _Elevation Angles of Satellites_

Satellite elevation angles are also closely correlated with pseudorange errors [8, 9, 10]. We estimate the elevation angle \(E_{k}^{(n)}\) of the \(n^{th}\) satellite using the WLS-based positioning solution \(\hat{\mathbf{x}}_{k}\) and the satellite location \(\mathbf{x}_{k}^{(n)}\)[23].
It is standardized to \([-1,1]\) as follows: \[F_{2_{k}}^{(n)}=\left[\sin E_{k}^{(n)},\cos E_{k}^{(n)}\right]\]

### _Satellite ID_

We aim to predict the pseudorange errors of different visible satellites indexed by "Svid". In this work, we only consider GPS L1 signals, and the corresponding satellite ID (PRN index) ranges from 1 to 32, so we normalize it by 32. \[F_{3_{k}}^{(n)}=\frac{\mathrm{Svid}}{32}\] where "Svid" is one of the Android raw GNSS measurements.

### _Position Estimations_

The surrounding environment of each location possesses unique terrain characteristics, such as the distribution of buildings, mountains, tunnels, and skyways, determining the multipath/NLOS context at the site. Thus, it is reasonable to include position estimations in the input features [10, 12]. We use the WLS-based position estimations, which are provided in the GSDC dataset. To distinguish between closely located sites, we utilize latitude and longitude with fine-grained units, i.e., degree (\(\varphi_{deg_{k}}\)), minute (\(\varphi_{min_{k}}\)), and second (\(\varphi_{sec_{k}}\)) for latitude, and degree (\(\lambda_{deg_{k}}\)), minute (\(\lambda_{min_{k}}\)), and second (\(\lambda_{sec_{k}}\)) for longitude, which can be standardized as follows: \[F_{4_{k}}=\left[\frac{\varphi_{deg_{k}}}{90},\frac{\varphi_{min_{k}}}{60},\frac{\varphi_{sec_{k}}}{60},\frac{\lambda_{deg_{k}}}{180},\frac{\lambda_{min_{k}}}{60},\frac{\lambda_{sec_{k}}}{60}\right]\]

Fig. 3: Pseudorange errors before and after smoothing (using four satellites from Trace “2021-04-28-US-MTV-1” by Pixel 4 in GSDC dataset as examples)

### _Unit Geometry Vectors_

The direction of signal propagation can be depicted by the unit geometry vector from the satellite to the smartphone, which is periodic at each site [12]. We visualize the unit geometry vectors in Fig. 4(c). Fig. 4(b) shows that the blue, yellow, and green traces were captured along similar routes and directions, but Fig. 4(a) indicates that only the yellow and green traces share similar patterns of pseudorange bias. This is because the unit geometry vectors of the yellow and green traces are close to each other but far away from those of the blue one, as displayed in Fig. 4(c). Hence, pseudorange biases are tightly correlated with unit geometry vectors. We convert the unit geometry vector \(\mathbf{g}_{k}^{(n)}\) from the ECEF coordinate system to the north-east-down (NED) coordinate system to couple it to the (WLS-based) location information. Each item in the unit vector falls within \([-1,1]\) and can be directly used as input features: \[F_{5_{k}}^{(n)}=\mathbf{g}_{\mathrm{NED}_{k}}^{(n)}{}^{T}\]

### _Heading Estimations_

Fig. 4(b) shows that the blue and orange traces were collected along similar routes, and Fig. 4(c) indicates that their unit geometry vectors are also close to each other. Their pseudorange biases, however, are quite different, as shown in Fig. 4(a). This might be caused by the opposite moving directions along which the two traces were collected. According to the setup of Android smartphones [1], the moving direction determines the heading of the smartphone's antenna, which may affect the GNSS signal reception.
Therefore, we include smartphone headings \(\theta_{k}\) in the input features, which can be approximately represented by the unit vector pointing from the current smartphone location to the next one: \[\theta_{k}=\frac{\hat{\mathbf{x}}_{k+1}-\hat{\mathbf{x}}_{k}}{||\hat{\mathbf{x}}_{k+1}-\hat{\mathbf{x}}_{k}||_{2}}\] Next, we convert the smartphone heading \(\theta_{k}\) from the ECEF coordinate system to the NED coordinate system constructed at the current location \(\hat{\mathbf{x}}_{k}\) and get \(\theta_{\mathrm{NED}_{k}}\). Each item in the unit vector \(\theta_{\mathrm{NED}_{k}}\) falls within \([-1,1]\). Thus, it can be directly included as an input feature. \[F_{6_{k}}=\theta_{\mathrm{NED}_{k}}{}^{T}\] To sum up, we choose the following input features: \[\mathbf{F}_{k}^{(n)}=\left[F_{1_{k}}^{(n)},F_{2_{k}}^{(n)},F_{3_{k}}^{(n)},F_{4_{k}},F_{5_{k}}^{(n)},F_{6_{k}}\right]^{T}\] where \(F_{1_{k}}^{(n)}\), \(F_{2_{k}}^{(n)}\), \(F_{3_{k}}^{(n)}\), and \(F_{5_{k}}^{(n)}\) vary across different satellites while \(F_{4_{k}}\) and \(F_{6_{k}}\) are common features that are shared among all visible satellites at a given observation epoch.

Fig. 4: (a) Pseudorange bias of satellite PRN 2 along the traces collected by Pixel 4 in GSDC dataset. Different traces are plotted in a single figure to facilitate easy comparison while the time axis does not correspond to the exact moments when the data were collected; (b) Traces and directions along which each data file is collected; (c) Trajectories of terminal points of the unit geometry vectors from satellite PRN 2 to Pixel 4 on the unit sphere centered at Pixel 4.

## VII PrNet

The MLP has proved itself in learning high-dimensional representations for regression or classification, whether as a single network [25] or as a sublayer module [26]. The proposed PrNet is based on a deep MLP that learns the mapping from six satellite-receiver-context-related features to pseudorange bias, i.e., \(\mu_{k}^{(n)}=f\left(\mathbf{F}_{k}^{(n)}\right)\). The diagram of PrNet is shown in Fig. 5.

Fig. 5: Diagram of PrNet. PrNet comprises a satellite-wise MLP and a visible satellite mask; \(B\) represents the batch size; \(F\) denotes the dimension of input features; \(H\) is the number of hidden neurons.

Our approach involves passing a batch of inputs through the neural network to compute the corresponding pseudorange bias. Each sample in the batch represents all visible satellites at a given time step, and all the satellites are processed by the same MLP, called the satellite-wise MLP. To address the challenge of a varying number of visible satellites across time steps, we compute the pseudorange bias of all 32 satellites in the GPS constellation each time, where the inputs of non-visible satellites are set to zero. After the output layer, we add a "visible satellite mask" to filter out the meaningless outputs of non-visible satellites and retain the information of only the visible satellites. This approach allows us to execute parallel computation on inputs of varying quantities. PrNet is designed to learn the representation of the pseudorange bias \(\mu_{k}^{(n)}\). However, the training data are labeled by \(\bar{\varepsilon}_{k}^{(n)}=\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}\).
To account for this, we add the estimation residual of the smartphone clock bias, \(-\mathbf{h}_{k}^{T}\hat{\mathbf{M}}_{k}\), to the loss function to align the output of PrNet with the target value \(\bar{\varepsilon}_{k}^{(n)}=\mu_{k}^{(n)}-\mathbf{h}_{k}^{T}\mathbf{M}_{k}\): \[\mathcal{L}=\sum_{n=1}^{M}||f\left(\mathbf{F}_{k}^{(n)}\right)-\mathbf{h}_{k}^{T}\hat{\mathbf{M}}_{k}-\bar{\varepsilon}_{k}^{(n)}||^{2}\] where, \[\hat{\mathbf{M}}_{k}=\left[\hat{\mu}_{k}^{(1)},\hat{\mu}_{k}^{(2)},\cdots,\hat{\mu}_{k}^{(M)}\right]^{T}\] \[\hat{\mu}_{k}^{(n)}=f\left(\mathbf{F}_{k}^{(n)}\right)\]

## VIII Implementation Details

### _Dataset_

We conduct extensive evaluations on the GSDC 2021 dataset [1]1. Most of the GSDC 2021 dataset was collected in rural areas, and only a few traces were located in urban areas. As illustrated in Fig. 6, we use the dataset to design four scenarios that encompass rural fingerprinting, rural cross-trace, urban fingerprinting, and urban cross-trace localization. Scenario I and Scenario II share the same training data, totaling 12 traces. Scenario III and Scenario IV share the same training data, totaling 2 traces. The training data were all collected using Pixel 4. The testing datasets for the four scenarios consist of 2, 1, 1, and 1 trace, respectively.

Footnote 1: We opted not to utilize the GSDC 2022 dataset due to the absence of ground truth altitude information in most of its traces, which is essential for computing the target values of pseudorange bias.

### _Baseline Methods_

#### VIII-B1 Model-based methods

We implement the vanilla WLS-based, two filtering-based (MHE and EKF), and one smoothing-based (RTS smoother) localization engines as baseline methods. More details about them can be found in [24].

#### VIII-B2 PBC-RF

Point-based Correction Random Forest (PBC-RF) represents a state-of-the-art _machine learning_ approach for predicting pseudorange errors in _specialized GNSS receivers_, as detailed in [10].

#### VIII-B3 FCNN-LSTM

Fully Connected Neural Network with Long Short-term Memory (FCNN-LSTM) stands as a state-of-the-art _deep learning_ model designed for the prediction of pseudorange errors in _specialized GNSS receivers_, as detailed in [9]. PBC-RF and FCNN-LSTM have not been made available as open-source software at this time. We implement them as baseline methods according to [9, 10]. Our implementations are available at [https://github.com/Aaron-WengXu](https://github.com/Aaron-WengXu).

#### VIII-B4 Set Transformer

To the best of our knowledge, it is the only open-source state-of-the-art work that applies data-driven techniques directly to Android raw GNSS measurements [15]. We trained the neural network as one baseline method. Note that its inference performance is tightly related to the initial locations of smartphones, which are characterized by the "magnitude of initialization ranges" \(\mu\) [15]. We determine the value of \(\mu\) by calculating the \(95^{th}\) percentile of the horizontal localization errors obtained from the model-based approaches. The set transformer we trained is available at [https://github.com/ailocar/deep_gnsss](https://github.com/ailocar/deep_gnsss).

### _PrNet Implementation_

The proposed neural network is implemented using the PyTorch and d2l libraries [27]. After hyper-parameter tuning, we set the number of hidden neurons \(H\) to 40 and the number of hidden layers \(L\) to 20. We use the Adam optimizer with a learning rate decaying from \(1\times 10^{-2}\) to \(1\times 10^{-7}\) for optimizing its weights.
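For concreteness, the satellite-wise MLP with the visible-satellite mask of Section VII and the loss above can be sketched in PyTorch as follows. This is a minimal illustrative sketch, not the released implementation: the flattened feature dimension of 16 (the six feature groups \(F_{1}\)–\(F_{6}\)), the zero-padding to 32 GPS satellites, and the tensor layouts are our own assumptions.

```python
import torch
import torch.nn as nn

class PrNetSketch(nn.Module):
    """Satellite-wise MLP with a visible-satellite mask (cf. Fig. 5)."""
    def __init__(self, n_features=16, hidden=40, n_hidden_layers=20):
        super().__init__()
        layers = [nn.Linear(n_features, hidden), nn.ReLU()]
        for _ in range(n_hidden_layers - 1):
            layers += [nn.Linear(hidden, hidden), nn.ReLU()]
        layers.append(nn.Linear(hidden, 1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, features, visible_mask):
        # features: (B, 32, n_features), zeros for non-visible satellites
        # visible_mask: (B, 32), 1 for visible satellites and 0 otherwise
        mu_hat = self.mlp(features).squeeze(-1)   # (B, 32), shared MLP per satellite
        return mu_hat * visible_mask              # mask out non-visible outputs

def prnet_loss(mu_hat, h, eps_bar, visible_mask):
    """Loss with the clock-bias residual term h_k^T M_hat_k (zero-padded inputs)."""
    clock_residual = (h * mu_hat).sum(dim=1, keepdim=True)        # h_k^T M_hat_k
    residual = (mu_hat - clock_residual - eps_bar) * visible_mask
    return residual.pow(2).sum(dim=1).mean()                      # sum over satellites
```

Training with Adam and the decaying learning rate mentioned above would then proceed with a standard optimization loop.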
No regularization techniques are employed in the training process. In Scenario I and Scenario II (rural areas), the optimization of PrNet can converge within 0.5k iterations. In Scenario III and Scenario IV (urban areas), it takes about 3-5k iterations to train the neural network to convergence. We utilize WLS, MHE, EKF, and the RTS smoother to process the pseudoranges corrected by PrNet for localization.

## IX Experiments

### _Horizontal Localization_

The primary focus of smartphone-based localization is on horizontal positioning errors. Therefore, we quantitatively compare our proposed frameworks against the baseline methods across four scenarios by computing their horizontal errors with Vincenty's formulae. Fig. 7 displays the empirical cumulative distribution function (ECDF) of horizontal errors. The corresponding horizontal scores are summarized in Table I. The proposed PrNet+RTS smoother consistently outperforms all the baseline methods that employ classical model-based localization engines or sophisticated data-driven models. Additionally, by comparing the vanilla model-based approaches against their PrNet-enhanced versions, PrNet can reduce the horizontal scores by up to 74% (PrNet+RTS smoother in Scenario I). The set transformer is tied to the "magnitude of initialization ranges" \(\mu\) and tends to yield horizontal scores around this particular value, which explains its poor performance in urban areas where \(\mu\) is initialized with large magnitudes. Because PBC-RF and FCNN-LSTM were originally intended to process data captured by geodetic GNSS receivers, their ability to correct pseudoranges for smartphones is limited. Nevertheless, they outperform PrNet+WLS and PrNet+MHE in Scenarios II and III. This is because we equip them with our best localization engine (the RTS smoother), which already surpasses PrNet+WLS and PrNet+MHE in these scenarios.

### _Computational Load_

Deep learning promises superior performance at the cost of substantial computational load. We summarize the computational complexity of PrNet and the other two deep learning models, i.e., the set transformer and FCNN-LSTM, in Table II. In this comparison, the set transformer outperforms the other two deep learning models even though its computational complexity is \(\mathcal{O}(M^{2})\). Such efficiency is credited to the transformer's parallel computation [27] and its fewer sequential operations [15] compared with the other two methods.

Fig. 6: (a) Scenario I: rural fingerprinting. (b) Scenario II: rural cross-trace. (c) Scenario III: urban fingerprinting. (d) Scenario IV: urban cross-trace.

Fig. 7: ECDF of horizontal errors in 4 scenarios. (a) Scenario I. (b) Scenario II. (c) Scenario III. (d) Scenario IV.

### _Ablation Studies_

In Table III, we conduct an extensive ablation study to investigate the reliance of PrNet on our design choices, including the two novel input features, the loss function design, and the label computation. We also assess the impact of position estimations, an input feature recently introduced in [10], on the resulting horizontal positioning errors. Furthermore, we analyze the scalability of PrNet by comparing models of different sizes. We present the results of PrNet+RTS smoother averaged over the four scenarios. Our best model is considered as the baseline (Row 9). The results verify that all our design choices are reasonable.
Specifically, through a stepwise removal of individual features (Rows 1-3), we observe that the two novel input features (unit geometry vectors and heading estimations) significantly impact the localization performance. In contrast, the position estimation has a trivial impact. Rows 5-6 show how the localization performance degrades if we do not include the estimation residuals of the smartphone clock bias or the smoothed pseudorange errors in the loss function. Row 7 shows that PrNet can be scaled down to a smaller size with negligible performance loss, suggesting its potential for deployment on smartphones or edge devices. Row 8 indicates that increasing the size of PrNet cannot improve its performance; in fact, it diminishes performance due to overfitting, which arises from an excessive number of learnable weights in PrNet.

### _Cross-phone Evaluation_

To investigate the generalization ability of PrNet on various mass-market smartphones, we perform cross-phone evaluations using PrNet+RTS smoother and summarize our results in Fig. 8 (other PrNet-based methods share similar results). The training data in the four scenarios are collected by Pixel 4. During the inference process, besides the data collected by Pixel 4, we also use data from other smartphones for evaluation. Note that Google used various combinations of smartphones to collect data along different traces2.

Footnote 2: In Scenario I, the data collected by Mi 8 are abnormal, so we exclude them from our analysis.

The results suggest that PrNet can be robustly adopted across various smartphones in rural areas (Scenarios I and II), but its performance degrades when utilized on the Samsung S20 Ultra in urban settings (Scenarios III and IV). Fig. 8 indicates that, in the urban areas, the Samsung S20 Ultra obtains better localization performance using the standard RTS smoother than the Pixel 4 achieves even when enhanced by PrNet. Therefore, the performance gap between these two phones in urban environments leads to large data heterogeneity, which could be a potential factor behind the adaptation failures.

## X Conclusion

The proposed PrNet-based framework can regress the biased errors in pseudoranges collected by Android smartphones from six satellite-receiver-context-related inputs and eliminate them for better localization performance. We introduce two novel input features and meticulously calculate the target values of pseudorange bias to guide a satellite-wise MLP in learning a better representation of pseudorange bias than prior work. Our comprehensive evaluation demonstrates its superior performance compared with state-of-the-art approaches. Our future work includes: 1) more advanced deep learning models can be explored to learn satellite and temporal correlations; 2) the pseudorange-correction neural network can be trained using location ground truth in an end-to-end way; 3) more open datasets will be collected using other modalities of sensors as ground truth; 4) data augmentation strategies can be leveraged to enhance the generalization ability across heterogeneous data.

Fig. 8: Cross-phone Evaluation.
2309.05778
Energy matching in reduced passive and port-Hamiltonian systems
It is well known that any port-Hamiltonian (pH) system is passive, and conversely, any minimal and stable passive system has a pH representation. Nevertheless, this equivalence is only concerned with the input-output mapping but not with the Hamiltonian itself. Thus, we propose to view a pH system either as an enlarged dynamical system with the Hamiltonian as additional output or as two dynamical systems with the input-output and the Hamiltonian dynamic. Our first main result is a structure-preserving Kalman-like decomposition of the enlarged pH system that separates the controllable and zero-state observable parts. Moreover, for further approximations in the context of structure-preserving model-order reduction (MOR), we propose to search for a Hamiltonian in the reduced pH system that minimizes the $\mathcal{H}_2$-distance to the full-order Hamiltonian without altering the input-output dynamic, thus discussing a particular aspect of the corresponding multi-objective minimization problem corresponding to $\mathcal{H}_2$-optimal MOR for pH systems. We show that this optimization problem is uniquely solvable, can be recast as a standard semidefinite program, and present two numerical approaches for solving it. The results are illustrated with three academic examples.
Tobias Holicki, Jonas Nicodemus, Paul Schwerdtner, Benjamin Unger
2023-09-11T19:14:11Z
http://arxiv.org/abs/2309.05778v1
# Energy matching in reduced passive and port-Hamiltonian systems

###### Abstract.

It is well known that any port-Hamiltonian (pH) system is passive, and conversely, any minimal and stable passive system has a pH representation. Nevertheless, this equivalence is only concerned with the input-output mapping but not with the Hamiltonian itself. Thus, we propose to view a pH system either as an enlarged dynamical system with the Hamiltonian as additional output or as two dynamical systems with the input-output and the Hamiltonian dynamic. Our first main result is a structure-preserving Kalman-like decomposition of the enlarged pH system that separates the controllable and zero-state observable parts. Moreover, for further approximations in the context of structure-preserving model-order reduction (MOR), we propose to search for a Hamiltonian in the reduced pH system that minimizes the \(\mathcal{H}_{2}\)-distance to the full-order Hamiltonian without altering the input-output dynamic, thus discussing a particular aspect of the corresponding multi-objective minimization problem corresponding to \(\mathcal{H}_{2}\)-optimal MOR for pH systems. We show that this optimization problem is uniquely solvable, can be recast as a standard semidefinite program, and present two numerical approaches for solving it. The results are illustrated with three academic examples.

Keywords: Port-Hamiltonian systems, structure-preserving model-order reduction, energy matching, quadratic output system, \(\mathcal{H}_{2}\)-optimal, semidefinite program

AMS subject classification: 37J06, 37M99, 65P10, 93A30, 93C05, 90C22

## 1. Introduction

The port-Hamiltonian (pH) modelling paradigm offers an intuitive energy-based formulation of dynamical systems across a wide variety of physical domains such as electrical systems [21, 26, 27], fluid-flow problems [19], or mechanical multi-body systems [7, Ex. 12]. By design, pH systems are automatically stable and passive and can be coupled across different scales and physical domains, which makes them valuable building blocks for large network models [30]. Since first-principle full-order models (FOMs) of complex systems or large system networks often have a high state-space dimension, model order reduction (MOR) is necessary in many cases to enable efficient numerical simulations or even real-time model-based control by computing a reduced-order model (ROM) that is used instead of the FOM for simulations and control. In this article, we offer a new perspective on the MOR problem for linear time-invariant (LTI) pH systems. In particular, we argue that pH systems should not be treated merely as a special case of standard LTI (state-space) systems during MOR. Instead, we propose to view pH systems as two dynamical systems (respectively, one extended dynamical system) consisting of the classical input-output mapping, which is typically approximated during MOR, and, additionally, a dynamical system representing the evolution of the _Hamiltonian_, which represents the energy that is stored in a pH system. Besides the fact that the approximation of the Hamiltonian can be of interest in energy-related applications, it is particularly important for energy-aware control synthesis; see e.g. [15, 39]. Consequently, we study the approximation quality of different structure-preserving MOR algorithms in two different error measures that reflect both the input-output and the Hamiltonian dynamic.
This goes beyond _structure-preserving_ MOR for pH systems, where a ROM with pH structure is computed rather than a general LTI ROM. For our investigation, we first provide an analysis of the extended pH system and derive a structure-preserving Kalman-like decomposition in Section 3. Finally, we provide a new post-processing algorithm in Section 4 -- to be performed after any structure-preserving MOR method -- that minimizes the approximation error of the Hamiltonian dynamic without changing the system's input-output dynamic. We consider LTI pH systems defined as follows.

**Definition 1.1** (Port-Hamiltonian system [46]).: _A linear time-invariant dynamical system of the form_ \[\Sigma_{\mathsf{pH}}\quad\left\{\begin{array}{l}\dot{x}(t)=(J-R)Qx(t)+(G-P)u(t),\\ y(t)=(G+P)^{\mathsf{T}}Qx(t)+(S-N)u(t),\end{array}\right.\] (1.1a) _with matrices \[J,R,Q\in\mathbb{R}^{n\times n}\], \[G,P\in\mathbb{R}^{n\times m}\], \[S,N\in\mathbb{R}^{m\times m}\], together with a Hamiltonian function \[\mathcal{H}\colon\mathbb{R}^{n}\to\mathbb{R},\qquad x\mapsto\tfrac{1}{2}x^{\mathsf{T}}Qx, \tag{1.1b}\] _is called a port-Hamiltonian system, if_ _(i) the structure matrix \(\Gamma:=\left[\begin{smallmatrix}J&G\\ -G^{\mathsf{T}}&N\end{smallmatrix}\right]\) is skew symmetric,_ _(ii) the dissipation matrix \(W:=\left[\begin{smallmatrix}R&P\\ P^{\mathsf{T}}&S\end{smallmatrix}\right]\) is symmetric positive semi-definite, and_ _(iii) the Hessian of the Hamiltonian \(Q\) is symmetric positive semi-definite._ _The variables \(x\), \(u\), and \(y\) are referred to as the state, input, and output, respectively._

For such systems, structure-preserving MOR computes pH ROMs \[\tilde{\Sigma}_{\mathsf{pH}}\quad\left\{\begin{array}{l}\dot{\tilde{x}}(t)=(\tilde{J}-\tilde{R})\tilde{Q}\tilde{x}(t)+(\tilde{G}-\tilde{P})u(t),\\ \tilde{y}(t)=(\tilde{G}+\tilde{P})^{\mathsf{T}}\tilde{Q}\tilde{x}(t)+(\tilde{S}-\tilde{N})u(t),\end{array}\right. \tag{1.2}\] with matrices \(\tilde{J},\tilde{R},\tilde{Q}\in\mathbb{R}^{r\times r}\), \(\tilde{G},\tilde{P}\in\mathbb{R}^{r\times m}\), \(\tilde{S},\tilde{N}\in\mathbb{R}^{m\times m}\), that satisfy the same constraints as in Definition 1.1 but with \(r\ll n\). Typically, MOR (and also structure-preserving MOR) aims to compute ROMs such that \(y-\tilde{y}\) is small for all admissible inputs \(u\) in an appropriate norm (which results in a good approximation of the input-output mapping). The approximation of the Hamiltonian, i.e., of \(\mathcal{H}-\tilde{\mathcal{H}}\) in some appropriate norm, is typically not considered; here \(\tilde{\mathcal{H}}\) denotes the Hamiltonian of the reduced system (1.2), given by \[\tilde{\mathcal{H}}\colon\mathbb{R}^{r}\to\mathbb{R},\qquad\tilde{x}\mapsto\frac{1}{2}\tilde{x}^{\mathsf{T}}\tilde{Q}\tilde{x}.\] Exploiting the _Kalman-Yakubovich-Popov_ (KYP) inequality, see the forthcoming Section 2.2, we propose a novel post-processing step called _energy matching_ for the ROM such that the difference \(\mathcal{H}-\tilde{\mathcal{H}}\) is minimized in an appropriate norm.

### Literature review and state-of-the-art

MOR for standard LTI systems of the form \[\Sigma\quad\left\{\begin{array}{l}\dot{x}(t)=Ax(t)+Bu(t),\\ y(t)=Cx(t)+Du(t),\end{array}\right. \tag{1.3}\] where \(A\in\mathbb{R}^{n\times n},B\in\mathbb{R}^{n\times m},C\in\mathbb{R}^{p\times n}\), and \(D\in\mathbb{R}^{p\times m}\), is well understood.
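As a small numerical aside (a minimal sketch with our own naming, not code from the paper), the three conditions of Definition 1.1 are cheap to verify for given matrices:

```python
import numpy as np

def is_ph_representation(J, R, Q, G, P, S, N, tol=1e-10):
    """Check conditions (i)-(iii) of Definition 1.1 numerically."""
    Gamma = np.block([[J, G], [-G.T, N]])        # structure matrix
    W = np.block([[R, P], [P.T, S]])             # dissipation matrix
    skew_ok = np.allclose(Gamma, -Gamma.T, atol=tol)
    # positive semi-definiteness via the smallest eigenvalue of the symmetric part
    W_psd = np.allclose(W, W.T, atol=tol) and np.linalg.eigvalsh((W + W.T) / 2).min() >= -tol
    Q_psd = np.allclose(Q, Q.T, atol=tol) and np.linalg.eigvalsh((Q + Q.T) / 2).min() >= -tol
    return skew_ok and W_psd and Q_psd
```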
There exist several well-established algorithms that compute ROMs of the form \[\tilde{\Sigma}\quad\left\{\begin{array}{l}\dot{\tilde{x}}(t)=\tilde{A}\tilde{x}(t)+\tilde{B}u(t),\\ \tilde{y}(t)=\tilde{C}\tilde{x}(t)+\tilde{D}u(t),\end{array}\right. \tag{1.4}\] with matrices \(\tilde{A}\in\mathbb{R}^{r\times r}\), \(\tilde{B}\in\mathbb{R}^{r\times m}\), \(\tilde{C}\in\mathbb{R}^{p\times r}\), and \(\tilde{D}\in\mathbb{R}^{p\times m}\) that approximate the FOM with high fidelity. One standard input-output error measure is the \(\mathcal{H}_{2}\) error \[\left\|H-\tilde{H}\right\|_{\mathcal{H}_{2}}:=\sqrt{\frac{1}{2\pi}\int_{-\infty}^{\infty}\left\|H(\mathrm{i}\omega)-\tilde{H}(\mathrm{i}\omega)\right\|_{\mathrm{F}}^{2}\,\mathrm{d}\omega}, \tag{1.5}\] that measures the deviation of the ROM transfer function \(\tilde{H}\) from the FOM transfer function \(H\). These transfer functions are defined as \[H(s):=C(sI_{n}-A)^{-1}B+D\qquad\text{and}\qquad\tilde{H}(s):=\tilde{C}(sI_{r}-\tilde{A})^{-1}\tilde{B}+\tilde{D},\] and we have that \(\left\|y-\tilde{y}\right\|_{L_{\infty}}\leq\left\|H-\tilde{H}\right\|_{\mathcal{H}_{2}}\left\|u\right\|_{L_{2}}\), which ensures that a small \(\mathcal{H}_{2}\)-error leads to a good approximation of the input-output map in the \(L_{\infty}\)-norm. A comprehensive review of the classical MOR methods is beyond the scope of this paper, and we refer to [4, 5, 9] for an overview of this topic. A popular MOR method that often leads to locally \(\mathcal{H}_{2}\)-optimal ROMs is the _iterative rational Krylov algorithm_ (IRKA), which is introduced in [8]. IRKA computes ROMs based on a subspace projection onto the \(r\)-dimensional subspaces \(\operatorname{im}(V)\) and \(\operatorname{im}(W)\) of \(\mathbb{R}^{n}\), encoded via the matrices \(V,W\in\mathbb{R}^{n\times r}\), i.e., the ROM matrices are defined as \(\tilde{A}=W^{\mathsf{T}}AV\), \(\tilde{B}=W^{\mathsf{T}}B\), \(\tilde{C}=CV\), and \(\tilde{D}=D\). The projection spaces \(\operatorname{im}(V)\) and \(\operatorname{im}(W)\) are updated according to a fixed-point iteration until local \(\mathcal{H}_{2}\)-optimality is attained. While popular MOR approaches such as IRKA often lead to highly accurate ROMs, they do not generically preserve the pH structure. To this end, pH structure-preserving variants of classical approaches are introduced, e.g., in [24, 25, 37]. Here, the projection is adapted such that the symmetries and definiteness properties of a pH system are preserved during the projection. However, this often leads to a drastically reduced accuracy, as emphasized in [13, 43]. Another approach towards structure-preserving MOR is based on preserving the passivity of a pH system during MOR and then recovering a pH representation from a general ROM as in (1.4), which is possible for minimal passive ROMs (see [17, 40] for a thorough investigation). Passivity-preserving MOR is pursued in [41, 18], in which _positive-real balanced truncation_ (PRBT) is introduced and extended to descriptor systems, respectively. In [13], a passivity-preserving MOR method is designed based on a MOR of the spectral factors of a given passive FOM. Other pH MOR algorithms include the methods in [1, 2, 11, 12, 14, 20, 35] and optimization-based methods in [43, 42, 33]. MOR methods for linear systems are evaluated based on their approximation of the input-output mapping, which can be assessed using well-established error measures (such as the \(\mathcal{H}_{2}\) norm) based on transfer function distances.
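For reference, the squared \(\mathcal{H}_{2}\) error (1.5) can be evaluated exactly via a Lyapunov equation for the error system. The following minimal SciPy sketch is our own helper (not from the paper) and assumes asymptotic stability of both systems and \(D=\tilde{D}\):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_error(A, B, C, Ar, Br, Cr):
    """H2 distance between two stable LTI systems with equal feedthrough terms.

    Builds the error system [A, 0; 0, Ar] with input [B; Br] and output [C, -Cr],
    and evaluates ||H - Hr||_H2^2 = tr(Ce Pe Ce^T) via its controllability Gramian.
    """
    n, r = A.shape[0], Ar.shape[0]
    Ae = np.block([[A, np.zeros((n, r))], [np.zeros((r, n)), Ar]])
    Be = np.vstack([B, Br])
    Ce = np.hstack([C, -Cr])
    Pe = solve_continuous_lyapunov(Ae, -Be @ Be.T)   # Ae Pe + Pe Ae^T = -Be Be^T
    return float(np.sqrt(np.trace(Ce @ Pe @ Ce.T)))
```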
The evaluation of the approximation quality of the Hamiltonian requires a more advanced error analysis that has only recently been established. When we add the Hamiltonian as an additional output, an LTI pH system becomes a _linear time-invariant system with quadratic output_ (LTIQO). MOR for such systems is considered, e.g., in [44, 45], in which single-output LTIQO systems are simplified to standard LTI systems with multiple outputs such that either balancing- or Krylov-based MOR methods can be applied. In [38], LTIQO systems are rewritten as quadratic-bilinear (QB) systems that are subsequently reduced via balanced truncation. Our approach to approximating the Hamiltonian is based on developments from [10], in which the \(\mathcal{H}_{2}\) error measure is extended to LTIQO systems. Moreover, in [10], energy functionals and Gramians are introduced for LTIQO systems such that balanced truncation can be applied directly. Finally, in [22], an iterative structure-preserving MOR algorithm is presented, based on solving two Sylvester equations, and in [23] the _Adaptive Antoulas-Anderson_ (AAA) algorithm is extended to LTIQO systems to develop a data-driven modelling framework.

### Organization of the manuscript

Our manuscript is organized as follows. First, we recall the basics of the pH framework (cf. Section 2.2) and review LTIQO systems (cf. Section 2.3) in Section 2. The view of pH systems as dual dynamical systems and in particular the Hamiltonian dynamic are presented and analyzed in Section 3. We then present our proposed method for optimizing the energy of a ROM to match the energy of the FOM in Section 4. Finally, the efficiency of the method is demonstrated in three numerical examples in Section 5.

### Notation and abbreviations

We use the symbols \(\mathbb{N}\), \(\mathbb{R}\), \(\mathbb{R}^{n}\), \(\mathbb{R}^{n\times m}\), \(\operatorname{GL}_{n}\), \(\mathcal{S}_{\succ}^{n}\), \(\mathcal{S}_{\succcurlyeq}^{n}\), and \(\operatorname{O}_{n}\) to denote the positive integers, the real numbers, the set of column vectors with \(n\in\mathbb{N}\) real entries, the set of \(n\times m\) real matrices, the set of nonsingular matrices, the set of symmetric positive definite matrices, the set of symmetric positive semi-definite matrices, and the orthogonal matrices, respectively. For a matrix \(A\in\mathbb{R}^{n\times m}\), we use the symbols \(A^{\mathsf{T}}\), \(\operatorname{sym}(A)=\frac{1}{2}(A+A^{\mathsf{T}})\), and \(\operatorname{skew}(A)=\frac{1}{2}(A-A^{\mathsf{T}})\) for the transpose, the symmetric part, and the skew-symmetric part, respectively.

## 2. Preliminaries

We first recall a few basic notions from LTI systems and pH systems that we will later use for our developments in Section 3. Moreover, we briefly explain error measures for LTIQO systems that we use for our energy matching algorithm in Section 4.

### Controllability and Observability

An LTI system such as (1.3) is called controllable or observable if the corresponding controllability or observability matrix has full rank \(n\), respectively, i.e., \[\operatorname{rank}\begin{bmatrix}B&AB&\cdots&A^{n-1}B\end{bmatrix}=n\qquad\text{and}\qquad\operatorname{rank}\begin{bmatrix}C^{\mathsf{T}}&A^{\mathsf{T}}C^{\mathsf{T}}&\cdots&\bigl{(}A^{\mathsf{T}}\bigr{)}^{n-1}C^{\mathsf{T}}\end{bmatrix}=n.\] The system (1.3) is called minimal if it is controllable and observable.
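These rank conditions translate directly into a few lines of NumPy; the following minimal sketch (our own helpers, using controllability of \((A^{\mathsf{T}},C^{\mathsf{T}})\) for the observability test by duality) is meant for small examples — for larger systems, the Gramian-based characterization discussed next is numerically preferable:

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks, Ak_B = [], B
    for _ in range(n):
        blocks.append(Ak_B)
        Ak_B = A @ Ak_B
    return np.hstack(blocks)

def is_minimal(A, B, C):
    """Minimality check: controllable and observable."""
    n = A.shape[0]
    ctrb = np.linalg.matrix_rank(controllability_matrix(A, B)) == n
    obsv = np.linalg.matrix_rank(controllability_matrix(A.T, C.T)) == n
    return ctrb and obsv
```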
Controllability and observability are closely related to the (infinite) Gramians \[\mathcal{P} :=\int_{0}^{\infty}\exp(A\tau)BB^{\mathsf{T}}\exp(A^{\mathsf{T}}\tau)\,\mathrm{d}\tau, \tag{2.1a}\] \[\mathcal{O} :=\int_{0}^{\infty}\exp(A^{\mathsf{T}}\tau)C^{\mathsf{T}}C\exp(A\tau)\,\mathrm{d}\tau, \tag{2.1b}\] which exist if the dynamical system (1.3) is asymptotically stable, i.e., if all eigenvalues of \(A\) are in the open left-half plane. In this case, the Gramians can be computed as solutions of the Lyapunov equations \[A\mathcal{P}+\mathcal{P}A^{\mathsf{T}}+BB^{\mathsf{T}} =0, \tag{2.2a}\] \[A^{\mathsf{T}}\mathcal{O}+\mathcal{O}A+C^{\mathsf{T}}C =0, \tag{2.2b}\] respectively, and we have that \(\Sigma\) is controllable if and only if \(\operatorname{rank}(\mathcal{P})=n\), and observable if and only if \(\operatorname{rank}(\mathcal{O})=n\).

### Port-Hamiltonian systems and the Kalman-Yakubovich-Popov inequality

We say that a general LTI system as in (1.3) has a pH representation whenever we can factorize the system matrices in the form of (1.1a) with the properties given in Definition 1.1. While the specific matrices in a pH system are typically obtained during the modelling process, the factorization of the system matrices is not unique, in general. Indeed, it is easily seen that a pH system is passive and, vice versa, any stable and minimal passive system has a pH representation; see for instance [6]. If \(\Sigma\) in (1.3) is passive, then a pH representation can be obtained via a symmetric positive-definite solution \(X\in\mathcal{S}_{\succ}^{n}\) of the _Kalman-Yakubovich-Popov_ (KYP) inequality \[\mathcal{W}_{\Sigma}(X)\in\mathcal{S}_{\succcurlyeq}^{n+m} \tag{2.3}\] with \[\mathcal{W}_{\Sigma}\colon\mathbb{R}^{n\times n}\to\mathbb{R}^{(n+m)\times(n+m)},\qquad X\mapsto\begin{bmatrix}-A^{\mathsf{T}}X-XA&C^{\mathsf{T}}-XB\\ C-B^{\mathsf{T}}X&D+D^{\mathsf{T}}\end{bmatrix}.\] In more detail, defining the set \(\mathbb{X}_{\Sigma}:=\{X\in\mathcal{S}_{\succ}^{n}\mid\mathcal{W}_{\Sigma}(X)\in\mathcal{S}_{\succcurlyeq}^{n+m}\}\), it is easy to verify that for a passive LTI system (1.3), any \(X\in\mathbb{X}_{\Sigma}\) yields a pH representation by setting \[Q:=X,\quad J:=\operatorname{skew}(AX^{-1}),\quad R:=-\operatorname{sym}(AX^{-1}),\] \[G:=\tfrac{1}{2}(X^{-1}C^{\mathsf{T}}+B),\quad P:=\tfrac{1}{2}(X^{-1}C^{\mathsf{T}}-B),\quad S:=\operatorname{sym}(D),\quad N:=-\operatorname{skew}(D).\] Note that we have \[(J-R)Q=\tfrac{1}{2}\left(AX^{-1}-X^{-1}A^{\mathsf{T}}+AX^{-1}+X^{-1}A^{\mathsf{T}}\right)X=A,\] and similarly for the other matrices. Hence, the pH representation does not affect the state-space description (1.3), but is merely a special decomposition of the system matrices. **Remark 2.1**.: _If \(Q\in\mathcal{S}_{\succ}^{n}\), then it is sometimes convenient to multiply the dynamical equation in (1.1a) from the left with \(Q\), such that, after renaming of the matrices, the pH system takes the form_ \[\begin{cases}\qquad Q\dot{x}(t)=(J-R)x(t)+(G-P)u(t),\\ \qquad y(t)=(G+P)^{\mathsf{T}}x(t)+(S-N)u(t).\end{cases} \tag{2.4}\] _In this setting, all matrices in (2.4) appear linearly, which can be exploited in model order reduction and system identification, see for instance [32, 43]. Let us emphasize that this formulation sometimes appears natural during the modeling process, which then also allows \(Q\) to be singular; cf. [7, 30]. Moreover, the generalized state-space description for our pH representation does not require the inverse of \(X\).
The pH representation is simply obtained by multiplying the state equation in (1.3) from the left with \(Q:=X\) and taking the skew-symmetric and symmetric parts of the right-hand side. In more detail, if \(X\in\mathcal{S}_{\succ}^{n}\) is a solution of the KYP inequality (2.3), then we can set_ \[\begin{bmatrix}J&G\\ -G^{\mathsf{T}}&N\end{bmatrix}:=\operatorname{skew}\left(\begin{bmatrix}XA&B\\ -C&-D\end{bmatrix}\right)\qquad\text{and}\qquad\begin{bmatrix}R&P\\ P^{\mathsf{T}}&S\end{bmatrix}:=-\operatorname{sym}\left(\begin{bmatrix}XA&B\\ -C&-D\end{bmatrix}\right).\] For our forthcoming analysis we gather several results from the literature about the KYP inequality (2.3). **Theorem 2.2**.: _Consider the dynamical system \(\Sigma\) in (1.3) and the associated KYP inequality (2.3)._ 1. _If the dynamical system is asymptotically stable, i.e., the eigenvalues of_ \(A\) _are in the open left half plane, then any solution_ \(X\in\mathbb{R}^{n\times n}\) _of (_2.3_) is symmetric positive semi-definite._ 2. _If the dynamical system is observable, then any solution_ \(X\in\mathcal{S}_{\succcurlyeq}^{n}\) _of (_2.3_) is positive definite._ 3. _Suppose the dynamical system is minimal and asymptotically stable. Then there exist matrices_ \(X_{\min},X_{\max}\in\mathbb{X}_{\Sigma}\) _such that any_ \(X\in\mathbb{X}_{\Sigma}\) _satisfies_ \[X_{\min}\preccurlyeq X\preccurlyeq X_{\max}.\] _In particular, the set_ \(\mathbb{X}_{\Sigma}\) _is bounded._ Proof.: Since the results are well-known, we simply refer to the respective literature. 1. Let \(X\in\mathbb{R}^{n\times n}\) be a solution of (2.3). Then there exists a matrix \(M\in\mathcal{S}_{\succcurlyeq}^{n}\) such that \[-A^{\mathsf{T}}X-XA=M.\] The result is thus an immediate consequence of [28, Cha. 12.3, Thm. 3]. 2. See [16, Prop. 1]. 3. See [49, Thm. 3]. If \(D+D^{\mathsf{T}}\) is nonsingular, solutions of the KYP inequality that minimize \(\operatorname{rank}(\mathcal{W}_{\Sigma}(\cdot))\) can be computed by solving an associated _algebraic Riccati equation_ (ARE) of the form \[A^{\mathsf{T}}X+XA+(-C^{\mathsf{T}}+XB)(D+D^{\mathsf{T}})^{-1}(-C+B^{\mathsf{T}}X)=0. \tag{2.5}\] The connection between solutions of this ARE and the KYP inequality is studied in great detail in [48]. Numerical solvers for the ARE are readily available1 and can be used to compute both _minimal_ and _maximal solutions_ to the ARE, which are also the minimal and maximal solutions of the KYP inequality from Theorem 2.2 (iii). These solutions have the property that for each solution \(X\) of the ARE, we have that \(X-X_{\min}\in\mathcal{S}_{\succcurlyeq}^{n}\) and \(X_{\max}-X\in\mathcal{S}_{\succcurlyeq}^{n}\). Moreover, each solution of the ARE can be constructed as \(X=X_{\max}\mathfrak{P}+X_{\min}(I-\mathfrak{P})\), where \(\mathfrak{P}\) and \(I-\mathfrak{P}\) are projections onto invariant subspaces of associated matrices; see [48] for further details.

### Linear systems with quadratic output

For our forthcoming analysis, we recall some results on _linear time-invariant systems with quadratic output_ (LTIQO) of the form \[\Sigma_{\mathsf{QO}}\quad\begin{cases}&\dot{x}(t)=Ax(t)+Bu(t),\\ &y(t)=x(t)^{\mathsf{T}}Mx(t),\end{cases} \tag{2.6}\] with \(M=M^{\mathsf{T}}\in\mathbb{R}^{n\times n}\). As before, the controllability Gramian of (2.6) is given by (2.1a) and can be computed as the solution of the Lyapunov equation (2.2a). To define the observability Gramian, we assume, as is common in the MOR setting, that \(x(0)=0\).
Then, following [10, 22], the output of (2.6) is given by \[y(t)=\int_{0}^{t}\int_{0}^{t}h(\tau,\sigma)\big{(}u(t-\tau)\otimes u(t-\sigma)\big{)}\,\mathrm{d}\tau\,\mathrm{d}\sigma\] with kernel \(h(\tau,\sigma):=\big{(}\mathrm{vec}\big{(}B^{\mathsf{T}}\exp(A^{\mathsf{T}}\sigma)M\exp(A\tau)B\big{)}\big{)}^{\mathsf{T}}\). Accordingly, the LTIQO observability Gramian can be defined as \[\mathcal{O}_{\mathsf{QO}}=\int_{0}^{\infty}\int_{0}^{\infty}\exp(A^{\mathsf{T}}\sigma)M\exp(A\tau)B\left(\exp(A^{\mathsf{T}}\sigma)M\exp(A\tau)B\right)^{\mathsf{T}}\,\mathrm{d}\sigma\,\mathrm{d}\tau, \tag{2.7}\] and can be computed as a solution of the Lyapunov equation \[A^{\mathsf{T}}\mathcal{O}_{\mathsf{QO}}+\mathcal{O}_{\mathsf{QO}}A+M\mathcal{P}M=0, \tag{2.8}\] where \(\mathcal{P}\) is the controllability Gramian given by (2.1a). Similar to the case of a linear output, the system (2.6) is controllable or observable if and only if the corresponding Gramians are nonsingular. With these preparations, the \(\mathcal{H}_{2}\)-norm for the LTIQO system (2.6) can be defined as \[\|\Sigma_{\mathsf{QO}}\|_{\mathcal{H}_{2}}:=\bigg{(}\int_{0}^{\infty}\int_{0}^{\infty}\|h(\tau,\sigma)\|_{2}^{2}\,\mathrm{d}\tau\,\mathrm{d}\sigma\bigg{)}^{\frac{1}{2}}=\sqrt{\mathrm{tr}\,B^{\mathsf{T}}\mathcal{O}_{\mathsf{QO}}B}\] with output bound \(\left\|y\right\|_{L_{\infty}}\leq\|\Sigma_{\mathsf{QO}}\|_{\mathcal{H}_{2}}\|u\otimes u\|_{L_{2}\otimes L_{2}}\); see [10, Sec. III.2]. Hereby, the \(L_{2}\otimes L_{2}\)-norm is defined as \(\left\|u\otimes u\right\|_{L_{2}\otimes L_{2}}:=(\int_{0}^{\infty}\int_{0}^{\infty}\|u(\tau)\otimes u(\sigma)\|_{2}^{2}\,\mathrm{d}\tau\,\mathrm{d}\sigma)^{1/2}\). If we replace (2.6) with a surrogate model of order \(r\ll n\) of the form \[\tilde{\Sigma}_{\mathsf{QO}}\quad\begin{cases}&\dot{\tilde{x}}(t)=\tilde{A}\tilde{x}(t)+\tilde{B}u(t),\\ &\tilde{y}(t)=\tilde{x}(t)^{\mathsf{T}}\tilde{M}\tilde{x}(t)\end{cases} \tag{2.9}\] with matrices \(\tilde{A},\tilde{M}\in\mathbb{R}^{r\times r}\) and \(\tilde{B}\in\mathbb{R}^{r\times m}\), then the output error \(\left\|y-\tilde{y}\right\|_{L_{\infty}}\) can be bounded by the difference of the systems in the \(\mathcal{H}_{2}\) norm multiplied with the norm of \(u\otimes u\) in the \(L_{2}\otimes L_{2}\) norm. In more detail, we obtain [10, Thm. 3.4] \[\left\|\Sigma_{\mathsf{QO}}-\tilde{\Sigma}_{\mathsf{QO}}\right\|_{\mathcal{H}_{2}}^{2}=\mathrm{tr}(B^{\mathsf{T}}\mathcal{O}_{\mathsf{QO}}B+\tilde{B}^{\mathsf{T}}\tilde{\mathcal{O}}_{\mathsf{QO}}\tilde{B}-2B^{\mathsf{T}}Z\tilde{B}), \tag{2.10}\] where \(Z\in\mathbb{R}^{n\times r}\) and \(Y\in\mathbb{R}^{n\times r}\) are the unique solutions of the Sylvester equations \[A^{\mathsf{T}}Z+Z\tilde{A}+MY\tilde{M} =0, \tag{2.11a}\] \[AY+Y\tilde{A}^{\mathsf{T}}+B\tilde{B}^{\mathsf{T}} =0. \tag{2.11b}\]

## 3. Hamiltonian dynamic and pH-minimality

A MOR technique that does not involve any approximation error for the input-output behavior of a given LTI system is the computation of a minimal realization, for example based on the Kalman decomposition. However, in general, this technique does not preserve the structure of the given system or other quantities of relevance. Since we are dealing with pH systems, we are particularly interested in the evaluation of the Hamiltonian along system trajectories in addition to the system's input-output behavior. Thus, we next develop a Kalman-like decomposition that permits us to construct a reduced-order model that preserves the input-output dynamic and the Hamiltonian dynamic as defined next.
**Definition 3.1** (Hamiltonian dynamic).: _Consider the \(\mathsf{pH}\) system (1.1). Then we call the dynamical system_ \[\Sigma_{\mathcal{H}}\quad\begin{cases}\qquad\dot{x}(t)=(J-R)Qx(t)+(G-P)u(t),\\ \quad y_{\mathcal{H}}(t)=\frac{1}{2}x(t)^{\mathsf{T}}Qx(t),\end{cases} \tag{3.1}\] _the Hamiltonian dynamic associated with the \(\mathsf{pH}\) system (1.1)._ If we thus want to approximate a \(\mathsf{pH}\) system or find a minimal realization of a \(\mathsf{pH}\) system, then we have to do this simultaneously for the input-output dynamic (1.1a) and the Hamiltonian dynamic (3.1). **Example 3.2**.: _Let \(n=2\), \(Q=I_{2}\), \(S=N=0\), \(P=0\), and_ \[J=\begin{bmatrix}0&-1\\ 1&0\end{bmatrix},\qquad R=\begin{bmatrix}1&-1\\ -1&2\end{bmatrix},\qquad A=(J-R)Q=\begin{bmatrix}-1&0\\ 2&-2\end{bmatrix},\qquad G=\begin{bmatrix}1\\ 0\end{bmatrix}. \tag{3.2}\] _It is easy to see that with this choice, the input-output dynamic \(\Sigma_{\mathsf{pH}}\) is controllable but not observable. A minimal realization is given by the dynamical system (1.3) with_ \[r=1, \tilde{A}=-1, \tilde{B}=1, \tilde{C}=1. \tag{3.3}\] _The minimal realization is passive with unique solution \(\tilde{Q}^{\star}=1\) of the KYP inequality (2.3). Nevertheless, straightforward computations2 show that the Hamiltonian dynamic \(\Sigma_{\mathcal{H}}\) of the original system and the minimal realization do not coincide. We conclude that while (3.3) constitutes a minimal realization for the input-output dynamic of (3.2), the reduced system given by (3.3) introduces an approximation error for the Hamiltonian dynamic._ Footnote 2: One can take for instance a nonzero, constant control input and explicitly compute the Hamiltonian dynamic \(\Sigma_{\mathcal{H}}\). To simplify our further discussion, we combine the input-output dynamic (1.1a) and the Hamiltonian dynamic (3.1) into the extended dynamical system \[\Sigma_{\mathsf{pH}+\mathcal{H}}\quad\begin{cases}\qquad\dot{x}(t)=(J-R)Qx(t)+(G-P)u(t),\\ \qquad y(t)=(G+P)^{\mathsf{T}}Qx(t)+(S-N)u(t),\\ \qquad y_{\mathcal{H}}(t)=\frac{1}{2}x(t)^{\mathsf{T}}Qx(t)\end{cases} \tag{3.4}\] with extended output variable \(y_{\mathsf{pH}+\mathcal{H}}:=\left[y^{\mathsf{T}},y_{\mathcal{H}}\right]^{\mathsf{T}}\in\mathbb{R}^{m+1}\). Let us emphasize that in this formulation the dissipation inequality can be represented as \[\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}0&1\end{bmatrix}y_{\mathsf{pH}+\mathcal{H}}(t)\leq y_{\mathsf{pH}+\mathcal{H}}(t)^{\mathsf{T}}\begin{bmatrix}I_{m}\\ 0\end{bmatrix}u(t),\] and hence no longer requires the state, but is purely an input-output property. Towards a structure-preserving Kalman-like decomposition, let \(V\in\mathrm{O}_{n}\) (i.e., \(V^{\mathsf{T}}V=I_{n}\)) be such that \[V^{\mathsf{T}}QV=\begin{bmatrix}Q_{\mathsf{o}}&0\\ 0&0\end{bmatrix}\] with \(Q_{\mathsf{o}}\in\mathcal{S}^{\mathrm{rank}(Q)}_{\succ}\). Setting \[V^{\mathsf{T}}x=\begin{bmatrix}x_{\mathsf{o}}\\ x_{\mathsf{g}}\end{bmatrix}, V^{\mathsf{T}}JV=\begin{bmatrix}J_{\mathsf{o}}&-J_{\mathsf{g}}^{\mathsf{T}}\\ J_{\mathsf{g}}&J_{\star}\end{bmatrix}, V^{\mathsf{T}}RV=\begin{bmatrix}R_{\mathsf{o}}&R_{\mathsf{g}}^{\mathsf{T}}\\ R_{\mathsf{g}}&R_{\star}\end{bmatrix},\] \[V^{\mathsf{T}}G=\begin{bmatrix}G_{\mathsf{o}}\\ G_{\mathsf{g}}\end{bmatrix}, V^{\mathsf{T}}P=\begin{bmatrix}P_{\mathsf{o}}\\ P_{\mathsf{g}}\end{bmatrix}\] we immediately observe that the dynamic corresponding to the state \(x_{\mathsf{g}}\) is not observable and hence can be removed without altering the input-output mapping.
In particular, the pH system \[\begin{cases}&\dot{x}_{\mathfrak{o}}(t)=\left(J_{\mathfrak{o}}-R_{\mathfrak{o}}\right)Q_{\mathfrak{o}}x_{\mathfrak{o}}(t)+(G_{\mathfrak{o}}-P_{\mathfrak{o}})u(t)\\ &y(t)=\left(G_{\mathfrak{o}}+P_{\mathfrak{o}}\right)^{\mathsf{T}}\!Q_{\mathfrak{o}}x_{\mathfrak{o}}(t)+(S-N)u(t)\\ &y_{\mathcal{H}}(t)=\frac{1}{2}x_{\mathfrak{o}}(t)^{\mathsf{T}}\!Q_{\mathfrak{o}}x_{\mathfrak{o}}(t)\end{cases} \tag{3.5}\] has the same input-output mapping as (3.4), i.e., from an approximation perspective, we can always assume \(Q\in\mathcal{S}_{\succ}^{n}\) since we can simply remove the singular components, which is considered favorable from a modeling perspective [30, Sec. 4.3]. **Theorem 3.3**.: _Consider the pH system (3.4) with \(Q\in\mathcal{S}_{\succ}^{n}\). Then, there exists a matrix \(V\in\operatorname{GL}_{n}\) such that a state-space transformation with \(V\) transforms the pH system (3.4) into_ \[\begin{cases}&\begin{bmatrix}\dot{x}_{\mathsf{c}}(t)\\ \dot{x}_{\bar{\mathsf{c}}}(t)\end{bmatrix}=\begin{bmatrix}(J_{\mathsf{c}}-R_{\mathsf{c}})&0\\ 0&(J_{\bar{\mathsf{c}}}-R_{\bar{\mathsf{c}}})\end{bmatrix}\begin{bmatrix}x_{\mathsf{c}}(t)\\ x_{\bar{\mathsf{c}}}(t)\end{bmatrix}+\begin{bmatrix}G_{\mathsf{c}}-P_{\mathsf{c}}\\ 0\end{bmatrix}u(t),\\ &y(t)=\left(G_{\mathsf{c}}+P_{\mathsf{c}}\right)^{\mathsf{T}}\!x_{\mathsf{c}}(t)+(S-N)u(t),\\ &y_{\mathcal{H}}(t)=\frac{1}{2}x_{\mathsf{c}}(t)^{\mathsf{T}}x_{\mathsf{c}}(t)+\frac{1}{2}x_{\bar{\mathsf{c}}}(t)^{\mathsf{T}}x_{\bar{\mathsf{c}}}(t)\end{cases} \tag{3.6}\] _with \(V^{\mathsf{T}}QV=I_{n}\) and_ \[V\begin{bmatrix}x_{\mathsf{c}}\\ x_{\bar{\mathsf{c}}}\end{bmatrix}=x,\qquad\begin{bmatrix}J_{\mathsf{c}}-R_{\mathsf{c}}&0\\ 0&J_{\bar{\mathsf{c}}}-R_{\bar{\mathsf{c}}}\end{bmatrix}=V^{-1}(J-R)V^{-\mathsf{T}},\qquad\begin{bmatrix}G_{\mathsf{c}}-P_{\mathsf{c}}\\ 0\end{bmatrix}=V^{-1}(G-P)\] _such that the subsystem corresponding to \(x_{\mathsf{c}}\) is in pH-form and controllable._ Proof.: Let \(Q=LL^{\mathsf{T}}\) denote the Cholesky decomposition of the Hessian of the Hamiltonian, and define \[\tilde{J}:=L^{\mathsf{T}}JL,\qquad\qquad\tilde{R}:=L^{\mathsf{T}}RL,\qquad\qquad\tilde{G}:=L^{\mathsf{T}}G,\qquad\qquad\tilde{P}:=L^{\mathsf{T}}P.\] Using a classical Kalman decomposition, let \(\tilde{V}\in\operatorname{O}_{n}\) be such that \[\left(\tilde{V}^{\mathsf{T}}(\tilde{J}-\tilde{R})\tilde{V},\tilde{V}^{\mathsf{T}}(\tilde{G}-\tilde{P})\right)=\left(\begin{bmatrix}J_{\mathsf{c}}-R_{\mathsf{c}}&J_{\star}-R_{\star}\\ 0&J_{\bar{\mathsf{c}}}-R_{\bar{\mathsf{c}}}\end{bmatrix},\begin{bmatrix}G_{\mathsf{c}}-P_{\mathsf{c}}\\ 0\end{bmatrix}\right)\] with \((J_{\mathsf{c}}-R_{\mathsf{c}},G_{\mathsf{c}}-P_{\mathsf{c}})\) controllable. Note that the transformation is a congruence transformation, such that we conclude \(J_{\star}=0=R_{\star}\). The result follows with \(V:=L^{-\mathsf{T}}\tilde{V}\). **Corollary 3.4**.: _Consider the system (3.6) with \(J_{\mathsf{c}}\in\mathbb{R}^{n_{\mathsf{c}}\times n_{\mathsf{c}}}\) and \(J_{\bar{\mathsf{c}}}\in\mathbb{R}^{n_{\bar{\mathsf{c}}}\times n_{\bar{\mathsf{c}}}}\). Then (3.6) is zero-state observable. It is controllable if and only if \(n_{\bar{\mathsf{c}}}=0\).
In this case, asymptotic stability implies that the controllability and observability Gramians defined in (2.1a) and (2.7) are positive definite._ Proof.: For zero-state observability, observe that \(u\equiv 0\) implies \(x_{\mathsf{c}}(t)=\exp((J_{\mathsf{c}}-R_{\mathsf{c}})t)x_{\mathsf{c},0}\) and \(x_{\bar{\mathsf{c}}}(t)=\exp((J_{\bar{\mathsf{c}}}-R_{\bar{\mathsf{c}}})t)x_{\bar{\mathsf{c}},0}\). In particular, using \[y_{\mathcal{H}}(t)=\frac{1}{2}\|x_{\mathsf{c}}(t)\|_{2}^{2}+\frac{1}{2}\|x_{\bar{\mathsf{c}}}(t)\|_{2}^{2}\] yields \(y_{\mathcal{H}}\equiv 0\) if and only if \(x_{\mathsf{c},0}=0\) and \(x_{\bar{\mathsf{c}},0}=0\). Controllability is a consequence of Theorem 3.3, which also implies positive definiteness of the controllability Gramian. The positive definiteness of the observability Gramian follows immediately from [28, Cha. 12.3, Thm. 3]. Summarizing the previous discussion, we obtain the main result of this section, namely a Kalman-like decomposition for pH systems. **Theorem 3.5**.: _Consider the pH system (3.4). Then, there exists a matrix \(V\in\operatorname{GL}_{n}\) such that a state-space transformation with \(V\) transforms the pH system (3.4) into_ \[\left\{\begin{aligned} \begin{bmatrix}\dot{x}_{\mathsf{co}}(t)\\ \dot{x}_{\bar{\mathsf{c}}\mathsf{o}}(t)\\ \dot{x}_{\mathsf{c}\bar{\mathsf{o}}}(t)\\ \dot{x}_{\bar{\mathsf{c}}\bar{\mathsf{o}}}(t)\end{bmatrix}&=\begin{bmatrix}J_{\mathsf{co}}-R_{\mathsf{co}}&0&0&0\\ 0&J_{\bar{\mathsf{c}}\mathsf{o}}-R_{\bar{\mathsf{c}}\mathsf{o}}&0&0\\ J_{\mathsf{c}\bar{\mathsf{o}},1}-R_{\mathsf{c}\bar{\mathsf{o}},1}&J_{\mathsf{c}\bar{\mathsf{o}},2}-R_{\mathsf{c}\bar{\mathsf{o}},2}&0&0\\ 0&0&0&0\end{bmatrix}\begin{bmatrix}x_{\mathsf{co}}(t)\\ x_{\bar{\mathsf{c}}\mathsf{o}}(t)\\ x_{\mathsf{c}\bar{\mathsf{o}}}(t)\\ x_{\bar{\mathsf{c}}\bar{\mathsf{o}}}(t)\end{bmatrix}+\begin{bmatrix}G_{\mathsf{co}}-P_{\mathsf{co}}\\ 0\\ G_{\mathsf{c}\bar{\mathsf{o}}}-P_{\mathsf{c}\bar{\mathsf{o}}}\\ 0\end{bmatrix}u(t),\\ y(t)&=(G_{\mathsf{co}}+P_{\mathsf{co}})^{\mathsf{T}}x_{\mathsf{co}}(t)+(S-N)u(t),\\ y_{\mathcal{H}}(t)&=\frac{1}{2}x_{\mathsf{co}}^{\mathsf{T}}(t)x_{\mathsf{co}}(t)+\frac{1}{2}x_{\bar{\mathsf{c}}\mathsf{o}}^{\mathsf{T}}(t)x_{\bar{\mathsf{c}}\mathsf{o}}(t)\end{aligned}\right. \tag{3.7}\] _such that_ 1. _the subsystem corresponding to_ \(x_{\mathsf{co}}\) _is in pH form, controllable, and zero-state observable,_ 2. _the subsystem corresponding to_ \(x_{\bar{\mathsf{c}}\mathsf{o}}\) _is in pH form and zero-state observable,_ 3. _the subsystem corresponding to_ \(x_{\mathsf{co}}\) _and_ \(x_{\bar{\mathsf{c}}\mathsf{o}}\) _is zero-state observable, and_ 4. _the subsystem corresponding to_ \(x_{\mathsf{co}}\) _and_ \(x_{\mathsf{c}\bar{\mathsf{o}}}\) _is controllable._ Proof.: Using a classical Kalman decomposition, let \(\mathcal{V}_{\mathsf{c}}\subseteq\mathbb{R}^{n}\) denote the controllability space associated with (3.4) and define the spaces \[\mathcal{V}_{\bar{\mathsf{c}}\bar{\mathsf{o}}}:=\mathcal{V}_{\mathsf{c}}^{\perp}\cap\ker(Q),\qquad\qquad\mathcal{V}_{\mathsf{c}\bar{\mathsf{o}}}:=\mathcal{V}_{\mathsf{c}}\cap\ker(Q),\qquad\qquad\mathcal{V}_{1}:=(\mathcal{V}_{\mathsf{c}\bar{\mathsf{o}}}+\mathcal{V}_{\bar{\mathsf{c}}\bar{\mathsf{o}}})^{\perp}.\] Using \(\mathcal{V}_{\mathsf{c}}+\mathcal{V}_{\mathsf{c}}^{\perp}=\mathbb{R}^{n}\), we conclude \(\ker(Q)\perp\mathcal{V}_{1}\). Assume that the columns of \(V_{1},V_{\mathsf{c}\bar{\mathsf{o}}},V_{\bar{\mathsf{c}}\bar{\mathsf{o}}}\) form a basis for \(\mathcal{V}_{1},\mathcal{V}_{\mathsf{c}\bar{\mathsf{o}}},\mathcal{V}_{\bar{\mathsf{c}}\bar{\mathsf{o}}}\), respectively, such that \(\tilde{V}=[V_{1},V_{\mathsf{c}\bar{\mathsf{o}}},V_{\bar{\mathsf{c}}\bar{\mathsf{o}}}]\in\operatorname{O}_{n}\).
Define \(Q_{1}:=V_{1}^{\mathsf{T}}QV_{1}\), \(J_{1}:=V_{1}^{\mathsf{T}}JV_{1}\), and \(R_{1}:=V_{1}^{\mathsf{T}}RV_{1}\). A state-space transformation of (3.4) with \(\tilde{V}\) then yields \[\left\{\begin{aligned} \begin{bmatrix}\dot{x}_{1}(t)\\ \dot{x}_{\mathsf{c}\bar{\mathsf{o}}}(t)\\ \dot{x}_{\bar{\mathsf{c}}\bar{\mathsf{o}}}(t)\end{bmatrix}&=\begin{bmatrix}J_{1}-R_{1}&0&0\\ J_{\mathsf{c}\bar{\mathsf{o}}}-R_{\mathsf{c}\bar{\mathsf{o}}}&0&0\\ 0&0&0\end{bmatrix}\begin{bmatrix}Q_{1}x_{1}(t)\\ x_{\mathsf{c}\bar{\mathsf{o}}}(t)\\ x_{\bar{\mathsf{c}}\bar{\mathsf{o}}}(t)\end{bmatrix}+\begin{bmatrix}G_{1}-P_{1}\\ G_{\mathsf{c}\bar{\mathsf{o}}}-P_{\mathsf{c}\bar{\mathsf{o}}}\\ 0\end{bmatrix}u(t),\\ y(t)&=(G_{1}^{\mathsf{T}}+P_{1}^{\mathsf{T}})Q_{1}x_{1}(t)+(S-N)u(t),\\ y_{\mathcal{H}}(t)&=\frac{1}{2}x_{1}^{\mathsf{T}}(t)Q_{1}x_{1}(t),\end{aligned}\right.\] where the subsystem corresponding to \(x_{1}\) is in pH form. The result follows from applying Theorem 3.3 and Corollary 3.4 to the pH subsystem corresponding to \(x_{1}\). **Corollary 3.6**.: _Consider the pH system (3.4) with initial value \(x(0)=0\) and, using the notation of Theorem 3.5, the reduced controllable and zero-state observable pH system_ \[\left\{\begin{aligned} \dot{x}_{\mathsf{co}}(t)&=(J_{\mathsf{co}}-R_{\mathsf{co}})x_{\mathsf{co}}(t)+(G_{\mathsf{co}}-P_{\mathsf{co}})u(t),\\ \tilde{y}(t)&=(G_{\mathsf{co}}+P_{\mathsf{co}})^{\mathsf{T}}x_{\mathsf{co}}(t)+(S-N)u(t),\\ \tilde{y}_{\mathcal{H}}(t)&=\frac{1}{2}x_{\mathsf{co}}^{\mathsf{T}}(t)x_{\mathsf{co}}(t)\end{aligned}\right. \tag{3.8}\] _with initial value \(x_{\mathsf{co}}(0)=0\). Then \(y\equiv\tilde{y}\) and \(y_{\mathcal{H}}\equiv\tilde{y}_{\mathcal{H}}\) for any control input \(u\)._ **Remark 3.7**.: _A Kalman decomposition for pH systems considering only the input-output dynamic (1.1a) is obtained in [36]. That construction, however, requires certain invertibility assumptions to preserve the pH structure and, moreover, does not take the Hamiltonian dynamic (3.1) into consideration._ ## 4. Energy matching algorithm in surrogate models In an optimization setting, the approximation of a pH system can be interpreted as a _multi-objective optimization problem_, accounting both for the approximation of the input-output dynamic and for that of the Hamiltonian dynamic. If we seek optimal approximations, then we have to solve the multi-objective optimization problem \[\min_{\tilde{\Sigma}}\;\begin{bmatrix}\tfrac{1}{2}\left\|\Sigma_{\mathsf{pH}}-\tilde{\Sigma}_{\mathsf{pH}}\right\|_{\mathcal{H}_{2}}^{2}\\ \tfrac{1}{2}\left\|\Sigma_{\mathcal{H}}-\tilde{\Sigma}_{\mathcal{H}}\right\|_{\mathcal{H}_{2}}^{2}\end{bmatrix}. \tag{4.1}\] As a first approach to solving (4.1), we propose a two-step approach towards the Pareto front: first find a good (optimal) surrogate for the input-output dynamic, for instance by applying classical structure-preserving MOR methods; then minimize the error of the Hamiltonian dynamic without changing the surrogate input-output dynamic. This strategy is motivated by the numerous very effective structure-preserving MOR methods available, which we wish to exploit. The investigation of other approaches to deal with the multi-objective problem (4.1) is left for future research.
### The optimal Hamiltonian surrogate

We replace (1.1a) with a reduced pH system of the form \[\tilde{\Sigma}_{\mathsf{pH}}\quad\begin{cases}&\dot{\tilde{x}}(t)=(\tilde{J}-\tilde{R})\tilde{Q}\tilde{x}(t)+(\tilde{G}-\tilde{P})u(t),\\ &\tilde{y}(t)=(\tilde{G}+\tilde{P})^{\mathsf{T}}\tilde{Q}\tilde{x}(t)+(\tilde{S}-\tilde{N})u(t),\end{cases} \tag{4.2}\] with reduced dimension \(r\ll n\) and reduced Hamiltonian dynamic \[\tilde{\Sigma}_{\mathcal{H}}\quad\begin{cases}&\dot{\tilde{x}}(t)=(\tilde{J}-\tilde{R})\tilde{Q}\tilde{x}(t)+(\tilde{G}-\tilde{P})u(t),\\ &\tilde{y}_{\mathcal{H}}(t)=\tfrac{1}{2}\tilde{x}(t)^{\mathsf{T}}\tilde{Q}\tilde{x}(t).\end{cases} \tag{4.3}\] For notational convenience, we introduce the reduced system matrices \[\tilde{A}:=(\tilde{J}-\tilde{R})\tilde{Q},\qquad\quad\tilde{B}:=\tilde{G}-\tilde{P},\qquad\quad\tilde{C}:=(\tilde{G}+\tilde{P})^{\mathsf{T}}\tilde{Q},\qquad\quad\tilde{D}:=\tilde{S}-\tilde{N} \tag{4.4}\] and assume that \(\tilde{A}\) is asymptotically stable. We assume for the moment that the surrogate (4.2) is already available, for instance via the system-theoretic methods mentioned in the introduction. Since these methods usually aim at an approximation of the input-output mapping and not at an optimal approximation of the Hamiltonian dynamic (see Definition 3.1), we in general encounter a poor approximation of the Hamiltonian dynamic. However, for any given pH ROM, we can replace the Hessian of the Hamiltonian \(\tilde{Q}\) by any other positive definite solution of the KYP inequality (2.3) without changing the input-output mapping. Hence, this matrix can be treated as a decision variable, and we are interested in solving the constrained minimization problem \[\min_{\tilde{Q}\in\mathcal{S}_{\succ}^{r}}\tfrac{1}{2}\|\Sigma_{\mathcal{H}}-\tilde{\Sigma}_{\mathcal{H}}\|_{\mathcal{H}_{2}}^{2}\qquad\text{s.t.}\qquad\mathcal{W}_{\tilde{\Sigma}}(\tilde{Q})\in\mathcal{S}_{\succcurlyeq}^{r+m}, \tag{4.5}\] where the reduced system \(\tilde{\Sigma}_{\mathcal{H}}\) depends on \(\tilde{Q}\); see Section 2.2. Note that the constraint in (4.5) ensures that the input-output mapping of the given ROM is preserved. Using the discussion in Section 2.3, we note that the cost functional \(\bar{\mathcal{J}}(\tilde{Q}):=\tfrac{1}{2}\|\Sigma_{\mathcal{H}}-\tilde{\Sigma}_{\mathcal{H}}\|_{\mathcal{H}_{2}}^{2}\) can be computed as \[\bar{\mathcal{J}}(\tilde{Q})=\tfrac{1}{2}\operatorname{tr}(B^{\mathsf{T}}\mathcal{O}_{\mathsf{QO}}B+\tilde{B}^{\mathsf{T}}\tilde{\mathcal{O}}_{\mathsf{QO}}\tilde{B}-2B^{\mathsf{T}}Z\tilde{B}), \tag{4.6}\] where \(\mathcal{O}_{\mathsf{QO}},\tilde{\mathcal{O}}_{\mathsf{QO}}\), and \(Z\) are the unique solutions of the linear matrix equations \[A\mathcal{P}+\mathcal{P}A^{\mathsf{T}}+BB^{\mathsf{T}} =0, \qquad\tilde{A}\tilde{\mathcal{P}}+\tilde{\mathcal{P}}\tilde{A}^{\mathsf{T}}+\tilde{B}\tilde{B}^{\mathsf{T}} =0, \tag{4.7a}\] \[A^{\mathsf{T}}\mathcal{O}_{\mathsf{QO}}+\mathcal{O}_{\mathsf{QO}}A+\tfrac{1}{4}Q\mathcal{P}Q =0, \qquad\tilde{A}^{\mathsf{T}}\tilde{\mathcal{O}}_{\mathsf{QO}}+\tilde{\mathcal{O}}_{\mathsf{QO}}\tilde{A}+\tfrac{1}{4}\tilde{Q}\tilde{\mathcal{P}}\tilde{Q} =0,\tag{4.7b}\] \[A^{\mathsf{T}}Z+Z\tilde{A}+\tfrac{1}{4}QY\tilde{Q} =0, \qquad AY+Y\tilde{A}^{\mathsf{T}}+B\tilde{B}^{\mathsf{T}} =0. \tag{4.7c}\] **Example 4.1**.: _To illustrate the optimization problem, we discuss (4.5) with a concrete academic toy example._
_Suppose the FOM (1.1) is given by the matrices_ \[J=\begin{bmatrix}0&1\\ -1&0\end{bmatrix},\quad R=\begin{bmatrix}2&0\\ 0&1\end{bmatrix},\quad Q=\begin{bmatrix}1&0\\ 0&1\end{bmatrix},\quad A=(J-R)Q=\begin{bmatrix}-2&1\\ -1&-1\end{bmatrix},\quad G=\begin{bmatrix}6\\ 0\end{bmatrix},\quad D=1.\] _Accordingly, the Gramians are_ \[\mathcal{P}=\begin{bmatrix}8&-2\\ -2&2\end{bmatrix},\qquad\mathcal{O}=\begin{bmatrix}8&2\\ 2&2\end{bmatrix},\qquad\mathcal{O}_{\mathsf{QO}}=\tfrac{1}{36}\begin{bmatrix}19&-2\\ -2&7\end{bmatrix}\] _and hence \(\left\|\Sigma_{\mathcal{H}}\right\|_{\mathcal{H}_{2}}^{2}=\operatorname{tr}(B^{\mathsf{T}}\mathcal{O}_{\mathsf{QO}}B)=19\). For the reduced model, we make the choice_ \[\tilde{A}=-2,\qquad\tilde{B}=6,\qquad\tilde{C}=6,\qquad\tilde{D}=1, \tag{4.8}\] _and we immediately see that the KYP inequality_ \[\mathcal{W}_{\tilde{\Sigma}}(\tilde{Q})=\begin{bmatrix}4\tilde{Q}&6-6\tilde{Q}\\ 6-6\tilde{Q}&2\end{bmatrix}\succcurlyeq 0\] _is satisfied for any \(\tilde{Q}\in[\frac{10}{9}-\frac{\sqrt{76}}{18},\frac{10}{9}+\frac{\sqrt{76}}{18}]=\mathbb{X}_{\tilde{\Sigma}}\). In particular, the ROM is passive and the optimization problem (4.5) is feasible. The Gramians for the ROM and the solutions of the matrix equations (4.7) are_ \[\tilde{\mathcal{P}}=9,\qquad\tilde{\mathcal{O}}=9,\qquad\tilde{\mathcal{O}}_{\mathsf{QO}}=\tfrac{9}{16}\tilde{Q}^{2},\qquad Y=\tfrac{1}{13}\begin{bmatrix}108\\ -36\end{bmatrix},\qquad Z=\tfrac{\tilde{Q}}{169}\begin{bmatrix}90\\ -9\end{bmatrix}.\] _We thus obtain_ \[\bar{\mathcal{J}}(\tilde{Q})=\tfrac{1}{2}\left(19+\tfrac{81}{4}\tilde{Q}^{2}-2\cdot\tfrac{3240}{169}\tilde{Q}\right)=\tfrac{19}{2}+\tfrac{81}{8}\tilde{Q}^{2}-\tfrac{3240}{169}\tilde{Q}.\] _Thus, the first-order necessary condition implies \(\tilde{Q}^{\star}=\tfrac{160}{169}\approx 0.95\), which is an element of the feasible set and thus the optimal point. We like to stress that the ROM (4.8) is obtained via Galerkin projection onto the space spanned by the matrix \(V=[1,0]^{\mathsf{T}}\), which in this particular setting preserves the pH-structure. Nevertheless, we have \(\tilde{Q}^{\star}\neq V^{\mathsf{T}}QV=1\), i.e., a standard projection framework does not automatically yield the best Hamiltonian in the ROM. Moreover, the optimal Hamiltonian is not an element of the solution set of the ARE (2.5) for the ROM, which is \(\{\tfrac{10}{9}-\tfrac{\sqrt{76}}{18},\tfrac{10}{9}+\tfrac{\sqrt{76}}{18}\}\)._
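The computations in Example 4.1 are straightforward to reproduce. The following sketch is our own illustration (the function name is hypothetical); it evaluates \(\bar{\mathcal{J}}\) from (4.6) by solving the matrix equations (4.7) with SciPy's Sylvester solver:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def energy_cost(A, B, Q, At, Bt, Qt):
    """Evaluate the cost (4.6) via the Lyapunov/Sylvester equations (4.7)."""
    P  = solve_sylvester(A,   A.T, -B @ B.T)          # A P + P A^T = -B B^T
    Pt = solve_sylvester(At, At.T, -Bt @ Bt.T)
    O  = solve_sylvester(A.T,  A,  -0.25 * Q @ P @ Q)   # quadratic-output Gramian
    Ot = solve_sylvester(At.T, At, -0.25 * Qt @ Pt @ Qt)
    Y  = solve_sylvester(A,  At.T, -B @ Bt.T)         # A Y + Y At^T = -B Bt^T
    Z  = solve_sylvester(A.T, At,  -0.25 * Q @ Y @ Qt)
    return 0.5 * np.trace(B.T @ O @ B + Bt.T @ Ot @ Bt - 2 * B.T @ Z @ Bt)

# Example 4.1: energy_cost reproduces 19/2 + (81/8) q^2 - (3240/169) q
A = np.array([[-2.0, 1.0], [-1.0, -1.0]]); B = np.array([[6.0], [0.0]])
At, Bt = np.array([[-2.0]]), np.array([[6.0]])
print(energy_cost(A, B, np.eye(2), At, Bt, np.array([[160 / 169]])))  # ~0.4247
```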
**Remark 4.2**.: _Since the first term of \(\bar{\mathcal{J}}(\tilde{Q})\) is independent of \(\tilde{Q}\) (and the trace operator is linear), it can be neglected, and the cost simplifies to_ \[\mathcal{J}(\tilde{Q})=\tfrac{1}{2}\operatorname{tr}(\tilde{B}^{\mathsf{T}}\tilde{\mathcal{O}}_{\mathsf{QO}}\tilde{B}-2B^{\mathsf{T}}Z\tilde{B}), \tag{4.9}\] _where \(\tilde{\mathcal{O}}_{\mathsf{QO}}\) and \(Z\) depend on \(\tilde{Q}\) via (4.7). To determine the computational cost of repeated evaluations of the objective function, we notice that due to the Kronecker product it is sufficient to solve the (sparse) linear system_ \[(I_{r}\otimes A^{\mathsf{T}}+\tilde{A}^{\mathsf{T}}\otimes I_{n})K=(I_{r}\otimes QY);\] _then_ \[\operatorname{vec}(Z)=-\tfrac{1}{4}K\operatorname{vec}(\tilde{Q})\quad\text{and}\quad\operatorname{vec}(B^{\mathsf{T}}Z\tilde{B})=-\tfrac{1}{4}(\tilde{B}^{\mathsf{T}}\otimes B^{\mathsf{T}})K\operatorname{vec}(\tilde{Q}).\] _In particular, we observe that \(Z\) depends linearly on \(\tilde{Q}\) and, whenever we have computed \(K\), the cost for evaluating \(\mathcal{J}\) does not depend on the full dimension \(n\)._
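In code, the precomputation from Remark 4.2 might look as follows (a sketch under our naming; note that the \(\operatorname{vec}\) convention is column-major, i.e., `order="F"` in NumPy):

```python
import numpy as np

def z_factor(A, At, Q, Y):
    """K such that vec(Z) = -1/4 K vec(Qt), cf. Remark 4.2."""
    n, r = A.shape[0], At.shape[0]
    lhs = np.kron(np.eye(r), A.T) + np.kron(At.T, np.eye(n))
    return np.linalg.solve(lhs, np.kron(np.eye(r), Q @ Y))

def z_of(K, Qt, n):
    """Recover Z from the precomputed factor K."""
    r = Qt.shape[0]
    vec_z = -0.25 * K @ Qt.flatten(order="F")      # column-major vec
    return vec_z.reshape(n, r, order="F")
```

Once \(K\) is available, each evaluation of \(\mathcal{J}\) involves only reduced-order quantities, matching the remark's cost claim.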
We make the following observations.

**Lemma 4.3**.: _Assume that \(\Sigma\) is asymptotically stable and \(\tilde{\Sigma}\) is minimal and asymptotically stable. Then \(\mathcal{J}\colon\mathcal{S}_{\succ}^{r}\to\mathbb{R}\) as defined in (4.9) is twice Frechet differentiable, strictly convex, and_ \[\nabla_{\tilde{Q}}\mathcal{J}(\tilde{Q})=\tfrac{1}{4}\left(\tilde{\mathcal{P}}\tilde{Q}\tilde{\mathcal{P}}-Y^{\mathsf{T}}QY\right). \tag{4.10}\] Proof.: We first discuss the Frechet differentiability and compute the gradient using similar ideas as for the gradient calculation for the \(\mathcal{H}_{2}\) norm for standard LTI systems (without quadratic outputs) as presented in [47, Thm. 3.3]. A perturbation \(\Delta_{\tilde{Q}}\) of \(\tilde{Q}\) leads to a perturbation \(\Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}}\) in \(\tilde{\mathcal{O}}_{\mathsf{QO}}\) and \(\Delta_{Z}\) in \(Z\). Then, using the cyclic property of the trace, we obtain \[\mathcal{J}(\tilde{Q}+\Delta_{\tilde{Q}})-\mathcal{J}(\tilde{Q})=\tfrac{1}{2}\operatorname{tr}(\tilde{B}^{\mathsf{T}}\Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}}\tilde{B}-2B^{\mathsf{T}}\Delta_{Z}\tilde{B})=\tfrac{1}{2}\operatorname{tr}(\tilde{B}\tilde{B}^{\mathsf{T}}\Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}}-2\tilde{B}B^{\mathsf{T}}\Delta_{Z}),\] where \(\Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}}\) and \(\Delta_{Z}\) are solutions of the Lyapunov equation (4.11) and Sylvester equation (4.12), \[0 =\tilde{A}^{\mathsf{T}}\Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}}+\Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}}\tilde{A}+\tfrac{1}{4}\left(\Delta_{\tilde{Q}}\tilde{\mathcal{P}}\tilde{Q}+\tilde{Q}\tilde{\mathcal{P}}\Delta_{\tilde{Q}}+\Delta_{\tilde{Q}}\tilde{\mathcal{P}}\Delta_{\tilde{Q}}\right), \tag{4.11}\] \[0 =A^{\mathsf{T}}\Delta_{Z}+\Delta_{Z}\tilde{A}+\tfrac{1}{4}QY\Delta_{\tilde{Q}}, \tag{4.12}\] respectively. Then, applying [47, Lem. 3.2] to (2.2a) and (4.11), and to (2.11b) and (4.12), respectively, we obtain \[\operatorname{tr}\left(\tilde{B}\tilde{B}^{\mathsf{T}}\Delta_{\tilde{\mathcal{O}}_{\mathsf{QO}}}\right)=\tfrac{1}{4}\operatorname{tr}\left(\left(\Delta_{\tilde{Q}}\tilde{\mathcal{P}}\tilde{Q}+\tilde{Q}\tilde{\mathcal{P}}\Delta_{\tilde{Q}}+\Delta_{\tilde{Q}}\tilde{\mathcal{P}}\Delta_{\tilde{Q}}\right)^{\mathsf{T}}\tilde{\mathcal{P}}\right)=\tfrac{1}{2}\operatorname{tr}\left(\Delta_{\tilde{Q}}^{\mathsf{T}}\tilde{\mathcal{P}}\tilde{Q}\tilde{\mathcal{P}}\right)+\tfrac{1}{4}\operatorname{tr}\left(\Delta_{\tilde{Q}}^{\mathsf{T}}\tilde{\mathcal{P}}\Delta_{\tilde{Q}}^{\mathsf{T}}\tilde{\mathcal{P}}\right),\] \[\operatorname{tr}\left(\tilde{B}B^{\mathsf{T}}\Delta_{Z}\right)=\tfrac{1}{4}\operatorname{tr}\left(\Delta_{\tilde{Q}}^{\mathsf{T}}Y^{\mathsf{T}}QY\right).\] With these preparations we arrive at \[\mathcal{J}(\tilde{Q}+\Delta_{\tilde{Q}})-\mathcal{J}(\tilde{Q})=\tfrac{1}{4}\operatorname{tr}\left(\Delta_{\tilde{Q}}^{\mathsf{T}}\left(\tilde{\mathcal{P}}\tilde{Q}\tilde{\mathcal{P}}-Y^{\mathsf{T}}QY\right)\right)+\tfrac{1}{8}\operatorname{tr}\left(\Delta_{\tilde{Q}}^{\mathsf{T}}\tilde{\mathcal{P}}\Delta_{\tilde{Q}}^{\mathsf{T}}\tilde{\mathcal{P}}\right).\] The Cauchy-Schwarz inequality yields \[\frac{\left|\operatorname{tr}(\Delta_{\tilde{Q}}^{\mathsf{T}}\tilde{\mathcal{P}}\Delta_{\tilde{Q}}^{\mathsf{T}}\tilde{\mathcal{P}})\right|}{\|\Delta_{\tilde{Q}}\|_{\mathrm{F}}}\leq\|\Delta_{\tilde{Q}}\|_{\mathrm{F}}\|\tilde{\mathcal{P}}\|_{\mathrm{F}}^{2},\] such that we conclude that \(\mathcal{J}\) is Frechet differentiable with directional derivative \[\mathcal{D}_{\Delta_{\tilde{Q}}}\mathcal{J}(\tilde{Q})=\tfrac{1}{4}\langle\tilde{\mathcal{P}}\tilde{Q}\tilde{\mathcal{P}}-Y^{\mathsf{T}}QY,\Delta_{\tilde{Q}}\rangle_{\mathrm{F}},\] and the gradient is given as in (4.10). For the second derivative, we observe \[\mathcal{D}_{\Delta_{\tilde{Q}}}\mathcal{J}(\tilde{Q}+\Gamma)-\mathcal{D}_{\Delta_{\tilde{Q}}}\mathcal{J}(\tilde{Q})=\tfrac{1}{4}\langle\tilde{\mathcal{P}}\Gamma\tilde{\mathcal{P}},\Delta_{\tilde{Q}}\rangle_{\mathrm{F}}.\] Hence, also the second derivative exists with \(\mathcal{D}_{\Delta,\Gamma}^{2}\mathcal{J}(\tilde{Q})=\tfrac{1}{4}\langle\Gamma\tilde{\mathcal{P}},\tilde{\mathcal{P}}\Delta\rangle_{\mathrm{F}}\). Using properties of the Kronecker product and the vec operator, we conclude \[\mathcal{D}^{2}\mathcal{J}(\tilde{Q})(\Delta,\Delta)=\tfrac{1}{4}\operatorname{vec}(\Delta)^{\mathsf{T}}(\tilde{\mathcal{P}}\otimes\tilde{\mathcal{P}})\operatorname{vec}(\Delta)>0\] for all \(\Delta\neq 0\). Hence, \(\mathcal{J}\) is strictly convex.
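The gradient formula (4.10) can be sanity-checked against Example 4.1 (our sketch; the scalar data below are taken from that example):

```python
import numpy as np

def grad_energy_cost(Pt, Q, Qt, Y):
    """Gradient (4.10) of the simplified cost (4.9)."""
    return 0.25 * (Pt @ Qt @ Pt - Y.T @ Q @ Y)

# Example 4.1: the gradient is (81/4) q - 3240/169, vanishing at q = 160/169
Pt, Q, Y = np.array([[9.0]]), np.eye(2), np.array([[108.0], [-36.0]]) / 13
for q in (0.5, 160 / 169, 1.5):
    print(q, float(grad_energy_cost(Pt, Q, np.array([[q]]), Y)))
```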
**Theorem 4.4**.: _In addition to the assumptions of Lemma 4.3, suppose that \(\tilde{\Sigma}\) is passive. Then the optimization problem (4.5) is solvable and has a unique solution._

Proof.: Since \(\tilde{\Sigma}\) is minimal and passive, Theorem 2.22 implies the existence of \(\tilde{Q}\in\mathbb{X}_{\tilde{\Sigma}}\). Moreover, \(\mathcal{J}\) is bounded from below. Let \((\tilde{Q}_{k})_{k\in\mathbb{N}}\) denote a sequence in \(\mathbb{X}_{\tilde{\Sigma}}\) such that \[\lim_{k\to\infty}\mathcal{J}(\tilde{Q}_{k})=\inf_{X\in\mathbb{X}_{\tilde{\Sigma}}}\mathcal{J}(X).\] Since \(\mathbb{X}_{\tilde{\Sigma}}\) is bounded, cf. Theorem 2.23, we can choose a convergent subsequence \((\tilde{Q}_{k_{j}})_{j\in\mathbb{N}}\) with limit \(\tilde{Q}^{\star}:=\lim_{j\to\infty}\tilde{Q}_{k_{j}}\). By construction, we obtain \(\mathcal{W}_{\tilde{\Sigma}}(\tilde{Q}^{\star})\in\mathcal{S}_{\succcurlyeq}^{r+m}\), i.e., \(\tilde{Q}^{\star}\in\mathbb{X}_{\tilde{\Sigma}}\), and the continuity of \(\mathcal{J}\) yields \(\mathcal{J}(\tilde{Q}^{\star})=\inf_{X\in\mathbb{X}_{\tilde{\Sigma}}}\mathcal{J}(X)\), i.e., the minimum is attained. Uniqueness of the minimizer follows from the strict convexity of \(\mathcal{J}\) established in Lemma 4.3. 

### A special case: Positive-real balanced truncation

To obtain further insights into the optimization problem (4.5), we consider the special case that the ROM is obtained via PRBT and the Hessian of the Hamiltonian of the FOM is given by the minimal solution of the KYP inequality. In this case, the minimal solution of the KYP inequality of the ROM is given via projection of the minimal solution of the FOM KYP inequality, and hence one might get the idea that in this specific scenario, PRBT is optimal with respect to (4.5). We refer to the forthcoming Section 5.2 for a corresponding numerical example. The following two toy examples, generated via the balanced parametrization for positive-real systems from [34], demonstrate that PRBT can be optimal in the setting described above (cf. Example 4.5), but in general, there is no guarantee; see Example 4.6.

**Example 4.5**.: _Consider the system described by the matrices_ \[A=\begin{bmatrix}-2&-4\\ -4&-9\end{bmatrix},\quad B=\begin{bmatrix}4\\ 4\end{bmatrix},\quad C=\begin{bmatrix}4&4\end{bmatrix},\quad D=1,\quad Q_{\min}=\begin{bmatrix}\frac{1}{2}&0\\ 0&\frac{1}{4}\end{bmatrix},\] _which is already in positive-real balanced form [34] with Gramians \(Q_{\min}\)._
_Then the ROM obtained by PRBT of order \(r=1\) is given by the upper left entries, i.e.,_ \[\tilde{A}=-2,\quad\tilde{B}=4,\quad\tilde{C}=4,\quad\tilde{D}=1,\quad\tilde{Q}_{\min}=\frac{1}{2}. \tag{4.13}\] _For any given \(\tilde{Q}\in\mathbb{X}_{\tilde{\Sigma}}=[\frac{1}{2},2]\) we obtain \(\tilde{\mathcal{P}}=4\), \(\tilde{\mathcal{O}}_{\mathsf{QO}}=\frac{1}{4}\tilde{Q}^{2}\), \(Y=\begin{bmatrix}4\\ 0\end{bmatrix}\), and \(Z=\frac{\tilde{Q}}{14}\begin{bmatrix}11/4\\ -1\end{bmatrix}\), such that the simplified cost functional (4.9) is given by_ \[\mathcal{J}(\tilde{Q})=2\tilde{Q}^{2}-2\tilde{Q},\qquad\nabla\mathcal{J}(\tilde{Q})=4\tilde{Q}-2,\] _which is minimized for \(\tilde{Q}^{\star}=\frac{1}{2}=\tilde{Q}_{\min}\), i.e., the PRBT ROM (4.13) is optimal._

**Example 4.6**.: _Consider the positive-real balanced system_ \[A=\begin{bmatrix}-1&-\frac{9}{2}\\ -\frac{9}{2}&-27\end{bmatrix},\quad B=\begin{bmatrix}4\\ 4\end{bmatrix},\quad C=\begin{bmatrix}4&4\end{bmatrix},\quad D=\frac{1}{3},\quad Q_{\min}=\begin{bmatrix}\frac{3}{4}&0\\ 0&\frac{1}{4}\end{bmatrix},\] _with diagonal Gramian \(Q_{\min}\). The one-dimensional PRBT ROM is given by_ \[\tilde{A}=-1,\quad\tilde{B}=4,\quad\tilde{C}=4,\quad\tilde{D}=\frac{1}{3},\quad\tilde{Q}_{\min}=\frac{3}{4}.\] _For any given \(\tilde{Q}\in\mathbb{X}_{\tilde{\Sigma}}=[\frac{3}{4},\frac{4}{3}]\) we obtain \(\tilde{\mathcal{P}}=8\), \(\tilde{\mathcal{O}}_{\mathsf{QO}}=\tilde{Q}^{2}\), \(Y=-\frac{16}{143}\begin{bmatrix}-94\\ 10\end{bmatrix}\), and \(Z=\frac{\tilde{Q}}{143^{2}}\begin{bmatrix}31764\\ -5156\end{bmatrix}\), such that the simplified cost functional (4.9) is given by_ \[\mathcal{J}(\tilde{Q})=8\tilde{Q}^{2}-\frac{425728}{143^{2}}\tilde{Q},\qquad\nabla\mathcal{J}(\tilde{Q})=16\tilde{Q}-\frac{425728}{143^{2}}.\] _We deduce \(\tilde{Q}_{\min}<\tilde{Q}^{\star}=\frac{26608}{143^{2}}\in\mathbb{X}_{\tilde{\Sigma}}\) and thus conclude that the reduced Hamiltonian \(\tilde{Q}_{\min}\) obtained via PRBT is not optimal._
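Example 4.6 can also be verified numerically. Using the `energy_cost` sketch from Section 4.1 (ours, not the paper's code), a bounded scalar minimization over the feasible interval \(\mathbb{X}_{\tilde{\Sigma}}=[\frac{3}{4},\frac{4}{3}]\) recovers the optimizer:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# FOM and one-dimensional PRBT ROM of Example 4.6
A = np.array([[-1.0, -4.5], [-4.5, -27.0]]); B = np.array([[4.0], [4.0]])
Q = np.diag([0.75, 0.25])
At, Bt = np.array([[-1.0]]), np.array([[4.0]])

res = minimize_scalar(
    lambda q: energy_cost(A, B, Q, At, Bt, np.array([[q]])),
    bounds=(0.75, 4 / 3), method="bounded")
print(res.x)   # ~1.3012 = 26608/143^2 > Qt_min = 0.75, so PRBT is not optimal
```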
### Equivalent semi-definite program

In this section we show that the energy matching optimization problem (4.5) is equivalent to a standard _semi-definite program_ (SDP). This is a consequence of the following observation.

**Lemma 4.7**.: _Let \(A\) be Hurwitz, and let \(\mathcal{P}\in\mathcal{S}_{\succ}^{n}\) and \(\mathcal{O}_{\mathsf{QO}}\in\mathcal{S}_{\succcurlyeq}^{n}\) be the controllability and quadratic output observability Gramian of the LTIQO system (2.6), respectively. Then, for any \(\gamma\geq 0\), the following statements are equivalent:_ 1. \(\|\Sigma_{\mathsf{QO}}\|_{\mathcal{H}_{2}}^{2}=\operatorname{tr}(B^{\mathsf{T}}\mathcal{O}_{\mathsf{QO}}B)\leq\gamma\)_._ 2. _There exists_ \(\tilde{\mathcal{O}}_{\mathsf{QO}}\in\mathcal{S}_{\succcurlyeq}^{n}\) _satisfying_ \(A^{\mathsf{T}}\tilde{\mathcal{O}}_{\mathsf{QO}}+\tilde{\mathcal{O}}_{\mathsf{QO}}A+M\mathcal{P}M\preccurlyeq 0\) _with_ \(\operatorname{tr}(B^{\mathsf{T}}\tilde{\mathcal{O}}_{\mathsf{QO}}B)\leq\gamma\)_._ 3. _There exists_ \(\tilde{\mathcal{O}}_{\mathsf{QO}}\in\mathcal{S}_{\succcurlyeq}^{n}\) _satisfying_ \(\begin{bmatrix}A^{\mathsf{T}}\tilde{\mathcal{O}}_{\mathsf{QO}}+\tilde{\mathcal{O}}_{\mathsf{QO}}A&M\\ M&-\mathcal{P}^{-1}\end{bmatrix}\preccurlyeq 0\) _with_ \(\operatorname{tr}(B^{\mathsf{T}}\tilde{\mathcal{O}}_{\mathsf{QO}}B)\leq\gamma\)_._

Proof.: The equivalence of (i) and (ii) follows immediately from the observation that any \(\tilde{\mathcal{O}}_{\mathsf{QO}}\) with \(A^{\mathsf{T}}\tilde{\mathcal{O}}_{\mathsf{QO}}+\tilde{\mathcal{O}}_{\mathsf{QO}}A+M\mathcal{P}M\preccurlyeq 0\) satisfies \(\mathcal{O}_{\mathsf{QO}}\preccurlyeq\tilde{\mathcal{O}}_{\mathsf{QO}}\). The equivalence of (ii) and (iii) is an immediate consequence of the Schur complement. 

Lemma 4.7 in combination with the fact that \(Z\) depends linearly on \(\tilde{Q}\) (cf. Remark 4.2) allows us to reformulate the optimization problem (4.5) as the equivalent standard SDP (using \(\tilde{M}=\frac{1}{2}\tilde{Q}\)) \[\min_{\tilde{Q}=\tilde{Q}^{\mathsf{T}},\,\tilde{\mathcal{O}}_{\mathsf{QO}}=\tilde{\mathcal{O}}_{\mathsf{QO}}^{\mathsf{T}}}\operatorname{tr}(\tilde{B}^{\mathsf{T}}\tilde{\mathcal{O}}_{\mathsf{QO}}\tilde{B}-2B^{\mathsf{T}}Z(\tilde{Q})\tilde{B})\tag{4.14a}\] subject to \[\begin{bmatrix}\tilde{A}^{\mathsf{T}}\tilde{\mathcal{O}}_{\mathsf{QO}}+\tilde{\mathcal{O}}_{\mathsf{QO}}\tilde{A}&\frac{1}{2}\tilde{Q}\\ \frac{1}{2}\tilde{Q}&-\tilde{\mathcal{P}}^{-1}\end{bmatrix}\preccurlyeq 0\quad\text{and}\quad\begin{bmatrix}\tilde{A}^{\mathsf{T}}\tilde{Q}+\tilde{Q}\tilde{A}&\tilde{Q}\tilde{B}-\tilde{C}^{\mathsf{T}}\\ \tilde{B}^{\mathsf{T}}\tilde{Q}-\tilde{C}&-\tilde{D}-\tilde{D}^{\mathsf{T}}\end{bmatrix}\preccurlyeq 0. \tag{4.14b}\]
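For illustration, the SDP (4.14) can be prototyped with CVXPY (a sketch with our variable names; `K` is the factor from Remark 4.2, the ROM data `At, Bt, Ct, Dt, Pt` and the FOM input matrix `B` are assumed available, and an SDP-capable solver such as SCS is assumed to be installed):

```python
import cvxpy as cp
import numpy as np

r, m = At.shape[0], Bt.shape[1]
# Linear part: -2 tr(B^T Z(Qt) Bt) = <Cmat, Qt>_F, using vec(Z) = -1/4 K vec(Qt)
c = 0.5 * (np.eye(m).flatten(order="F") @ (np.kron(Bt.T, B.T) @ K))
Cmat = c.reshape(r, r, order="F")

Qt = cp.Variable((r, r), symmetric=True)
Ot = cp.Variable((r, r), symmetric=True)
obj = cp.trace(Bt.T @ Ot @ Bt) + cp.sum(cp.multiply(Cmat, Qt))
lmi1 = cp.bmat([[At.T @ Ot + Ot @ At, 0.5 * Qt],
                [0.5 * Qt, -np.linalg.inv(Pt)]])
lmi2 = cp.bmat([[At.T @ Qt + Qt @ At, Qt @ Bt - Ct.T],
                [Bt.T @ Qt - Ct, -Dt - Dt.T]])
prob = cp.Problem(cp.Minimize(obj), [lmi1 << 0, lmi2 << 0])
prob.solve()
```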
### Numerical approach

To solve the energy matching problem (4.5) numerically, we propose two different strategies. Our first strategy exploits the fact that the energy matching problem can be recast as the standard SDP (4.14), such that we can apply state-of-the-art SDP solvers. Our second strategy is to directly apply an interior-point approach to (4.5) using a barrier function and a quasi-Newton method. In more detail, we define the barrier function \[\psi\colon\mathbb{R}^{r\times r}\to\overline{\mathbb{R}},\quad\tilde{Q}\mapsto\begin{cases}-\ln\det\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(\tilde{Q})\right),&\text{if }\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(\tilde{Q})\in\mathcal{S}_{\succ}^{r+m},\\ \infty,&\text{otherwise},\end{cases} \tag{4.15}\] and consider for \(\alpha>0\) the parametrized cost function \(\mathcal{J}_{\alpha,\psi}(\tilde{Q})\coloneqq\mathcal{J}(\tilde{Q})+\alpha\psi(\tilde{Q})\) and the corresponding optimization problem \[\min_{\tilde{Q}\in\mathcal{S}_{\succ}^{r}}\mathcal{J}_{\alpha,\psi}(\tilde{Q}). \tag{4.16}\] Note that the barrier function (4.15) requires (1.2) to be strictly passive. If this is not the case, then a perturbation of the feedthrough term is required (see the forthcoming Section 5).

**Proposition 4.8**.: _Assume that the ROM (4.2) is passive and let \(X\in\mathbb{X}_{\tilde{\Sigma}}\) with \(\det(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X))>0\). Then the gradient of \(\ln\det(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(\,\cdot\,))\) at \(X\) is given by_ \[\nabla_{X}\ln\left(\det\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)\right)=\begin{bmatrix}-\tilde{A}&-\tilde{B}\end{bmatrix}\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)^{-1}\begin{bmatrix}I\\ 0\end{bmatrix}+\begin{bmatrix}I&0\end{bmatrix}\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)^{-1}\begin{bmatrix}-\tilde{A}^{\mathsf{T}}\\ -\tilde{B}^{\mathsf{T}}\end{bmatrix}.\]

Proof.: Using the chain rule (Jacobi's formula), we obtain \[\mathcal{D}_{\Delta_{X}}\ln\left(\det\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)\right)=\operatorname{tr}\left(\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)^{-1}\mathcal{D}_{\Delta_{X}}\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right).\] The directional derivative of \(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\) is given by \[\mathcal{D}_{\Delta_{X}}\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)=\begin{bmatrix}-\tilde{A}^{\mathsf{T}}\Delta_{X}-\Delta_{X}\tilde{A}&-\Delta_{X}\tilde{B}\\ -\tilde{B}^{\mathsf{T}}\Delta_{X}&0\end{bmatrix}=\begin{bmatrix}-\tilde{A}^{\mathsf{T}}\\ -\tilde{B}^{\mathsf{T}}\end{bmatrix}\Delta_{X}\begin{bmatrix}I&0\end{bmatrix}+\begin{bmatrix}I\\ 0\end{bmatrix}\Delta_{X}\begin{bmatrix}-\tilde{A}&-\tilde{B}\end{bmatrix},\] resulting in \[\mathcal{D}_{\Delta_{X}}\ln\left(\det\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)\right)=\operatorname{tr}\left(\begin{bmatrix}I&0\end{bmatrix}\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)^{-1}\begin{bmatrix}-\tilde{A}^{\mathsf{T}}\\ -\tilde{B}^{\mathsf{T}}\end{bmatrix}\Delta_{X}\right)+\operatorname{tr}\left(\begin{bmatrix}-\tilde{A}&-\tilde{B}\end{bmatrix}\left(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(X)\right)^{-1}\begin{bmatrix}I\\ 0\end{bmatrix}\Delta_{X}\right).\qed\]

The idea of our energy matching algorithm is now straightforward. For a decreasing sequence of \(\alpha_{k}\), we minimize (4.16) with a gradient-based optimization method such as a quasi-Newton method. Since the surrogate model is assumed to be stable, Theorem 2.2 implies that any solution of the KYP inequality is positive definite. Hence, the barrier function automatically ensures that \(\tilde{Q}\) is symmetric positive definite whenever \(\mathcal{J}_{\alpha,\psi}(\tilde{Q})\) is finite. In our numerical implementation, we reduce the degrees of freedom by explicitly forcing \(\tilde{Q}\) to be symmetric via the _half vectorization_ operator \(\operatorname{vech}\colon\mathbb{R}^{r\times r}\to\mathbb{R}^{r(r+1)/2}\); see for instance [29]. In this way, we can represent an \(r\times r\) symmetric matrix as a vector of length \(r(r+1)/2\) and vice versa. Straightforward calculations show that \[\nabla_{z}\mathcal{J}_{\alpha,\psi}(\operatorname{vech}^{-1}(z))=\operatorname{vech}\left(2\nabla_{\operatorname{vech}^{-1}(z)}\mathcal{J}_{\alpha,\psi}(\operatorname{vech}^{-1}(z))-\operatorname{diag}(\nabla_{\operatorname{vech}^{-1}(z)}\mathcal{J}_{\alpha,\psi}(\operatorname{vech}^{-1}(z)))\right),\] such that we can compute the gradient using Lemma 4.3 and Proposition 4.8.
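The half-vectorization bookkeeping used here could be implemented, for instance, as follows (our sketch; the lower-triangular ordering is an arbitrary but consistent choice):

```python
import numpy as np

def vech(S):
    """Stack the lower-triangular entries of a symmetric matrix."""
    return S[np.tril_indices(S.shape[0])]

def vech_inv(z, r):
    """Rebuild the symmetric r x r matrix from its half-vectorization."""
    S = np.zeros((r, r))
    S[np.tril_indices(r)] = z
    return S + S.T - np.diag(np.diag(S))

def vech_grad(G):
    """Gradient chain rule for symmetric variables: vech(2G - diag(G))."""
    return vech(2 * G - np.diag(np.diag(G)))
```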
The resulting algorithm is described in Algorithm 1.

```
Input: FOM (1.1), passive ROM (4.2), initial Hamiltonian Q̃_0 with det(W_{Σ̃_pH}(Q̃_0)) > 0
Output: Approximate minimizer Q̃_opt ∈ S_≻^r of (4.16)
1  Set z_0 := vech(Q̃_0).
2  for α ∈ {10^-3, 10^-4, ..., 10^-15} do
3      Set f(z) := J_{α,ψ}(vech^{-1}(z)).
4      Solve z_opt := argmin_z f(z) with initial value z_0.
5      Set z_0 := z_opt.
6  endfor
return Q̃_opt := vech^{-1}(z_opt)
```
**Algorithm 1** Energy matching

## 5. Numerical experiments

In the following sections, we illustrate the effectiveness of our energy-matching algorithm on a pH mass-spring-damper model and a pH poroelasticity model. For this, we compare the structure-preserving MOR algorithm pH-IRKA and the passivity-preserving MOR algorithm PRBT with our method, _energy-matched_ PRBT (EM-PRBT), where we use the ROMs obtained by PRBT as initialization. Regarding the implementation details of the methods, the following remarks are in order: i) As in [13], we use the minimal solution of the KYP inequality (2.3) as the Hamiltonian to obtain a pH representation for the ROMs from PRBT. ii) For the computation of the extremal solutions of the KYP inequality (2.3), i.e., the stabilizing and anti-stabilizing solutions of the ARE (2.5), we added an artificial feedthrough term \(D=1\times 10^{-6}I_{m}\) and then used the built-in MATLAB function icare to obtain the solutions. iii) The computation of the \(\mathcal{H}_{2}\)-norm for standard LTI systems is done via the Julia package ControlSystems.jl3. iv) For the implementation details of pH-IRKA and PRBT, we refer to [13]. v) The SDP solvers for (4.14) are applied within the JuMP4 framework. vi) For the minimization of (4.16) we use the BFGS implementation from Optim.jl [31]. vii) To initialize Algorithm 1, we pick \(\tilde{Q}_{0}\) as the optimal solution of the optimization problem (4.5), where we replace the KYP inequality with the ARE (2.5). Note that the resulting KYP matrix \(\mathcal{W}_{\tilde{\Sigma}_{\mathsf{pH}}}(\tilde{Q}_{0})\) is rank deficient by construction and is hence perturbed to render it positive definite.

Footnote 3: [https://github.com/JuliaControl/ControlSystems.jl](https://github.com/JuliaControl/ControlSystems.jl)

Footnote 4: [https://jump.dev](https://jump.dev)

The code and data used to generate the subsequent results are accessible via doi:10.5281/zenodo.8335231 under MIT Common License.

### Mass-spring-damper system

Our first experiment considers a pH mass-spring-damper system with \(n=100\) degrees of freedom and an input/output dimension \(m=2\). The system was introduced in [25] and is described in detail in the pH benchmark systems collection5. Comparing the \(\mathcal{H}_{2}\) errors of the structure-preserving MOR algorithms in Figure 1, it can be observed that pH-IRKA leads to an approximation error that is, in general, a few orders of magnitude worse compared to PRBT (as already observed in [13, 43]). However, pH-IRKA yields better approximations of the Hamiltonian dynamic than PRBT. Using either Algorithm 1 or the SDP solver (which gives approximately the same result in this example), we can significantly improve the error of the Hamiltonian dynamic of PRBT (see EM-PRBT in Figure 1).
In fact, after the optimization, the \(\mathcal{H}_{2}\)-error of the Hamiltonian dynamic is similar or even better compared to the one obtained when using pH-IRKA, whereas the input-output dynamic matches that of PRBT. Figure 1 also shows that it is not sufficient to search only among the rank-minimizing solutions of the KYP inequality, i.e., the solutions of the ARE (2.5). These ROMs are denoted by PRBT(\(X^{\star}\)) and only slightly improve the Hamiltonian dynamic \(\mathcal{H}_{2}\)-error of PRBT.

Footnote 5: [https://port-hamiltonian.io](https://port-hamiltonian.io)

In Figure 2, we show the error trajectories \(\|y(t)-\tilde{y}(t)\|_{2}\) and \(\left|y_{\mathcal{H}}(t)-\tilde{y}_{\mathcal{H}}(t)\right|\) of the ROMs with reduced order \(r=20\) for the input signal \(u(t)=[\sin(t),\cos(t)]^{\mathsf{T}}\) for times \(t>50\), at which the system response has approximately settled. These trajectories are in line with our observations from Figure 1. Note that the output error trajectory for pH-IRKA is worse than the errors of PRBT and EM-PRBT. As expected, the output errors of PRBT are identical before and after optimization. In contrast, the Hamiltonian error is worst for PRBT before energy matching but even better than the error of pH-IRKA after applying our method.

### Mass-spring-damper with \(X_{\min}\) as Hamiltonian

For our second numerical experiment, we investigate the findings of Section 4.2, i.e., we analyze the situation when the Hessian of the Hamiltonian is given by the minimal solution of the KYP inequality (which corresponds to the optimal choice for pH-IRKA [13]). In particular, we consider the mass-spring-damper system from the previous subsection, modify the Hamiltonian of the FOM to the minimal solution of the KYP inequality (2.3), and transform the other matrices accordingly; see Section 2.2. The \(\mathcal{H}_{2}\)-error of PRBT before and after optimization is presented in Table 1. We conclude that for this example, PRBT already provides a close-to-optimal approximation of the Hamiltonian, since the error is almost identical before and after the optimization.

Figure 1: Error of the input-output dynamic and the Hamiltonian dynamic of different methods for the mass-spring-damper example.

### Linear poroelasticity

In our third example, we apply our proposed method to Biot's consolidation model for poroelasticity. A general pH formulation was derived in [3], and the system is also part of the pH benchmark collection. In Figure 3, we can observe that pH-IRKA leads to the better input-output dynamic \(\mathcal{H}_{2}\)-error and also to the best Hamiltonian dynamic \(\mathcal{H}_{2}\)-error at the same time. In fact, PRBT computes ROMs with a Hamiltonian dynamic \(\mathcal{H}_{2}\)-error that is between one and two orders of magnitude worse than that of pH-IRKA. In this example, we compare Algorithm 1 with the state-of-the-art SDP solvers COSMO6, MOSEK7, and SeDuMi8, denoted by EM-PRBT-COSMO, EM-PRBT-MOSEK, and EM-PRBT-SeDuMi, respectively. We observe that the barrier method provides the best results among these methods, especially for larger reduced orders.

Footnote 6: [https://github.com/oxfordcontrol/COSMO.jl](https://github.com/oxfordcontrol/COSMO.jl)

Footnote 7: [https://www.mosek.com](https://www.mosek.com)

Footnote 8: [https://sedumi.ie.lehigh.edu](https://sedumi.ie.lehigh.edu)

## 6. Conclusions

We introduced the view of pH systems as a dual-dynamical system: the well-known input-output dynamic and additionally the Hamiltonian dynamic (cf. Definition 3.1).
We studied how this view affects observability and derived a corresponding structure-preserving Kalman-like decomposition in Theorem 3.5. Consequently, the optimal approximation of a pH system can be considered a multi-objective minimization problem: one objective for the approximation quality of the input-output dynamic and one for the approximation quality of the Hamiltonian dynamic. Using the observation that the KYP inequality determines all possible Hamiltonians, we proposed a MOR post-processing method called energy matching: given a structure-preserving ROM for the input-output dynamic, solely consider the optimization problem (4.5) for finding the best Hamiltonian. We showed that this optimization problem is uniquely solvable and convex -- see Theorem 4.4 -- and can be recast as a standard semi-definite program (cf. Section 4.3). We presented two numerical approaches to solve this problem and demonstrated their feasibility on three academic examples. Future work will be the further analysis of the multi-objective optimization problem (4.1).

\begin{table} \begin{tabular}{c c c c c c} \hline \hline \(r\) & \(4\) & \(8\) & \(12\) & \(16\) & \(20\) \\ \hline PRBT & \(4.11\times 10^{-1}\) & \(1.02\times 10^{-2}\) & \(4.24\times 10^{-4}\) & \(1.55\times 10^{-4}\) & \(1.52\times 10^{-4}\) \\ EM-PRBT & \(4.11\times 10^{-1}\) & \(1.02\times 10^{-2}\) & \(4.20\times 10^{-4}\) & \(1.45\times 10^{-4}\) & \(1.42\times 10^{-4}\) \\ \hline \hline \end{tabular} \end{table}
Table 1: Hamiltonian dynamic \(\mathcal{H}_{2}\)-errors of PRBT and EM-PRBT for the mass-spring-damper example with \(X_{\min}\) as Hamiltonian.

Figure 2: Error trajectory of the output and the Hamiltonian.

### Acknowledgments

We thank Prof. Carsten Scherer (U Stuttgart) for valuable comments regarding the SDP formulation of the energy matching problem. P. Schwerdtner acknowledges funding from the DFG within the project 424221635. T. Holicki, J. Nicodemus and B. Unger acknowledge funding from the DFG under Germany's Excellence Strategy - EXC 2075 - 390740016 and are thankful for support by the Stuttgart Center for Simulation Science (SimTech).
2303.18014
Low-Temperature Thermoelectric Performance and Optoelectronic Properties of Monolayer of WX2N4(X = Si, Ge)
We investigated the thermoelectric properties of the 2D monolayer of WX2N4 using Density Functional Theory combined with Boltzmann Transport Equation. We obtained an outstanding thermoelectric figure of merit of 0.91 at 400K for p-type WGe2N4, whether it showed a ZT value of 0.56 for n-type at the same temperature. On the other hand, the WSi2N4 showed significantly low ZT at room temperature.
Chayan Das, Dibyajyoti Saikia, Atanu Betal, Satyajit Sahu
2023-03-31T12:38:27Z
http://arxiv.org/abs/2303.18014v1
Low-Temperature Thermoelectric Performance and Optoelectronic Properties of Monolayer WX\({}_{2}\)N\({}_{4}\) (X = Si, Ge)

## Abstract

Two-dimensional (2D) materials have proved their suitability for thermoelectric applications due to specific quantum confinement and distinct density of states (DOS). The new two-dimensional layered materials WX\({}_{2}\)N\({}_{4}\) (X = Si, Ge) are suitable for thermoelectric applications with quite satisfactory values of ZT. Here we investigated the thermoelectric properties of the 2D monolayers of WX\({}_{2}\)N\({}_{4}\) (X = Si, Ge) using Density Functional Theory (DFT) combined with the Boltzmann Transport Equation (BTE). We obtained an outstanding thermoelectric figure of merit (\(ZT\)) of 0.91 at 400 K for p-type WGe\({}_{2}\)N\({}_{4}\), whereas it showed a \(ZT\) value of 0.56 for n-type at the same temperature. On the other hand, WSi\({}_{2}\)N\({}_{4}\) showed a significantly low ZT at room temperature; however, its \(ZT\) value increases significantly at higher temperatures. We examined the electronic properties and found that the indirect bandgaps (BG) of WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\) are 2.00 eV and 1.15 eV, respectively. In the deep ultraviolet (UV) and UV regions, they displayed very strong absorption. Due to their strong absorption, these materials may be used both in thermoelectric applications and in (solar-blind) UV optoelectronic devices.

## 1 Introduction

The search for materials to fabricate highly efficient thermoelectric generators is a trending research area. Metal chalcogenides, alloys [1, 2, 3, 4, 5, 6], and their oxides [7, 8, 9] make up the majority of the thermoelectric material regime. Among them, Bi\({}_{2}\)Te\({}_{3}\) [10], SnSe [11], and PbTe [12] possess very high \(ZT\) values. After the discovery of graphene, 2D transition metal dichalcogenides (TMDCs) also attracted attention because of their extraordinary optical [13], electronic [14], and thermal [14] properties. Recently, a new 2D monolayer of MoSi\({}_{2}\)N\({}_{4}\) was deposited on the centimeter scale using the CVD method [15], which motivated us to explore the properties of these monolayers. These materials possess very high dynamical stability and excellent mechanical properties [16]. Electricity can be generated from waste heat using such materials; renewable energy sources are crucial in the current period since non-renewable energy sources are rapidly running out. In this work, we investigated the 2D monolayers of WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\) using Boltzmann transport theory in association with DFT, and we found very high Seebeck coefficients (\(S\)). The Seebeck coefficient of a material is the measure of the magnitude of the voltage induced when a temperature difference is introduced between the two ends of the material. For an efficient thermoelectric material, both the electrical conductivity (\(\sigma\)) and \(S\) should be high, and the thermal conductivity (\(k=k_{el}+k_{ph}\)) should be low. The thermal conductivity \(k\) has two components, \(k_{el}\) and \(k_{ph}\), which represent the contributions from electrons and phonons, respectively. Generally, these 2D materials possess high mobility, excellent stability, and good optoelectronic and piezoelectric properties [16]. These materials can be represented by N-X-N-M-N-X-N: the metal atom (M) is sandwiched between two nitrogen (N) atoms, and the whole system is sandwiched between buckled honeycomb XN layers, where X = Si or Ge.
Here M is a transition metal atom (Mo, W, Ti, Cr, etc.). These materials can be synthesized using chemical vapor deposition (CVD); Yi-Lun Hong et al. synthesized MoSi\({}_{2}\)N\({}_{4}\) using CVD [15]. Bohayra Mortazavi predicted mobilities of 490 and 2190 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) for p- and n-type monolayer MoSi\({}_{2}\)N\({}_{4}\) [16]. To date, much research has been carried out on 2D TMDC materials. Huang et al. reported a very low \(ZT\) value (\(<0.2\)), along with a high thermal conductivity of about 60 W/mK, for the MoSe\({}_{2}\) monolayer. S. D. Guo and co-workers reported a \(ZT\) value greater than 0.9 at 600 K for the ZrS\({}_{2}\) monolayer, along with a thermal conductance of 47.8 W/K. Monolayer MoS\({}_{2}\) and WS\({}_{2}\) were also reported with very high \(k_{ph}\) of 23.15 W/mK and 72 W/mK, respectively [17], but monolayer HfS\({}_{2}\) was reported with a very low \(k_{ph}\) of 2.83 W/mK along with a very good \(ZT\), i.e., \(ZT_{HfS_{2}}=0.90\) [18]. Much research is still ongoing to improve the \(ZT\) factor of materials. In this work, we systematically investigated in great detail the electronic, optical, and thermoelectric properties of monolayers of WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\) using DFT and the BTE. For the WGe\({}_{2}\)N\({}_{4}\) monolayer, we found an outstanding \(ZT\) product of 0.91 for p-type (0.56 for n-type) at a rather low temperature of 400 K. In contrast, the WSi\({}_{2}\)N\({}_{4}\) monolayer showed a poor \(ZT\) of 0.26 for p-type (0.05 for n-type). Thus, a theoretical investigation of the thermoelectric and optical properties is necessary to understand the physical and chemical properties that cause the large difference in their thermoelectric efficiency. WSi\({}_{2}\)N\({}_{4}\) showed high absorption in the deep UV region (285 nm), whereas WGe\({}_{2}\)N\({}_{4}\) showed strong absorption in the UV region (306 nm), making the materials usable in UV optoelectronic applications beyond the visible spectrum.

## 2 Methodology

We accomplished the first-principles calculations using DFT with projector augmented wave (PAW) potentials [19, 20] and the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) [21] in the Quantum ESPRESSO (QE) package. To avoid the interaction between two layers, we kept a vacuum of 30 Å between periodic images along the z-direction. The geometry optimization was performed using a 15\(\times\)15\(\times\)1 k-mesh grid. A wave-function energy cutoff of 50 Ry and a self-consistency threshold of 10\({}^{-9}\) Ry were kept in all the calculations. The atoms were relaxed until the force convergence threshold of 3.8\(\times\)10\({}^{-4}\) Ry was achieved. We used the Phonopy package combined with QE to evaluate the phonon dispersion band structure using a 2\(\times\)2\(\times\)1 supercell with a 9\(\times\)9\(\times\)1 k-mesh. The optical properties were evaluated using the SIESTA package, which implements Time-Dependent Density Functional Perturbation Theory (TD-DFPT) [22]. A 48\(\times\)48\(\times\)1 k-mesh was used for the calculation of the optical properties. We obtained the imaginary (\(\varepsilon_{i}\)) and real (\(\varepsilon_{r}\)) parts of the dielectric function using the momentum-space formulation along with the Kramers-Kronig transformation [23]. After that, we obtained the absorption coefficient (\(\alpha\)), refractive index (\(\eta\)), and extinction coefficient (\(K\)) using the following equations.
\[\eta=\left[\frac{\left(\varepsilon_{r}^{2}+\varepsilon_{i}^{2}\right)^{1/2}+\varepsilon_{r}}{2}\right]^{1/2} \tag{1}\]
\[K=\left[\frac{\left(\varepsilon_{r}^{2}+\varepsilon_{i}^{2}\right)^{1/2}-\varepsilon_{r}}{2}\right]^{1/2} \tag{2}\]
\[\alpha=\frac{2K\omega}{c} \tag{3}\]
Here, \(\varepsilon_{r}\), \(\varepsilon_{i}\), \(\omega\), and \(c\) are the real and imaginary parts of the dielectric function, the frequency, and the speed of light, respectively. Thermoelectric parameters were obtained within the constant scattering-time approximation from the BoltzTraP code [24], which solves the Boltzmann transport equation:
\[\sigma_{l,m}(T,\mu)=\frac{1}{\Omega}\int\sigma_{l,m}(\varepsilon)\left[-\frac{\partial f_{\mu}(T,\varepsilon)}{\partial\varepsilon}\right]d\varepsilon \tag{4}\]
\[k_{l,m}(T,\mu)=\frac{1}{e^{2}T\Omega}\int\sigma_{l,m}(\varepsilon)(\varepsilon-\mu)^{2}\left[-\frac{\partial f_{\mu}(T,\varepsilon)}{\partial\varepsilon}\right]d\varepsilon \tag{5}\]
\[S_{l,m}(T,\mu)=\frac{(\sigma^{-1})_{n,l}}{eT\Omega}\int\sigma_{n,m}(\varepsilon)(\varepsilon-\mu)\left[-\frac{\partial f_{\mu}(T,\varepsilon)}{\partial\varepsilon}\right]d\varepsilon \tag{6}\]
Using these equations, we obtained the transport properties. Here, \(\sigma_{l,m}\), \(k_{l,m}\), and \(S_{l,m}\) are the electrical conductivity, thermal conductivity, and Seebeck coefficient, respectively, while \(e\), \(\mu\), \(\Omega\), and \(T\) are the electron charge, chemical potential, unit-cell volume, and temperature, respectively. The phono3py package combined with QE was used to evaluate \(k_{ph}\). In phono3py, a 2\(\times\)2\(\times\)1 supercell with a 9\(\times\)9\(\times\)1 k-mesh was generated, and self-consistent calculations were performed using a default displacement of 0.06 Å.

## 3 Results and Discussion

### Structural Properties and Stability

The WX\({}_{2}\)N\({}_{4}\) (X = Si, Ge) monolayer can be viewed as a WN\({}_{2}\) monolayer sandwiched between two honeycomb (SiN/GeN) layers. The three layers are stacked on each other. The W atom is located at the center of a trigonal prism building block formed by six N atoms, and the WN\({}_{2}\) layer is bonded to the (SiN/GeN) layers via vertically aligned Si-N bonds (figure 1(a)). WX\({}_{2}\)N\({}_{4}\) possesses a hexagonal primitive unit cell with space group P-6m2 (No. 187) [16] (figure 1(b)). We relaxed the unit cells and obtained a lattice constant of a = b = 2.91 Å for WSi\({}_{2}\)N\({}_{4}\), which matches exactly with previously reported results [16]. For WGe\({}_{2}\)N\({}_{4}\), the parameters were a = b = 3.02 Å. The obtained lattice constants, bond lengths, and bond angles for both structures are shown in Table 1. Here we observed that the bond lengths increased in WGe\({}_{2}\)N\({}_{4}\) compared to those of WSi\({}_{2}\)N\({}_{4}\) because the Ge atom has a larger atomic radius than the Si atom. The cohesive energy gives us information about the stability of the structure of the material. We evaluated the cohesive energy (\(E_{ch}\)) for both structures using the following formula: \(E_{ch}=\left\{(E_{W}+2\times E_{X}+4\times E_{N})-E_{WX_{2}N_{4}}\right\}/7\). Here \(E_{WX_{2}N_{4}}\), \(E_{W}\), \(E_{X}\), and \(E_{N}\) are the energy of the monolayer of WX\({}_{2}\)N\({}_{4}\), the energy of a single W atom, the energy of a single X atom, and the energy of a single N atom, respectively. The cohesive energy per atom obtained for the WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\) monolayers was 0.97 eV and 0.25 eV, respectively, which confirms that these structures are thermodynamically stable.
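As a minimal illustration of this formula (our sketch, assuming the total and isolated-atom energies from separate DFT runs are at hand; the function name is ours):

```python
def cohesive_energy_per_atom(E_WX2N4, E_W, E_X, E_N):
    """E_ch = [(E_W + 2 E_X + 4 E_N) - E(WX2N4)] / 7 for 7 atoms per cell."""
    return ((E_W + 2 * E_X + 4 * E_N) - E_WX2N4) / 7
```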
To check the structural stability, we calculated the phonon dispersion curves (shown in figure 2) of WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\) along the high-symmetry path \(\Gamma\)-M-K-\(\Gamma\). No imaginary frequency was found in the phonon dispersion curves, which confirms the dynamical stability of these structures. Since the unit cell of the monolayer contains seven atoms, there are in total twenty-one vibrational modes; the first three are acoustic modes, and the other eighteen are optical modes. The lower three branches correspond to the acoustic vibrational modes: the in-plane longitudinal acoustic (LA) mode, the transverse acoustic (TA) mode, and the out-of-plane (ZA) mode. It was observed that for the WGe\({}_{2}\)N\({}_{4}\) monolayer the optical bands overlap with the acoustic modes, unlike the WSi\({}_{2}\)N\({}_{4}\) monolayer, where the acoustic modes are well separated from the optical ones.

### Electronic Properties

The electronic band structures of the WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\) monolayers were calculated along the \(\Gamma\)-M-K-\(\Gamma\) path within an energy range from -4 eV to 4 eV. Figures 3a and 3b show the electronic band structures of WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\), with corresponding BGs of 2.00 eV and 1.15 eV, respectively, according to PBE. The obtained BG values for both materials match precisely with the previous work by Bohayra Mortazavi et al. [16]. Both band structures show an indirect bandgap (BG), and for both monolayers the conduction band minimum (CBM) is situated at the K point. The valence band maximum (VBM) is located at the \(\Gamma\) point for WSi\({}_{2}\)N\({}_{4}\), but for WGe\({}_{2}\)N\({}_{4}\) the VBM is situated near the \(\Gamma\) point, between the \(\Gamma\) and K points. The DOS and local density of states (LDOS) for different orbitals are shown in figure 4 for both monolayers. For WSi\({}_{2}\)N\({}_{4}\), the contribution to the VBM is mainly from the p orbitals of N and the d orbitals of the W atom, but the CBM is contributed primarily by the d orbitals of the W atom, as shown in figure 4(a). For WGe\({}_{2}\)N\({}_{4}\), the primary contribution to the VBM is mainly from the p orbitals of the N atom, and the contribution to the CBM is primarily from the d orbitals of the W atom, as shown in figure 4(b). It is observed that for both monolayers, N contributes more to the valence band and W contributes more to the conduction band. The p orbitals of Si and Ge contribute almost equally to both the VBM and the CBM.

Figure 3: Band structures of the monolayers of a) WSi\({}_{2}\)N\({}_{4}\) and b) WGe\({}_{2}\)N\({}_{4}\), with PBE as GGA. The arrow signifies the BG, and the green dotted lines show the Fermi level.

Figure 4: Total DOS with LDOS of the monolayers of a) WSi\({}_{2}\)N\({}_{4}\) and b) WGe\({}_{2}\)N\({}_{4}\) plotted as a function of energy.

### Carrier mobility and relaxation time

The carrier mobilities of electrons and holes for WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\) were predicted by Bohayra Mortazavi et al. They predicted electron mobilities (\(\mu_{e}\)) and hole mobilities (\(\mu_{h}\)) of 320 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) and 2026 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) for WSi\({}_{2}\)N\({}_{4}\), and 690 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) and 2490 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) for the WGe\({}_{2}\)N\({}_{4}\) monolayer. The effective mass \(m^{*}=\hbar^{2}\left[\frac{d^{2}E}{dk^{2}}\right]^{-1}\) of a specific charge carrier can be found from the band edges of the band structure. The effective mass of the electron (\(m_{e}\)) can be determined by fitting a parabola at the CBM and taking the second derivative. Similarly, the effective mass of the hole (\(m_{h}\)) can be determined by fitting a parabola at the VBM and taking the second derivative. The relaxation time of the charge carriers was calculated by using the well-known formula \(\tau=\frac{\mu m^{*}}{e}\). All calculated parameters for both crystals are listed in Table 2.

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Crystal & \(m_{e}\) & \(m_{h}\) & \(\tau_{e}\) (\(10^{-14}\) s) & \(\tau_{h}\) (\(10^{-14}\) s) \\ \hline WSi\({}_{2}\)N\({}_{4}\) & \(0.254m_{0}\) & \(0.617m_{0}\) & 6.82 & 71.25 \\ \hline WGe\({}_{2}\)N\({}_{4}\) & \(0.243m_{0}\) & \(0.512m_{0}\) & 11.57 & 72.90 \\ \hline \end{tabular} \end{table}
Table 2: Calculated effective masses (in units of the electron rest mass \(m_{0}\)) and relaxation times for both monolayers.
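The effective-mass and relaxation-time recipe above is simple to script. A minimal sketch (ours; the parabolic-fit window and the unit handling are our assumptions) is:

```python
import numpy as np

HBAR = 1.0546e-34     # J s
M0 = 9.109e-31        # kg, electron rest mass
E_CHARGE = 1.602e-19  # C

def effective_mass(E_eV, k_per_m):
    """m* = hbar^2 [d^2E/dk^2]^{-1} from a parabolic fit near a band edge."""
    a = np.polyfit(k_per_m, E_eV * E_CHARGE, 2)[0]   # E ~ a k^2 + ..., so d2E/dk2 = 2a
    return HBAR**2 / (2 * a)

def relaxation_time(mu_cm2_per_Vs, m_eff):
    """tau = mu m* / e with mobility given in cm^2 V^-1 s^-1."""
    return (mu_cm2_per_Vs * 1e-4) * m_eff / E_CHARGE

# WGe2N4 holes: mu_h = 2490, m_h = 0.512 m0 -> tau ~ 7.3e-13 s (cf. Table 2)
print(relaxation_time(2490, 0.512 * M0))
```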
### Optical properties

Optoelectronic device applications require an understanding of optical properties. The optical properties were investigated along the direction perpendicular to the plane. The \(\varepsilon_{r}\) was obtained using the Kramers-Kronig transformation, and the \(\varepsilon_{i}\) of the dielectric function was calculated using the momentum-space formulation with proper matrix elements. The dielectric functions are plotted as a function of photon energy in figures 5(a) and (b). The \(\varepsilon_{i}\) shows a peak at 3.82 eV for WSi\({}_{2}\)N\({}_{4}\) and at 3.45 eV for WGe\({}_{2}\)N\({}_{4}\). This can be explained by the band structure, as monolayer WGe\({}_{2}\)N\({}_{4}\) has a 0.85 eV lower BG compared to WSi\({}_{2}\)N\({}_{4}\). The secondary peaks were found at 7.27 eV and 6.22 eV, respectively. For \(\varepsilon_{r}\), the peaks were found at 2.55 eV and 1.95 eV for WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\), respectively. The absorption spectra are shown in figure 5(c); the peaks were found at 4.35 eV and 4.05 eV with absorption coefficients (\(\alpha\)) of 2.39\(\times 10^{5}\)/cm and 2.23\(\times 10^{5}\)/cm for WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\), respectively, which are of the order of those of SnI\({}_{2}\) and SiI\({}_{2}\) [25], and of ZrS\({}_{2}\) and ZrSSe [26]. Some secondary peaks are also observed near 8 eV for WSi\({}_{2}\)N\({}_{4}\) and near 10 eV for WGe\({}_{2}\)N\({}_{4}\), with even higher absorption coefficients. The variation of the refractive index (\(\eta\)) with energy is shown in figure 5(d). At zero energy, the refractive index is found to be 1.78 and 1.90 for WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\), respectively. The secondary peaks are found at 2.70 eV and 2.10 eV, and at relatively high energies of 5.85 eV and 5.40 eV, for WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\), respectively. Beyond 12 eV, the refractive indices of WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\) are almost the same.

Figure 5: Representation of the optical properties a) \(\varepsilon_{r}\), b) \(\varepsilon_{i}\), c) \(\alpha\), and d) \(\eta\) with respect to energy.
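Equations (1)-(3) translate directly into a small post-processing helper. The following is our sketch, not the authors' code; SI-consistent units for \(\omega\) and \(c\) are assumed:

```python
import numpy as np

def optical_constants(eps_r, eps_i, omega, c=2.998e10):
    """Refractive index, extinction coefficient, and absorption via Eqs. (1)-(3).

    omega in rad/s and c in cm/s give alpha in 1/cm."""
    mod = np.sqrt(eps_r**2 + eps_i**2)
    eta = np.sqrt((mod + eps_r) / 2)      # Eq. (1)
    K = np.sqrt((mod - eps_r) / 2)        # Eq. (2)
    alpha = 2 * K * omega / c             # Eq. (3)
    return eta, K, alpha
```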
### Thermoelectric Properties

The plot of the Seebeck coefficient (\(S\)) with respect to \(\mu\) at 300 K, 350 K, and 400 K for the monolayer of WSi\({}_{2}\)N\({}_{4}\) is shown in figure 6a. The Seebeck coefficient obtained for the WSi\({}_{2}\)N\({}_{4}\) monolayer at 350 K is 2705.29 \(\mu\)V/K for n-type carriers (\(\mu>0\)) and 2761.84 \(\mu\)V/K for p-type carriers (\(\mu<0\)). Compared to 300 K, \(S\) increased a little at 350 K, but it then decreased at 400 K. The relaxation-time-scaled electrical conductivity (\(\sigma/\tau\)) as a function of \(\mu\) is shown in figure 6b. No significant change was found in \(\sigma/\tau\) for a change in \(T\) of 100 K. The p-type WSi\({}_{2}\)N\({}_{4}\) possesses a much lower \(\sigma/\tau\) compared to the n-type WSi\({}_{2}\)N\({}_{4}\). The relaxation-time-scaled power factor (PF = \(S^{2}\sigma/\tau\)) as a function of \(\mu\) is shown in figure 6c for the WSi\({}_{2}\)N\({}_{4}\) monolayer. The highest PF for the WSi\({}_{2}\)N\({}_{4}\) monolayer was obtained for n-type carriers (10.53\(\times\)10\({}^{10}\) W/m\({}^{2}\)Ks) at 400 K, and for p-type carriers the highest PF obtained was 4.67\(\times\)10\({}^{10}\) W/m\({}^{2}\)Ks, also at 400 K. For the WGe\({}_{2}\)N\({}_{4}\) monolayer, the variation of \(S\), \(\sigma/\tau\), and \(S^{2}\sigma/\tau\) with respect to \(\mu\) is shown in figures 6d, e, and f. Unlike for the WSi\({}_{2}\)N\({}_{4}\) monolayer, \(S\) gradually decreases with an increase in temperature. The highest power factor we obtained for the WGe\({}_{2}\)N\({}_{4}\) monolayer is 8.60\(\times\)10\({}^{10}\) W/m\({}^{2}\)Ks for p-type carriers at 400 K, while for n-type carriers the obtained power factor is 7.51\(\times\)10\({}^{10}\) W/m\({}^{2}\)Ks. So, p-type doping is much more efficient in WGe\({}_{2}\)N\({}_{4}\) for thermoelectric applications. The highest values of \(S\), \(\sigma/\tau\), and \(S^{2}\sigma/\tau\) for both monolayers are listed in Table 3. As the Seebeck coefficient is directly proportional to the BG [26], \(S\) for the WSi\({}_{2}\)N\({}_{4}\) monolayer was found to be higher compared to WGe\({}_{2}\)N\({}_{4}\), which follows the BG trend for the two materials, i.e., WSi\({}_{2}\)N\({}_{4}\) has the higher BG value compared to WGe\({}_{2}\)N\({}_{4}\).

\begin{table} \begin{tabular}{|c|c c|c c|c c|c c|} \hline Crystal & \multicolumn{2}{c|}{Max. Seebeck coefficient \(S\) (\(\mu\)V/K)} & \multicolumn{2}{c|}{Max. conductivity \(\sigma/\tau\) (\(10^{19}\) S/ms)} & \multicolumn{2}{c|}{Max. power factor \(S^{2}\sigma/\tau\) (\(10^{10}\) W/m\({}^{2}\)Ks)} & \multicolumn{2}{c|}{Figure of merit \(ZT\)} \\ & p & n & p & n & p & n & p & n \\ \hline WSi\({}_{2}\)N\({}_{4}\) & 2761.84 & 2705.29 & 1.64 & 4.82 & 4.67 & 10.53 & 0.26 & 0.05 \\ \hline WGe\({}_{2}\)N\({}_{4}\) & 1961.78 & 1935.55 & 4.00 & 5.82 & 8.61 & 7.51 & 0.91 & 0.56 \\ \hline \end{tabular} \end{table}
Table 3: Calculated maximum \(S\), \(\sigma/\tau\), \(S^{2}\sigma/\tau\), and \(ZT\) for the monolayers of WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\).
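For reference, the BoltzTraP-style integrals (4)-(6) behind the quantities in Table 3 can be sketched for a scalar transport distribution as follows (our illustration, not the BoltzTraP source; the Fermi-Dirac window is written in an overflow-safe cosh form):

```python
import numpy as np

def transport_coefficients(eps, sigma_eps, mu, T, Omega):
    """Evaluate Eqs. (4)-(6) on an energy grid eps (J) for a scalar sigma(eps)."""
    kB, e = 1.381e-23, 1.602e-19
    x = (eps - mu) / (kB * T)
    w = 0.25 / (kB * T * np.cosh(x / 2)**2)   # -df/deps for the Fermi-Dirac f
    sigma = np.trapz(sigma_eps * w, eps) / Omega                              # Eq. (4)
    S = np.trapz(sigma_eps * (eps - mu) * w, eps) / (e * T * Omega * sigma)   # Eq. (6)
    k_el = np.trapz(sigma_eps * (eps - mu)**2 * w, eps) / (e**2 * T * Omega)  # Eq. (5)
    return sigma, S, k_el
```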
### Lattice Thermal Conductivity (\(\kappa_{ph}\))

The variation of \(\kappa_{ph}\) with respect to \(T\) for the monolayers of WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\) is shown in figures 7a and 7b. The monolayer of WGe\({}_{2}\)N\({}_{4}\) showed a significantly lower \(\kappa_{ph}\) (3.63 W/(m.K)) at 300 K compared to WSi\({}_{2}\)N\({}_{4}\) (41.9 W/(m.K)); these values are of the same order as, or lower than, those of popular 2D TMDCs like MoS\({}_{2}\) (34.5 W/(m.K)) [27], WS\({}_{2}\) (72 W/(m.K)) [17], SnS\({}_{2}\) (15.85 W/(m.K)) [14], SiSe\({}_{2}\) (15.85 W/(m.K)) [28], etc. A decrease in \(\kappa_{ph}\) with an increase in \(T\) is observed for both structures. The change of the phonon lifetime (\(\tau\)) and group velocity (\(G_{v}\)) as a function of frequency for both monolayers is shown in figure 8. The maximum phonon lifetime of WSi\({}_{2}\)N\({}_{4}\) (figure 8a) is observed to be much higher than that of the WGe\({}_{2}\)N\({}_{4}\) monolayer (figure 8b), which is itself low compared to MoS\({}_{2}\) and WS\({}_{2}\) [29, 30]. The maximum phonon group velocity obtained for the WSi\({}_{2}\)N\({}_{4}\) monolayer is about 11.1 km/s (figure 8c), due to the out-of-plane acoustic mode. Similarly, the maximum \(G_{v}\) obtained for WGe\({}_{2}\)N\({}_{4}\) is about 7.5 km/s (figure 8d), due to the out-of-plane acoustic mode combined with optical modes. As \(\kappa_{ph}\) is proportional to \(G_{v}\) and \(\tau\), the difference in \(\kappa_{ph}\) between the monolayers can be explained easily: both \(\tau\) and \(G_{v}\) for the WSi\({}_{2}\)N\({}_{4}\) monolayer are much higher than those of WGe\({}_{2}\)N\({}_{4}\), so \(\kappa_{ph}\) is much higher in WSi\({}_{2}\)N\({}_{4}\). From the phonon band structure we can observe that for WSi\({}_{2}\)N\({}_{4}\) the acoustic and optical phonon bands do not overlap, which suppresses acoustic-optical phonon scattering and leads to the increase in \(\kappa_{ph}\) for the WSi\({}_{2}\)N\({}_{4}\) monolayer. On the other hand, for WGe\({}_{2}\)N\({}_{4}\), the acoustic and optical phonon bands overlap, which leads to the decrease of \(\kappa_{ph}\) for the WGe\({}_{2}\)N\({}_{4}\) monolayer.

Figure 7: Plot of \(\kappa_{ph}\) with respect to temperature (K) for the monolayers of WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\).

Figure 8: Variation of the phonon lifetime with phonon frequency for (a) the WSi\({}_{2}\)N\({}_{4}\) and (b) the WGe\({}_{2}\)N\({}_{4}\) monolayer. Variation of the group velocity (\(G_{\nu}\)) for the various acoustic and optical modes with frequency for (c) the WSi\({}_{2}\)N\({}_{4}\) and (d) the WGe\({}_{2}\)N\({}_{4}\) monolayer.

### Thermoelectric figure of merit (\(ZT\))

The parameter \(ZT\) signifies the efficiency of a material towards the thermoelectric effect, along with its quality. To get a high \(ZT\) value, we need a high value of \(\sigma\) and a low value of \(\kappa\) (\(\kappa_{el}+\kappa_{ph}\)), as represented in equation (7). The thermoelectric figure of merit (\(ZT\)) is defined as \[ZT=\frac{S^{2}\sigma T}{\kappa_{el}+\kappa_{ph}}\] (7) where \(S\), \(T\), \(\sigma\), \(\kappa_{el}\), and \(\kappa_{ph}\) are as defined earlier.
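Under the constant relaxation-time approximation used here, equation (7) can be evaluated directly from the Boltzmann-transport outputs once a \(\tau\) is supplied. A minimal sketch follows (ours, not the code used in this work); all inputs are illustrative, chosen only to land near the reported \(ZT\sim 0.9\).

```python
def figure_of_merit(S, sigma_over_tau, kappa_el_over_tau, kappa_ph, T, tau):
    """Equation (7): ZT = S^2 * sigma * T / (kappa_el + kappa_ph)."""
    sigma = sigma_over_tau * tau          # electrical conductivity, S/m
    kappa_el = kappa_el_over_tau * tau    # electronic thermal conductivity, W/(m K)
    return S**2 * sigma * T / (kappa_el + kappa_ph)

# Illustrative numbers only (tau of the order of the table 2 values):
zt = figure_of_merit(S=2.0e-4, sigma_over_tau=1.4e18, kappa_el_over_tau=2.0e13,
                     kappa_ph=3.63, T=400.0, tau=7.3e-13)
print(f"ZT ~ {zt:.2f}")  # ~0.9
```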
The obtained \(ZT\) is represented as a function of \(\mu\) for the monolayers of WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\) at 300 K, 350 K, and 400 K in figures 9(a) and (b), respectively. Among them, the WGe\({}_{2}\)N\({}_{4}\) monolayer shows the highest \(ZT\) values, i.e., 0.91 for p-type and 0.56 for n-type at 400 K. On the other hand, the WSi\({}_{2}\)N\({}_{4}\) monolayer showed very low \(ZT\) values of 0.26 for p-type and 0.05 for n-type at 400 K. However, at higher temperatures like 900 K, the \(ZT\) value reaches around 0.69 (supplementary) for the p-type WSi\({}_{2}\)N\({}_{4}\) monolayer. Both structures showed higher \(ZT\) values for p-type doping compared to n-type doping, which signifies the effectiveness of p-type doping. The \(S\) of WSi\({}_{2}\)N\({}_{4}\) is higher than that of WGe\({}_{2}\)N\({}_{4}\), but the \(\sigma\) of WGe\({}_{2}\)N\({}_{4}\) is higher than that of WSi\({}_{2}\)N\({}_{4}\), so the power factors remain comparable; it is \(\kappa_{ph}\) that plays the crucial role in making the WGe\({}_{2}\)N\({}_{4}\) monolayer an excellent material for the thermoelectric effect.

## 4 Conclusion

We calculated the electronic, optical, and thermoelectric properties of the WSi\({}_{2}\)N\({}_{4}\) and WGe\({}_{2}\)N\({}_{4}\) monolayers using DFT and the BTE. The absence of imaginary frequencies in the phonon dispersion curves proved the dynamical stability of both structures. The \(ZT\) and \(\kappa_{ph}\) for WGe\({}_{2}\)N\({}_{4}\) are found to be outstanding for low-temperature operation, whereas WSi\({}_{2}\)N\({}_{4}\) is not very efficient for thermoelectric applications. An excellent \(ZT\) of 0.91 is found for p-type WGe\({}_{2}\)N\({}_{4}\) at 400 K, so the WGe\({}_{2}\)N\({}_{4}\) monolayer with p-type doping can significantly increase the thermoelectric performance. Hence, WGe\({}_{2}\)N\({}_{4}\) can be used in next-generation low-temperature thermoelectric devices to generate electricity from waste heat.

Figure 9: Plot of \(ZT\) at different temperatures (\(T\)) with respect to the chemical potential (\(\mu\)) for the monolayer of (a) WSi\({}_{2}\)N\({}_{4}\) and (b) WGe\({}_{2}\)N\({}_{4}\).

### Acknowledgement:

We are thankful to the Department of Science and Technology (DST), India, for supporting us through the INSPIRE program, and to the Ministry of Human Resource Development (MHRD). We are also grateful to the Indian Institute of Technology Jodhpur for providing the infrastructure to carry out this research.
2309.03309
Tetraquark mass relations in quark and diquark models
We present new linear relations among the masses of S-wave tetraquarks with either one flavour ($QQ \bar Q \bar Q$) or two ($QQ\bar q \bar q$). Because the relations are sensitive to the hidden-colour, spin, and spatial degrees of freedom, comparison to experimental data can help to reveal the internal structure of tetraquarks, and discriminate among different theoretical models. Depending on the model, the relations are either exact, or valid in perturbation theory, and a thorough comparison with existing literature confirms their validity at the MeV level. Additionally, we explore the connections among tetraquark models, and show how those with effective (quark or diquark) masses are related to dynamical potential models. We also show how the spectrum of diquark models is effectively a limiting case of (more general) quark models, and in particular, that the diquark concept is most relevant in the particular combination $QQ\bar q \bar q$, where $Q$ is much heavier than $\bar q$.
Muhammad Naeem Anwar, Timothy J. Burns
2023-09-06T18:41:07Z
http://arxiv.org/abs/2309.03309v2
# Tetraquark mass relations in quark and diquark models ###### Abstract We present new linear relations among the masses of S-wave tetraquarks with either one flavour (\(QQ\bar{Q}\bar{Q}\)) or two (\(QQ\bar{q}\bar{q}\)). Because the relations are sensitive to the hidden-colour, spin, and spatial degrees of freedom, comparison to experimental data can help to reveal the internal structure of tetraquarks, and discriminate among different theoretical models. Depending on the model, the relations are either exact, or valid in perturbation theory, and a thorough comparison with existing literature confirms their validity at the MeV level. Additionally, we explore the connections among tetraquark models, and show how those with effective (quark or diquark) masses are related to dynamical potential models. We also show how the spectrum of diquark models is effectively a limiting case of (more general) quark models, and in particular, that the diquark concept is most relevant in the particular combination \(QQ\bar{q}\bar{q}\), where \(Q\) is much heavier than \(\bar{q}\). ## I Introduction The "November Revolution" provoked by the discovery of \(J/\psi\) [1; 2] is now sometimes known as the "first" charm revolution, owing to a more recent sequence of discoveries which are also deserving of revolutionary status. A characteristic feature of the "second" charm revolution, which started at BaBar and Belle, and is still ongoing at BESIII and the LHC experiments at CERN, is the discovery of states which cannot apparently be understood as ordinary \(q\bar{q}\) mesons or \(qqq\) baryons; the experimental situation is reviewed in Refs. [3; 4; 5]. The new class of hadrons, which includes states in the charm and bottom quark sector, poses a significant challenge to our understanding of the strong interaction. Future experiments, such as Belle II [6] and PANDA [7], are designed to further explore these hadrons. The initial flood of new states became known collectively as the "_XYZ_" states, a name strongly suggestive of their mysterious characteristics. Some of the states are by now so well-established that their nomenclature reflects the standard conventions of the Particle Data Group (PDG) [8] - so, for example, the state \(X(3872)\) [9] which launched the second charm revolution is now known as \(\chi_{c1}(3872)\). Even so, the underlying nature of many of these states is not well understood, and there is considerable ongoing theoretical debate [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. A key dividing line in these discussions is between molecular models and "compact" multiquark models, which have characteristically different degrees of freedom. In molecular models, the constituents are hadrons, whose interactions can be modelled, for example, in terms of pion exchange, or as effective field theory contact terms which are fit to data [22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. Such approaches are essentially an extension into the heavy quark sector of ideas which are widely applied in nuclear physics. The focus of this paper is instead on compact multiquarks, which, by comparison to molecular models, are more "exotic", in the sense that there is no effective description in terms of interacting hadrons. Instead, taking a \(QQ\bar{q}\bar{q}\) tetraquark as an example, the relevant degrees of freedom are typically assumed to be four interacting quarks, or alternatively, effective \(QQ\) and \(\bar{q}\bar{q}\) diquarks.
(Here \(Q\) and \(q\) are not necessarily heavy and light quarks, but rather, any distinct quark flavours.) One of our main motivations in this paper is to distinguish between these two physical pictures, which we refer to as quark models and diquark models, respectively. (We give citations to the relevant literature in the main body of the paper, where the different models are described in more detail.) Models for compact multiquarks have parameters which are typically not well-constrained, as they are usually fixed by comparison to the spectrum of conventional mesons and baryons, which introduces a systematic uncertainty which is difficult to quantify. Absolute predictions for the masses of states are therefore not very reliable, and moreover, they cannot be used to distinguish between quark and diquark models, whose predictions are similar within (large) uncertainties. By contrast, predictions for relations among masses (or mass splittings) are more general and, in some cases, are completely independent of parameters. Such relations, which are the main focus of this paper, obviously have more predictive power, and allow for more direct tests of model assumptions. For conventional hadrons, the Gell-Mann-Okubo formula [32; 33; 34; 35] is a prototypical example of an empirically successful relation among hadron masses. Additional relations among the masses of conventional mesons and baryons have also been discovered and compared favourably to experimental data, for example in Refs. [36; 37; 38; 39; 40; 41; 42]. In this paper we uncover similar relations among the masses of tetraquark states, and since they are based on similar symmetry arguments, we expect them to be equally reliable. An analogy with ordinary \(Q\bar{Q}\) mesons is instructive. In that case, absolute mass predictions (in quark potential models) have considerable uncertainty, but a linear relation among the masses in the \(P\)-wave multiplet is very reliable and is satisfied in experiments to less than an MeV [43; 44; 45; 46; 47], and also served as a benchmark for exotic structures in that mass region [48; 49]. The relations we find in this paper are conceptually very similar. Note that in this paper we are concentrating on relations _among_ the masses of tetraquark states, as distinct from relations between their masses and those of conventional mesons, which have also been discussed in the literature [50; 51; 52], but remain to be confirmed experimentally. Our main results in this paper are for tetraquarks with either two flavours in the combination \(QQ\bar{q}\bar{q}\), or just one flavour \(QQ\bar{Q}\bar{Q}\). (For specific examples, \(Q\) can be regarded as a heavy quark flavour; however, \(q\) is not necessarily a light quark, but rather, a distinct quark flavour from \(Q\).) Our interest in these particular combinations is partly due to recent experimental discoveries and lattice calculations. The one-flavour case has been a particularly hot topic recently, owing to a sequence of experimental observations showing apparent candidates for \(cc\bar{c}\bar{c}\) states in \(J/\psi J/\psi\) decays [53; 54; 55]; many of the results of this paper can be usefully applied to the phenomenology of these states [56]. Similarly, there is a growing body of evidence from experiment, lattice QCD and models, for the likely binding of \(cc\bar{q}\bar{q}^{\prime}\) tetraquarks, where \(\bar{q}\) and \(\bar{q}^{\prime}\) are light flavours [50; 51; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73].
Note however that our results would only apply where \(\bar{q}\) and \(\bar{q}^{\prime}\) are identical or, from isospin symmetry, are an isovector \(\bar{u}\bar{d}\) combination. The possible mixing between a compact tetraquark and a molecular state (such as explored in [74; 75]) is not considered here. A possible caveat is that the potential for such a mixing introduces some additional parameter(s), and the derived mass relations will be subject to some level of model dependence. We begin (Section II) with some remarks on the main distinguishing feature of quark and diquark models, namely the assumed colour wavefunctions. We then discuss quark models (Section III), showing that the chromomagnetic quark model (with effective quark masses) can be obtained in a symmetry limit from the quark potential model. We then do a similar exercise for diquark models (Section IV), and show how these are related to quark models in a truncated basis of colour. Specialising to tetraquarks with one or two flavours (Section V), we derive formulae for the masses of states in both quark and diquark models, and show how the diquark model emerges as a limiting case of quark models. Using the mass formulae, we identify linear relations among tetraquark masses (Section VI), and show how these can discriminate among models. Finally (Section VII) we summarise our results and suggest how they may be used to inform comparisons with emerging experimental data. ## II Colour, spin and flavour The key distinction between quark and diquark models is the treatment of colour. A pair of quarks can be coupled to colour \(\bar{\mathbf{3}}\) or \(\mathbf{6}\), while a pair of antiquarks can be coupled to \(\mathbf{3}\) or \(\bar{\mathbf{6}}\). To form an overall colour singlet, the possible combinations are then \(\bar{\mathbf{3}}\otimes\mathbf{3}\) or \(\mathbf{6}\otimes\bar{\mathbf{6}}\). A basic assumption of quark models is that both possibilities should be considered, and in general, a quark model state can be an admixture of the two. In diquark models, by construction, only the \(\bar{\mathbf{3}}\otimes\mathbf{3}\) configuration is included [76; 77]. As well as the treatment of colour, models are also distinguished according to whether the constituents (quarks or diquarks) have effective masses, or instead are dynamical objects whose contribution to the tetraquark mass is obtained from the Schrodinger equation with some confining potential. We will consider both of these approaches, and the relation between them. In this paper we concentrate on tetraquarks with either two flavours (in the combination \(QQ\bar{q}\bar{q}\)), or one (\(QQ\bar{Q}\bar{Q}\)). Both systems are subject to the same constraints, from the Pauli principle, on the allowed spin and colour configurations. With reference to the \(QQ\bar{q}\bar{q}\) system, an S-wave \(QQ\) pair can have (colour, spin) quantum numbers (\(\bar{\bf 3}\),1) or (\({\bf 6}\),0), while an S-wave \(\bar{q}\bar{q}\) pair can be (\({\bf 3}\),1) or (\(\bar{\bf 6}\),0).
Forming an overall colour singlet, and combining the spins in S-wave to angular momentum \(J\), the allowed combinations (and their \(J^{P(C)}\) quantum numbers) are \[\left|\varphi_{2}\right\rangle=\left|\{(QQ)^{1}_{\bar{\bf 3}}(\bar{q}\bar{q})^{1}_{\bf 3}\}^{2}\right\rangle\quad[2^{+(+)}], \tag{1}\] \[\left|\varphi_{1}\right\rangle=\left|\{(QQ)^{1}_{\bar{\bf 3}}(\bar{q}\bar{q})^{1}_{\bf 3}\}^{1}\right\rangle\quad[1^{+(-)}], \tag{2}\] \[\left|\varphi_{0}\right\rangle=\left|\{(QQ)^{1}_{\bar{\bf 3}}(\bar{q}\bar{q})^{1}_{\bf 3}\}^{0}\right\rangle\quad[0^{+(+)}], \tag{3}\] \[\left|\varphi_{0}^{\prime}\right\rangle=\left|\{(QQ)^{0}_{\bf 6}(\bar{q}\bar{q})^{0}_{\bar{\bf 6}}\}^{0}\right\rangle\quad[0^{+(+)}], \tag{4}\] where on the right-hand side, the subscripts are colour, and superscripts are spin. The charge conjugation quantum number \(C\) is relevant only for the one-flavour case, corresponding to \(Q=q\). When counting the number of distinct quark flavours, we can treat \(u\) and \(d\) quarks as identical if they come in the isovector (symmetric) combination, since they are subject to the same constraints from the Pauli principle outlined above. So, for example, results we obtain for \((I,I_{3})=(1,\pm 1)\) states \(QQ\bar{d}\bar{d}\) and \(QQ\bar{u}\bar{u}\) apply equally to the \((I,I_{3})=(1,0)\) partner \(QQ\bar{u}\bar{d}\), but would not apply to an \((I,I_{3})=(0,0)\) partner. Diquark models are characterised by the inclusion of only colour triplet combinations, meaning the spectrum has three states (\(\varphi_{2}\), \(\varphi_{1}\) and \(\varphi_{0}\)). Quark models, by contrast, include both the colour triplet and colour sextet combinations, so there are four states, namely \(\varphi_{2}\), \(\varphi_{1}\), and two scalars, which are admixtures of \(\varphi_{0}\) and \(\varphi_{0}^{\prime}\). Obviously an experimental determination of the number of scalar states can distinguish diquark models (one state) from quark models (two). ## III Quark models In the chromomagnetic quark model, also known as the colour-magnetic interaction (CMI) model, the quark constituents have effective (rather than dynamical) masses, and the splitting among the S-wave states is induced by chromomagnetic interactions (one-gluon exchange) [78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92]. The model has been widely applied to exotic hadron spectroscopy, as reviewed in Ref. [14]. The Hamiltonian for S-wave states is \[H=\overline{M}-\sum_{i<j}C_{ij}\;\mathbf{\lambda}_{i}\cdot\mathbf{\lambda}_{j}\;\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}, \tag{5}\] where the centre of mass \[\overline{M}=\sum_{i}m_{i} \tag{6}\] is the sum of quark masses, \(\mathbf{\lambda}_{i}\) and \(\mathbf{\sigma}_{i}\) are the \(SU(3)\) colour and \(SU(2)\) spin (Pauli) matrices of quark \(i\), and \(C_{ij}\) are (positive) parameters which depend on quark flavours. The eigenstates of \(H\) are, in general, admixtures of \(\bar{\bf 3}\otimes{\bf 3}\) and \({\bf 6}\otimes\bar{\bf 6}\) colour configurations, with mixing induced by the \(\mathbf{\lambda}_{i}\cdot\mathbf{\lambda}_{j}\) term. The parameters \(\overline{M}\) and \(C_{ij}\) are typically fixed by applying the same Hamiltonian to the spectrum of conventional mesons and/or baryons, and fitting.
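As an illustration of this fitting step (ours, not taken from the paper), consider how a single coupling can be extracted from a conventional meson: for a colour-singlet quark-antiquark pair, \(\langle\mathbf{\lambda}_{i}\cdot\mathbf{\lambda}_{j}\rangle=-16/3\) and \(\langle\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}\rangle=+1\) (\(S=1\)) or \(-3\) (\(S=0\)), so the Hamiltonian (5) gives a vector-pseudoscalar splitting of \((64/3)C_{Q\bar{Q}}\). A minimal sketch, using rounded PDG charmonium masses purely as an example:

```python
# Minimal sketch (ours): fixing a chromomagnetic coupling from a meson splitting.
# For a colour-singlet Q-Qbar pair, <lambda.lambda> = -16/3, and <sigma.sigma>
# is +1 for S=1 and -3 for S=0, so M(1-) - M(0-) = (64/3) * C_QQbar.
M_JPSI = 3096.9   # J/psi mass in MeV (PDG, rounded)
M_ETAC = 2983.9   # eta_c mass in MeV (PDG, rounded)

C_ccbar = (M_JPSI - M_ETAC) * 3.0 / 64.0
print(f"C_ccbar ~ {C_ccbar:.1f} MeV")  # ~5.3 MeV
```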
An explicit assumption is that the same coefficients \(C_{ij}\) control the interactions between any pair of flavours \(i\) and \(j\), either as a quark-quark (\(q_{i}q_{j}\)) or quark-antiquark (\(q_{i}\bar{q}_{j}\)) pair\({}^{1}\), and regardless of whether these pairs are in a tetraquark, or in a conventional meson or baryon. In some cases, it is further assumed that the coefficients \(C_{ij}\) scale inversely with quark masses, Footnote 1: \(q_{i,j}\) stands for any quark flavour, either heavy or light, throughout. \[C_{ij}=\frac{c}{m_{i}m_{j}}, \tag{7}\] for some constant \(c\) (which will be identified later in this section). Quark potential models [57; 58; 59; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114; 115; 116] are a widely-used (and somewhat more rigorous) alternative approach, in which the quark constituents are dynamical, rather than having effective masses. A typical quark model Hamiltonian \[H=T+V+U, \tag{8}\] has a potential with chromoelectric (\(V\)) and chromomagnetic (\(U\)) contributions \[V=-\sum_{i<j}\mathbf{\lambda}_{i}\cdot\mathbf{\lambda}_{j}\;v(r_{ij}), \tag{9}\] \[U=-\sum_{i<j}\mathbf{\lambda}_{i}\cdot\mathbf{\lambda}_{j}\;\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}\;u(r_{ij}), \tag{10}\] whose radial parts are typically (but not necessarily) of the form \[v(r_{ij})=\frac{3}{16}\left(b\,r_{ij}-\frac{4}{3}\frac{\alpha_{s}}{r_{ij}}+c_{0}\right), \tag{11}\] \[u(r_{ij})=\frac{\pi}{6}\,\frac{\alpha_{s}}{m_{i}m_{j}}\,\delta^{3}(r_{ij})\,, \tag{12}\] where \(b\) and \(\alpha_{s}\) are the strengths of the confining force (string tension) and color-Coulomb potential, respectively, and \(c_{0}\) is a mass renormalization constant. Numerical values of these parameters can be extracted from the hadron spectrum [69; 109; 116; 96]. Comparing the two models, it is clear that \(U\) in the potential model is closely related to the interaction term in the chromomagnetic model (5). To understand the relationship between the models, we treat \(U\) as a perturbation and consider the Hamiltonian \[\overline{H}=T+V\,, \tag{13}\] whose eigenstates are \(\varphi_{0}\), \(\varphi_{1}\), \(\varphi_{2}\) and \(\varphi_{0}^{\prime}\) introduced in equations (1)-(4). (There is no term in \(\overline{H}\) which mixes \(\varphi_{0}\) and \(\varphi_{0}^{\prime}\), due to the orthogonality of the spin wavefunctions.) Because \(\overline{H}\) depends on colour but not spin, there is degeneracy among the states \(\varphi_{0}\), \(\varphi_{1}\), and \(\varphi_{2}\), but not between these and \(\varphi_{0}^{\prime}\), \[\big{\langle}\varphi_{J}\big{|}\overline{H}\big{|}\varphi_{J}\big{\rangle}\neq\big{\langle}\varphi_{0}^{\prime}\big{|}\overline{H}\big{|}\varphi_{0}^{\prime}\big{\rangle}\,. \tag{14}\] We will now point out that, in order to make the connection with the chromomagnetic model, an extra symmetry constraint is required, which restores this degeneracy. The matrix elements of \(V\), using the colour matrix elements in Ref. [108], are \[\big{\langle}\varphi_{J}\big{|}V\big{|}\varphi_{J}\big{\rangle}=\frac{8}{3}\big{\langle}v(r_{12})+v(r_{34})\big{\rangle}+\frac{4}{3}\big{\langle}v(r_{13})+v(r_{14})+v(r_{23})+v(r_{24})\big{\rangle}\,, \tag{15}\] \[\big{\langle}\varphi_{0}^{\prime}\big{|}V\big{|}\varphi_{0}^{\prime}\big{\rangle}=-\frac{4}{3}\big{\langle}v(r_{12})+v(r_{34})\big{\rangle}+\frac{10}{3}\big{\langle}v(r_{13})+v(r_{14})+v(r_{23})+v(r_{24})\big{\rangle} \tag{16}\] where an integral over all spatial degrees of freedom is implied.
From the symmetries in \(T\) and \(V\), the ground state wavefunctions are symmetric under the interchange of quarks (\(1\leftrightarrow 2\)), antiquarks (\(3\leftrightarrow 4\)), or both (\(12\leftrightarrow 34\)), so the spatial integral can be reduced to two independent terms \[\big{\langle}\varphi_{J}\big{|}V\big{|}\varphi_{J}\big{\rangle}=\frac{16}{3}\big{\langle}v(r_{12})\big{\rangle}+\frac{16}{3}\big{\langle}v(r_{13})\big{\rangle}\,, \tag{17}\] \[\big{\langle}\varphi_{0}^{\prime}\big{|}V\big{|}\varphi_{0}^{\prime}\big{\rangle}=-\frac{8}{3}\big{\langle}v(r_{12})\big{\rangle}+\frac{40}{3}\big{\langle}v(r_{13})\big{\rangle}\,. \tag{18}\] Note however that the wavefunction does not have an additional symmetry under the interchange of a quark and antiquark (such as \(2\leftrightarrow 3\)), so in general, no further simplification is possible. This applies not only to states with two flavours (\(QQ\bar{q}\bar{q}\)), but also states with one flavour (\(QQ\bar{Q}\bar{Q}\)): the Hamiltonian does not impose a symmetry under \(Q\leftrightarrow\bar{q}\) or \(Q\leftrightarrow\bar{Q}\), so the wavefunction does not have that symmetry. It turns out, however, that this additional symmetry is often imposed on the wavefunction as an artefact of the calculation. In particular this is often the case when the Gaussian Expansion Method is applied to tetraquarks with one flavour (\(QQ\bar{Q}\bar{Q}\)), as in for example Refs. [114; 116]. If we impose this extra symmetry (under \(2\leftrightarrow 3\)), the spatial integral reduces further, and the potentials for all states are identical \[\big{\langle}\varphi_{J}\big{|}V\big{|}\varphi_{J}\big{\rangle}=\big{\langle}\varphi_{0}^{\prime}\big{|}V\big{|}\varphi_{0}^{\prime}\big{\rangle}=\frac{32}{3}\big{\langle}v(r_{12})\big{\rangle}\,, \tag{19}\] which further implies, as distinct from the general case (14), that the eigenstates of \(\overline{H}\) are degenerate. Identifying \(\overline{M}\) in equation (5) as the corresponding eigenvalue, \[\overline{M}=\big{\langle}\overline{H}\big{\rangle}\,, \tag{20}\] we see that a perturbative treatment of the full quark model Hamiltonian \[H=\overline{H}+U\,, \tag{21}\] is equivalent to the chromomagnetic model (5), where the coefficients \(C_{ij}\) are obtained from \(u(r_{ij})\) by integrating over the spatial wavefunctions of the eigenstates of \(\overline{H}\), \[C_{ij}=\big{\langle}u(r_{ij})\big{\rangle}\,. \tag{22}\] Note that with this interpretation, the centre of mass term \(\overline{M}\) is no longer just the sum of quark masses, but also absorbs the dynamical contributions from the potential model, namely the kinetic energy and the confining term. Also, the coefficients \(C_{ij}\) depend not only on quark masses, as in equation (7), but also depend on the spatial wavefunction of the quarks. In the symmetry limit we are working in, \(\big{\langle}r_{ij}\big{\rangle}\) is independent of \(i\) and \(j\), hence so is \(C_{ij}\). This validates the assumption, in the chromomagnetic model, that the same \(C_{ij}\) can be used for any pair of flavours \(i\) and \(j\) in a tetraquark, both quark-quark (\(q_{i}q_{j}\)) and quark-antiquark (\(q_{i}\bar{q}_{j}\)) pairs. However it does not establish that one can use the same \(C_{ij}\) in tetraquarks as in conventional mesons and baryons: this remains an assumption of the model, which could however be tested by evaluating equation (22) and the corresponding expression for mesons and baryons.
With the specific form of \(u(r_{ij})\) in equation (12), we reproduce the inverse dependence of \(C_{ij}\) on quark masses as in equation (7), with \[c=\frac{\pi}{6}\,\alpha_{s}\big{\langle}\delta^{3}(r_{ij})\big{\rangle}\,. \tag{23}\] Note that, in the symmetry limit we are working in, \(c\) is indeed constant, in the sense of being independent of the flavours \(i\) and \(j\) within the tetraquark. ## IV Diquark models A widespread implementation of the diquark model [117; 118; 119; 120; 121; 122; 123] has a Hamiltonian which is very similar to that of the chromomagnetic model, \[H=\overline{M}+2\sum_{i<j}\kappa_{ij}\ \mathbf{S}_{i}\cdot\mathbf{S}_{j}\,, \tag{24}\] where \(\overline{M}\) is the sum of quark or diquark effective masses, \(\mathbf{S}_{i}=\mathbf{\sigma}_{i}/2\) is the spin of quark \(i\), and \(\kappa_{ij}\) are (positive) parameters which depend on quark flavours and which, unlike the parameters \(C_{ij}\) of the chromomagnetic model, are not assumed to be the same for quark-quark (\(q_{i}q_{j}\)) and quark-antiquark (\(q_{i}\bar{q}_{j}\)) combinations. Otherwise, the only distinction between the diquark model and the chromomagnetic model is the use of a truncated colour basis \(\bar{\mathbf{3}}\otimes\mathbf{3}\). If we evaluate the chromomagnetic Hamiltonian (5) in the same basis, the two models are equivalent provided their couplings are related \[\kappa_{ij}=-2\big{\langle}\mathbf{\lambda}_{i}\cdot\mathbf{\lambda}_{j}\big{\rangle}C _{ij} \tag{25}\] namely \[\kappa_{ij}=\begin{cases}\frac{16}{3}C_{ij},&\text{for $q_{i}q_{j}$,}\\ \frac{8}{3}C_{ij},&\text{for $q_{i}\bar{q}_{j}$,}\end{cases} \tag{26}\] where we have used the colour matrix elements in Ref. [108]. In this sense, the diquark model is identical to the chromomagnetic quark model, but evaluated in a truncated colour basis. We will later use this property to extract the spectrum of the diquark model as a limiting case of the chromomagnetic quark model. Referring to the Hamiltonian (24) as a diquark model is somewhat counterintuitive, since the spin degrees of freedom are actually quarks (not diquarks). The distinction turns out not to be important for the particular flavour combinations of tetraquarks which are the focus of this paper. For \(QQ\bar{q}\bar{q}\) states there are three independent couplings \[\kappa_{QQ} \equiv\kappa_{12}\,, \tag{27}\] \[\kappa_{qq} \equiv\kappa_{34}\,,\] (28) \[\kappa_{Q\bar{q}} \equiv\kappa_{13}=\kappa_{14}=\kappa_{23}=\kappa_{24}\,, \tag{29}\] with the obvious simplification to two couplings (\(\kappa_{QQ}\) and \(\kappa_{Q\bar{Q}}\)) in the special case \(QQ\bar{Q}\bar{Q}\). We consider the (more general) \(QQ\bar{q}\bar{q}\) case in detail. With the couplings above, the Hamiltonian reduces to \[H=\overline{M}+\frac{1}{2}\left(\kappa_{QQ}+\kappa_{qq}\right)+2\kappa_{Q\bar{ q}}\ \mathbf{S}_{12}\cdot\mathbf{S}_{34}\,, \tag{30}\] where here we have evaluated \[\left\langle\mathbf{S}_{1}\cdot\mathbf{S}_{2}\right\rangle=\left\langle \mathbf{S}_{3}\cdot\mathbf{S}_{4}\right\rangle=\frac{1}{4}\,, \tag{31}\] as appropriate to spin-1 diquarks. The key feature is that the spin-dependence of the Hamiltonian is now expressed in terms of effective diquark spin operators \[\mathbf{S}_{12} =\mathbf{S}_{1}+\mathbf{S}_{2}\,, \tag{32}\] \[\mathbf{S}_{34} =\mathbf{S}_{3}+\mathbf{S}_{4}\,, \tag{33}\] corresponding to the total spin of the \(QQ\) and \(\bar{q}\bar{q}\) diquarks, respectively. 
In this sense, the Hamiltonian defined at quark level can be naturally interpreted in terms of diquark degrees of freedom. But this is a peculiarity of the flavour combination \(QQ\bar{q}\bar{q}\) (or \(QQ\bar{Q}\bar{Q}\)), which leads to equation (30). For other combinations of flavours, such as \(QQ\bar{Q}\bar{q}\) and \(Qq\bar{Q}\bar{q}\), the same does not apply, and in general an effective diquark description does not emerge in the same way; this is because as well as effective diquark operators like (32) and (33), there are other operators \(\mathbf{S}_{1}-\mathbf{S}_{2}\) and/or \(\mathbf{S}_{3}-\mathbf{S}_{4}\) which mix "diquarks" with different spin. Diquark potential models are more explicit in treating diquarks as effective degrees of freedom. Here the mass spectrum of tetraquark states comes from the Schrodinger equation, in which diquarks are massive (colour-triplet) objects interacting through a confining potential [124; 125; 126; 127; 128; 129; 130; 131; 132; 52; 133]. The distinctions with the previous model are that the mass spectrum comes from the Schrodinger equation (rather than effective masses fit to data), and that the spin dependence is expressed from the outset in terms of diquark (not quark) spin operators. To clarify the relation between the different diquark models, we follow a similar procedure to our previous discussion of quark models. In the Hamiltonian, \[H=T+V+U\,, \tag{34}\] we isolate the spin-independent kinetic (\(T\)) and confining (\(V\)) terms, and a spin-dependent term (\(U\)), which in all models is expressed (for S-wave states) in terms of diquark spin operators, \[U=u(r)\;\mathbf{S}_{12}\cdot\mathbf{S}_{34}\,, \tag{35}\] where here \(r\) is the radial component of the vector joining the diquark and antidiquark. There is considerable variation among the different approaches to diquark potential models, for example, in how the effective diquark mass is obtained, the assumed form of the confining potential \(V\), and the precise form of the radial component \(u(r)\) of the spin-spin term. However these differences are immaterial to the discussion. As in the quark model case, we treat \(U\) as a perturbation. If we identify the eigenvalues of the spin-independent part \[\overline{H}=T+V \tag{36}\] with the spin-independent term in equation (30), \[\left\langle\overline{H}\right\rangle\equiv\overline{M}+\frac{1}{2}\left( \kappa_{QQ}+\kappa_{qq}\right), \tag{37}\] we notice that a perturbative treatment of the full Hamiltonian (34) is equivalent to the previous diquark model, with the couplings defined as integrals over the eigenstates of \(\overline{H}\), \[\kappa_{Q\bar{q}}\equiv\frac{1}{2}\big{\langle}u(r)\big{\rangle}\,. \tag{38}\] ## V Mass formulae At this stage we have established the underlying connections among four different classes of models, distinguished according to the colour structure (quarks versus diquarks), and the mass spectrum (effective masses versus dynamical masses from the Schrodinger equation). Among the models with effective masses, we showed that the quark model (5) and diquark model (24) are equivalent, except that the latter is evaluated in a truncated colour basis. We also showed that each of these models can be understood by applying perturbation theory to a corresponding (quark or diquark) potential model, which gives a dynamical interpretation for the effective masses, and implies that the couplings are sensitive to the spatial wavefunctions. 
Having established these connections among models, we will now obtain some general results which apply to all four classes of model. As a framework for our calculation, we will use the most general Hamiltonian (5). By evaluating its spectrum in the full and truncated colour basis, respectively, we get results which correspond to quark and diquark models. When applying the Hamiltonian (5) to \(QQ\bar{q}\bar{q}\) states, there are three possible couplings, \[C_{QQ} =C_{12}\,, \tag{39}\] \[C_{qq} =C_{34}\,,\] (40) \[C_{Q\bar{q}} =C_{13}=C_{14}=C_{23}=C_{24}\,, \tag{41}\] whereas for \(QQ\bar{Q}\bar{Q}\) states there are only two, \[C_{QQ} =C_{12}=C_{34}\,, \tag{42}\] \[C_{Q\bar{Q}} =C_{13}=C_{14}=C_{23}=C_{24}\,. \tag{43}\] We will discuss the more general \(QQ\bar{q}\bar{q}\) case, noting that \(QQ\bar{Q}\bar{Q}\) is then a special case with \(Q=q\). We need the matrix elements of \(H\) with respect to the basis states (1)-(4). Using, for example, the colour and spin matrix elements of Ref. [108], we find \[\left\langle\varphi_{J}\middle|H\middle|\varphi_{J}\right\rangle =\overline{M}+\frac{8}{3}\left(C_{QQ}+C_{qq}\right)+\frac{8}{3}C_ {Q\bar{q}}\left[J(J+1)-4\right]\,, \tag{44}\] \[\left\langle\varphi_{0}^{\prime}\middle|H\middle|\varphi_{0}^{ \prime}\right\rangle =\overline{M}+4(C_{QQ}+C_{qq})\,,\] (45) \[\left\langle\varphi_{0}^{\prime}\middle|H\middle|\varphi_{0}\right\rangle =-8\sqrt{6}C_{Q\bar{q}}\,, \tag{46}\] which is consistent with the results of Refs. [84; 89]. Note that \(C_{QQ}\) and \(C_{qq}\) appear only in the combination \(C_{QQ}+C_{qq}\), and because of this, it turns out to be convenient to introduce the dimensionless ratio \[R=\frac{2C_{Q\bar{q}}}{C_{QQ}+C_{qq}} \tag{47}\] which, in the case of \(QQ\bar{Q}\bar{Q}\) states, reduces to the simpler ratio of \(Q\bar{Q}\) and \(QQ\) couplings, \[R=\frac{C_{Q\bar{Q}}}{C_{QQ}}\,. \tag{48}\] If the couplings are parameterised as in equation (7), then for two-flavour states (\(QQ\bar{q}\bar{q}\)), \(R\) depends only on the ratio of quark masses \(m_{q}\) and \(m_{Q}\), \[R=\frac{2m_{q}/m_{Q}}{1+(m_{q}/m_{Q})^{2}}\,, \tag{49}\] and takes values in the range \(0<R<1\), while for one-flavour states (\(QQ\bar{Q}\bar{Q}\)) obviously \(R=1\). As discussed, the spectrum of the diquark model comes from truncating the basis to include only the hidden colour-triplet states, namely \(\varphi_{2}\), \(\varphi_{1}\) and \(\varphi_{0}\), but not \(\varphi_{0}^{\prime}\). Evaluating equation (44), the masses of the tensor (\(M_{2}\)), axial (\(M_{1}\)) and scalar (\(M_{0}\)) are \[M_{2} =\overline{M}+\frac{8}{3}\left(C_{QQ}+C_{qq}\right)\left(1+R \right), \tag{50}\] \[M_{1} =\overline{M}+\frac{8}{3}\left(C_{QQ}+C_{qq}\right)\left(1-R \right),\] (51) \[M_{0} =\overline{M}+\frac{8}{3}\left(C_{QQ}+C_{qq}\right)\left(1-2R \right). \tag{52}\] To get results for the quark model, we expand the basis to include \(\varphi_{0}^{\prime}\) which implies, as discussed previously, that the spectrum includes two scalar states. 
The masses of the tensor (\(M_{2}\)) and axial (\(M_{1}\)) are as above, but the masses of the scalars (\(M_{0}\) and \(M_{0}^{\prime}\)) are the eigenvalues of \[H=\overline{M}+\left(C_{QQ}+C_{qq}\right)\begin{pmatrix}\frac{8}{3}(1-2R)&-4 \sqrt{6}R\\ -4\sqrt{6}R&4\end{pmatrix}, \tag{53}\] namely \[M_{0} =\overline{M}+\frac{2}{3}\left(C_{QQ}+C_{qq}\right)\left(5-4R- \Delta\right), \tag{54}\] \[M_{0}^{\prime} =\overline{M}+\frac{2}{3}\left(C_{QQ}+C_{qq}\right)\left(5-4R+ \Delta\right), \tag{55}\] where \[\Delta=\sqrt{232R^{2}+8R+1}\,, \tag{56}\] and we are adopting the convention that \(M_{0}^{\prime}>M_{0}\). The eigenstates \(\psi_{0}\) and \(\psi_{0}^{\prime}\) corresponding to masses \(M_{0}\) and \(M_{0}^{\prime}\) are orthogonally mixed \[\big{|}\psi_{0}\big{\rangle} =\cos\theta\big{|}\varphi_{0}\big{\rangle}+\sin\theta\big{|} \varphi_{0}^{\prime}\big{\rangle}\,, \tag{57}\] \[\big{|}\psi_{0}^{\prime}\big{\rangle} =-\sin\theta\big{|}\varphi_{0}\big{\rangle}+\cos\theta\big{|} \varphi_{0}^{\prime}\big{\rangle}\,, \tag{58}\] with mixing angle \[\theta=\tan^{-1}\left(\frac{\Delta-1-4R}{6\sqrt{6}R}\right). \tag{59}\] The mass formulae above imply unambiguous orderings for the masses of states, regardless of parameters. For diquark models, the ordering is \[M_{0}<M_{1}<M_{2}\,, \tag{60}\] whereas in quark models it is \[M_{0}<M_{1}<M_{2}<M_{0}^{\prime}\,. \tag{61}\] This can help to assign quantum numbers to experimental candidates, as discussed in Ref. [100]. The results in this section are exact for (quark or diquark) models with effective masses, whereas for potential models (whether quark or diquark), they apply in the limit of perturbation theory. In the particular case of the quark potential model, there is an additional caveat: recalling the discussion at the end of Section III, the results derived above are valid only subject to the additional assumption that the spatial wavefunction of the tetraquark is totally symmetric under the interchange \(Q\leftrightarrow\bar{q}\) (or \(Q\leftrightarrow\bar{Q}\), in the one flavour case). As discussed, in many papers this assumption applies (even if implicitly), and we have found that in such cases (for example Refs. [114; 116]) the results agree with all of our results above (for masses, mixing angles, and mass orderings). In quark model studies which do not use that assumption, there are some differences, which are immediately apparent in violations of the mass ordering (61), for example in Ref. [112]. There is an intriguing connection between quark and diquark models in the limit of small \(R\). In this limit \(\Delta\approx 1+4R\), meaning the scalar masses are \[M_{0} \approx\overline{M}+\frac{8}{3}\left(C_{QQ}+C_{qq}\right)\left(1- 2R\right), \tag{62}\] \[M_{0}^{\prime} \approx\overline{M}+4\left(C_{QQ}+C_{qq}\right)\,, \tag{63}\] which is equivalent to a perturbative treatment of the Hamiltonian (53) to first order in \(R\). Note that the lighter scalar \(M_{0}\) reproduces the result of the diquark model, equation (52). In the same limit \(\theta\approx 0\) so the lighter scalar is purely \(\varphi_{0}\), again coinciding with the diquark model result. So apart from the existence of a heavier scalar state, we have found that spectra of the quark and diquark models coincide in the limit of small \(R\), both in terms of masses and wavefunctions. With reference to the definitions (47) and (48), small \(R\) means that quark-antiquark interactions are small compared to quark-quark interactions. 
That is precisely the limit in which the diquark concept is physically reasonable. From equation (49), the small \(R\) limit applies to \(QQ\bar{q}\bar{q}\) tetraquarks with \(m_{Q}\gg m_{q}\). On this basis we suggest that the diquark model is a sensible approximation to the quark model in such cases (modulo the absent heavier scalar state). Otherwise, the spectra of the quark and diquark models are rather different, and we explore this further in the next section. The one-flavour case \(QQ\bar{Q}\bar{Q}\) actually deviates maximally from the small \(R\) limit, as it has \(R=1\), which is the upper limit on \(R\) from equation (49). Returning to the \(QQ\bar{q}\bar{q}\) case, it is clear from above that as \(m_{Q}\to\infty\), the lighter scalar decouples from \(\varphi_{0}^{\prime}\) and becomes purely \(\varphi_{0}\). This effect has been discussed previously for \(QQ\bar{q}\bar{q}\) states [134; 101]. In terms of colour, it is exactly what was observed also for the isoscalar \(QQ\bar{u}\bar{d}\) tetraquark [113]: as \(m_{Q}\to\infty\), the ground state decouples from \(\mathbf{6}\otimes\bar{\mathbf{6}}\) and becomes purely \(\bar{\mathbf{3}}\otimes\mathbf{3}\). The comparison suggests that the effect may be generic, noting that in spite of its apparent similarity, the isoscalar \(QQ\bar{u}\bar{d}\) system is very different to our \(QQ\bar{q}\bar{q}\) system, because of the isospin asymmetry, which implies different spin-colour configurations for \(\bar{u}\bar{d}\) compared to \(\bar{q}\bar{q}\). ## VI Mass relations The mass formulae in the previous section imply relations among the masses of the states, and as far as we know these have not yet been identified in the literature. For diquark models, the situation is very simple: equations (50), (51) and (52) imply that the masses \(M_{2}\), \(M_{1}\), \(M_{0}\) satisfy the following linear relation, \[M_{1}=\frac{1}{3}\left(2M_{0}+M_{2}\right), \tag{64}\] independently of model parameters. For quark models, the situation is only slightly more complicated. From equations (50), (51), (54), and (55), it is clear that any mass splitting among \(M_{2}\), \(M_{1}\), \(M_{0}\) and \(M_{0}^{\prime}\) is independent of \(\overline{M}\), while any ratio of such splittings is independent of \(C_{QQ}+C_{qq}\), leaving a function of \(R\) only. By taking ratios of different combinations of mass splittings, we get linear relations among the masses, similar to equation (64) for the diquark model, but in this case involving \(R\). In this way, we can find a linear relation among any combination of three masses out of the four (\(M_{2}\), \(M_{1}\), \(M_{0}\) and \(M_{0}^{\prime}\)), meaning a total of four relations. We concentrate on the following two: \[M_{1} =M_{0}+\frac{\Delta-1}{\Delta-1+8R}(M_{2}-M_{0})\,, \tag{65}\] \[M_{0}^{\prime} =M_{0}+\frac{2\Delta}{\Delta-1+8R}(M_{2}-M_{0})\,, \tag{66}\] noting that the first of these is the closest analogue of the diquark model result (64), in the sense of offering a formula for \(M_{1}\) in terms of \(M_{0}\) and \(M_{2}\). Indeed, it reduces to the diquark model result (64), in the limit of small \(R\) (taking \(\Delta\approx 1+4R\)). This reinforces our previous observation that the quark model reduces (apart from the heavier scalar) to the diquark model, in the limit of small \(R\). Ultimately the utility of these mass relations is that, given any two experimental candidates, we may predict the mass of the other state (in diquark models), or the other two states (in quark models). 
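As a quick numerical cross-check of these relations (our sketch, not code from the paper), one can generate a spectrum from the mass formulae of Section V for an arbitrary choice of \(\overline{M}\), \(C_{QQ}+C_{qq}\) and \(R\), and verify equations (65) and (66), as well as the eigenvalues of the scalar mass matrix (53):

```python
import numpy as np

# Arbitrary inputs: the relations below hold for any Mbar, C = C_QQ + C_qq, R > 0.
Mbar, C, R = 100.0, 3.0, 0.7
Delta = np.sqrt(232 * R**2 + 8 * R + 1)  # equation (56)

# Mass formulae (50), (51), (54), (55) of the quark model:
M2 = Mbar + (8 / 3) * C * (1 + R)
M1 = Mbar + (8 / 3) * C * (1 - R)
M0 = Mbar + (2 / 3) * C * (5 - 4 * R - Delta)
M0p = Mbar + (2 / 3) * C * (5 - 4 * R + Delta)

# Relations (65) and (66): predict M1 and M0' from M0 and M2 alone.
M1_rel = M0 + (Delta - 1) / (Delta - 1 + 8 * R) * (M2 - M0)
M0p_rel = M0 + 2 * Delta / (Delta - 1 + 8 * R) * (M2 - M0)
assert np.isclose(M1, M1_rel) and np.isclose(M0p, M0p_rel)

# The scalar masses are also the eigenvalues of the matrix (53):
H = Mbar * np.eye(2) + C * np.array([[(8 / 3) * (1 - 2 * R), -4 * np.sqrt(6) * R],
                                     [-4 * np.sqrt(6) * R, 4.0]])
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), [M0, M0p])
print("relations (65) and (66) verified at R =", R)
```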
Since these predictions are independent of (or only weakly dependent on) parameters, they provide a very direct test of models, which can be checked against future experimental data. In Ref. [10] we apply this approach to a putative multiplet of \(cc\bar{c}\bar{c}\) states observed at LHCb, CMS and ATLAS. The relations also lead to a simple and very general understanding of the pattern of masses which characterise quark and diquark models. In Figure 1 we show the mass spectrum in arbitrary units, having fixed \(M_{0}\) and \(M_{2}\), and using the relations (64), (65) and (66) to predict \(M_{1}\) and \(M_{0}^{\prime}\). Note of course that in quark models, the pattern of masses in the multiplet is sensitive to \(R\), whereas in diquark models, it is not. As a reminder, if the couplings \(C_{ij}\) are parameterised as in equation (7), then \(QQ\bar{Q}\bar{Q}\) states have \(R=1\), while \(QQ\bar{q}\bar{q}\) states have \(0<R<1\), with \(R\approx 0\) for \(m_{Q}\gg m_{q}\). Notice in Figure 1 that as \(R\to 0\), the scalar masses in the quark and diquark models become degenerate, as anticipated above. Comparison of the spectrum in Fig. 1 to experimental candidates can help to reveal the underlying dynamics of a multiplet of tetraquark states. In particular, apart from in the limit of small \(R\), quark and diquark models can be differentiated by the relative position of the axial (\(M_{1}\)) compared to the scalar (\(M_{0}\)) and tensor (\(M_{2}\)). The difference is particularly pronounced for larger \(R\), including \(R=1\) which applies to the single flavour case (\(QQ\bar{Q}\bar{Q}\)). We have been assuming that the couplings satisfy equation (7), but empirically this is not universally reliable; see for example the couplings fitted to mesons and baryons in Refs. [84; 91]. Moreover, in quark potential models, it applies only in a symmetry limit which is not strictly justified by the Hamiltonian (see Section III). If we no longer assume equation (7), then of course \(R\) is no longer constrained to \(0<R\leq 1\), and for this reason in Figure 1 we extend the plot to larger values of \(R\). Without equation (7), for one-flavour states (\(QQ\bar{Q}\bar{Q}\)) it is no longer true that \(R=1\) exactly. Using the more general definition of \(R\) in equation (48), we expect deviations from \(R=1\) due to asymmetry in \(QQ\) and \(Q\bar{Q}\) spatial wavefunctions. In this context it is reassuring that the spectrum in Fig. 1 is not very sensitive to the choice of \(R\), for values near \(R=1\). Although the relations (64), (65) and (66) have seemingly not been discussed previously, they are actually apparent in the quoted mass predictions throughout the literature, for each of the four classes of model we consider in this paper. In Table 1 (Appendix) we compile the masses \(M_{0}\), \(M_{1}\), \(M_{2}\) and \(M_{0}^{\prime}\) quoted in many different model calculations.

Figure 1: The mass spectrum (in arbitrary units) as a function of \(R\), where \(M_{0}\) and \(M_{2}\) are fixed, \(M_{1}\) (diquark model) is given by equation (64), while \(M_{1}\) (quark model) and \(M_{0}^{\prime}\) (quark model) are given by equations (65) and (66), respectively.
To check the validity of the relations, for each model we have taken \(M_{0}\) and \(M_{2}\) as inputs and, using the mass relations, we have computed the masses of the axial \(\widetilde{M}_{1}\) and (if appropriate) the heavy scalar \(\widetilde{M}_{0}^{\prime}\), which we can then compare to the corresponding quoted values of \(M_{1}\) and \(M_{0}^{\prime}\). In the last two columns in Table 1, and also in Figure 2, we show the differences \(\widetilde{M}_{1}-M_{1}\) and \(\widetilde{M}_{0}^{\prime}-M_{0}^{\prime}\), which in most cases are 1 MeV or less. This is a striking confirmation of the validity of the relations, across all classes of model. From previous discussions, we know that the mass relations are exact for (quark and diquark) models based on effective masses (corresponding to Refs. [89; 135; 136; 91] in Table 1 and Figure 2). In such cases, where \(\widetilde{M}_{1}-M_{1}\) or \(\widetilde{M}_{0}^{\prime}-M_{0}^{\prime}\) deviate from zero, this can be attributed to rounding errors, from having applied the relations using inputs which are quoted to a particular number of significant figures.

Figure 2: The left and right panels show, for the axial and heavy scalar, respectively, the difference between the mass obtained from the relations (64), (65) and (66), and the quoted mass taken from the literature, including examples of all four classes of model: the chromomagnetic quark model (CQM), quark potential model (QPM), diquark model (DM), and diquark potential model (DPM).

In (quark and diquark) potential models, the relations apply strictly in the limit of perturbation theory. Many of the masses in Table 1 and Figure 2 have been computed in perturbation theory, so we would expect the mass relations to be satisfied exactly, up to small rounding errors as mentioned previously. Notably, the masses of Ref. [132] are not computed in perturbation theory, and the somewhat larger deviation from exact agreement in this case can be understood for that reason. In the specific case of quark (not diquark) potential models, we recall the additional caveat mentioned previously: our mass formulae - hence the resulting mass relations - apply only subject to the assumption that the wavefunction is symmetric under the interchange \(Q\leftrightarrow\bar{q}\) (or \(Q\leftrightarrow\bar{Q}\), in the one flavour case). The particular examples shown in Table 1 and Figure 2 (Refs. [114; 116]) satisfy this requirement, because in their implementation of the Gaussian Expansion Method, the symmetry is automatic for degenerate quarks (\(cc\bar{c}\bar{c}\) and \(bb\bar{b}\bar{b}\)). This is to some extent an artefact of the calculation, since the symmetry is not actually imposed by the symmetries of the Hamiltonian (see Section III). In models where the symmetry is not imposed (such as Refs. [112; 137]) the relations are not satisfied. Effectively this is because the chromoelectric term in the Hamiltonian induces a splitting between the \(\varphi_{0,1,2}\) and \(\varphi_{0}^{\prime}\) states which, before spin splitting, would otherwise be degenerate. For precisely the same reason, the relations also do not apply to the extended chromomagnetic model of Refs. [92; 138] which, unlike the ordinary chromomagnetic model, have a chromoelectric splitting in the centre of mass term. ## VII Conclusions One of the main obstacles to progress in understanding the nature of exotic hadrons is that models are not very well constrained.
This is because of the intrinsic ambiguity in identifying the relevant degrees of freedom and their interactions, and also because model parameters are only weakly constrained by comparison with the spectra of conventional hadrons. Consequently, absolute mass predictions for tetraquark states are subject to systematic uncertainties which are large and difficult to quantify, so comparing these to experimental candidates can hardly discriminate among models. Our perspective is that it is considerably more useful to examine not absolute mass predictions, but relations among masses. As well as being more reliable - in the sense of depending only weakly on model parameters, or not at all - such relations are considerably more direct, and therefore effective, as a way of discriminating among competing models. The most important distinguishing feature of models is whether they include all colour configurations (quark models) or only a subset (diquark models). For each class of model, we showed that the corresponding potential model is equivalent (in perturbation theory) to a simpler model with effective (quark or diquark) masses - though in the case of quark models, the equivalence relies on an assumption of spatial symmetry which, though commonplace, is not strictly justified. We derived general formulae for the mass spectrum of S-wave \(QQ\bar{q}\bar{q}\) and \(QQ\bar{Q}\bar{Q}\) states in quark and diquark models, and showed how the two models coincide in an appropriate limit. From the formulae, we identified several resulting linear relations which are independent of, or only weakly dependent on, model parameters. The relations are exact for (quark or diquark) models with effective masses, or valid in perturbation theory for (quark or diquark) potential models. Although the relations have seemingly not been discussed in the literature before, they are apparent in the quoted mass predictions in all classes of models. The relations reveal how quark and diquark models have a characteristically different pattern of masses, which can be tested against future experimental data. In particular, given any two experimental candidates, using the relations one can predict the masses of the additional one or two states (in diquark or quark models, respectively). In a forthcoming paper [56] we apply this concept (and some other results from the present work) to the apparent \(cc\bar{c}\bar{c}\) states observed at LHCb, CMS and ATLAS. ###### Acknowledgements. This work is supported by The Royal Society through a Newton International Fellowship. ## Appendix
2309.04466
The Local Group Symbiotic Star Population and its Weak Relation with Type Ia Supernovae
Here we study the symbiotic stars (SySt) population and its relation with type Ia supernovae (SNe Ia) in the galaxies of the Local Group. SySt are low- and/or intermediate-mass evolved binary systems where a white dwarf (WD) accretes mass from a giant star. A fraction of these WDs can become massive enough to reach the Chandrasekhar mass. Therefore, SySt have been considered as potential SNe Ia progenitors. Taking two approaches, one empirical and another statistical, we estimated the SySt population in the Galaxy as having a minimum value of $1.69\times10^3$ and an expected one of $3.23\times10^4$. For Local Group dwarf galaxies, the computed SySt population ranges from 2 to 4 orders of magnitude lower. Concerning the SNe Ia with SySt progenitors, our general result is that SySt are not the main SNe Ia progenitors. On the other hand, we still expect that about 0.5-8% of the SNe Ia have symbiotic progenitors in the Milky Way, while the majority of the - low-mass - dwarf galaxies did not experience a symbiotic type Ia supernova.
M. Laversveiler, D. R. Gonçalves
2023-09-08T17:54:43Z
http://arxiv.org/abs/2309.04466v2
# The Local Group Symbiotic Star Population and its Weak Relation with Type Ia Supernovae ###### Abstract Here we study the symbiotic stars (SySt) population and its relation with type Ia supernovae (SNe Ia) in the galaxies of the Local Group. SySt are low- and/or intermediate-mass evolved binary systems where a white dwarf (WD) accretes mass from a giant star. A fraction of these WDs can become massive enough to reach the Chandrasekhar mass. Therefore, SySt have been considered as potential SNe Ia progenitors. Taking two approaches, one empirical and another statistical, we estimated the SySt population in the Galaxy as having a minimum value of \(1.69\times 10^{3}\) and an expected one of \(3.23\times 10^{4}\). For Local Group dwarf galaxies, the computed SySt population ranges from 2 to 4 orders of magnitude lower. Concerning the SNe Ia with SySt progenitors, our general result is that SySt are not the main SNe Ia progenitors. On the other hand, we still expect that about 0.5-8% of the SNe Ia have symbiotic progenitors in the Milky Way, while the majority of the - low-mass - dwarf galaxies did not experience a symbiotic type Ia supernova. Binary Stars: Evolution - Symbiotic Stars - Type Ia Supernovae ## 1 Introduction Binary star systems can evolve in different ways, which primarily depend on the stars' zero age main sequence (ZAMS) masses, their initial orbital separation and eccentricity (Benacquista, 2013). Since a small difference in mass can lead to differences in evolutionary timescales during the main sequence (MS), one of the stars in a binary system will evolve first. If one of the stars becomes a giant and fills its Roche lobe (Roche lobe overflow - RLOF), it will cause a flow of matter from the so-called donor star to its companion. The evolution of the RLOF binaries can be stable or unstable, depending on the donor's envelope structure (radiative or convective), and on the mass ratio of the system (Ge et al., 2010). Stable systems will only experience a change in mass ratio, due to the mass flow and accretion onto the companion. However, in unstable systems the feedback of the mass-loss on the effective Roche lobe radius (\(R_{L}\)) and on the envelope of the donor star leads to the disruption of the donor's envelope and the engulfment of the companion, so the system enters a common envelope (CE) phase (Paczynski, 1976; Ge et al., 2010).
The CE evolution can result in the merger of the stars or in the ejection of the envelope, producing a close evolved binary (Paczynski, 1976). In systems without RLOF, mass transfer is limited to a fraction of the stellar winds, and the components evolve essentially as if they were single stars. Binary stellar evolution can lead to the formation of symbiotic stars (SySt), which are evolved systems composed of low- and/or intermediate-mass stellar objects. In the typical configuration of SySt, a white dwarf (WD) accretes matter from a red giant branch (RGB) or asymptotic giant branch (AGB) star, mostly via winds (Kenyon, 2008). However, there is evidence for SySt that host distorted giants, probably because of Roche lobe filling (Mikolajewska, 2003). Since SySt have accreting WDs, they have been considered as potential progenitors of type Ia supernovae (SNe Ia; e.g. Kenyon et al., 1993; Liu et al., 2019; Iłkiewicz et al., 2019). However, many SySt have low-mass WDs, with RS Oph and T CrB being known exceptions (Mikolajewska, 2013), which is a counterargument regarding this type of SNe Ia progenitor. Nevertheless, the fraction of SySt with sufficiently massive (\(\gtrsim 1.1\) M\({}_{\odot}\)) WDs can be considered promising progenitors of SNe Ia, contributing to the observed SNe Ia rate. Our goal in this study is to characterize the binary systems with ZAMS properties compatible with the observed, evolved SySt, and then to determine the evolutionary paths these systems could have taken in order to reach their expected population in the Galaxy and in Local Group dwarf galaxies. This knowledge, combined with our statistical procedure, allows us to find the fraction of SySt with the minimum requirements to be considered progenitors of SNe Ia. ## 2 Lower Limit of Milky Way's SySt Population The lower limit in the Milky Way (MW) is obtained by studying the distribution of SySt as a function of Galactic height. Right ascension (RA), declination (Dec) and distance from Akras et al. (2019) and the updated online catalog of SySt (Merc et al., 2019) were used to determine the distribution that best describes the data. After transforming the coordinates to galactocentric ones, (\(X_{G}\), \(Y_{G}\), \(Z_{G}\)), and from a series of statistical tests (maximization of the log-likelihood, Kolmogorov-Smirnov, and least squares), we found that the best parametrized representation of the data is a Laplace distribution. From this distribution, we recover the scale height of the Galactic SySt as \(h=0.654\) kpc. The parameters derived above allow us, via projection of the data onto \(Z_{G}=0\) kpc, to compute the \(1\sigma\) and \(2\sigma\) data dispersion ellipses. Through the combination with the scale height of the disk, \(H\), we computed the central SySt density of the MW, \(n_{0}\sim 1.0\)-\(2.7\) kpc\({}^{-3}\). This is the central value because it refers to the density at \(Z_{G}=0\) kpc. The lower limit for the SySt population, \(N_{\rm min}\), is then given by the integration of the distribution, scaled with \(n_{0}\), over the Galaxy's volume, assuming cylindrical symmetry. We used two values for the radius of the MW's disk, \(R_{G}\): the first is four times the scale length of the thin disk, \(R_{G}=4h_{d}\), and the second is the truncation radius, \(R_{G}=R_{\rm trunc}\). Here, \(h_{d}=2.0\)-\(3.8\) kpc and \(R_{\rm trunc}=16.1\pm 1.3\) kpc are given by Amores et al. (2017). 
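To make the two steps just described concrete, the following is a minimal illustrative sketch (ours, not the authors' pipeline): a maximum-likelihood Laplace fit to the galactocentric heights \(Z_{G}\) of the known SySt, followed by the cylindrical-volume integration that yields \(N_{\rm min}\). The catalog heights are simulated placeholders standing in for the Akras et al. (2019) / Merc et al. (2019) data, and the numerical values are illustrative; the analytic shortcut uses \(\int_{-\infty}^{\infty} n_{0}\,e^{-|z|/h}\,dz = 2 n_{0} h\).

```python
import numpy as np

# Placeholder galactocentric heights (kpc) of known SySt; in practice these
# come from the Akras et al. (2019) and Merc et al. (2019) catalogs.
z_g = np.random.laplace(loc=0.0, scale=0.654, size=283)

# Maximum-likelihood Laplace fit: location = median, scale = mean |Z - median|.
mu_hat = np.median(z_g)
h_hat = np.mean(np.abs(z_g - mu_hat))  # scale height h (kpc)

def n_min(n0, r_g, h):
    """Lower limit on the SySt population: integrate n0 * exp(-|z|/h) over a
    cylinder of radius r_g (kpc); the z-integral evaluates to 2*h."""
    return np.pi * r_g**2 * n0 * 2.0 * h

# Central density range and disk radii quoted in the text (illustrative).
for r_g in (4 * 2.9, 16.1):   # R_G = 4*h_d (taking h_d ~ 2.9 kpc) or R_trunc
    for n0 in (1.0, 2.7):     # central density n_0, kpc^-3
        print(f"R_G={r_g:5.1f} kpc, n0={n0}: N_min ~ {n_min(n0, r_g, h_hat):.0f}")
```

With these placeholder inputs the sketch reproduces the order of magnitude of the \(N_{\rm min}\sim 10^{3}\) values quoted next.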
With these values, we found \(N_{\rm min}\sim(1.2\)-\(2.8)\times 10^{3}\), with a best-fit value of \(1.69\times 10^{3}\) for the SySt population in the Galaxy. ## 3 Statistical Binary Evolution and Expected SySt Population Our second approach to the problem of counting SySt in the Local Group (LG) relies on the use of observed properties of binary stars. Using them, we can statistically infer which evolutionary channel a given binary will follow. From this approach, we can extract the population of SySt. To this end, three steps are needed: 1) finding the ZAMS physical characteristics a binary needs to have in order to evolve into a SySt; 2) statistically evolving this ZAMS fraction using pre-defined channels (e.g. Han et al. 2020; Lu et al. 2006); and 3) defining a parameter to scale the computed fraction to the expected population. ### ZAMS Parameters Since the stars in SySt are evolved and of initially low and/or intermediate mass (\(\sim 0.8\)-\(8.0\) M\({}_{\odot}\)), we need to restrict our analysis to this subset of systems. The first important constraint is on the minimum mass, because both stars of the system need enough time to evolve to giant dimensions on a timescale shorter than the age of the Universe, which defines a threshold mass (\(M_{\rm thr}\)). The values \(M_{\rm thr}=0.86\)-\(0.90\) M\({}_{\odot}\) are derived from the MS evolutionary timescale (Harwit 2006), taking the reionization epoch (\(\sim 13.3\times 10^{9}\) yr; Schneider 2015) as an upper boundary. The second ZAMS parameter is a restriction on the mass ratio of the systems. It is defined as \(q_{\rm cut}(M_{1}):=M_{\rm thr}/M_{1}\), where \(M_{1}\) is the primary mass, with the mass ratio defined as \(q:=M_{2}/M_{1}\) (\(M_{2}\lesssim M_{1}\)). This restriction is used to discard ZAMS binaries with \(M_{2}<M_{\rm thr}\). The third ZAMS parameter is the maximum orbital separation, \(a_{\rm max}\). The latter is used to discard very wide binaries, which would essentially evolve as if the stars were single. Kepler's third law, as a function of the primary mass (\(M_{1}\)) and mass ratio (\(q\)), gives us the maximum orbital separation, \(a_{\rm max}(M_{1},q)\), once a maximum orbital period, \(P_{\rm max}\), is set as a fixed parameter. Given that this is a very uncertain parameter, we use a range of values \(\log(P_{\rm max})\in[3.6,4.2]\) (\(P_{\rm max}\) in days), based on the largest orbital periods known for SySt (R Aqr: \(\log(P)=4.1\) - Gromadzki & Mikolajewska 2009; RR Tel: \(\log(P)=5.0\) - Hinkle et al. 2013). Note that the orbital periods can increase or decrease during the system's evolution. From Kroupa's initial mass function (IMF; Kroupa 2001) for the primaries, \(\xi(M_{1})\) (for which we assume that all systems are resolved), a binary fraction, \(f_{\rm bin}(M_{1})\) (Duchene & Kraus 2013), and the mass ratio and separation distributions (\(\zeta(q)\) and \(\zeta(a)\) - Duchene & Kraus 2013), the fraction of ZAMS binaries with the desired physical characteristics is \[f_{\rm bin}^{*}=\int_{M_{\rm thr}}^{8}\xi(M_{1})f_{\rm bin}\left[\int_{q_{\rm min }}^{1}\zeta(q)\left(\int_{a_{\rm min}}^{a_{\rm max}}\zeta(a)\ da\right)\,dq \right]\,dM_{1}. \tag{1}\] ### Binary Evolution Channels Having defined the initial population of binaries, we need to statistically consider their evolution. This is done by considering three evolutionary channels: 1. The primary fills its Roche lobe during the MS phase; 2. The primary fills its Roche lobe during the giant (RGB or AGB) phase; 3. 
There is no RLOF during the entire evolution of the primary. The selection of the channel for a given ZAMS binary is made using the effective Roche lobe radius (\(R_{L}\); Eggleton 1983) \[\frac{R_{L}}{a}\equiv x(q)=\frac{0.49\ q^{-2/3}}{0.6\ q^{-2/3}+\ln(1+q^{-1/3} )}, \tag{2}\] and the expected radius of the primary (\(R_{\varphi}\)), which is computed as the temporal mean of the radius in a given evolutionary phase \(\varphi\): MS, RGB, or AGB. The condition for RLOF is set as \(R_{\varphi}(M_{1})=R_{L}\), which then gives the restriction on the separation for each of the above channels as \(a_{\rm cut,\varphi}(M_{1},q)=R_{\varphi}(M_{1})/x(q)\). The RLOF in channels I and II can lead to stable or unstable evolution, depending, strictly, on \(M_{1}\) and \(q\). The critical mass ratio, \(q_{\rm crit}(M_{1})\), is computed for channel I based on Ge et al. (2013), and for channel II based on Chen & Han (2008), where we reconstruct \(q_{\rm crit}(M_{1})\) under the assumptions made in this work. If \(q\equiv M_{2}/M_{1}>q_{\rm crit}\) the system will have a stable RLOF; on the other hand, if \(q<q_{\rm crit}\) the RLOF will be unstable. The evolution through channel I can lead to a direct merger, a contact binary, or, in a minority of cases, to a MS + He-WD (helium white dwarf) system. Since without a simulation we cannot say exactly what fraction of MS + He-WD systems is formed, we introduce a free parameter, \(f_{\ell}^{\rm(I)}\), to stand for this uncertainty. The idea is that the evolution of the secondary, for MS + He-WD systems, can lead to SySt (\(f_{\ell}^{\rm(I)}\) also takes this into account). Channel I gives the function \(f_{\rm evol}^{\rm(I)}(M_{1})\), which describes the fraction of ZAMS systems that become SySt, as a function of \(M_{1}\), through the evolution described. Evolution through channel II is divided into four sub-channels: RGB and AGB, stable or unstable; RGB or AGB indicates that the ZAMS binary will fill its Roche lobe during the RGB or AGB phase, and stable or unstable refers to the evolution during RLOF. Stable RLOF will lead to the formation of MS + WD binaries, while unstable RLOF will form a CE. The evolution through the CE phase is dynamical and results in the merger of the stars or in the ejection of the envelope. The outcome of CE ejection is a close MS + WD system. If a CE is present, another free parameter, \(f_{\ell}^{\rm(II)}\), related to the fraction of systems that do not merge, is introduced. The WD in both scenarios (stable or unstable RLOF) can have a dominant composition of He or C+O, depending on the mass of the Roche-lobe-filling star and on its evolutionary phase. Analogously to channel I, this channel returns a function \(f_{\rm evol}^{\rm(II)}(M_{1})\) for each of its sub-channels. Channel III is the simplest one. It accounts for the ZAMS binaries that do not experience RLOF during the evolution of the primary. It is only limited by the parameter \(a_{\rm max}\). Again, as with channels I and II, this one returns the function \(f_{\rm evol}^{\rm(III)}(M_{1})\). Finally, all channels are brought together to compute the fraction of SySt formed: \[f_{\rm ss}=\int_{M_{\rm thr}}^{8}\frac{df_{\rm bin}^{*}}{dM_{1}}\sum_{i}f_{\rm evol }^{\rm(i)}(M_{1})\ dM_{1}, \tag{3}\] where the superscript \(i\) in \(f_{\rm evol}^{\rm(i)}\) refers to the evolutionary channel it represents. ### The Scaling Parameter To obtain the SySt population from the relative fraction derived above, we adopt the approach given by Kenyon et al. 
(1993), which is based on the formation rate of planetary nebulae (PNe). It is thus assumed that this rate closely represents the rate at which stars with masses \(>0.6\) M\({}_{\odot}\) and \(<8\) M\({}_{\odot}\) complete their evolution. The scaling parameter is expressed as \({\cal N}=\mathcal{R}_{\rm PN}\tau_{\rm ss}\). Here \(\mathcal{R}_{\rm PN}=N_{\rm PN}/\tau_{\rm PN}\) is the formation rate of PNe and \(\tau_{\rm ss}\approx 5\times 10^{6}\) yr (Kenyon et al., 1993) is the timescale of the symbiotic phenomenon. For the Galaxy, we use this PNe rate as a density and combine it with the volume, \(V\), of the disk, as \(\mathcal{R}_{\rm PN}=V\nu_{\rm PN}=2\pi R_{\rm G}^{2}H\nu_{\rm PN}\), where \(\nu_{\rm PN}\approx 2.4\times 10^{-12}\) pc\({}^{-3}\) yr\({}^{-1}\) is the formation rate density of PNe (Phillips, 1989). For the Local Group dwarf galaxies, we use a bolometric absolute magnitude approach for \(\mathcal{R}_{\rm PN}\). The PN population in a galaxy can be associated with the so-called \(\alpha\)-ratio, which gives the number of PNe per unit bolometric luminosity of the galaxy (Buzzoni et al., 2006). Thus, the scaling parameter is given by \[{\cal N}=\mathcal{B}\,\tau_{\rm ss}\,L_{\odot,\rm bol}\times 10^{0.4(M_{V\odot}+{\rm BC}_{\odot}-M_{V}-{\rm BC})}, \tag{4}\] where \(\mathcal{B}\approx 1.8\times 10^{-11}\) L\({}_{\odot,\rm bol}^{-1}\) yr\({}^{-1}\) is the specific evolutionary flux (Buzzoni et al., 2006), \(M_{V\odot}=4.85\), BC\({}_{\odot}\approx-0.1\), \(M_{V}\) is the visual magnitude of the galaxies, and BC their bolometric correction (\(-0.2\); Reid, 2016). ### Results for the Galaxy Since the majority of multiple star systems are binaries, according to Duchene & Kraus (2013), we adopt the binary fraction as the multiplicity frequency (MF): \(f_{\rm bin}(M_{1})={\rm MF}(M_{1})\). The MF gives the fraction of systems that are multiple, in this case as a function of the primary star's mass. We used MF(\(M_{1}\)) as given by Duchene & Kraus (2013). From an analysis of their impact on the final results, the free parameters were fixed to \(f_{\ell}^{\rm(I)}=0.25\) and \(f_{\ell}^{\rm(II)}=0.5\). In comparison with \(f_{\ell}^{\rm(II)}\), the parameter \(f_{\ell}^{\rm(I)}\) changes the expected number of SySt very little (up to a few percent; \(\sim 4\%\)). The choice for \(f_{\ell}^{\rm(II)}\) is simply related to the difficulty in inferring a realistic value, while for \(f_{\ell}^{\rm(I)}=0.25\) we just assumed a non-dominant fraction, since the formation of SySt through channel I is unlikely. The metallicity used to compute the stellar radii from models, per evolutionary phase, in the Galaxy was an average of \(Z=Z_{\odot}\approx 0.0134\)-\(0.0140\) from the models given by Lagarde et al. (2012) and Claret (2019). Our main results for the Milky Way SySt are as follows: Figure 1 displays the relative contribution per channel; Figure 2 shows the expected chemical composition of the SySt WDs; and Table 1 gives the expected SySt population. As with the empirical approach, when the disk dimension is set to \(R_{\rm G}=R_{\rm trunc}\), the resulting SySt population is an upper limit. When using \(R_{\rm G}=4h_{d}\) we get the better fit, since it follows the behavior of the Galactic disk. Therefore, our best fit implies \(3.23\times 10^{4}\) SySt in the Galaxy, with an upper limit of \(6.18\times 10^{4}\). Comparing our results with other authors' (e.g. 
\(3\times 10^{5}\) - Munari & Renzini, 1992; \(3.3\times 10^{4}\) - Kenyon et al., 1993; \(4\times 10^{5}\) - Magrini et al., 2003; \(1.2\)-\(15.0\times 10^{3}\) - Lu et al., 2006), we note that it is in agreement with the previous estimations, and it is also very close to the value obtained by Kenyon et al. (1993) when using \(R_{\rm G}=4h_{d}\). This is not a coincidence, since we used an approach very similar to theirs in the computation of the scaling parameter. However, our stellar evolution considerations are more complex. \begin{table} \begin{tabular}{c c c c c} \hline & \multicolumn{2}{c}{\(M_{\rm thr}=0.86\) M\({}_{\odot}\)} & \multicolumn{2}{c}{\(M_{\rm thr}=0.90\) M\({}_{\odot}\)} \\ \hline & \(R_{\rm G}=4h_{d}\) & \(R_{\rm G}=R_{\rm trunc}\) & \(R_{\rm G}=4h_{d}\) & \(R_{\rm G}=R_{\rm trunc}\) \\ \hline log(\(P_{\rm max}\)) & \(N_{\rm ex}\) & \(N_{\rm ex}\) & \(N_{\rm ex}\) & \(N_{\rm ex}\) \\ log(days) & [\(\times 10^{4}\)] & [\(\times 10^{4}\)] & [\(\times 10^{4}\)] & [\(\times 10^{4}\)] \\ \hline 3.6 & \(3.02\pm 0.30\) & \(5.82\pm 0.98\) & \(2.76\pm 0.28\) & \(5.32\pm 0.90\) \\ 3.9 & \(3.38\pm 0.34\) & \(6.50\pm 1.10\) & \(3.07\pm 0.31\) & \(5.91\pm 1.00\) \\ 4.1 & \(3.63\pm 0.36\) & \(6.98\pm 1.18\) & \(3.29\pm 0.33\) & \(6.34\pm 1.07\) \\ 4.2 & \(3.75\pm 0.37\) & \(7.23\pm 1.22\) & \(3.40\pm 0.34\) & \(6.56\pm 1.11\) \\ \hline \end{tabular} \end{table} Table 1: Results for the Galactic SySt population, given the different parameters. Figure 1: Example of the contribution from each evolutionary channel to the \(f_{\rm ss}\) density (in mass space). In the left panel, we have: the blue dotted line as channel I, the red thin dashed line as channel II RGB, the red thick solid line as channel II AGB, the green dash-dotted line as channel III, and the black solid line as the sum of them all. On the right panel we have the contributions from each subset of channel II: yellow for the AGB and purple for the RGB channel; solid lines for the stable components and dashed for the unstable ones. For this plot, we used fixed \(Z=Z_{\odot}\), log(\(P_{\rm max}\)) = 4.2, and \(M_{\rm thr}=0.86\) M\({}_{\odot}\). Figure 2: Composition of SySt’s WDs obtained per metallicity model. The column He/C+O represents the percentage of SySt where we could not set an expected composition for the WD, or where the composition is mixed. ### Results for the Local Group Dwarf Galaxies We need to know the metallicity of the galaxies in order to study their \(Z\)-dependent characteristics. For that we use [Fe/H] converted to \(Z\), adopting \(Z=Z_{\odot}10^{\rm[Fe/H]}\) (see the comment on table 6 of Mateo 1998), and assign to each galaxy the stellar evolution model with the closest \(Z\). From Lagarde et al. (2012) we have the following metallicities available: \(Z=0.0001\); \(Z=0.0020\); \(Z=0.0040\). The IMF from Kroupa (2001) and the mass ratio and separation distributions from Duchene & Kraus (2013) are also adopted here. Table 2 contains the results obtained for the Local Group dwarf galaxies. We note that the expected value of the SySt population for this group is orders of magnitude lower than for the MW, which is expected, since, correspondingly, their masses are also orders of magnitude smaller. Moreover, from our analysis, the expected SySt population of a number of the LG dwarf galaxies is null. A way of interpreting these results is as an indication that the formation rate of SySt, for the galaxies with \(N_{\rm ss}=0\), is lower than the rate at which they cease to exist (\(\sim 1/\tau_{\rm ss}\)). 
Draco is a good example of such an interpretation, since its known SySt contradicts the expected value we obtained. For the remaining galaxies, the SySt population scales with their absolute magnitude in the V band, reaching a maximum of hundreds of SySt for the most luminous galaxies. Magrini et al. (2003) also present results for the SySt population in some LG galaxies. However, they use an approach based on the galaxies' \(K\)\(-\)\(B\) color to estimate their red giant population, assuming that 0.5% of this population is in fact SySt. Their values are, on average, 100 times higher than ours. The discrepancy between their work and ours probably lies in the assumed 0.5% fraction, which can be interpreted as related to our \(\mathcal{N}\) parameter. This again exposes the difficulty in finding a proper scaling for the SySt population with respect to the total stellar population of a galaxy. yr\({}^{-1}\), Kenyon et al. 1993; \((5.4\pm 1.2)\times 10^{-3}\) yr\({}^{-1}\), Li et al. 2011; \(14.1^{+1.1}_{-8.0}\times 10^{-3}\) yr\({}^{-1}\), Adams et al. 2013) we compute a contribution of about 0.5-8% from SySt to the SNe Ia rate. By comparing ours with the previous results, we conclude it is very unlikely that SySt are the main SNe Ia progenitors. Nevertheless, SySt still cannot be ruled out as SNe Ia progenitors in the classic SNe Ia formation scenario, because a fraction of them will have sufficiently massive accreting WDs (RS Oph and T CrB are well-known examples; Mikolajewska 2013). Regarding the result for the Local Group dwarf galaxies, there exists the possibility that some of them experienced a SNe Ia from SySt during their evolution. This is at least the case where \(\Delta t_{\rm exp}<10^{9}\) yr, since such timescales are well within the age of the Universe. For the remaining dwarf galaxies \(\Delta t_{\rm exp}\) is too high, and we conclude that no SNe Ia from SySt have ever occurred in these galaxies. ## 5 Conclusions This work is dedicated to the study of the population of symbiotic stars (SySt), with the goal of finding a robust way to estimate this population in the Milky Way and in the dwarf galaxies of the Local Group. Moreover, since SySt can satisfy the required characteristics for developing a SN Ia event, we used our own algorithm to compute this specific fraction of SySt. Using observational data, we adopted two approaches to the SySt population, one empirical and the other theoretical. We found that the SySt population in the Galaxy has a minimum value of \(1.69\times 10^{3}\), while its expected value and upper limit are \(3.23\times 10^{4}\) and \(6.18\times 10^{4}\), respectively. For the dwarf galaxies, the values obtained ranged from zero to hundreds of SySt, depending mostly on their bolometric absolute magnitude, with a weaker dependence on their metallicity. Regarding the SNe Ia, we obtained as a general result that SySt are not the main progenitors, mostly due to the fact that the great majority of the WDs in SySt have masses below 1.1 M\({}_{\odot}\). This implies that the accretion rates in SySt are insufficient for them to reach \(M_{\rm Ch}\). However, we found that a small fraction of the total SySt population could be progenitors of SNe Ia: \(\sim\) 1.5% in the Galaxy and \(\sim\) 3% in the Local Group dwarf galaxies. By calculating the formation rate of SNe Ia with SySt as progenitors, we show that 0.5-8.0% of the SNe Ia in the Galaxy could come from SySt, and that most of the dwarf galaxies of the Local Group have not yet experienced SNe Ia from SySt. 
###### Acknowledgements. We would like to thank Jaroslav Merc for providing us with the updated population of known SySt in the Local Group of galaxies. The authors acknowledge the following financial support: ML, FAPERJ fellowship (2019); DGR, CNPq (133016/2020-8) and FAPERJ (Temático, 211-370/2021; CNE, 200.527/2023).
2309.10413
PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded Dialogue Systems
Grounding dialogue response generation on external knowledge is proposed to produce informative and engaging responses. However, current knowledge-grounded dialogue (KGD) systems often fail to align the generated responses with human-preferred qualities due to several issues like hallucination and the lack of coherence. Upon analyzing multiple language model generations, we observe the presence of alternative generated responses within a single decoding process. These alternative responses are more faithful and exhibit a comparable or higher level of relevance to prior conversational turns compared to the optimal responses prioritized by the decoding processes. To address these challenges and driven by these observations, we propose Polished \& Informed Candidate Scoring (PICK), a generation re-scoring framework that empowers models to generate faithful and relevant responses without requiring additional labeled data or model tuning. Through comprehensive automatic and human evaluations, we demonstrate the effectiveness of PICK in generating responses that are more faithful while keeping them relevant to the dialogue history. Furthermore, PICK consistently improves the system's performance with both oracle and retrieved knowledge in all decoding strategies. We provide the detailed implementation in https://github.com/bryanwilie/pick .
Bryan Wilie, Yan Xu, Willy Chung, Samuel Cahyawijaya, Holy Lovenia, Pascale Fung
2023-09-19T08:27:09Z
http://arxiv.org/abs/2309.10413v1
# PICK: Polished & Informed Candidate Scoring ###### Abstract Grounding dialogue response generation on external knowledge is proposed to produce informative and engaging responses. However, current knowledge-grounded dialogue (KGD) systems often fail to align the generated responses with human-preferred qualities due to several issues like hallucination and the lack of coherence. Upon analyzing multiple language model generations, we observe the presence of alternative generated responses within a single decoding process. These alternative responses are more faithful and exhibit a comparable or higher level of relevance to prior conversational turns compared to the optimal responses prioritized by the decoding processes. To address these challenges and driven by these observations, we propose Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework that empowers models to generate faithful and relevant responses without requiring additional labeled data or model tuning. Through comprehensive automatic and human evaluations, we demonstrate the effectiveness of PICK in generating responses that are more faithful while keeping them relevant to the dialogue history. Furthermore, PICK consistently improves the system's performance with both oracle and retrieved knowledge in all decoding strategies. We provide the detailed implementation online.1 Footnote 1: [https://github.com/bryanwilie/pick](https://github.com/bryanwilie/pick) ## 1 Introduction Knowledge-grounded dialogue (KGD) has been introduced as a means to ground conversation in the provided knowledge, thereby enabling the generation of informative and engaging responses (Dinan et al., 2019; Zhou et al., 2018). Despite the advancements in training KGD systems to convincingly simulate human language on a linguistic plane, these systems still struggle with the challenge of producing responses that align with those human-preferred qualities. Such deficits can be attributed to various issues, e.g., hallucination as well as the lack of coherence and engagingness in the generated responses (Fu et al., 2022; Shuster et al., 2022; Rashkin et al., 2021; Zhao et al., 2020). Numerous methodologies have been investigated to leverage the potential of various training and decoding methods to address these identified issues. For instance, the recent human quality alignment methods, such as Ouyang et al. (2022), hinge on collecting extensive human annotations, followed by reward model fine-tuning to approximate human preference. This process then guides the optimization of the language model (LM) through reinforcement learning. While this approach has demonstrated promising results, it is noteworthy that accumulating such a significant volume of manual human data is highly resource-intensive in terms of both time and human labor. \begin{table} \begin{tabular}{l} \hline \hline **Knowledge snippet** \\ Due to his powerful and very large vocal range and energetic live performances, Rose has been **named one of the greatest singers** \\ \hline \hline **Baseline** \\ Vanilla He was **known for throwing humans** \\ **PICK** He has been **named one of the greatest singers of all time by** \\ **various media outlets** \\ \hline \hline \end{tabular} \end{table} Table 1: PICK empowers models to generate responses that are more faithful to the knowledge snippet and that serve as more appropriate replies to the conversational context. 
In this sample, the response prioritized by PICK is more grounded in external knowledge (highlighted in blue) than the optimal response prioritized by the decoding process (i.e., beam search) and is also more relevant to the dialogue context. Here, the vanilla response repeats the dialogue history (highlighted in red). Through analyzing various LM generations, we observe that within one decoding process there exist alternative generated responses that are more faithful and relevant to prior conversational turns. These candidates, however, are overlooked by the decoding processes, as they are not prioritized as the optimal responses. Driven by these observations, we propose a straightforward yet effective human-aligned re-ranking framework to direct model responses closer to KGD qualities. We introduce Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework for KGD tasks, which empowers models to generate optimal dialogue responses that are more faithful to the knowledge provided and relevant to the dialogue history, without requiring additional model tuning. The proposed framework is also model-agnostic; thus, it can be applied to various LMs with different architectures and sizes. Furthermore, it circumvents the need for supplementary labeled data by exploiting off-the-shelf metrics that correlate well with human judgment. Utilizing these metrics allows the model to produce better responses while accounting for contextual relevance to the dialogue history. However, to enable the generation of responses that are more faithful and relevant, it is essential to condition the response on the dialogue history and on accurate knowledge grounding. To do so, we explore various metrics that ensure the response is aligned with the knowledge, and utilize existing automatic metrics that correlate well with human judgment. Our experiments and human evaluation show that PICK enables models to produce responses more faithful to the provided knowledge and relevant to the dialogue history. Our contributions in this work are three-fold. (1) We propose PICK, a generation re-scoring framework for KGD that empowers models to generate dialogue responses that are more faithful to the provided knowledge and relevant to the dialogue history. The proposed framework is simple yet effective; it requires neither further model tuning nor additional labeled data for language model alignment. (2) We analyze the improvement from PICK-reranked responses in systems with both oracle and retrieved knowledge and show that PICK consistently improves performance under all decoding strategies. (3) We investigate the impact of diverse scoring metrics and decoding settings on generation quality, and then present the best scoring and decoding configurations for PICK on KGD tasks. ## 2 Related Work Knowledge-Grounded Dialogue. Dinan et al. (2019) develop a large dataset with conversations directly grounded on knowledge retrieved from Wikipedia. Alongside this work, recent works aim to build dialogue models that can conduct faithful and relevant knowledgeable discussions on open-domain topics (Li et al., 2022; Liu et al., 2021; Xu et al., 2022, 2023). Aiming to improve informativeness, a knowledge selection process is introduced to determine which specific elements of knowledge are informative for the dialogue (Kim et al., 2020; Zhao et al., 2020). Further, Li et al. (2020) propose learning how knowledge is expressed to improve coherence and knowledge relevance. Shuster et al. 
(2021) utilize neural-retrieval-in-the-loop architectures to develop models that maximize knowledgeability while retaining conversational ability. Rashkin et al. (2021) use the gold knowledge and control the model to generate faithful and relevant responses. Figure 1: Overview of PICK. Instead of taking the response with the highest joint probability over the generated tokens, we select the response with the highest overall response quality score on faithfulness and relevance from the top-\(r\) responses. We propose to assess the response candidates’ quality based on the dialogue history and the corresponding knowledge without further tuning. Simple yet effective, PICK ensures better relevance and coherence of the generated response. The PICK framework is orthogonal to recent works but similarly focused on enabling the model to generate relevant responses faithful to the provided knowledge. Alignment of Dialogue Response Quality. To align model dialogue responses with human-preferred response qualities, several works implement the concept of reinforcement learning from human feedback (Christiano et al., 2017), explicitly or implicitly, in the dialogue domain. Jaques et al. (2019) evaluate responses for coherence and engagement using a supervised conversational evaluator with human-annotated labels. Yi et al. (2019) collect human interaction data as implicit human feedback. Hancock et al. (2019) develop an agent that asks for feedback to further improve its dialogue abilities. Those works accumulate manual human data and are highly resource-intensive in terms of time and human labor. On the other hand, there are also works that re-rank response candidates to improve dialogue response quality. Mei et al. (2017) utilize the Latent Dirichlet Allocation (LDA) method to learn document-level latent topics and select the best continuation based on document-level topic matching. Welleck et al. (2019) improve consistency by re-ranking utterances using an NLI model trained on a Dialogue NLI dataset they created for this purpose. Ko et al. (2019) train four classifiers on synthetically generated data to re-rank plausible sentences. Unlike those works, we leverage off-the-shelf automatic metrics that correlate well with human judgment on conversation-level qualities; hence, our approach does not require additional labeled data for language model alignment. Furthermore, we devise a framework that does not require further model tuning to promote faithful and relevant candidates. ## 3 Methodology Knowledge-grounded dialogue (KGD) systems are built to be informative teachers. Such systems must be faithful to one or more source documents we implicitly trust, and must serve as an appropriate reply to the conversational context (Rashkin et al., 2021; Zhan et al., 2021; Honovich et al., 2021). In KGD systems, a model is trained to generate a response based on the dialogue utterances with the user, grounded in the knowledge snippet. We denote a KGD dataset as \(\{\mathcal{D}^{n}\}_{n=1}^{N}\). At every turn \(t\), the dialogue history is denoted as \(\mathcal{D}_{t}=\{(U_{i},S_{i})\}_{i=1}^{t}\), where \(U_{t}\) is the user utterance and \(S_{t}\) the system response. 
Each of these \(S_{t}\) responses is grounded in knowledge snippets \(K_{t}\) that are retrieved from a knowledge base. As illustrated in Figure 1, our proposed framework takes the input \(X_{t}=(T_{t},K_{t},\mathcal{D}_{t-1},U_{t})\), with \(T_{t}\) representing the conversation topic at turn \(t\), and feeds it to a fine-tuned model \(f_{\theta}\) to generate a relevant and faithful response sequence \(\hat{S}_{t}\). The concatenation of \(\mathcal{D}_{t-1}\) and \(U_{t}\) is the dialogue history. ### Re-ranking Framework Beam search and nucleus sampling decoding methods allow the model to generate multiple responses (i.e., hypotheses) for the same input. Instead of selecting the response with the highest probability from the model, we propose a re-ranking method that ensures better relevance and faithfulness of the generated responses without further tuning. Our approach treats all of the \(r\) hypotheses as a pool of \(r\) response candidates \(C=\{C_{1},...,C_{r}\}\) to be further ranked based on their qualities. We evaluate each response candidate with ready-to-use scorers to obtain its quality score \(\mu\). Our goal is to select the best-scoring candidate according to the associated scores \(C_{\mu}=\{\mu(C_{1}),...,\mu(C_{r})\}\), that is, to identify the best dialogue response candidate \(\hat{S}_{t}\) according to the metrics, which is given by: \[\hat{S}_{t}=\operatorname*{arg\,max}_{C_{j}\in C}\mu(C_{j})\] ### Decoding Strategy To produce the top-\(r\) response candidates \(C=\{C_{1},...,C_{r}\}\), we feed the same input \(X_{t}\) to the fine-tuned model \(f_{\theta}\) and perform generation with the number of return sequences set to \(r\), with \(r\) larger than 1. Each response candidate \(C_{j}\) is an independently computed return sequence from the search hypotheses or from random sampling. Although under both paradigms the last \((r-1)\) hypotheses are seen as inferior response candidates, we will later show that this is not the case, and that by evaluating their qualities using ready-to-use automatic metrics, we can let the same fine-tuned model \(f_{\theta}\) reach a higher response quality. ### Response Quality Scorer KGD aims to ground the conversation by generating responses that are faithful to the provided knowledge and relevant to the dialogue history. To achieve this, we leverage off-the-shelf automatic metrics to evaluate the quality of response candidates. These metrics allow us to assess the faithfulness and the relevance of the responses without the need for additional labeled data. We consider the qualities of the response candidate w.r.t. the dialogue history, \(\mu_{D}\), and the input knowledge snippet, \(\mu_{K}\), to construct the final quality score \(\mu\). We elaborate on the metric corresponding to each aspect score in Sections 4.2 and 4.3. The relevance score \(\mu_{D}\) is calculated given the response candidate and the dialogue history, \(\mu_{D}(C_{j},(\mathcal{D}_{t-1},U_{t}))\), while the faithfulness score \(\mu_{K}\) is calculated with respect to the input knowledge snippet, \(\mu_{K}(C_{j},K_{t})\). In this work, we consider both qualities of the response candidate equally important; thus, we derive the final quality score \(\mu\) as the sum of \(\mu_{D}\) and \(\mu_{K}\). Our proposed method allows more randomness in the decoding process, which may cause meaningless repetition in some of the \(r\) hypotheses. To filter this, we remove the hypotheses that contain repetitive words. 
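As a concrete illustration of this re-ranking step, the following minimal sketch (ours, not the authors' released code) filters the candidate pool and selects the arg-max candidate under \(\mu=\mu_{D}+\mu_{K}\). The relevance scorer `mu_d` is left as a placeholder callable (e.g., a wrapper around FED, Section 4.3), while `kf1` implements the unigram-overlap Knowledge F1 used as \(\mu_{K}\) in Section 4.2; the repetition filter is a simple proxy, since the exact criterion is not spelled out here.

```python
from collections import Counter

def kf1(response: str, knowledge: str) -> float:
    """Unigram F1 overlap between a response and the knowledge snippet
    (the KF1 faithfulness scorer mu_K of Section 4.2)."""
    r, k = response.lower().split(), knowledge.lower().split()
    overlap = sum((Counter(r) & Counter(k)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(r), overlap / len(k)
    return 2 * precision * recall / (precision + recall)

def is_valid(candidate: str) -> bool:
    """Drop hypotheses with immediately repeated words (a simple proxy for
    the repetitive-word filter) or with any word longer than 30 characters."""
    words = candidate.split()
    no_repeats = all(a != b for a, b in zip(words, words[1:]))
    return no_repeats and all(len(w) <= 30 for w in words)

def pick(candidates, history, knowledge, mu_d):
    """Return the candidate maximizing mu = mu_D + mu_K; `mu_d` is any
    relevance scorer called as mu_d(candidate, history)."""
    pool = [c for c in candidates if is_valid(c)] or candidates
    return max(pool, key=lambda c: mu_d(c, history) + kf1(c, knowledge))
```

The unweighted sum mirrors the choice of treating both qualities as equally important; for scorers on very different scales, a normalization step would be a natural extension.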
We also filter hypotheses that contain a word more than 30 characters long, since words that long are not likely to occur in general English text.2 Footnote 2: [https://en.wikipedia.org/wiki/Longest_word_in_English](https://en.wikipedia.org/wiki/Longest_word_in_English) ## 4 Experiments ### Dataset and Models We use Wizard of Wikipedia (WoW) (Dinan et al., 2019), a large-scale corpus of multi-turn knowledge-grounded dialogues between an "apprentice" and a "wizard", to conduct our experiments in developing the KGD systems. We use the same split as Dinan et al. (2019), as stated in Shuster et al. (2020). We aim to produce better responses; thus, we focus on modeling only the "wizard" response utterances in the dialogue, where they respond to the "apprentice" utterances. The data statistics of WoW are shown in Table 2. \begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{**Split**} & \multicolumn{2}{c}{**\# Wizard responses**} \\ \cline{2-3} & **seen** & **unseen** \\ \hline Train & 74092 & - \\ Dev & 3939 & 3927 \\ Test & 3865 & 3924 \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics of Wizard of Wikipedia. We adopt pre-trained GPT-2 (Radford et al., 2019) and T5-small (Raffel et al., 2020) as the backbones. We fine-tune both models and limit the maximum sequence length to 512. Maximizing our GPU (RTX 2080Ti) capacity, we train the GPT-2 model with a batch size of 4 and the T5 model with a batch size of 8 for 10 epochs with an early-stopping patience of 3. We train using all of the training data, and we use the dev (seen topics) split and monitor the model's loss on this split for early stopping and for choosing the best model to use in the experiments. More training details are provided in §4.5. ### Faithfulness Score The faithfulness problem can also be considered an intrinsic hallucination problem for KGD tasks. Following Shuster et al. (2021), we leverage Knowledge F1 (KF1)3, calculated based on the unigram overlap between the generated response and the input knowledge snippet, to assess the faithfulness of responses. There are also other alternative n-gram-based automatic metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), the entailment measurement from a state-of-the-art natural language inference (NLI) model (Liu et al., 2019), and the similarity measurement from BLEURT (Sellam et al., 2020). We further investigate the impact of different faithfulness scorers in Section 6.1 and find that KF1 shows a distinct effectiveness in ensuring both faithfulness and overall performance. Footnote 3: [https://github.com/facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI) ### Relevance Score Overlap-based automatic evaluation metrics are known to be ineffective in distinguishing the relevance of the generated response to the dialogue history, due to the one-to-many nature of dialogue (Zhao et al., 2017; Yeh et al., 2021). Therefore, we explore reference-free model-based metrics on top of them. Specifically, we utilize the FED metric (Mehri and Eskenazi, 2020) as the relevance scorer. FED is an unsupervised evaluation metric that uses DialoGPT (Zhang et al., 2020) to measure 18 fine-grained turn- and dialogue-level qualities of dialogue. It calculates the likelihood of manually designed follow-up utterances to measure multiple qualities of dialogue. Moreover, it has been shown to correlate well with human judgment. We follow the hierarchical groupings from Phy et al. (2020) and separate the fine-grained metrics in FED into basic (w.r.t. understandability) and further (w.r.t. likeability) response qualities, both at the turn and dialogue level. 
At the turn level (**TL**), we group semantically appropriate, understandable, and fluent as turn-level metrics that measure the basic qualities of responses. We see the additional qualities as the ones that make the response more likeable, and group the interesting, engaging, specific, relevant, and correct measurements into one. Similarly, at the dialogue level (**DL**), we group coherent, error recovery, consistent, and diverse as the dialogue-level basic qualities of responses. At the same level, we also group depth, likeable, understandable, flexible, informative, and inquisitive as dialogue-level metrics that measure the further qualities of responses. On top of that, we also experiment with combining each level of metrics and all of the measurements to find the best combination for producing responses relevant to the dialogue history. We also explore another reference-free model-based metric, USL-H (Phy et al., 2020), for comparison. USL-H combines three models trained to determine whether a response is valid and grammatically correct, and to evaluate the sensibleness and the likelihood of a given response. The analysis is included in Section 6.1. ### Baselines We select baseline models that utilize gold knowledge snippets in their generation process. We take the performances of MemNet (Dinan et al., 2019), dodecaDialogue (Shuster et al., 2020), GPT-2 and T5 with control code and resampling (Rashkin et al., 2021), and PLUG-Golden Knowledge (Li et al., 2022) as our baselines. We also experiment with PICK in settings where the provided knowledge is retrieved instead of using the oracle knowledge. We leverage KnowledGPT (Zhao et al., 2020) and perform a procedure similar to that elaborated in Section 3, to select responses that are more relevant to the dialogue history and more faithful to the retrieved knowledge. 
We directly utilize the codes and models provided4 and adjust the generation parameters as required. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{4}{c}{**Test (seen topics)**} & \multicolumn{4}{c}{**Test (unseen topics)**} \\ \cline{2-9} & **BLEU-4** & **ROUGE-L** & **F1** & **KF1** & **BLEU-4** & **ROUGE-L** & **F1** & **KF1** \\ \hline \hline **Baselines** & & & & & & & & \\ \hline MemNet (w/ aux loss) (Dinan et al., 2019) & 1.5 & - & 35.5\% & - & 0.3 & - & 32.2\% & - \\ dodecaDialogue (Shuster et al., 2020) & 10 & - & **38.4\%** & - & **9.7** & - & - & - \\ Controlled GPT-2 (Rashkin et al., 2021) & 8.9 & - & - & - & 8.4 & - & - & - \\ Controlled T5 (Rashkin et al., 2021) & 8.4 & - & - & - & 8.7 & - & - & - \\ PLUG-Golden Knowledge (Li et al., 2022) & **11.5** & **31.1** & 36.0\% & **47.8\%** & 8.8 & **29.0** & **33.4\%** & **46.0\%** \\ \hline \hline **GPT-2** & & & & & & & & \\ \hline Greedy & 12.4 & 29.9 & 32.6\% & 48.6\% & 12.1 & 29.9 & 32.3\% & 46.8\% \\ \hline Beam search (\(n=5\), \(r=1\)) & 15.0 & 33.1 & 35.6\% & 64.5\% & 13.9 & 32.2 & 34.5\% & 60.3\% \\ + PICK (\(n=5\), \(r=5\)) & **16.6** & **34.1** & **37.0\%** & **73.7\%** & **15.6** & **33.7** & **36.4\%** & **71.0\%** \\ \hline Beam search (\(n=10\), \(r=1\)) & 15.4 & 33.4 & 35.8\% & 68.9\% & 14.3 & 32.5 & 34.7\% & 64.5\% \\ + PICK (\(n=10\), \(r=10\)) & **16.7** & **34.5** & **37.4\%** & **80.4\%** & **16.0** & **34.5** & **37.1\%** & **78.2\%** \\ \hline Top-\(k\) sampling (\(k=3\), \(r=1\)) & 8.7 & 26.3 & 29.0\% & 39.7\% & 8.1 & 25.5 & 28.2\% & 37.8\% \\ + PICK (\(k=3\), \(r=10\)) & **14.9** & **33.0** & **36.2\%** & **67.6\%** & **14.2** & **32.7** & **35.6\%** & **64.6\%** \\ \hline Top-\(p\) sampling (\(p=0.5\), \(r=1\)) & 11.5 & 28.3 & 31.2\% & 46.2\% & 10.4 & 27.6 & 30.1\% & 43.5\% \\ + PICK (\(p=0.5\), \(r=10\)) & **16.0** & **34.1** & **37.2\%** & **72.7\%** & **15.2** & **34.0** & **36.9\%** & **70.2\%** \\ \hline \hline **T5** & & & & & & & \\ \hline Greedy & 14.7 & 33.0 & 35.6\% & 56.0\% & 14.4 & 32.4 & 35.0\% & 56.2\% \\ \hline Beam search (\(n=5\), \(r=1\)) & 16.3 & 34.8 & 37.7\% & 77.8\% & 15.6 & 34.7 & 37.4\% & 78.8\% \\ + PICK (\(n=5\), \(r=5\)) & **16.3** & **34.9** & **37.8\%** & **79.6\%** & **15.6** & **34.8** & **37.6\%** & **80.2\%** \\ \hline Beam search (\(n=10\), \(r=1\)) & **16.2** & **34.8** & 37.7\% & 81.8\% & **15.5** & 34.7 & 37.5\% & 82.7\% \\ + PICK (\(n=10\), \(r=10\)) & 16.1 & 34.8 & **37.7\%** & **84.3\%** & 15.4 & **34.8** & **37.6\%** & **84.8\%** \\ \hline Top-\(k\) sampling (\(k=3\), \(r=1\)) & 11.7 & 30.3 & 33.3\% & 49.3\% & 11.4 & 29.9 & 32.8\% & 49.8\% \\ + PICK (\(k=3\), \(r=10\)) & **15.9** & **34.2** & **37.3\%** & **71.8\%** & **15.2** & **34.0** & **37.0\%** & **72.4\%** \\ \hline Top-\(p\) sampling (\(p=0.5\), \(r=1\)) & 13.9 & 32.3 & 35.1\% & 55.3\% & 14.0 & 31.9 & 34.6\% & 55.6\% \\ + PICK (\(p=0.5\), \(r=10\)) & **16.9** & **34.9** & **38.0\%** & **74.6\%** & **16.4** & **34.7** & **37.5\%** & **74.7\%** \\ \hline \hline \end{tabular} \end{table} Table 3: Overall performance comparisons. PICK significantly improves the performances of all models and decoding methods, even for top-\(k\) and top-\(p\) sampling, which gained low automatic metric scores on their vanilla responses. The best performances in each section are in **bold**, while the overall best is underlined. 
Footnote 4: [https://github.com/zhaoxlpku/KnowledGPT.git](https://github.com/zhaoxlpku/KnowledGPT.git) We note the **vanilla** response performance as our lower bound: the response performance of each decoding method without PICK (i.e., taking the top-1 hypothesis from beam search). ### Training details During training, the concatenated utterances are delimited using a speaker ID of either <speaker1> or <speaker2>, and the concatenations of the topic, knowledge snippet, and utterances are separated by a separator token \(\vartriangle\). We experiment with learning rates (lr) of \(1e-5\), \(5e-5\), \(1e-4\), and \(5e-4\) to fine-tune the models, and then pick the models with the lowest loss on the dev (seen topics) split as the models we use throughout the experiments. Ultimately, the best GPT-2 model is fine-tuned with an lr of 1e-5 and the T5 with 5e-4. ### Evaluation Automatic Metrics. We evaluate the final response qualities by comparing them to the gold responses. We perform automatic evaluation using BLEU-4 (Papineni et al., 2002), ROUGE-L (Lin, 2004), and unigram-F1. We implement the BLEU measurements following Rashkin et al. (2021). To make a fair comparison with previous work, we utilize BLEU-4 scoring as it is implemented in Rashkin et al. (2021) and ROUGE-L scoring (the mean F1 measure) as it is implemented in the HuggingFace evaluate library.5 Further, we also use KF1, as stated in Section 4.2, for faithfulness measurement. Footnote 5: [https://huggingface.co/spaces/evaluate-metric/rouge](https://huggingface.co/spaces/evaluate-metric/rouge) Human Evaluation. We conduct manual evaluations to measure the quality of the generated responses from two aspects: _Faithfulness_ and _Relevance_. We take 100 random generation samples from the test (unseen topics) split and ask crowd-sourced annotators6 to evaluate on a 4-point Likert scale from 1 (low quality) to 4 (high quality). We ask for three level-1 (all kinds) contributors and three level-3 (experienced only) contributors and report their average scores. The complete annotation guideline is attached in Appendix E. Footnote 6: [https://appen.com/](https://appen.com/) ## 5 Results ### Results with Oracle Knowledge Overall, as observed in Table 3, the proposed method achieves significantly better performance than the baselines from previous works, especially in the comparison of BLEU-4 scores. In this table, the PICK responses are all re-ranked based on the sum of the FED turn-level basic metrics and KF1. The proposed method significantly improves the performance of all models and decoding methods on all of the BLEU-4, ROUGE-L, F1, and KF1 metrics. Interestingly, for the top-\(k\) and top-\(p\) sampling decoding that previously gained low BLEU-4, ROUGE-L, F1, and even KF1 scores on their vanilla responses, there exist alternative responses that are more similar in quality to the gold response, and our proposed re-ranking and scoring framework promotes them. All the re-ranked responses also obtain a large increase in KF1 compared to their vanilla baselines, especially in top-\(k\) and top-\(p\) sampling, where the vanilla KF1 is far lower than the re-ranked responses' KF1. 
Although the scores of BLEU-4, ROUGE-L, and F1 are comparable in the inference of the T5 model using beam search, we can still see the improvement in KF1, signifying that the proposed method better addresses the use of knowledge in its responses. Lastly, PICK also closes the performance gap between unseen and seen evaluation, as is especially observed in the GPT-2 model performances. We provide samples of the responses in Table A1. ### Results with Retrieved Knowledge We reproduce the greedy decoding performance similar to what was reported in Zhao et al. (2020). Here, we use the same scoring metrics for PICK as in §5.1. Although the other decoding methods underperform greedy decoding in the knowledge-retrieval setting (see Table 4), PICK still shows improvement over each decoding method taken individually, as the BLEU-4, ROUGE-L, F1, and KF1 scores all increase. Interestingly, although our method improves the generated responses' performance compared to their vanilla counterparts, the top performances are still only comparable with the greedy decoding performance, except for the significantly better KF1 of the responses re-ranked by PICK. It is important to note that the KF1 here is calculated w.r.t. the retrieved knowledge instead of the gold knowledge. We also investigate the underperformance of KnowledGPT when inferenced with decoding methods other than greedy, and we find that the issue persists through repeated trials in different settings. We leave this issue outside the scope of this paper. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Inference methods**} & \multicolumn{4}{c}{**Test (seen topics)**} & \multicolumn{4}{c}{**Test (unseen topics)**} \\ \cline{2-9} & **BLEU-4** & **ROUGE-L** & **F1** & **KF1** & **BLEU-4** & **ROUGE-L** & **F1** & **KF1** \\ \hline Greedy & 5.8 & 20.3 & 22.0\% & 56.4\% & 4.7 & 19.0 & 20.4\% & 53.6\% \\ \hline Beam search (\(n=10\), \(r=1\)) & 2.5 & 12.6 & 14.4\% & 47.3\% & 2.1 & 12.0 & 13.6\% & 45.6\% \\ + PICK (\(n=10\), \(r=10\)) & **2.5** & **12.8** & **14.7\%** & **47.9\%** & **2.1** & **12.1** & **13.9\%** & **46.1\%** \\ \hline Top-\(k\) sampling (\(k=3\), \(r=1\)) & 4.4 & 18.5 & 20.5\% & 46.7\% & 3.7 & 17.6 & 19.4\% & 45.3\% \\ + PICK (\(k=3\), \(r=10\)) & **5.4** & **19.5** & **21.3\%** & **70.4\%** & **4.7** & **18.5** & **20.1\%** & **67.9\%** \\ \hline Top-\(p\) sampling (\(p=0.3\), \(r=1\)) & 5.6 & 20.1 & 21.8\% & 56.0\% & 4.6 & 18.8 & 20.3\% & 52.9\% \\ + PICK (\(p=0.3\), \(r=10\)) & **5.8** & **20.4** & **22.1\%** & **68.9\%** & **4.9** & **19.3** & **20.9\%** & **66.2\%** \\ \hline \hline \end{tabular} \end{table} Table 4: Results of KnowledGPT (Zhao et al., 2020) with PICK and different inference methods. Although other decoding methods underperform the greedy baseline, our method still improves over each of them. Here, KF1 is calculated w.r.t. the retrieved knowledge instead of the gold knowledge. The best performances in each section are in **bold**, while the overall best is underlined. ### Human Evaluation We conduct the manual evaluation on responses from GPT-2 and T5 models decoded with beam search (\(n=10\)). We compare both the vanilla (\(r=1\)) and the PICK responses (\(r=10\)) and evaluate the quality of the responses in the aspects of _Relevance_ and _Faithfulness_. Table 5 shows that PICK responses from GPT-2 and T5 models are more faithful and relevant than the vanilla generations. 
These findings correlate well with the automatic results shown in Table 3, where for both models the PICK responses achieved higher BLEU-4, ROUGE-L, F1, and KF1 scores in comparison to the vanilla responses. We attach a detailed visualization of the Likert score distribution in Figure B. It is also known from previous works that attempts to make the system more faithful usually lead to trade-offs between the response's relevance and faithfulness scores (Rashkin et al., 2021), either because the response is not quite as pertinent to the previous conversation turns or because it is overly extractive. This result showcases the merit of the proposed re-ranking method, which improves the faithfulness of the responses without a trade-off in the responses' relevance to the previous conversation turns. \begin{table} \begin{tabular}{l|c c} \hline \hline **Models** & **Faithfulness** & **Relevance** \\ \hline Gold responses & 2.64 & 2.09 \\ \hline \hline **GPT-2** & & \\ Beam search (\(n\) = 10, \(r\) = 1) & 2.26 & 1.48 \\ + PICK (\(n\) = 10, \(r\) = 10) & **2.49*** & **1.86*** \\ \hline **T5** & & \\ Beam search (\(n\) = 10, \(r\) = 1) & 2.51 & 1.87 \\ + PICK (\(n\) = 10, \(r\) = 10) & **2.56** & **2.01*** \\ \hline \hline \end{tabular} \end{table} Table 5: PICK enables models to produce responses more faithful to the provided knowledge and relevant to the dialogue history, as shown by the human evaluation of responses from GPT-2 and T5 models. In each section, * indicates that the result is significantly better (\(p\)-value \(<0.05\)) than its respective baseline comparison. See Figure B for a detailed visualization of the Likert score distribution. Figure 2: Comparing combinations of automatic evaluation of response qualities w.r.t. the dialogue history and the knowledge snippet, PICK with the FED turn-level basic metrics and KF1 produced responses with the best qualities. The comparisons are shown here as a heatmap of the sum of mean-normalized BLEU-4, ROUGE-L, F1, and KF1 w.r.t. the gold responses. The x-axis labels the knowledge-oriented metrics, and the y-axis labels the dialogue-history-oriented metrics used in the comparisons. TL denotes turn level, and DL denotes dialogue level. ## 6 Analysis and Discussion ### Rescoring Metrics We study the effectiveness of the automatic metrics explored in Sections 4.2 and 4.3. We employ GPT-2+PICK with each automatic metric as the scoring method, normalizing the performances by their respective means and standard deviations for a fairer comparison. We report the normalized comparison as a heatmap in Figure 2. The best performance is achieved by using the FED turn-level metrics that measure the basic qualities of responses w.r.t. the dialogue history (FED turn-level basic) alongside KF1. ### Decoding Strategy We perform an ablation of GPT-2+PICK and vanilla over the decoding methods to find the optimal configurations for our proposed method. For beam search, we increase the number of beams starting from 10. For top-\(k\) and nucleus sampling, we increase the \(k\) and \(p\) thresholds used to perform the sampling, starting from \(k=3\) and \(p=0.3\). For each experiment, we keep \(r=10\), and we take the comparison of performances w.r.t. the gold responses as a probe of the degree to which the responses fall in the desired quality range that the gold responses reflect (Figure 3). From Figure 3, we observe that increasing the number of beams does not improve the performance of the generated responses, while as \(p\) and \(k\) increase, the performances degrade. 
We conjecture that loosening the respective \(p\) and \(k\) sampling thresholds weakens the mitigation of bad continuations and, in turn, produces worse generations (i.e., generations with considerably high perplexity; Holtzman et al., 2019). The two sampling strategies show different optimal thresholds, as top-\(k\) sampling responses start to deteriorate with \(k\) larger than 3, while for nucleus sampling we see benefits in relaxing the \(p\) threshold to 0.5. We conjecture that, due to the probability-mass selection of nucleus sampling, the top-\(p\) tokens within \(p\leq 0.5\) could still be reliable, as this produces generations with considerably low perplexity, as mentioned in Holtzman et al. (2019). ### Number of Return Sequences (_r_) We also perform a further ablation of GPT-2+PICK and vanilla with varying \(r\). We extend \(r\) for nucleus sampling with \(p=0.5\), and for beam search we set \(r\) to follow the number of beams \(n\), retaining the top \(n\) choices when a new token in the sequence is generated. The performance comparisons are shown in Figure 4. Our observations show that increasing \(r\) in both the search and sampling decoding experiments promotes the existence of responses that are more similar to the gold response. With the gold response holding the desired qualities we aim to achieve, these findings also indicate that increasing \(r\) could, to some extent, help increase the quality of the responses generated by the same model. Figure 4: Performance comparison with unigram-F1 while varying \(r\) to follow the number of beams (left figure) and varying \(r\) on nucleus sampling with \(p\) = 0.5 (right figure). Increasing \(r\) in both the search and sampling decoding experiments promotes the existence of responses that are more similar to the gold response. Figure 3: Performance comparison with unigram-F1 while varying the number of beams, \(k\), and \(p\) in beam search, top-\(k\), and top-\(p\) sampling respectively, with \(r\) kept at 10. Extending the number of beams does not help, but the \(k\) and \(p\) thresholds help to mitigate bad response formation. Figure 4 shows that PICK response performance begins to saturate around \(n=10\) and \(r=10\) in the beam search and nucleus sampling experiments, most likely because the best candidate response is consistently found within that range of return sequences. ### Error Analysis To better understand our method's limitations, we provide manually sampled case studies with GPT-2+PICK using beam search (\(n\) = 50, \(r\) = 50), in which better responses are not selected by the scorer. We observe three kinds of errors. First, the current metric fails to promote the selection of better responses. We conjecture that this happens due to the low correlation of the automatic metrics with human judgments (Yeh et al., 2021); hence, implementing better human-preference metrics would aid better response promotion. Second, in some cases, substandard responses are selected due to their high overlap with the knowledge snippet. This could be because the metrics used rely on the spurious correlation between attribution and word overlap, and thus do not reliably distinguish attributable abstractive responses (McCoy et al., 2019; Dziri et al., 2022). We perform a further ablation study on this error in Appendix D. Third, the knowledge snippets provided are sometimes irrelevant to the dialogue history. We provide these case study samples in Table A2. 
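For readers who want to reproduce the candidate-generation side of these ablations, the following is a minimal sketch (ours, not the authors' code) of producing the top-\(r\) pool from Section 3.2 with the HuggingFace `transformers` API. The `gpt2` checkpoint and the prompt string are placeholders standing in for the WoW-fine-tuned model and the input \(X_{t}\); the generation length is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholders: PICK uses a GPT-2 model fine-tuned on Wizard of Wikipedia.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Schematic stand-in for X_t = (topic, knowledge snippet, dialogue history).
prompt = "topic [SEP] knowledge snippet [SEP] <speaker1> user utterance"
inputs = tok(prompt, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[1]

# Beam search with n = r = 10: return the 10 highest-scoring beams.
beams = model.generate(**inputs, max_new_tokens=40, num_beams=10,
                       num_return_sequences=10, early_stopping=True,
                       pad_token_id=tok.eos_token_id)

# Nucleus sampling with p = 0.5 and r = 10 independent samples.
samples = model.generate(**inputs, max_new_tokens=40, do_sample=True,
                         top_p=0.5, top_k=0, num_return_sequences=10,
                         pad_token_id=tok.eos_token_id)

# Strip the prompt to obtain the candidate pool C = {C_1, ..., C_r},
# which is then scored and re-ranked as in Section 3.1.
candidates = [tok.decode(seq[prompt_len:], skip_special_tokens=True)
              for seq in samples]
```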
## 7 Conclusion

This work investigates the alignment of KGD responses to faithfulness and relevance. We propose PICK, a straightforward yet effective generation re-ranking framework for KGD. PICK is model-agnostic and requires neither further model tuning nor additional labelled data for language modelling alignment. Experimental results show that the proposed method enables models to produce better responses that are more faithful to the provided knowledge and relevant to the dialogue history.

## Acknowledgements

We thank the anonymous reviewers for their valuable and constructive comments. We thank Tiezheng Yu for the insightful discussions. This work has been partially funded by the PF20-43679 Hong Kong PhD Fellowship Scheme, Research Grant Council, Hong Kong, and the Hong Kong Fellowship Scheme by the Hong Kong Research Grants Council (RGC).

## Limitations

While our proposed method, PICK, is model-agnostic and can be adopted by various model architectures, our exploration in this work is limited to GPT-2 and T5. Generating multiple alternative responses within a single decoding process can also increase the computational overhead of the KGD system, so ways to increase PICK's efficiency would be beneficial in the future. Additionally, while PICK improves the faithfulness and relevance of responses, it may not address other challenges in knowledge-grounded dialogue systems such as self-consistency, engagingness, long-term coherence, and more. Further research is needed to explore these limitations and develop more comprehensive approaches for generating better responses. We leave these explorations open for future work.

## Ethics Statement

In this paper, we propose a re-ranking framework that aims to better correlate the final generation with concrete attributes of the response. However, our work has a broader impact given the current popularity of ChatGPT. ChatGPT relies on a reward model to model human feedback for reinforcement learning. However, the training of the reward model requires a huge amount of human annotation, which is time- and resource-consuming. This raises a question: what is the expression of human preference, and is it possible to model human preference without heavy human annotation? Though far from perfect, we take an initial step in this direction by exploring the usage of automatic metrics to re-rank the responses. We believe it is a promising and valuable research topic.
2301.13738
CSS code surgery as a universal construction
We define code maps between Calderbank-Shor-Steane (CSS) codes using maps between chain complexes, and describe code surgery between such codes using a specific colimit in the category of chain complexes. As well as describing a surgery operation, this gives a general recipe for new codes. As an application we describe how to `merge' and `split' along a shared $\overline{X}$ or $\overline{Z}$ operator between arbitrary CSS codes in a fault-tolerant manner, so long as certain technical conditions concerning gauge fixing and code distance are satisfied. We prove that such merges and splits on LDPC codes yield codes which are themselves LDPC.
Alexander Cowtan, Simon Burton
2023-01-31T16:17:25Z
http://arxiv.org/abs/2301.13738v6
# CSS code surgery as a universal construction

###### Abstract

We define code maps between Calderbank-Shor-Steane (CSS) codes using maps between chain complexes, and describe code surgery between such codes using a specific colimit in the category of chain complexes. As well as describing a surgery operation, this gives a general recipe for new codes. As an application we describe how to 'merge' and 'split' along a shared \(\overline{X}\) or \(\overline{Z}\) operator between arbitrary CSS codes in a fault-tolerant manner, so long as certain technical conditions concerning gauge fixing and code distance are satisfied. We prove that such merges and splits on LDPC codes yield codes which are themselves LDPC.

## 1 Introduction

Quantum computers have become larger and more sophisticated in recent years [1, 12], but fault-tolerance is necessary to perform many practically relevant quantum algorithms. Qubit stabiliser error-correction codes are a well-studied approach to fault-tolerant quantum computing [22] and are favourable both for their practicality and theoretical simplicity. Such codes store logical data using entangled states of physical qubits and repeated many-body measurements, and so long as the physical errors on the qubits stay below a certain threshold, the logical data is protected. The most well-known example of a qubit stabiliser code is the toric code, in which qubits are embedded on the surface of a torus, and properties of the logical space are determined by the topology of the surface [17, 30]. This is a basic example of a qubit Calderbank-Shor-Steane (CSS) code; there are several equivalent ways of defining CSS codes, but for our purposes we shall describe them as codes which are all _homological_ in a suitable sense [4, 6]. This means that we can study CSS codes using the tools of homological algebra [42]. This approach has recently seen much success, for example in the construction of so-called good low-density parity check (LDPC) code families using a balanced product of chain complexes [38]. Such code families have an encoding rate \(k/n\) of logical to physical qubits which is constant in the code size, while maintaining a linear code distance \(d\), a substantial asymptotic improvement over simpler examples such as the toric code. The main caveat is, informally, that the connectivity between physical qubits is non-local. This complicates the architecture of the system, and also complicates the protocols for performing logical gates. There have been several recent works on protocols for logical gates in CSS codes [31, 11, 8, 39, 26], of varying generality. Here, we build on this work by defining surgery, in the abstract, using arbitrary CSS codes which form a categorical span, although the practical implementation of such surgery has several important caveats. The idea is that merging two codes works by identifying a common structure in each code and quotienting it out. CSS code surgery is particularly convenient when the CSS codes are _compatible_, in the sense that they have at least one identical \(\overline{Z}\) or \(\overline{X}\) logical operator. In this case, the common structure being quotiented out is the logical operator. In order to formalise this, we take a step back and look at the category of chain complexes \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\). We start by giving a recap of the relevant categorical background of chain complexes, and the view of classical linear binary codes and qubit CSS codes using chain complexes.
We then define code maps between CSS codes using morphisms between chain complexes. These are maps which send \(X\)-checks to \(X\)-checks and \(Z\)-checks to \(Z\)-checks in a coherent way, and have a convenient presentation as phase-free ZX diagrams, which we prove in Proposition 4.12. We believe that code maps crop up throughout the CSS code literature. We see 3 primary use-cases for code maps:

1. Encoders/decoders [18, 16, 24].
2. Constructing new codes.
3. Designing fault-tolerant logical operations.

We intend to expound on code maps in future work, but presently we focus on items 2 and 3. We define CSS code merges as a colimit - specifically, a coequaliser/pushout - in the category of chain complexes. Not only does the construction describe a surgery operation, but it also gives a general recipe for new codes. An application of our treatment is the description of certain classes of code surgery whereby the codes are merged or split along a \(\overline{Z}\) or \(\overline{X}\) operator. This is closely related to the notion of 'welding' in [36], and generalises the cases for 2D topological codes given in [25, 33, 37]. We prove that merging two LDPC codes in such a manner still yields an LDPC code. We give a series of examples, including the specific case of lattice surgery between surface codes. Lastly, we discuss how to apply such protocols in practice. We prove that when 3 technical conditions are satisfied then code surgery can be performed fault-tolerantly, allowing us to perform logical parity measurements on codes.

### Guide to reading the paper

Section 2 gives a bird's eye view of category theory and universal constructions, which will be useful later on. Section 3 describes the category of chain complexes with morphisms as matrices over \(\mathbb{F}_{2}\). Category theorists may wish to skip past these sections. We then give a rundown of CSS codes viewed as chain complexes in Section 4. Readers familiar with basic category theory and this perspective of CSS codes can safely skip to Section 4.3, where we introduce the notion of _code maps_, that is, coherent transforms between codes. We introduce surgery of codes as a colimit in Section 5. This is when the notion of 'gluing' codes together comes in, and we prove several results about these codes when the colimit uses logical \(\overline{Z}\) or \(\overline{X}\)-operators. Lastly, we introduce a protocol for performing logical \(\overline{Z}\otimes\overline{Z}\) and \(\overline{X}\otimes\overline{X}\) measurements fault-tolerantly in Section 6.

## 2 Universal constructions

In this section we provide a cartoon introduction to category theory and universal constructions. See [32] for a more in-depth introduction. A _category_ is a collection of _objects_ and _morphisms_. We will begin by drawing an object as a box with a decoration. Morphisms are arrows between objects, and the arrow notation suggests that we can _compose_ these. The _product_ of two objects in a category is an object, together with two arrows, one to each factor. The product decoration combines the two decorations. The product also must satisfy a _universal property_. This states that any other object that also combines the two decorations is already compatible with the product object in a unique way.
In other words, for all test objects there exists a unique _comparison_ morphism: the real product is the minimal object that projects down to the factors. Any other test object lives over the real product. This universal property has the immediate consequence that any other object that satisfies all these requirements will be isomorphic via a unique isomorphism that commutes with the other morphisms. A _pullback_ is a product with constraints: the resulting square should _commute_: if we compose any two paths of arrows with the same source object and the same target object then these paths should be equal. As with products, we also require the pullback to satisfy a universal property. All of these statements have _dual_ statements, which we get by reversing all the arrows. When we do this we sometimes put a _co-_ prefix on the terminology. For example, a _coproduct_, which would normally be called a sum. Once again, we require any such candidate coproduct to satisfy a universal property. We think of a coproduct as a way of gluing together objects. By adding constraints we can express where we wish to glue. The answer to this question is called a _pushout_: it is an object together with two morphisms, that satisfies a universal property. We have purposefully avoided describing the decorations in these diagrams: how they work, what they mean. A more in-depth introduction to category theory would describe these systematically, possibly mentioning the category of _finite sets and functions_. In this case, objects are sets, with _elements_, and we can combine these in various ways to make other sets. Instead of telling this story, we skip to the punchline, which is that there are no elements, or rather, an element of an object is really a morphism into that object. With this in mind, we can work in the category \(\mathtt{Mat}_{\mathbb{F}_{2}}\), whose objects we take for now to be natural numbers \(n\) (standing for the vector spaces \(\mathbb{F}_{2}^{n}\)) and whose morphisms are matrices over \(\mathbb{F}_{2}\), and look for an object satisfying the universal property of coproducts. Here is one candidate: the coproduct will not be unique (except for some degenerate cases), but the universal property of the coproduct guarantees it is unique up to unique isomorphism. We have reinvented the _direct sum_ of vector spaces. For a pushout of vector spaces we get a gluing, for example of a two dimensional vector space and a three dimensional vector space along a one dimensional vector space. But what about products? A curious thing happens in the category \(\mathtt{Mat}_{\mathbb{F}_{2}}\); we can get the dual universal construction by transposing matrices. For example, the above coproduct becomes the product, and similarly with pullbacks. The transpose duality of \(\mathtt{Mat}_{\mathbb{F}_{2}}\) will follow us throughout the rest of this paper. Here we have been taking the objects of \(\mathtt{Mat}_{\mathbb{F}_{2}}\) to be just natural numbers. In the rest of the paper we will use a slightly different definition for the objects: each natural number \(n\) is replaced by a basis set of size \(n\) for an \(n\)-dimensional vector space.

## 3 Chain complexes

We now recap some elementary homological algebra. All of this section is known [42], but we fix notation and look explicitly at the particular category of interest. Let \(\mathtt{Mat}_{\mathbb{F}_{2}}\) be the category which has as objects based finite-dimensional vector spaces over \(\mathbb{F}_{2}\), so each vector space \(V\) has a specified basis \(\tilde{V}\), and we have \(V\cong\mathbb{F}_{2}^{|\tilde{V}|}\). A morphism \(f:V\to W\) in \(\mathtt{Mat}_{\mathbb{F}_{2}}\) is a \(\dim\ W\times\dim\ V\) matrix valued in \(\mathbb{F}_{2}\). Each \(V\) has a dual space \(V^{*}\).
As \(V\cong V^{*}\), we may fix the duals such that \(V^{*}=V\), and \(\tilde{V}^{*}=\tilde{V}\). This has the benefit of forcing the dual of any matrix \(f:V\to W\), which is given by \(f^{*}:W^{*}\to V^{*}\), to strictly be the transpose \(f^{\intercal}:W\to V\). Let \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\) be the category of bounded chain complexes in \(\mathtt{Mat}_{\mathbb{F}_{2}}\). We now recap some of the basic properties of this category. A chain complex \(C_{\bullet}\) looks like this:

\[\cdots\xrightarrow{\partial_{n+1}}C_{n+1}\xrightarrow{\partial_{n}}C_{n}\xrightarrow{\partial_{n-1}}C_{n-1}\xrightarrow{\partial_{n-2}}\cdots\]

where each component \(C_{i}\) is a based vector space and \(n\in\mathbb{Z}\) is called the degree of the component in \(C_{\bullet}\). \(C_{\bullet}\) has \(\mathbb{F}_{2}\)-matrices as differentials \(\partial_{n}:C_{n+1}\to C_{n}\) such that \(\partial_{n}\circ\partial_{n+1}=0\pmod{2}\), \(\forall n\in\mathbb{Z}\). To disambiguate differentials between chain complexes we will use \(\partial_{n}^{C_{\bullet}}:=\partial_{n}\in C_{\bullet}\) when necessary. All our chain complexes are bounded, meaning there is some \(k\in\mathbb{Z}\) such that \(C_{n>k}=0\) and \(l\in\mathbb{Z}\) such that \(C_{n<l}=0\), i.e. it is bounded above and below. We call \(k-l\) the length of \(C_{\bullet}\), taking \(k\) as small and \(l\) as large as possible, so that \(k\) is the top degree and \(l\) the bottom.

**Definition 3.1**.: _Given a chain complex \(C_{\bullet}\) we let_

\[Z_{n}(C_{\bullet})=\ker(\partial_{n-1});\quad B_{n}(C_{\bullet})=\mathrm{im}(\partial_{n})\]

_and call \(Z_{n},B_{n}\) the \(n\)-cycles and \(n\)-boundaries. We also define a quotient \(H_{n}(C_{\bullet})=Z_{n}(C_{\bullet})/B_{n}(C_{\bullet})\), and call \(H_{n}\) the \(n\)th homology space of \(C_{\bullet}\)._

Recall that \(\dim(\ker(\partial_{n-1}))=\mathrm{null}(\partial_{n-1})=\dim C_{n}-\mathrm{rank}(\partial_{n-1})\). Note that throughout we sometimes use \(\ker(f)\) of a matrix \(f\) to mean the kernel object, i.e. subspace, and sometimes the kernel morphism, i.e. inclusion map. It should be clear from context which is meant.

**Example 3.2**.: _Let \(\Gamma\) be a finite simple undirected graph. We can form the incidence chain complex \(C_{\bullet}\) of \(\Gamma\), which has \(\tilde{C}_{-1}=V(\Gamma)\) and \(\tilde{C}_{0}=E(\Gamma)\), say, so that \(C_{-1}=\mathbb{F}_{2}^{|V(\Gamma)|}\), \(C_{0}=\mathbb{F}_{2}^{|E(\Gamma)|}\). All other components are zero. The sole nonzero differential \(\partial_{-1}\) is the incidence matrix of \(\Gamma\), with \((\partial_{-1})_{ij}=1\) if the \(j\)th edge is attached to the \(i\)th vertex, and 0 otherwise. \(H_{0}(C_{\bullet})\) is determined by the graph homology of \(\Gamma\)[42]._

**Definition 3.3**.: _A morphism \(f_{\bullet}:C_{\bullet}\to D_{\bullet}\) in \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\) is called a chain map, and consists of a collection of matrices \(\{f_{i}:C_{i}\to D_{i}\}_{i\in\mathbb{Z}}\) such that each resultant square of maps commutes._

As we specified _bounded_ chain complexes, only a finite number of the \(f_{i}\) matrices will be non-zero. A chain map \(f_{\bullet}\) is an isomorphism in \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\) iff all \(f_{i}\) are invertible, in which case one can think of the isomorphism as being a 'change of basis' for all components, which thus transforms the differential matrices appropriately.
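Since everything here is finite-dimensional linear algebra over \(\mathbb{F}_{2}\), these definitions can be checked mechanically. The following is a minimal sketch, assuming differentials are stored as numpy integer arrays; the helper names (`rank_f2`, `homology_dim`) are our own and purely illustrative.

```python
import numpy as np

def rank_f2(m: np.ndarray) -> int:
    """Rank of a binary matrix over F2, by Gaussian elimination mod 2."""
    m, r = m.copy() % 2, 0
    for c in range(m.shape[1]):
        pivot = next((i for i in range(r, m.shape[0]) if m[i, c]), None)
        if pivot is None:
            continue
        m[[r, pivot]] = m[[pivot, r]]          # move the pivot row up
        for i in range(m.shape[0]):
            if i != r and m[i, c]:
                m[i] = (m[i] + m[r]) % 2       # clear the rest of the column
        r += 1
    return r

def homology_dim(d_out: np.ndarray, d_in: np.ndarray) -> int:
    """dim H_n = dim ker(d_{n-1}) - rank(d_n), where d_out = d_{n-1}: C_n -> C_{n-1}
    and d_in = d_n: C_{n+1} -> C_n. By rank-nullity this equals
    dim C_n - rank(d_{n-1}) - rank(d_n)."""
    assert not ((d_out @ d_in) % 2).any(), "differentials must compose to 0 mod 2"
    return d_out.shape[1] - rank_f2(d_out) - rank_f2(d_in)

# Example 3.2 for the cycle graph on 3 vertices: edges sit in degree 0 and
# vertices in degree -1, and the single independent cycle gives dim H_0 = 1.
incidence = np.array([[1, 1, 0],
                      [1, 0, 1],
                      [0, 1, 1]])
assert homology_dim(incidence, np.zeros((3, 0), dtype=int)) == 1
```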
Observe also that every pair of chain complexes has at least two chain maps, the zero chain maps, between them, given by a collection of entirely zero matrices either way.

**Lemma 3.4**.: _A chain map at a component \(f_{n}:C_{n}\to D_{n}\) lifts to a matrix \(H_{n}(f_{\bullet}):H_{n}(C_{\bullet})\to H_{n}(D_{\bullet})\)._

Proof.: It is easy to check that \(f_{n}\) induces matrices from \(Z_{n}(C_{\bullet})\to Z_{n}(D_{\bullet})\) and the same for \(B_{n}\).

This lemma is equivalent to saying that \(H_{n}(-)\) is a functor from \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\rightarrow\mathtt{Mat}_{\mathbb{F}_{2}}\). \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\) has several known categorical properties which will be useful to us. One way to see a chain complex \(C_{\bullet}\) in \(\mathtt{Mat}_{\mathbb{F}_{2}}\) is as a \(\mathbb{Z}\)-graded \(\mathbb{F}_{2}\)-vector space, with specified bases and a distinguished map \(\partial:C_{\bullet}\to C_{\bullet}\) with components \(\partial_{i}:C_{i+1}\to C_{i}\), such that \(\partial\circ\partial=0\). Many of the properties of \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\) are inherited directly from those of \(\mathbb{Z}\)-graded \(\mathbb{F}_{2}\)-vector spaces.

**Lemma 3.5**.: \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\) _is an additive category, i.e. it has all finite biproducts._

Proof.: Adding two chain maps obviously gives a chain map. Define the biproduct \((C\oplus D)_{\bullet}\) of chain complexes \(C_{\bullet},D_{\bullet}\) to have components

\[(C\oplus D)_{n}=C_{n}\oplus D_{n}\]

and the same for differentials. This is both a categorical product and coproduct. Lastly, the zero object is the complex whose components are all zero, i.e. all \(\mathbf{0}_{i}\) are 0.

**Lemma 3.6**.: _Homology preserves direct sums (coproducts): given chain complexes \(C_{\bullet}\) and \(D_{\bullet}\),_

\[H_{n}((C\oplus D)_{\bullet})\cong H_{n}(C_{\bullet})\oplus H_{n}(D_{\bullet})\]

This is obvious, considering the blocks of each differential in \((C\oplus D)_{\bullet}\).

**Definition 3.7**.: _Let the dual chain complex \(C^{*}_{\bullet}\) have components_

\[(C^{*})_{n}=C_{-n}\]

_and differentials_

\[\partial_{n}^{C^{*}_{\bullet}}=(\partial_{-n-1}^{C_{\bullet}})^{\intercal}.\]

Our choice of duals means that \(C^{**}_{\bullet}=C_{\bullet}\) on the nose.

**Lemma 3.8**.: _For a chain complex \(C_{\bullet}\),_

\[H_{i}(C_{\bullet})\cong H_{-i}(C^{*}_{\bullet})\]

Proof.:

\[\ker((\partial_{i}^{C_{\bullet}})^{\intercal})/\mathrm{im}((\partial_{i-1}^{C_{\bullet}})^{\intercal})\cong\mathrm{im}(\partial_{i}^{C_{\bullet}})^{\perp}/\ker(\partial_{i-1}^{C_{\bullet}})^{\perp}\cong\ker(\partial_{i-1}^{C_{\bullet}})/\mathrm{im}(\partial_{i}^{C_{\bullet}}).\]

The dual of a chain map \(f\) is straightforward: it has matrices \((f^{*})_{i}=(f_{-i})^{\intercal}\). Thus \(f^{**}_{\bullet}=f_{\bullet}\) on the nose.

**Remark 3.9**.: _These are categorical duals with respect to a tensor product of chain complexes. As we only need this tensor product for a very specific construction in Section 6 we relegate it to Appendix B._

## 4 Quantum codes

Here we introduce classes of both classical and quantum codes as chain complexes. We give easy examples such as the surface and toric codes. Up until Section 4.3, this part is also well-known, although we describe the relationship between \(Z\) and \(X\) operators in greater detail than we have found elsewhere.
### Codes as chain complexes

Binary linear classical codes which encode \(k\) bits using \(n\) bits can be described by an \(m\times n\) parity check \(\mathbb{F}_{2}\)-matrix \(P\). The parity check matrix \(P\), when applied to any codeword \(c\) of length \(n\), gives \(Pc=0\), and thus \(k=\dim\ker(P)\); if the result is non-zero then an error has been detected, and under certain assumptions can be corrected. The distance \(d\) of a binary linear classical code is the minimum Hamming weight of its nonzero codewords, and one characterisation of codes is by their metrics \([n,k,d]\). Given \(P\), the generator matrix \(G\) is uniquely defined up to isomorphism as the matrix \(\ker(P)\), so we use only \(P\) to define the code. We may trivially view a binary linear classical code as a length \(1\) chain complex in \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\), with indices chosen for convenience:

\[C_{\bullet}=\cdots\xrightarrow{}C_{0}\xrightarrow{\partial_{-1}^{C_{\bullet}}}C_{-1}\xrightarrow{}\cdots\]

where \(C_{0}=\mathbb{F}_{2}^{n}\), \(C_{-1}=\mathbb{F}_{2}^{m}\), and \(\partial_{-1}^{C_{\bullet}}=P\), the chosen \(m\times n\) parity check matrix. Then we have \(k=\dim\ H_{0}(C_{\bullet})=\dim\ Z_{0}(C_{\bullet})\), where \(Z_{0}(C_{\bullet})\) is the codespace.

**Example 4.1**.: _Let \(C_{\bullet}\) be a \([3,1,3]\) repetition code, encoding 1 bit into 3 bits. In this case, let_

\[P=\begin{pmatrix}1&1&0\\ 1&0&1\end{pmatrix}\]

**Example 4.2**.: _Let \(C_{\bullet}\) be the \([7,4,3]\) Hamming code. Then let_

\[P=\begin{pmatrix}1&1&0&1&1&0&0\\ 1&0&1&1&0&1&0\\ 0&1&1&1&0&0&1\end{pmatrix}\]

From Example 3.2 we know that each graph defines a length 1 chain complex, and so every graph evidently specifies a classical code, with edges as physical bits and vertices parity checks. An easy case is for cycle graphs, whereby we have a repetition code with one redundant parity check, i.e. vertex. In general, the cycle graph \(\mathcal{C}_{n}\) with \(n\) vertices gives an \([n,1,n]\) repetition code. We now move on to quantum codes. Qubit Calderbank-Shor-Steane (CSS) codes are a type of stabiliser quantum code. Let \(\mathscr{P}_{n}=\mathscr{P}^{\otimes\,n}\) be the Pauli group over \(n\) qubits. Stabiliser codes start by specifying an Abelian subgroup \(\mathscr{S}\subset\mathscr{P}_{n}\), called a stabiliser subgroup, such that the codespace \(\mathscr{H}\) is the mutual \(+1\) eigenspace of all operators in \(\mathscr{S}\). That is,

\[U\left|\psi\right\rangle=\left|\psi\right\rangle\ \ \ \ \forall U\in\mathscr{S},\left|\psi\right\rangle\in\mathscr{H}\]

We then specify a generating set of \(\mathscr{S}\), of size \(m\). For CSS codes, this generating set has as elements tensor product strings of either \(\{I,X\}\) or \(\{I,Z\}\) Pauli terms, with no scalars other than 1. One can define two parity check \(\mathbb{F}_{2}\)-matrices \(P_{X},P_{Z}\), for the \(X\)s and \(Z\)s, which together define a particular code. Each column in \(P_{X}\) and \(P_{Z}\) represents a physical qubit, and each row a measurement/stabiliser generator. \(P_{X}\) and \(P_{Z}\) thus map \(Z\) and \(X\) operators on physical qubits respectively to sets of measurement outcomes, with a 1 outcome if the operators anticommute with a given stabiliser generator, and 0 otherwise; these outcomes are also called _syndromes_. \(P_{X}\) is a \(m_{X}\times n\) matrix, and \(P_{Z}\) is \(m_{Z}\times n\), with \(m_{X},m_{Z}\) marking the division of the generating set into \(X\)s and \(Z\)s respectively, satisfying \(m=m_{X}+m_{Z}\).
We do not require the generating set to be minimal, and hence \(P_{X}\) and \(P_{Z}\) need not be full rank.

**Definition 4.3**.: _We say that \(w^{Z}\) is the maximal weight of all \(Z\)-type generators and \(w^{X}\) the same for the \(X\)-type generators. These are the highest weight rows of \(P_{Z}\) and \(P_{X}\) respectively. Similarly, we say that \(q^{Z}\), \(q^{X}\) is the maximal number of \(Z\), \(X\) generators sharing a single qubit. These are the highest weight columns of \(P_{Z}\) and \(P_{X}\)._

CSS codes are characterised by \(\llbracket n,k,d\rrbracket\), with \(k\) the number of encoded qubits and \(d\) the code distance, which we define presently. That the stabilisers must commute is equivalent to the requirement that \(P_{X}P_{Z}^{\intercal}=P_{Z}P_{X}^{\intercal}=0\). We may therefore view these matrices as differentials in a length 2 chain complex:

\[C_{\bullet}=C_{1}\xrightarrow{\partial_{0}}C_{0}\xrightarrow{\partial_{-1}}C_{-1}\]

where \(\partial_{0}=P_{Z}^{\intercal}\) and \(\partial_{-1}=P_{X}\), or the other way round (\(\partial_{0}=P_{X}^{\intercal},\partial_{-1}=P_{Z}\)) if desired, but we start off with the former for consistency with the literature. The quantum code then has \(C_{0}\cong\mathbb{F}_{2}^{n}\). We will typically fix \(C_{0}=\mathbb{F}_{2}^{n}\) for convenience. The code also has \(k=\dim\ H_{0}(C_{\bullet})\). To see this, observe first that \(C_{0}\) represents the space of \(Z\) Paulis on the set of physical qubits, with a vector being a Pauli string e.g. \(v=\begin{pmatrix}1&0&1\end{pmatrix}^{\intercal}\leadsto Z\otimes I\otimes Z\). Each vector in \(H_{0}(C_{\bullet})\) can be interpreted as an equivalence class \([v]\) of \(Z\) operators on the set of physical qubits, modulo \(Z\) operators which arise as \(Z\) stabilisers. That this vector is in \(Z_{0}(C_{\bullet})\) means that the \(Z\) operators commute with all \(X\) stabilisers, and when the vector is not in \([0]=B_{0}(C_{\bullet})\) it means that the \(Z\) operators act nontrivially on the logical space. A basis of \(H_{0}(C_{\bullet})\) constitutes a choice of individual logical Paulis \(\overline{Z}\), that is, a tensor product decomposition of the space of logical \(Z\) operators, and we set \(\overline{Z}_{1}=\overline{Z}\otimes\overline{I}\otimes\cdots\otimes\overline{I}\) on _logical_ qubits, \(\overline{Z}_{2}=\overline{I}\otimes\overline{Z}\otimes\cdots\otimes\overline{I}\) etc. There is a logical qubit for every logical \(Z\), hence \(k=\dim\ H_{0}(C_{\bullet})\). To get the logical \(X\) operators, consider the dual \(C_{\bullet}^{*}\). The vectors in \(H_{0}(C_{\bullet}^{*})\) then correspond to \(\overline{X}\) operators in the same manner. As a consequence of Lemma 3.8 there must be an \(\overline{X}\) operator for every \(\overline{Z}\) operator and vice versa.

**Lemma 4.4**.: _A choice of basis \(\{[v]_{i}\}_{i\leq k}\) for \(H_{0}(C_{\bullet})\) implies a choice of basis \(\{[w]_{j}\}_{j\leq k}\) for \(H_{0}(C_{\bullet}^{*})\)._

Proof.: First, recall that we have the nondegenerate bilinear form

\[\cdot:\mathbb{F}_{2}^{n}\times\mathbb{F}_{2}^{n}\rightarrow\mathbb{F}_{2};\quad u\cdot v=u^{\intercal}v\]

which is equivalent to \(\cdot:C_{0}\times(C^{*})_{0}\rightarrow\mathbb{F}_{2}\); computationally, this tells us whether a \(Z\) operator commutes or anticommutes with an \(X\) operator. Now, let \(u\in Z_{0}(C_{\bullet})\) be a (possibly trivial) logical \(Z\) operator, and \(v\in B_{0}(C_{\bullet}^{*})\) be a product of \(X\) stabilisers.
Then \(P_{X}u=0\), and \(v=P_{X}^{\intercal}w\) for some \(w\in C_{-1}\). Thus \(u\cdot v=u^{\intercal}v=u^{\intercal}P_{X}^{\intercal}w=(P_{X}u)^{\intercal}w=0\), and so products of \(X\) stabilisers commute with logical \(Z\) operators. The same applies for \(Z\) stabilisers and logical \(X\) operators. As a consequence, \(v\cdot w=(v+s)\cdot(w+t)\) for any \(v\in Z_{0}(C_{\bullet})\), \(w\in Z_{0}(C_{\bullet}^{*})\), \(s\in B_{0}(C_{\bullet})\), \(t\in B_{0}(C_{\bullet}^{*})\), and so we may define \([v]\cdot[w]=v\cdot w\) for any \([v]\in H_{0}(C_{\bullet})\), \([w]\in H_{0}(C_{\bullet}^{*})\) with representatives \(v,w\). The duality pairing of \(C_{0},(C^{*})_{0}\) thus lifts to \(H_{0}(C_{\bullet}),H_{0}(C_{\bullet}^{*})\), and a choice of basis \(\{[v]_{i}\}_{i\leq k}\) for \(H_{0}(C_{\bullet})\) implies a choice of basis of \(H_{0}(C_{\bullet}^{*})\), determined uniquely by \([v]_{i}\cdot[w]_{j}=\delta_{i,j}\).

The above lemma ensures that picking a tensor product decomposition of logical \(Z\) operators also entails the same tensor product decomposition of logical \(X\) operators, so that \(\overline{X}_{i}\overline{Z}_{j}=(-1)^{\delta_{i,j}}\overline{Z}_{j}\overline{X}_{i}\), for operators on the \(i\)th and \(j\)th logical qubits. Let

\[d^{Z}=\min_{v\in Z_{0}(C_{\bullet})\setminus B_{0}(C_{\bullet})}|v|;\quad d^{X}=\min_{w\in Z_{0}(C_{\bullet}^{*})\setminus B_{0}(C_{\bullet}^{*})}|w|\]

where \(|\cdot|\) is the Hamming weight of a vector, then the code distance \(d=\min(d^{Z},d^{X})\). \(d^{Z}\) and \(d^{X}\) are called the systolic and cosystolic distances, and represent the lowest weight nontrivial \(Z\) and \(X\) logical operators respectively. As all the data required for a CSS code is contained within the chain complex \(C_{\bullet}\) - and potentially a choice of basis of \(H_{0}(C_{\bullet})\) - then we could define a CSS code as just the single chain complex, but it will be convenient to have direct access to the dual complex as well.

**Definition 4.5**.: _A CSS code is a pair \((C_{\bullet},C_{\bullet}^{*})\) of a length 2 chain complex centred at degree 0 and its dual. A_ based _CSS code additionally has a choice of basis for \(H_{0}(C_{\bullet})\), and hence for \(H_{0}(C_{\bullet}^{*})\). We call the first of the pair the \(Z\)-type complex, as vectors in \(C_{0}\) correspond to \(Z\)-operators, and the second the \(X\)-type complex._

**Remark 4.6**.: _As \(C_{\bullet}^{**}=C_{\bullet}\), we see that given any CSS code \((C_{\bullet},C_{\bullet}^{*})\) we can exchange \(Z\) and \(X\) stabilisers (and operators) to obtain \((C_{\bullet}^{*},C_{\bullet})\)._

Employing the direct sum \((C\oplus D)_{\bullet}\) of chain complexes we have the CSS code (\((C\oplus D)_{\bullet},(C^{*}\oplus D^{*})_{\bullet}\)), which means the CSS codes \((C_{\bullet},C_{\bullet}^{*})\) and \((D_{\bullet},D_{\bullet}^{*})\) perform in parallel on disjoint sets of qubits, without any interaction. The \(Z\) and \(X\) operators will then be the tensor product of operators in each. In summary, there is a bijection between length 1 chain complexes in \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\) and binary linear classical codes, and between length 2 chain complexes in \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\) and CSS codes. That this is a bijection is because we have been quite careful: under our definitions, two classical codes with different matrices but which are the same up to, say, Gaussian elimination are counted as being different classical codes.
Morally, these could be considered equivalent codes for some purposes, as they have the same codespaces, code distances and number of physical bits. A similar qualification applies to quantum CSS codes. Additionally, there is no guarantee that the chain complexes will give _useful_ codes. For example, a length 2 chain complex could be an exact sequence and thus have a homology space of zero at all components, in which case there are no logical qubits in the code; even if there is nontrivial homology, the (co-)systolic distance could be very low and thus not practical as a code. There are many classical and quantum codes which do not fit into this classification using \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\), such as nonlinear classical codes and stabiliser quantum codes which are not CSS (although any \([\![n,k,d]\!]\) stabiliser code can be mapped to a \([\![4n,2k,2d]\!]\) CSS code [7]). There are also fault-tolerant topological quantum systems which certainly don't live in \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\), although they have similar homological properties [30, 14]. Lastly, there are CSS codes for higher dimensional qudits, but for simplicity we stick to qubits. Rather than just individual codes we tend to be interested in families of codes, where \(n,k,d\) scale with the size of code in the family. Of particular practical interest are quantum _low density parity check_ (LDPC) CSS codes, which are families of codes where all \(w^{Z}\), \(w^{X}\), \(q^{Z}\) and \(q^{X}\) in the family are bounded from above by a constant. Equivalently, this means the Hamming weight of each column and row in each differential is bounded by a constant.

### Basic quantum codes

**Example 4.7**.: _Let \((C_{\bullet},C_{\bullet}^{*})\) be the \([\![9,1,3]\!]\) Shor code, so we have \(C_{-1}=\mathbb{F}_{2}^{2}\), \(C_{0}=\mathbb{F}_{2}^{9}\), \(C_{1}=\mathbb{F}_{2}^{6}\). The parity check matrices are given by_

\[P_{X}=\begin{pmatrix}1&1&1&1&1&1&0&0&0\\ 1&1&1&0&0&0&1&1&1\end{pmatrix};\quad P_{Z}=\begin{pmatrix}1&1&0&0&0&0&0&0&0\\ 1&0&1&0&0&0&0&0&0\\ 0&0&0&1&1&0&0&0&0\\ 0&0&0&1&0&1&0&0&0\\ 0&0&0&0&0&0&1&1&0\\ 0&0&0&0&0&0&1&0&1\end{pmatrix}\]

_We then have \(\dim\;Z_{0}(C_{\bullet})=\dim\;C_{0}-\operatorname{rank}(P_{X})=9-2=7\) and \(\dim\;B_{0}(C_{\bullet})=\operatorname{rank}(P_{Z}^{\intercal})=6\). Thus \(k=\dim\;H_{0}(C_{\bullet})=1\). \(H_{0}(C_{\bullet})\) has a single nonzero equivalence class \([v]\in H_{0}(C_{\bullet})\), with a representative \(v=\begin{pmatrix}1&1&1&1&1&1&1&1&1\end{pmatrix}^{\intercal}\). Similarly \(H_{0}(C_{\bullet}^{*})\) has the nonzero vector \(w=\begin{pmatrix}1&1&1&1&1&1&1&1&1\end{pmatrix}^{\intercal}\), which is a representative of \([w]\in H_{0}(C_{\bullet}^{*})\). Hence, we have two logical operators \(\overline{Z}=\bigotimes_{i=1}^{9}Z_{i}\), \(\overline{X}=\bigotimes_{i=1}^{9}X_{i}\) with \(Z_{i}\) on the \(i\)th qubit and the same for \(X_{i}\). We equally have, say, \(\overline{Z}=Z_{1}\otimes Z_{4}\otimes Z_{7}\) and \(\overline{X}=X_{1}\otimes X_{2}\otimes X_{3}\) in the same equivalence classes as those above, \([v]\) and \([w]\)._

We now consider two examples which come from square lattices. This can be done much more generally. In Appendix C we formalise categorically the procedure of acquiring chain complexes - and therefore CSS codes - from square lattices, which are a certain type of cell complex.
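Before moving to the lattice examples, note that the numbers in Example 4.7 are easy to reproduce mechanically. The sketch below repeats our illustrative `rank_f2` helper from Section 3 so it is self-contained; it checks that the stabilisers commute and that the code has one logical qubit.

```python
import numpy as np

def rank_f2(m):
    """Rank over F2 (same routine as the sketch in Section 3)."""
    m, r = m.copy() % 2, 0
    for c in range(m.shape[1]):
        pivot = next((i for i in range(r, m.shape[0]) if m[i, c]), None)
        if pivot is None:
            continue
        m[[r, pivot]] = m[[pivot, r]]
        for i in range(m.shape[0]):
            if i != r and m[i, c]:
                m[i] = (m[i] + m[r]) % 2
        r += 1
    return r

# Example 4.7: parity check matrices of the [[9,1,3]] Shor code.
P_X = np.array([[1,1,1,1,1,1,0,0,0],
                [1,1,1,0,0,0,1,1,1]])
P_Z = np.array([[1,1,0,0,0,0,0,0,0],
                [1,0,1,0,0,0,0,0,0],
                [0,0,0,1,1,0,0,0,0],
                [0,0,0,1,0,1,0,0,0],
                [0,0,0,0,0,0,1,1,0],
                [0,0,0,0,0,0,1,0,1]])

# Commuting stabilisers: P_X @ P_Z.T = 0 over F2, i.e. the differentials
# of the length 2 chain complex compose to zero.
assert not ((P_X @ P_Z.T) % 2).any()

# k = dim H_0 = dim ker(P_X) - rank(P_Z^T) = (9 - 2) - 6 = 1 logical qubit.
n = P_X.shape[1]
assert n - rank_f2(P_X) - rank_f2(P_Z) == 1
```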
**Example 4.8**.: _Consider the following square lattice:_

_Edges in the lattice are qubits, so \(n=18\), the 9 \(X\)-checks are associated with vertices and the 9 \(Z\)-checks are associated with faces, which are indicated by white circles. Grey vertices indicate periodic boundary conditions, so the lattice can be embedded on a torus. This is an instance of the standard toric code [30]._

_The abstracted categorical homology from before is now the homology of the tessellated torus, with cycles, boundaries etc. having their usual meanings. Letting the code be \((C_{\bullet},C_{\bullet}^{*})\), we have \(k=\dim\ H_{0}(C_{\bullet})=2\), and (co)systolic distances are the lengths of the essential cycles of the torus._

**Example 4.9**.: _Now consider a different square lattice:_

_This represents a patch of surface code \((D_{\bullet},D_{\bullet}^{*})\), where we have two smooth sides, on the left and right, and two rough sides to the patch, on the top and bottom. Observe that we have 'dangling' edges at the top and bottom, which do not terminate at vertices. We have_

\[\dim\ D_{1}=\dim\ D_{-1}=6;\quad n=\dim\ D_{0}=13;\quad k=\dim\ H_{0}(D_{\bullet})=1\]

_The systolic distance is \(3\), the length of the shortest path from the top to bottom boundary, and the cosystolic distance \(3\), the same but from left to right._

### Code maps

One may wish to convert one code into another, making a series of changes to the set of stabiliser generators to be measured, and potentially also to the physical qubits. The motivation behind such protocols is typically to perform logical operations which are not available natively to the code; not only might the target code have other logical operations, but the protocol is itself a map between logical spaces when chosen carefully. An example of a change to the measurements and qubits is code deformation. We do not formalise code deformation here, as that has some specific connotations [41]. Instead we define a related notion, called a _code map_, which has some overlap. A code map is also related to, but not the same as, the 'homomorphic gadgets' from [26].

**Definition 4.10**.: _A \(\overline{Z}\)-preserving code map \(\mathcal{F}_{\overline{Z}}\) from a CSS code \((C_{\bullet},C_{\bullet}^{*})\) to \((D_{\bullet},D_{\bullet}^{*})\) is a dual pair of chain maps \((f_{\bullet},f_{\bullet}^{*})\), for \(f_{\bullet}:C_{\bullet}\to D_{\bullet}\) and \(f_{\bullet}^{*}:D_{\bullet}^{*}\to C_{\bullet}^{*}\)._

Note that the second chain map is strictly speaking obsolete, as all the data is contained in a single chain map \(f_{\bullet}\), but as with chain complexes it will be handy to keep both around. Let us unpack this definition. \(\mathcal{F}_{\overline{Z}}\) first maps \(Z\)-operators in \(C_{0}\) to \(Z\)-operators in \(D_{0}\), using \(f_{0}\). It may map a single \(Z\) on a qubit to a tensor product of \(Z\)s, or to \(I\). It then has a map \(f_{1}\) on \(Z\) generators, and another \(f_{-1}\) on \(X\) generators. Recalling Definition 3.3, we have two commuting squares, labelled I and II. I stipulates that applying products of \(Z\) stabiliser generators on the code and then performing the code map should be equivalent to performing the code map and then applying products of \(Z\) stabiliser generators, i.e. \(f_{0}\circ\partial_{0}^{C_{\bullet}}=\partial_{0}^{D_{\bullet}}\circ f_{1}\).
II stipulates that performing the \(X\) measurements and then mapping the code should be equivalent to mapping the code and then performing \(X\) measurements, so there is a consistent mapping between all measurement outcomes, i.e. \(f_{-1}\circ\partial_{-1}^{C_{\bullet}}=\partial_{-1}^{D_{\bullet}}\circ f_{0}\). Then there is the chain map \(f_{\bullet}^{*}\). This has the component \(f_{0}^{\intercal}:D_{0}\to C_{0}\), which maps an \(X\)-operator in \(D_{0}\) back to an \(X\)-operator in \(C_{0}\). Similarly for \(f_{-1}^{\intercal}\) and \(f_{1}^{\intercal}\), each of which come with commuting squares which are just the transposed conditions, e.g. \(f_{1}^{\intercal}\circ(\partial_{0}^{D_{\bullet}})^{\intercal}=(\partial_{0}^{C_{\bullet}})^{\intercal}\circ f_{0}^{\intercal}\), so they say nothing new. This is not surprising, as all the data for \(f^{*}\) is given by \(f\) already. We now show that this definition entails some elementary properties. For a start, Lemma 3.4 implies that a code map gives a map from a \(\overline{Z}\) operator in \(H_{0}(C_{\bullet})\) to \(\overline{Z}\)s in \(H_{0}(D_{\bullet})\); this can also map to a tensor product of logical \(\overline{Z}\)s, and in particular map \(\overline{Z}\) to zero i.e. \(\overline{I}\), but it must not map a \(\overline{Z}\) to an operator which can be detected by the \(X\) stabiliser measurements. Hence \((f_{\bullet},f_{\bullet}^{*})\) preserves the fact that any \(\overline{Z}\) is an undetectable operator on the codespace. A similar requirement holds for \(\overline{X}\) operators, but this time the condition is inverted. Every \(\overline{X}\) in \(H_{0}(D_{\bullet}^{*})\) must have a map only to logical operators in \(H_{0}(C_{\bullet}^{*})\), but the other way is not guaranteed. Let \(n_{C}\) and \(n_{D}\) be the number of physical qubits in codes \((C_{\bullet},C_{\bullet}^{*})\) and \((D_{\bullet},D_{\bullet}^{*})\) respectively. We may interpret \(\mathcal{F}_{\overline{Z}}\) as a \(\mathbb{C}\)-linear map \(M\) in \(\mathtt{FHilb}\), the category of finite-dimensional Hilbert spaces. This \(\mathbb{C}\)-linear map has the property that \(MU_{Z}=U_{Z}^{\prime}M\), where \(U_{Z}\) is a tensor product of \(Z\) Paulis on \(n_{C}\) qubits and \(U_{Z}^{\prime}\) is a tensor product of \(Z\) Paulis on \(n_{D}\) qubits. In particular, given any \(U_{Z}\) we have a specified \(U_{Z}^{\prime}\). The same is not true the other way round, as the map \(f_{0}\) is not necessarily injective or surjective. Similarly, \(MU_{X}=U_{X}^{\prime}M\). This time, however, given any \(U_{X}^{\prime}\) on \(n_{D}\) qubits we have a specified \(U_{X}\), but vice versa is not guaranteed, depending on \(f_{0}^{\intercal}\). As a consequence, the linear map \(M\) is _stabiliser_, in the sense that it maps Paulis to Paulis, but not _unitary_ in general. \(M\) is unitary iff \(f_{0}\) is invertible. If \(M\) is not even an isometry, it cannot be performed deterministically, and the code map must include measurements on physical qubits. There will in general be Kraus operators corresponding to different measurement outcomes which will determine whether the code map has been implemented as desired; for now we assume that \(M\) is performed deterministically, and leave this complication for Section 6. Similarly, while the code map can be interpreted as a circuit between two codes, we do not claim that such a circuit can be performed fault-tolerantly in general.
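The commuting-square conditions I and II above are straightforward to test numerically. Here is a minimal sketch, with argument names of our own choosing, which checks both squares for candidate components \((f_{1},f_{0},f_{-1})\); as a sanity check, the identity chain map on any code passes.

```python
import numpy as np

def is_code_map(f1, f0, fm1, d0_C, dm1_C, d0_D, dm1_D) -> bool:
    """Check the two commuting squares of Definition 4.10 over F2:
    I : f0 . d0_C  == d0_D  . f1   (coherence with Z stabiliser generators)
    II: fm1 . dm1_C == dm1_D . f0  (coherence with X measurement outcomes)"""
    square_I = (f0 @ d0_C + d0_D @ f1) % 2
    square_II = (fm1 @ dm1_C + dm1_D @ f0) % 2
    return not square_I.any() and not square_II.any()

# Sanity check: the identity chain map on a small code is a code map.
d0 = np.array([[1], [1], [1]])           # d_0 : C_1 -> C_0
dm1 = np.array([[1, 1, 0], [0, 1, 1]])   # d_{-1} : C_0 -> C_{-1}
I1, I0, Im1 = np.eye(1, dtype=int), np.eye(3, dtype=int), np.eye(2, dtype=int)
assert is_code_map(I1, I0, Im1, d0, dm1, d0, dm1)
```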
**Remark 4.11**.: _For the following proposition, and at various points throughout the rest of the paper, we will use the ZX-calculus, a formal graphical language for reasoning about computation with qubits. We do not give a proper introduction to this calculus for brevity, but Sections 1-3 of [40] are sufficient for the interested reader. Our use of ZX diagrams is unsophisticated, and primarily for convenience._

**Proposition 4.12**.: _Let \(\mathcal{F}_{\overline{Z}}\) be a \(\overline{Z}\)-preserving code map between codes \((C_{\bullet},C_{\bullet}^{*})\) and \((D_{\bullet},D_{\bullet}^{*})\) with qubit counts \(n_{C}\) and \(n_{D}\). The interpretation of \(\mathcal{F}_{\overline{Z}}\) as a \(\mathbb{C}\)-linear map \(M\) in \(\mathtt{FHilb}\) has a presentation as a circuit with gates drawn from \(\{\mathrm{CNOT},\left|+\right\rangle,\left\langle 0\right|\}\)._

Proof.: We start with the linear map \(M:(\mathbb{C}^{2})^{\otimes\,n_{C}}\rightarrow(\mathbb{C}^{2})^{\otimes\,n_{D}}\). By employing the partial transpose in the computational basis we convert it into a state \(\left|\psi\right\rangle\), i.e. inserting \(n_{C}\) Bell pairs. By the definition of \(f_{0}\) we know that this has an independent stabiliser, with one \(Z\) and \(n_{C}-1\) \(I\)s followed by some \(n_{D}\)-fold tensor product of \(Z\) and \(I\), for each of the \(n_{C}\) qubits. From \(f_{0}^{\intercal}\) it also has an independent stabiliser, with some \(n_{C}\)-fold tensor product of \(X\) and \(I\) followed by \(n_{D}-1\) \(I\)s and one \(X\), for each of the \(n_{D}\) qubits. \(\left|\psi\right\rangle\) is therefore a stabiliser state. Further, from Theorem 5.1 of [28] it has a presentation as a 'phase-free ZX diagram', of a form in which the top \(n_{C}\) qubits do not have a green spider. We perform the partial transpose again to convert the state \(\left|\psi\right\rangle\) back into the map \(M\). Any ZX diagram of this form can be expressed as a matrix over \(\mathbb{F}_{2}\), mapping \(X\)-basis states from \((\mathbb{C}^{2})^{\otimes\,n_{C}}\) to \((\mathbb{C}^{2})^{\otimes\,n_{D}}\). The example above, ignoring the ellipses, has the matrix

\[\begin{pmatrix}1&0&1\\ 1&1&1\end{pmatrix}\]

which is equal to \(f_{0}\); the point of the above rigmarole is thus to say that \(f_{0}\) is precisely a linear map between \(X\)-basis states, which one can check easily. We can then perform Gaussian elimination on \(f_{0}\), performing row operations, which produce CNOTs on the r.h.s. of the diagram in the manner of [29], until the matrix is in reduced row echelon form. We then perform column operations producing CNOTs on the l.h.s. of the diagram, until the matrix has at most one \(1\) in each row and column. This can be performed using the leading coefficients to remove all other \(1\)s in that row. The final matrix just represents a permutation of qubits with some states and effects. An empty column corresponds to a \(\left\langle 0\right|\) effect, and an empty row a \(\left|+\right\rangle\) state. We thus end up with a presentation of \(M\) in the desired form. On our example, one can check that the resulting circuit maps \(Z\otimes I\otimes I\mapsto Z\otimes Z\) etc. As a consequence \(\bar{M}=M\), i.e. the conjugate of \(M\) is just \(M\).

**Corollary 4.13**.: _If \(n_{C}=0\) then the map \(M\) is actually a stabiliser state of the form \(M=\left|+\right\rangle^{\otimes n_{D}}\).
When \(n_{D}=0\) then \(M=\left\langle 0\right|^{\otimes n_{C}}\)._

Proof.: When \(n_{C}=0\) we see that \(M\) has exactly \(n_{D}\) independent stabilisers with one \(X\) and \(n_{D}-1\) \(I\)s, for each qubit to put \(X\) on. The flipped argument applies when \(n_{D}=0\).

**Definition 4.14**.: _An \(\overline{X}\)-preserving code map \(\mathcal{F}_{\overline{X}}\) from a CSS code \((D_{\bullet},D_{\bullet}^{\ast})\) to \((C_{\bullet},C_{\bullet}^{\ast})\) is a pair of chain maps \((f_{\bullet},f_{\bullet}^{\ast})\), for \(f_{\bullet}:C_{\bullet}\to D_{\bullet}\) and \(f_{\bullet}^{\ast}:D_{\bullet}^{\ast}\to C_{\bullet}^{\ast}\)._

So \(\mathcal{F}_{\overline{X}}\) is just mapping in the other direction to \(\mathcal{F}_{\overline{Z}}\) from before, and we say that \(\mathcal{F}_{\overline{X}}\) is _opposite_ to \(\mathcal{F}_{\overline{Z}}\). In this case, when we interpret \(\mathcal{F}_{\overline{X}}\) as a \(\mathbb{C}\)-linear map \(L\), it has the property that \(LU_{X}=U_{X}^{\prime}L\) and that any \(U_{X}\) gives a specified \(U_{X}^{\prime}\), and \(LU_{Z}=U_{Z}^{\prime}L\), but that any \(U_{Z}^{\prime}\) gives a specified \(U_{Z}\) but not vice versa. By inspecting the stabilisers we see that, for \(\mathcal{F}_{\overline{Z}}\) with interpretation \(M\) and \(\mathcal{F}_{\overline{X}}\) with interpretation \(L\), \(L=M^{\intercal}=M^{\dagger}\).

**Corollary 4.15**.: _Let \(\mathcal{F}_{\overline{X}}\) be an \(\overline{X}\)-preserving code map between codes \((D_{\bullet},D_{\bullet}^{\ast})\) and \((C_{\bullet},C_{\bullet}^{\ast})\) with qubit counts \(n_{D}\) and \(n_{C}\). The interpretation of \(\mathcal{F}_{\overline{X}}\) as a \(\mathbb{C}\)-linear map \(L\) in \(\mathtt{FHilb}\) has a presentation as a circuit with gates drawn from \(\{\mathrm{CNOT},\left|0\right\rangle,\left\langle+\right|\}\)._

**Corollary 4.16**.: _If \(n_{D}=0\) then \(L=\left|0\right\rangle^{\otimes n_{C}}\), and if \(n_{C}=0\) then \(L=\left\langle+\right|^{\otimes n_{D}}\)._

**Corollary 4.17**.: _The restrictions of \(\mathcal{F}_{\overline{Z}}\) and \(\mathcal{F}_{\overline{X}}\) to use only \(H_{0}(f_{\bullet})\) and \(H_{0}(f_{\bullet}^{\ast})\) also have interpretations as \(\mathbb{C}\)-linear maps on logical qubits in the same way, and Proposition 4.12 and Corollary 4.15 also apply to such interpretations._

While our definitions are for chain complexes of length \(2\), in principle one can map between any two codes with an arbitrary number of meta-checks, or between a classical code and quantum code, which could be interpreted as 'switching on/off' either \(X\) or \(Z\) stabiliser measurements. While code maps are related to code deformations, we are aware of code deformation protocols which do not appear to fit in the model of chain maps described. For example, when moving defects around on the surface code for the purpose of, say, defect braiding [20], neither \(\overline{Z}\) nor \(\overline{X}\) operators are preserved in the sense we give here.

## 5 CSS code surgery

### Colimits in \(\operatorname{\mathtt{Ch}}(\operatorname{\mathtt{Mat}}_{\mathbb{F}_{2}})\)

To understand code surgery we require some additional chain complex technology, namely colimits. Coproducts, pushouts and coequalisers are directly relevant for our applications. We have already covered coproducts in Lemma 3.5, so we describe pushouts and coequalisers here.
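As a computational warm-up, recall the gluing of vector spaces pictured in Section 2: over \(\mathbb{F}_{2}\) the pushout of a span of matrices is a cokernel, so its dimension follows from a single rank computation. The sketch below uses our own illustrative helper names and is not notation used elsewhere in the paper.

```python
import numpy as np

def rank_f2(m):
    """Rank over F2 (same routine as the sketch in Section 3)."""
    m, r = m.copy() % 2, 0
    for c in range(m.shape[1]):
        pivot = next((i for i in range(r, m.shape[0]) if m[i, c]), None)
        if pivot is None:
            continue
        m[[r, pivot]] = m[[pivot, r]]
        for i in range(m.shape[0]):
            if i != r and m[i, c]:
                m[i] = (m[i] + m[r]) % 2
        r += 1
    return r

def pushout_dim(f: np.ndarray, g: np.ndarray) -> int:
    """Dimension of the pushout of f: A -> C and g: A -> D in Mat_F2.
    Q = (C (+) D) / {(f(a), g(a)) : a in A}; over F2 the identified subspace
    is the column span of [f; g], so dim Q = dim C + dim D - rank([f; g])."""
    return f.shape[0] + g.shape[0] - rank_f2(np.vstack([f, g]))

# Gluing F2^2 and F2^3 along F2, as pictured in Section 2: 2 + 3 - 1 = 4.
f = np.array([[1], [0]])         # inclusion A = F2 -> C = F2^2
g = np.array([[1], [0], [0]])    # inclusion A = F2 -> D = F2^3
assert pushout_dim(f, g) == 4
```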
**Definition 5.1**.: _The pushout of chain maps \(f_{\bullet}:A_{\bullet}\to C_{\bullet}\) and \(g_{\bullet}:A_{\bullet}\to D_{\bullet}\) gives the chain complex \(Q_{\bullet}\), where each component is the pushout \(Q_{n}\) of \(f_{n}\) and \(g_{n}\). The differentials \(\partial_{n}^{Q_{\bullet}}\) are given by the unique mediating map from each component's pushout. Specifically, if we have the pushout of \(f_{\bullet}\) and \(g_{\bullet}\), then for degrees \(n,n+1\) we have componentwise pushout squares, where_

\[Q_{n}=(C_{n}\oplus D_{n})/f_{n}\sim g_{n};\quad k_{n}(c)=[c]\in Q_{n};\quad l_{n}(d)=[d]\in Q_{n},\]

_with \(k_{n}:C_{n}\to Q_{n}\) and \(l_{n}:D_{n}\to Q_{n}\) the pushout legs, \([c]\) being the equivalence class in \(Q_{n}\) having \(c\) as a representative, and the same for \([d]\). As \(k_{n}\circ\partial_{n}^{C_{\bullet}}\circ f_{n+1}=l_{n}\circ\partial_{n}^{D_{\bullet}}\circ g_{n+1}\) and the inner square is a pushout in \(\operatorname{\mathtt{Mat}}_{\mathbb{F}_{2}}\), there is a unique matrix \(\partial_{n}^{Q_{\bullet}}\). The differentials satisfy \(\partial_{n}^{Q_{\bullet}}\circ\partial_{n+1}^{Q_{\bullet}}=0\), and one can additionally check that this is indeed a pushout in \(\operatorname{\mathtt{Ch}}(\operatorname{\mathtt{Mat}}_{\mathbb{F}_{2}})\) by considering the universal property at each component._

**Definition 5.2**.: _The coequaliser of chain maps \(C_{\bullet}\xrightarrow[g]{f}D_{\bullet}\) is a chain complex \(E_{\bullet}\) and chain map \(coeq(f,g)_{\bullet}:D_{\bullet}\to E_{\bullet}\), which we will just call \(coeq_{\bullet}\). We have \(E_{n}=D_{n}/f_{n}\sim g_{n}\) and \(coeq_{n}(d)=[d]\)._

Doing some minor diagram chasing one can check that this is indeed a coequaliser in \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\).

**Remark 5.3**.: _We can view the pushout of \(f_{\bullet}\) and \(g_{\bullet}\) as the coequaliser of the parallel pair \(\iota_{C}\circ f_{\bullet},\ \iota_{D}\circ g_{\bullet}:A_{\bullet}\rightrightarrows(C\oplus D)_{\bullet}\), for the inclusion maps \(\iota_{C}:C_{\bullet}\to(C\oplus D)_{\bullet}\) and \(\iota_{D}:D_{\bullet}\to(C\oplus D)_{\bullet}\). The difference is that the pair of chain maps \(k_{\bullet},l_{\bullet}\) have been replaced with the single map \(coeq_{\bullet}\). We can view coequalisers as instances of pushouts as well, doing a sort of reverse of the procedure above._

**Remark 5.4**.: _As with all colimits, those above are defined by the category theory only up to isomorphism. Because we are working over a field, the isomorphism class of a chain complex \(Q_{\bullet}\) is completely determined by the dimensions of the underlying vector spaces \(\{\dim Q_{i}\}_{i}\) and its Betti numbers, which is the set \(\{\dim H_{i}(Q_{\bullet})\}_{i}\) of dimensions of the homology spaces. This is a homological version of the rank-nullity theorem. These are very large isomorphism classes, and we require more fine grained control over which chain complexes are chosen by the colimits. Thus we pick out the obvious objects from their isomorphism classes, which are those described in the definitions of the colimits above._

**Lemma 5.5**.: \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\) _is an Abelian category, and thus is finitely complete and cocomplete, meaning that it has all finite limits and colimits._

While this is well-known, we sketch a proof of this lemma in Appendix A for completeness. There we also sketch some additional limits, but we only need colimits in this work.

### Generic code surgery

We now give a general set of definitions for surgery between arbitrary compatible CSS codes; the condition for compatibility is very weak here.
Working at this level of generality means that we cannot prove very much about the output codes or relevant logical maps. As a consequence, we will then focus on particular surgeries which make use of 'gluing' or 'tearing' along logical \(\overline{Z}\) or \(\overline{X}\) operators in Section 5.3.

**Definition 5.6**.: _Let \((C_{\bullet},C_{\bullet}^{*})\), \((D_{\bullet},D_{\bullet}^{*})\) and \((A_{\bullet},A_{\bullet}^{*})\) be CSS codes, such that there is a span of chain complexes \(C_{\bullet}\xleftarrow{f_{\bullet}}A_{\bullet}\xrightarrow{g_{\bullet}}D_{\bullet}\). The \(Z\)-type merged code of \((C_{\bullet},C_{\bullet}^{*})\) and \((D_{\bullet},D_{\bullet}^{*})\) along \(f_{\bullet},g_{\bullet}\) is the code \((Q_{\bullet},Q_{\bullet}^{*})\) such that \(Q_{\bullet}\) is the pushout of the above span._

Recall from Remark 5.3 that we can view any pushout as a coequaliser. We thus have

\[A_{\bullet}\overset{\iota_{C}\circ f_{\bullet}}{\underset{\iota_{D}\circ g_{\bullet}}{\rightrightarrows}}(C\oplus D)_{\bullet}\xrightarrow{coeq_{\bullet}}Q_{\bullet}\]

and we call \(coeq_{\bullet}\) the \(Z\)-merge chain map. We can bundle this up into a \(Z\)-merge code map:

\[\mathcal{F}_{\overline{Z}}=(coeq_{\bullet},\,coeq_{\bullet}^{\ast}):((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{\ast})\longrightarrow(Q_{\bullet},Q_{\bullet}^{\ast}) \tag{1}\]

We then call \(coeq_{\bullet}^{\ast}:Q_{\bullet}^{\ast}\to(C\oplus D)_{\bullet}^{\ast}\) an \(X\)-split chain map, and hence we have an \(X\)-split code map too:

\[\mathcal{F}_{\overline{X}}=(coeq_{\bullet},\,coeq_{\bullet}^{\ast}):(Q_{\bullet},Q_{\bullet}^{\ast})\longrightarrow((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{\ast}) \tag{2}\]

**Definition 5.7**.: _Let \((C_{\bullet},C_{\bullet}^{\ast})\), \((D_{\bullet},D_{\bullet}^{\ast})\) and \((A_{\bullet},A_{\bullet}^{\ast})\) be CSS codes, such that there is a span of chain complexes \(C_{\bullet}^{\ast}\xleftarrow{f_{\bullet}^{\ast}}A_{\bullet}^{\ast}\xrightarrow{g_{\bullet}^{\ast}}D_{\bullet}^{\ast}\). The \(X\)-type merged code of \((C_{\bullet},C_{\bullet}^{\ast})\) and \((D_{\bullet},D_{\bullet}^{\ast})\) along \(f_{\bullet}^{\ast},g_{\bullet}^{\ast}\) is the code \((Q_{\bullet},Q_{\bullet}^{\ast})\) such that \(Q_{\bullet}^{\ast}\) is the pushout of the above span._

We have an \(X\)-merge chain map and thus an \(X\)-merge code map using the coequaliser picture: \(\mathcal{E}_{\overline{X}}:((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{\ast})\to(Q_{\bullet},Q_{\bullet}^{\ast})\), whose underlying chain maps are the coequaliser \(coeq_{\bullet}:(C\oplus D)_{\bullet}^{\ast}\to Q_{\bullet}^{\ast}\) on the \(X\)-type complexes and its dual. We also have a \(Z\)-split chain map and the \(Z\)-split code map \(\mathcal{E}_{\overline{Z}}:(Q_{\bullet},Q_{\bullet}^{\ast})\to((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{\ast})\) by taking the opposite.

This is rather abstract, so let's see a small concrete example.

**Example 5.8**.: _Consider the following pushout of cell complexes:_

_We have not properly formalised pushouts of square lattices in the main body for brevity, but we do so in Appendix C. Informally, we are just 'gluing along' the graph in the top left corner, where the edges to be glued are coloured in blue._ _We can consider this pushout to be in \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\)1, giving the pushout:_

Footnote 1: Categorically, this is because there is a cocontinuous functor from the appropriate category of square lattices to \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\).
_with_

\[A_{\bullet}=\mathbb{F}_{2}\xrightarrow{\partial_{-1}^{A_{\bullet}}}\mathbb{F}_{2}^{2}\]
\[C_{\bullet}=\mathbb{F}_{2}\xrightarrow{\partial_{0}^{C_{\bullet}}}\mathbb{F}_{2}^{3}\xrightarrow{\partial_{-1}^{C_{\bullet}}}\mathbb{F}_{2}^{2}\]
\[D_{\bullet}=\mathbb{F}_{2}\xrightarrow{\partial_{0}^{D_{\bullet}}}\mathbb{F}_{2}^{3}\xrightarrow{\partial_{-1}^{D_{\bullet}}}\mathbb{F}_{2}^{2}\]

_and_

\[\partial_{-1}^{A_{\bullet}}=\begin{pmatrix}1\\ 1\end{pmatrix};\quad\partial_{0}^{C_{\bullet}}=\partial_{0}^{D_{\bullet}}=\begin{pmatrix}1\\ 1\\ 1\end{pmatrix};\quad\partial_{-1}^{C_{\bullet}}=\begin{pmatrix}1&1&0\\ 0&1&1\end{pmatrix};\quad\partial_{-1}^{D_{\bullet}}=\begin{pmatrix}1&0&1\\ 0&1&1\end{pmatrix}.\]

_One can see from the cell complexes that we have_

\[Q_{\bullet}=\mathbb{F}_{2}^{2}\xrightarrow{\partial_{0}^{Q_{\bullet}}}\mathbb{F}_{2}^{5}\xrightarrow{\partial_{-1}^{Q_{\bullet}}}\mathbb{F}_{2}^{2}\]

_with_

\[\partial_{0}^{Q_{\bullet}}=\begin{pmatrix}1&0\\ 1&1\\ 1&0\\ 0&1\\ 0&1\end{pmatrix};\quad\partial_{-1}^{Q_{\bullet}}=\begin{pmatrix}1&1&0&1&0\\ 0&1&1&0&1\end{pmatrix}\]

_Rather than compute the pushout maps, let us instead give the coequaliser \(\text{coeq}_{\bullet}:(C\oplus D)_{\bullet}\to Q_{\bullet}\), whose components \(\text{coeq}_{1},\text{coeq}_{0},\text{coeq}_{-1}\) make the ladder_

\[\begin{array}{ccccc}(C\oplus D)_{1}&\xrightarrow{\partial_{0}^{(C\oplus D)_{\bullet}}}&(C\oplus D)_{0}&\xrightarrow{\partial_{-1}^{(C\oplus D)_{\bullet}}}&(C\oplus D)_{-1}\\ \big\downarrow{\scriptstyle\text{coeq}_{1}}&&\big\downarrow{\scriptstyle\text{coeq}_{0}}&&\big\downarrow{\scriptstyle\text{coeq}_{-1}}\\ Q_{1}&\xrightarrow{\partial_{0}^{Q_{\bullet}}}&Q_{0}&\xrightarrow{\partial_{-1}^{Q_{\bullet}}}&Q_{-1}\end{array}\]

_commute. We immediately see that \(\text{coeq}_{1}=\operatorname{id}\). For the other two surjections we have_

\[\text{coeq}_{0}=\begin{pmatrix}1&0&0&0&0&0\\ 0&1&0&0&0&1\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\end{pmatrix};\quad\text{coeq}_{-1}=\begin{pmatrix}1&0&1&0\\ 0&1&0&1\end{pmatrix}\]

_Finally we interpret all the chain complexes in this pushout as being the \(Z\)-type complexes of CSS codes \((A_{\bullet},A_{\bullet}^{\ast})\), \((C_{\bullet},C_{\bullet}^{\ast})\) etc. Thus we have a \(Z\)-merge code map \(\mathcal{F}_{\overline{Z}}\), with an interpretation \(M\) as a \(\mathbb{C}\)-linear map, using \(\text{coeq}_{0}\) and \(\text{coeq}_{0}^{\intercal}\). We refrain from writing out the full \(32\)-by-\(64\) matrix, but as a ZX-diagram it has a simple presentation using gates from \(\{\operatorname{CNOT},\left|+\right\rangle,\left\langle 0\right|\}\). We know from Lemma 3.4 that this map must restrict to a map on logical qubits. However, easy calculations show that \(\dim H_{0}((C\oplus D)_{\bullet})=0\), while \(\dim H_{0}(Q_{\bullet})=1\). That is, in the code \(((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{\ast})\) there are no logical qubits - there are still operators which show up as errors and some which don't, but all of those which don't are products of \(Z\) or \(X\) stabiliser generators. By Corollary 4.13 and Corollary 4.17 the logical map in \(\operatorname{\mathsf{FHilb}}\) is then just \(\left|+\right\rangle\). This trivially preserves both \(\overline{Z}\) and \(\overline{X}\) operators, although its opposite code map \(\mathcal{F}_{\overline{X}}\) does not preserve \(\overline{Z}\) operators._

This example was very simple, but the idea extends in quite a general way. To give an idea of how general this notion of CSS code surgery is, consider the balanced product codes from [3, 38].
The balanced product of codes is by definition a coequaliser in \(\operatorname{\mathsf{Ch}}(\operatorname{\mathsf{Mat}}_{\mathbb{F}_{2}})\), and so we can convert it into a pushout using routine category theory. The coequaliser is \[(C\otimes A\otimes D)_{\bullet}\xrightarrow[g_{\bullet}]{f_{\bullet}}(C\otimes D)_{\bullet}\xrightarrow{\text{coeq}_{\bullet}}(C\otimes_{A}D)_{\bullet}\] where \(g_{\bullet}\) and \(f_{\bullet}\) represent left and right actions of \(A_{\bullet}\) respectively. Recall that we did not explicitly define the tensor product in the main body for brevity, but see Appendix B. Then to this coequaliser we can associate a pushout, \[\begin{array}{ccc}((C\otimes A\otimes D)\oplus(C\otimes D))_{\bullet}&\xrightarrow{(f_{\bullet}\mid\operatorname{id}_{\bullet})}&(C\otimes D)_{\bullet}\\ {\scriptstyle(g_{\bullet}\mid\operatorname{id}_{\bullet})}\big\downarrow&&\big\downarrow\\ (C\otimes D)_{\bullet}&\longrightarrow&(C\otimes_{A}D)_{\bullet}\end{array}\] where one can check that the universal property is the same in both cases. Thus we can think of a balanced product as a merge of tensor product codes, with the apex being two adjacent tensor product codes. As the maps in the span are evidently not monic, the merge is of a distinctly different sort from Example 5.8, and also the \(\overline{Z}\)- and \(\overline{X}\)-merges we will describe in Section 5.3. It would be convenient if we could guarantee some properties of pushouts in general; for example, if the pushout of LDPC codes was also LDPC, or if the homologies were always preserved. Unfortunately, the definition is general enough that neither of these is true. We discuss this in slightly greater detail in Appendix D, but the gist is that we need to stipulate some additional conditions to guarantee bounds on these quantities. 

### Surgery along a logical operator 

The procedure of merging here is closely related to that of 'welding' in [36]. Our focus is not just on the resultant codes, but the maps on physical and logical data. On codes generated from square lattices, the merges here will correspond to a pushout along a 'string' through the lattice. **Definition 5.9**.: _Let \(C_{\bullet}=C_{1}\xrightarrow{\partial_{0}}C_{0}\xrightarrow{\partial_{-1}}C_{-1}\) be a length 2 chain complex. Let \(v\in C_{0}\) be a vector such that \(v\in\ker(\partial_{-1}^{C_{\bullet}})\backslash\mathrm{im}(\partial_{0}^{C_{\bullet}})\). We now construct the logical operator subcomplex \(V_{\bullet}\). This has:_ \[\tilde{V}_{0}=\mathrm{supp}\ v;\quad\partial_{-1}^{V_{\bullet}}=\partial_{-1}^{C_{\bullet}}\restriction_{\mathrm{supp}\ v};\quad V_{-1}=\mathrm{im}(\partial_{-1}^{V_{\bullet}})\] _where \(\mathrm{supp}\ v\) is the set of basis vectors in the support of \(v\), and \(\partial_{i}\restriction_{S}\) is the restriction of a differential to a subset \(S\) of its domain. All other components and differentials of \(V_{\bullet}\) are zero._ There is a monic \(f_{\bullet}:V_{\bullet}\hookrightarrow C_{\bullet}\) given by the inclusion maps of \(V_{0}\subseteq C_{0}\) etc. 
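To make Definition 5.9 concrete, here is a minimal sketch (helper and variable names are ours) extracting the logical operator subcomplex from an \(X\)-check matrix. We use the Shor code with the operator \(Z_{1}\otimes Z_{4}\otimes Z_{7}\), anticipating Section 5.4.2 below, and assume the \(X\)-check generating set that reproduces the \(\partial_{-1}^{V_{\bullet}}\) given there:

```python
import numpy as np

def logical_operator_subcomplex(dm1, v):
    """Definition 5.9: restrict d_{-1}^C to supp(v).

    Returns a basis for V_0 (the support of v) and the restricted matrix,
    keeping only the checks that touch supp(v). For the examples in this
    paper those rows are independent, so this is d_{-1}^V with codomain
    V_{-1} = im(d_{-1}^V).
    """
    dm1 = np.array(dm1, dtype=np.uint8)
    support = [i for i, vi in enumerate(v) if vi]
    restricted = dm1[:, support]
    touching = [r for r in range(restricted.shape[0]) if restricted[r].any()]
    return support, restricted[touching]

# Shor code X-checks (one valid generating set) and the operator Z_1 Z_4 Z_7:
PX = np.array([[1, 1, 1, 1, 1, 1, 0, 0, 0],
               [1, 1, 1, 0, 0, 0, 1, 1, 1]], dtype=np.uint8)
v = [1, 0, 0, 1, 0, 0, 1, 0, 0]

support, dV = logical_operator_subcomplex(PX, v)
print(support)   # [0, 3, 6]  (qubits 1, 4, 7, counting from 1)
print(dV)        # [[1 1 0]
                 #  [1 0 1]]  -- the repetition code, cf. Section 5.4.2
```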
**Definition 5.10**.: _Let \(V_{\bullet}\) be a logical operator subcomplex of two chain complexes_ \[C_{\bullet}=C_{1}\xrightarrow{\partial_{0}}C_{0}\xrightarrow{\partial_{-1}}C_{-1}\] _and_ \[D_{\bullet}=D_{1}\xrightarrow{\partial_{0}}D_{0}\xrightarrow{\partial_{-1}}D_{-1}\] _simultaneously, so there is some vector \(v\in C_{0}\) and \(w\in D_{0}\) such that \(\tilde{V}_{0}=\mathrm{supp}\ v=\mathrm{supp}\ w\), \(\partial_{-1}^{V_{\bullet}}=\partial_{-1}^{C_{\bullet}}\restriction_{\mathrm{supp}\ v}=\partial_{-1}^{D_{\bullet}}\restriction_{\mathrm{supp}\ w}\) etc. Then there is a monic span_ \[C_{\bullet}\xleftarrow{f_{\bullet}}V_{\bullet}\xrightarrow{g_{\bullet}}D_{\bullet}.\] _This monic span has a pushout \(Q_{\bullet}\) with components_ \[Q_{1}=C_{1}\oplus D_{1};\quad Q_{0}=C_{0}\oplus D_{0}/v\sim w;\quad Q_{-1}=C_{-1}\oplus D_{-1}/(\mathrm{im}(\partial_{-1}^{C_{\bullet}}\restriction_{\mathrm{supp}\ v})\sim\mathrm{im}(\partial_{-1}^{D_{\bullet}}\restriction_{\mathrm{supp}\ w})).\] The construction here is inspired by [11]. **Definition 5.11**.: _Let \((C_{\bullet},C^{*}_{\bullet})\) and \((D_{\bullet},D^{*}_{\bullet})\) be CSS codes. Let \((V_{\bullet},V^{*}_{\bullet})\) be a CSS code such that \(V_{\bullet}\) is a logical operator subcomplex of \(C_{\bullet}\) and \(D_{\bullet}\); this means that \((V_{\bullet},V^{*}_{\bullet})\) can be seen as merely a classical code, as \(V_{1}=0\). Then the \(Z\)-type merged CSS code \((Q_{\bullet},Q^{*}_{\bullet})\) is called the \(\overline{Z}\)-merged code of \((C_{\bullet},C^{*}_{\bullet})\) and \((D_{\bullet},D^{*}_{\bullet})\) along \((V_{\bullet},V^{*}_{\bullet})\)._ We are particularly interested in based CSS codes, i.e. when the logical spaces have bases. **Definition 5.12**.: (Separation) _Let \(\tilde{H}_{0}(C_{\bullet})=\{[u]_{i}\}\), \(\tilde{H}_{0}(D_{\bullet})=\{[v]_{j}\}\) be the bases for the spaces of logical \(Z\) operators in \(C_{\bullet}\) and \(D_{\bullet}\) respectively. Let \(V_{\bullet}\) be a logical operator subcomplex such that for the inclusion maps \(f_{\bullet}\) and \(g_{\bullet}\), \(\operatorname{im}(f_{0})\) is the space spanned by the basis vectors in the support of some \(u\in[u]_{i}\) and \(\operatorname{im}(g_{0})\) is the space spanned by the basis vectors in the support of some \(v\in[v]_{j}\), i.e. the chosen \(\overline{Z}\) logical operator acts on a single logical qubit in each of the bases. Further, assume that all \(u^{\prime}\in\ker(\partial_{-1}^{C_{\bullet}})\backslash\operatorname{im}(\partial_{0}^{C_{\bullet}})\) such that \(\operatorname{supp}\,(u^{\prime})\subseteq\operatorname{supp}\,(u)\) obey \(u^{\prime}\in[u]\), and also all \(v^{\prime}\in\ker(\partial_{-1}^{D_{\bullet}})\backslash\operatorname{im}(\partial_{0}^{D_{\bullet}})\) such that \(\operatorname{supp}\,(v^{\prime})\subseteq\operatorname{supp}\,(v)\) obey \(v^{\prime}\in[v]\). Lastly, assume that for every vector \(s\in V_{0}\), \(f_{0}(s)\in\ker(\partial_{-1}^{C_{\bullet}})\backslash\operatorname{im}(\partial_{0}^{C_{\bullet}})\iff g_{0}(s)\in\ker(\partial_{-1}^{D_{\bullet}})\backslash\operatorname{im}(\partial_{0}^{D_{\bullet}})\). By the previous assumption every vector in \(\operatorname{im}(f_{0})\) which is a nontrivial logical operator must belong to the same equivalence class, and the same for \(\operatorname{im}(g_{0})\). Then we say that \(V_{\bullet}\) is a separated logical operator subcomplex. 
We also say that the corresponding \(\overline{Z}\)-merged code of \((C_{\bullet},C^{*}_{\bullet})\) and \((D_{\bullet},D^{*}_{\bullet})\) along \((V_{\bullet},V^{*}_{\bullet})\) is separated._ The intuition here, following [11], is that it is convenient when the logical operators we glue along do not themselves contain any nontrivial logical operators belonging to a different logical qubit; if they do, the gluing procedure may yield a more complicated output code, as we could be merging along multiple logical operators simultaneously. In Appendix E we demonstrate that it is possible for this condition to not be satisfied, using a patch of octagonal surface code. Additionally, we do not want the gluing procedure to send any logical \(\overline{Z}\) operators to stabilisers. We now prove some basic results. **Lemma 5.13**.: _Let \((Q_{\bullet},Q^{*}_{\bullet})\) be a separated \(\overline{Z}\)-merged code with metrics \(\llbracket n_{Q},k_{Q},d_{Q}\rrbracket\), and let \(\llbracket n_{C},k_{C},d_{C}\rrbracket\), \(\llbracket n_{D},k_{D},d_{D}\rrbracket\) be the metrics of \((C_{\bullet},C^{*}_{\bullet})\) and \((D_{\bullet},D^{*}_{\bullet})\) respectively. Let \(n_{V}=\dim V_{0}\) be the Hamming weight of \(u\) and \(v\). Then_ \[n_{Q}=n_{C}+n_{D}-n_{V};\quad k_{Q}=k_{C}+k_{D}-1\] _Further, let \(\{[u]_{i}\}\) and \(\{[v]_{j}\}\) be the bases for \(H_{0}(C_{\bullet})\) and \(H_{0}(D_{\bullet})\) respectively, and say w.l.o.g. that \(u\in[u]_{1}\) and \(v\in[v]_{1}\) are the vectors quotiented by the pushout. Then \(H_{0}(Q_{\bullet})\) has a basis \(\{[w]_{l}\}\) for \(l\leq k_{C}+k_{D}-1\), where \([w]_{1}=[u]_{1}=[v]_{1}\), \([w]_{l}=[u]_{l}\) when \(1<l\leq k_{C}\) and \([w]_{l}=[v]_{l-k_{C}+1}\) for \(k_{C}<l\leq k_{C}+k_{D}-1\)._ Proof.: \(n_{Q}\) is immediate by the definition. For \(k_{Q}\), we start off by considering the code \(((C_{\bullet}\oplus D_{\bullet}),(C_{\bullet}\oplus D_{\bullet})^{*})\). Given \(u\in[u]_{1}\) and \(v\in[v]_{1}\), any other representatives \(y\in[u]_{1}\), \(x\in[v]_{1}\) belong to the same equivalence class in \(H_{0}(Q_{\bullet})\), as \(y\sim u\sim v\sim x\). 2 Footnote 2: These equivalences are from different quotients: \(y\sim u\) and \(v\sim x\) are homology quotients, while \(u\sim v\) is a quotient from the pushout. Now, let \(s\in[u]_{i}\), \(i\neq 1\). Let \(z\in D_{0}\) be a vector such that \(z\neq v\) and \(\operatorname{supp}\,(z)\subset\operatorname{supp}\,(v)\). Then either \(s+z\in[u]_{i}\), if \(z\in\operatorname{im}(\partial_{0}^{D_{\bullet}})\), or \(s+z\notin Z_{0}(C_{\bullet}\oplus D_{\bullet})\) otherwise, by the separation property. If \(s+z\notin Z_{0}(C_{\bullet}\oplus D_{\bullet})\) then \(s+z\notin Z_{0}(Q_{\bullet})\), as the quotiented vectors \(u,v\) are both in \(Z_{0}(C_{\bullet}\oplus D_{\bullet})\). So for any \(s\in[u]_{i}\), \(i\neq 1\), there is no \([t]\in H_{0}(D_{\bullet})\) with representative \(t\) such that \(s\sim t\) in \(Q_{0}\). A similar argument applies the other way around. The quotient is surjective, so there are no additional homology classes in \(H_{0}(Q_{\bullet})\). Thus the bases \(\{[u]_{i}\}\) and \(\{[v]_{j}\}\) remain partitioned, with the sole exception of \([u]_{1}\) and \([v]_{1}\) as above. We would like to study not only the resultant code given some \(\overline{Z}\)-merge, but also the map on the logical space. We will now switch from pushouts to coequalisers. Recall the \(Z\)-merge code map \(\mathcal{F}_{\overline{Z}}\) from Equation 1. 
We call this a \(\overline{Z}\)-merge code map when the merge is along a \(\overline{Z}\)-operator as above, and from now on we assume that all merges are separated. **Lemma 5.14**.: _Let the \(\overline{Z}\)-merge code map \(\mathcal{F}_{\overline{Z}}\) of a separated \(\overline{Z}\)-merged code have its interpretation \(M\) as a \(\mathbb{C}\)-linear map. Then \(M\) acts as_ \[M=\begin{pmatrix}1&0&0&0\\ 0&0&0&1\end{pmatrix}\] _on each pair of qubits in \(((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{*})\) which are equivalent in \((Q_{\bullet},Q_{\bullet}^{*})\) and \(M\) acts as identity on all other qubits._ Proof.: \(M\) must have the following maps on Paulis on each pair of qubits being merged: \[Z\otimes I\mapsto Z;\quad I\otimes Z\mapsto Z;\quad X\otimes X\mapsto X\] which uniquely defines the matrix above. In other words we have \(\left|00\right\rangle\mapsto\left|0\right\rangle\), \(\left|11\right\rangle\mapsto\left|1\right\rangle\), \(\left|01\right\rangle\mapsto 0,\left|10\right\rangle\mapsto 0\) etc, which has a convenient presentation as a ZX diagram: a single \(Z\)-spider with two inputs and one output. **Lemma 5.15**.: _Let \((Q_{\bullet},Q_{\bullet}^{*})\) be a separated \(\overline{Z}\)-merged code of \((C_{\bullet},C_{\bullet}^{*})\) and \((D_{\bullet},D_{\bullet}^{*})\) along \((V_{\bullet},V_{\bullet}^{*})\). Call \(f=H_{0}(coeq_{\bullet})\). Then_ \[f([u]_{i}+[v]_{j})=[w]_{l}\] _where \([w]_{l}\) was defined in Lemma 5.13._ This is obvious by considering the surjection in question and using Lemma 5.13. It essentially says that on each pair of logical operators in \(((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{*})\) which are both being quotiented, \(\mathcal{F}_{\overline{Z}}\) acts as: \[\overline{Z}\otimes\overline{I}\mapsto\overline{Z};\quad\overline{I}\otimes\overline{Z}\mapsto\overline{Z};\quad\overline{X}\otimes\overline{X}\mapsto\overline{X}\] where the map on \(X\)s is inferred from the dual. **Lemma 5.16**.: \[d_{Q}^{X}\geq\min(d_{C}^{X},d_{D}^{X})\] Proof.: By considering the code map \(\mathcal{F}_{\overline{Z}}\), we see that any \(X\) logical operator in \((Q_{\bullet},Q_{\bullet}^{*})\) has a preimage which is also an \(\overline{X}\) logical operator in \(((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{*})\). This is because \(f^{*}\), the dual part of \(\mathcal{F}_{\overline{Z}}\), restricts to \(H_{0}(f^{*})\), and so sends any nontrivial class of \(\overline{X}\) operators in \((Q_{\bullet},Q_{\bullet}^{*})\) to a nontrivial class in \(((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{*})\). Each qubit of the merged code comes from at most one qubit of \(C_{\bullet}\) and at most one of \(D_{\bullet}\), so restricting the image of an \(\overline{X}\) operator under \(f^{*}\) to either summand does not increase its weight; at least one of the two restrictions is a nontrivial \(\overline{X}\) logical operator of its summand, and hence any \(\overline{X}\) logical operator of \((Q_{\bullet},Q_{\bullet}^{*})\) has weight at least \(\min(d_{C}^{X},d_{D}^{X})\). **Remark 5.17**.: _Note that we do not in general have a lower bound on \(d_{Q}^{Z}\) in terms of \(d_{C}^{Z}\) and \(d_{D}^{Z}\). We can see this from the discussion in Section 4.3. Given the code map \(\mathcal{F}_{\overline{Z}}\), the chain map \(f_{0}:(C\oplus D)_{0}\to Q_{0}\) restricts to \(H_{0}(f)\), but this does not preclude there being other vectors in \((C\oplus D)_{0}\backslash\ker\partial_{-1}^{(C\oplus D)_{\bullet}}\) which are mapped into one of the equivalence classes in \(H_{0}(Q_{\bullet})\). In computational terms, while we cannot have detectable \(X\) operators in the initial codes which are mapped to logicals by the code map \(\mathcal{F}_{\overline{Z}}\), this is unfortunately possible with detectable \(Z\) operators. We illustrate this with an example in Appendix F._ We now show that, if we consider two codes to be merged as instances of LDPC families, their combined \(\overline{Z}\)-merged code is also LDPC. Recall Definition 4.3. 
**Lemma 5.18**.: (LDPC) _Say our input codes \((C_{\bullet},C_{\bullet}^{*})\), \((D_{\bullet},D_{\bullet}^{*})\) have maximal weights of generators labelled \(w_{C}^{Z}\), \(w_{C}^{X}\) and \(w_{D}^{Z}\), \(w_{D}^{X}\) respectively. Let \((Q_{\bullet},Q_{\bullet}^{*})\) be a separated \(\overline{Z}\)-merged code of \((C_{\bullet},C_{\bullet}^{*})\) and \((D_{\bullet},D_{\bullet}^{*})\) along \((V_{\bullet},V_{\bullet}^{*})\). Then_ \[w_{Q}^{Z}=\max(w_{C}^{Z},w_{D}^{Z});\quad w_{Q}^{X}<w_{C}^{X}+w_{D}^{X}.\] _Similarly, letting the input codes have maximal number of shared generators on a single qubit \(q_{C}^{Z}\), \(q_{C}^{X}\) and \(q_{D}^{Z}\), \(q_{D}^{X}\) we have_ \[q_{Q}^{Z}\leq q_{C}^{Z}+q_{D}^{Z};\quad q_{Q}^{X}=\max(q_{C}^{X},q_{D}^{X})\] Proof.: None of the \(Z\)-type generators are quotiented by a \(\overline{Z}\)-merge map, so \(w_{Q}^{Z}=w_{(C\oplus D)}^{Z}=\max(w_{C}^{Z},w_{D}^{Z})\). For the \(X\)-type generators, in the worst case the two generators which are made to be equivalent by the merge are the highest weight ones. For these generators to appear in \(V_{-1}\) they must each have at least two qubits of their support in \(V_{0}\), and those qubits are merged together, so \(w_{Q}^{X}<w_{C}^{X}+w_{D}^{X}\). Next, using again the fact that none of the \(Z\)-type generators are quotiented, a single qubit could in the worst case be the result of merging two qubits in \((C_{\bullet},C_{\bullet}^{*})\) and \((D_{\bullet},D_{\bullet}^{*})\) which each have the maximal number of shared \(Z\)-type generators, so \(q_{Q}^{Z}\leq q_{C}^{Z}+q_{D}^{Z}\). For the \(X\) case, if a qubit is in \(V_{0}\) then all \(X\)-type generators it is in the support of must appear in \(V_{-1}\). Therefore, when any two qubits are merged all of their \(X\)-type generators are also merged. Thus \(q_{Q}^{X}=q_{(C\oplus D)}^{X}=\max(q_{C}^{X},q_{D}^{X})\). Note that as \(w^{Z}\), \(w^{X}\) and \(q^{Z}\), \(q^{X}\) are at worst additive in those of the input codes, the \(\overline{Z}\)-merge of two LDPC codes is still LDPC, assuming the pushout is still well-defined using matching \(\overline{Z}\) operators for each member of the code families. Next, we dualise everything, and talk about \(\overline{X}\)-merges. **Definition 5.19**.: _Let \((C_{\bullet},C_{\bullet}^{*})\) and \((D_{\bullet},D_{\bullet}^{*})\) be CSS codes. Let \((V_{\bullet},V_{\bullet}^{*})\) be a CSS code such that \(V_{\bullet}^{*}\) is a logical operator subcomplex of \(C_{\bullet}^{*}\) and \(D_{\bullet}^{*}\), and \(Q_{\bullet}^{*}\) is the merged complex along \(V_{\bullet}^{*}\). Then the CSS code \((Q_{\bullet},Q_{\bullet}^{*})\) is called the \(\overline{X}\)-merged code of \((C_{\bullet},C_{\bullet}^{*})\) and \((D_{\bullet},D_{\bullet}^{*})\) along \((V_{\bullet},V_{\bullet}^{*})\)._ In this case we glue along an \(\overline{X}\) logical operator instead. The notion of separation, Lemma 5.13 and Lemma 5.18 carry over by transposing appropriately. An \(\overline{X}\)-merge map \(\mathcal{E}_{\overline{X}}\) can be defined similarly, and a result similar to Lemma 5.14 applies to separated \(\overline{X}\)-merged codes. **Lemma 5.20**.: _Let the \(\overline{X}\)-merge code map of a separated \(\overline{X}\)-merged code have its interpretation \(L\) as a \(\mathbb{C}\)-linear map. 
Then \(L\) acts as_ \[L=\frac{1}{\sqrt{2}}\begin{pmatrix}1&0&0&1\\ 0&1&1&0\end{pmatrix}\] _on each pair of qubits in \(((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{*})\) which are equivalent in \((Q_{\bullet},Q_{\bullet}^{*})\), i.e. \(|++\rangle\mapsto|+\rangle\), \(|--\rangle\mapsto|-\rangle\), and \(L\) acts as identity on all other qubits._ Proof.: This time, \(L\) must have the maps \[X\otimes I\mapsto X;\quad I\otimes X\mapsto X;\quad Z\otimes Z\mapsto Z\] Similarly, the maps on logical operators are \[\overline{X}\otimes\overline{I}\mapsto\overline{X};\quad\overline{I}\otimes\overline{X}\mapsto\overline{X};\quad\overline{Z}\otimes\overline{Z}\mapsto\overline{Z}\] Having discussed \(\overline{Z}\)- and \(\overline{X}\)-merged codes, we briefly mention splits. These are just the opposite code maps to \(\mathcal{F}_{\overline{Z}}\) and \(\mathcal{E}_{\overline{X}}\). In both cases, all the mappings are determined entirely by Lemma 5.13 by taking transposes or adjoints when appropriate. **Remark 5.21**.: _In practice, when the CSS codes in question hold multiple logical qubits it may be preferable to merge/split along multiple disjoint \(\overline{Z}\) or \(\overline{X}\) operators at the same time. Such a protocol is entirely viable within our framework, and requires only minor tweaks to the above results. The same is true should one wish to merge/split along operators within the same code._ We now look at a short series of examples. 

### Examples of surgery 

#### 5.4.1 Lattice surgery 

Lattice surgery is the prototypical instance of CSS code surgery. It starts with patches of surface code and then employs separated splits and merges to perform non-unitary logical operations [25]. The presentation we give of lattice surgery is quite idiosyncratic, in the sense that we perform the merges on physical edges/qubits, whereas the standard method is to introduce additional edges between patches to join them together. We remedy this in Section 6. Consider the pushout of cell complexes below: As before, we informally consider this to be 'gluing along' the graph in the top left, but for completeness it is formalised in Appendix C. By considering the pushout to be in \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\), we have: Letting \(coeq_{\bullet}:(C\oplus D)_{\bullet}\to Q_{\bullet}\) be the relevant coequaliser map, we see that \(\mathcal{F}_{\overline{Z}}=(coeq_{\bullet},coeq_{\bullet}^{\ast})\) constitutes a separated \(\overline{Z}\)-merge map. In particular, observe that \(\mathcal{F}_{\overline{Z}}\) sends the logical operators: \[\overline{Z}\otimes\overline{I}\mapsto\overline{Z};\quad\overline{I}\otimes\overline{Z}\mapsto\overline{Z};\quad\overline{X}\otimes\overline{X}\mapsto\overline{X}\] as predicted by Lemma 5.15. The first two give \(H_{0}(coeq_{\bullet})=\begin{pmatrix}1&1\end{pmatrix}\) and the last \(H_{0}(coeq_{\bullet}^{\ast})=\begin{pmatrix}1\\ 1\end{pmatrix}\). \(\mathcal{F}_{\overline{Z}}\) is evidently \(\overline{Z}\)-preserving but not \(\overline{X}\)-preserving, as \(\overline{X}\otimes\overline{I}\) is taken to an operation which is detected by the \(Z\) stabilisers. Observe that we end up with a greater cosystolic distance of \((Q_{\bullet},Q_{\bullet}^{\ast})\) than we started with in \(((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{\ast})\). 
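As a quick sanity check, the single-pair matrices from Lemma 5.14 and Lemma 5.20 can be verified against their defining Pauli intertwining conditions numerically; a minimal sketch (assuming numpy, with names of our own choosing):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

M = np.array([[1, 0, 0, 0],
              [0, 0, 0, 1]])                 # Lemma 5.14: Z-merge on one pair
L = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0]]) / np.sqrt(2)    # Lemma 5.20: X-merge on one pair

# Z-merge conditions: Z(x)I -> Z, I(x)Z -> Z, X(x)X -> X.
for P, Q in [(np.kron(Z, I2), Z), (np.kron(I2, Z), Z), (np.kron(X, X), X)]:
    assert np.allclose(M @ P, Q @ M)

# X-merge conditions: X(x)I -> X, I(x)X -> X, Z(x)Z -> Z.
for P, Q in [(np.kron(X, I2), X), (np.kron(I2, X), X), (np.kron(Z, Z), Z)]:
    assert np.allclose(L @ P, Q @ L)

print("Pauli intertwining conditions hold for both merge maps")
```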
If we instead consider the pair \((coeq_{\bullet},coeq_{\bullet}^{\ast})\) as an \(\overline{X}\)-preserving code map \(\mathcal{F}_{\overline{X}}\), then it is a separated \(\overline{X}\)-split map. In terms of cell complexes we would have 3 Footnote 3: Pedantically, this is a morphism in the opposite category of cell complexes \(\mathtt{OACC}^{\text{op}}\). We similarly have a separated \(\overline{X}\)-merge map and separated \(\overline{Z}\)-split map with the obvious forms by dualising appropriately. **Remark 5.22**.: _While it is convenient to choose logical operators along patch boundaries to glue along, so that the complexes can all be embedded on the 2D plane, this is not necessary. One could intersect two patches along any matching operator._ Recall the toric code \((C_{\bullet},C_{\bullet}^{\ast})\) from Example 4.8. We can merge two copies of \(C_{\bullet}\) along a logical \(\overline{Z}\) operator, which corresponds to an essential cycle of each torus. The resultant code will then look like two tori intersecting, depending somewhat on the choices of essential cycle: The \(\overline{Z}\)-merge map on logical qubits will be the same as for patches. 

#### 5.4.2 Shor code surgery 

Of course, the pushout we take does not have to come from square lattices. Let \(C_{\bullet}\) and \(D_{\bullet}\) be two copies of the Shor code from Example 4.7. 4 We can perform separated merges between them. We give two examples. First, for a \(\overline{Z}\)-merge, we take the logical \(\overline{Z}\) operator \(\overline{Z}=\bigotimes_{i=1}^{9}Z_{i}\) and apply Definition 5.9 to get the logical operator subcomplex: Footnote 4: The Shor code can be constructed as a cellulation of the projective plane, so it is actually not wholly dissimilar from the lattice codes [21]. with \(V_{0}=\mathbb{F}_{2}^{9}\), \(V_{-1}=\mathbb{F}_{2}^{2}\), and all other components zero. This is just \(C_{\bullet}\) from Example 4.7 truncated to be length \(1\), as this logical \(\overline{Z}\) operator has support on all physical qubits. The monic chain map \(f_{\bullet}\) given by inclusion into the Shor code is just and the same for \(g_{\bullet}\). The pushout of will then be where \(\partial_{-1}^{Q_{\bullet}}=P_{X}\) and \(\partial_{0}^{Q_{\bullet}}=\left(P_{Z}^{\mathrm{T}}|P_{Z}^{\mathrm{T}}\right)\). The map on logical data is fully determined by Lemma 5.15. We have ended up with virtually the same code as the Shor code, except that we have a duplicate for every \(Z\)-type generator, i.e. every measurement of \(Z\) stabilisers is performed twice and the result noted separately. While this example is very simple, it highlights that the result of a merge can have somewhat subtle features, such as duplicating measurements, which the two input codes do not. 5 Footnote 5: One can fix this categorically by specifying additional conditions on \(V_{\bullet}\) if desired. For our second case, we use a different (but equivalent) logical operator, \(\overline{Z}=Z_{1}\otimes Z_{4}\otimes Z_{7}\). We still glue two copies of the Shor code, but now we have \(V_{0}=\mathbb{F}_{2}^{3}\), \(V_{-1}=\mathbb{F}_{2}^{2}\) and \(\partial_{-1}^{V_{\bullet}}=\begin{pmatrix}1&1&0\\ 1&0&1\end{pmatrix}\). That is, our logical operator subcomplex is just the repetition code from Example 4.1. We then have where \[f_{0}=\begin{pmatrix}1&0&0\\ 0&0&0\\ 0&0&0\\ 0&1&0\\ 0&0&0\\ 0&0&0\\ 0&0&1\\ 0&0&0\\ 0&0&0\end{pmatrix}\] and the same for \(g_{0}\), forming again a monic span of chain complexes. 
The resultant \(\overline{Z}\)-merged code is then \[Q_{\bullet}=\cdots\xrightarrow{}\mathbb{F}_{2}^{12}\xrightarrow{\partial_{0} ^{Q_{\bullet}}}\mathbb{F}_{2}^{15}\xrightarrow{\partial_{-1}^{Q_{\bullet}}} \mathbb{F}_{2}^{2}\xrightarrow{}\cdots\] and the large matrices \(\partial_{0}^{Q_{\bullet}}\) and \(\partial_{-1}^{Q_{\bullet}}\) are easily obtained by quotienting out rows and columns from \(\partial_{0}^{C_{\bullet}}\oplus\partial_{0}^{D_{\bullet}}\) and \(\partial_{-1}^{C_{\bullet}}\oplus\partial_{-1}^{D_{\bullet}}\). **Remark 5.23**.: _We do not expound on this example, but the protocols for performing \(\overline{X}\) and \(\overline{Z}\) measurements by generalised lattice surgery in [11] can be seen as using separated \(\overline{Z}\)- and \(\overline{X}\)-merged codes, with the caveat that they don't perform the merge maps; instead they initialise fresh qubits in the ancillary hypergraph code and measure all stabiliser generators. The present work has overlap with their protocols, but we do not subsume them; for example their \(\overline{X}\otimes\overline{Z}\) and \(\overline{Y}\) measurement methods are outside of our formalism, as they lead to non-CSS codes._ ## 6 Fault-tolerant logical operations We now describe how our abstract formalism leads to a general set of fault-tolerant logical operations for CSS codes. We consider this to be a good application of the homological algebraic formalism, as we suspect these logical operations would be challenging to derive without the machinery of \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\). 6 So far in our description of code maps there are two main assumptions baked in: that one can perform linear maps between CSS codes (a) deterministically and (b) fault-tolerantly, both of which are desired for performing quantum computation. Footnote 6: An alternative approach could be to use Tanner graphs. For assumption (a), we can only implement code maps which are interpreted as an isometry deterministically. If they are not, instead we must perform measurements on physical qubits. Recall from Proposition 4.12 that every code map has an interpretation constructed from CNOTs and some additional states and effects taken from \(\left\{\left|+\right\rangle,\left\langle 0\right|\right\}\) for a \(\overline{Z}\)-preserving code map or \(\left\{\left\langle+\right|,\left|0\right\rangle\right\}\) for an \(\overline{X}\)-preserving code map. This means that in order to implement the code map non-deterministically, one need only apply CNOTs and measure some qubits in the \(Z\)-basis (for a \(\overline{Z}\)-preserving code map) or the \(X\)-basis (\(\overline{X}\)-preserving code map). Of course, should we acquire the undesired measurement result, we induce errors in our code map. There is no protocol for correcting these errors in all generality. For assumption (b), there is no protocol for performing arbitrary CNOT circuits on physical qubits in a code fault-tolerantly. However, when performing CSS code surgery which is a separated \(\overline{Z}\)- or \(\overline{X}\)-merge, we have a protocol which addresses both (a) and (b). ### Procedure summary Our procedure for performing a fault-tolerant \(\overline{Z}\otimes\overline{Z}\) measurement is as follows: 1. Find a matching \(\overline{Z}\) logical operator which belongs to both initial codes, in the sense of Definition 5.9. 2. Verify that this logical operator satisfies the separation property of Definition 5.12 in both codes. 3. 
Verify that the \(\overline{Z}\) logical operator is gauge-fixable in those codes, in the sense of Definition 6.1 below. 4. Verify that the merge is bounded below, in the sense of Definition 6.7 below. 5. Perform the merge as described in Proposition 6.9. We do not know how difficult it will be in general to perform the verification in steps (2), (3) and (4) for codes (or families of codes) of interest. 

### Full description of procedure 

The first additional technical condition involves gauge fixing. For reasons of brevity we do not describe the connection between lattice surgery and gauge fixing, but refer the interested reader to [41]. Briefly, we will consider the whole system to be a subsystem code, and fix the gauges of the \(\overline{Z}\) operators we are gluing along. **Definition 6.1**.: _Let \(C_{\bullet}\) be a chain complex and \(u\) be a representative of the equivalence class \([u]\in H_{0}(C_{\bullet})\), which is a basis vector for \(H_{0}(C_{\bullet})\). Let \(x\) be a vector in \(C_{0}\) such that \(\left|x\right|=1\) and \(x\cdot u=1\). We say that \(x\) is a qubit in the support of \(u\). Recall from Lemma 4.4 that \(u\) has a unique paired basis vector \([v]\in H_{0}(C_{\bullet}^{*})\) such that \([u]\cdot[v]=1\). It is possible to safely correct a qubit \(x\) when there is a vector \(v\in[v]\) such that \(x\cdot v=1\) and \(y\cdot v=0\) for all other qubits \(y\) in the support of \(u\). We say that \(u\) is gauge-fixable when it is possible to safely correct all qubits in the support of \(u\)._ **Example 6.2**.: _Consider the Shor code from Example 4.7 and Section 5.4.2. The \(\overline{Z}\) operator_ \[v=\begin{pmatrix}1&1&1&1&1&1&1&1&1\end{pmatrix}^{\mathsf{T}}\] _has qubits in its support for which it is not possible to safely correct, as there are only 4 representatives of the nonzero equivalence class \([w]\in H_{0}(C_{\bullet}^{*})\) but 9 qubits for which being able to safely correct is necessary. However, it is possible to safely correct all qubits in the support of the \(\overline{Z}\) operator_ \[u=\begin{pmatrix}1&0&0&1&0&0&1&0&0\end{pmatrix}^{\mathsf{T}},\] _where \(u\in[v]\), with the fixing operators:_ \[\begin{pmatrix}1&1&1&0&0&0&0&0&0\end{pmatrix}^{\mathsf{T}};\quad\begin{pmatrix}0&0&0&1&1&1&0&0&0\end{pmatrix}^{\mathsf{T}};\quad\begin{pmatrix}0&0&0&0&0&0&1&1&1\end{pmatrix}^{\mathsf{T}}\] The same definition of gauge-fixability applies if we exchange \(X\) and \(Z\) appropriately. Next we will require the tensor product of chain complexes, for which see Appendix B. **Definition 6.3**.: _Let \(V_{\bullet}=V_{0}\xrightarrow{\partial_{-1}^{V_{\bullet}}}V_{-1}\) and \(P_{\bullet}=P_{1}\xrightarrow{\partial_{0}^{P_{\bullet}}}P_{0}\) be length 1 chain complexes, where \(P_{1}=\mathbb{F}_{2}\), \(P_{0}=\mathbb{F}_{2}^{2}\) and \(\partial_{0}^{P_{\bullet}}=\begin{pmatrix}1\\ 1\end{pmatrix}\). Then we can make the tensor product chain complex \(W_{\bullet}=(P\otimes V)_{\bullet}\). 
Explicitly,_ \[W_{\bullet}=W_{1}\xrightarrow{\partial_{0}^{W_{\bullet}}}W_{0}\xrightarrow{\partial_{-1}^{W_{\bullet}}}W_{-1}\] _with_ \[W_{1}=P_{1}\otimes V_{0}=V_{0};\quad W_{0}=(P_{0}\otimes V_{0})\oplus(P_{1}\otimes V_{-1})=(\mathbb{F}_{2}^{2}\otimes V_{0})\oplus V_{-1};\quad W_{-1}=P_{0}\otimes V_{-1}=\mathbb{F}_{2}^{2}\otimes V_{-1}\] _Also, \(\partial_{0}^{W_{\bullet}}=\begin{pmatrix}\operatorname{id}_{V_{0}}\\ \operatorname{id}_{V_{0}}\\ \partial_{-1}^{V_{\bullet}}\end{pmatrix}\) and \(\partial_{-1}^{W_{\bullet}}=\begin{pmatrix}\operatorname{id}_{\mathbb{F}_{2}^{2}}\otimes\partial_{-1}^{V_{\bullet}}&\partial_{0}^{P_{\bullet}}\otimes\operatorname{id}_{V_{-1}}\end{pmatrix}=\begin{pmatrix}\partial_{-1}^{V_{\bullet}}&0&\operatorname{id}_{V_{-1}}\\ 0&\partial_{-1}^{V_{\bullet}}&\operatorname{id}_{V_{-1}}\end{pmatrix}\)._ In the case where \(V_{\bullet}\) is a string along a patch of surface code, say of the form: then \(W_{\bullet}\) will be of the form as a square lattice, see Definition C.8. We can see this as the 'intermediate section' used to perform lattice surgery. **Lemma 6.4**.: _Let \(V_{\bullet}\) be a \(\overline{Z}\) logical operator subcomplex of a chain complex \(C_{\bullet}\), and let \(V_{\bullet}\) satisfy the separation property from Definition 5.12. Then_ \[w_{W}^{X}=w_{C}^{X}+1;\quad w_{W}^{Z}=q_{C}^{X}+2;\quad q_{W}^{X}=\max(q_{C}^{X},2);\quad q_{W}^{Z}=\max(w_{C}^{X},1)\] _and \(\dim H_{0}(W_{\bullet})=\dim H_{0}(V_{\bullet})\geq 1\)._ Proof.: Observe that \(\partial_{-1}^{V_{\bullet}}\) has maximum row weight \(w_{C}^{X}\) and column weight \(q_{C}^{X}\). Then inspect the matrices \(\partial_{0}^{W_{\bullet}}\) and \(\partial_{-1}^{W_{\bullet}}\) from Definition 6.3. For \(\dim H_{0}(W_{\bullet})\), we use the Kunneth formula, for which see Lemma B.3, which in this case says \(H_{0}((P\otimes V)_{\bullet})=(H_{0}(P_{\bullet})\otimes H_{0}(V_{\bullet}))\oplus(H_{1}(P_{\bullet})\otimes H_{-1}(V_{\bullet}))\). We then have \[\dim H_{0}(P_{\bullet})=1;\quad\dim H_{1}(P_{\bullet})=0;\quad\dim H_{0}(V_{\bullet})\geq 1;\quad\dim H_{-1}(V_{\bullet})=0\] where the last comes from the fact that \(V_{-1}=\operatorname{im}(\partial_{-1}^{V_{\bullet}})\), using Definition 5.9. Note \(\dim H_{0}(V_{\bullet})\geq 1\) as there is always at least one nonzero vector which is mapped to zero by construction, and \(B_{0}(V_{\bullet})=0\). \(\dim H_{0}(V_{\bullet})\) may be greater than \(1\), as there may be other nonzero vectors in \(Z_{0}(V_{\bullet})\) which previously corresponded to products of \(Z\) stabilisers in \(C_{\bullet}\) and \(D_{\bullet}\) before taking the logical operator subcomplex. Thus \(\dim H_{0}(W_{\bullet})=\dim H_{0}(V_{\bullet})\geq 1\). **Definition 6.5**.: _Let \(V_{\bullet}\) be a simultaneous \(\overline{Z}\) logical operator subcomplex of both \(C_{\bullet}\) and \(D_{\bullet}\), satisfying the separation property. Then define the 'sandwiched code' \((T_{\bullet},T_{\bullet}^{*})\), with \(T_{\bullet}\) as the pushout of a pushout:_ _where the middle term is \(W_{\bullet}=(P\otimes V)_{\bullet}\) from Definition 6.3 above, and the two inclusion maps \(V_{\bullet}\hookrightarrow W_{\bullet}\) map \(V_{0}\) into each of the copies of \(V_{0}\) in \(W_{0}\), and the same for \(V_{-1}\)._ Colloquially, we are gluing first one side of the code \(W_{\bullet}\) to \(C_{\bullet}\), and then the other side to \(D_{\bullet}\). 7 Footnote 7: We could equally do it the other way, in which case the two pushouts would be flipped, but this does not change \(T_{\bullet}\). 
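The construction in Definition 6.3 is easy to mechanise. Below is a minimal sketch (helper name ours) that assembles \(\partial_{0}^{W_{\bullet}}\) and \(\partial_{-1}^{W_{\bullet}}\) from a given \(\partial_{-1}^{V_{\bullet}}\), then checks the chain condition and the generator weights of Lemma 6.4, using the repetition-code \(V_{\bullet}\) from the Shor example of Section 5.4.2:

```python
import numpy as np

def sandwich_complex(dV):
    """Definition 6.3: build d_0^W and d_{-1}^W of W = (P (x) V)
    from d_{-1}^V, where P is F_2 --(1 1)^T--> F_2^2."""
    dV = np.array(dV, dtype=np.uint8)
    r, m = dV.shape                     # r X-checks, m qubits on the operator
    Im = np.eye(m, dtype=np.uint8)
    Ir = np.eye(r, dtype=np.uint8)
    Zrm = np.zeros((r, m), dtype=np.uint8)
    d0 = np.vstack([Im, Im, dV])                        # (2m + r) x m
    dm1 = np.block([[dV, Zrm, Ir], [Zrm, dV, Ir]])      # 2r x (2m + r)
    return d0, dm1

dV = np.array([[1, 1, 0], [1, 0, 1]], dtype=np.uint8)   # Section 5.4.2
d0, dm1 = sandwich_complex(dV)

assert not ((dm1 @ d0) % 2).any()   # W is a chain complex over F_2
print(dm1.sum(axis=1))   # X-generator weights: (restricted check weight) + 1
print(d0.sum(axis=0))    # Z-generator weights: 2 + (column weight of dV),
                         # i.e. at most q_C^X + 2, as in Lemma 6.4
```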
**Lemma 6.6**.: _The 'sandwiched code' \((T_{\bullet},T_{\bullet}^{*})\) has_ \[n_{T}=n_{C}+n_{D}+r;\quad k_{T}=k_{C}+k_{D}-1;\quad d_{T}^{X}\geq\min(d_{C}^{X},d_{D}^{X})\] _where \(r=\dim V_{-1}\) is the number of \(X\)-type generators supported on the glued operator, and_ \[w_{T}^{X}\leq w_{C\oplus D}^{X}+1;\quad w_{T}^{Z}\leq\max(w_{C\oplus D}^{Z},q_{C\oplus D}^{X}+2);\quad q_{T}^{Z}\leq q_{C\oplus D}^{Z}+w_{C\oplus D}^{X};\quad q_{T}^{X}=\max(q_{C\oplus D}^{X},2)\] Proof.: For \(n_{T}\), just apply Lemma 5.13 twice. For \(k_{T}\), use Lemma 6.4 but observe that all elements of \(H_{0}(W_{\bullet})\) are merged with \([0]\in H_{0}(C_{\bullet})\) and \([0]\in H_{0}(D_{\bullet})\), and thus do not contribute to \(k_{T}\), apart from the single one corresponding to a logical operator in \(H_{0}(C_{\bullet})\) and \(H_{0}(D_{\bullet})\), by the separation property. Bearing this in mind one can also apply Lemma 5.13 twice. For \(d_{T}^{X}\), we first show that the intermediate code \((R_{\bullet},R_{\bullet}^{*})\), obtained from the first pushout by gluing \(W_{\bullet}\) to \(C_{\bullet}\) alone, has \(d_{R}^{X}\geq d_{C}^{X}\). Every \(\overline{X}\) operator in \((W_{\bullet},W_{\bullet}^{*})\) which is not sent to a stabiliser by the first pushout must anticommute with the \(\overline{Z}\) operator used to construct \(V_{\bullet}\), and thus must have support on those qubits. In addition, it must have a matched \(\overline{X}\) operator in \((C_{\bullet},C_{\bullet}^{*})\), which also has support on those qubits. As the only other \(\overline{X}\) operators in \((R_{\bullet},R_{\bullet}^{*})\) are those in \((C_{\bullet},C_{\bullet}^{*})\) which are unaffected by the merge, having no support on the qubits being merged, \(d_{R}^{X}\geq d_{C}^{X}\). Then \(d_{T}^{X}\geq\min(d_{C}^{X},d_{D}^{X})\) using Lemma 5.16. For \(w_{T}^{X}\), the pushouts will glue each \(X\) type stabiliser generator in \(W_{\bullet}\) into those in \(C_{\bullet}\) and \(D_{\bullet}\) in such a way that they will have exactly one extra qubit in the support, by the product construction of \(W_{\bullet}\); we can see this from \(\partial_{-1}^{W_{\bullet}}\) in Definition 6.3, as there is exactly a single \(1\) which is not part of the \(\partial_{-1}^{V_{\bullet}}\) in any given row of the matrix. For \(w_{T}^{Z},q_{T}^{Z}\) and \(q_{T}^{X}\) we just use Lemma 6.4 and apply Lemma 5.18 twice. The intuition here is that rather than gluing two codes \((C_{\bullet},C_{\bullet}^{*})\) and \((D_{\bullet},D_{\bullet}^{*})\) together directly along a logical operator, we have made a low distance hypergraph code \((W_{\bullet},W_{\bullet}^{*})\) and used that to sandwich the codes. A consequence of the above lemma is that this 'sandwiching' procedure maps LDPC codes to LDPC codes. Importantly, under suitable conditions the two pushouts let us perform a code map on logical qubits fault-tolerantly. **Definition 6.7**.: _Let \((T_{\bullet},T_{\bullet}^{*})\) have no logical \(\overline{Z}\) operators with weight lower than \(d_{C\oplus D}\). Then we say that the merged code has distance bounded below._ **Remark 6.8**.: _Note that the only \(Z\) operators which can lower the distance are those with support on the logical \(\overline{Z}\) which is used to construct \(V_{\bullet}\), as all others will be unchanged by the quotient. The condition for a merge to have distance bounded below is quite a tricky one, as we do not know of a way to check this easily. 
Because of Lemma 6.6, this problem is isolated to \(\overline{Z}\) operators, as the distance is guaranteed to be bounded below for \(\overline{X}\) operators._ **Proposition 6.9**.: _Let \((C_{\bullet},C_{\bullet}^{*})\) and \((D_{\bullet},D_{\bullet}^{*})\) be CSS codes which share a separated gauge-fixable \(\overline{Z}\) operator on \(m\) physical qubits and \(r\) \(X\)-type stabiliser generators each; let the relevant logical qubits be \(i\) and \(j\), and let \(V_{\bullet}\) be the logical operator subcomplex of \(C_{\bullet}\) and \(D_{\bullet}\) such that the codes admit a separated \(\overline{Z}\)-merge. Further, let \(d\) be the code distance of \(((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{*})\), and let the merged code \((T_{\bullet},T_{\bullet}^{*})\) have distance bounded below. Then there is a fault-tolerant procedure with distance \(d\) for implementing a \(\overline{Z}\otimes\overline{Z}\) measurement on the pair \(i\), \(j\) of logical qubits, which gives the code \((T_{\bullet},T_{\bullet}^{*})\). This procedure requires \(r\) auxiliary clean qubits and an additional \(m\) \(Z\)-type stabiliser generators._ Proof.: We aim to go from the code \(((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{*})\) to \((T_{\bullet},T_{\bullet}^{*})\). The code map we apply to physical qubits is as follows. We call the physical qubits in the support of the logical operators to be glued together the _participating_ qubits. We initialise a fresh qubit in the \(|+\rangle\) state for each pairing of \(X\)-measurements on the two logical operators of qubits \(i\) and \(j\), that is for each qubit in \((W_{\bullet},W_{\bullet}^{*})\) which is not glued to a qubit in \((C_{\bullet},C_{\bullet}^{*})\) or \((D_{\bullet},D_{\bullet}^{*})\). We now modify the stabilisers to get to \((T_{\bullet},T_{\bullet}^{*})\). To start, change the \(X\) stabiliser generators with support on the participating qubits to have one additional fresh qubit each, so that each pairing of \(X\)-measurements shares one fresh qubit. We add a new \(Z\) stabiliser generator with weight \(a+2\) for each participating qubit in one of the logical operators to be glued, where \(a\) is the number of \(X\)-type generators of which that physical qubit is in the support. One can see this using Definition 6.3, as on the middle code \((W_{\bullet},W_{\bullet}^{*})\) we have \[P_{Z}=(\partial_{0}^{W_{\bullet}})^{\intercal}=\left(\mathrm{id}_{\mathbb{F}_{2}^{m}}\quad\mathrm{id}_{\mathbb{F}_{2}^{m}}\quad(\partial_{-1}^{V_{\bullet}})^{\intercal}\right)\] We then measure \(d\) rounds of all stabilisers. All of the qubits in the domain of the last block of \(P_{Z}\) above are those which were initialised to \(|+\rangle\). The only other qubits which contribute to the new \(Z\) stabiliser generators are those on either side of the sandwiched code, i.e. those along the \(\overline{Z}\) logical operators of qubits \(i\) and \(j\). Each of the physical qubits in the support of these logical operators is measured exactly once by the new \(Z\) stabiliser generators, and they are measured in pairs, one from each side; therefore performing these measurements and recording the total product is equivalent to measuring \(\overline{Z}\otimes\overline{Z}\). We will now check this, and verify that it is fault-tolerant. Let the outcome of a new \(Z\)-type measurement be \(c_{\lambda}\in\{1,-1\}\), and the overall outcome \(c_{L}=\prod_{\lambda\leq m}c_{\lambda}\). 
Whenever \(c_{\lambda}=-1\) we apply the gauge fixing operator \(X_{\lambda}=\bigotimes_{\{i\,:\,v_{i}=1\}}X_{i}\) for the specified \(v\in C_{0}^{*}\) (or one could choose a gauge fixing operator using \(D_{0}^{*}\) instead). We let \(X_{c_{L}}=\prod_{\{\lambda\,:\,c_{\lambda}=-1\}}X_{\lambda}\). On participating physical qubits, the merge is then \[X_{c_{L}}\prod_{\lambda}\frac{I+c_{\lambda}Z}{2}=\prod_{\lambda}\frac{I+Z}{2}X_{c_{L}}\] where we abuse notation somewhat to let \(I\) and \(Z\) here refer to tensor products thereof. As each \(X_{\lambda}\) belongs to the same equivalence class of logical \(\overline{X}\) operators in \(H_{0}(C_{\bullet}^{*})\), if \(c_{L}=1\) then \(X_{c_{L}}\) acts as identity on the logical space; if \(c_{L}=-1\) then \(X_{c_{L}}\) acts as \(\overline{X}\) on logical qubit \(i\) in the code before merging. One can then see that these two branches are precisely the branches of the logical \(\overline{Z}\otimes\overline{Z}\) measurement. As the measurements were performed using \(d\) rounds of stabilisers, and the gauge fixing operators each have support on at least \(d\) qubits, the overall procedure is fault-tolerant with code distance \(d\). We also check that the procedure is insensitive to errors in the initialisation of fresh qubits. If a qubit is initialised instead to \(|-\rangle\), or equivalently suffers a \(Z\) error, then the new \(Z\) stabiliser measurements are insensitive to this change, and it will just show up at the \(X\) measurements on either side of the fresh qubit. If it suffers some other error, say sending it to \(|1\rangle\), then each new stabiliser measurement with that qubit in its support may have its result flipped. By construction of \(V_{\bullet}\), each fresh qubit is in the support of an even number of new \(Z\) stabiliser measurements, and so initialising the fresh qubits incorrectly will not change \(c_{L}\). 
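The last claim, that each fresh qubit sits in an even number of the new \(Z\) generators, follows from \(v\in\ker(\partial_{-1}^{C_{\bullet}})\): each \(X\)-check overlaps \(\operatorname{supp}(v)\) an even number of times. It is also trivial to check numerically; a minimal sketch (variable names ours), using the \(\partial_{-1}^{V_{\bullet}}\) from the Shor-code example of Section 5.4.2:

```python
import numpy as np

# d_{-1}^V for the Shor-code operator Z_1 Z_4 Z_7 (Section 5.4.2):
dV = np.array([[1, 1, 0],
               [1, 0, 1]], dtype=np.uint8)

# Column q of the last block of P_Z is row q of dV, so the number of new
# Z-generators containing fresh qubit q is the weight of row q of dV.
assert all(int(w) % 2 == 0 for w in dV.sum(axis=1))
print(dV.sum(axis=1))   # [2 2]
```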
As ZX diagrams, the branches are: For the prototypical example of lattice surgery we then have: We also look at a less obvious example, that of fault-tolerant surgery of the Shor code, in Appendix G. By dualising appropriately one can perform an \(\overline{X}\)-merge by sandwiching in a similar manner. We can also do the 'inverse' of the merge operation fault-tolerantly: **Corollary 6.10**.: _Let \((T_{\bullet},T_{\bullet}^{*})\) be a CSS code formed by sandwiching codes \((C_{\bullet},C_{\bullet}^{*})\) and \((D_{\bullet},D_{\bullet}^{*})\) together along a \(\overline{Z}\) operator. Then there is a fault-tolerant procedure to implement a code map on logical qubits \(\mathcal{E}_{\overline{X}}\) from \((T_{\bullet},T_{\bullet}^{*})\) to \(((C\oplus D)_{\bullet},(C\oplus D)_{\bullet}^{*})\)._ Proof.: As the initial code is already a sandwiched code we can just take the opposite of sandwiching. We delete the qubits belonging to the intermediate code \((W_{\bullet},W_{\bullet}^{*})\) but not \((C_{\bullet},C_{\bullet}^{*})\) or \((D_{\bullet},D_{\bullet}^{*})\) by measuring them out in the \(X\)-basis. 
The code map \(\mathcal{E}_{\overline{X}}\) on participating logical qubits is obtained by following precisely the same logic as for traditional lattice surgery [25]. Again, by dualising appropriately we get the last split operation. Given a procedure for making \(\overline{Z}\otimes\overline{Z}\) and \(\overline{X}\otimes\overline{X}\) logical measurements and the isometries from splits, one can easily construct a logical CNOT between suitable CSS codes following, say, [5] and observing that the same ZX diagrammatic arguments apply. Augmented with some Clifford single-qubit gates and non-stabiliser logical states one can then perform universal computation. As opposed to some other methods of performing entangling gates with CSS codes, e.g. transversal 2-qubit gates, the schemes above require only the \(m\) qubits from the respective \(\overline{Z}\) or \(\overline{X}\) operators to participate, and we expect \(m\ll n\) for practical codes. Unlike that of [11], our method does not require a large ancillary hypergraph product code, which can have significantly worse encoding rate and code distance scaling than the LDPC codes holding data. Our method does not require the code to be 'self-ZX-dual' in the sense of [8], and unlike [26] our method does not require the code to be defined on any kind of manifold, and is purely algebraic in description; moreover, these works study single-qubit gates and measurements rather than entangling gates. 

## 7 Conclusions and further work 

We believe our constructions are flexible and conceptually quite simple. The immediate next step is to benchmark our CSS code surgery against other methods of performing entangling logical gates and characterise which CSS codes admit gauge-fixable separated logical operators such that the merges have distance bounded below. The pushouts we gave along logical operators are the most obvious cases. By taking pushouts of more interesting spans other maps on logical data can be obtained, although by Proposition 4.12 and Corollary 4.17 all code maps as we defined them are limited and do not allow for universal quantum computation; we also do not know whether other pushouts would allow the maps on logical data to be performed fault-tolerantly. Herein we assumed that the two codes being 'glued' are different codes, but the same principles apply if we have only one code we would like to perform internal surgery on. In this case, the correct universal construction to use should be a coequaliser. There may be other uses of colimits in \(\mathtt{Ch}(\mathtt{Mat}_{\mathbb{F}_{2}})\). The method of constructing good families of quantum LDPC codes in [38] uses a balanced product, which is also a coequaliser, and the instance used there could be generalised. The initial classical codes come from expander graphs, and the quotient is with respect to actions of a finite group \(G\). This group algebra could be changed to some other differential graded algebra, and the starting codes do not have to come from graphs. A generalised balanced product in this way cannot have asymptotically better metrics than those of [38], up to constant factors, as their construction already saturates the relevant bounds. However, it would be interesting to see if one can obtain better metrics for concrete instances. Calculating the homologies of such codes is likely to require tools such as spectral sequences. 
It should be possible to extend the definitions of \(\overline{X}\)- and \(\overline{Z}\)-merges straightforwardly to include metachecks [10], say by specifying that the logical operator subcomplex \(V_{\bullet}\) now runs from \(V_{0}\) to \(V_{-2}\), so it has \(X\)-checks and then metachecks on \(X\)-checks, but we have not proved how this affects metachecks in the merged code. There are several ways in which our constructions could be generalised to other codes. The obvious generalisation is to qudit CSS codes. For qudits of prime dimension \(q\), everything should generalise fairly straightforwardly using a different finite field \(\mathbb{F}_{q}\), but in this case the cell complexes will require additional data in the form of an orientation on edges, as is familiar for qudit surface codes. When \(q\) is not prime, one formalism for CSS codes with dimension \(q\) looks to be chain complexes in \(\mathbb{Z}_{q}\)-FFMod, the category of free finite modules over the ring \(\mathbb{Z}_{q}\). As \(\mathbb{Z}_{q}\) is not generally a P.I.D. this may complicate the homological algebra. Second, if we wish to upgrade to more general stabiliser codes we can no longer use chain complexes. The differential composition \(P_{X}P_{Z}^{\intercal}\) is a special case of the symplectic product \(\omega(M,N)=M\omega N^{\intercal}\) for \(\omega=\begin{pmatrix}0_{n}&I_{n}\\ -I_{n}&0_{n}\end{pmatrix}\) [23], but by generalising to such a product we lose the separation of \(Z\) and \(X\) stabilisers to form a pair of differentials. It is unclear what the appropriate notion of a quotient along an \(\overline{X}\) or \(\overline{Z}\) operator is for such codes. For quantum codes which are not stabiliser but are based on cell complexes, such as the Kitaev model [30], there are no stabiliser generators, but the codes are still 'CSS-like', in the sense that vertices correspond to actions of the group algebra \(\mathbb{C}G\) and faces to actions of the function algebra \(\mathbb{C}(G)\), with each measurement outcome corresponding to an irreducible representation of the quantum double \(D(G)=\mathbb{C}(G)\rtimes\mathbb{C}G\). More generally we can replace \(\mathbb{C}G\) and \(\mathbb{C}(G)\) with \(H\) and \(H^{*}\) for any semisimple Hopf algebra \(H\) [35, 14] while retaining the relevant features of the model. Just as there are no stabiliser generators, there are no longer \(Z\) and \(X\)-operators, but there are ribbon operators. As special cases there are ribbon operators which correspond to actions of only \(\mathbb{C}G\) or \(\mathbb{C}(G)\). The first author recently generalised lattice surgery to Kitaev models [15], albeit with some caveats. In the same way that CSS codes generalise stabiliser codes based on cell complexes, we imagine there could be a general class of commuting projector models using the quantum double, which are not necessarily defined on a tessellated manifold. The details of such a class are not known to us, and generalising the notion of 'sites' on a lattice seems difficult. We speculate that the notion of 'gluing' along, say, a \(\mathbb{C}G\) operator could work for such commuting projector models. 

## 8 Acknowledgements 

AC thanks Aleks Kissinger for helpful discussions about Lemma 4.4, and both Aleks Kissinger and John van de Wetering for helpful discussions about Proposition 4.12. AC also thanks the Wolfson Harrison UK Research Council Quantum Foundation Scholarship for making this work possible. 
We are grateful to Christophe Vuillot for spotting a crucial error in an earlier version of this paper, and for providing us with the illustrative example in Appendix F.
2302.14507
Ask and You Shall be Served: Representing and Solving Multi-agent Optimization Problems with Service Requesters and Providers
In scenarios with numerous emergencies that arise and require the assistance of various rescue units (e.g., medical, fire, & police forces), the rescue units would ideally be allocated quickly and distributedly while aiming to minimize casualties. This is one of many examples of distributed settings with service providers (the rescue units) and service requesters (the emergencies) which we term _service oriented settings_. Allocating the service providers in a distributed manner while aiming for a global optimum is hard to model, let alone achieve, using the existing Distributed Constraint Optimization Problem (DCOP) framework. Hence, the need for a novel approach and corresponding algorithms. We present the Service Oriented Multi-Agent Optimization Problem (SOMAOP), a new framework that overcomes the shortcomings of DCOP in service oriented settings. We evaluate the framework using various algorithms based on auctions and matching algorithms (e.g., Gale-Shapley). We empirically show that algorithms based on repeated auctions converge to a high quality solution very fast, while repeated matching problems converge slower, but produce higher quality solutions. We demonstrate the advantages of our approach over standard incomplete DCOP algorithms and a greedy centralized algorithm.
Maya Lavie, Tehila Caspi, Omer Lev, Roei Zivan
2023-02-28T11:53:33Z
http://arxiv.org/abs/2302.14507v1
Ask and You Shall be Served: Representing & Solving Multi-agent Optimization Problems with Service Requesters and Providers ###### Abstract In scenarios with numerous emergencies that arise and require the assistance of various rescue units (e.g., medical, fire, & police forces), the rescue units would ideally be allocated quickly and distributedly while aiming to minimize casualties. This is one of many examples of distributed settings with service providers (the rescue units) and service requesters (the emergencies) which we term _service oriented settings_. Allocating the service providers in a distributed manner while aiming for a global optimum is hard to model, let alone achieve, using the existing Distributed Constraint Optimization Problem (DCOP) framework. Hence, the need for a novel approach and corresponding algorithms. We present the Service Oriented Multi-Agent Optimization Problem (SOMAOP), a new framework that overcomes the shortcomings of DCOP in service oriented settings. We evaluate the framework using various algorithms based on auctions and matching algorithms (e.g., Gale-Shapley). We empirically show that algorithms based on repeated auctions converge to a high quality solution very fast, while repeated matching problems converge slower, but produce higher quality solutions. We demonstrate the advantages of our approach over standard incomplete DCOP algorithms and a greedy centralized algorithm. Multi-Agent System; Multi-Agent Optimization; Distributed Problem Solving; Distributed Constraint Optimization Problems Footnote †: Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), A. Ricci, W. Yeoh, N. Agmon, B. An (eds), May 29 - June 2, 2023, London, United Kingdom, \(\copyright\) 2023 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved. 

## 1 Introduction 

Advances in computation and communication have resulted in realistic distributed applications in which people interact with technology to reach and optimize mutual goals, such as saving lives in disaster response (Nguyen et al., 2018) and maximizing user satisfaction while minimizing energy usage in smart homes (Sundundhi et al., 2018; Sundhi et al., 2018). Thus, there is a growing need for optimization methods to support decentralized decision making in complex multi-agent systems. Many of these systems share the underlying structure of a _service oriented system_, which includes two sets of agents: one set of agents that can provide services and the other of agents that require services to be provided. Consider, for example, a disaster rescue scenario, where rescue units (medical personnel, fire fighters, police, etc.) need to coordinate their actions to save as many people as possible from numerous disaster sites. This coordination problem is particularly challenging due to the following characteristics: **1) Optimization of a Global Objective:** the various rescue units need to work together as a team towards a common goal (e.g., saving as many victims as possible). **2) Decentralized Coordination:** often there is no centralized entity that coordinates agents, but rather a diverse set of agents (e.g., medical personnel, fire fighters and disaster site coordinators) making personal coordination decisions. While we use the disaster rescue scenario as a motivating setting throughout this paper, these factors are present in a much larger class of multi-agent coordination problems. 
A common approach to solving these types of problems is to model them as _distributed constraint optimization problems_ (DCOPs), where decision makers are modeled as cooperative _agents_ that assign _values_ to their _variables_ (Sundhi et al., 2018; Sundhi et al., 2018; Sundhi et al., 2018; Sundhi et al., 2018). The goal in a DCOP is to optimize a global objective in a decentralized manner. The global objective is decomposed into constraints that define the utility agents derive (or costs they incur) from combinations of assignments to variables (Ball et al., 2017; Ball et al., 2017; Ball et al., 2017). This model captures how a rescue unit (an agent) with a schedule (a variable) is assigned a disaster site to go to (a value for the variable), with the goal of saving as many victims as possible (the global objective). For each combination of assignments of disaster sites to the police units' schedules, a (possibly) different number of victims will be saved (the utility). In DCOP algorithms, agents exchange messages, communicating selected value assignments or their estimated utilities. The information received by an agent is used to adjust its variable assignments. The local quality of the assignments they select is measured according to the constraints they are subject to.

If we examine the properties of the service oriented systems described above, it is apparent that the DCOP model does not naturally apply to them. In many of them, the constraints are defined by entities (e.g., disaster site coordinators) different from the agents making the decisions (e.g., rescue units). These entities require a service to be performed, but they do not assign variables. Rather, they are affected by the consequences of the decisions made by the agents performing the actions. Thus, while the solution is determined by the set of the "original" DCOP agents (the ones assigning variables, e.g., rescue units), the quality of the solution (i.e., the global utility derived from it) is measured according to the satisfaction of the service requiring agents (e.g., disaster site coordinators) with the services provided to them. In our disaster response example, consider the ambulances that are required to evacuate casualties from disaster sites to hospitals. The number of casualties and the severity of their wounds in each disaster site determine the utility derived from evacuating them to hospitals (e.g., there is not much utility in evacuating people with very minor wounds). To use the standard DCOP model for solving this problem, we would have the ambulances hold complete and coherent information regarding _all disaster sites_ they can drive to and exchange messages with _all rescue units_ (e.g., ambulances, police units and fire fighters) that can attend to casualties from the same sites (neighboring units). Moreover, to calculate the utility that they would derive from each decision they make, the ambulances would require knowledge of _all assignments made by neighboring units_ and the utility (or cost) of _all constraints_ representing the outcome of each possible combination of their assignments. Such a modelling requires agents to have detailed knowledge of almost all other agents, defeating the purpose of a distributed setting. The fact that the dominant model used to represent and solve multi-agent optimization problems seems deficient for so many realistic distributed applications is what motivates this work.
We propose an alternative abstract model, the _Service Oriented Multi-Agent Optimization Problem_ (SOMAOP). In contrast to standard DCOP, SOMAOP offers a paradigm for multi-agent optimization that can handle service oriented settings. In this model, agents are divided into two sets: service requesters (SRs) and service providers (SPs). This approach allows us to adopt (and adapt) existing AI and OR centralized methods for assigning service providers to service requesters. Thus, in this paper we: **(1)** present the SOMAOP model; **(2)** propose algorithms based on auctions and matching algorithms for solving SOMAOPs; **(3)** conduct an empirical comparison between various SOMAOP algorithms, and empirically show that algorithms based on repeated auctions converge to a high quality solution very fast, while algorithms based on repeated matching converge slower but produce higher quality solutions; and **(4)** compare SOMAOP algorithms to DCOP algorithms in solving service oriented problems. Our results demonstrate that the SOMAOP model allows the use of algorithms that converge fast to high quality solutions while maintaining the problem's distributed structure and without requiring complete and coherent information to be held by the service providing agents.

## 2. Service Oriented Multi-Agent Optimization

The Service Oriented Multi-Agent Optimization Problem (SOMAOP) is a multi-agent problem in which there is a clear distinction between two disjoint sets of agents: _service requesting_ agents (SRs) and _service providing_ agents (SPs). We create a bipartite graph with the SRs and SPs as the nodes. Each service providing agent (SP) is connected by an edge to the SR nodes that require services that it can perform. Each service requesting agent (SR) is similarly connected by an edge to the SP nodes that can provide the services that it requires. Each agent can communicate solely with agents that are connected to it by an edge. The variables of the problem are held by the SPs, with assignments to the variables reflecting the actions (services) that they will perform. A solution to the problem will include an assignment to each of the variables. The solution's quality will be determined by the satisfaction of the SRs with the actions chosen by the SPs (the services assigned to be provided to them) and will be reflected in a global utility, which the agents aim to maximize. Thus, one set of agents (SPs) selects the actions that are performed, while the other set (SRs) evaluates the outcome of these actions.

Formally, a SOMAOP is a tuple \((SP,SR,S,PS,RS,X,D,U)\), where \(SP=\{SP_{1},SP_{2},\ldots,SP_{n}\}\) is a set of \(n\) service providing agents and \(SR=\{SR_{1},SR_{2},\ldots,SR_{m}\}\) is a set of \(m\) service requesting agents. The capabilities provided and requested as services are formalized as _skills_. The set of all skills is \(S=\{S_{1},S_{2},...,S_{k}\}\). Each \(SP_{i}\in SP\) has a set of providable skills, \(PS_{i}\subseteq S\). For each \(s\in PS_{i}\), the SP has a workload \(w_{i}^{s}\) that defines the amount of the skill it can provide as a service. For example, an ambulance can evacuate a limited number of casualties. For each skill \(s\in PS_{i}\), the SP also has a work time function \(t_{i}^{s}(w)\) that defines the time it takes to complete \(w\) workload of this skill. The workload of a providable skill \(s\) decreases when the SP schedules the skill to be provided as a service to an SR (providable skill \(s\) is depleted when \(w_{i}^{s}=0\)).
On the other hand, each \(SR_{j}\in SR\) has a set of requested skills, \(RS_{j}\subseteq S\). For each requested skill \(s\in RS_{j}\), the SR has a workload \(w_{j}^{s}\) that defines the amount of service required of the skill it requests. The workload of a requested skill \(s\) decreases when an SP schedules to provide the skill as a service to the SR (requested skill \(s\) is no longer required when \(w_{j}^{s}=0\)). For each of its requested skills \(s\in RS_{j}\) there is an optimal team size for performance capability, \(q_{j}^{s^{*}}\), defining the number of SPs that are requested to cooperate simultaneously when performing the service (e.g., if a requested skill \(s\) with \(w_{j}^{s}=2\) has \(q_{j}^{s^{*}}=2\), \(SR_{j}\) will prefer two SPs to each schedule to provide half of the requested workload of \(s\) simultaneously rather than a single SP to provide the full requested workload). Additionally, each requested skill has a maximal utility \(u_{j}^{s}\), defining how much utility could be derived if the full service is completed immediately, with \(q_{j}^{s^{*}}\) SPs sharing the workload of the service simultaneously. Lastly, each requested skill \(s\) has a latest completion time \(t_{max_{j}}^{s}\), after which the service is no longer required.

\(X=\{X_{1},X_{2},...,X_{n}\}\) includes a set of variables for each SP, i.e., for each service provider \(SP_{i}\), \(1\leq i\leq n\), \(X_{i}\) includes the set of variables \(x_{i_{1}},x_{i_{2}},\ldots,x_{i_{\lambda_{i}}}\) representing the services that \(SP_{i}\) will provide; \(\lambda_{i}\) is the maximal number of services that it can perform. An assignment to \(SP_{i}\)'s variable \(x_{i_{a}}\) is a service tuple \((SR_{a},s_{a},w_{a},t_{a})\) representing the SR that the service will be provided to, the skill provided, the workload provided and the expected start time for performing the service, respectively. The order of the variables defines the order in which the agent will execute the services, i.e., \(SP_{i}\) will first perform the service assigned to \(x_{i_{1}}\), then the service assigned to \(x_{i_{2}}\), etc. \(D=\{D_{1},D_{2},\ldots,D_{n}\}\) includes sets of variable domains such that \(D_{i}\), \(1\leq i\leq n\), includes the set of domains \(d_{i_{1}},d_{i_{2}},\ldots,d_{i_{\lambda_{i}}}\), which include the values that can be assigned to the variables \(x_{i_{1}},x_{i_{2}},\ldots,x_{i_{\lambda_{i}}}\) of \(SP_{i}\), respectively (i.e., \(d_{i_{1}}\) contains all of the service tuples that \(SP_{i}\) can schedule to provide first). The domains can also include a non-service assignment in cases where an SP is purposefully not assigned to a service, e.g., when SPs need time to recharge.

A solution \(\sigma\) to the SOMAOP is an assignment, to each of the variables held by the set of SPs, of a value from its domain. The utility derived by a service requesting agent \(SR_{j}\) from solution \(\sigma\) is denoted by \(U_{j}(\sigma)\). It is calculated as a function of the utility \(SR_{j}\) will derive from the services scheduled for each requested skill \(s\in RS_{j}\) as specified by \(\sigma\), denoted \(u_{j}^{s}(\sigma)\). \(u_{j}^{s}(\sigma)\) is bounded by \(u_{j}^{s}\) and is affected by three factors: 1) The time the SR will spend awaiting service for \(s\): the utility to be derived from the service will decrease with a latency penalty function, corresponding to the time the SR awaits service. 2) The amount of workload scheduled to be performed and its timing. 3) The performance capability of the SPs providing the service: the performance capability of \(SR_{j}\)'s requested skill \(s\) is affected by the number of SPs that provide the workload of the service simultaneously (Bahdan et al., 2016). This is denoted by the capability function, \(Cap_{j}^{s}(q)\). The function can represent minimum required or maximum allowed numbers of agents by setting the capability to 0 for fewer agents, or by not increasing the capability when more than the maximum number of required agents share a service, respectively. \(Cap_{j}^{s}(q)\) reaches its maximum at \(q=q_{j}^{s^{*}}\), and we assume \(Cap\) is weakly monotonically increasing in \(q\).

\(U(\sigma)\) defines the global utility derived from solution \(\sigma\) and is a function of the utilities received by each of the SRs, i.e., \(U(\sigma)=F(U_{1}(\sigma),U_{2}(\sigma),\ldots,U_{m}(\sigma))\). The goal of the agents in SOMAOP is to maximize the global utility function \(U\).
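To make the formal definitions concrete, the following minimal Python sketch models the two agent types; all class and field names are our illustrative choices and are not part of the formal model.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

Skill = str  # an identifier for a skill from the set S

@dataclass
class ServiceProvider:
    """SP_i: providable skills PS_i with remaining workloads w_i^s."""
    name: str
    workloads: Dict[Skill, float]                     # w_i^s for each s in PS_i
    work_time: Dict[Skill, Callable[[float], float]]  # t_i^s(w)
    # each assigned variable x_{i_a} is a service tuple
    # (SR, skill, workload, expected start time), kept in execution order
    schedule: List[Tuple[str, Skill, float, float]] = field(default_factory=list)

@dataclass
class RequestedSkill:
    """Demand parameters of one requested skill s in RS_j."""
    workload: float     # w_j^s, remaining requested workload
    team_size: int      # q_j^{s*}, optimal number of cooperating SPs
    max_utility: float  # u_j^s, utility of an immediate, full completion
    deadline: float     # t_max_j^s, latest useful completion time

@dataclass
class ServiceRequester:
    """SR_j: evaluates the services scheduled for its requested skills."""
    name: str
    requests: Dict[Skill, RequestedSkill]
```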
## 3. Algorithms for Solving SOMAOP

The general approach we take is to design iterative distributed algorithms whose building blocks are existing methods for assigning service providers to services, used in the Operations Research literature, which we adapt to a distributed environment. Specifically, we will focus on two approaches: auctions (Bahdan et al., 2016; Kliem et al., 2017; Kliem et al., 2018; Kliem et al., 2019) and matching (Bahdan et al., 2016; Kliem et al., 2019; Kliem et al., 2019).

### Repeated Parallel Auctions (RPA)

The RPA algorithm creates allocations of SPs to SRs by using a repeated auction process (Kliem et al., 2017; Kliem et al., 2018; Kliem et al., 2019). In each of the algorithm's iterations (a predefined number), an auction occurs between the SPs (sellers that offer providable skills) and the SRs (buyers who place bids on the skills they require). The auction begins with each SP sending a _service proposal_ to its neighboring SRs for each of their joint skills (skills that the SP can provide and the SR requests). A service proposal from \(SP_{i}\) to \(SR_{j}\) for providable skill \(s\) is composed of \(SP_{i}\)'s proposed workload for \(s\) and the proposed service start time at which \(SP_{i}\) proposes to begin providing \(s\) to \(SR_{j}\). Upon receiving service proposals from its SP neighbors, each SR responds by sending _service requests_ to the SPs that it would most want to provide each of its requested skills. A service request from \(SR_{j}\) to \(SP_{i}\) for requested skill \(s\) is composed of \(SR_{j}\)'s requested workload for \(s\), a requested start time for \(SP_{i}\) to begin to provide \(s\) to \(SR_{j}\), and a bid value that expresses the utility it could derive from receiving \(s\) with the workload requested at the start time requested. Once all service requests for the iteration are sent, the SPs will attempt to create a schedule (each SP starts with an empty schedule in each iteration). The SP attempts to schedule the service requests in descending order of bid value. A schedule attempt for request \(r\) succeeds if the completion time of the last scheduled request is earlier than the requested start time of \(r\) and if the SP has enough workload left to provide it, given the services needed to fulfil the already-scheduled requests. The SP will continue to attempt to schedule requests until an attempt fails. The SP responds to a scheduled request with a service proposal to provide the service as requested.
The SP responds to an unscheduled request with an updated service proposal including its updated remaining providable skills and workloads (the original skills and workloads, minus those needed for the scheduled requests) and its updated proposed service start time (the next time possible after the scheduled requests). This begins the next auction (iteration), and the process occurs again.

Algorithm 1 depicts the main procedure of the SPs. Initially, an SP proposes to its neighboring SRs the earliest possible service start time as well as its entire workload per joint skill (lines 14-17, as there are no requests in the initial iteration). In later iterations, the SP creates a new schedule by responding to service requests received from SRs, ordered highest bid first (lines 3-4). The SP will schedule request \(r\) for skill \(s(r)\) if the earliest possible start time for the request, \(t_{earliest}^{r}\), is earlier than (or equal to) the requested start time \(t_{start}(r)\) and if the SP has enough workload \(w_{i}^{s(r)}\) to fulfil the requested workload, \(w(r)\) (line 6). If a request is scheduled, the SP proposes to provide the request (line 7). The SP then schedules the request in the next free slot in \(X_{i}\), updates \(w_{i}^{s}\) according to the requested workload and \(t_{earliest}\) according to the expected completion time of the request (lines 8-9). If a request is not feasible, the SP stops the scheduling process (line 11). The SP responds to the unscheduled requests by sending the SRs new proposals to provide the skills after the scheduled requests, along with the workloads it will be able to provide at this later time (lines 14-17).

```
1: for fixed number of iterations do
2:   Reset \(w_{i}^{s}\) \(\forall s\in PS_{i}\), \(X_{i}\); \(t_{earliest}\gets 0\)
3:   \(requests\leftarrow\) requests received from SRs in previous iteration, ordered by highest bid value
4:   for \(r\in requests\) do
5:     \(t_{earliest}^{r}\leftarrow\) earliest time after \(t_{earliest}\) that \(SP_{i}\) can begin serving \(r\)
6:     if \(t_{earliest}^{r}\leq t_{start}(r)\) and \(w_{i}^{s(r)}\geq w(r)\) then
7:       send proposal(\(SR(r)\), \(s(r)\), \(w_{i}^{s(r)}\), \(t_{earliest}^{r}\))
8:       schedule(\(r\))
9:       Update \(w_{i}^{s(r)}\), \(t_{earliest}\)
10:    else
11:      break
12:    endif
13:  endfor
14:  for \(SR_{j}\in\) neighbors do
15:    for \(s\in PS_{i}\cap RS_{j}\) not proposed in current iteration do
16:      \(t_{earliest}^{j,s}\leftarrow\) earliest time after \(t_{earliest}\) that \(SP_{i}\) can begin serving skill \(s\) to \(SR_{j}\)
17:      send proposal(\(SR_{j}\), \(s\), \(w_{i}^{s}\), \(t_{earliest}^{j,s}\))
18:    endfor
19:  endfor
20: endfor
```

**Algorithm 1** RPA: Service Provider \(i\)

Algorithm 2 depicts the main procedure of the SRs. For each of its requested skills \(s\), the SR will iterate over the proposals received from SPs that correspond with \(s\), ordered by the quality of the proposals received (lines 3-5), i.e., the maximal utility gain possible from the workload the SP has proposed, normalized by workload. For each proposal, the SR will allocate workload equal either to the workload proposed or to the remaining unallocated workload requested (line 6). The bid value is then calculated by the SR and is sent as a request to the proposing SP. The remaining requested workload \(w_{j}^{s}\) is updated according to the allocation (line 9). The allocation for a requested skill ends when the allocation has satisfied the request (lines 10-11) or there are no more proposals to allocate.
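The SP-side scheduling step of Algorithm 1 can be condensed into a few lines of Python. The sketch below reuses the `ServiceProvider` fields from the model sketch in Section 2 and a simplified request record; it illustrates the greedy bid-ordered loop, not the full distributed protocol.

```python
from collections import namedtuple

# a simplified service request, as sent by the SRs in Algorithm 2
Request = namedtuple("Request", "requester skill workload start_time bid")

def rpa_schedule(sp, requests):
    """One RPA iteration for SP_i (Algorithm 1, lines 3-13): greedily schedule
    requests in descending bid order, stopping at the first infeasible one."""
    t_earliest, accepted = 0.0, []
    for r in sorted(requests, key=lambda r: r.bid, reverse=True):
        # feasible: the SP can begin no later than requested and has workload left
        if t_earliest <= r.start_time and sp.workloads[r.skill] >= r.workload:
            accepted.append((r.requester, r.skill, r.workload, t_earliest))
            sp.schedule.append(accepted[-1])
            sp.workloads[r.skill] -= r.workload
            t_earliest += sp.work_time[r.skill](r.workload)  # completion time of r
        else:
            break  # remaining requests receive updated proposals instead
    return accepted
```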
The algorithm does not aim to allocate exactly \(q_{j}^{s^{*}}\) SPs to \(s\) but rather enough SPs to provide the workload requested.

#### 3.1.1. RPA convergence

The convergence of RPA depends on how bids are calculated. There are several parameters that can be considered when an SR decides on a bid to be sent to an SP regarding a specific skill \(s\), such as the expected satisfaction from the service, the expected starting time and the amount of workload potentially received for skill \(s\) from other SPs prior to this starting time. We will prove that when the following assumptions hold, the algorithm is guaranteed to converge. First, we assume that SRs will bid higher for an earlier starting time. Formally, for each SR, \(SR_{j}\), for every two bids for the same skill \(s\), \(b_{s}\) and \(b_{s}^{\prime}\) with starting times \(st_{b}\) and \(st_{b^{\prime}}\), \(b_{s}>b_{s}^{\prime}\) if and only if \(st_{b}<st_{b^{\prime}}\). We will further assume that an SP will schedule a given skill \(s\) to serve a given SR at most once. While our model is more abstract and there are other possibilities for calculating the bids, these assumptions hold in many realistic scenarios, where the quality of service provided by different agents is similar and the starting time is a critical parameter. One such scenario is the mass casualty incident problem we address in our experimental evaluation.

In our convergence proof we will use the following notation. We will denote by \(TS\) the set of scheduled services that will not change in following iterations (the set can only grow as the algorithm proceeds). It includes scheduled services such as \(ts_{ijs}^{k}\), which indicates that the \(k\)'th service provided by \(SP_{i}\) will be to \(SR_{j}\) on skill \(s\), and that this fact will _not change later on_. When \(TS\) includes all provided service requests, the algorithm converges. We will further denote by \(hb^{k}\) the highest bid in iteration \(k\) for a service that is not yet in \(TS\), and by \(hb_{SP}^{k}\), \(hb_{SR}^{k}\) and \(hb_{s}^{k}\) the SP, SR and skill that \(hb^{k}\) corresponds to, respectively.

**Observation 1**.: _In iteration \(k+1\), \(hb_{s}^{k}\) of \(hb_{SR}^{k}\) will be the first service that is not in \(TS\) on \(hb_{SP}^{k}\)'s schedule._

This is because SPs order services according to their bid sizes.

**Observation 2**.: _The only way that \(hb^{k+1}\) can be smaller than \(hb^{k}\) is when the service that \(hb^{k}\) corresponds to was added to \(TS\)._

As Observation 1 notes, the highest bid would be the first one (apart from those in \(TS\)) handled by the SPs. If \(hb^{k}\) is not the highest bid in iteration \(k+1\), it can only be because it was added to \(TS\) or because there is a larger bid sent in iteration \(k+1\).

**Lemma 3.1**.: _The number of consecutive iterations in which \(TS\) does not grow is bounded by \(2|SP|\cdot|S|\)._

Proof.: Under a given \(TS\) set, the highest possible bid not yet in \(TS\) will be added to \(TS\): since it is not surpassed, the SP agent getting the bid will always give it a high priority, and the requesting agent gets it as soon as possible (otherwise, it would have given a higher bid). Thus, when discussing changes to \(TS\) we can focus on looking at the highest possible bid that is not in \(TS\) yet. Initially \(TS\) is empty. After the first iteration of the algorithm, \(hb_{SP}^{1}\) schedules the corresponding request as a service.
Since all SRs in the first iteration considered the earliest possible arrival time of each SP, this bid will remain the highest and will not change. Thus, this scheduled service is added to \(TS\). In each of the following iterations, each SP has a schedule that was determined according to the bids it received in the previous iteration. According to Observation 1, following iteration \(k\), in iteration \(k+1\), \(hb_{s}^{k}\) will be scheduled first among all services not yet in \(TS\) by \(hb_{SP}^{k}\). Thus, either \(hb^{k+1}\) is the same as \(hb^{k}\), or, according to Observation 2, it was replaced by a higher bid. In both cases, \(hb_{SP}^{k}\) will never submit an earlier arrival time to \(hb_{SR}^{k}\) than the one it submitted at iteration \(k+1\), and therefore it will never receive a bid for this service that is higher than the one it got for it in this iteration. Thus, the maximal number of different highest bids between consecutive additions to \(TS\) is bounded by two iterations for each SP on each skill, i.e., \(2|SP|\cdot|S|\). That is, after that number of iterations, it is guaranteed that one of those bids was the maximal possible one, and thus would be added to \(TS\).

**Proposition 3.2**.: _RPA converges within \(2|SP|^{2}\cdot|S|^{2}\) iterations._

Proof.: According to our assumption, each SP serves an SR on a skill only once, i.e., the number of services that are added to \(TS\) is bounded by \(|SP|\cdot|S|\). From Lemma 3.1, the maximal number of iterations between each increment to the size of the set \(TS\) is bounded by \(2|SP|\cdot|S|\). Thus, the maximal number of iterations before the algorithm converges is bounded by \(2|SP|^{2}\cdot|S|^{2}\).

Our assumption that each SP will serve an SR agent with skill \(s\) at most once can easily be relaxed to serving the SR agent a fixed number of times. The proof of Proposition 3.2 will only need to be slightly changed, multiplying our convergence bound by a constant factor.

### Distributed Simulated Repeated Matching Algorithm (DSRM)

The DSRM algorithm creates allocations of SPs to SRs by repeatedly simulating the outcome of a matching algorithm over time. Each agent (SP as well as SR) has an internal clock that begins at \(t=0\) and progresses throughout the DSRM algorithm. Each iteration considers a simulated time \(t\) at which the agents execute an iterative Gale-Shapley-inspired many-to-one matching algorithm [15] to match SPs with SRs. The outcome of the matching algorithm is translated to service tuples by the SRs and scheduled by the SPs. Once an SP is matched to a service for an SR, it can determine when it will finish providing the service by calculating how long it will take to complete its assigned workload (using \(t_{i}^{s}(w)\)). This way it can also know what its remaining workload will be at a future time. At each iteration, we simulate as though the previous allocations already happened, which means that the provided services and workloads are updated, as well as the internal clock of each agent (an explanation of how to distributedly synchronize the internal clocks to the next relevant start time in each iteration follows). In each iteration we want to make a decision for the next allocation at this time. This simulated matching process will end when there are no more SRs with remaining requested skills or no more SPs with remaining providable skills.
The final schedule is the solution to the problem and will include the assignments that were "executed" during the simulated process, in the order that they were simulated.

The iterative Gale-Shapley-inspired many-to-one matching algorithm is performed as follows. In each iteration, the SR calculates a bid value for each of its requested skills, for each neighboring SP that can provide the service. This bid value expresses the utility it could derive from receiving the service from the SP. The SRs share the bids with the SPs. Then, a distributed version of the Gale-Shapley college admissions algorithm (DGS) [4] is executed to create a many-to-one matching. Each SR acts separately and simultaneously for each of its requested skills. Both the SPs and the SRs' requested skills rank one another according to the bid values. The SPs that have been matched will not take part in the next iteration. The SRs will take part in the next iteration if they have at least one requested skill \(s\) that has not been matched with \(q_{j}^{s^{*}}\) SPs (defined by \(\min\{q_{j}^{s^{*}},\) number of SP neighbors with \(s\in PS_{i}\}\)), and there is at least one neighboring SP to provide the skill. The iterative matching algorithm ends when there are no SPs or SRs left to match. Note that the algorithm does not aim to allocate just enough SPs to provide the workload requested but rather \(q_{j}^{s^{*}}\) SPs for each skill.

```
1: \(t_{last}\gets 0\), \(t\gets 0\), \(assignment\_t\leftarrow\) null
2: repeat
3:   for \(SR_{j}\in\) neighbors do
4:     for \(s\in PS_{i}\cap RS_{j}\) do
5:       \(t_{earliest}^{j}\leftarrow\) earliest time after \(t\) that \(SP_{i}\) can begin serving \(SR_{j}\)
6:       send proposal(\(SR_{j}\), \(s\), \(w_{i}^{s}\), \(t_{earliest}^{j}\))
7:     endfor
8:   endfor
9:   Receive bids from SRs and rank them accordingly
10:  Update neighbors
11:  while \(|\)neighbors\(|>0\) and has no match do
12:    Run Distributed Gale-Shapley
13:    neighbors \(\leftarrow\) neighboring SRs that require some skill from \(PS_{i}\) and haven't completed their matching
14:  endwhile
15:  \(assignment\_t\leftarrow\) receive assignment from matched \(SR\)
16:  \(t_{last}\gets t\)
17:  \(t\leftarrow\) minimal_SP_finish_time()
18:  \(pa\leftarrow\) part of \(assignment\_t\) completed in \(t-t_{last}\)
19:  schedule(\(pa\))
20:  Update \(w_{i}^{s(assignment\_t)}\) according to \(pa\)
21: until \(|\)neighbors\(|=0\) or \(PS_{i}=\emptyset\)
```

**Algorithm 3** DSRM: Service Provider \(SP_{i}\)

Algorithm 4 depicts the main procedure of the SRs. At first, similarly to the SPs, an SR initializes the times \(t\), \(t_{last}\) to \(0\) and its allocation for time \(t\) as empty (line 1). The algorithm will end when the SR has no more requested skills or when the SR no longer has SPs that can provide its requested skills (line 25). At each time \(t\) at which the simulation is performed, the SR receives the SPs' service proposals (line 3), calculates bids (as described in the following subsection) for each of its neighbors per skill they have that the SR requires, and sends service requests to the SPs (lines 5-8). The calculated bids are used to rank the SPs in the DGS algorithm. Then, the DGS algorithm is performed iteratively for each of the requested skills simultaneously, until each skill \(s\in RS_{j}\) has been matched with \(q_{j}^{s^{*}}\) SPs (defined by \(\min\{q_{j}^{s^{*}},\) number of SP neighbors with \(s\in PS_{i}\}\)), or there are no SPs left to match with (lines 11-16). The SR allocates services to be performed by the SPs by dispersing the load evenly between the matched SPs, considering their available providable skills (line 18). Lastly, the simulation time is updated and the SR's remaining requested skills are updated according to the workload that has been completed in the elapsed time (\(t-t_{last}\)) (lines 20-23).
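In centralized form, the matching building block can be sketched as a standard deferred-acceptance loop. The code below is our simplified rendering (it assumes, for concreteness, that the SPs act as the proposing side and that every ranking is complete); it is not the distributed DGS of [4].

```python
def deferred_acceptance(sp_prefs, skill_rank, capacity):
    """Many-to-one Gale-Shapley sketch: each SP proposes down its preference
    list; each requested skill (sr, s) tentatively keeps its best proposers.
    sp_prefs:   sp -> list of (sr, skill) keys, highest bid first
    skill_rank: key -> {sp: rank}, lower rank = higher bid
    capacity:   key -> q_j^{s*}, the number of SPs the skill may keep"""
    tentative = {key: [] for key in skill_rank}
    todo = list(sp_prefs)                      # SPs that still need a match
    while todo:
        sp = todo.pop()
        if not sp_prefs[sp]:
            continue                           # preference list exhausted
        key = sp_prefs[sp].pop(0)              # propose to best remaining option
        tentative[key].append(sp)
        tentative[key].sort(key=lambda p: skill_rank[key][p])
        if len(tentative[key]) > capacity[key]:
            todo.append(tentative[key].pop())  # worst proposer is rejected
    return tentative
```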
To find the minimal next simulation time (line 17 in Algorithm 3, line 20 in Algorithm 4), we use a simple distributed algorithm (inspired by (Brandt, 2001)). Each agent (whether an SP or an SR) holds a minimal time (for an SP it is initialized as the completion time of its allocation; for an SR, as the earliest completion time of its allocated SPs) and sends this time to its neighbors. Each agent receives its neighbors' messages and saves the minimal time. When the minimal time of an agent is revised, it is sent to its neighbors. This algorithm (which finds the next minimal simulation time) converges in \(O(d)\) iterations (\(d\) being the diameter of the communication graph), as the agent \(a\) that holds the true minimal time will surely never change it; therefore, at most, the message will have to reach the furthest agent from \(a\) in the graph.
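A synchronous-round rendering of this minimum-finding step is given below; it is a sketch of the idea (agent and message bookkeeping simplified), matching the \(O(d)\) convergence argument above.

```python
def next_simulation_time(neighbors, init_time):
    """Distributed minimum over a connected graph, simulated in rounds.
    neighbors: agent -> iterable of neighboring agents
    init_time: agent -> locally known earliest completion time
    Returns every agent's converged estimate of the global minimum."""
    estimate = dict(init_time)
    changed = set(estimate)                  # agents that must (re)send
    while changed:
        outgoing = {a: estimate[a] for a in changed}
        changed = set()
        for sender, t in outgoing.items():
            for receiver in neighbors[sender]:
                if t < estimate[receiver]:   # keep the smaller time and re-send
                    estimate[receiver] = t
                    changed.add(receiver)
    return estimate
```

Since the true minimum propagates one hop per round, the loop terminates after at most \(d\) rounds, as argued above.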
#### 3.2.1. DSRM Properties

In order to establish the following property, we first assume that there is a minimal amount of workload that an SP will perform when assigned to apply some skill in service of some SR. We denote this minimal fraction of workload by \(\epsilon\).

**Proposition 3.3**: _DSRM converges to a solution in a pseudo-polynomial number of iterations._

According to our assumption, the number of possible assignments to apply a skill for some SR is bounded by the number of SRs (\(m\)) times the number of skills (\(k\)) times the maximal number of fractions of workload (\(\frac{w}{\epsilon}\)), where \(w\) is the maximal workload requested for any skill. Since in every iteration of the algorithm at least one SP is assigned to perform some skill in order to serve some SR, and this assignment is not changed in later iterations, the number of iterations is bounded by \(n\cdot m\cdot k\cdot\frac{w}{\epsilon}\). Thus, the number of iterations before the algorithm converges is pseudo-polynomial.

**Proposition 3.4**: _The quality of the solutions found by DSRM as a function of the number of iterations is monotonically increasing._

The solution is built incrementally. Starting from an empty solution, at each iteration at least one assignment of an SP to perform a skill for an SR is added to the partial solution. Each such assignment has positive utility. Therefore, the quality of the solution (which is the sum of the utilities derived from each such assignment) increases with each iteration.

#### 3.2.2. Calculating DSRM Algorithm Bids

We propose two functions for calculating bidding values. **Simple**: assigns each SP neighbor a bid value, for each of its \(s\in RS_{j}\), that represents the utility the SR would derive if the SP were to provide as much of \(s\) as possible to the SR, disregarding other SPs' abilities and the \(Cap\) function. **Truncated**: assigns positive values only to a limited number of SPs for each of its \(s\in RS_{j}\); the number of SPs that will receive positive bids is equal to \(q_{j}^{s^{*}}\). The SR chooses the \(q_{j}^{s^{*}}\) SPs with the earliest expected start times and assigns to each of them a value that represents the marginal utility it should receive, taking into account the SPs that could arrive before it as well as the effect of the \(Cap\) function (more details can be found in the supplementary material).

## 4. Experimental Evaluation

To evaluate the proposed algorithms' performance, we created two different simulators. The first simulates the coordination between SPs and SRs of an abstract SOMAOP, with the objective of maximizing the global utility function. The second simulates a specific and realistic instance of SOMAOP: the coordination between medical units (SPs) and disaster sites (SRs) in a Mass Casualty Incident (MCI) setting, with the objective of minimizing the number of casualties with a low survival probability (Mayer, 2001). All results presented are averages over the 50 simulated problems solved by each of the algorithms.

Figures 1 and 2 present the global utility as a function of Non-Concurrent Logic Operations (NCLOs) (Kalton, 2002; Kalton, 2003; Kalton, 2004) for four scenarios in the abstract simulator and the MCI simulator, respectively. Each scenario has a different _magnitude_, or ratio of SP size to SR size. The scenarios in our experiments had 40 and 20 SPs and magnitudes of \(4:1\) and \(2:1\). In each scenario we compared five algorithms: RPA, DSRM using the simple bid function, DSRM using the truncated bid function, the Distributed Gale-Shapley College Admissions algorithm (DGS) as a one-shot schedule, and a centralized greedy algorithm. In the centralized greedy algorithm, pairs of an SP and a requested skill of an SR are selected and scheduled sequentially, ordered according to the maximal utility per workload. The algorithm continues until there are no more SRs with remaining requested skills that can be served by the SPs.

Figure 1. Abstract Simulator

### Comparing SOMAOP Algorithms

The results presented in Figure 1 show a clear and consistent advantage of the version of DSRM that uses the truncated bid function. In comparison to DSRM, RPA converges earlier, but to solutions with a lower global utility. The DGS algorithm converges fastest, since it only performs a single-shot schedule. The DSRM version that uses the simple bid function produced solutions with a lower utility on average than the results produced by the version that used the truncated bid. Moreover, its runtime was longer due to the larger number of iterations it performs in each execution of the Distributed Gale-Shapley algorithm. As the number of SPs increases, the runtime of DSRM increases (regardless of the bid function being used). In contrast, the convergence time of RPA is faster than that of DSRM regardless of the number of SPs. The solutions that DGS produces have lower utility than the solutions produced by DSRM with the simple bid function when there are 20 SPs, and higher when there are 40. It seems that this is the effect of DSRM's readjustment each time SPs are planning to end a service. When there are many SPs, such adjustments occur often. This results in SPs abandoning their services for higher bidders, meaning their time is wasted and thus the utility decreases. In these cases, DGS performs better despite creating a single-SR schedule for the SPs, as the scheduled services are completed at the earliest available time with no delay. DSRM with the truncated bid is not fazed by the number of SPs, as the bids are calculated in a way that is less sensitive to changes.
The centralized greedy algorithm is shown as a horizontal line describing the average final utility of the algorithm (as opposed to utility over NCLOs). This approach produced lower utility results in all of the problem sizes shown.

Figure 2 presents similar results for the algorithms solving MCI problems. Again, DSRM using the truncated bid yields the highest quality results, and RPA converges fast regardless of the number of SPs. However, DSRM with the simple bid converges much faster on this simulator. The reason is that there are strict ordering constraints between skills applied by SPs in this simulator, i.e., medical treatment must be given before evacuation to the hospital. Thus, optional outcomes are ruled out and the size of the solution space is much smaller than that of the abstract simulator. This is also the reason for the clear difference between the results of DSRM with the simple bid function and the results of DGS.

Figure 2. MCI Simulator

### Comparing SOMAOP and DCOP Algorithms

To compare SOMAOP algorithms with DCOP algorithms, we need to describe how an instance of SOMAOP is modeled as a DCOP (similar to how multi-agent task allocation problems were modeled as DCOPs in (Brands et al., 2017)). First, we note that in a DCOP there is only one type of agent, i.e., the DCOP agents are the SPs and there are no agents representing the SRs. Thus, in DCOP, each of the SP agents must be able to communicate with the other SP agents. An SP agent neighbors another SP agent if they can both provide the same skill to an SR. Additionally, besides holding variables and variable domains as they do in SOMAOP, the SPs must also hold the constraint information (the utility derived from different combinations of decisions regarding service providing). Moreover, many DCOP algorithms require the SP to correctly calculate the utility from an assignment to its variables; thus, it must also know the assignments of its neighboring SPs.

We define the _information coherence_ of a DCOP as the extent to which each agent is aware of the characteristics of the DCOP (i.e., other agents' assignments or the constraints of the problem). High coherence is associated with the agents having a more complete and intelligible awareness of the state of other agents in the DCOP and the constraints among them. Low coherence is associated with the agents having an incomplete and unintelligible awareness of the DCOP elements. One possible reason for low coherence is the attempt to preserve agents' privacy. Low coherence may also be associated with imperfect communication (Zhou et al., 2017; Wang et al., 2018). We distinguish two types of coherence, inspired by (Kang et al., 2018)'s definitions of privacy guarantees in DCOPs: **1)** Assignment coherence: the extent to which an agent is aware of the assignments chosen by other agents to their variables. **2)** Constraint coherence: the extent to which an agent is aware of the cost incurred by the constraints in the problem.

In SOMAOP, responsibilities are separated between the two sets of agents: only the SRs need to be able to evaluate possible solutions, and each SR needs to be aware only of the utility calculation regarding its own set of requested skills. This allows the SPs to focus only on their own current state. All an SP needs to know is the information regarding the utility derived from its own choice of assignments. This information is delivered to the SP by its neighboring SRs in SOMAOP. Thus, the required information coherence in SOMAOP is negligible for both forms of coherence defined.

Figure 3. Constraint coherence, Abstract Simulator
In DCOP algorithms, in order for the agents to be able to evaluate the quality of their value assignments, they must know all the constraints they are involved in. Thus, the required constraint coherence of standard DCOP algorithms is high. In terms of assignment coherence, the SOMAOP model eliminates the need for the SPs to know of other SPs' assignments, as the SRs are the only ones that must see the "bigger picture" of assignments in the system. Therefore, the SOMAOP algorithms do not require assignment coherence for the SPs. DCOP algorithms, on the other hand, require the agents to know all of their neighbors' assignments, i.e., the assignment coherence requirement in DCOP is also high. The high requirement for the coherence of the information held by the SP agents in DCOP violates the essential distributed properties which are preserved in SOMAOP. If each SP has access to all constraints regarding each of the neighboring SRs, as well as access to all of the other SPs' assignments, perhaps a centralized approach is equally appropriate.

Using the same scenarios as in the experiments presented above, we compare the SOMAOP algorithms - RPA and DSRM - to DCOP's DSA (Sundundar et al., 2012) and Max-Sum (Maswani et al., 2017; Wang et al., 2018). We begin with DSA: we used DSA-C with a probability \(p=0.7\) of replacing a value assignment. In each iteration, each agent selected a random variable \(x_{t}\) for which it considered whether to replace its assignment with the best alternative. To evaluate the relation between information coherence and the quality of solutions reported by DSA, we limited the information coherence of the agents performing the algorithm and compared the results to the outcomes of the SOMAOP algorithms. To limit information coherence, we define \(p_{c},p_{a}\in[0,1]\), which determine the amount of information an agent knows regarding its neighbors' constraints and assignments, respectively. For example, \(p_{c}=0.5\) translates to a 50% chance of an agent being aware of the cost incurred by a specific constraint in the problem.

Figures 3 and 4 present the results for constraint coherence (\(p_{a}=1\)) and assignment coherence (\(p_{c}=1\)), respectively. The results in Figure 3 show that for problems with a 4:1 ratio between SPs and SRs, DSA outperforms DSRM with the truncated bid function when the constraint coherence is above \(p_{c}=0.75\). In problems with a 2:1 ratio, even with \(p_{c}=1\), our algorithms provide a better average final global utility. The results presented in Figure 4 show similar outcomes. For problems with a 4:1 ratio between SPs and SRs, DSA outperforms DSRM with the truncated bid function only when the assignment coherence is above \(p_{a}=0.75\) when there are 20 SPs, and above \(p_{a}=0.5\) for 40 SPs. In problems with a 2:1 ratio, even with \(p_{a}=1\), our algorithms provide a better average final global utility. Similar results were also obtained in the MCI simulator. These results show that although the DCOP framework can be used to solve SOMAOPs, it requires high information coherence from the agents in order to achieve results similar to (or worse than) those of the SOMAOP algorithms.

Figure 4. Assignment Coherence, Abstract Simulator

The Max-Sum algorithm operates on a bipartite factor graph (Maswani et al., 2017; Wang et al., 2018). This characteristic makes Max-Sum seem like a natural choice for solving service-oriented multi-agent problems.
However, when used to solve problems whose inherent structure is that of a bipartite graph (including service providing and service requesting agents), the algorithm fails to overcome its inherent symmetry and performs poorly (Kumar et al., 2018; Wang et al., 2018). Additionally, since the constraints held by SRs in SOMAOP can involve a large number of SPs, i.e., they are constraints with high arity, the function nodes in Max-Sum must use exponential runtime in order to generate messages. To see how well Max-Sum can handle instances of SOMAOP, the algorithm was implemented on the abstract simulator and compared with our proposed algorithms. Here too, we implemented an iterative approach (which significantly outperformed a single-shot approach) in which the solution was built incrementally by performing Max-Sum in each iteration in order to allow the SPs to select their next action. Figure 5 presents the average quality of the solutions produced by the algorithms, solving 30 problems with 20 SPs and 5 SRs. The exponential runtime of Max-Sum prevented us from experimenting with larger problems. The results indicate that Max-Sum produces solutions with far lower quality than the SOMAOP algorithms.

Figure 5. Max-Sum Comparison in the Abstract Simulator

## 5. Conclusions

Many realistic distributed problems include service requesters and service providers. In the last two decades, distributed optimization problems have been represented and solved using the DCOP model and algorithms, which are not suitable for representing the two types of agents in service oriented multi-agent optimization problems. Additionally, they require high information coherence, which is often unwanted or simply unrealistic in the environments of real-life problems. We proposed SOMAOP, a novel model for representing such problems, and algorithms for solving them. The algorithms use well-studied allocation methods as building blocks, and update the agents' estimations (bids) of the utility from the available services following each iteration. Our empirical results demonstrate the advantages of the proposed iterative processes for solving this type of problem.

## Acknowledgments

This work was supported in part by Israel Science Foundation (ISF) grants 1965/20 and 3152/20, as well as funding from the Bi-national Science Foundation (BSF).
2309.09142
Performance of Graph Neural Networks for Point Cloud Applications
Graph Neural Networks (GNNs) have gained significant momentum recently due to their capability to learn on unstructured graph data. Dynamic GNNs (DGNNs) are the current state-of-the-art for point cloud applications; such applications (viz. autonomous driving) require real-time processing at the edge with tight latency and memory constraints. Conducting performance analysis on such DGNNs, thus, becomes a crucial task to evaluate network suitability. This paper presents a profiling analysis of EdgeConv-based DGNNs applied to point cloud inputs. We assess their inference performance in terms of end-to-end latency and memory consumption on state-of-the-art CPU and GPU platforms. The EdgeConv layer has two stages: (1) dynamic graph generation using k-Nearest Neighbors (kNN) and (2) node feature updation. The addition of dynamic graph generation via kNN in each (EdgeConv) layer enhances network performance compared to networks that work with the same static graph in each layer; such performance enhancement comes, however, at the added computational cost associated with the dynamic graph generation stage (via the kNN algorithm). Understanding its costs is essential for identifying the performance bottleneck and exploring potential avenues for hardware acceleration. To this end, this paper aims to shed light on the performance characteristics of EdgeConv-based DGNNs for point cloud inputs. Our performance analysis on a state-of-the-art EdgeConv network for classification shows that the dynamic graph construction via kNN takes up upwards of 95% of network latency on the GPU and almost 90% on the CPU. Moreover, we propose a quasi-Dynamic Graph Neural Network (qDGNN) that halts dynamic graph updates after a specific depth within the network to significantly reduce the latency on both CPU and GPU whilst matching the original network's inference accuracy.
Dhruv Parikh, Bingyi Zhang, Rajgopal Kannan, Viktor Prasanna, Carl Busart
2023-09-17T03:05:13Z
http://arxiv.org/abs/2309.09142v1
# Performance of Graph Neural Networks for Point Cloud Applications

###### Abstract

Graph Neural Networks (GNNs) have gained significant momentum recently due to their capability to learn on unstructured graph data. Dynamic GNNs (DGNNs) are the current state-of-the-art for point cloud applications; such applications (viz. autonomous driving) require real-time processing at the edge with tight latency and memory constraints. Conducting performance analysis on such DGNNs, thus, becomes a crucial task to evaluate network suitability. This paper presents a profiling analysis of EdgeConv-based DGNNs applied to point cloud inputs. We assess their inference performance in terms of end-to-end latency and memory consumption on state-of-the-art CPU and GPU platforms. The EdgeConv layer has two stages: (1) dynamic graph generation using \(k\)-Nearest Neighbors (\(k\)NN) and (2) node feature updation. The addition of dynamic graph generation via \(k\)NN in each (EdgeConv) layer enhances network performance compared to networks that work with the same static graph in each layer; such performance enhancement comes, however, at the added computational cost associated with the dynamic graph generation stage (via the \(k\)NN algorithm). Understanding its costs is essential for identifying the performance bottleneck and exploring potential avenues for hardware acceleration. To this end, this paper aims to shed light on the performance characteristics of EdgeConv-based DGNNs for point cloud inputs. Our performance analysis on a state-of-the-art EdgeConv network for classification shows that the dynamic graph construction via \(k\)NN takes up upwards of \(95\%\) of network latency on the GPU and almost \(90\%\) on the CPU. Moreover, we propose a quasi-Dynamic Graph Neural Network (qDGNN) that halts dynamic graph updates after a specific depth within the network to significantly reduce the latency on both CPU and GPU whilst matching the original network's inference accuracy.

Graph neural network, point cloud, \(k\)-nearest neighbors, dynamic graph construction, performance profiling

## I Introduction

Graphs are effective data structures for representing intricate relationships (edges) among entities (nodes) with a high degree of interpretability. This has led to the widespread adoption of graph theory in various domains. Notably, Graph Neural Networks (GNNs) [1] have demonstrated remarkable success in addressing both conventional tasks like computer vision [2, 3] and natural language processing [4, 5], as well as non-traditional tasks such as protein interface prediction [6] and combinatorial optimization [7]. This versatility has made GNNs an integral part of deep learning methodologies, as evidenced by their numerous applications across diverse problem domains [1, 8].

A point cloud is an unstructured collection of raw data points, where each point represents a specific location associated with an object or shape, typically defined within a coordinate system (such as Cartesian or spherical). The ability of a point cloud to capture 3D environments makes it crucial for numerous scene understanding tasks [9, 10] and applications across various domains. For instance, point clouds play a vital role in autonomous driving [11, 12], virtual reality [13], augmented reality [14], the construction industry [15], robotics, computer graphics, and many more.
The availability of affordable and accessible point cloud acquisition systems, such as LIDAR scanners and RGBD cameras [16], has further emphasized the importance of efficient learning and processing techniques for point clouds. Prior to the advent of deep learning-based methods, learning on point clouds involved constructing hand-crafted features [17]. With the introduction of deep learning, the techniques applied to point clouds can be broadly classified into two categories. These two categories are differentiated based on the pre-processing steps performed on the point clouds prior to network input [18]: (1) _Structured-grid based networks:_ These networks preprocess point clouds into a structured input format for the deep neural network. Structuring is typically achieved by generating representative views from the point clouds [19, 20, 21] or by voxelizing the point clouds into 3D grids [22]. (2) _Raw point cloud based networks:_ In this category, networks operate directly on the raw point clouds with minimal to no preprocessing. One approach involves passing multi-layer perceptrons (MLPs) through individual point features to learn local spatial relationships [23, 24, 25]. Another approach involves systematically constructing graphs using point features and applying Graph Neural Networks (GNNs) to learn labels [26, 27, 28].

Structured-grid based networks suffer from drawbacks associated with point cloud transformation. Transforming point clouds into voxels or image views can be computationally expensive and result in bulky data that is difficult to handle. Moreover, these transformations introduce various errors, such as quantization errors, which can impact the inherent properties of point clouds [23]. To address these issues, raw point cloud based networks have emerged as a solution. Among these networks, EdgeConv-based networks [26, 29, 30] have become state-of-the-art in various point cloud-related tasks. They are an advancement over the basic PointNet [23] style architectures that directly operate on point features. The EdgeConv layer excels in learning local features and relationships within point clouds while maintaining permutation invariance [31]. The dynamic graph construction in the learned feature space allows EdgeConv to capture semantic similarities between points, regardless of their geometric separation. By leveraging these strengths, EdgeConv-based networks have shown remarkable performance in tasks involving point clouds, overcoming the limitations associated with structured-grid based approaches.

Point cloud processing finds significant application in autonomous vehicles, which serve as a prominent use case for edge computing. These applications operate under stringent latency and memory constraints [32]. In intelligent visual systems within autonomous vehicles, point cloud processing typically constitutes just one stage within a multi-stage compute pipeline. As a result, the imposed constraints on latency and memory become even more critical and imperative to address [33]. Analyzing and profiling the performance of networks that process point clouds is a crucial task, particularly considering the prominence of EdgeConv-based networks in this domain. In this work, we make the following contributions:

* **Latency analysis:** We perform an in-depth analysis of end-to-end layer-wise latency for EdgeConv-based networks used in classification and segmentation tasks on both state-of-the-art CPU and GPU platforms.
* **Breakdown analysis:** Given the two-stage operation of the EdgeConv layer involving (1) dynamic graph construction and (2) GNN-based node feature updation, we perform a breakdown analysis between \(k\)NN graph construction and GNN-based feature updating at each layer and across different layers in EdgeConv networks.

* **Effects of varying \(k\):** EdgeConv networks dynamically construct a graph from a point cloud using the \(k\)-nearest neighbors (\(k\)NN) algorithm. We study the effects of varying the value of \(k\) from its optimal value, which is determined through a validation set. This analysis examines the impact of varying \(k\) on the network's inference accuracy, inference latency, and memory consumption.

* **Quasi-Dynamic GNNs:** Dynamic graph construction after each layer in a Dynamic GNN (DGNN) improves its performance compared to static GNN counterparts. We investigate the extent to which performance improvement is affected when employing a quasi-Dynamic strategy, where the graph is made static towards the end of the network.

* **Memory consumption:** We analyze the memory consumption of EdgeConv networks on both CPU and GPU platforms.

* **Bottleneck analysis:** By identifying the performance bottleneck in the aforementioned networks, we suggest potential research directions that could help mitigate such bottlenecks and improve overall performance.

* **Hardware acceleration opportunities:** We discuss the potential for hardware acceleration of EdgeConv-based networks on FPGA devices, which has garnered significant research interest due to its promising benefits.

Our aim is to deepen our understanding of EdgeConv-based networks' performance characteristics when processing point clouds and provide insights into optimization opportunities and hardware acceleration possibilities. Section II briefly introduces GNNs and their application to point clouds. Section III describes the networks and datasets used for the experiments, along with the computing platforms on which the experiments were performed. Results are analyzed in Section IV. Discussion and conclusion follow in Sections V and VI, respectively.

## II Background

### _Graph Neural Networks (GNNs)_

Graphs \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) are defined via a set of vertices (nodes) \(\mathcal{V}\) and a set of edges (links) connecting these vertices, \(\mathcal{E}=\{(j,i):j,i\in\mathcal{V}\text{ and a link exists from }j\to i\}\). Edges typically have directional context, symbolized via the ordered pairs; an undirected edge between nodes \(i\) and \(j\) is often represented via two directed edges: \((i,j)\) and \((j,i)\). \(\mathbf{A}\in\mathbb{R}^{n\times n}\) is the adjacency matrix for a graph with \(n\) nodes, representing the above edges and their weights (for a weighted graph). A graph can have features associated with both its nodes and edges - additionally, a graph may even have global graph-level attributes. Typical learning tasks on graphs occur at either the node level (viz. node classification) [34], the edge level (viz. link prediction) [35] or the graph level [36]. Neural message passing is a mechanism in GNNs whereby information is passed (shared) across nodes and edges within the graph to update node embeddings via a set of learnable parameters [37]. GNNs employing this message passing framework are called Message Passing Neural Networks (MPNNs) [37].
For a node \(i\) with a node feature vector \(\mathbf{x_{i}}\) and a neighbor set \(\mathcal{N}(i)\), the general formulation for message passing can be described by Equation 1 [37] (illustrated in Figure 1),

\[\mathbf{x_{i}}^{\prime}=\Psi_{\Theta}\Big(\mathbf{x_{i}},\sum_{j\in\mathcal{N}(i)}\mathcal{M}_{\Phi}(\mathbf{x_{i}},\mathbf{x_{j}},\mathbf{e_{ji}})\Big). \tag{1}\]

The above equation can be decomposed into the following stages:

* **Message generation.** A message is constructed between a node \(i\) and its neighbor \(j\) using node-level features \(\mathbf{x_{i}}\) and \(\mathbf{x_{j}}\) and the edge-level feature \(\mathbf{e_{ji}}\) as \(\mathcal{M}_{\Phi}(\mathbf{x_{i}},\mathbf{x_{j}},\mathbf{e_{ji}})\).

* **Aggregation.** The constructed messages from all of the node's neighbors are aggregated via the aggregation function \(\sum_{j\in\mathcal{N}(i)}\); the aggregation function \(\sum\) is typically permutation invariant for point cloud applications.

* **Updation.** Finally, the aggregated message along with \(\mathbf{x_{i}}\) is used to learn the feature update for node \(i\), \(\mathbf{x_{i}}^{\prime}\), via the function \(\Psi_{\Theta}\).

Fig. 1: The message passing mechanism employed in GNNs.

For a given layer, such a message-aggregate-update paradigm is applied to all the nodes of the graph, with parameters \((\Theta,\Phi)\) shared across all the nodes within a layer. The functions \(\mathcal{M}_{\Phi}\) and \(\Psi_{\Theta}\) are typically multi-layer perceptrons (\(\mathcal{MLP}\)s). Such GNN layers, employing the message passing framework to update the node features as above, are central to GNNs such as the Graph Convolutional Network (GCN) [1], GraphSAGE [38], the Graph Isomorphism Network (GIN) [39], the Graph Attention Network (GAT) [40], Principal Neighborhood Aggregation (PNA) [41], etc. The EdgeConv layer [26] also utilizes this message passing paradigm - however, unlike the above networks, each EdgeConv layer first performs dynamic graph construction via a \(k\)-nearest neighbor (\(k\)NN) algorithm to construct a \(k\)NN graph; a \(k\)NN graph is a directed graph in which each node is connected to its \(k\) nearest neighboring nodes via edges (Figure 2). Then, the EdgeConv layer performs message passing within this \(k\)NN graph to update the node embeddings.

### _GNNs for Point Clouds_

The authors in PointNet [23] introduced the notion of ingesting raw, unordered point clouds to learn directly on the associated point-level features. This network, at its core, passes each point-level feature vector through a shared \(\mathcal{MLP}\) to update the point features. In the message passing framework,

\[\mathbf{x_{i}}^{\prime}=\Psi_{\Theta}(\mathbf{x_{i}}) \tag{2}\]

Essentially, no messages are passed and no aggregations occur - it directly updates all the node embeddings. The final layer is a global-level max pool layer which generates a global feature vector for graph-level tasks. Due to the shared \(\mathcal{MLP}\) and global max pool layer, such a network is fully invariant to permutations in the ordering of the \(n\) input points. This permutation invariance is a key characteristic of GNNs, despite PointNet not traditionally being a graph neural network. A class of PointNet-derivative networks [42, 43, 44, 45] uses related approaches to learn directly from the point features. The EdgeConv layer uses both message passing and dynamic graph construction, as shown in Figure 2, to reach state-of-the-art results for point cloud applications.
The message passing paradigm in EdgeConv can be described by (3)-(5), \[{\mathbf{x_{i}}^{\prime}}=\sum_{j\in\mathcal{N}(i)}\mathcal{M}_{\Phi}(\mathbf{x_{i}}, \mathbf{x_{j}}-\mathbf{x_{i}}) \tag{3}\] \[\sum\rightarrow\max(.) \tag{4}\] \[\mathcal{M}_{\Phi}(\mathbf{x},\mathbf{y})\rightarrow\mathcal{MLP}_{\Phi}(\mathbf{x}\,||\, \mathbf{y}) \tag{5}\] where the \(\max(.)\) in (4) is channel-wise along the nodes and the \(||\) in (5) is a concatenation operation; \(\mathcal{MLP}_{\Phi}\) is a multi-layer perceptron (parameterized by \(\Phi\)) with a ReLU non-linearity. The inclusion of the edge feature \(\mathbf{x_{j}}-\mathbf{x_{i}}\) adds local context to the updated node embedding, while the node-level feature \(\mathbf{x_{i}}\) helps the network retain global information. A single EdgeConv layer takes in an input tensor \(\mathbf{X}\in\mathbb{R}^{n\times c}\) for a point cloud with \(n\) points, each point represented by a vector node embedding of length \(c\). The output of \(k\)NN graph construction on \(\mathbf{X}\) is a tensor \(\mathbf{X}^{\prime}\in\mathbb{R}^{n\times k\times c}\) representing, for each node, its \(k\) neighboring nodes and their node embeddings (feature vectors). Before \(\mathbf{X}^{\prime}\) is passed through an \(\mathcal{MLP}\) layer, each node's feature vector is subtracted from those of its neighbors (\(\mathbf{x_{j}}-\mathbf{x_{i}}\)), and the node's own features are concatenated to the result (\(\mathbf{x_{i}}\,||\,\mathbf{x_{j}}-\mathbf{x_{i}}\)). The tensor thus obtained, \(\mathbf{X_{in}^{\prime}}\in\mathbb{R}^{n\times k\times 2c}\), is passed through \(\mathcal{MLP}_{\Phi}\{2c,\,a_{1},\,a_{2},\,a_{3},\,...,\,a_{m}\}\), an \(m\)-layer \(\mathcal{MLP}\) with ReLU activations, to generate \(\mathbf{X_{out}^{\prime}}\in\mathbb{R}^{n\times k\times a_{m}}\), which is finally aggregated (via \(\max\)) into the final output of the EdgeConv layer, \(\mathbf{Y}\in\mathbb{R}^{n\times a_{m}}\). As a notational example, \(\mathcal{MLP}_{\Phi}\{1024,512,256\}\) is a \(2\)-layer perceptron: the number of input channels to the first layer is \(1024\), the number of output channels of the first layer is \(512\), and the number of output channels of the final (second) layer is \(256\). ## III Experimental Setting ### _Platform Details_ The performance analysis for EdgeConv is conducted on state-of-the-art GPU and CPU platforms (see Table I). The GPU used, for both training and inference, is an NVIDIA RTX A6000, which has \(10,752\) NVIDIA Ampere architecture CUDA cores. The CPU utilized for inference is an AMD Ryzen Threadripper 3990x with 64 CPU cores. Additionally, we use the PyTorch [46] and PyTorch Geometric [47] libraries to facilitate training and inference on the above platforms - no additional kernel-level optimization is performed. ### _Networks_ The base network that we utilize in the experiments is shown in Figure 3. This network follows the network setting in [26] that achieves state-of-the-art results on the ModelNet40 [48] graph-level classification dataset. The network comprises four (dynamic) EdgeConv layers. Each layer constructs a \(k\)-nearest neighbor graph based on the current (latest) node embeddings before performing message passing on this graph via a single-layered \(\mathcal{MLP}\) to update the embeddings.
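A minimal sketch of such a layer, written out explicitly so that the two stages of (3)-(5) are visible, is given below. PyTorch Geometric also ships a built-in `DynamicEdgeConv` module; the hand-rolled version here is for exposition only, and the channel widths in the usage example are illustrative assumptions.

```python
import torch
from torch import nn
from torch_geometric.nn import MessagePassing, knn_graph


class DynamicEdgeConvSketch(MessagePassing):
    """EdgeConv with dynamic kNN graph construction, following Eqs. (3)-(5)."""

    def __init__(self, in_channels: int, out_channels: int, k: int):
        super().__init__(aggr="max")  # Eq. (4): channel-wise max aggregation
        self.k = k
        # Eq. (5): MLP over the concatenation x_i || (x_j - x_i),
        # hence the 2 * in_channels input width.
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_channels, out_channels),
            nn.BatchNorm1d(out_channels),
            nn.ReLU(),
        )

    def forward(self, x, batch=None):
        # Stage 1: dynamic graph construction from the *current* embeddings.
        edge_index = knn_graph(x, k=self.k, batch=batch, loop=False)
        # Stage 2: message passing on the kNN graph.
        return self.propagate(edge_index, x=x)

    def message(self, x_i, x_j):
        # Eq. (3): local edge feature (x_j - x_i) plus global feature x_i.
        return self.mlp(torch.cat([x_i, x_j - x_i], dim=-1))


# Toy usage: n = 1024 points with c = 3 input features, k = 20.
x = torch.randn(1024, 3)
layer = DynamicEdgeConvSketch(in_channels=3, out_channels=64, k=20)
y = layer(x)  # shape: (1024, 64)
```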
\begin{table} \begin{tabular}{c|c c} \hline **Platforms** & CPU & GPU \\ \hline \hline Platform & AMD Threadripper 3990x & Nvidia RTX A6000 \\ Platform Technology & TSMC 7 nm & TSMC 7 nm \\ Frequency & 2.90 GHz & 1.8 GHz \\ Peak Performance & 3.7 TFLOPS & 38.7 TFLOPS \\ On-chip Memory & 256 MB L3 cache & 6 MB L2 cache \\ Memory Bandwidth & 107 GB/s & 768 GB/s \\ \hline \end{tabular} \end{table} TABLE I: Specifications of platforms Each \(\mathcal{MLP}\) in the EdgeConv layer utilizes a ReLU activation and includes a BatchNorm layer. The final \(\mathcal{MLP}\) uses a dropout layer instead of BatchNorm. We train the above network for a range of \(k\) values (\(5\), \(10\), \(15\), \(20\), \(25\), \(30\)) for the EdgeConv layers. Additionally, we also train two quasi-dynamic variants of such Dynamic GNNs by making static the last and the last two EdgeConv layers, respectively. Making an EdgeConv layer static, here, refers to removing the dynamic-graph-generating \(k\)NN block from the layer (a code sketch of this conversion is given alongside the quasi-DGNN results below). In such quasi-DGNNs, we perform dynamic graph construction in each of the initial few EdgeConv layers of the network, which form the dynamic portion of the network. The latter EdgeConv layers (with the \(k\)NN block removed) do not reconstruct the graph again - this static portion of the network directly uses the last graph that was constructed in the dynamic portion of the network. We thus refer to dynamic EdgeConv layers (with the \(k\)NN block) as Dynamic EdgeConv (DEC) and non-dynamic ones (without the \(k\)NN block) as simply EdgeConv (EC). Specifically, the last and the last two DEC layers of the network in Figure 3 are converted to EC layers to analyze the performance of quasi-DGNNs. All the above networks are trained for a total of \(100\) epochs on the entire ModelNet40 training dataset - we use the Adam optimizer [49] with a learning rate of \(0.001\) and a step learning-rate scheduler with a gamma of \(0.5\) and a step size of \(20\). ### _Datasets_ We utilize the ModelNet40 [48] dataset, which contains \(12,311\) graphs. We split the entire dataset into 80% for training and 20% for testing. The dataset comprises 3D point clouds in the form of CAD models from 40 categories - we pre-process the dataset by centering it and scaling it to a range of \((-1,1)\). Additionally, we sample \(1024\) points during both training and testing. We test using a fixed random seed for reproducibility and equivalence across the tested networks. ### _Performance Metrics_ We utilize classification accuracy as the performance metric. Since point cloud applications are usually real-time, we give importance to latency and memory consumption figures while assessing the networks' performance. To this end, we perform an exhaustive analysis of how latency and memory usage are distributed across a DGNN and within a DEC layer to identify valid performance trade-offs and bottlenecks. ## IV Results and Analysis ### _Baseline Model Latency Analysis_ The baseline latency analysis is performed on a fully dynamic network (with all EdgeConv layers as Dynamic EdgeConv) (Figure 3) with \(k=20\). The value of \(k\) is obtained by cross-validation over the set of values \((5,10,15,20,25,30)\). For cross-validation, we split the training data into a train and a validation set. Once a value of \(k\) is selected, we re-train the network over the entire training data.
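The \(k\)NN-versus-update breakdowns reported below can be collected with a simple timing harness; the following is a minimal sketch, assuming a CUDA device and a `MessagePassing`-style layer such as the `DynamicEdgeConvSketch` above. It is a rough measurement aid under those assumptions, not the exact instrumentation used for the reported figures.

```python
import time
import torch
from torch_geometric.nn import knn_graph


def _sync(device: str):
    # CUDA kernels launch asynchronously; synchronize for honest timings.
    if device.startswith("cuda"):
        torch.cuda.synchronize()


def time_knn_vs_update(layer, x, k: int, device: str = "cuda", reps: int = 100):
    """Separate (roughly) kNN-graph-construction time from message-passing
    (feature update) time for one EdgeConv-style layer, in milliseconds."""
    x = x.to(device)
    layer = layer.to(device).eval()
    _sync(device)
    t0 = time.perf_counter()
    for _ in range(reps):
        edge_index = knn_graph(x, k=k)          # stage 1: graph construction
    _sync(device)
    knn_ms = (time.perf_counter() - t0) / reps * 1e3
    t0 = time.perf_counter()
    with torch.no_grad():
        for _ in range(reps):
            layer.propagate(edge_index, x=x)    # stage 2: feature update
    _sync(device)
    upd_ms = (time.perf_counter() - t0) / reps * 1e3
    return knn_ms, upd_ms
```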
Figures 4(a)(i) and 4(a)(ii) show the distribution of latency across all the layers of the network shown in Figure 3 on GPU and CPU, respectively. Figures 4(b)(i) and 4(b)(ii) contain the per-layer latency analysis and the latency distribution within the DEC layers for GPU and CPU, respectively. These figures indicate that the dynamic graph construction via the \(k\)NN algorithm is the bottleneck driving down the network's (and the DEC layers') performance. ### _Analysis Under Varying \(k\)_ We analyze the effect of varying the number of nearest neighbors on the performance of the DEC layer and the point cloud classification model - we first train the network in Figure 3 for different values of \(k\) associated with its DEC layers and then perform inference latency and accuracy analysis. As seen in Figure 5(a), the performance drops as we move away from the optimal \(k\); this drop is sharper as we move towards the origin (towards smaller and smaller \(k\)'s). The network latency, for all \(k\) values, on both CPU and GPU, is again dominated by the \(k\)NN algorithm. ### _Quasi-Dynamic GNN (qDGNN)_ Dynamic GNNs improve upon the performance of basic GNNs - this is markedly so for point cloud applications, where DGNNs have the added capability of being able to identify and learn from semantically similar points irrespective of their geometric similarity (distance). Such DGNNs, however, as already seen, have a large computational cost linked to the dynamic graph construction operation, which effectively bottlenecks the network's performance. Fig. 2: Illustration of the EdgeConv layer. A point cloud (blue) is the input to the EdgeConv layer. The EdgeConv layer uses \(k\)NN to generate a directed \(k\)NN graph. The message passing paradigm, applied to this graph, uses a node's neighbors to update node embeddings (message passing: red lines, node feature updates: green dots). The output of the EdgeConv layer is the point cloud with updated node features (green). Figure 6(a) clearly shows that making static the latter dynamic layers does not affect the network's accuracy - the corresponding performance gains associated with such quasi-Dynamic GNNs are indicated in Figures 6(b), 6(c)(i), and 6(c)(ii), which show a drastic speed-up when compared against the fully dynamic baseline. This suggests that we can reach state-of-the-art performance, whilst being fast enough to operate at the edge, by utilizing a combination of dynamic and static EdgeConv layers for point clouds (a code sketch of this DEC-to-EC conversion is given below). ### _Memory Consumption_ As an example, the memory consumed for the \(k\)NN operation is plotted in Figure 7; however, it is important to note that, from a memory consumption point of view, the \(k\)NN graph construction operation is not a bottleneck - several other operators take up a much larger memory footprint compared to \(k\)NN. The linear nature of the curves is also self-explanatory: an increase in \(k\) leads to a proportional increase in the memory required to serve the \(k\)NN operator. Meanwhile, removing \(k\)NN from the latter layers of the network (DEC \(\rightarrow\) EC) directly leads to a proportional reduction in memory consumption. Fig. 4: Latency analysis of the baseline model. 4(a)(i) and 4(a)(ii) show the layer-wise latency (per-graph instance inference level) on GPU and CPU. DEC_2, for example, indicates the second DEC layer in Figure 3. 4(b)(i) and 4(b)(ii) analyze the individual DEC layers (comparing \(k\)NN vs update latency for each). Note that the EdgeConv layer refers to a DynamicEdgeConv layer.
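The DEC-to-EC conversion behind these quasi-dynamic variants amounts to dropping the \(k\)NN block from the latter layers and reusing the last dynamically built graph. A minimal sketch, building on the `DynamicEdgeConvSketch` class above, is shown below; the layer widths are illustrative assumptions, and for brevity the last dynamic graph is rebuilt once here rather than threaded out of the final dynamic layer.

```python
import torch
from torch import nn
from torch_geometric.nn import MessagePassing, knn_graph


class StaticEdgeConvSketch(MessagePassing):
    """EdgeConv (EC) without the kNN block: reuses a precomputed graph."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__(aggr="max")
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_channels, out_channels),
            nn.BatchNorm1d(out_channels),
            nn.ReLU(),
        )

    def forward(self, x, edge_index):
        return self.propagate(edge_index, x=x)

    def message(self, x_i, x_j):
        return self.mlp(torch.cat([x_i, x_j - x_i], dim=-1))


class QuasiDGNNSketch(nn.Module):
    """Dynamic portion (DEC layers) followed by a static portion (EC layers)."""

    def __init__(self, k: int = 20):
        super().__init__()
        self.k = k
        self.dec1 = DynamicEdgeConvSketch(3, 64, k)
        self.dec2 = DynamicEdgeConvSketch(64, 64, k)
        self.ec3 = StaticEdgeConvSketch(64, 128)    # kNN block removed
        self.ec4 = StaticEdgeConvSketch(128, 256)   # kNN block removed

    def forward(self, x, batch=None):
        h = self.dec1(x, batch)
        h = self.dec2(h, batch)
        # Last graph built from the dynamic portion, reused by the EC layers.
        edge_index = knn_graph(h, k=self.k, batch=batch, loop=False)
        h = self.ec3(h, edge_index)
        return self.ec4(h, edge_index)
```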
Fig. 5: Accuracy and latency analysis under varying \(k\) values. 5(a) shows the variation of accuracy with \(k\). 5(b), 5(c)(i) and 5(c)(ii) show the latency variation and distribution, respectively, versus \(k\). The \(update\) legend in 5(c)(i) and 5(c)(ii) is related to the message passing layer of the Dynamic EdgeConv layer. Fig. 3: Network utilized in the experiment (for the classification task). The input point cloud of shape \(n\times 3\) (\(n\) nodes with each node having \(3\) features) is transformed by four successive Dynamic EdgeConv (DEC) layers. The "Dynamic" prefix underscores the graph construction that occurs in each EdgeConv layer via the \(k\)NN algorithm. The output of each DEC layer is concatenated to shape \(n\times 320\) before being passed through a linear transformation that yields an \(n\times 1024\)-shaped output. A global max pool layer reduces this output to a vector of size \(1024\), which is then passed through an MLP and a log-softmax layer to finally output the class log-probability vector. ## V Discussion The experimental results show the significant bottleneck introduced by the dynamic graph construction (\(k\)NN) layer in point cloud processing networks. The \(k\)NN operation occupies up to \(95\%\) of the (base) network's latency on GPU and close to \(90\%\) on CPU. Despite this, such a layer is crucial in boosting the network's performance to enable a wide array of complex real-world edge applications. In this paper, we shed light on this problem whilst also providing a simple solution - quasi-Dynamic Graph Neural Networks (qDGNNs). Such networks significantly improve the latency of the network (a reduction in latency of up to \(58\%\) on GPU and up to \(69\%\) on CPU) whilst maintaining the same level of performance as is demonstrated by DGNNs. Accelerating \(k\)NN layers on FPGA and deploying the DGNN on an FPGA platform is also a potential solution that has attracted significant research interest recently [50, 51]; optimizing \(k\)NN algorithms [52] is also an area that has been studied with much interest. ## VI Conclusions and Future Work In this paper, we examined the latency bottleneck associated with the DynamicEdgeConv (EdgeConv) layer while characterizing its inference performance - optimizing the dynamic graph construction stage in such networks remains an open problem and a promising research avenue due to its broad applicability. ## Acknowledgement and Statement This work is supported by the DEVCOM Army Research Lab (ARL) under grant W911NF2220159. **Distribution Statement A**: Approved for public release. Distribution is unlimited.
2309.12563
Passive Reflection Codebook Design for IRS-Integrated Access Point
Intelligent reflecting surface (IRS) has emerged as a promising technique to extend the wireless signal coverage of access point (AP) and improve the communication performance cost-effectively. In order to reduce the path-loss of the cascaded user-IRS-AP channels, the IRS-integrated AP architecture has been proposed to deploy the IRSs and the antenna array of the AP within the same antenna radome. To reduce the pilot overhead for estimating all IRS-involved channels, in this paper, we propose a novel codebook-based IRS reflection design for the IRS-integrated AP to enhance the coverage performance in a given area. In particular, the codebook consisting of a small number of codewords is designed offline by employing an efficient sector division strategy based on the azimuth angle. To ensure the performance of each sector, we optimize its corresponding codeword for IRS reflection pattern to maximize the sector-min-average-effective-channel-power (SMAECP) by applying the alternating optimization (AO) and semidefinite relaxation (SDR) methods. With the designed codebook, the AP performs the IRS reflection training by sequentially applying all codewords and selects the one achieving the best communication performance for data transmission. Numerical results show that our proposed codebook design can enhance the average channel power of the whole coverage area, as compared to the system without IRS. Moreover, our proposed codebook-based IRS reflection design is shown to achieve significant performance gain over other benchmark schemes in both single-user and multi-user transmissions.
Yuwei Huang, Lipeng Zhu, Rui Zhang
2023-09-22T01:24:21Z
http://arxiv.org/abs/2309.12563v2
# Passive Reflection Codebook Design for IRS-Integrated Access Point ###### Abstract Intelligent reflecting surface (IRS) has emerged as a promising technique to control the wireless propagation environment, improving the communication performance cost-effectively and extending the wireless signal coverage of the access point (AP). In order to reduce the path loss of the cascaded user-IRS-AP channels, the IRS-integrated AP architecture has been proposed to deploy the antenna array of the AP and the IRSs within the same antenna radome. To reduce the pilot overhead for estimating all IRS-involved channels, in this paper, we propose a novel codebook-based IRS reflection design for the IRS-integrated AP to enhance the coverage performance in a given area. In particular, the codebook, consisting of a small number of codewords, is designed offline by employing an efficient sector division strategy based on the azimuth angle. To ensure the performance of each sector, we optimize its corresponding codeword for the IRS reflection pattern to maximize the sector-min-average-effective-channel-power (SMAECP) by applying the alternating optimization (AO) and semidefinite relaxation (SDR) methods. With the designed codebook, the AP performs the IRS reflection training by sequentially applying all codewords and selects the one achieving the best communication performance for data transmission. Numerical results show that our proposed codebook design can enhance the average channel power of the whole coverage area, as compared to the system without IRS. Moreover, the proposed codebook-based IRS reflection design is compared with several benchmark schemes, achieving significant performance gains in both single-user and multi-user transmissions. Intelligent reflecting surface (IRS), IRS-integrated AP, codebook design, sector division. ## I Introduction With the recent progress in digitally-controlled metasurfaces, intelligent reflecting surface (IRS) has emerged as an economically efficient method to create intelligent and adaptable radio environments, catering to the needs of next-generation wireless communication systems [1]. Specifically, an IRS consists of a massive number of passive reflecting elements, which are able to tune the amplitudes and/or phase shifts of incident signals in real time, thereby enabling dynamic control over the wireless signal propagation environment. Thus, IRS can be applied in wireless communication systems to achieve assorted purposes, such as passive beamforming, interference nulling/cancellation, channel distribution enhancement, etc. [2, 3]. Due to its passive nature, IRS eliminates the need for radio frequency (RF) chains, resulting in significantly reduced hardware costs and energy consumption. As a result, IRS is considered a promising candidate for sixth-generation (6G) wireless communication systems [4, 5], which aim to achieve a quantum-leap improvement in capacity and energy efficiency over today's wireless systems. Owing to the great potential of IRS, it has been extensively investigated for various wireless systems and applications, such as non-orthogonal multiple access (NOMA) [6, 7], orthogonal frequency division multiplexing (OFDM) [8, 9], secrecy communication [10, 11], mobile edge computing (MEC) [12, 13], multiple-input multiple-output (MIMO) [14, 15], unmanned aerial vehicle (UAV)-ground communications [16, 17], multi-antenna communication [18, 19], relaying communication [20, 21], and so on.
In practice, it is essential to ensure the proper deployment of IRSs between the users and the base station (BS) (or access point (AP)). The deployment of IRSs should reduce the path loss resulting from the product of distances along the cascaded user-IRS-BS (or AP) channels [1]. Most of the existing works considered deploying IRSs close to user terminals (e.g., at hotspots, at the cell edge, and on moving vehicles) to improve their communication rates. For example, the authors in [22] proposed to deploy the IRS near the user cluster, where the transmit beamforming at the BS and the reflect beamforming of the IRS were jointly optimized to minimize the total transmit power at the BS, and the power scaling law with the number of reflecting elements at the IRS was derived. In [14], the authors deployed an IRS at the boundary of multiple cells to assist the downlink transmission to cell-edge users and mitigate the inter-cell interference, where the active precoding matrices at the BSs and the phase shifts at the IRS were jointly optimized to maximize the weighted sum-rate of all users. In contrast, a novel IRS-empowered BS architecture was introduced in [23] to deploy multiple IRSs in proximity to the BS. In this work, a novel approach was proposed to lower the channel estimation overhead by selecting specific portions of the cascaded channels for estimation, and a two-stage transmission protocol was designed to realize the efficient process of IRS channel estimation and user data transmission. To further enhance the system performance, an innovative hybrid deployment strategy was introduced in [24] to harness the complementary strengths of both user- and BS-side IRSs, where the additional inter-IRS reflection link can be exploited to achieve a higher achievable rate for the users [25, 26]. Nevertheless, the above strategies all deploy IRSs within an environment where the separation distance between the IRS and its connected user or BS (or AP) remains considerable, typically exceeding several hundred carrier wavelengths. In such scenarios, the substantial path loss resulting from the product-distance of the cascaded user-IRS-BS (or AP) channels potentially undermines the performance benefits offered by the IRSs, and the signaling overhead between the IRSs and the BS (or AP) for channel estimation and remote control remains a challenging issue. In order to address the aforementioned issues, a novel architecture, termed the _"IRS-integrated BS (or AP)"_, has been proposed in [27]. This architecture involves co-locating the BS's (or AP's) antenna array and IRSs within the same antenna radome to serve the users. Consequently, the separation distance between the BS's (or AP's) antenna array and the IRSs can be significantly reduced, typically ranging from several to tens of wavelengths. This substantially mitigates the path loss associated with the IRS reflection channels and also diminishes the signaling overhead between the IRSs and the BS (or AP). For the IRS-integrated BS (or AP), a practically important problem is how to design the IRS reflection based on the available channel state information (CSI) of the users for improving their communication performance. Towards this end, two solutions have been proposed in [27]. First, the element-wise channel model was adopted to construct the single-reflection and double-reflection channels in terms of the angles-of-arrival (AoAs) at the BS (or AP) and the complex coefficients of the incident paths.
Then, the AoAs and complex coefficients of all paths from each user can be estimated by the antenna array at the BS (or AP) by turning off all IRSs. Based on them, the effective channel of each user with the IRS reflection turned on can be derived for the IRS reflection design. However, this method needs to acquire knowledge of the AoAs and complex coefficients of all channel paths from each user (instead of its channel only), and is thus more difficult to implement compared to traditional channel estimation methods for IRS-assisted communications [30, 31, 32, 33, 34, 35]. Second, an iterative random phase maximization algorithm (IRPA) was proposed to find a suboptimal IRS reflection solution without the need to explicitly estimate the CSI of any user. Specifically, a given number of IRS training reflection patterns are randomly generated, and the reflection pattern achieving the best communication performance is then selected for data transmission. Despite its simplicity of implementation, it was shown in [27] that IRPA usually needs a sufficiently long training time to achieve good communication performance, which may be inefficient for data transmission. To avoid the estimation of the AoAs and complex coefficients of all incident paths, as well as to reduce the channel training overhead, in this paper we propose a novel codebook-based IRS reflection design for the IRS-integrated AP (IRS-AP), which is installed on the ceiling to communicate with the users randomly distributed in a given coverage area on the ground, as shown in Fig. 1. Specifically, a codebook with a small number of IRS reflection patterns/codewords is first designed offline and stored at the IRS-AP. Then, the users send pilot signals to the IRS-AP for effective channel estimation, while, in the meantime, the IRS-AP evaluates the communication performance according to the estimated channels under different IRS reflection patterns/codewords. Finally, the IRS reflection pattern/codeword achieving the best communication performance is selected for data communication. The main contributions of this paper are summarized as follows. * First, the codebook for IRS reflection is designed to enhance the average channel power of the whole coverage area. To achieve a small codebook size for reducing the channel training overhead, an efficient sector division strategy is proposed to divide the whole coverage area into several non-overlapping sectors based on the azimuth angle. Then, for single-user transmission, the number of sectors is set as the given codebook size to maximize the user's effective channel power for improving its communication performance. For multi-user transmission, the overall codebook is constructed as the union of all IRS codebooks with different numbers of sectors, in order to cater to users at different locations. * Next, the IRS reflection pattern/codeword corresponding to each sector is designed to improve its worst-case performance, which is the average effective channel power over all azimuth angles in this sector with the elevation angle fixed at its largest value, and is thus defined as the _sector-min-average-effective-channel-power (SMAECP)_. Fig. 1: System model and architecture of the IRS-integrated AP. Specifically, the IRS reflection pattern/codeword is optimized to maximize the SMAECP of the corresponding sector, subject to the unit-modulus constraints of all IRS reflecting elements.
The formulated problem is transformed into an approximate problem by discretizing the continuous azimuth angles, which is then efficiently solved by applying the alternating optimization (AO) and semidefinite relaxation (SDR) methods. * Finally, numerical results are provided to validate that the proposed codebook design can enhance the average channel power of the whole coverage area, as compared to the system without IRS. Besides, the proposed codebook-based IRS reflection design is compared with several benchmark schemes and achieves significant performance gains in both single-user and multi-user transmissions. In addition, it is shown that the proposed codebook-based IRS reflection design only requires slow adaptation to wireless channels for reducing the channel training overhead, where the IRS reflection pattern can remain unchanged for a long time owing to the sector division strategy adopted for the codebook design. Furthermore, the optimal number of sectors for multi-user transmission is shown to vary with different user locations. It is worth noting that there exist several works on reflection codebook design for IRS-assisted communications (see, e.g., [36, 37, 38, 39]). However, these works usually need a large codebook size (e.g., hundreds to thousands of codewords) to achieve a high IRS directional gain for ensuring the system performance. Besides, the corresponding IRS reflection pattern should be updated frequently to track user channel changes, which incurs high channel training overhead in practice. In contrast, the size of the codebook designed in this paper is dramatically reduced based on the efficient sector division strategy, and the IRS reflection pattern can remain unchanged for a long time to achieve slow adaptation to wireless channels, both of which help reduce the channel training overhead for adjusting the IRS reflection pattern. In addition, conventional codebook designs consider IRS single reflection only, while our codebook design needs to take both IRS single reflection and double reflection into consideration due to the multiple IRSs deployed inside the AP's antenna radome (see Fig. 1). The rest of this paper is organized as follows. Section II presents the system and channel models of the IRS-integrated AP. Section III presents the procedures for reflection codebook design based on sector division. Section IV presents the proposed solution to the formulated problem for codeword design. Section V presents numerical results to verify the efficacy of our proposed codebook-based IRS reflection design. Finally, Section VI concludes this paper. _Notation:_ In this paper, italic, bold-face lower-case, and bold-face upper-case letters denote scalars, vectors, and matrices, respectively. For a matrix \(\mathbf{A}\), its transpose, conjugate transpose, determinant, trace, and rank are denoted as \(\mathbf{A}^{T}\), \(\mathbf{A}^{H}\), \(\det(\mathbf{A})\), \(\text{trace}(\mathbf{A})\), and \(\text{rank}(\mathbf{A})\), respectively. \([\mathbf{A}]_{i,j}\) denotes the \((i,j)\)-th entry of the matrix \(\mathbf{A}\). For a vector \(\mathbf{a}\), its norm is denoted as \(\|\mathbf{a}\|\), and \(\text{diag}(\mathbf{a})\) denotes a square diagonal matrix with the elements of \(\mathbf{a}\) on the main diagonal. For a complex number \(s\), the conjugate and amplitude are respectively denoted as \(s^{*}\) and \(|s|\). \(\mathbb{R}^{x\times y}\) denotes the space of \(x\times y\) real matrices.
\(\mathbb{C}^{x\times y}\) denotes the space of \(x\times y\) complex matrices. The distribution of a circularly symmetric complex Gaussian (CSCG) random variable with mean zero and variance \(\sigma^{2}\) is denoted by \(\mathcal{CN}(0,\sigma^{2})\), and \(\sim\) stands for "distributed as". \(\mathbf{I}_{M}\) denotes an identity matrix of dimension \(M\). \(i\) denotes the imaginary unit, i.e., \(i^{2}=-1\). \(\otimes\) represents the Kronecker product. \(\mathcal{O}(\cdot)\) denotes the big-O notation. \(\lfloor b\rfloor\) denotes the floor of a real number \(b\). ## II IRS-Integrated AP: System and Channel Models ### _System Model_ As shown in Fig. 1, we consider an IRS-AP installed on the ceiling at a height \(H_{\text{AR}}\) above the ground to communicate with users at arbitrary locations in a given coverage area, denoted by \(\mathcal{A}\). In particular, the IRS-AP deploys an (active) antenna array and \(J=4\) (passive) IRSs with distinct orientations within the same cuboid-shaped antenna radome, where \(\mathcal{J}\triangleq\{1,2,3,4\}\) denotes the set of all IRSs. The antenna array is deployed at the center of the top surface of the antenna radome facing the ground, while IRSs 1-4 are strategically positioned on the left, right, back, and front side-faces of the antenna radome, with each IRS oriented perpendicular to the antenna array. The length, width, and thickness of the antenna radome are denoted as \(d_{l}\), \(d_{w}\), and \(d_{t}\), respectively. Without loss of generality, we consider a three-dimensional (3D) coordinate system for the IRS-AP. In this system, we set the origin \(O\) as the antenna array's center and put the antenna array in the \(x\)-\(O\)-\(y\) plane. The uniform planar array (UPA) model is adopted for the antenna array, which consists of a total of \(M=M_{x}\times M_{y}\) antennas, with \(M_{x}\) and \(M_{y}\) representing the numbers of antennas along the \(x\) and \(y\) axes, respectively. The spacing between adjacent antennas is denoted as \(d_{A}\). Each IRS \(j,\ j\in\mathcal{J}\), is also modeled as a UPA of size \(N_{j}=N_{j,1}\times N_{j,2}\), with \(N_{j,1}\) and \(N_{j,2}\) denoting the numbers of reflecting elements along axes \(x\) (or \(y\)) and \(z\), respectively. The distance between two adjacent reflecting elements within each IRS is defined as \(d_{I}\), and each reflecting element is configured with an aperture area of \(A=\sqrt{A}\times\sqrt{A}\). For convenience, we denote the sets of all the AP's antennas and all reflecting elements at IRS \(j\) as \(\mathcal{M}\triangleq\{1,2,\cdots,M\}\) and \(\mathcal{N}_{j}\triangleq\{1,2,\cdots,N_{j}\},\ j\in\mathcal{J}\), respectively. Notice that since each IRS can only reflect signals within a half-space, each IRS must be placed to face the AP's antenna array for effective signal reflection between them. Moreover, as shown in Fig. 1, let \(\theta\) and \(\phi\) denote the elevation and azimuth angles of a particular signal path arriving at the IRS-AP with respect to (w.r.t.) the AP's antenna array. Furthermore, we model the coverage area \(\mathcal{A}\) of the IRS-AP in terms of \((\theta,\phi)\) as \(\mathcal{A}=\{(\theta,\phi)|\theta\in[0,\theta_{\max}],\phi\in[0,2\pi)\}\) with \(\theta_{\max}\) (\(0\leq\theta_{\max}\leq\frac{\pi}{2}\)) denoting the maximum elevation angle, which is pre-determined based on the IRS-AP's coverage requirement.
**Remark II.1**: _Based on the antenna radome's size (i.e., \(d_{l}\), \(d_{w}\), and \(d_{t}\)) and the coverage area's range \(\mathcal{A}\) (i.e., \(\theta_{\max}\)), we can determine the maximum number of deployable reflecting elements at each IRS \(j\), which is defined as \(N_{j,\max}=N_{j,1,\max}\times N_{j,2,\max},\ j\in\mathcal{J}\), with \(N_{j,1,\max}\) and \(N_{j,2,\max}\) representing the maximum numbers of reflecting elements that can be deployed along axes \(x\) (or \(y\)) and \(z\) of IRS \(j\), respectively. To deploy more reflecting elements at each IRS, we set \(\sqrt{A}=d_{I}\). It can be shown by elementary geometry that \(N_{1,1,\max}=N_{2,1,\max}=\lfloor d_{w}/d_{I}\rfloor\), \(N_{3,1,\max}=N_{4,1,\max}=\lfloor d_{l}/d_{I}\rfloor\), and \(N_{j,2,\max}=\lfloor\min(\frac{d_{t}}{d_{I}},\frac{d_{l}}{d_{I}\tan\theta_{\max}},\frac{d_{w}}{d_{I}\tan\theta_{\max}})\rfloor\), where the terms \(\frac{d_{l}}{d_{I}\tan\theta_{\max}}\) and \(\frac{d_{w}}{d_{I}\tan\theta_{\max}}\) are adopted to guarantee that the signals arriving from the reflection half-space of one IRS are not obstructed by the IRS on its opposite side. Next, when the number of reflecting elements at IRS \(j\) is given as \(N_{j}\leq N_{j,\max}\), the reflecting elements should be deployed so as to reduce the antenna-IRS distance and the inter-IRS distance, thereby minimizing the path loss among them. To this end, we set \(N_{j,2}=N_{j,2,\max}\) and \(N_{j,1}=N_{j}/N_{j,2,\max},\ j\in\mathcal{J}\), by first deploying reflecting elements along axis \(z\), and then along axis \(x\) (or \(y\))._
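As a quick numerical check of Remark II.1, the maximum element counts can be computed directly from the radome geometry. The short sketch below implements the floor expressions of the remark and plugs in the simulation values adopted later in Section V; it reproduces the \(N_{j,1}=10\) and \(N_{j,2}=1\) used there.

```python
import math

def max_irs_elements(d_l, d_w, d_t, d_I, theta_max):
    """Maximum deployable reflecting elements per IRS (Remark II.1)."""
    n1_lr = math.floor(d_w / d_I)  # N_{1,1,max} = N_{2,1,max} (left/right IRSs)
    n1_bf = math.floor(d_l / d_I)  # N_{3,1,max} = N_{4,1,max} (back/front IRSs)
    n2 = math.floor(min(d_t / d_I,                        # thickness limit
                        d_l / (d_I * math.tan(theta_max)),  # no blockage (length)
                        d_w / (d_I * math.tan(theta_max))))  # no blockage (width)
    return n1_lr, n1_bf, n2

# Section V values: d_l = d_w = 5*lam, d_t = lam/2, d_I = lam/2, theta_max = 4*pi/9.
lam = 0.05
print(max_irs_elements(5 * lam, 5 * lam, lam / 2, lam / 2, 4 * math.pi / 9))
# -> (10, 10, 1), consistent with N_{j,1} = 10 and N_{j,2} = 1.
```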
### _Channel Model_ In general, the effective channel between any location in the coverage area \(\mathcal{A}\) and the IRS-AP is the superposition of the channel responses of multiple paths, where each path's channel response is given by the product of its complex coefficient for the arriving uniform plane wave (UPW) (which is determined by the propagation environment) and the effective array response vector (EARV) at the IRS-AP. Moreover, the EARV at the IRS-AP for each signal path arriving at the antenna array from a given AoA, i.e., \((\theta,\phi)\), is the superposition of the array response vectors corresponding to its three split signal paths arriving at the AP's antenna array, namely, the direct path without any IRS reflecting element's reflection, the single-reflection path with only one IRS reflecting element's reflection involved, and the double-reflection path with two IRS reflecting elements' successive reflections involved2, as shown in Fig. 1. In the following, we first model the direct, single-reflection, and double-reflection array response vectors in terms of the AoA \((\theta,\phi)\), and then derive their overall EARV at the IRS-AP for the path direction \((\theta,\phi)\). Based on them, we finally model the effective channel between any location in the coverage area \(\mathcal{A}\) and the antenna array. Footnote 2: More than two reflections among different IRSs are ignored in this paper for simplicity due to the significantly higher path loss with the increased number of IRS reflections. Let \(\mathbf{e}(\bar{\phi},\bar{M})=[1,e^{i\pi\bar{\phi}},\cdots,e^{i(\bar{M}-1)\pi\bar{\phi}}]^{T}\) denote the one-dimensional (1D) steering vector function, with \(\bar{M}\) representing the array size and \(\bar{\phi}\) representing the steering angle. Thus, the direct array response vector at the AP's UPA for the path direction \((\theta,\phi)\in\mathcal{A}\), denoted as \(\mathbf{h}_{d}(\theta,\phi)\in\mathbb{C}^{M\times 1}\), is given by \[\mathbf{h}_{d}(\theta,\phi) =\sqrt{G_{A}(\theta,\phi)}\mathbf{e}\left(\frac{2d_{A}}{\lambda}\sin \theta\cos\phi,M_{x}\right)\] \[\otimes\mathbf{e}\left(\frac{2d_{A}}{\lambda}\sin\theta\sin\phi,M_{y }\right), \tag{1}\] with \(\lambda\) denoting the carrier wavelength and \(G_{A}(\theta,\phi)\) denoting the antenna gain of the AP corresponding to direction \((\theta,\phi)\). According to [27], denote \(\mathbf{f}_{j}^{n_{j}}(\theta,\phi)\in\mathbb{C}^{M\times 1}\) as the single-reflection array response vector for the path direction \((\theta,\phi)\in\mathcal{A}\) via IRS \(j\)'s \(n_{j}\)-th reflecting element to the AP's antenna array, with \(j\in\mathcal{J},\ n_{j}\in\mathcal{N}_{j}\), and \(\mathbf{g}_{j,q}^{n_{j},n_{q}}(\theta,\phi)\in\mathbb{C}^{M\times 1}\) as the double-reflection array response vector for the path direction \((\theta,\phi)\in\mathcal{A}\) via IRS \(j\)'s \(n_{j}\)-th reflecting element and then IRS \(q\)'s \(n_{q}\)-th reflecting element to the AP's antenna array, with \(q\neq j\in\mathcal{J},n_{j}\in\mathcal{N}_{j},n_{q}\in\mathcal{N}_{q}\). In practice, \(\mathbf{f}_{j}^{n_{j}}(\theta,\phi)\) and \(\mathbf{g}_{j,q}^{n_{j},n_{q}}(\theta,\phi)\) can be obtained/estimated via various methods, such as the element-wise channel model [27], the ray-tracing technique [28], or machine-learning-based channel estimation methods [29]. In this paper, we adopt the element-wise channel model proposed in [27] to model \(\mathbf{f}_{j}^{n_{j}}(\theta,\phi)\) and \(\mathbf{g}_{j,q}^{n_{j},n_{q}}(\theta,\phi)\), which are assumed to be known for the IRS reflection codebook design of our main interest in this paper. Then, denote by \(\vartheta_{j,n_{j}}\) the reflection coefficient of IRS \(j\)'s \(n_{j}\)-th reflecting element, with \(j\in\mathcal{J},n_{j}\in\mathcal{N}_{j}\), where we set its amplitude to the maximum value of one, i.e., \(|\vartheta_{j,n_{j}}|=1\), for the purpose of maximizing the reflected signal power, and \(\mathbf{\Theta}\triangleq\{\vartheta_{j,n_{j}},\ j\in\mathcal{J},n_{j}\in\mathcal{N}_{j}\}\). Therefore, the EARV at the IRS-AP for the path direction \((\theta,\phi)\in\mathcal{A}\) is given by [27] \[\mathbf{h}(\theta,\phi,\mathbf{\Theta}) =\mathbf{h}_{d}(\theta,\phi)+\sum_{j=1}^{J}\sum_{n_{j}=1}^{N_{j}}\mathbf{f}_{j}^{n_{j}}(\theta,\phi)\vartheta_{j,n_{j}}\] \[+\sum_{j=1}^{J}\sum_{q\neq j}^{J}\sum_{n_{j}=1}^{N_{j}}\sum_{n_{q}=1}^{N_{q}}\mathbf{g}_{j,q}^{n_{j},n_{q}}(\theta,\phi)\vartheta_{j,n_{j}}\vartheta_{q,n_{q}}. \tag{2}\] Based on (2), the effective channel between any location in the coverage area \(\mathcal{A}\) and the IRS-AP can be modeled by the following multi-path channel, i.e., \[\tilde{\mathbf{h}}=\sum_{\psi=1}^{\Psi}a_{\psi}\mathbf{h}(\theta_{\psi},\phi_{\psi},\mathbf{\Theta}), \tag{3}\] where \(\Psi\geq 1\) denotes the total number of (significant) channel paths from any location in the coverage area \(\mathcal{A}\) to the IRS-AP, \(a_{\psi}\) represents the complex coefficient of the \(\psi\)-th path, and \(\theta_{\psi}\in[0,\theta_{\max}]\) and \(\phi_{\psi}\in[0,2\pi)\) denote the elevation and azimuth AoAs of the \(\psi\)-th path arriving at the IRS-AP, respectively.
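To illustrate (1) concretely, the direct array response is just a Kronecker product of two 1D steering vectors scaled by the square root of the antenna gain. A minimal NumPy sketch is given below; the default parameter values (\(M_x=M_y=2\), \(d_A=\lambda/2=0.025\) m, \(\lambda=0.05\) m, \(G_A=2\) for the half-isotropic pattern) are taken from the simulation setup in Section V, not mandated by the model.

```python
import numpy as np

def steer(phi_bar: float, M_bar: int) -> np.ndarray:
    """1D steering vector e(phi_bar, M_bar) = [1, e^{i*pi*phi_bar}, ...]^T."""
    return np.exp(1j * np.pi * phi_bar * np.arange(M_bar))

def h_direct(theta, phi, Mx=2, My=2, dA=0.025, lam=0.05, GA=2.0):
    """Direct array response vector h_d(theta, phi) of the AP's UPA, Eq. (1)."""
    ex = steer(2 * dA / lam * np.sin(theta) * np.cos(phi), Mx)
    ey = steer(2 * dA / lam * np.sin(theta) * np.sin(phi), My)
    return np.sqrt(GA) * np.kron(ex, ey)  # length M = Mx * My

h = h_direct(theta=np.pi / 4, phi=np.pi / 3)  # response for one path direction
```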
Specifically, we assume that the first path in (3), corresponding to \(\psi=1\), is the line-of-sight (LoS) path, while the remaining \(\Psi-1\) paths are non-LoS (NLoS) paths. In particular, the complex coefficient of the LoS path between any location in the coverage area \(\mathcal{A}\) and the AP is determined by their distance. As shown in Fig. 1, the horizontal coordinates of the location in the path direction \((\theta,\phi)\in\mathcal{A}\) can be expressed as \((H_{AR}\tan\theta\cos\phi,H_{AR}\tan\theta\sin\phi)\), and thus the distance between them is given by \(\sqrt{(H_{AR}\tan\theta\cos\phi)^{2}+(H_{AR}\tan\theta\sin\phi)^{2}}=H_{AR}\tan\theta\), which depends only on the elevation angle \(\theta\) of the path direction. As a result, the complex LoS path coefficient between the location in the path direction \((\theta,\phi)\in\mathcal{A}\) and the IRS-AP is given by \[a_{1}=\frac{\lambda}{4\pi H_{AR}\tan\theta}e^{-i\frac{2\pi}{\lambda}H_{AR}\tan\theta}, \tag{4}\] which is a function of the elevation angle \(\theta\). In contrast, the complex coefficients of the other NLoS paths depend on the scatterers in the environment and will be modeled in Section V for simulations. To facilitate comprehension, in Table I we have compiled a summary of the symbol notations utilized in this paper together with their corresponding physical meanings. ## III Reflection Codebook Design Based on Sector Division In this section, we propose a codebook-based IRS reflection design for the IRS-AP. By assuming that the single-reflection and double-reflection array response vectors for all AoAs, i.e., \(\{\mathbf{f}_{j}^{n_{j}}(\theta,\phi)\}\) and \(\{\mathbf{g}_{j,q}^{n_{j},n_{q}}(\theta,\phi)\}\), are available, a codebook consisting of a small number of IRS reflection patterns/codewords is first designed offline and stored at the IRS-AP. Then, when the users send pilot signals to the IRS-AP for estimating their effective channels (i.e., \(\tilde{\mathbf{h}}\) in (3)), the IRS-AP evaluates the communication performance based on the estimated channels by varying the IRS reflection patterns according to the codebook. Finally, the codeword achieving the best communication performance is selected for data transmission. To design the codebook efficiently, we only consider the LoS channel between all locations in the coverage area \(\mathcal{A}\) and the IRS-AP. As a result, the effective LoS channel between the location in the path direction \((\theta,\phi)\) and the IRS-AP is given by \[\mathbf{h}_{LoS}(\theta,\phi,\mathbf{\Theta})=a_{1}\mathbf{h}(\theta_{1},\phi_{1},\mathbf{\Theta})\] \[=\frac{\lambda}{4\pi H_{AR}\tan\theta}e^{-i\frac{2\pi}{\lambda}H_{AR}\tan\theta}\mathbf{h}(\theta,\phi,\mathbf{\Theta}). \tag{5}\] The reason for this consideration is that the LoS channel is deterministic and is usually dominant in channel power as compared to the NLoS channels, while the NLoS channels depend on the scatterers in the propagation environment, which are in general randomly distributed and difficult to model [40, 41]. In the following, we first propose an efficient sector division strategy for achieving a small codebook size to reduce the channel training overhead, then explain the principle of codeword design for each divided sector, and finally present the procedure to construct the codebook of IRS reflection patterns.
Notice that in conventional reflection codebook design for IRS-assisted communication systems (see, e.g., [36, 37, 38, 39]), only the single-reflection paths via the IRSs exist between the users and the AP. Thus, high-resolution, precise-alignment IRS reflection patterns/codewords can be designed to provide high directional gains to the single-reflection paths. Specifically, the whole coverage area is divided into multiple non-overlapping sectors by equally dividing the intervals of the elevation and azimuth angles, which renders each sector approximable by a single point in the angular domain. Then, an IRS reflection pattern/codeword is designed to align with the approximated point of each sector. Nevertheless, this sector division strategy will result in a large codebook size (e.g., hundreds to thousands of codewords) for an IRS with a similar number of reflecting elements, which requires extremely high overhead for IRS reflection training. In contrast, in our proposed IRS-AP, the double-reflection paths via different IRSs play a significant role due to the close distance among the IRSs [27], which means that no explicit directional gain is obtained via the IRS reflection design. As a result, it is practically viable to design low-resolution, wide-coverage codewords for the IRS-AP, which entails a codebook of smaller size (i.e., only several to dozens of codewords) and thus greatly reduces the channel training overhead. Based on the above, we propose to divide the whole coverage area into multiple non-overlapping sectors by _only_ dividing the interval of the azimuth angle. Specifically, let \(D\) denote the number of sectors for dividing the whole coverage area \(\mathcal{A}\), where \(\mathcal{D}=\{1,2,\cdots,D\}\) denotes the set of all sectors and \(D\) is set as a small positive integer to limit the codebook size. The division of the interval of the azimuth angle is given by \[[0,2\pi)=\Big{[}0,\frac{2\pi}{D}\Big{)}\cup\cdots\cup\Big{[}(D-1)\times\frac{2\pi}{D},2\pi\Big{)}. \tag{6}\] Then, the \(d\)-th sector can be expressed as \[\mathcal{A}_{D,d}=\] \[\Big{\{}(\theta,\phi)\Big{|}\theta\in\Big{[}0,\theta_{\max}\Big{]},\phi\in\Big{[}(d-1)\times\frac{2\pi}{D},d\times\frac{2\pi}{D}\Big{)}\Big{\}},\] \[d\in\mathcal{D}. \tag{7}\] Next, a codeword for the IRS reflection pattern should be designed to ensure the coverage of sector \(\mathcal{A}_{D,d}\). To this end, we propose to focus on the worst-case performance in sector \(\mathcal{A}_{D,d}\), which occurs at the locations with the largest elevation angle w.r.t. the antenna array, i.e., \(\theta=\theta_{\max}\). This is because these locations at \(\theta=\theta_{\max}\) have the largest distance from the AP, i.e., \(H_{AR}\tan\theta_{\max}\), and accordingly the smallest path gain with the AP, i.e., \(|a_{1}|\) as shown in (4).
Moreover, we propose a performance metric for the maximum elevation angle \(\theta_{\max}\), which is the average effective (LoS) channel power over all azimuth angles in sector \(\mathcal{A}_{D,d}\) with the elevation angle fixed as \(\theta_{\max}\), i.e., \[E_{D,d}(\mathbf{\Theta}_{D,d})=\frac{D}{2\pi}\int_{\frac{2\pi}{D}(d-1)}^{\frac{2\pi}{D}d}||\mathbf{h}_{LoS}(\theta_{\max},\phi,\mathbf{\Theta}_{D,d})||^{2}d\phi, \tag{8}\] where \(\mathbf{\Theta}_{D,d}=\{\vartheta_{j,n_{j}}^{D,d},\ j\in\mathcal{J},n_{j}\in\mathcal{N}_{j}\}\) and \(\vartheta_{j,n_{j}}^{D,d}\) denotes the corresponding reflection coefficient of IRS \(j\)'s \(n_{j}\)-th reflecting element with its modulus equal to \(1\), i.e., \(|\vartheta_{j,n_{j}}^{D,d}|=1\). We define \(E_{D,d}(\mathbf{\Theta}_{D,d})\) as the _SMAECP_ of sector \(\mathcal{A}_{D,d}\), and consider _SMAECP maximization_ as the objective of designing the corresponding IRS reflection pattern/codeword. As such, the optimization problem for designing the IRS reflection pattern/codeword for sector \(\mathcal{A}_{D,d}\), subject to the unit-modulus constraints for all IRS reflecting elements, can be formulated as follows. \[\text{(P1.}D.d\text{)}\quad\max_{\mathbf{\Theta}_{D,d}}\ E_{D,d}(\mathbf{\Theta}_{D,d})\] \[\text{s.t. }|\vartheta_{j,n_{j}}^{D,d}|=1,\ j\in\mathcal{J},\ n_{j}\in\mathcal{N}_{j}. \tag{9}\] Let \(\mathbf{\Theta}_{D,d}^{*}\) denote the solution to problem (P1.\(D.d\)), with the details for obtaining it provided in Section IV. In general, the effective (LoS) channel power of all paths from the coverage area \(\mathcal{A}\), i.e., \(||\mathbf{h}_{LoS}(\theta,\phi,\mathbf{\Theta})||^{2},\theta\in[0,\theta_{\max}],\phi\in[0,2\pi)\), can be improved by applying the proposed codeword design obtained by solving the problems (P1.\(D.d\)). The performance improvement generally increases with the number of sectors \(D\), as will be shown in Section V. This is because each codeword only needs to cover a smaller sector in the angular domain as the number of sectors \(D\) increases, which helps improve the SMAECP of each sector, i.e., \(E_{D,d}(\mathbf{\Theta}_{D,d}^{*}),\ d\in\mathcal{D}\). As a result, for single-user transmission, the number of sectors \(D\) can be regarded as the codebook size, and the corresponding codebook can be constructed as \(\mathcal{C}_{D}=\{\mathbf{\Theta}_{D,d}^{*},d\in\mathcal{D}\}\), with \(\mathbf{\Theta}_{D,d}^{*}\) denoting its \(d\)-th codeword for covering sector \(\mathcal{A}_{D,d}\), which is obtained via solving problem (P1.\(D.d\)). For ease of illustration, we show some examples in Fig. 2 by setting the codebook size (or equivalently the number of sectors) as \(D=1\), \(D=2\), and \(D=4\), respectively. The codebook design for multi-user transmission based on the obtained codebooks for single-user transmission, i.e., \(\mathcal{C}_{D}\)'s, will be specified later in Section V. ## IV Proposed Solution to (P1.\(D.d\)) In this section, we aim to solve the problem (P1.\(D.d\)) formulated for codeword design in Section III. Notice that problem (P1.\(D.d\)) is difficult to solve optimally due to the non-concave objective function and the non-convex unit-modulus constraints in (9). Besides, the objective function is a continuous integral, which is hard to deal with. Furthermore, due to the existence of both single-reflection and double-reflection signals, the objective function intricately couples the reflection coefficients of all IRSs' reflecting elements.
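In practice, this continuous-integral objective is handled by sampling azimuth angles, as formalized next. For concreteness, a minimal numerical evaluation of the SMAECP in (8), assuming a user-supplied routine `h_los(theta, phi, Theta)` implementing the effective LoS channel of (5), could read:

```python
import numpy as np

def smaecp(h_los, Theta, D, d, theta_max, L=40):
    """Discretized SMAECP of sector A_{D,d}, cf. Eq. (8).

    h_los(theta, phi, Theta) is an assumed external routine returning the
    effective LoS channel vector of Eq. (5); Theta collects the reflection
    coefficients of all IRS reflecting elements.
    """
    # Midpoint samples phi_l = (d-1)*2*pi/D + (l - 1/2)*2*pi/(D*L), l = 1..L,
    # matching the discretization used for problem (P2) below.
    phis = (d - 1) * 2 * np.pi / D \
        + (np.arange(1, L + 1) - 0.5) * 2 * np.pi / (D * L)
    powers = [np.linalg.norm(h_los(theta_max, phi, Theta)) ** 2 for phi in phis]
    return float(np.mean(powers))
```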
In the following, we first approximate the continuous integral in the objective function by a discrete-form summation, and then apply the AO and SDR methods to obtain an efficient solution to problem (P1.\(D.d\)). Since all \(D\) problems (P1.\(D.d\)) have a similar form and can be solved in parallel, we omit the subscript \((\cdot)_{D,d}\) in \(\mathbf{\Theta}_{D,d}\) and the superscript \((\cdot)^{D,d}\) in all \(\vartheta_{j,n_{j}}^{D,d}\)'s, and accordingly rewrite (P1.\(D.d\)) as (P1) for notational simplicity. Fig. 2: Examples for codebook design based on sector division for single-user transmission (top view). To tackle the continuous integral in the objective function of (P1), we approximate it by uniformly discretizing the interval of the azimuth angle in \(\mathcal{A}_{D,d}\), i.e., \([(d-1)\times\frac{2\pi}{D},d\times\frac{2\pi}{D})\), into \(L\) subsets, where we denote \(\mathcal{L}\triangleq\{1,2,\cdots,L\}\). As a result, problem (P1) can be approximated as (P2): \[\max_{\mathbf{\Theta}} \frac{1}{L}\sum_{l=1}^{L}||\mathbf{h}_{LoS}(\theta_{\max},\phi_{l},\mathbf{\Theta})||^{2}\] s.t. \[(9),\] where \(\phi_{l}=(d-1)\times\frac{2\pi}{D}+(l-\frac{1}{2})\frac{2\pi}{DL}\). However, problem (P2) is still difficult to solve since the reflection coefficients of all IRSs are coupled in the objective function. To tackle this challenge, we employ the AO method to iteratively optimize the reflection coefficients of a single IRS while keeping the reflection coefficients of the remaining \((J-1)\) IRSs fixed. Under any given \(\vartheta_{q,n_{q}},\ q\neq j\in\mathcal{J},n_{q}\in\mathcal{N}_{q}\), problem (P2) is reduced to the following optimization problem for designing the reflection coefficients of all reflecting elements at IRS \(j\), i.e., (P2-\(j\)): \[\max_{\mathbf{\vartheta}_{j}} ||\mathbf{B}_{j}\mathbf{\vartheta}_{j}+\mathbf{c}_{j}||^{2}\] s.t. \[|\vartheta_{j,n_{j}}|=1,\ n_{j}\in\mathcal{N}_{j},\] (10) where \(\mathbf{\vartheta}_{j}=[\vartheta_{j,1},\vartheta_{j,2},\cdots,\vartheta_{j,N_{j}}]^{T}\), \(\mathbf{B}_{j}=[\mathbf{b}_{j,1},\mathbf{b}_{j,2},\cdots,\mathbf{b}_{j,N_{j}}]\) with \[\mathbf{b}_{j,n_{j}}=\frac{1}{L}\sum_{l=1}^{L}a(\theta_{\max})\Big{(}\mathbf{f}_{j}^{n_{j}}(\theta_{\max},\phi_{l})\] \[+\sum_{q\neq j}\sum_{n_{q}=1}^{N_{q}}\mathbf{g}_{j,q}^{n_{j},n_{q}}(\theta_{\max},\phi_{l})\vartheta_{q,n_{q}}\] \[+\sum_{q\neq j}\sum_{n_{q}=1}^{N_{q}}\mathbf{g}_{q,j}^{n_{q},n_{j}}(\theta_{\max},\phi_{l})\vartheta_{q,n_{q}}\Big{)},\ n_{j}\in\mathcal{N}_{j}, \tag{11}\] and \[\mathbf{c}_{j}=\frac{1}{L}\sum_{l=1}^{L}a(\theta_{\max})\Big{(}\mathbf{h}_{d}(\theta_{\max},\phi_{l})\] \[+\sum_{q\neq j}\sum_{n_{q}=1}^{N_{q}}\mathbf{f}_{q}^{n_{q}}(\theta_{\max},\phi_{l})\vartheta_{q,n_{q}}\] \[+\sum_{q\neq j}\sum_{r\neq j,r\neq q}\sum_{n_{q}=1}^{N_{q}}\sum_{n_{r}=1}^{N_{r}}\mathbf{g}_{q,r}^{n_{q},n_{r}}(\theta_{\max},\phi_{l})\vartheta_{q,n_{q}}\vartheta_{r,n_{r}}\Big{)}. \tag{12}\] Notice that problem (P2-\(j\)) is still non-convex.
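Structurally, the overall AO procedure (summarized later as Algorithm 1) cycles over the IRSs, re-solving (P2-\(j\)) with the other IRSs' coefficients held fixed, starting from the best of several random unit-modulus initializations. A minimal Python skeleton is sketched below; `build_B_c` and `solve_P2j` are assumed external routines (assembling (11)-(12) and solving (P2-\(j\)), e.g., via the SDR step described next), not implementations given in this paper.

```python
import numpy as np

def alternating_optimization(build_B_c, solve_P2j, N_list,
                             eps=1e-5, I_max=100, n_init=100, seed=0):
    """AO skeleton for (P2): cycle over IRSs j = 1..J (cf. Algorithm 1).

    build_B_c(j, thetas) -> (B_j, c_j)  # assembles Eqs. (11)-(12)
    solve_P2j(B_j, c_j)  -> theta_j     # solves (P2-j), e.g. via SDR
    """
    rng = np.random.default_rng(seed)
    J = len(N_list)

    def objective(thetas):
        # For any fixed j, ||B_j theta_j + c_j||^2 equals the (P2) objective.
        B, c = build_B_c(0, thetas)
        return np.linalg.norm(B @ thetas[0] + c) ** 2

    # Best of n_init (Gamma) random unit-modulus initializations.
    candidates = [[np.exp(2j * np.pi * rng.random(N)) for N in N_list]
                  for _ in range(n_init)]
    thetas = max(candidates, key=objective)

    prev = objective(thetas)
    for _ in range(I_max):
        for j in range(J):
            B_j, c_j = build_B_c(j, thetas)
            thetas[j] = solve_P2j(B_j, c_j)
        cur = objective(thetas)
        if cur - prev < eps:
            break
        prev = cur
    return thetas
```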
To solve it, we adopt the SDR technique [42]. Towards this end, we define \[\tilde{\mathbf{B}}_{j}=\begin{bmatrix}\mathbf{B}_{j}^{H}\mathbf{B}_{j}&\mathbf{B}_{j}^{H}\mathbf{c}_{j}\\ \mathbf{c}_{j}^{H}\mathbf{B}_{j}&0\end{bmatrix},\ \tilde{\mathbf{\vartheta}}_{j}=\begin{bmatrix}\mathbf{\vartheta}_{j}\\ 1\end{bmatrix}. \tag{13}\] Accordingly, (P2-\(j\)) can be equivalently transformed into (P3-\(j\)): \[\max_{\mathbf{\tilde{\vartheta}}_{j}} \tilde{\mathbf{\vartheta}}_{j}^{H}\tilde{\mathbf{B}}_{j}\tilde{\mathbf{\vartheta}}_{j}\] s.t. \[|\tilde{\vartheta}_{j,n_{j}}|=1,\ n_{j}\in\mathcal{N}_{j},\] (14a) \[\tilde{\vartheta}_{j,N_{j}+1}=1.\] (14b) Furthermore, we define \(\tilde{\mathbf{\Theta}}_{j}=\tilde{\mathbf{\vartheta}}_{j}\tilde{\mathbf{\vartheta}}_{j}^{H}\) with \(\tilde{\mathbf{\Theta}}_{j}\succeq\mathbf{0}\) and \(\text{rank}(\tilde{\mathbf{\Theta}}_{j})=1\). As a result, problem (P3-\(j\)) can be further transformed into (P4-\(j\)): \[\max_{\mathbf{\tilde{\Theta}}_{j}} \text{trace}(\tilde{\mathbf{B}}_{j}\tilde{\mathbf{\Theta}}_{j})\] s.t. \[[\tilde{\mathbf{\Theta}}_{j}]_{n_{j},n_{j}}=1,\ n_{j}=1,2,\cdots,N_{j},N_{j}+1,\] (15a) \[\tilde{\mathbf{\Theta}}_{j}\succeq\mathbf{0},\] (15b) \[\text{rank}(\tilde{\mathbf{\Theta}}_{j})=1.\] (15c) However, problem (P4-\(j\)) is still challenging to solve optimally due to the non-convex rank-one constraint in (15c). To deal with it, we remove this constraint and obtain a relaxed version of (P4-\(j\)), which is denoted as (P5-\(j\)) and can be optimally solved by existing convex optimization solvers such as CVX [43]. Denote by \(\tilde{\mathbf{\Theta}}_{j}^{*}\) the optimal solution to (P5-\(j\)). Now, it remains to reconstruct the solution to problem (P4-\(j\)), or equivalently (P3-\(j\)), based on \(\tilde{\mathbf{\Theta}}_{j}^{*}\). Specifically, if \(\text{rank}(\tilde{\mathbf{\Theta}}_{j}^{*})=1\), then \(\tilde{\mathbf{\Theta}}_{j}^{*}\) serves as the optimal solution to (P4-\(j\)). In this case, we have \(\tilde{\mathbf{\Theta}}_{j}^{*}=\tilde{\mathbf{\vartheta}}_{j}^{*}\tilde{\mathbf{\vartheta}}_{j}^{*H}\), where \(\tilde{\mathbf{\vartheta}}_{j}^{*}\) becomes the optimal solution to (P3-\(j\)). However, if \(\text{rank}(\tilde{\mathbf{\Theta}}_{j}^{*})>1\), we need to employ the Gaussian randomization procedure to reconstruct a high-quality rank-one solution to problem (P4-\(j\)) or (P3-\(j\)). Specifically, with the eigenvalue decomposition of \(\tilde{\mathbf{\Theta}}_{j}^{*}\) given as \(\tilde{\mathbf{\Theta}}_{j}^{*}=\mathbf{U}\mathbf{\Sigma}\mathbf{U}^{H}\), we set \(\hat{\mathbf{\vartheta}}_{j}=\mathbf{U}\mathbf{\Sigma}^{\frac{1}{2}}\mathbf{r}\) with \(\mathbf{r}\sim\mathcal{CN}(0,\mathbf{I}_{N_{j}+1})\). Therefore, we construct a feasible solution to (P3-\(j\)) as \(\tilde{\vartheta}_{j,n_{j}}=e^{i\text{arg}(\hat{\vartheta}_{j,n_{j}}/\hat{\vartheta}_{j,N_{j}+1})},\ n_{j}=1,2,\cdots,N_{j}+1\). To ensure the performance, the randomization process is repeated numerous times, and the best solution found to problem (P3-\(j\)) is denoted as \(\tilde{\mathbf{\vartheta}}_{j}^{*}\). Using the solution \(\tilde{\mathbf{\vartheta}}_{j}^{*}\) to problem (P3-\(j\)), we can accordingly obtain the solution to (P2-\(j\)) as \(\mathbf{\vartheta}_{j}^{*}\) based on (13). Consequently, the SDR-based algorithm for solving problem (P2-\(j\)) is completed.
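A compact sketch of this SDR-plus-randomization step is given below, using CVXPY as the convex solver in place of CVX; the solver choice (SCS) and the number of randomizations are assumptions made for illustration, and the routine is a sketch rather than the exact implementation used in this paper.

```python
import cvxpy as cp
import numpy as np

def solve_P2j_sdr(B, c, num_rand=200, seed=0):
    """Solve the relaxed (P5-j) and recover a unit-modulus solution to (P2-j)."""
    rng = np.random.default_rng(seed)
    N = B.shape[1]
    # Lifted matrix B_tilde of Eq. (13).
    Bt = np.block([[B.conj().T @ B, (B.conj().T @ c)[:, None]],
                   [(c.conj() @ B)[None, :], np.zeros((1, 1))]])
    # (P5-j): (P4-j) with constraints (15a)-(15b), rank constraint dropped.
    X = cp.Variable((N + 1, N + 1), hermitian=True)
    prob = cp.Problem(cp.Maximize(cp.real(cp.trace(Bt @ X))),
                      [cp.diag(X) == 1, X >> 0])
    prob.solve(solver=cp.SCS)
    # Gaussian randomization to extract a rank-one, unit-modulus solution.
    w, U = np.linalg.eigh(X.value)
    w = np.clip(w, 0.0, None)
    best, best_val = None, -np.inf
    for _ in range(num_rand):
        r = (rng.standard_normal(N + 1)
             + 1j * rng.standard_normal(N + 1)) / np.sqrt(2)
        v = U @ (np.sqrt(w) * r)
        v = np.exp(1j * np.angle(v / v[-1]))  # enforce |v_n| = 1, v_{N+1} = 1
        val = np.real(v.conj() @ Bt @ v)
        if val > best_val:
            best, best_val = v, val
    return best[:-1]  # theta_j*, per the lifting in Eq. (13)
```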
After obtaining the solution to (P2-\(j\)) as \(\mathbf{\vartheta}_{j}^{*}\), we proceed to derive our proposed solution to (P2). To begin, we generate \(\Gamma\) candidate solutions for \(\mathbf{\Theta}\) satisfying \(|\vartheta_{j,n_{j}}|=1,\ j\in\mathcal{J},n_{j}\in\mathcal{N}_{j}\), where each \(\vartheta_{j,n_{j}}\)'s phase shift follows a uniform distribution in the interval \([0,2\pi)\). Next, we choose the candidate that achieves the largest objective value of (P2) as our initial point. Then, we sequentially solve problem (P2-\(j\)) for \(j\) ranging from 1 to \(J\) to update the IRS reflecting elements' reflection coefficients. Once the reflection coefficients of all IRSs' reflecting elements have been updated, we evaluate whether the increase in the objective value of (P2) is less than a specific small positive threshold \(\epsilon\), or whether the iteration number has reached the maximum allowable number of iterations \(I_{\max}\). If either condition is met, the algorithm outputs the solution to (P2), denoted as \(\mathbf{\Theta}_{D,d}^{*}\); otherwise, we repeat the above process. The overall algorithm is summarized in Algorithm 1. It can be shown that the complexity of solving problem (P2-\(j\)) via the SDR method is given by \(\mathcal{O}(N_{j}^{2}M+N_{j}^{3})\)[44], and thus the overall complexity of Algorithm 1 to solve (P2) (and accordingly (P1)) is given by \(\mathcal{O}(I\sum_{j=1}^{J}(N_{j}^{2}M+N_{j}^{3}))\), where \(I\leq I_{\max}\) denotes the number of iterations required to achieve the convergence of Algorithm 1. ## V Numerical Results In this section, we provide numerical results to validate the performance of our proposed codebook-based IRS reflection design for the IRS-AP. Unless otherwise specified, we set the simulation parameters as follows. We set the carrier frequency as \(f_{c}=6\) GHz, which leads to the corresponding wavelength \(\lambda=0.05\) m. The antenna radome's height from the ground is set as \(H_{AR}=5\) m. The dimension of the AP's antenna array is set as \(M=4\) with \(M_{x}=M_{y}=2\). The spacing between two adjacent AP antennas is set as \(d_{A}=\lambda/2=0.025\) m. The length, width, and thickness of the antenna radome are respectively set as \(d_{l}=d_{w}=5\lambda\) and \(d_{t}=\lambda/2\). The maximum elevation angle of all paths from the coverage area \(\mathcal{A}\) is set as \(\theta_{\max}=4\pi/9\). The element spacing within each IRS is set as \(d_{I}=\lambda/2=0.025\) m, and each reflecting element's aperture area is set as \(A=(\lambda/2)^{2}\). According to Remark II.1, we can set \(N_{j,1}=10\) and \(N_{j,2}=1\), and thus \(N_{j}=10\) for each IRS \(j,\ j\in\mathcal{J}\). In the context of solving problem (P1.\(D.d\)), we set \(L=40\). We set the stopping threshold and maximum number of iterations for Algorithm 1 as \(\epsilon=10^{-5}\) and \(I_{\max}=100\), respectively. The number of random initializations is set as \(\Gamma=100\). Furthermore, the half-isotropic radiation pattern is adopted for the AP's antennas, for which we have \(G_{A}(\theta,\phi)=2\) for \(0\leq\theta\leq\pi/2\) and \(G_{A}(\theta,\phi)=0\) otherwise. All results are averaged across 100 random and independent user distributions and channel realizations. ### _Performance Evaluation of Codebook Design_ In this subsection, we aim to evaluate the performance of our proposed codebook design based on sector division in Section III. First, we plot the elevation power pattern of the designed codeword \(\mathbf{\Theta}_{4,1}^{*}\) in Fig. 3, which is the codeword for sector \(\mathcal{A}_{4,1}\) obtained by setting the number of sectors as \(D=4\). To better demonstrate the effectiveness of our proposed codeword design via solving (P1.\(D.d\)), we consider three different channel powers for each elevation angle \(\theta\), as follows. * **Average effective channel power:** \(\frac{2}{\pi}\int_{0}^{\pi/2}|a_{1}|^{2}||\mathbf{h}(\theta,\phi,\mathbf{\Theta}_{4,1}^{*})||^{2}d\phi\).
* **Average reflection channel power:** \(\frac{2}{\pi}\int_{0}^{\pi/2}|a_{1}|^{2}||\mathbf{h}(\theta,\phi,\mathbf{\Theta}_{4,1}^{*})-\mathbf{h}_{d}(\theta,\phi)||^{2}d\phi\).
* **Average direct channel power:** \(\frac{2}{\pi}\int_{0}^{\pi/2}|a_{1}|^{2}||\mathbf{h}_{d}(\theta,\phi)||^{2}d\phi\).

It is observed from Fig. 3 that the average direct channel power decreases with the elevation angle \(\theta\), since the path gain, i.e., \(|a_{1}|^{2}\) in (4), decreases with \(\theta\) as the distance \(H_{AR}\tan\theta\) becomes longer. This demonstrates the severe near-far propagation issue in conventional multi-antenna AP systems without IRSs, and also validates the rationality of considering worst-case performance improvement as the objective for codeword design (see problem (P1.\(D\).\(d\))). Besides, it is observed that the average reflection channel power is small when \(\theta\leq\pi/6\) due to the weaker IRS reflection gain for these locations. However, these locations need the reflection gain from the IRSs less than farther locations do, since their shorter distance to the AP already yields sufficiently strong average direct channel power. In addition, the reflection channel power is observed to be limited when \(\theta\geq 7\pi/18\) due to the longer distance to the AP. Thus, locations at larger elevation angles w.r.t. the AP will also have limited average effective channel power. Moreover, the average effective channel power is observed to be larger than the average direct channel power under all considered \(\theta\)'s, even though the proposed codeword design only focuses on improving the worst-case performance (i.e., at the largest elevation angle) in each sector (see problem (P1.\(D\).\(d\))). Furthermore, the worst-case performance at the elevation angle \(\theta=\theta_{\max}\) is observed to be improved by \(6\) dB based on the designed codeword \(\mathbf{\Theta}_{4,1}^{*}\).

Fig. 3: Elevation power pattern of the designed codeword \(\mathbf{\Theta}_{4,1}^{*}\).

Next, Fig. 4 shows the azimuth power pattern of the designed codeword \(\mathbf{\Theta}_{4,1}^{*}\) by fixing the elevation angle as \(\theta=\theta_{\max}\), where we consider the following three channel powers for each azimuth angle \(\phi\).

* **Effective channel power:** \(|a_{1}|^{2}||\mathbf{h}(\theta_{\max},\phi,\mathbf{\Theta}_{4,1}^{*})||^{2}\).
* **Reflection channel power:** \(|a_{1}|^{2}||\mathbf{h}(\theta_{\max},\phi,\mathbf{\Theta}_{4,1}^{*})-\mathbf{h}_{d}(\theta_{\max},\phi)||^{2}\).
* **Direct channel power:** \(|a_{1}|^{2}||\mathbf{h}_{d}(\theta_{\max},\phi)||^{2}\).

It is observed from Fig. 4 that the direct channel power is the same for all azimuth angles, since all such locations have the same distance from the AP, i.e., \(H_{AR}\tan\theta_{\max}\), which leads to the same path gain with the AP, i.e., \(|a_{1}|^{2}\) (see (4)). Besides, it is observed that the effective channel power is much higher in sector \(\mathcal{A}_{4,1}\), i.e., \(0\leq\phi<\pi/2\), as compared to the other sectors, i.e., \(\mathcal{A}_{4,2}\) with \(\pi/2\leq\phi<\pi\), \(\mathcal{A}_{4,3}\) with \(\pi\leq\phi<3\pi/2\), and \(\mathcal{A}_{4,4}\) with \(3\pi/2\leq\phi<2\pi\). This is because the designed codeword \(\mathbf{\Theta}_{4,1}^{*}\) focuses on enhancing the channel power of all locations in sector \(\mathcal{A}_{4,1}\) via solving problem (P1.4.1).
In this case, the reflection channel power for all locations in sector \(\mathcal{A}_{4,1}\) can be significantly enhanced with the IRS reflection pattern \(\mathbf{\Theta}_{4,1}^{*}\). Besides, the single-reflection and double-reflection signals from all locations in \(\mathcal{A}_{4,1}\) via the IRSs with the pattern \(\mathbf{\Theta}_{4,1}^{*}\) are almost in-phase with those propagated through their direct channels, and thus the effective channel power of all locations in \(\mathcal{A}_{4,1}\) can be notably improved. However, for the locations in sectors \(\mathcal{A}_{4,2}\), \(\mathcal{A}_{4,3}\), and \(\mathcal{A}_{4,4}\), their reflection channel power achieved by the IRS reflection pattern \(\mathbf{\Theta}_{4,1}^{*}\) is much lower, and the reflection signals via the IRSs may combine either constructively or destructively with those via the direct channel, both of which result in less enhancement of, or even a loss in, their effective channel power as compared to the direct channel power.

Then, for performance comparison, we consider the following benchmark schemes.

* **Random codebook:** In this benchmark scheme, the codebook size is set as \(D\), and all codewords are randomly generated to satisfy the unit-modulus constraint, where the phase shifts of all reflecting elements in each codeword follow the uniform distribution in the interval \([0,2\pi)\). Similarly, the codeword with the best performance is employed for IRS reflection during user data transmission.
* **Unity reflection:** In this benchmark scheme, we set the reflection coefficients of all reflecting elements as 1, i.e., \(\vartheta_{j,n_{j}}=1,\ j\in\mathcal{J},n_{j}\in\mathcal{N}_{j}\).
* **No-IRS:** In this benchmark scheme, we consider the conventional multi-antenna AP system without integrated IRSs.

Fig. 5 shows the average SMAECP, i.e., \(\frac{1}{D}\sum_{d=1}^{D}E_{D,d}(\mathbf{\Theta}_{D,d}^{*})\), of the whole coverage area \(\mathcal{A}\) achieved by all considered schemes versus the number of sectors \(D\). It is observed that our proposed codebook design outperforms all benchmark schemes under all considered \(D\)'s. Specifically, when the number of sectors is set as \(D=8\), our proposed codebook design can achieve \(3.63\) dB, \(6.95\) dB, and \(7.66\) dB gains over the random codebook, unity reflection, and no-IRS schemes, respectively. It is also observed that the average SMAECP can be improved by increasing the number of sectors \(D\) in our proposed codebook design. This is because, as the number of sectors \(D\) increases, each codeword only needs to cover a smaller sector, which increases its alignment accuracy and thus better enhances the SMAECP. Furthermore, all schemes with IRSs are observed to outperform the no-IRS scheme, which indicates that the IRS-AP is an efficient solution to improve the coverage performance.

### _Performance Evaluation for Single-User Transmission_

In this subsection, we aim to evaluate the performance of our proposed codebook-based IRS reflection design for single-user transmission. For performance comparison, the upper-bound performance is considered in the following simulations, where the AoAs and complex coefficients of all paths are assumed to be perfectly estimated at the AP for constructing the effective channel with the user, and the successive refinement algorithm proposed in [27] is adopted for IRS reflection design, termed "perfect CSI".
We adopt the Rician fading channel to generate the effective channel with the user in (3) by setting \(\sum_{\psi=2}^{\Psi}\mathbb{E}(|a_{\psi}|^{2})/|a_{1}|^{2}=1/\kappa\) and \(a_{\psi}=\frac{|a_{1}|}{\sqrt{\kappa(\Psi-1)}}\mathcal{CN}(0,1)\) with \(\psi=2,3,\cdots,\Psi\), where \(\kappa\geq 0\) denotes the Rician factor. Besides, we fix \(\theta=\theta_{\max}\) (i.e., the worst-case user location) and randomly generate \(\phi\) from the interval \([0,2\pi)\) to consider a cell-edge user. In addition, we randomly generate \(\theta_{\psi}\) and \(\phi_{\psi},\ \psi=2,\cdots,\Psi\), from the intervals \([0,\theta_{\max}]\) and \([0,2\pi)\), respectively, to account for a random scattering environment, where the total number of signal propagation paths is set as \(\Psi=5\). Moreover, the performance metric is selected as the achievable rate of the user, i.e., \(R=\log_{2}\left(1+\frac{P||\mathbf{h}||^{2}}{\sigma^{2}}\right)\), where \(P\) denotes the transmit power of the user and \(\sigma^{2}\) denotes the noise power at the AP's antenna array.

Fig. 4: Azimuth power pattern of the designed codeword \(\mathbf{\Theta}_{4,1}^{*}\).

Fig. 5: Average SMAECP versus the number of sectors \(D\).

#### V-B1 Effect of Codebook Size

First, Fig. 6 shows the achievable rate of the user versus the codebook size \(D\), where the Rician factor is set as \(\kappa=10\) dB. As we can see, the achievable rate of the user increases with \(D\) in our proposed codebook-based reflection design, since codewords achieving higher SMAECP can be generated to better enhance the effective channel power of the user, which leads to a higher achievable rate. The achievable rate of the user for the random codebook-based reflection design is also observed to increase with \(D\). This is because more codeword candidates are randomly generated, which gives a higher chance of selecting a codeword that achieves better performance. In addition, our proposed codebook-based reflection design is observed to outperform all benchmark schemes under all considered \(D\)'s. Specifically, when \(D=8\), our proposed codebook-based reflection design yields 46.91%, 161.51%, and 193.83% rate improvements over the random codebook, unity reflection, and no-IRS schemes, respectively. Furthermore, it is observed that the achievable rate in our proposed codebook-based reflection design can approach that in the scheme with perfect CSI by setting the codebook size as \(D=8\), which thus greatly reduces the channel training overhead required for obtaining perfect CSI.

Fig. 6: Achievable rate of the user versus codebook size \(D\).

#### V-B2 Effect of Rician Factor

Next, Fig. 7 shows the effect of the Rician factor \(\kappa\) on the achievable rate of the user for all schemes, where we set the codebook size as \(D=4\). It is observed that the performance achieved by our proposed codebook-based reflection design increases with the Rician factor \(\kappa\) due to the stronger LoS channel. In addition, our proposed codebook-based reflection design is observed to achieve better performance than the other benchmark schemes under all considered \(\kappa\)'s, even in a strong NLoS environment (i.e., \(\kappa=0\) dB). This is because our proposed codebook-based reflection design can enhance the channel power of all paths from the coverage area \(\mathcal{A}\), including the LoS channel and all NLoS channels (in general, see Fig. 4), and thus the achievable rate of the user in our proposed codebook-based reflection design can be better enhanced.

Fig. 7: Achievable rate of the user versus Rician factor \(\kappa\).
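The Rician path gains and the achievable-rate metric defined at the start of this subsection can be generated as follows. This is a minimal sketch with illustrative function names; the construction of the effective channel \(\mathbf{h}\) from the path gains (Eq. (3)) is assumed to be available elsewhere.

```python
import numpy as np

def rician_path_gains(a1, kappa_db, num_paths, rng=None):
    """Draw the NLoS complex gains a_psi, psi = 2..Psi, such that the total
    NLoS power equals |a1|^2 / kappa, matching the setup described above."""
    rng = np.random.default_rng() if rng is None else rng
    kappa = 10 ** (kappa_db / 10)  # Rician factor, converted from dB
    std = np.abs(a1) / np.sqrt(kappa * (num_paths - 1))
    cn = (rng.standard_normal(num_paths - 1)
          + 1j * rng.standard_normal(num_paths - 1)) / np.sqrt(2)  # CN(0, 1)
    return std * cn

def achievable_rate(h, p_tx, noise_power):
    """Single-user achievable rate R = log2(1 + P * ||h||^2 / sigma^2)."""
    return np.log2(1.0 + p_tx * np.linalg.norm(h) ** 2 / noise_power)
```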
#### V-B3 Slow Adaptation for IRS Passive Reflection

Then, we aim to show that our proposed codebook-based IRS reflection design can achieve considerable communication performance improvement even with slow adaptation to wireless channels. Specifically, assuming that all involved channels are constant during each fading block, the AP only needs to implement the IRS reflection training at the first fading block; the IRS reflection pattern can then remain unchanged for a long time (e.g., on the order of hundreds to thousands of fading blocks), which helps reduce both the channel training overhead and the control overhead for adjusting the IRS reflection pattern. The reason is that our proposed codebook design is based on sector division (see Section III), where each codeword is generated to ensure the performance of all locations within its corresponding sector (see problem (P1.\(D\).\(d\))). As a result, as long as the user stays in the same sector, the IRS reflection pattern does not need to be changed to ensure the average performance of the user. To this end, we consider a scenario with a low-mobility user by fixing its elevation angle as \(\theta=\theta_{\max}\) and changing its azimuth angle from \(0\) to \(2\pi\) with an angular speed of \(\frac{\pi}{36}\) rad/s. As a result, the linear speed of the user is given by \(v=2\) m/s, which results in a Doppler frequency with the maximum value of \(f_{\max}=vf_{c}/c=40\) Hz, where \(c=3\times 10^{8}\) m/s denotes the speed of light. The duration of each fading block is set as \(1/(10f_{\max})=0.0025\) s, and thus each one-second time instant comprises 400 fading blocks. Let \(T>0\) denote the time duration for which an IRS reflection pattern is fixed, i.e., the IRS reflection is updated once every \(T\) s.

Fig. 8 depicts the average achievable rate (over 400 fading blocks) at each time instant \(t\) for \(D=8\) and \(D=4\), respectively, where the Rician factor is set as \(\kappa=10\) dB and "fast adaptation" refers to updating the IRS reflection at each fading block; Fig. 9 shows the corresponding user movement trajectories. It is observed from Fig. 8(a) that in the case of \(D=8\), the performance achieved by the slow adaptation with \(T=3\) s can approach that achieved by the fast adaptation at all time instants, while the performance achieved by the slow adaptation with \(T=6\) s becomes worse when \(10\) s \(\leq t\leq 12\) s. This is because, as shown in Fig. 9(a), in the case of \(D=8\) the user is in sector \(\mathcal{A}_{8,1}\) when \(1\) s \(\leq t\leq 9\) s and in sector \(\mathcal{A}_{8,2}\) when \(10\) s \(\leq t\leq 18\) s. Accordingly, for the slow adaptation with \(T=3\) s, the IRS reflection pattern can always be updated to cover the sector where the user is located. In contrast, for the slow adaptation with \(T=6\) s, the IRS reflection pattern is designed to cover sector \(\mathcal{A}_{8,1}\) when \(7\) s \(\leq t\leq 12\) s, while the user moves to sector \(\mathcal{A}_{8,2}\) when \(10\) s \(\leq t\leq 18\) s, which thus results in a performance loss when \(10\) s \(\leq t\leq 12\) s. Then, the IRS reflection pattern will be updated at \(t=13\) s, which will cover \(\mathcal{A}_{8,2}\) and re-boost the performance achieved by the slow adaptation with \(T=6\) s when \(13\) s \(\leq t\leq 18\) s.
The above results indicate that the time duration for fixing the IRS reflection pattern, i.e., \(T\), should be properly set based on the user's moving speed to achieve the best tradeoff between improving the user's achievable rate and reducing the channel training overhead. Furthermore, by comparing the performance achieved by the slow adaptation with \(T=6\) s in the case of \(D=8\) (see Fig. 8(a)) and that in the case of \(D=4\) (see Fig. 8(b)), it can be observed that the performance achieved by the slow adaptation with \(T=6\) s is similar to that achieved by the fast adaptation at all time instants in the latter case. This is because, in the latter case, the user is always in sector \(\mathcal{A}_{4,1}\) when \(1\) s \(\leq t\leq 18\) s (see Fig. 9(b)), and the IRS reflection always covers sector \(\mathcal{A}_{4,1}\), which ensures its average performance. This indicates that a smaller codebook size \(D\) could help reduce the channel training overhead by allowing a longer time duration for fixing the IRS reflection pattern, although it will compromise the user's achievable rate due to the lower channel power gain of the designed codewords (see Fig. 6).

### _Performance Evaluation for Multi-User Transmission_

Finally, we discuss how to extend our proposed codebook-based IRS passive reflection design from single-user transmission to multi-user transmission. Since in multi-user transmission the users may be distributed at arbitrary locations in the coverage area \(\mathcal{A}\), the IRS reflection should in general be designed to cover larger sectors including more users' locations, which leads to a smaller number of sectors for the selected codeword, \(D\). As a result, how to select the value of \(D\) for the codebook for multi-user transmission is a challenging problem in practice. To tackle this problem, we propose to construct the overall codebook for multi-user transmission as the union of all codebooks for single-user transmission with different values of \(D\) (or numbers of sectors), where their sum is equal to the new codebook size for multi-user transmission, denoted as \(X\). Let \(\tilde{\mathcal{C}}_{X}\) denote the overall codebook for multi-user transmission with size \(X\). For example, to construct the codebook \(\tilde{\mathcal{C}}_{15}\), we first design the codebooks for single-user transmission with \(D=1\), \(D=2\), \(D=4\), and \(D=8\), respectively, and then construct \(\tilde{\mathcal{C}}_{15}\) as the union of \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), \(\mathcal{C}_{4}\), and \(\mathcal{C}_{8}\). Specifically, when \(X=1+2+4+8=15\), we have \(\tilde{\mathcal{C}}_{15}=\mathcal{C}_{1}\cup\mathcal{C}_{2}\cup\mathcal{C}_{4}\cup\mathcal{C}_{8}\). The same procedure is applied for constructing the codebook \(\tilde{\mathcal{C}}_{X}\) with other codebook sizes or values of \(X\).

To evaluate the performance of the above proposed codebook-based IRS reflection design for multi-user transmission, we set the number of users as \(K=4\). The effective channel for each user \(k\) is defined as \(\boldsymbol{h}_{k}\), which can be obtained as in (3) and modeled as Rician fading (see Section V-B). Similar to single-user transmission, the locations of the \(K=4\) users are all randomly generated at the cell edge of the coverage area \(\mathcal{A}\), and the random-scattering environment is employed for generating NLoS channels.

Fig. 8: Performance comparison between fast adaptation and slow adaptation.
Fig. 9: Illustration of user movement trajectory.

It is assumed that the IRS-AP adopts minimum mean square error (MMSE) combining with the successive interference cancellation (SIC) technique to decode the signals from different users, and thus the sum-rate of all users is given by \(R_{sum}=\log_{2}\text{det}\big{(}\boldsymbol{I}_{M}+\sum_{k=1}^{K}\frac{p_{k}}{\sigma^{2}}\boldsymbol{h}_{k}\boldsymbol{h}_{k}^{H}\big{)}\), with \(p_{k}=P/K\) denoting the transmit power of user \(k\); this sum-rate is used as the performance metric in the simulations.

Fig. 10 shows the sum-rate of all users achieved by all considered schemes versus the codebook size \(X\), where we set \(\kappa=10\) dB and the codebook size for the random codebook is also set as \(X\). It is observed that our proposed codebook-based reflection design outperforms all benchmark schemes under all considered values of \(X\) in multi-user transmission. Specifically, by setting the codebook size as \(X=15\), our proposed codebook-based reflection design is observed to achieve 55.29%, 149.06%, and 180.85% sum-rate improvements over the random codebook, unity reflection, and no-IRS schemes, respectively. Moreover, Fig. 11 shows the probability of the optimal number of sectors for the selected codeword to achieve the best communication performance for single-user transmission (i.e., the largest achievable rate of the user) and multi-user transmission (i.e., the maximum sum-rate of the users), respectively, where we consider 1000 realizations of user locations and set \(\kappa\rightarrow\infty\) to consider a LoS environment. It is observed that the optimal number of sectors for single-user transmission is always fixed at its largest value, i.e., \(8\) (see Fig. 11(a)), while that in multi-user transmission varies with different realizations of user locations (see Fig. 11(b)). This is because in single-user transmission, the achievable rate of the user depends only on its effective channel power, which can be significantly enhanced by setting the number of sectors to the largest value to obtain the largest power-gain codeword. In contrast, in multi-user transmission, the users are distributed at random locations in the coverage area \(\mathcal{A}\), and the sum-rate of the users is affected by other factors, such as the interference among them, both of which cause the optimal number of sectors for the selected codeword to vary with the user locations.
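As a quick numerical companion to the sum-rate expression above, the following sketch evaluates \(R_{sum}\) for given effective channels; the matrix layout and function name are our own choices.

```python
import numpy as np

def sum_rate(H, p_total, noise_power):
    """Sum-rate under MMSE-SIC reception:
    R_sum = log2 det(I_M + sum_k (p_k / sigma^2) h_k h_k^H),
    where H is an (M, K) matrix whose k-th column is user k's effective channel."""
    m, k = H.shape
    p_k = p_total / k  # equal power split across the K users
    gram = (p_k / noise_power) * (H @ H.conj().T)  # sum_k (p_k/sigma^2) h_k h_k^H
    return float(np.real(np.log2(np.linalg.det(np.eye(m) + gram))))
```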
## VI Conclusions

In this paper, we proposed a codebook-based IRS reflection design for the IRS-AP to enhance the average channel power of the whole coverage area. The codebook contained only a small number of codewords, keeping the codebook size small to reduce the channel training overhead, and was designed offline based on the proposed sector division strategy. Each codeword for the IRS reflection pattern was then optimized to maximize the SMAECP in its served sector, which was efficiently obtained by adopting the AO and SDR methods. By applying the codewords in the designed codebook, the AP can select the IRS reflection pattern with the best communication performance for data transmission. Numerical results demonstrated that our proposed codebook-based IRS reflection design outperforms other benchmark schemes in both single-user and multi-user transmissions. It was also shown that the proposed sector division strategy based on the azimuth angle enables the codebook-based IRS reflection to adapt slowly to wireless channels, where the number of sectors and the time duration for updating the IRS reflection should be properly selected to balance the tradeoff between improving the user's achievable rate and reducing the channel training overhead. There are several promising directions worthy of further investigation for reflection codebook design in IRS-AP based communications, such as accounting for imperfections in the IRS phase-shift model [45] when designing the reflection codebook, designing the reflection codebook for frequency-selective wideband channels, and so on.
2309.07608
Identifying and analysing toxic actors and communities on Facebook by employing network analysis
There has been an increasingly widespread agreement among both academic circles and the general public that the Social Media Platforms (SMPs) play a central role in the dissemination of harmful and negative sentiment content in a coordinated manner. A substantial body of recent scholarly research has demonstrated the ways in which hateful content, political propaganda, and targeted messaging on SMPs have contributed to serious real-world consequences. Adopting inspirations from graph theory, in this paper we apply novel network and community finding algorithms over a representative Facebook dataset (n=608,417) which we have scraped from 630 pages. By applying the Girvan-Newman algorithm over the historical dataset, our analysis finds five communities of coordinated networks of actors within the context of Indian far-right Hindutva discourse. This work further paves the path for future potentials of applying such novel network analysis algorithms to SMPs, in order to automatically identify toxic coordinated communities and sub-communities, and to possibly resist real-world threats emerging from information dissemination in the SMPs.
Ritumbra Manuvie, Saikat Chatterjee
2023-09-14T11:16:16Z
http://arxiv.org/abs/2309.07608v1
# Identifying and analysing toxic actors and communities on Facebook by employing network analysis ###### Abstract There has been an increasingly widespread agreement among both academic circles and the general public that the Social Media Platforms (SMPs) play a central role in the dissemination of harmful and negative sentiment content in a coordinated manner. A substantial body of recent scholarly research has demonstrated the ways in which hateful content, political propaganda, and targeted messaging on SMPs have contributed to serious real-world consequences. Adopting inspirations from graph theory, in this paper we apply novel network and community finding algorithms over a representative Facebook dataset (n=608,417) which we have scraped from 630 pages. By applying the Girvan-Newman algorithm over the historical dataset, our analysis finds five communities of coordinated networks of actors within the context of Indian far-right Hindutva discourse. This work further paves the path for future potentials of applying such novel network analysis algorithms to SMPs, in order to automatically identify toxic coordinated communities and sub-communities, and to possibly resist real-world threats emerging from information dissemination in the SMPs.

## 1 Introduction

Studies show that disruptive actors around the world have been successfully using Social Media Platforms (SMPs) to fuel hate speech against minorities and spread negative sentiment content (see Belew and Massanari 2018, Matamoros-Fernandez and Farkas 2021, Manuvie and Chatterjee 2023). Consequently, these actors tend to manipulate the public discourse by suppressing independent voices through counter-speech or excessive content production, often acting in a coordinated way that disrupts the core values of democratic structures. In the present study, our goal is to apply novel community searching algorithms over a select Facebook dataset of 630 pages and to algorithmically analyse their networked and coordinated behaviour in the digital domain. These pages were identified on Facebook using keyword search methods and by following Facebook's recommendation system between April 2020 and December 2020. Keywords surrounding popular discourses such as "Love Jihad", "Corona Jihad", and "spit-jihad", alongside identity-specific slang terminology, were used to identify the initial set of pages. Subsequently, the Facebook recommendation system was used to manually 'like' and collect URLs of 803 pages that are associated with cross-sharing and cross-posting anti-Muslim and anti-migrant content and discourses, respectively. Through careful human iteration, the list was reduced to 653 pages with more than 1,000 page likes and a daily posting frequency. Once access to the CrowdTangle platform was made available to us, these pages were batch uploaded to the CrowdTangle dashboard. Of the total 653 pages, only 630 Facebook pages were tracked by CrowdTangle due to system limitations in terms of privacy and performance of the page at the time of the upload. The 630 pages were further sorted into CT lists based on discourses, narratives, and the fan-following of individual political leaders or ideological groups. The pages were further monitored for qualitative analysis to ensure that they fall into the category of regular hate-speakers or perpetrators of hateful content. Results of these analyses were shared with Facebook's parent company Meta, focusing on extremely hateful content and lists.
In response to our report, Meta provided a statement suggesting that it deploys an automated language processing system to flag and remove content that violates its hate-speech policies, or where a post falls into the category of a dangerous organisation or dangerous actor. We therefore decided to run two open-access sentiment analysis and hate-speech detection models over our CrowdTangle dataset of toxic actors (see Manuvie and Chatterjee, 2023). The textual content associated with each post, along with the predicted sentiment and hate-speech labels, is released on our public GitHub page.1 These sequences of text strings and the associated sentiment and hate-speech labels (as predicted by the XLM-T-based language models) can be considered "weak-labelled" datasets for further research purposes. Backed by our qualitative discourse analysis of these groups over the last two years, we can confirm that not everything posted by these actors qualifies as hate-speech content, or can be validly detected as such. However, our results establish that the amount of negative messaging and hate content posted in these groups repeatedly should be sufficient to de-platform at least some of them under Meta's current policies on content moderation.

The goal of the present paper is to further extend our previous study and to exhibit how community finding algorithms can be applied to such datasets to find underlying network effects, and to (algorithmically) find underlying communities or groups of actors that are responsible for disseminating information in the SMPs in a coordinated manner. To do this, we have structured the paper in the following way. In section 2, we discuss the methodology underlying our data selection and data archiving process. In section 3, the overall statistics of the dataset are shown. The detailed processes of data reduction and network analysis are subsequently presented in section 4.

## 2 Methodology

The overarching goal of our research is to search, identify and validate Facebook actors (i.e., pages or groups) that are part of an information ecosystem which allegedly spreads hate speech and disinformation. Such ecosystems of information dissemination and circulation can potentially form "echo chambers" connected to a certain ideology or narrative, or even create social polarisation on digital platforms. We study the patterns in the behaviour of the actors within the Facebook platform (i.e., link sharing) and further attempt to identify the groups and communities of information dissemination by employing network analysis. The first step in identifying the toxic actors is to begin with a primary list of toxic fan pages and groups that contribute to the process of disseminating hate narratives against religious minorities in India. To do this, we chose the following as the determinant factors:

* The individuals who have a public track record of hate speech and/or providing hate narratives targeting religious minorities, particularly Muslims.
* They have declared or exhibited their affinity toward the Hindutva ideology. The individuals are chosen due to their influential position in society to mobilise people against the minorities in the pursuit of achieving the Hindutva aims. We also document their nexus to governance institutions, political leadership and Hindutva-based organisations.
Here, we use the following indicators to record their influential position in society: politicians, media persons, vigilante leaders, Hindutva-based religious influencers, state officials, or Government officers. Building on this selection process, the second task for mapping and monitoring the fan pages and groups generating hate content in the name of these hate actors (individuals) was to list the fan pages and groups found through a simple search for their names on the Facebook and CrowdTangle platforms. The CrowdTangle Search tool helps in searching content across social media by entering a keyword, hashtag or URL into the search bar. We modified the search tool to include filters to sort by country, language options (Hindi and English) and timeframe. Based on such search combinations, we found an initial list of 630 pages spreading hate narratives. However, the number of pages fluctuates, as pages are removed (by Facebook), added (as new pages are formed), or deleted from Facebook (by page admins or by Facebook). Although such fluctuations do affect the overall statistics, in our research process we download the historical dataset (in CSV format) and archive it with a time-stamp, so that our findings/claims can be reproduced later using that archived dataset, even if some pages or groups from our compiled lists are later deleted from the Facebook platform. Subsequently, for data visualisation and inspection, we ingest the dataset into an Elasticsearch database and plot various visualisations using the Kibana frontend (hosted by Elastic Cloud).2

Footnote 2: To see the live dashboard, visit this link.

## 3 Dataset and the overall statistics

Corresponding to each entry of the CrowdTangle data, there are 40 metadata fields, as shown in Appendix 1. Following the CrowdTangle data sharing policies, we cannot publicly share the contents of the posts. So, in the next subsections we only show the results derived by processing and analysing the dataset that was scraped using CrowdTangle. As of the 17th of April, 2023, we have scraped the Facebook historical data corresponding to 630 Facebook pages using the CrowdTangle platform from 31-12-2014 onwards and have performed our statistical analysis on it in this paper. The dataset consists of a total of 608,417 entries. In order to inspect the timeline of the posts, we use the 'Post Created Date' and 'Post Created Time' fields. As shown in Appendix 1, the integer values corresponding to the 'statistics' object provide us with information about like, comment, and share counts. The URL strings corresponding to the field 'expandedLinks.original' provide us with the statistics of the most shared links, which we have compiled in Appendix 1. Text entries corresponding to fields like 'Page Description', 'message', 'image text' or 'title' are used to understand the overall narratives and for plotting word clouds. In order to have a broader overview of the activities of the actors, we begin with a time-histogram of the dataset. In Figure 1, we show the histogram of the time series data that was scraped between the dates 31-12-2014 and 17-04-2023. The top 20 actors are shown in the pie diagram of Figure 2. To provide a more intuitive visualisation of the relative activity of the actors, the top 50 actors are shown in Figure 3 in a word cloud. Concerning the distribution of the types of posts, 45.14%, 32.48%, 10.9%, 7.12% and 3.02% of the contents correspond to photos, links, native videos, statuses and live videos, respectively.
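These summary statistics (and the admin-country breakdown that follows) can be reproduced from an archived CrowdTangle CSV export with a few pandas calls. The sketch below is illustrative: the file name is hypothetical, and column names other than those quoted in the text ('Post Created Date', 'statistics', 'expandedLinks.original', 'account.name') are assumptions about the export format.

```python
import pandas as pd

# Load an archived CrowdTangle historical export (hypothetical file name).
df = pd.read_csv("crowdtangle_historical_export.csv")

print("Total entries:", len(df))  # 608,417 in our archive

# Distribution of post types (photos, links, native videos, ...);
# 'Type' is an assumed column name for the post-type field.
print(df["Type"].value_counts(normalize=True).mul(100).round(2))

# Top 20 most active actors by post count, as in the pie diagram of Figure 2.
print(df["account.name"].value_counts().head(20))

# Most shared links, as compiled in the appendix.
print(df["expandedLinks.original"].value_counts().head(100))
```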
Considering the top admin countries, 88.06% of the posts correspond to India (IN), 10.5% of admins are from South Africa (SA), 0.6% from Australia (AU), 0.47% from Pakistan (PK) and 0.25% from Bangladesh (BD). Other admin countries present in the dataset are the United States (US), Canada (CA) and Bhutan (BT). We have also extracted and plotted the distribution of branded content sponsors in Figure 4. Moreover, in order to understand the overall narrative and sentiment of the messages, we have also mapped a word cloud of the shared messages in Figure 5. The plot shows their evident leaning towards right-wing Hindutva sentiment through the most frequently occurring phrases within the word cloud, like _Jay Shree Ram_, _Bajrang Dal_, _Katar Hindustani_, _Jai Hind_, etc.

Figure 2: Top 20 actors and their percentage of posts within the dataset.

Figure 3: Wordcloud of the top 50 actors.

Figure 4: Percentages of branded content sponsors within the dataset.

Figure 5: Wordcloud of messages shared by the actors.

In Appendix 2, we show the 100 most shared links and their corresponding counts. We have also implemented an automated URL validator which checks whether the links are valid or return a 404 error.3 We have run this algorithm on our dataset and hereby report that 57.3% of the top 1000 non-Facebook links are identified to be invalid or broken. This implies that the contents corresponding to these links have been deleted by their owners from their sites. It is important to note here that when Facebook contents are taken down, either by the page/group admin or by Facebook, the URLs do not return a 404 error. So, through the present version of our URL validator algorithm, it is not possible to identify such webpages. Hence, we only applied our algorithm to the top 1000 non-Facebook links (i.e., we filtered out all the links from our list which contain "www.facebook.com").

Footnote 3: See our ipython notebook code for the automated URL validation: https://github.com/LondonStory/CrowdTangle-New-Actor-Searching-Algorithm/blob/main/Automated_URL_validator.ipynb.

## 4 Network analysis

Networks are commonly described by sets of two items, "nodes" and "edges", which are the basic building blocks of a mathematical _graph_. Edges are connective lines between two nodes. In order to visualise the graph of link-sharing behaviour and the connectivities between various actors in our dataset, we have mapped the whole dataset into a graph, subsequently performed graph network analysis, and finally visualised the identified sub-graphs and communities in Gephi. Methodologically, this is done by creating separate nodes and edges data frames. The nodes have the map **(Id, Label)**, whereas the edges have the map **(Source, Target)**. We use the entries of the '_account.name_' field as our 'Source', and '_expandedLinks.original_' as our 'Target'. We thereafter employ the Python NetworkX package to perform network analysis in order to analyse the graph and find suspected coordinated behaviours and communities. Finally, we also create network visualisations of the detected communities by employing the visualisation software Gephi.4 The network maps, together with the graph analysis, help us to understand the connectivities between different groups of actors and furthermore show how these networks of information ecosystems are connected through link-sharing behaviour.
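A minimal sketch of this graph construction step, assuming the archived CSV from Section 3 and the two field names quoted above; the file name and output paths are illustrative.

```python
import networkx as nx
import pandas as pd

# Build the actor-link graph: 'account.name' entries are the Source nodes
# and 'expandedLinks.original' entries are the Target nodes.
df = pd.read_csv("crowdtangle_historical_export.csv")
edges = df[["account.name", "expandedLinks.original"]].dropna()
edges.columns = ["Source", "Target"]

G = nx.from_pandas_edgelist(edges, source="Source", target="Target")

# Export (Id, Label) nodes and (Source, Target) edges tables for Gephi.
pd.DataFrame({"Id": list(G.nodes), "Label": list(G.nodes)}).to_csv(
    "nodes.csv", index=False
)
edges.to_csv("edges.csv", index=False)
```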
Footnote 4: The ipython Google Colab notebooks that perform the network analysis and prepare the nodes and edges files for Gephi visualisation can be found on our public GitHub page at https://github.com/LondonStory/CrowdTangle-Network-Analysis.

### Graph centrality analysis

In order to characterise the importance of different nodes within the network, we compute the _degrees_ corresponding to each node. The degree is a measure that provides us with the number of edges each node has. In other words, the degree tells us how many neighbours a particular node has. It can be assumed that the nodes which have the most edges are the most important or central within the network, as they are directly connected to lots of other nodes. Hence, nodes with high degrees are expected to be important actors in the network. In other words, nodes with a high degree tend to be more influential and popular in the network. These individuals may be well-connected to many others and have a greater ability to spread information or ideas. The degree of nodes can also help us understand the clustering of the network, that is, how tightly connected groups of nodes are to one another. If nodes in a group have a high degree of connection to each other but few connections to nodes outside the group, it indicates a tightly-knit community. Within our data, which contains a total of **414,490 nodes** and **537,225 edges**, the maximum degree of the graph is 23,497, associated with the page "_The Kapil Sharma Fan Club_". We also compute the average degree of the nodes in the network, which is 2.6.

The network graph created above by mapping the actors and their shared links contains **54 connected components**. The number of connected components in a graph is a way of measuring how many separate clusters or groups of nodes exist in the graph. A connected component is a subgraph of a graph in which every node is connected to every other node by at least one path. In other words, all nodes in a connected component can be reached from any other node in the same component by traversing edges in the graph. The presence of 54 connected components in our dataset implies that the whole network is not fully connected and that there are distinct communities or groups of nodes. Another associated network centrality measure that we computed next is _degree centrality_. This is one of the metrics used in the literature to evaluate the importance of a node and is defined by the number of neighbours that a node has divided by the total number of neighbours that the node could possibly have:

_Degree centrality = No. of neighbours of a node / No. of neighbours the node could possibly have_

Because self-loops are not allowed in social networks (i.e., an actor cannot follow itself), the number of neighbours an individual actor can possibly have equals the number of all other nodes within the network, excluding itself. We find that the maximum degree centrality of our network corresponds to the page "Pushpendra Kulshrestha Fans Club", with a value of 0.057. In order to understand the correlation between the degrees and the degree centrality measures of the nodes, in Figure 6 we plot a scatter diagram showing the distribution of degrees on the horizontal axis versus the degree centralities of the nodes on the vertical axis. In Table 1 we also show the top 10 actors who have the highest degree centrality within the dataset.
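The quantities reported above map directly onto NetworkX calls; a minimal sketch, assuming the graph `G` built in the previous snippet:

```python
import networkx as nx

# Degree statistics: maximum and average degree of the graph.
degrees = dict(G.degree())
print("Max degree:", max(degrees.values()))                            # 23,497 here
print("Average degree:", sum(degrees.values()) / G.number_of_nodes())  # ~2.6

# Number of connected components (54 in our dataset).
print("Connected components:", nx.number_connected_components(G))

# Degree centrality: degree divided by the (n - 1) possible neighbours.
centrality = nx.degree_centrality(G)
top10 = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:10]
for page, value in top10:
    print(f"{page}: {value:.3f}")
```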
The scatter plot between degree centrality and degrees is employed in order to understand the relationship between these two measures. Nodes with higher degree centrality and degree are typically considered more important in the network, and are considered to play key roles in information flow. Usually, nodes with higher degrees tend to have higher degree centrality, which indicates that they are more important within the network. However, there may be outliers or 'anomalous' nodes within the network, which have a high degree centrality despite having a relatively low degree. Such nodes may act as 'hubs' or bridges between different parts of the network. By analysing Figure 6 and Table 1, we see that there is a linear correlation between the degree and degree centrality measures. The top two actors (namely, _Pushpendra Kulshrestha Fans Club_ and _We Support Hindutva_) generally surpass the other actors regarding their engagement in the network. The 3rd to 9th actors from Table 1 form a cluster in the scatter plot. The next cluster we identify consists of _Tejasvi Surya Fans Club_, _True Nationalist_ and _RohitSardana Fans Club_. Other smaller clusters are highlighted in Figure 6.

**Table 1. Top 10 pages with highest degree centrality measures in the whole graph**

| Page name | Degree centrality |
| --- | --- |
| Pushpendra Kulshrestha Fans Club | 0.057 |
| We Support Hindutva | 0.045 |
| We support hindutva | 0.041 |
| Amit Shah Fans Team | 0.039 |

In a connected graph, the _closeness centrality_ of a node is another measure of centrality in a network, calculated as the reciprocal of the sum of the lengths of the shortest paths between the node and all other nodes in the graph. Similarly, _betweenness centrality_ is a measure that quantifies the number of shortest paths between pairs of nodes in the network that pass through a given node:

**Betweenness centrality** = No. of shortest paths through a node / No. of all possible shortest paths that exist between every pair of nodes in the graph
In other words, a node with high betweenness centrality lies on many of the shortest paths connecting other pairs of nodes in the network. Mathematically speaking, betweenness centrality is defined as the fraction of the shortest paths in the network that pass through a given node. It is calculated by summing the number of shortest paths between all pairs of nodes in the network that pass through the node in question and then dividing by the total number of shortest paths between all pairs of nodes. Nodes with high betweenness centrality are often considered important 'connecting bridges' in the network, as they can control the flow of information between different parts of the network. Removing or disrupting nodes with high betweenness centrality can have a significant impact on the structure and function of the network.

**Table 2. Top 10 largest sub-graphs within the network and their characteristics**

| Sub-graph order | No. of nodes | No. of edges | Average distance between two nodes | Page with max centrality |
| --- | --- | --- | --- | --- |
| 1 | 412,914 | 535,702 | 2.15 | _Pushpendra Kulshrestha Fans Club_ |
| 2 | 633 | 632 | 2.00 | _Akhand_bharat_ |
| 3 | 156 | 155 | 1.99 | _Hindu Heritage Endowment_ |
| 4 | 155 | 154 | 1.99 | _Pushpendra_ |
| 5 | 118 | 117 | 1.98 | _APR News_ |
| 6 | 35 | 34 | 1.94 | _United hindu front_ |
| 7 | 34 | 33 | 1.94 | _Kattar Hindu group_ |
| 8 | 33 | 32 | 1.94 | _Pushpendra_ |
| 9 | 30 | 29 | 1.93 | _Bajrang Dal Aryachauhan_ |
| 10 | 25 | 24 | 1.92 | _Hindu Unity_ |

From our dataset, for all of the top 10 sub-graphs, the page listed in Table 2 with the highest degree centrality also has the maximum closeness centrality and betweenness centrality. Subsequently, as a last step in our analysis, we applied the Girvan-Newman algorithm to our dataset to detect communities within the network. The algorithm starts by calculating the betweenness centrality for all edges in the graph. Next, the edge with the highest betweenness centrality is removed from the graph. This process is repeated until the graph is split into multiple disconnected components. The resulting disconnected components thus found are the communities of the original graph; a code sketch of this procedure is given below. Out of the 5 communities found, the top 3 identified by the algorithm are plotted using the Gephi visualisation software and shown in Figures 7-8.

Figure 7: Gephi plot of the first community that was identified by the Girvan-Newman algorithm. The color map and the text size of page names correspond to the importance of the actors in terms of their activity.

Figure 8: Gephi plots of the second (top) and third (bottom) communities that were identified by the Girvan-Newman algorithm. The color map and the text size of actor names correspond to the importance of the actors in terms of their activity.
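A minimal sketch of the community-finding step with NetworkX's built-in Girvan-Newman implementation, again assuming the graph `G` from the earlier snippets; note that the repeated edge-betweenness recomputation is expensive, so in practice the algorithm is run on a connected component of interest rather than the full graph.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Work on the largest connected component of the actor-link graph.
giant = G.subgraph(max(nx.connected_components(G), key=len)).copy()

# Girvan-Newman yields successively finer partitions (2 communities, then 3, ...).
partitions = girvan_newman(giant)
communities = next(p for p in partitions if len(p) >= 5)  # stop at 5 communities

for i, community in enumerate(sorted(communities, key=len, reverse=True), 1):
    print(f"Community {i}: {len(community)} nodes")
```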
## 5 Discussions and conclusions

The systematic data collection, reduction and network analysis method shown in this paper establishes a systematic, graph-theory based scientific approach that can be adopted to discover communities and major actors in any type of social media data, be it from Facebook, Twitter or Reddit. By analysing the primary actors within the first three communities (shown in Figures 7-8), we find that the top actors in these communities are _Pushpendra Kulshrestha Fans Club_, _I Am Proud To Be A Hindu_ and _Kattar Hindu Mahakaal_, respectively. The networks of actors within these communities further expose their underlying coordination in the information dissemination process on Facebook. Although network analysis has earlier been applied to analyse political narratives within the contexts of other countries, the application of such novel community finding algorithms in the context of right-wing Hindutva discourse in India remains unaddressed in the literature. Such analyses thus pave the way for applying novel network analysis algorithms to social media data, for automatically identifying threatening coordinated communities, and for possibly modelling, predicting and resisting future threats emerging from coordinated information dissemination in the SMPs.

## Acknowledgements

The authors acknowledge the help received from CrowdTangle, a Facebook-owned tool that tracks interactions on public content from Facebook pages and groups, verified profiles, Instagram accounts, and subreddits. The authors acknowledge the role of members of the _Foundation The London Story_ in constructive discussions on this paper.
2309.16063
Time Delay Cosmography with a Neural Ratio Estimator
We explore the use of a Neural Ratio Estimator (NRE) to determine the Hubble constant ($H_0$) in the context of time delay cosmography. Assuming a Singular Isothermal Ellipsoid (SIE) mass profile for the deflector, we simulate time delay measurements, image position measurements, and modeled lensing parameters. We train the NRE to output the posterior distribution of $H_0$ given the time delay measurements, the relative Fermat potentials (calculated from the modeled parameters and the measured image positions), the deflector redshift, and the source redshift. We compare the accuracy and precision of the NRE with traditional explicit likelihood methods in the limit where the latter is tractable and reliable, using Gaussian noise to emulate measurement uncertainties in the input parameters. The NRE posteriors track the ones from the conventional method and, while they show a slight tendency to overestimate uncertainties, they can be combined in a population inference without bias.
Ève Campeau-Poirier, Laurence Perreault-Levasseur, Adam Coogan, Yashar Hezaveh
2023-09-27T23:10:36Z
http://arxiv.org/abs/2309.16063v1
# Time Delay Cosmography with a Neural Ratio Estimator ###### Abstract We explore the use of a Neural Ratio Estimator (NRE) to determine the Hubble constant (\(H_{0}\)) in the context of time delay cosmography. Assuming a Singular Isothermal Ellipsoid (SIE) mass profile for the deflector, we simulate time delay measurements, image position measurements, and modeled lensing parameters. We train the NRE to output the posterior distribution of \(H_{0}\) given the time delay measurements, the relative Fermat potentials (calculated from the modeled parameters and the measured image positions), the deflector redshift, and the source redshift. We compare the accuracy and precision of the NRE with traditional explicit likelihood methods in the limit where the latter is tractable and reliable, using Gaussian noise to emulate measurement uncertainties in the input parameters. The NRE posteriors track the ones from the conventional method and, while they show a slight tendency to overestimate uncertainties, they can be combined in a population inference without bias.

_Machine Learning for Astrophysics, Honolulu, Hawaii, USA, 2023._

## 1 Introduction

Over the past decades, the inflationary \(\Lambda\)CDM model has had striking success in explaining cosmic microwave background (CMB) observations and the detailed evolution of the Universe. The current expansion rate of the Universe, known as the Hubble constant (\(H_{0}\)), is essential for many studies, including understanding the nature of dark energy, neutrino physics, and testing general relativity. In the past decade, the measured values of \(H_{0}\) from different probes have diverged: the latest CMB and Type Ia supernovae data now disagree at more than 4\(\sigma\) (Riess et al., 2022). Time delay cosmography can provide an independent measurement of \(H_{0}\) with different systematics from existing methods. This can be done using the time delays between the multiple images of a strongly lensed variable light source. Previous measurements have achieved a precision between 2% and 8% (Birrer and Treu, 2021) using this method. Meanwhile, 1% precision is required to solve the Hubble tension (Weinberg et al., 2013; Treu et al., 2022). This could be achieved with data available in the next decade with a new generation of survey telescopes. The Rubin Observatory, in particular, is expected to detect thousands of strongly lensed quasars (Oguri and Marshall, 2010). However, current analysis methods have limitations in terms of complexity and scalability. They rely on likelihood-based approaches, such as Markov Chain Monte Carlo (MCMC) and nested sampling, which require explicit likelihoods and are not amortized. They also require sampling joint posterior distributions of nuisance parameters while only the \(H_{0}\) marginal is of interest. Hence, they scale poorly as nuisance parameters are included to ensure unbiased inference. The simulation-based inference (SBI) framework allows handling complex, high-dimensional data and models that are difficult or intractable to analyze using traditional likelihood-based methods by relying only on the availability of a realistic simulation pipeline. Neural Ratio Estimators (NREs; Cranmer et al., 2015), a specific class of SBI methods, leverage the power of machine learning to allow amortization of the inference process as well as implicit marginalization over large sets of nuisance parameters, providing an efficient way to estimate low-dimensional variables.
We demonstrate the application of an NRE to time delay cosmography by predicting the \(H_{0}\) posterior distribution given Fermat potentials calculated from modeled lens parameters and image positions, the time delay measurements, and the deflector and source redshifts. We use a Set Transformer architecture (Lee et al., 2019), which allows for amortization over lensing systems with two or four lensed images by the same model. While previous works have explored how machine learning can be used for the measurement of \(H_{0}\) with time-delay cosmography, contributions (e.g. Hezaveh et al., 2017; Levasseur et al., 2017; Morningstar et al., 2019; Pearson et al., 2019; Wagner-Carena et al., 2021; Schuldt et al., 2021; Park et al., 2021) have been limited to using neural networks (NNs) to estimate the lens parameter posteriors. The approach presented here is therefore complementary, since it bridges the remaining gap to fully amortize the inference of \(H_{0}\) from strong lensing data. Section 2 introduces the methodology. Section 3 describes the simulations. Section 4 presents the NN architecture and training procedure. Results are presented in section 5.

## 2 Time-delay cosmography

Gravitational lensing occurs when images of a distant source get distorted by the presence of matter bending spacetime along the line of sight. In strong gravitational lensing, multiple images of a background source form due to this effect. The lensing equation, \[\boldsymbol{\beta}=\boldsymbol{\theta}-\boldsymbol{\alpha}\left(\boldsymbol{\theta}\right)\,, \tag{1}\] summarizes this phenomenon by retracing the source plane angular position \(\boldsymbol{\beta}\) of a ray observed at the image plane angular position \(\boldsymbol{\theta}\) after a mass deflector has deflected it by an angle \(\boldsymbol{\alpha}\). The lensing potential \(\psi\) of the massive object determines the angular deflection \(\boldsymbol{\alpha}\) and the convergence \(\kappa\) according to \[\boldsymbol{\alpha}\left(\boldsymbol{\theta}\right)=\boldsymbol{\nabla}\psi\left(\boldsymbol{\theta}\right)\,;\quad\nabla^{2}\psi\left(\boldsymbol{\theta}\right)=2\kappa\left(\boldsymbol{\theta}\right)\,. \tag{2}\] Gravitational lensing affects the light rays' travel time from their source to the observer in two ways: by changing their path length and through the lensing potential itself. The presence of a mass deflector in the light's trajectory lengthens its travel time by an amount proportional to the Fermat potential \(\phi\), which is fully determined by the mass distribution in the lens and is given by \[\phi\left(\boldsymbol{\theta},\boldsymbol{\beta}\right)\equiv\frac{\left(\boldsymbol{\theta}-\boldsymbol{\beta}\right)^{2}}{2}-\psi\left(\boldsymbol{\theta}\right)\,. \tag{3}\] To infer \(H_{0}\) with time delay cosmography, one observes a multiply-imaged time-varying background source. Each path giving rise to an image is affected by a different Fermat potential, resulting in a different light travel time. This allows the evaluation of the relative travel times between paths \(\Delta t\), which are called time delays. They are calculated between pairs of images and are related to \(H_{0}\) by \[\Delta t\equiv\frac{D_{\Delta t}}{c}\Delta\phi\,, \tag{4}\] where \(c\) is the speed of light, \(\Delta\phi\) is the difference of the Fermat potential at the positions of the two distinct images, and \(D_{\Delta t}\) is the time delay distance, given by \[D_{\Delta t}\equiv\left(1+z_{d}\right)\frac{D_{d}D_{s}}{D_{ds}}\,.
Here, \(z_{d}\) is the deflector redshift, \(D_{d}\) is the angular diameter distance between the observer and the deflector, \(D_{s}\) is the angular diameter distance between the observer and the source, and \(D_{ds}\) is the angular diameter distance between the deflector and the source. These distances are where the \(H_{0}\) dependence is contained. In this framework, the posterior distribution of \(H_{0}\) generally takes the form \[P(H_{0}|\boldsymbol{\Delta t},\boldsymbol{d})\propto\int\,\mathrm{d}\boldsymbol{\zeta}\,P(\boldsymbol{\Delta t}|H_{0},\boldsymbol{\zeta},\boldsymbol{M})P(\boldsymbol{\zeta}|\boldsymbol{d},\boldsymbol{M})P(H_{0}) \tag{6}\] where \(\boldsymbol{d}\) represents the lensing observation, \(\boldsymbol{\zeta}\) is a set of parameters describing the lensing system, and \(\boldsymbol{M}\) includes all observational effects (e.g. instrumental noise, point spread function, image covariance matrix, deflector's light, and dust). In this context, the lensing parameters and the observational effects are nuisance parameters that must be integrated out to obtain the marginal distribution of \(H_{0}\). The main proposal of this work is to replace the traditional Monte Carlo methods used to numerically approximate the \(H_{0}\) posterior. ## 3 Simulations In this work, we consider the case where the deflected light is emitted by a variable point source, such as an Active Galactic Nucleus (AGN) or a supernova. We do not consider any light profile for its host galaxy because in the following we assume that the modeling of the lensed image was performed in a previous analysis stage (e.g. with a BNN as in Park et al., 2021). We assume that the source is distorted by a deflector following a Singular Isothermal Ellipsoid (SIE; Kormann et al., 1994) profile, plus external shear. This model is described by 7 parameters: Einstein radius \(\theta_{E}\), \(x-\) and \(y-\)components of the position \((x_{d},\,y_{d})\), axis ratio \(f\) and its orientation \(\phi_{d}\), and modulus \(\gamma_{\text{ext}}\) and orientation \(\phi_{\text{ext}}\) of the external shear. Details about the ranges of the uniform priors used for these parameters, the cosmology, and the variable source are included in Table 1. We compute time delay distances according to Equation (5). The \(H_{0}\) value, the source redshift, and the deflector redshift are drawn from the uniform prior distributions detailed in Table 1. We assume a flat \(\Lambda\)CDM cosmology. With the Fermat potential at the image positions and the time delay distance, we calculate the time delays from Equation (4) and relative Fermat potentials from Equation (3), meaning that doubles have one time delay-Fermat potential pair, while quads have three. For the noise model, the goal is to emulate the results of a standard analysis, which models the system parameters from the lensing observation and measures the time delays from the image light curves. Therefore, we add Gaussian noise to the lensing parameters, the image positions, and the source position. As standard deviations, we use each parameter's average error from the BNN in Park et al. (2021). From those noisy estimates, we compute the Fermat potentials. For the time delays, we add Gaussian noise to the ones generated with the true parameters. This replicates the uncertainty yielded by the light curve measurements, as well as the mass-sheet degeneracy (Park et al., 2021). Table 2 summarizes all the standard deviations of the Gaussian noise distributions.
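To make the simulation step concrete, the following minimal sketch (our illustration, not the authors' pipeline; it assumes the `astropy` package, and the function names are ours) shows how Equations (4) and (5) map a relative Fermat potential to a time delay in flat \(\Lambda\)CDM:

```python
import astropy.units as u
from astropy.constants import c
from astropy.cosmology import FlatLambdaCDM

def time_delay_distance(H0, z_d, z_s, Om0=0.3):
    """Eq. (5): D_dt = (1 + z_d) * D_d * D_s / D_ds in flat LambdaCDM."""
    cosmo = FlatLambdaCDM(H0=H0, Om0=Om0)  # H0 in km/s/Mpc
    D_d = cosmo.angular_diameter_distance(z_d)
    D_s = cosmo.angular_diameter_distance(z_s)
    D_ds = cosmo.angular_diameter_distance_z1z2(z_d, z_s)
    return (1 + z_d) * D_d * D_s / D_ds

def time_delay(H0, z_d, z_s, delta_phi_arcsec2):
    """Eq. (4): Delta t = (D_dt / c) * Delta phi, with Delta phi given in arcsec^2."""
    delta_phi = (delta_phi_arcsec2 * u.arcsec**2).to(u.rad**2).value
    return (time_delay_distance(H0, z_d, z_s) / c * delta_phi).to(u.day)

# Example: a double with Delta(phi) = 0.5 arcsec^2 yields a delay of roughly 20 days.
print(time_delay(70.0, z_d=0.3, z_s=2.0, delta_phi_arcsec2=0.5))
```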
## 4 Methods ### Neural Ratio Estimation In this work, we train a Neural Ratio Estimator to learn the posterior distribution of \(H_{0}\). At its core, an NRE learns the ratio between two distributions of the parameters of interest \(\mathbf{\Theta}\) (in our case \(H_{0}\)) and the simulated observations \(\mathbf{x}\): the joint distribution \(p(\mathbf{x},\mathbf{\Theta})\), which we can sample using our simulator, and the product of the marginals \(p(\mathbf{x})\,p(\mathbf{\Theta})\), which we can sample by randomly pairing simulations with parameters drawn from the prior. Assigning the class label \(y=1\) to the joint distribution and the class label \(y=0\) to the product of the marginals, the optimal discriminator \(\mathbf{d}^{*}\) that classifies samples from these two distributions converges to the decision function \[\mathbf{d}^{*}(\mathbf{x},\mathbf{\Theta})=p(y=1\mid\mathbf{x})=\frac{p(\mathbf{x},\mathbf{\Theta})}{p(\mathbf{x},\mathbf{\Theta})+p(\mathbf{x})\,p(\mathbf{\Theta})} \tag{7}\] The ratio \(r(\mathbf{x}\mid\mathbf{\Theta})\) between the distributions can be written as a function of the discriminator: \[r(\mathbf{x}\mid\mathbf{\Theta})\equiv\frac{p(\mathbf{x},\mathbf{\Theta})}{p(\mathbf{x})\,p(\mathbf{\Theta})}=\frac{\mathbf{d}^{*}(\mathbf{x},\mathbf{\Theta})}{1-\mathbf{d}^{*}(\mathbf{x},\mathbf{\Theta})} \tag{8}\] The product between the estimator of \(r\) learnt by the NRE, \(\hat{r}(\mathbf{x}\mid\mathbf{\Theta})\), and the prior distribution yields a posterior distribution estimator. To conduct inference with a trained Neural Ratio Estimator, the estimator \(\hat{r}(\mathbf{x}\mid\mathbf{\Theta})\) is evaluated multiple times for the same observation, with a different parameter value at each evaluation. ### Set Transformer Architecture For the architecture of the discriminator, we use a Set Transformer (Lee et al., 2019) to make use of the fact that different lensing configurations (doubles or quads) can have different numbers of time delay-relative Fermat potential pairs, and that those pairs are permutation invariant. We also explored Deep Sets (Zaheer et al., 2017); however, in our experiments they were outperformed by the Set Transformer, so we only report on the latter. The NRE takes as inputs the measured time delays, the modeled relative Fermat potentials, a \(H_{0}\) value, the source's redshift, and the deflector's redshift. See Appendix A Figure 3 for the specific details of the architecture. ### Training The training set, the validation set, and the test set contain 1,280,000 examples, 160,000 examples, and 26,500 examples, respectively. The dataset is composed of approximately 83% doubles and 17% quads. We train the neural network on batches of 1,000 examples with a binary cross entropy loss as the objective function.
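As an illustration of this objective, a minimal PyTorch-style training step is sketched below. This is our simplification, not the released code: a small MLP stands in for the Set Transformer, and `x` is assumed to gather the time delays, relative Fermat potentials, and redshifts into a fixed-size vector.

```python
import torch
import torch.nn as nn

class ToyRatioEstimator(nn.Module):
    """Stand-in for the Set Transformer discriminator d(x, H0)."""
    def __init__(self, x_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),  # logit of p(y=1 | x, H0)
        )

    def forward(self, x, h0):
        return self.net(torch.cat([x, h0], dim=-1)).squeeze(-1)

def nre_step(model, x, h0, optimizer):
    """One training step: joint pairs (y=1) vs. shuffled marginal pairs (y=0)."""
    h0_marginal = h0[torch.randperm(h0.shape[0])]  # break the (x, H0) pairing
    logits = torch.cat([model(x, h0), model(x, h0_marginal)])
    labels = torch.cat([torch.ones(x.shape[0]), torch.zeros(x.shape[0])])
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

Since the network outputs a logit, \(\log\hat{r}(\mathbf{x}\mid H_{0})\) is read off directly from the discriminator output, and a posterior estimate follows by adding the log prior and normalizing over a grid of \(H_{0}\) values.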
At each batch, we draw a new realization of noise for the time delays, the parameters, the image positions, and the source position. We then compute the Fermat potentials. The training lasts for 5,000 epochs. The learning rate starts at \(1\times 10^{-4}\) and decreases by half every 500 epochs, as this was the optimal schedule we found through a hyperparameter search. \begin{table} \begin{tabular}{l c} \hline \hline **Parameter** & **Distribution** \\ \hline \hline **Cosmology** & \\ \hline Hubble constant (km s\({}^{-1}\) Mpc\({}^{-1}\)) & \(H_{0}\sim\mathrm{U}(40,100)\) \\ Dark energy density & \(\Omega_{\Lambda}=0.7\) \\ Matter energy density & \(\Omega_{m}=0.3\) \\ \hline **Deflector** & \\ \hline Redshift & \(z_{d}\sim\mathrm{U}(0.04,0.5)\) \\ Position (\({}^{\prime\prime}\)) & \(x_{d},y_{d}\sim\mathrm{U}(-0.8,0.8)\) \\ Einstein radius (\({}^{\prime\prime}\)) & \(\theta_{E}\sim\mathrm{U}(0.5,2.0)\) \\ Axis ratio & \(f\sim\mathrm{U}(0.30,0.99)\) \\ Orientation (rad) & \(\varphi_{d}\sim\mathrm{U}(-\pi/2,\pi/2)\) \\ \hline **External Shear** & \\ \hline Modulus & \(\gamma_{\text{ext}}\sim\mathrm{U}(0,0.2)\) \\ Orientation (rad) & \(\varphi_{\text{ext}}\sim\mathrm{U}(-\pi/2,\pi/2)\) \\ \hline **Variable point light source** & \\ \hline Redshift & \(z_{s}\sim\mathrm{U}(1,3)\) \\ Position (\({}^{\prime\prime}\)) & \(x_{s},y_{s}=(0,0)\) \\ \hline \hline \end{tabular} \end{table} Table 1: Prior distributions of all the parameters needed to generate Fermat potentials and time delays in our framework. \begin{table} \begin{tabular}{l c} \hline \hline **Observables** & **Noise standard deviation** \\ \hline Time delays (days) & \(0.35\) \\ Image positions (\({}^{\prime\prime}\)) & \(0.001\) \\ \hline **Deflector** & \\ \hline Position (\({}^{\prime\prime}\)) & \(0.005\) \\ Einstein radius (\({}^{\prime\prime}\)) & \(0.011\) \\ Ellipticities & \(0.039\) \\ \hline **External shear** & \\ \hline Components & \(0.02\) \\ \hline **Active galactic nucleus** & \\ \hline Position (\({}^{\prime\prime}\)) & \(0.012\) \\ \hline \hline \end{tabular} \end{table} Table 2: Standard deviations of the Gaussian noise distributions used to mimic the uncertainties of lens modeling, time delay measurements, and image position measurements. ## 5 Results and Discussion In our framework, the general posterior in Equation (6) takes the specific form \[\begin{split}& P(H_{0}|\Delta t,\Delta\phi,z_{d},z_{s})=\\ &\int\,\mathrm{d}\boldsymbol{\zeta}\,\frac{P(\Delta t|H_{0},\Delta\phi,z_{d},z_{s})P(\Delta\phi|\boldsymbol{\zeta})P(\boldsymbol{\zeta})P(H_{0})}{P(\Delta t,\Delta\phi)}\end{split} \tag{9}\] where \(P(\Delta t|H_{0},\Delta\phi)\) and \(P(\boldsymbol{\zeta})\) are normal distributions, \(P(\Delta\phi|\boldsymbol{\zeta})\) is a delta function, and \(P(H_{0})\) is a uniform distribution. We sample this posterior with PolyChord (Handley et al., 2015a,b) and find agreement with the NRE posteriors, as shown in some representative examples in Appendix B. To assess the NRE's accuracy, we perform a coverage test (Hermans et al., 2021; Cole et al., 2022) using the highest posterior density (HPD) interval of the NRE on the noisy examples from the test set. Results are displayed in Figure 1. The NRE shows a slightly underconfident behaviour, which is preferable to overconfidence. Moreover, the NRE offers a significant improvement in the analysis speed. With PolyChord, the posterior sampling process requires from 20 to 40 minutes on a CPU, and is not amortized.
By contrast, once trained, the NRE only requires \(\sim\)1 second to estimate the posterior of \(H_{0}\) for a given lens, making the analysis more than 1000 times faster. We perform a population inference of \(H_{0}\). We simulate noisy data from multiple lensing systems (doubles and quads), fixing \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\). Figure 2 shows the population inferences of 3,000, 1,500, 500 and 50 lensing systems. The NRE appears unbiased, as all posteriors enclose the truth within their 2\(\sigma\) intervals. One of the main advantages of a simulation-based approach such as the NRE over traditional maximum-likelihood methods is that it implicitly marginalizes over nuisance parameters (Hermans et al., 2019). This is because, even though the simulator samples all parameters to generate the mock data, the classes and the loss function are independent of the nuisance parameters. While here our simulations remained simple, including further nuisance parameters in the inference is now reduced to simulating them. Another important advantage of SBI methods is that they do not require any assumption about the form of the posterior. The complexity of the posterior is only limited by the simulations themselves, which can include complex environments, noise, selection effects, etc. In contrast, traditional explicit-likelihood methods require an analytical form for both the prior and the likelihood to compute the posterior distribution. These often imply simplistic priors and simplifying assumptions about the model's parametrization, which can introduce biases in the inference. A notable source of bias is the mass sheet degeneracy (Falco et al., 1985). In this paper, we do not explicitly consider the mass sheet degeneracy. However, we chose the noise distributions so that the uncertainty on \(H_{0}\) could frequently reach 8%, which is the error budget estimated by Birrer and Treu (2021) when accounting for the mass sheet degeneracy. Figure 1: Coverage diagnostic of the NRE. A perfectly consistent distribution would fall on the dashed line. An underconfident distribution would lie in the top-left area, while an overconfident distribution would be in the bottom-right region. The NRE coverage, represented by the orange solid line, indicates a weak underconfident behaviour. Figure 2: Population inferences of \(H_{0}\) with the NRE. The blue solid line, the pink dashed line, the green dashed-dotted line, and the yellow dotted line represent populations of 3,000, 1,500, 500 and 50 lensing systems, respectively. The true value \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\) is indicated by the vertical black solid line. It falls inside the 2\(\sigma\) interval for all populations. ## 6 Conclusion In this work, we used an NRE to infer \(H_{0}\) from the time delays, the relative Fermat potentials, and the source and deflector redshifts of strong lensing systems. This work bridges the gap to completely amortize the inference of \(H_{0}\) from time delay cosmography, bringing down the inference time by a factor of more than 1000, from at least 20 minutes with PolyChord to about 1 second per lens. Moreover, combining measurements from a population of 3,000 lenses suggests that our estimator is unbiased. We assumed that the parameters describing the deflector could be estimated with a precision similar to that of BNNs published in the literature (Park et al., 2021).
To improve this work, more complex simulations incorporating environmental effects, such as the mass sheet degeneracy, as well as more inputs to break it, like velocity dispersion measurements, could be used to train the NRE. We expect the NRE to fully leverage the upcoming large datasets of strong lensing observations to reach the 1% precision needed to solve the Hubble tension. Its implicit marginalization over nuisance parameters can take into account as many possible biases as can be simulated, while guaranteeing the accuracy of the inference. ## Acknowledgments This work was in part supported by Schmidt Futures, a philanthropic initiative founded by Eric and Wendy Schmidt as part of the Virtual Institute for Astrophysics (VIA). The work was in part supported by computational resources provided by Calcul Quebec and the Digital Research Alliance of Canada. Y.H. and L.P.L. acknowledge support from the Canada Research Chairs Program, the National Sciences and Engineering Council of Canada through grants RGPIN-2020-05073 and 05102, and the Fonds de recherche du Quebec through grants 2022-NC-301305 and 300397.
2303.18001
You Only Train Once: Learning a General Anomaly Enhancement Network with Random Masks for Hyperspectral Anomaly Detection
In this paper, we introduce a new approach to address the challenge of generalization in hyperspectral anomaly detection (AD). Our method eliminates the need for adjusting parameters or retraining on new test scenes as required by most existing methods. Employing an image-level training paradigm, we achieve a general anomaly enhancement network for hyperspectral AD that only needs to be trained once. Trained on a set of anomaly-free hyperspectral images with random masks, our network can learn the spatial context characteristics between anomalies and background in an unsupervised way. Additionally, a plug-and-play model selection module is proposed to search for a spatial-spectral transform domain that is more suitable for AD task than the original data. To establish a unified benchmark to comprehensively evaluate our method and existing methods, we develop a large-scale hyperspectral AD dataset (HAD100) that includes 100 real test scenes with diverse anomaly targets. In comparison experiments, we combine our network with a parameter-free detector and achieve the optimal balance between detection accuracy and inference speed among state-of-the-art AD methods. Experimental results also show that our method still achieves competitive performance when the training and test set are captured by different sensor devices. Our code is available at https://github.com/ZhaoxuLi123/AETNet.
Zhaoxu Li, Yingqian Wang, Chao Xiao, Qiang Ling, Zaiping Lin, Wei An
2023-03-31T12:23:56Z
http://arxiv.org/abs/2303.18001v1
You Only Train Once: Learning a General Anomaly Enhancement Network with Random Masks for Hyperspectral Anomaly Detection ###### Abstract In this paper, we introduce a new approach to address the challenge of generalization in hyperspectral anomaly detection (AD). Our method eliminates the need for adjusting parameters or retraining on new test scenes as required by most existing methods. Employing an image-level training paradigm, we achieve a general anomaly enhancement network for hyperspectral AD that only needs to be trained once. Trained on a set of anomaly-free hyperspectral images with random masks, our network can learn the spatial context characteristics between anomalies and background in an unsupervised way. Additionally, a plug-and-play model selection module is proposed to search for a spatial-spectral transform domain that is more suitable for the AD task than the original data. To establish a unified benchmark to comprehensively evaluate our method and existing methods, we develop a large-scale hyperspectral AD dataset (HAD100) that includes 100 real test scenes with diverse anomaly targets. In comparison experiments, we combine our network with a parameter-free detector and achieve the optimal balance between detection accuracy and inference speed among state-of-the-art AD methods. Experimental results also show that our method still achieves competitive performance when the training and test set are captured by different sensor devices. Our code is available at [https://github.com/ZhaoxuLi123/AETNet](https://github.com/ZhaoxuLi123/AETNet). Anomaly detection, Vision Transformer, hyperspectral imagery. ## I Introduction Hyperspectral imagery (HSI) records abundant spectral information which can reflect the essential characteristics of different materials [1]. With the development of hyperspectral spectrometers, hyperspectral imaging has been widely applied to different fields, such as agricultural estimation [2], civilian rescue [3], mineral exploration [4], quality monitoring [5], and explosive detection [6]. Among them, hyperspectral anomaly detection is a critical technology for finding targets against the background without any prior information. During the past years, many methods have been developed for hyperspectral anomaly detection (AD). The RX method [7], named after its proposers, is the cornerstone among AD methods. It assumes that background spectra in the HSI obey a multi-Gaussian distribution and calculates the Mahalanobis distance between the test sample and the background samples to search for anomalies. Depending on whether the global or local background spectrum is used, RX can be divided into global RX (GRX) and local RX (LRX) [8]. The latter adopts a dual concentric rectangular window centered on the test pixel and selects the area between the inner and outer windows as background samples. Inspired by LRX, many dual-window-based methods have been developed, such as quasi-local-RX (QLRX) [9], locally adaptable iterative RX (LAIRX) [10], weighted RX [11], linear filter-based RX [11], kernel RX (KRX) [12], cluster KRX (CKRX) [13], the support vector data description method (SVDD) [14], and 2S-GLRT [15]. In addition, some methods combined sparse representation with the dual window and achieved higher detection precision, such as collaborative representation (CR) [16], constrained sparse representation (CSR) [17], and dual collaborative representation (DCR) [18]. However, these dual-window-based methods have an inherent limitation.
The size of the dual window needs to be set manually by users according to the spatial information of targets. Generally, the inner window size should be larger than the target size. In addition, the best size of the dual window is also related to the shape and spacing of targets. This problem limits the generalization capability of these AD methods. Multiple-window AD (MWAD) [19] and superpixel-based dual window RX (SPDWRX) [20] improved the dual window, but the problem remains largely unsolved. Fig. 1: Comparison of the detection accuracy (mAUC) and inference speed (frames per second, FPS) of different AD methods on our HAD100 dataset. Many researchers have not limited themselves to the dual-window framework. Chang [21, 22, 23] proposed a dummy variable trick to convert hyperspectral target detection methods to AD methods. Tu et al. [24] introduced density peak clustering [25] to the hyperspectral AD task. Tao et al. [26] employed the fractional Fourier transform (FrFT) as a pre-processing step to enhance the discrimination between anomalies and background. However, these methods neglect spatial information and cannot deal well with non-stationary backgrounds or large targets. Some methods try to use spatial features to improve detection performance, such as the attribute and edge-preserving filtering-based detection method (AED) [27], spectral-spatial feature extraction (SSFE) [28], structure tensor and guided filter (STGF) [29], the segmentation-based weighting strategy [30], and the kernel isolation forest-based detection method (KIFD) [31]. Besides, tensor-based methods focus on the 3D structure of HSI and extract anomaly tensors from HSI cube data by tensor decomposition, e.g., the tensor decomposition-based method (TDAD) [32], tensor principal component analysis (TPCA) [33], and prior-based tensor approximation (PTA) [34]. In addition, low-rank and sparsity-matrix decomposition (LRaSMD) [35] has also received widespread attention and has given rise to many AD methods such as OSP-GoDec [36], OSP-AD [37], SDP [38], component decomposition analysis (CDA) [39], and effective anomaly space (EAS) [40]. However, the parameters of the above methods still need to be fine-tuned on different test scenes. Lately, deep learning has been booming in many fields and has brought novel ideas to hyperspectral AD. Generative models such as the Autoencoder (AE) and the generative adversarial network (GAN) [41] can capture nonlinear and latent characteristics and have been successfully applied to the AD task. A common practice of AE-based methods is to take the vectors from the output space or the latent space of the AE as the input of other anomaly detectors [42, 43, 44, 45, 46]. In particular, in order to adapt the learned latent representation to a specific density estimation, the low-rank embedded network (LREN) [45] introduces a trainable density estimation module into the latent space. Another common practice is to reconstruct the spectra under certain constraints and regard the spectral reconstruction error as the detection anomaly score [47, 48, 49, 50]. This kind of method usually uses fully connected layers to achieve spectral-level self-supervised learning and needs to introduce additional spatial information to improve detection accuracy. For example, the robust graph autoencoder method (RGAE) [49] is embedded with a superpixel segmentation-based graph regularization term to preserve the geometric structure and the local spatial consistency of HSI.
An exception to this kind of method is AutoAD [51], which adopts a convolutional autoencoder to reconstruct the original HSI from pure noise input. Although some methods mentioned above (such as AutoAD) do not require any manual selection of parameters, they must be retrained when applied to new test scenes. In summary, two kinds of problems restrict the generalization capability of current AD methods: (1) Some parameters need to be manually set according to the characteristics of targets in each test scene. (2) Most deep-learning-based methods need to be retrained to apply to new test scenes. To tackle these issues, we aim to develop a general network for hyperspectral AD. Such a network should be trained only once without using ground-truth labels and can be generalized to unseen test scenes without retraining. It is worth noting that there is very little prior study on this topic. Two recent methods, WeaklyAD [52] and the dual-frequency autoencoder (DFAE) [53], explore this generalization issue. Specifically, WeaklyAD uses a spectral-constrained GAN to enhance the discrimination between anomalies and background in a weakly supervised way. DFAE transforms the original HSI into high-frequency and low-frequency components and then detects anomalies from these two components in parallel. Both methods are trained on a public test HSI and achieve competitive results on several test scenes without retraining. However, these methods are still far from the AD framework we expect. WeaklyAD needs to extract anomalous spectra for adversarial training, while DFAE still requires manual parameter setting. In addition, these two methods have not been verified on a large-scale dataset. In this paper, we introduce an image-level training paradigm and a simple data augmentation strategy called Random Mask to solve the generalization problem of hyperspectral AD. Most previous deep-learning-based methods perform self-supervised learning at the 1D spectral level, while our network performs self-supervised learning on a set of 3D hyperspectral images. This allows our network to learn the spatial context information directly without requiring additional spatial constraints. Trained on anomaly-free hyperspectral images with random masks, our network, called **AETNet**, can achieve **A**nomaly **E**nhancement **T**ransformation on unseen test images. Through the cooperation of the Random Mask strategy and a structural-difference-based loss function, our AETNet can learn the spatial-spectral context relationship between anomalies and background in an unsupervised way. Besides, a simple plug-and-play model selection module is introduced to guide the training process and search for a transform domain that is more suitable for the AD task than the original data. Once trained, our method can project unseen test images into the searched transform domain, and achieve parameter-free but high-accuracy detection on these transformed test images. Moreover, since existing methods generally report their best results after fine-tuning on a limited number of test images, a more comprehensive evaluation benchmark is necessary. In this paper, we further develop a new hyperspectral AD dataset to evaluate current methods. Our dataset, called HAD100, contains 100 test HSIs with various anomaly targets. In addition to the proposed AETNet, we select 16 representative AD methods as solid baselines to build a comprehensive benchmark. The contributions of this paper can be summarized as follows: 1.
We introduce an image-level training paradigm to develop a general anomaly enhancement network for hyperspectral AD. Once trained, our network can perform inference directly on unseen test images. As far as we know, this is the first attempt to train a network on a set of hyperspectral images in this field. 2. We provide a simple data augmentation strategy called Random Mask for hyperspectral AD. This strategy can help our network learn the context relationship between anomalies and background from anomaly-free hyperspectral data in an unsupervised way. 3. We develop a new hyperspectral AD dataset and comprehensively evaluate the performance of both classic and SOTA AD methods. To the best of our knowledge, this is the first unified large-scale benchmark in the hyperspectral AD field. 4. Extensive experiments show that our proposed AETNet achieves the best balance between detection accuracy and inference speed among the SOTA AD methods. Moreover, our method still achieves competitive performance when the training and test set are captured by different sensor devices. The rest of this paper is organized as follows. Section II reviews the research background related to our network. Section III describes the implementation details of the proposed method. Our dataset and experiments are presented in Section IV, followed by the conclusion in Section V. ## II Related Work In this section, we briefly introduce two studies involved in our network: UNet and Vision Transformer. ### _U-shaped Network_ UNet [54], named after its symmetrical U-shaped structure, is a simple but widely used segmentation network. UNet can be split into a contracting path and an expansive path. The contracting path consists of a series of down-sampling modules, regarded as encoder layers. The expansive path consists of a series of up-sampling modules, regarded as decoder layers and symmetric with the encoder layers. Adding skip connections between the encoder and decoder layers, UNet mitigates the loss of spatial information due to down-sampling. The fusion of multi-scale contextual features enhances the image reconstruction process and significantly improves the segmentation performance. Nowadays, UNet and its numerous variants are widely applied to many tasks such as object segmentation [55], image super-resolution [56], image reconstruction [57], and so on. ### _Vision Transformer_ Transformer [58] was first proposed in natural language processing and outperformed the previous complicated recurrent and CNN models. The success of Transformer has attracted great attention from researchers in the computer vision community. Dosovitskiy et al. [59] were the first to propose the Vision Transformer (ViT) for the image classification task. Compared with CNN, ViT has a better capability to model long-range dependencies and achieves a global receptive field in the shallow layers [60]. Therefore, CNN frameworks have been approached and even surpassed by ViT and its variations [61, 62, 63] on the image classification task. Nevertheless, its expensive training cost makes ViT unsuitable for dense prediction tasks such as image segmentation. To this end, PVT [62] adds the pyramid structure to ViT to acquire multiscale feature maps. Furthermore, Swin Transformer [63] limits self-attention computation to non-overlapping local windows and achieves linear computational complexity with respect to image size. With the help of a shifted window strategy, Swin Transformer achieves cross-window information interaction.
It surpasses the SOTA methods in many tasks, such as object detection, instance segmentation, and semantic segmentation. Subsequently, Cao et al. [64] combined Swin Transformer with UNet and proposed the first pure Transformer-based U-shaped architecture, named Swin UNet. Swin UNet outperforms CNN or the combination of Transformer and CNN in medical image segmentation and is successfully applied to other dense prediction tasks [65]. ## III Methodology In this section, we first introduce the overview of our method. Then, we describe the implementation details of each component of our method. ### _Overview_ Our proposed method consists of three major components: the mask generation module, the image reconstruction module, and the transform domain search module. The overview architecture of AETNet is shown in Fig. 2. The backbone of the image reconstruction module is based on Swin UNet and a convolutional autoencoder (CAE), and the input of our network is a 3D HSI. The training and test data are both hyperspectral cubes with the same spatial size and band number. The training set is defined as \(\mathbf{X}=\{\mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n}\}\subset\mathbb{R}^{H\times W\times B}\), where \(n\) is the number of HSIs in the training set. \(H\), \(W\), and \(B\) denote a single hyperspectral cube's height, width, and band number. The test set is defined as \(\mathbf{Y}=\{\mathbf{Y}_{1},\mathbf{Y}_{2},\cdots,\mathbf{Y}_{m}\}\subset\mathbb{R}^{H\times W\times B}\), where \(m\) is the number of HSIs in the test set. In the training stage, the mask generation module generates random masks for each training HSI \(\mathbf{X}_{i}\in\mathbf{X}\). Then HSIs with random masks are fed to the image reconstruction module. After every training epoch, the transform domain search module is used to evaluate the enhancement performance of the reconstruction module and decide whether to terminate the training. In the inference stage, each test HSI \(\mathbf{Y}_{i}\in\mathbf{Y}\) is sent to the image reconstruction module, and then we obtain a reconstructed HSI \(\mathbf{Y}_{r}\). Next, anomaly detectors such as GRX take \(\mathbf{Y}_{r}\) as input to generate the anomaly score map of \(\mathbf{Y}_{i}\). Fig. 2: Overview of our proposed method. ### _Random Mask Strategy_ Considering that there is no target label in the training phase, we introduce a simple data augmentation strategy called Random Mask. Random masking is a popular tool in the CV community [66] and has been successfully applied to the industrial AD task [67]. Different from masks in [67], random masks in our paper have irregular shapes and random sizes and can simulate the spatial morphology of anomaly targets. It is usually believed that anomaly targets in hyperspectral remote sensing imagery have the following characteristics: (1) Anomaly targets are embedded in the background in the form of irregular blocks. (2) The spectra of anomaly targets are different from those of the background. (3) The area ratio of anomaly targets in the whole image is low. As shown in Fig. 3, we divide the Random Mask strategy into mask map generation and masked region filling, simulating the spatial and spectral characteristics of anomaly targets, respectively. #### III-B1 Mask Map Generation The mask map \(\mathbf{M}\) has the same height \(H\) and width \(W\) as the input training HSI \(\mathbf{X}_{i}\). The height and width are equally divided by \(K\), and then \(\mathbf{M}\) is divided into \(K^{2}\) non-overlapping patches.
The number of random masks \(N\) is randomly generated from the integer range of \([N_{\min},N_{\max}]\), where \(N_{\max}\) should be less than \(K^{2}\). We randomly select \(N\) patches from all the patches and then randomly select a pixel position in each selected patch. Next, an iterative expansion method is used to generate the random mask at each selected pixel position. We regard the selected pixel position as the start point of a random mask to be generated. The four-connected pixels of the start point are randomly merged into the random mask, and the probability of every connected pixel being selected is \(50\%\). The mask is continuously expanded by randomly merging the updated four-connected pixels until the area reaches \(A\), where \(A\) is randomly sampled from the integer range of \([A_{\min},A_{\max}]\). After traversing all the selected pixel positions, we get \(N\) masks with different areas and random shapes. The final mask map \(\mathbf{M}\) is a binary map where the masked regions are set to 0, and the remaining pixels are set to 1. #### III-B2 Masked Region Filling Considering the spectral characteristics of anomaly targets, we introduce an additional cube \(\mathbf{I}\in\mathbb{R}^{H\times W\times B}\) to fill the generated masks. The masked input HSI cube \(\mathbf{X}_{M}\) can be attained by \[\mathbf{X}_{M}=\mathbf{X}_{i}\otimes\mathbf{M}+\mathbf{I}\otimes\overline{\mathbf{M}}, \tag{1}\] where \(\otimes\) denotes element-wise multiplication and \(\overline{\mathbf{M}}\) represents the inverted mask map of \(\mathbf{M}\). Inspired by recent data augmentation approaches, we introduce two mask-filling methods, namely CutOut [68] and CutMix [69]. **CutOut:** CutOut replaces masked regions in RGB images with gray or black pixels. In this paper, we set \(\mathbf{I}\) to a three-dimensional zero matrix. **CutMix:** CutMix replaces masked regions in RGB images with the corresponding regions from other images. In this paper, we randomly choose another training HSI from a different capture flight line as \(\mathbf{I}\). Fig. 3: Diagram of the proposed Random Mask strategy.
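For clarity, the mask map generation and filling steps can be summarized in the following NumPy sketch. This is our own illustration under the description above: the function names are ours, and tie-breaking details of the iterative expansion may differ from the released implementation.

```python
import numpy as np

def random_mask(H=64, W=64, K=8, N_rng=(1, 32), A_rng=(3, 20), rng=None):
    """Binary mask map M (Sec. III-B1): masked pixels are 0, background is 1."""
    rng = rng if rng is not None else np.random.default_rng()
    M = np.ones((H, W), dtype=np.float32)
    N = rng.integers(N_rng[0], N_rng[1] + 1)
    patches = rng.choice(K * K, size=N, replace=False)
    ph, pw = H // K, W // K
    for p in patches:
        r0, c0 = (p // K) * ph, (p % K) * pw
        seed = (r0 + rng.integers(ph), c0 + rng.integers(pw))  # start point
        area = rng.integers(A_rng[0], A_rng[1] + 1)
        region = {seed}
        while len(region) < area:
            # each 4-connected neighbor is merged with probability 50%
            frontier = {(r + dr, c + dc) for r, c in region
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= r + dr < H and 0 <= c + dc < W
                        and (r + dr, c + dc) not in region
                        and rng.random() < 0.5}
            if not frontier:
                continue  # all neighbors rejected this round; flip coins again
            region |= set(list(frontier)[: area - len(region)])
        for r, c in region:
            M[r, c] = 0.0
    return M

def fill_masks(X, M, I):
    """Eq. (1): X_M = X * M + I * (1 - M); I is zeros (CutOut) or another HSI (CutMix)."""
    return X * M[..., None] + I * (1 - M)[..., None]
```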
### _Network Architecture_ As shown in Fig. 4(a), our network consists of a convolutional encoder, a Swin UNet module, and a convolutional decoder. The detailed network structure is further described below. Fig. 4: Network structure of the proposed AETNet. #### III-C1 Convolutional Autoencoder Previous deep-learning-based methods mainly use fully connected layers to perform self-supervised learning on the spectral level, resulting in the loss of spatial information. The simplest improvement is to use a CAE to perform self-supervised learning on the hyperspectral cubes. CAE combines AE with convolution and pooling operations and can extract spatial features of images. We use a simple CAE. The convolutional encoder is a \(3\times 3\) convolutional layer, which changes the spectral channel number of the input cube \(\mathbf{X}_{M}\) from \(B\) to \(C\). The convolutional decoder is also a \(3\times 3\) convolutional layer, which restores the channel number from \(C\) to \(B\). Common CAEs consist of several convolutional layers, down-sample layers, deconvolutional layers, and up-sample layers. Since ViT can model long-range dependencies better than CNN, we abandon CAE's down-sample and up-sample operations and only apply the channel dimension transformation. Meanwhile, we insert a Swin UNet module between the convolutional encoder and decoder as the middle layer of the CAE to extract spatial and spectral features better. #### III-C2 Swin UNet Module As shown in Fig. 4(a), the Swin UNet module can also be divided into an encoder, a bottleneck, and a decoder. Unlike the structure in [64], we adopt a non-classical structure that directly processes latent feature maps without patch partition, merging, and expanding operations. **Encoder:** The encoder consists of two stages, and each stage contains a Swin Transformer block and a down-sample layer. The down-sample layer is a \(4\times 4\) strided convolutional layer which halves the spatial resolution of feature maps and doubles the channels. In the first stage of the encoder, the feature maps from the convolutional encoder with the size of \(H\times W\times C\) are sent into the first Swin Transformer block to perform representation learning. Next, the feature maps are fed into a down-sample layer, and the size changes into \(\frac{H}{2}\times\frac{W}{2}\times 2C\). Then, the feature maps are sent into the Swin Transformer block and the down-sample layer in the second stage. The size of the feature maps becomes \(\frac{H}{4}\times\frac{W}{4}\times 4C\). **Bottleneck:** The bottleneck of the Swin UNet module is a single Swin Transformer block and learns the deep feature representation. In the bottleneck, the feature maps' channel number and spatial resolution remain unchanged. **Decoder:** Symmetric to the encoder, the decoder also has two stages. In each stage, the feature maps are first fed into an up-sample layer, a \(2\times 2\) strided deconvolutional layer, to halve the channels and double the spatial resolution of feature maps. Then, we use the skip connection to fuse the up-sampled feature maps with the feature map of the same size from the encoder. Channel concatenation on the shallow and deep feature maps can reduce the information loss caused by down-sampling. After a \(1\times 1\) convolutional layer, the concatenated feature maps have the same channel number as the up-sampled feature maps. A Swin Transformer block at the end of each stage is used for the representation learning of the deep feature maps. After the decoder, the size of the feature maps returns to \(H\times W\times C\). **Swin Transformer Block:** Swin Transformer is a shifted-window-based ViT and contains two different attention modules, called the Window-based Multi-head Self Attention (W-MSA) module and the Shifted Window-based Multi-head Self Attention (SW-MSA) module, respectively. More details of W-MSA and SW-MSA can be found in [63]. As shown in Fig. 4(b), the difference between our Swin Transformer block and the classical structure in [63] is that the LayerNorm (LN) layers in front of the two MSA modules are removed, a choice we validated empirically in our network. #### III-C3 Residual Connection The only difference between the masked input HSI \(\mathbf{X}_{M}\) and the expected output HSI \(\mathbf{X}_{i}\) is the masked region, whose area accounts for a small portion of the whole HSI. In other words, our network only needs to restore the masks to the original spectra rather than restore the whole HSI. Therefore, we introduce a residual connection to connect the input \(\mathbf{X}_{M}\) and output \(\mathbf{X}_{r}\) of our network. In this way, the network can pay more attention to the random masks and converge faster during training.
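The overall data path can be summarized by the following skeleton (our sketch, not the released code; the Swin UNet middle layer is abstracted as a pluggable module rather than reimplemented here, and the default band number of 276 follows the AVIRIS-NG setting described later):

```python
import torch
import torch.nn as nn

class AETNetSkeleton(nn.Module):
    """Conv encoder -> middle module -> conv decoder, with a global residual
    connection (Sec. III-C). The Swin UNet middle layer is left abstract."""
    def __init__(self, bands=276, mid_channels=32, middle=None):
        super().__init__()
        self.encoder = nn.Conv2d(bands, mid_channels, 3, padding=1)  # B -> C
        self.middle = middle if middle is not None else nn.Identity()
        self.decoder = nn.Conv2d(mid_channels, bands, 3, padding=1)  # C -> B

    def forward(self, x_masked):
        # x_masked: (batch, B, H, W); the residual connection lets the
        # network focus its capacity on restoring the masked regions
        return x_masked + self.decoder(self.middle(self.encoder(x_masked)))
```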
### _Loss Function_ In previous AD networks, the spectrum-wise L2 loss is widely used. However, we are more interested in the spatial texture relationship between anomaly targets and background than in the spectral reconstruction accuracy. Spectrum-wise loss functions usually ignore the spatial dependence of neighboring pixels. Therefore, a multi-scale gradient magnitude similarity (MSGMS) loss [67] is introduced to penalize structural differences between the reconstructed HSI \(\mathbf{X}_{r}\) and the original HSI \(\mathbf{X}_{i}\). The MSGMS is the multi-scale expansion of the gradient magnitude similarity (GMS) [70]. The gradient magnitude is defined as the root mean square error of image directional gradients along two orthogonal directions in [67, 70]. To acquire more accurate gradient magnitude maps, we adopt Sobel filters along the horizontal, vertical, and diagonal dimensions. The gradient magnitude map of \(\mathbf{X}_{i}\) can be calculated according to: \[\mathbf{G}_{i}=\sqrt{\left(\mathbf{X}_{i}\circledast\mathbf{s}_{x}\right)^{2}+\left(\mathbf{X}_{i}\circledast\mathbf{s}_{y}\right)^{2}+\left(\mathbf{X}_{i}\circledast\mathbf{s}_{d1}\right)^{2}+\left(\mathbf{X}_{i}\circledast\mathbf{s}_{d2}\right)^{2}}, \tag{2}\] where \(\circledast\) is the convolution operation, and \(\mathbf{s}_{x}\), \(\mathbf{s}_{y}\), \(\mathbf{s}_{d1}\), and \(\mathbf{s}_{d2}\) are \(3\times 3\) Sobel filters along the \(x\) dimension, the \(y\) dimension, the \(45^{\circ}\) diagonal, and the \(135^{\circ}\) diagonal, respectively. Similarly, the gradient magnitude map of \(\mathbf{X}_{r}\) can be calculated according to: \[\mathbf{G}_{r}=\sqrt{\left(\mathbf{X}_{r}\circledast\mathbf{s}_{x}\right)^{2}+\left(\mathbf{X}_{r}\circledast\mathbf{s}_{y}\right)^{2}+\left(\mathbf{X}_{r}\circledast\mathbf{s}_{d1}\right)^{2}+\left(\mathbf{X}_{r}\circledast\mathbf{s}_{d2}\right)^{2}}. \tag{3}\] The gradient magnitude similarity map between \(\mathbf{X}_{r}\) and \(\mathbf{X}_{i}\) is then generated by: \[\mathrm{GMS}\left(\mathbf{X}_{i},\mathbf{X}_{r}\right)=\frac{2\mathbf{G}_{i}\mathbf{G}_{r}+c}{\mathbf{G}_{i}^{2}+\mathbf{G}_{r}^{2}+c}, \tag{4}\] where \(c\) is a constant that ensures numerical stability. Subsequently, down-sampled images at different scales of \(\mathbf{X}_{r}\) and \(\mathbf{X}_{i}\) are generated by a \(2\times 2\) strided average pooling operation. The GMS maps between the down-sampled \(\mathbf{X}_{r}\) and \(\mathbf{X}_{i}\) are calculated again according to Eqs. (2), (3), and (4). Finally, the MSGMS loss between \(\mathbf{X}_{r}\) and \(\mathbf{X}_{i}\) is defined as: \[L\left(\mathbf{X}_{i},\mathbf{X}_{r}\right)=\frac{1}{S}\sum_{l=1}^{S}\frac{1}{H_{l}W_{l}}\sum_{a=1}^{H_{l}}\sum_{b=1}^{W_{l}}\Big(1-\mathrm{GMS}_{a,b}\left(\mathbf{X}_{i}^{l},\mathbf{X}_{r}^{l}\right)\Big), \tag{5}\] where \(S\) is the number of scales, \(\mathbf{X}_{i}^{l}\) and \(\mathbf{X}_{r}^{l}\) are the original HSI and the reconstructed HSI at the \(l\)-th scale, \(H_{l}\) and \(W_{l}\) denote the height and width of the HSI at the \(l\)-th scale, and \(\mathrm{GMS}_{a,b}\left(\cdot\right)\) represents the value of the GMS map at pixel \((a,b)\). The reason why we choose MSGMS instead of GMS is that the size of anomaly targets varies. The receptive field of GMS is only \(3\times 3\), and thus large masked regions can only show edges on the gradient magnitude maps. MSGMS enlarges the receptive field by constructing an image pyramid and can penalize masked regions of different sizes.
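A sketch of the MSGMS loss in Eqs. (2)-(5) is given below. This is our illustration under the stated definitions: the Sobel kernels are applied per band via grouped convolution, and the exact kernel scaling in the released code may differ.

```python
import torch
import torch.nn.functional as F

SOBELS = torch.tensor([
    [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],   # x direction
    [[-1, -2, -1], [0, 0, 0], [1, 2, 1]],   # y direction
    [[0, 1, 2], [-1, 0, 1], [-2, -1, 0]],   # 45-degree diagonal
    [[-2, -1, 0], [-1, 0, 1], [0, 1, 2]],   # 135-degree diagonal
], dtype=torch.float32)

def gradient_magnitude(x):
    """Eqs. (2)/(3): root of summed squared directional gradients, per band.
    x: (batch, C, H, W)."""
    b, c, h, w = x.shape
    k = SOBELS.to(x).unsqueeze(1).repeat(c, 1, 1, 1)      # (4c, 1, 3, 3)
    g = F.conv2d(x, k, padding=1, groups=c).view(b, c, 4, h, w)
    return torch.sqrt((g ** 2).sum(dim=2) + 1e-12)

def msgms_loss(x, x_rec, scales=5, c_const=1.0):
    """Eq. (5): mean of (1 - GMS) over an average-pooled image pyramid."""
    loss = 0.0
    for _ in range(scales):
        gi, gr = gradient_magnitude(x), gradient_magnitude(x_rec)
        gms = (2 * gi * gr + c_const) / (gi ** 2 + gr ** 2 + c_const)  # Eq. (4)
        loss += (1 - gms).mean()
        x, x_rec = F.avg_pool2d(x, 2), F.avg_pool2d(x_rec, 2)          # next scale
    return loss / scales
```

With \(S=5\), a \(64\times 64\) input is evaluated at resolutions \(64, 32, 16, 8\), and \(4\), matching the four pooling operations described in the experimental settings.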
### _Transform Domain Search_ In the image reconstruction module, masked input HSIs are regarded as damaged images and need to be reconstructed to the original mask-free images. General image inpainting networks aim to generate undamaged images accurately, but that is not our goal. The fact that the optimized network can restore the masked training HSIs to the original HSIs indicates that the network can utilize the context information of masks and change their spectra in the reconstructed images. Hence, we regard the image reconstruction process as a domain transformation and the output space of our network as a spatial-spectral transform domain. In the transform domain, the anomaly targets should be enhanced and show more significant differences from the background. After each training epoch, the network with updated parameters generates a new transform domain. We need an automatic approach to select an appropriate transform domain. For most image inpainting networks, the reconstruction loss is an important basis for the termination of training. However, good image reconstruction performance does not imply good detection performance. In the target detection task, researchers select trained models according to the detection accuracy on the validation set. But no target label is available for training in hyperspectral AD. As a result, we introduce a distance measurement that implicitly reflects the detection performance after each training epoch. Given a certain test HSI \(\mathbf{Y}_{v}\in\mathbf{Y}\), we send it to the image reconstruction module that has undergone \(j\) training epochs: \[\mathbf{\tilde{Y}}_{v}^{j}=f\left(\mathbf{Y}_{v};\mathbf{\theta}_{j}\right), \tag{6}\] where \(f\left(\cdot\right)\) denotes the network structure, \(\mathbf{\theta}_{j}\) is the updated parameters after the \(j\)-th training epoch, and \(\mathbf{\tilde{Y}}_{v}^{j}\in\mathbb{R}^{H\times W\times B}\) is the reconstructed HSI of \(\mathbf{Y}_{v}\). For each spectrum \(\mathbf{\tilde{y}}_{j}\) on \(\mathbf{\tilde{Y}}_{v}^{j}\), the global Mahalanobis distance is calculated according to: \[\mathbf{M}\left(\mathbf{\tilde{y}}_{j}\right)=\left(\mathbf{\tilde{y}}_{j}-\mathbf{\mu}_{j}\right)^{T}\mathbf{\Lambda}_{j}^{-1}\left(\mathbf{\tilde{y}}_{j}-\mathbf{\mu}_{j}\right), \tag{7}\] where \(\mathbf{\mu}_{j}\in\mathbb{R}^{B\times 1}\) and \(\mathbf{\Lambda}_{j}^{-1}\in\mathbb{R}^{B\times B}\) are the mean vector and the inverse covariance matrix of all the spectra on \(\mathbf{\tilde{Y}}_{v}^{j}\), respectively. The maximum of the global Mahalanobis distances, \(\mathcal{M}\), reflects the difference between the most anomalous spectrum and the background on \(\mathbf{\tilde{Y}}_{v}^{j}\). A larger \(\mathcal{M}\) represents a better domain transformation of our network. The model corresponding to the peak value of \(\mathcal{M}\) is used for inference on the test set. ### _Inference_ The network found by the transform domain search module can be used for inference on the other test HSIs in \(\mathbf{Y}\). For each test HSI \(\mathbf{Y}_{i}\in\mathbf{Y}\), its reconstructed HSI \(\mathbf{Y}_{r}\) can be obtained by: \[\mathbf{Y}_{r}=f\left(\mathbf{Y}_{i};\tilde{\mathbf{\theta}}\right), \tag{8}\] where \(\tilde{\mathbf{\theta}}\) represents the optimized parameters of the image reconstruction module. Then, the GRX detector performs detection on \(\mathbf{Y}_{r}\) to obtain the anomaly score map of \(\mathbf{Y}_{i}\). As in Eq. (7), the anomaly score of each spectrum \(\mathbf{y}\) on \(\mathbf{Y}_{r}\) is calculated according to: \[\mathbf{M}\left(\mathbf{y}\right)=\left(\mathbf{y}-\mathbf{\mu}\right)^{T}\mathbf{\Lambda}^{-1}\left(\mathbf{y}-\mathbf{\mu}\right), \tag{9}\] where \(\mathbf{\mu}\in\mathbb{R}^{B\times 1}\) and \(\mathbf{\Lambda}^{-1}\in\mathbb{R}^{B\times B}\) are the mean vector and the inverse covariance matrix of all the spectra on \(\mathbf{Y}_{r}\), respectively. We can also replace the GRX detector with other anomaly detectors. Since the GRX detector is efficient and parameter-free, we specify that Eq. (8) and Eq. (9) are the standard procedure of our method.
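Both the search criterion in Eq. (7) and the detector in Eq. (9) reduce to the same global Mahalanobis scoring, sketched below (our illustration; a pseudo-inverse is used for numerical stability, which the original implementation may or may not do):

```python
import numpy as np

def grx_scores(Y):
    """Global RX / Mahalanobis anomaly scores (Eqs. (7) and (9)).
    Y: HSI of shape (H, W, B). Returns an (H, W) anomaly score map."""
    H, W, B = Y.shape
    S = Y.reshape(-1, B)
    mu = S.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(S, rowvar=False))  # pseudo-inverse
    d = S - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)   # quadratic form per pixel
    return scores.reshape(H, W)
```

In this notation, the measurement \(\mathcal{M}\) after epoch \(j\) is simply the maximum of `grx_scores` applied to the reconstructed validation HSI.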
## IV Experiments In this section, we first introduce our developed hyperspectral AD dataset and the two metrics used to evaluate methods on it. Then, we introduce the experimental settings and analyze the experimental results in detail. ### _HAD100 Dataset_ Although many novel AD methods have been proposed in recent years, there is no large-scale dataset that can comprehensively evaluate the performance of these methods. Existing methods generally report their best results after fine-tuning on different test scenes. However, fine-tuned parameters cannot be guaranteed to be applicable to other scenes. Some recent AD methods claimed that they could perform inference on unseen test scenes but were only evaluated on a limited number of test scenes. Consequently, building a unified and comprehensive evaluation benchmark for hyperspectral AD is necessary. For this purpose, we develop a large-scale **H**yperspectral **A**nomaly **D**etection dataset that contains **100** real remote sensing test scenes, called the **HAD100** dataset. Our dataset has more test scenes than the currently popular ABU dataset [27], which only contains 13 test scenes. All the test HSIs are downloaded from the AVIRIS-NG (Airborne Visible InfraRed Imaging Spectrometer-Next Generation) website1 and uniformly cropped to patches of size \(64\times 64\). Fig. 7 shows the false color maps and ground truth of example scenes in the test set. The marked targets are mainly compact manufactured objects such as vehicles, boats, and buildings. The size of targets ranges from 1 pixel to 69 pixels. Furthermore, our dataset contains various backgrounds, including grassland, forest, farmland, desert, lake, river, and coast. The HAD100 dataset can be downloaded from our website2. Footnote 2: [https://zhaoxuli123.github.io/HAD100](https://zhaoxuli123.github.io/HAD100) Fig. 5: Example scenes on the AVIRIS-NG training set. Fig. 6: Example scenes on the AVIRIS-Classic training set. In order to inspire new ideas for hyperspectral AD, we also provide two training sets, which are captured by AVIRIS-NG and AVIRIS-Classic3, respectively. As shown in Figs. 5 and 6, the HSIs in the training sets are only cropped on background areas, with spatial sizes ranging from \(66\times 66\) to \(130\times 130\). As shown in Table II, the AVIRIS-NG training set contains 260 normal HSIs, which are captured by the same sensor device and capture flight lines as the test set. This means that most background spectra are common to both the AVIRIS-NG training set and the test set. As shown in Table I, the AVIRIS-Classic training set contains 262 normal HSIs, which are captured by a different sensor device from the test set. Within a similar data scale, the AVIRIS-NG training set and the AVIRIS-Classic training set can be used for the generalization evaluation of AD methods to different degrees.
Fig. 7: False color maps and ground truth of example scenes (the test scenes 1-30) on the HAD100 dataset. Table III shows the differences between AVIRIS-Classic and AVIRIS-NG. AVIRIS-NG is the upgraded version of AVIRIS-Classic. With an almost unchanged spectral range, AVIRIS-NG has a higher spectral resolution and more bands than AVIRIS-Classic. We remove water absorption bands and noise bands in the training and test sets. AVIRIS-NG data retains 276 bands, whose band ids are 16-109, 119-145, 159-187, 228-274, and 329-407. AVIRIS-Classic data retains 162 bands, whose band ids are 8-57, 66-79, 86-104, 123-149, and 173-224. ### _Evaluation Metrics_ In this paper, we use two evaluation metrics to compare the detection performance of different AD methods on our HAD100 dataset. The receiver operating characteristic (ROC) area under the curve (AUC) is the most widely used evaluation tool in the hyperspectral AD task and can directly give quantitative results. Besides, researchers usually use box-whisker plots to qualitatively compare the anomaly-background separability of AD methods. We replace box-whisker plots with our improved version of the signal-to-noise probability ratio (SNPR) [71] to evaluate the anomaly-background separability quantitatively. #### IV-B1 AUC The ROC curve shows the corresponding relationship between the detection probability \(P_{d}\) and the false alarm probability \(P_{f}\). Given a segmentation threshold \(\tau\) and an anomaly score map \(\mathbf{M}\), we specify that pixels whose anomaly scores are greater than or equal to \(\tau\) are classified as positive samples, and \(P_{d}\) and \(P_{f}\) can be calculated according to: \[P_{d}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}, \tag{10}\] \[P_{f}=\frac{\mathrm{FP}}{\mathrm{TN}+\mathrm{FP}}, \tag{11}\] where TP is the number of target pixels that are predicted as positive samples, FN is the number of target pixels that are predicted as negative samples, FP is the number of background pixels that are predicted as positive samples, and TN is the number of background pixels that are predicted as negative samples. Each \(\tau\) produces a pair of \(P_{d}\) and \(P_{f}\) as a point \((P_{f},P_{d})\) on the ROC curve. We traverse every anomaly score value on \(\mathbf{M}\) as \(\tau\) and generate a discrete ROC curve. Then, the AUC value of \(\mathbf{M}\) is calculated by: \[\mathrm{AUC}=\frac{1}{2}\sum_{l=1}^{p-1}\left(P_{f}^{l+1}-P_{f}^{l}\right)\left(P_{d}^{l+1}+P_{d}^{l}\right), \tag{12}\] where \(\left(P_{f}^{l},P_{d}^{l}\right)\) denotes the \(l\)-th point on the discrete ROC curve and \(p\) is the number of points. A larger AUC value means better detection performance. In particular, the AUC value is specified as 0.5 when the anomaly score of each pixel is equal. We use the mean AUC value (mAUC) over all the test HSIs in the HAD100 dataset to evaluate the AD methods throughout the rest of this paper.
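For reference, the discrete ROC construction of Eqs. (10)-(12) can be written compactly as follows (our sketch; tied anomaly scores are handled only approximately here):

```python
import numpy as np

def roc_auc(scores, labels):
    """Discrete ROC AUC (Eqs. (10)-(12)) from a flattened anomaly score map.
    scores: 1D array of anomaly scores; labels: 1 for target pixels, 0 otherwise."""
    order = np.argsort(scores)[::-1]          # descending score = descending tau
    labels = np.asarray(labels)[order].astype(float)
    tp = np.cumsum(labels)                    # true positives at each threshold
    fp = np.cumsum(1 - labels)                # false positives at each threshold
    pd = np.concatenate([[0.0], tp / max(tp[-1], 1)])
    pf = np.concatenate([[0.0], fp / max(fp[-1], 1)])
    return np.trapz(pd, pf)                   # trapezoidal rule, as in Eq. (12)
```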
#### IV-B2 ASNPR A recent study [71] deeply analyzed the evaluation tools for hyperspectral target detection and proposed the signal-to-noise probability ratio (SNPR) to evaluate background suppression. SNPR is calculated based on the 3D ROC curve. In addition to \(P_{d}\) and \(P_{f}\), the 3D ROC curve introduces the segmentation threshold \(\tau\) as the third variable, which is sampled from the anomaly score map after Maxmin Normalization. A 3D ROC curve can generate three 2D ROC curves, which take \((P_{d},P_{f})\), \((P_{d},\tau)\), and \((P_{f},\tau)\) as variables, respectively. Similar to Eq. (12), the AUC value under the 2D ROC curve \((P_{d},\tau)\) is calculated according to: \[\mathrm{AUC}_{d,\tau}=\frac{1}{2}\sum_{l=1}^{p-1}\left(\tau^{l+1}-\tau^{l}\right)\left(P_{d}^{l+1}+P_{d}^{l}\right), \tag{13}\] and the AUC value under the 2D ROC curve \((P_{f},\tau)\) is calculated according to: \[\mathrm{AUC}_{f,\tau}=\frac{1}{2}\sum_{l=1}^{p-1}\left(\tau^{l+1}-\tau^{l}\right)\left(P_{f}^{l+1}+P_{f}^{l}\right). \tag{14}\] Then, the SNPR value is calculated according to: \[\mathrm{SNPR}=\frac{\mathrm{AUC}_{d,\tau}}{\mathrm{AUC}_{f,\tau}}. \tag{15}\] A larger SNPR value means a better background suppression ability of an AD method. In practice, we find that the above 3D ROC-based evaluation metrics are unsuitable for scenes with multiple types of targets. When a certain target has an extremely high anomaly score, other targets can have low values in the normalized anomaly score map. Whether these targets are separated from the background or not, the \(\mathrm{AUC}_{f,\tau}\) value remains low. To solve this problem, we develop adaptive versions of \(\mathrm{AUC}_{d,\tau}\), \(\mathrm{AUC}_{f,\tau}\), and \(\mathrm{SNPR}\). Specifically, we set the median value of the anomaly scores on the real targets as the upper bound of the anomaly score map. Calculated on the truncated anomaly score map, \(\mathrm{AUC}_{d,\tau}\), \(\mathrm{AUC}_{f,\tau}\), and \(\mathrm{SNPR}\) are robust to the maximum anomaly response value. The mean value of the adaptive \(\mathrm{SNPR}\) over all the test HSIs is calculated according to: \[\mathrm{mASNPR}=\frac{1}{n}\sum_{k=1}^{n}10\log_{10}\frac{\mathrm{AAUC}_{d,\tau}^{k}}{\mathrm{AAUC}_{f,\tau}^{k}}, \tag{16}\] where \(n\) is the total number of test HSIs, and \(\mathrm{AAUC}_{d,\tau}^{k}\) and \(\mathrm{AAUC}_{f,\tau}^{k}\) are the adaptive \(\mathrm{AUC}_{d,\tau}\) and \(\mathrm{AUC}_{f,\tau}\) on the \(k\)-th test HSI, respectively. Since the value range of ASNPR is \([0,+\infty)\), we introduce a logarithmic operation to avoid over-large ASNPR values on individual test HSIs. In particular, the logarithmic value of ASNPR is specified as 0 dB when the anomaly score of each pixel is equal. ### _Experimental Settings_ #### IV-C1 Implementation Details of AETNet Since the training input of AETNet needs to be consistent with the test HSIs in spatial size and band number, we generate \(64\times 64\) mask maps in the mask generation module and set the parameter \(K\) to 8. The number range \([N_{\min},N_{\max}]\) of pseudo targets in mask maps is set to \([1,32]\), and the area range \([A_{\min},A_{\max}]\) of pseudo targets is set to \([3,20]\). Regardless of the band number \(B\) of the input data, the channel number \(C\) output by the convolutional encoder is fixed at 32. The numbers of attention heads of the five Swin Transformer blocks in AETNet are set to 2, 4, 8, 4, and 2, respectively. The number of non-overlapping local windows in all Swin Transformer blocks is set to \(8\times 8\). In the calculation of the MSGMS loss, we set the constant \(c\) to 1 and perform four pooling operations, i.e., the parameter \(S=5\). In the training stage, the Adam optimizer [72] with a learning rate of \(1\mathrm{e}^{-4}\) and a weight decay of \(5\mathrm{e}^{-6}\) is used to optimize our AETNet. The batch size is set to 16. Training terminates when the peak value of the measurement \(\mathcal{M}\) in the transform domain search module does not change for 30 epochs, or the epoch number reaches 200. Fig. 8: Detection maps of different AD methods on the first 50 bands of the test scene 83.
In the training stage, the Adam optimizer [72] with a learning rate of \(1\mathrm{e}^{-4}\) and a weight decay of \(5\mathrm{e}^{-6}\) is used to optimize our AETNet. The batch size is set to 16. Training terminates when the peak value of the measurement \(\mathcal{M}\) in the transform domain search module does not change for 30 epochs, or the epoch number reaches 200. Then, the trained model corresponding to the peak value of \(\mathcal{M}\) is used for inference. Before being sent to AETNet, all the input data are processed with linear normalization and scaled to \([-0.1,0.1]\). We select the test scene 1 in the HAD100 dataset as the validation HSI in the transform domain search module. Unless otherwise stated, we use the AVIRIS-NG training set and CutOut by default. In addition, we use three simple data augmentation approaches to expand the training set of AETNet:

**Image Crop:** We crop a single training HSI into four \(64\times 64\) HSIs. The AVIRIS-NG training set is expanded to 1048 HSIs, and the AVIRIS-Classic training set is expanded to 1040 HSIs.

**Image Rotation:** At the beginning of each training epoch, every input HSI is randomly rotated by \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\), or \(270^{\circ}\).

**Image Flip:** After random rotation, each input HSI is randomly flipped along the horizontal or vertical direction. The probabilities of horizontal flip and vertical flip are both 50%.

#### IV-C2 Comparative Methods
Sixteen classic or SOTA hyperspectral AD methods are selected to be evaluated on our HAD100 dataset, including GRX [7], LRX [8], KRX [12], QLRX [9], 2S-GLRT [15], KSVDD [14], CR [16], KCSR [17], KIFD [31], PTA [34], FrFE [26], AED [27], RGAE [49], AutoAD [51], LREN [45], and WeaklyAD [52]. GRX is the most classic AD method and is widely used as the AD baseline. LRX, KRX, QLRX, 2S-GLRT, CR, and KCSR are dual-window-based AD methods. CR and KCSR are sparse-representation-based AD methods. PTA is one of the most advanced tensor-approximation-based AD methods, and AED introduces morphological image processing to the hyperspectral AD field. KRX, KSVDD, KCSR, and KIFD all use the kernel trick to process spectral samples. RGAE, LREN, WeaklyAD, and AutoAD are deep-learning-based methods, of which the first three are spectral-level self-supervised methods. In particular, WeaklyAD can perform inference on unseen test scenes. The codes of 2S-GLRT, KCSR, KIFD, PTA, FrFE, RGAE, AutoAD, and LREN are all official versions, and the codes of GRX and AED are widely used unofficial versions. Besides, we reproduce the codes of LRX, KRX, QLRX, KSVDD, CR, and WeaklyAD, whose detection performances are close to those in the literature. Most of the above methods adopt Maxmin Normalization for data preprocessing and linearly scale each test HSI to \([0,1]\). But some methods adopt distinctive normalization. 2S-GLRT and PTA perform Maxmin Normalization on each band of the HSI data. AED uses principal component analysis (PCA) to reduce the dimension of the HSI data and then linearly scales the dimension-reduced HSI data to \([0,1]\). FrFE divides all values on the test HSI by the maximum value.

Fig. 8: Detection maps of different AD methods on the first 50 bands of the test scene 83.
Fig. 9: Detection maps of different AD methods on the first 50 bands of the test scene 87.
Fig. 10: Detection maps of different AD methods on the first 50 bands of the test scene 90.

#### IV-C3 Settings of Comparative Experiments
To explore the influence of channel number and spectral range on the generalization ability of AD methods, we conduct three comparative experiments on the first 50 bands, the first 100 bands, and the first 200 bands of the HAD100 dataset, respectively. For WeaklyAD, we simultaneously compare its retraining version and non-retraining version, denoted as WeaklyAD\({}_{\mathcal{A}}\) and WeaklyAD\({}_{\mathcal{B}}\), respectively.
To achieve a fair comparison with our AETNet, WeaklyAD\({}_{\mathcal{B}}\) extracts coarse anomaly samples on the test scene 1 and adopts the whole AVIRIS-NG training set as the background sample set. On the first 50 bands of the test set, the critical parameters of all comparative methods are fine-tuned to achieve the best mAUC values. Then, these carefully regulated methods are directly evaluated on the first 100 bands and the first 200 bands without parameter readjustment. Furthermore, the dual window sizes \((\omega_{\mathrm{in}},\omega_{\mathrm{out}})\) of all the dual-window-based methods traverse from 3 to 29, and Table IV shows the best dual window sizes of different methods. The upper bound of the outer window size \(\omega_{\mathrm{out}}\) is set to 29 to avoid huge calculation costs and to ensure the local property of the dual-window-based methods. Interestingly, the best outer window sizes \(\omega_{\mathrm{out}}\) of all the dual-window-based methods except KRX reach the upper bound of 29. In addition to the standard version of AETNet, we also provide an advanced version that replaces Eq. (9) with KCSR, named AETNet-KCSR. AETNet-KCSR adopts the same parameter settings as AETNet and KCSR. In addition to mAUC and mASNPR, we also report the average inference time of all the comparative methods on a single test HSI. All the experiments are implemented on a PC with an Intel Core i9-9980XE CPU and two GeForce RTX 2080 Ti GPUs.

### _Analysis of Detection Performance_
#### IV-D1 Analysis of Experiments on the First 50 Bands
The gray region (the 2nd to 5th columns) in Table V shows the detection performance and inference time of different AD methods on the first 50 bands of the HAD100 dataset. In terms of mAUC, the top three methods, AETNet-KCSR, KCSR, and CR, are all based on sparse representation, with values of 0.9948, 0.9939, and 0.9929, respectively. The standard AETNet ranks fourth with an mAUC value of 0.9925. The two worst-performing methods are RGAE and PTA, whose mAUC values are 0.9003 and 0.9031, respectively. The mAUC value of GRX reaches 0.9799, and the locally improved versions of GRX (LRX, KRX, and QLRX) do not show advantages. Table VI shows the AUC values of different AD methods on all the test scenes. The colored table intuitively gives a similar conclusion as the mAUC performance in Table V. CR, KCSR, AETNet, and AETNet-KCSR have the fewest gray cells, while PTA and RGAE have the most gray cells. The standard and advanced versions of AETNet have obvious advantages on the test scenes 42, 44, and 90. Visual results of different methods on the test scene 90 are shown in Fig. 10. Compared to other methods, our AETNet has higher response values on compact targets and lower response values on borders (the false alarms on the detection maps of most AD methods). However, both AETNet and AETNet-KCSR perform less competitively on the test scenes 83 and 87. As shown in Fig. 8, our AETNet can effectively suppress most borders on the test scene 83 but generates a false target on a dark border. The rest of the dark border is mixed with the bright borders, causing AETNet not to treat this border as a whole. As shown in Fig. 9, there is a large cloud region in the test scene 87, where many methods have high anomaly response values. Our AETNet also has high anomaly response values on a group of mixed pixels of cloud and land, which has spatial characteristics similar to those of anomaly targets.
The performance on the test scenes 83 and 87 indicates that AETNet's ability to perceive mixed pixels needs further improvement. In terms of mASNPR, the top three methods, WeaklyAD\({}_{\mathcal{B}}\), AETNet-KCSR, and AETNet, reach 13.75 dB, 13.15 dB, and 11.72 dB, respectively. These three methods are all deep-learning-based methods without retraining. In terms of the inference time, the standard AETNet with 0.035s ranks second after GRX with 0.022s. Except for AED and WeaklyAD\({}_{\mathcal{B}}\), the remaining methods have high calculation costs. Compared to GRX, the standard AETNet improves the AUC performance from 0.9799 to 0.9925 with little additional time, while maintaining the parameter-free characteristic of GRX. As shown in Fig. 1, the standard AETNet achieves the best balance between detection accuracy and efficiency, with an mAUC of 0.9925 and an inference speed of 28.6 FPS.

#### IV-D2 Analysis of Experiments on the First 100 Bands
The blue region (the 6th to 9th columns) in Table V shows the detection performance and inference time of different AD methods on the first 100 bands. All the detection accuracy evaluations except the mASNPR value of AutoAD are lower than those on the first 50 bands. GRX is a parameter-free AD method, and AED uses PCA to reduce the spectral dimension to a fixed value. The performance degradation of these two methods indicates that the added spectral range on the first 100 bands contains distractive spectral information and increases the difficulty of AD. Our AETNet still achieves higher accuracy and efficiency than the SOTA methods. In terms of mAUC, AETNet-KCSR and AETNet win the top two places with 0.9897 and 0.9875, respectively. The detection time of most methods increases dramatically, while that of the standard AETNet remains at a low level of 0.043s.

#### IV-D3 Analysis of Experiments on the First 200 Bands
The blue region (the 10th to 13th columns) in Table V shows the detection performance and inference time of different AD methods on the first 200 bands. Compared to the first 100 bands data, the first 200 bands data has more distractive information and is more challenging for AD methods. Except for LRX, LREN, and WeaklyAD\({}_{\mathcal{B}}\), the remaining methods have lower mAUC values than those on the first 50 and 100 bands. AETNet-KCSR still takes first place in terms of mAUC, and the standard AETNet ranks fifth with a value of 0.9818. In terms of inference efficiency, GRX is still the fastest with an inference time of 0.058s, followed by AED and AETNet with inference times of 0.067s and 0.07s, respectively.

### _Influence of Random Mask Strategy on Generalization Capability_
Both our standard AETNet and WeaklyAD\({}_{\mathcal{B}}\) are trained on the AVIRIS-NG training set and use GRX as the detector. As shown in Table V, AETNet has a higher gain over GRX than WeaklyAD\({}_{\mathcal{B}}\) on the unseen test scenes. Compared with WeaklyAD\({}_{\mathcal{B}}\), AETNet achieves an image-level training paradigm and adopts the Random Mask strategy to handle the generalization problem. In the following, we analyze the generalization ability under different settings of the Random Mask strategy.

#### IV-E1 Analysis of Cross-Scene Generalization Capability
The AVIRIS-NG training set is collected from the same device sensor as the test set and does not include anomaly scenes. As shown in Table VII, we compare the mAUC values of AETNet with CutOut, AETNet with CutMix, and AETNet without Random Mask.
Except for AETNet without Random Mask, all the mAUC values in the three comparative experiments with different band numbers are higher than those of GRX. Among them, AETNet with CutOut achieves the best mAUC values in all three comparative experiments, which are 1.53%, 1.88%, and 2.28% higher than those of GRX. It can be seen that CutOut has the most obvious effect on the cross-scene (same-device) generalization capability of AETNet.

#### IV-E2 Analysis of Cross-Device Generalization Capability
The AVIRIS-Classic training set is collected by a different device sensor from the test set. Since the AVIRIS-Classic data only retains 162 bands, we select the first 50 and the first 100 bands of the AVIRIS-Classic training set as the training data. Due to the different spectral intervals and resolutions, there is no common background spectrum between the training and test data. The cross-device generalization capability is a more difficult challenge for hyperspectral AD networks. As shown in Table VIII, the mAUC values of AETNet with CutOut are 0.9904 and 0.9730, respectively, which are lower than those in the cross-scene (same-device) generalization experiments. AETNet with CutMix has the lowest mAUC value of 0.9864 on the first 50 bands of the test set, but achieves the highest mAUC value of 0.9837 on the first 100 bands of the test set. In addition, AETNet with CutMix is superior to the baseline (GRX) in these two comparative experiments, and the mAUC values are increased by 0.67% and 1.27%, respectively. In summary, AETNet with CutMix is suitable for inference on cross-device test data.

### _Analysis of Training Data Scale_
Compared with previous AD methods, our AETNet can be trained on a training set that does not contain any test HSI. The above experiments demonstrate that the image-level training paradigm is effective for hyperspectral AD. Table IX shows the detection accuracy of AETNet and the training epoch that produces the peak value of the measurement \(\mathcal{M}\) in the transform domain search module under AVIRIS-NG training data of different scales. With only 5% of the training data, AETNet can reach an mAUC value of 0.9916. Using 15% of the training set can achieve the detection accuracy obtained on the whole training set. In addition, the epoch required for searching the transform domain decreases as the scale of the training data increases.

### _Ablation Experiments_
#### IV-G1 Loss Function
As shown in Table X, we compare the detection accuracy of AETNet with four different loss functions. L1 loss and L2 loss penalize pixel-wise spectral differences between input and output images. MSGMS loss considers the spatial correlation of pixels and penalizes the structural differences between input and output images. SSIM loss simultaneously penalizes the pixel-wise spectral differences and the structural differences. When the Random Mask strategy is adopted, most mAUC values of AETNet with these losses are better than the baseline values achieved by GRX. MSGMS loss outperforms the other three loss functions, while L1 loss performs the worst and fails on the first 50 bands. SSIM loss has a slight advantage over L2 loss on the first 50 and 100 bands. Figs. 11 and 12 show the residual output of AETNet during inference on the test scenes 6 and 7. It can be clearly seen that anomaly targets have higher response values than the background on the residual output maps when our network adopts the Random Mask strategy.
Consequently, the spectral differences between anomaly targets and the background are enhanced in the transform domain generated by AETNet. Compared with L2 loss, MSGMS loss obtains residual output maps with a cleaner background. In contrast, when AETNet is not trained with the Random Mask strategy, the enhancement effect on anomaly targets cannot be guaranteed. Table X shows that all four loss functions fail without the Random Mask strategy. Figs. 11(b) and 12(b) show that the residual output of AETNet without the Random Mask strategy does not focus on anomaly targets. These experimental results indicate that the Random Mask strategy is the foundation of the generalization capability of AETNet.

Fig. 11: Residual output maps of AETNet achieved on the test scene 6. (a) Use MSGMS loss and Random Mask strategy. (b) Only use MSGMS loss. (c) Use L2 loss and Random Mask strategy.
Fig. 12: Residual output maps of AETNet achieved on the test scene 7. (a) Use MSGMS loss and Random Mask strategy. (b) Only use MSGMS loss. (c) Use L2 loss and Random Mask strategy.

#### IV-G2 Network Structure
Table XI shows the detection accuracy achieved by different network architectures on the first 50 bands. A single CAE with residual connection reaches an mAUC value of 0.9894 and exceeds WeaklyAD\({}_{\mathcal{B}}\) with a value of 0.9887, which indicates that our image-level training paradigm is better than the traditional spectral-level training paradigm. With the UNet structure added, the network reaches an mAUC value of 0.9906. Furthermore, the full version of AETNet, which uses Swin Transformer blocks, reaches an mAUC value of 0.9925. In addition, the residual connection plays an important role in our AETNet. When the residual connection is removed, the detection performance of all three network structures decreases dramatically.

## V Conclusion
In this paper, we introduce an image-level training paradigm and a Random Mask strategy to solve the generalization problem in hyperspectral anomaly detection. Our method eliminates the need to adjust parameters or retrain on new test scenes, as required by most existing methods. Moreover, we develop a large-scale hyperspectral anomaly detection dataset and a unified evaluation benchmark. Experimental results show that our method achieves a better balance between detection accuracy and inference speed than existing state-of-the-art methods. We hope our work can stimulate the community toward real-world and practical hyperspectral anomaly detection.
2301.02620
Information Flow Tracking Methods for Protecting Cyber-Physical Systems against Hardware Trojans -- a Survey
Cyber-physical systems (CPS) provide profitable surfaces for hardware attacks such as hardware Trojans. Hardware Trojans can implement stealthy attacks such as leaking critical information, taking control of devices or harming humans. In this article we review information flow tracking (IFT) methods for protecting CPS against hardware Trojans, and discuss their current limitations. IFT methods are a promising approach for the detection of hardware Trojans in complex systems because the detection mechanism does not necessarily rely on potential Trojan behavior. However, in order to maximize the benefits, research should focus more on black-box design models and consider real-world attack scenarios.
Sofia Maragkou, Axel Jantsch
2022-11-27T15:33:56Z
http://arxiv.org/abs/2301.02620v1
Information Flow Tracking Methods for Protecting Cyber-Physical Systems against Hardware Trojans - a Survey

###### Abstract
Cyber-physical systems (CPS) provide profitable surfaces for hardware attacks such as hardware Trojans. Hardware Trojans can implement stealthy attacks such as leaking critical information, taking control of devices or harming humans. In this article we review information flow tracking (IFT) methods for protecting CPS against hardware Trojans, and discuss their current limitations. IFT methods are a promising approach for the detection of hardware Trojans in complex systems because the detection mechanism does not necessarily rely on potential Trojan behavior. However, in order to maximize the benefits, research should focus more on black-box design models and consider real-world attack scenarios.

hardware Trojans, detection, hardware security, real hardware attacks, information flow tracking, cyber-physical production systems, cyber-physical systems

## I Introduction
Hardware security began facing desultory challenges much later than software [1]. In 1996 a timing attack was published [2], based on which sensitive information could be leaked from a cryptographic component. After this point, hardware security research became more systematic. From 2005 on [1, 3], the field of hardware security has gained ground in the academic and the industrial world because it breaks the chain of trust known so far. This chain of trust, from the hardware security perspective, begins at the integrated circuit (IC) supply chain, where security vulnerabilities are created by the market's demand for fast and cheap technology. The involvement of external entities in the design process and the internationally outsourced fabrication can create security breaches that can even be relevant for national security. Design houses, in order to stay competitive, purchase third-party intellectual property (3PIP) cores from vendors and outsource the fabrication process without always verifying the returned product with respect to hardware security breaches. The reason is that the verification of the purchased cores is an expensive process that requires resources and time. Those intellectual property (IP) cores or chips are integrated and distributed to the customers. Consequently, hardware security has to deal with attacks like IP piracy, reverse engineering, counterfeit chips and hardware Trojans. In a real-world scenario, when an IP core is purchased, the design house requests some design specification and the 3PIP core vendor replies with the IP core and its specifications. Throughout this information exchange, the only trusted part is the specification requested by the design house. The core, in return, is considered untrusted and is treated as a black box. Information flow tracking (IFT) methods are a promising research direction for the detection of hardware Trojans because the verification can be based on the security specification of the application and not only on potentially malicious designs. Thus, the verification methods can be adapted based on the application. In addition, those methods can be flexible regarding new attacks, and can be expandable in case the security specifications are altered.
### _Known Real-World Attacks_
Real-world hardware attacks are much more complicated than the attacks developed by the research community, since real-world attacks interact with different layers of the computing system and communicate with external systems over long distances. Compared to software, real-world hardware attacks are less frequent. The information that is publicly available about real-world attacks is limited, and specific details are rarely known to the public. The real-world attack that received most attention is the 2007 attack on a Syrian military radar [4, 5]. Even though the details were not officially revealed, all the indications suggest that the radar at a nuclear installation in Syria had been tampered with. The attack took place in September 2007, and the nuclear installation was completely destroyed by Israeli bombing jets. The Israeli jets took off from southern Israel, crossed the Mediterranean Sea and the Syrian-Turkish border and returned four hours later. The state-of-the-art radars did not detect the jets, which raised suspicions of a malicious alteration of their functionality. Adee [4] suspects a kill switch or a backdoor in the off-the-shelf microprocessor that could block the radar via an apparently remote command (trigger) without shutting down the whole system. The difference between a kill switch and a backdoor is that the kill switch will shut off a specific chip when triggered, while a backdoor requires an intruder to achieve the same effect. The hypothesis of the kill switch is more likely and, in order to be implemented, requires the injection of extra logic. The HW and SW overhead for such an attack is very small, which makes it hard to detect during testing, and the threat models discussed are the malicious designer and the malicious manufacturer. The microprocessor used remains unknown. This is not the only occasion where microprocessors including a kill switch have supposedly been used. According to anonymous sources from the U.S. defense department, it is known that a European chip maker builds microprocessors with a kill switch, and the French defense uses this technology for military applications. Undocumented microchips were found in servers assembled by Supermicro [6, 7] that implemented a doorway to the network of the original system and incorporated memory, networking capacity and processing power. The attack aimed at leaking sensitive information over the long term. The Stuxnet attack provides an example of real-world attack capabilities in the industrial environment [8]. Stuxnet is a worm that was introduced into Microsoft Windows operating systems, targeting specific industrial control systems of Siemens which were used in Iran to run centrifuges. Until the target was found, the worm kept updating itself. The worm compromised the targeted system by exploiting 'zero-day' vulnerabilities. After monitoring the operation, the worm took control of the control system and ran the centrifuges to the point of failure, returning false feedback to cover the failure until the damage was irreversible. Hybrid attacks are very common in real-world scenarios. Hybrid attacks can include hardware, software and firmware parts. Such an attack can be malicious software that exploits vulnerabilities of the hardware to damage physical resources, as Stuxnet did [8].

### _Cyber-Physical Production Systems_
Cyber-physical systems (CPS) are sophisticated systems that combine physical and cyber units.
They are used in many different applications and they are the fundamental units of the internet of things (IoT). Their functionality is based on information exchange and interaction with each other. According to [9], the nature of CPS makes them particularly sensitive to attacks, due to their heterogeneous nature, their reliance on data and their large scale. When those systems are integrated into the production environment, we refer to them as cyber-physical production systems (CPPS). Often, CPPS expose a profitable surface to adversaries for hardware Trojan introduction, because they are complex, sophisticated structures that manage sensitive information with extensive communication among them, which helps malicious functionality stay hidden. Consequently, we consider securing CPPS an emerging, critical issue. According to [10], the pyramid of the automation hierarchy known until recently is decentralized in the concept of Industry 4.0. Information processing has been distributed over many control units which exchange information with the goal of optimizing the production process. The control units have moved closer to the technical processes for efficiency, creating an interactive communication net among heterogeneous systems. This creates the challenge of securing those components. Assume that a hardware Trojan is included in one of the control units. In Industry 4.0, machines use machine to machine (M2M) communication for sensitive information exchange. That means that the authentication keys are stored and processed in the machines. If the hardware Trojan leaks an authentication key to the adversary, she can take control of the unit and possibly of the factory. In such a demanding environment the CPPS should stay consistent with the security requirements. Availability, integrity and confidentiality are only the basic guidelines for the properties that should be taken into consideration. The proof that the units of those systems comply with those properties, and with more detailed ones, can be achieved with IFT methods as we discuss in the next sections.

### _Scope_
The scope of this report is to survey how IFT methodologies can secure CPS against hardware Trojan attacks and how those methods need to be further developed in order to be applicable in real-world scenarios. The remainder of this survey is organised as follows: Section II provides basic information about hardware Trojans. Section III provides basic information about IFT methods, and section IV presents state-of-the-art IFT methodologies against information leakage. Finally, in section V we compare the IFT methods and discuss future steps for research.

## II Hardware Trojans
Hardware Trojans are circuits with hidden, unspecified, malicious functionality that can be included in any phase of the IC supply chain. In the environment of Industry 4.0, stealthy attacks like hardware Trojans can implement any kind of effect, including information leakage. In this report we are interested in this kind of malicious activity. Figure 1 shows a time bomb hardware Trojan from [11]. This hardware Trojan is activated when the counter reaches the value \(2^{k}-1\). When the trigger is activated, the output value at ER* becomes different from the initial signal ER. The circuitry with the counter is the trigger, and the circuitry that changes the value of the signal ER is the payload. This is a simplified example.
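To make the trigger/payload anatomy concrete, the following Python sketch simulates the time-bomb Trojan at cycle level. The signal names ER and ER* follow Fig. 1 and the text above; everything else (the function name, the choice of \(k\)) is our illustrative assumption, not code from [11].

```python
def time_bomb_trojan(er_stream, k=4):
    """Cycle-level sketch of the time-bomb Trojan of Fig. 1.

    Trigger: a k-bit counter that fires when the count reaches 2**k - 1.
    Payload: once fired, the output ER* is the inverted input signal ER.
    """
    counter, fired = 0, False
    for er in er_stream:
        if not fired:
            counter += 1
            fired = (counter == 2 ** k - 1)
        yield (1 - er) if fired else er  # ER* differs from ER after the trigger

# The first 2**k - 2 output values track ER; afterwards they are inverted.
er_in = [1, 0] * 8
print(list(time_bomb_trojan(er_in, k=3)))
```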
More sophisticated mechanisms have been proposed by the research community, like the Trojans mentioned above. According to the taxonomy of R. Karri, J. Rajendran, K. Rosenfeld, and M. Tehranipoor [12], a hardware Trojan can be described by the insertion phase, the abstraction level, the activation mechanism (trigger), the effects (payload) and the location in the design.

#### II-B1 Insertion phase
The earlier a hardware Trojan is introduced in the design, the broader the range of its impact and the lower the cost of the attack. For instance, assume that a third-party vendor infects an IP core with a hardware Trojan. This IP core can be integrated in more than one design, increasing the number of infected systems. On the other hand, the scenario of the malicious manufacturer is design-specific. The attacker, in order to introduce a Trojan, should be aware of the design details, which can be acquired by reverse engineering, a technique that needs special knowledge and is expensive in time and resources. Consequently, the phase of the hardware Trojan introduction, in combination with the value of the protected assets, should be taken into consideration during the development of countermeasures.

#### II-B2 Abstraction level
Depending on the abstraction level of the design, a hardware Trojan can be injected at system level, at the development environment, at register-transfer level as a soft IP core, at gate level as a firm IP core, at transistor level as a hard IP core, or at the physical level.

#### II-B3 Triggers
There are hardware Trojans exploiting _don't care conditions_ for their trigger mechanisms [13], or data patterns in specific memory addresses [14], or even dedicated input images [15]. Some attacks have even more sophisticated triggers which are activated during the design flow, leaving no trigger signal to a possible detection algorithm [16, 17].

#### II-B4 Payload
The most common attacks realized by hardware Trojans are sensitive information leakage and denial of service (DoS) attacks. Other attacks can be functional alteration, performance downgrade, data corruption, circuit aging, chip destruction, etc.

#### II-B5 Attack targets
The most common targets for hardware Trojan attacks are memory elements [18, 19, 20, 21] and cryptographic components [13, 22, 23]. However, there are many proposals for attacking cores such as UARTs [24] or AXI4-bus interconnects [25], FPGA LUTs [16], CPUs [26, 27, 28], etc.

#### II-B6 Resources required
For the majority of the Trojans we study, the attacker needs knowledge of the design and access to it (e.g. bitstream [29], netlists [30] or access to the design tools [16, 17, 31]).

## III Information flow tracking
The basic idea behind IFT methods is that they track the influence of information in a system during computation. In order to achieve that, they assign tags (usually binary values) to each data element of the design and update the value of each tag based on the applied method and the applied security properties. The verification is achieved by observing the values of the tags. IFT methods can be used with different verification techniques, as described in the taxonomy in [32]. More specifically, they can verify security properties through static methods like simulation, formal verification, emulation, and virtual prototyping, or through dynamic methods like runtime monitoring techniques.
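The tag mechanism can be illustrated in a few lines of Python. This is a minimal sketch of binary taint tags propagating through a toy netlist with a confidentiality check; it does not reproduce any of the surveyed tools, and all signal and variable names are made up for illustration.

```python
# Minimal binary taint tracking over a toy gate-level netlist.
# A tag of 1 means "influenced by the confidential asset" (e.g. a key bit).
netlist = [                     # (gate, output, inputs) in topological order
    ("and", "n1", ("key", "data")),
    ("xor", "n2", ("n1", "ctrl")),
    ("or",  "out", ("n2", "data")),
]

tags = {"key": 1, "data": 0, "ctrl": 0}   # only the key is confidential

for gate, out, ins in netlist:
    # conservative rule: the output tag is the OR of the input tags
    tags[out] = max(tags[i] for i in ins)

# security property: nothing confidential may reach the untrusted output
if tags["out"]:
    print("violation: information flows from 'key' to the untrusted output")
```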
There are many IFT methods used with different verification techniques and at different abstraction levels, tackling different problems, since not all those methods address hardware Trojans. In this paper we chose to present different IFT approaches and discuss their limitations and requirements. We present static IFT methods that tackle information leakage. Information leakage is the most common hardware Trojan effect and, in the case of CPS, it can cause economic loss or even put human lives in danger. As we discussed earlier, runtime monitoring methods can be expensive in resources, and the recovery from those attacks can be costly too. Based on that, we chose to focus on static IFT verification methods. Static IFT methods are applied at design time, identifying the malicious behavior soon enough to minimize the recovery cost. Moreover, they do not add overhead to the original design's resources.

## IV IFT methods against hardware Trojans
Many methodologies use theorem proving to verify the information flow in designs [33, 34, 35, 36]. In those methods the security properties are expressed as theorems, and theorem proving tools such as Coq are used to verify them. In the proof-carrying hardware IP (PCHIP) framework [33], the IP vendors are required to deliver the HDL code of the design together with formal proofs that the code complies with some security properties predefined between the two parties. For instance, such a property could state that an instruction is only allowed to access the memory locations defined in its op-code. With security tags assigned to the signals, PCHIP can track the information flow in the design. The disadvantage of theorem proving methods is the manual conversion of the HDL core to the theorem proving language and proof checking environment (e.g. Coq and CoqIDE). Even though a conversion from HDL to Coq has been proposed [33, 34], theorem proving is far from an automated technique. The approach proposed in [37] addresses black-box models. It is based on information flow security (IFS) verification, which detects violations of security properties. An asset is modeled as stuck-at-0 and stuck-at-1 faults and, by leveraging automatic test pattern generation (ATPG), faults are searched for. When a fault is detected, it means that there is an information flow from the asset to observation points. Finally, the trigger mechanism is extracted. This methodology is based on the fact that the trigger mechanism is injected in the original circuit.

Fig. 1: Time bomb hardware Trojan based on [11]

The tool Register Transfer Level Information Flow Tracking (RTLIFT) [38] can be applied directly to HDL code. Security tags (or labels) are assigned to every signal. RTLIFT uses IFT logic to securely propagate the tags throughout the design. The functionality of the additional IFT logic depends on the precision required. For instance, the output of an operation can be marked tainted whenever any of its inputs is tainted. However, if an untainted input fully determines the output, the tainted input has no actual influence, and marking the output as tainted would be a false positive. To avoid such inaccuracies, the modules implementing the flow tracking logic take these cases into consideration. Based on the required trade-off between complexity and precision, different precision levels can be achieved.
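The precision trade-off is easiest to see on a single AND gate. The sketch below contrasts the two propagation rules in Python; the value-aware rule is the standard shadow logic for an AND gate in gate-level IFT, while the function names and framing are ours.

```python
def and_taint_conservative(a, at, b, bt):
    """Output taint = OR of the input taints, regardless of values."""
    return at | bt

def and_taint_precise(a, at, b, bt):
    """Value-aware rule: a tainted input matters only if the other
    input does not already force the AND output to 0."""
    return (a & bt) | (b & at) | (at & bt)

# a = 0 (untainted) forces the AND output to 0, so taint on b cannot flow:
a, at, b, bt = 0, 0, 1, 1
print(and_taint_conservative(a, at, b, bt))  # 1 -> false positive
print(and_taint_precise(a, at, b, bt))       # 0 -> no actual flow from b
```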
Given the Verilog code and the control and data flow precision flags (which define the required precision level), the tool generates a functionally equivalent Verilog code including IFT logic (IFT-Verilog code). The IFT-Verilog code is tested against the security properties requested for the design through simulation or formal verification. If the design passes this process, the extra logic is removed and the design is sent for fabrication. If it fails, the design has to be altered and to go through this process again. The methodology described in [39], gate-level information flow tracking (GLIFT), can detect hardware Trojans injected by malicious third-party vendors that alter the functionality of the original circuit or leak sensitive information. According to GLIFT, each data bit is assigned a security label. This is implemented with additional tracking logic. It is up to the designers to define the security properties and use GLIFT to verify the cores. For example, assume that the goal is to track the flow of a cryptographic key in order to ensure that it does not leak. The security labels of the key will take the value 'confidential', and the security property that verifies that there is no leakage should ensure that no bit with a 'confidential' label ends up in an output or memory with the label 'untrusted'. Thus, this technique can identify violations of confidentiality and integrity and, hence, expose a hardware Trojan. Both methods discussed above [38, 39] face the problem of false-positive results, which have to be resolved manually. The method proposed by Wang et al. [40], called HLIFT, detects hardware Trojans based on the trigger behavior at register transfer level (RTL) with the use of control and data flow graphs (CDFG). The method can identify hardware Trojans that leak information through specific output pins or side channels, without functional modification, and through unspecified output pins. This approach is based on a feature matching methodology that captures specific Trojan features. The features are based on three kinds of Trojan triggers: always-on, immediate-on, and sequential-on. This methodology can be divided into the predefinition flow and the application flow. During the predefinition flow, statement CDFGs are built from already known infected RTL designs. Statement CDFGs are abstract, high-level and compact RTL netlists. That way, unnecessary information is removed, which decreases the complexity. IFT is applied on the CDFGs and a list of Trojan IFT features is created. In the application flow, the statement-level CDFG is extracted from the unknown RTL design and compared for matches against the list of the extracted Trojan features. The methodology proposed in [41] uses virtual prototyping (SystemC TLM 2.0) to identify information leakage or untrusted access. At the behavioral level there is a lack of design details. Thus, the security properties applied are very strict. This can lead to false positives. This approach identifies the vulnerable paths and reports them to the user for inspection. Consequently, the inspection process is done manually, adding time overhead. The approach in [42] creates IFT models and optimizes them according to specific security properties. The security properties are compiled to security constraints and assertions, which are combined with the trimmed IFT model. Finally, the combination of the IFT model with the security constraints and assertions is verified through simulation, emulation or formal verification.
In contrast to the methods presented above, the method in [43] does not use any of the mentioned verification techniques. The HDL code is converted to an abstract syntax tree (AST) to identify, track and localize anomalous behavior. The AST is converted to a directed data-flow graph (DFG). This process automatically recognizes interaction between IP cores. By identifying the sink and the source signals, the tool detects vulnerabilities and finally locates the threats.

## V Discussion and Conclusions
The development of hardware Trojans is flourishing as they attract interest from academia and industry. As countermeasures, IFT methodologies are very promising, because they can be flexible, adaptable and expandable based on the application. However, the IFT verification methodologies proposed so far cannot be applied in real-world scenarios. To the best of our knowledge, the purchased IP cores are usually not in a white-box form (usually the cores are purchased locked in order to avoid IP piracy), or the specifications of the cores provided are considered untrusted. Thus, the IP cores purchased are treated as black boxes. That means that the internals of the purchased modules are unknown and can be leveraged from other layers of the system (firmware or software) for potential attacks. Thus, there is a need to explore more IFT methods for black-box designs that do not rely on known hardware Trojan behaviors. The reason we suggest that known Trojan behaviors should not be taken into consideration is that attackers want their Trojans to stay hidden, pushing the limits of the currently known Trojan behaviors in order to make them more stealthy. A case in point is the development of trigger mechanisms. In recent years there has been a tendency to include the trigger mechanisms in the design flow, so that detection methods searching for trigger behaviors cannot detect them. On the other hand, methods that are based on security properties to identify unwanted or unspecified behavior in the designs seem more flexible with respect to unknown attacks. However, the completeness of the security properties is an open problem. Another issue is the definition of the security properties by the engineers. Manual processes can result in vulnerabilities of the systems which can be leveraged by adversaries. Identifying a hardware Trojan in a real-world example can be very challenging, especially since the trigger mechanism is not necessarily part of the original design. In some respects, a fault, a vulnerability, or a backdoor may be no different from a well-concealed Trojan. From the real-world attacks we can conclude that the attack scenarios implemented there are much more complete than the ones provided by academia. In the real-world examples mentioned above we identify mechanisms that can communicate over great distances and can affect state-of-the-art systems. The attacks were sophisticated, with complicated mechanisms and more than negligible overhead. It will be useful for the research community to explore more complicated attacks, across the levels of a computing system, in order to facilitate corresponding countermeasures.
2306.00090
The Källén-Lehmann representation in de Sitter spacetime
We study two-point functions of symmetric traceless local operators in the bulk of de Sitter spacetime. We derive the K\"all\'en-Lehmann spectral decomposition for any spin and show that unitarity implies its spectral densities are nonnegative. In addition, we recover the K\"all\'en-Lehmann decomposition in Minkowski space by taking the flat space limit. Using harmonic analysis and the Wick rotation to Euclidean Anti de Sitter, we derive an inversion formula to compute the spectral densities. Using the inversion formula, we relate the analytic structure of the spectral densities to the late-time boundary operator content. We apply our technical tools to study two-point functions of composite operators in free and weakly coupled theories. In the weakly coupled case, we show how the K\"all\'en-Lehmann decomposition is useful to find the anomalous dimensions of the late-time boundary operators. We also derive the K\"all\'en-Lehmann representation of two-point functions of spinning primary operators of a Conformal Field Theory on de Sitter.
Manuel Loparco, Joao Penedones, Kamran Salehi Vaziri, Zimo Sun
2023-05-31T18:08:51Z
http://arxiv.org/abs/2306.00090v4
# The Kallen-Lehmann representation in de Sitter spacetime

###### Abstract
We study two-point functions of symmetric traceless local operators in the bulk of de Sitter spacetime. We derive the Kallen-Lehmann spectral decomposition for any spin and show that unitarity implies its spectral densities are nonnegative. In addition, we recover the Kallen-Lehmann decomposition in Minkowski space by taking the flat space limit. Using harmonic analysis and the Wick rotation to Euclidean Anti de Sitter, we derive an inversion formula to compute the spectral densities. Using the inversion formula, we relate the analytic structure of the spectral densities to the late-time boundary operator content. We apply our technical tools to study two-point functions of composite operators in free and weakly coupled theories. In the weakly coupled case, we show how the Kallen-Lehmann decomposition is useful to find the anomalous dimensions of the late-time boundary operators. We also derive the Kallen-Lehmann representation of two-point functions of spinning primary operators of a Conformal Field Theory on de Sitter.

**Contents**

* 1 Introduction
* 2 Preliminaries
  * 2.1 Representation theory of de Sitter isometry group
    * 2.1.1 Classification of UIRs
    * 2.1.2 Hilbert spaces of the UIRs
  * 2.2 Embedding space formalism
    * 2.2.1 Coordinate systems
    * 2.2.2 Fields in embedding space
    * 2.2.3 States in embedding space
* 3 The Kallen-Lehmann decomposition in de Sitter
  * 3.1 Dimension \(d\geq 2\)
    * 3.1.1 Scalar operators
    * 3.1.2 Spinning operators
  * 3.2 dS\({}_{2}\)
    * 3.2.1 Scalar operators
    * 3.2.2 Spinning operators
  * 3.3 Flat space limit
* 4 Inversion formulae
  * 4.1 Wick rotation to EAdS
  * 4.2 Inversion formula for \(d\geq 2\)
    * 4.2.1 Spurious poles
    * 4.2.2 Relation to the inversion formula from the sphere
  * 4.3 Completeness of principal series and analyticity of the spectral densities
  * 4.4 Boundary operator expansion
  * 4.5 Inversion formula in dS\({}_{2}\)
* 5 Applications
  * 5.1 Free QFTs
    * 5.1.1 Spin 0 Examples
    * 5.1.2 Spin 1 Examples
  * 5.2 Conformal Field Theories
    * 5.2.1 Spin 0 Example
    * 5.2.2 Spin 1 Example
    * 5.2.3 Spin 2 Example
    * 5.2.4 Higher spin examples in dS\({}_{2}\)
  * 5.3 Weakly coupled QFT
    * 5.3.1 Anomalous dimensions from quartic interactions
* 6 Outlook
* A Various properties of Green's functions in de Sitter
  * A.1 Canonical quantization of a free scalar
  * A.2 Proca fields in \(\mathrm{dS}_{2}\)
  * A.3 Analytical continuation of \(G_{\lambda,0}\) in \(\mathrm{dS}_{2}\)
  * A.4 Flat space limit of \(G_{\lambda,\ell}\)
* B Complementary series in the Kallen-Lehmann decompositions
* C Discrete series in free scalar theory in \(\mathrm{dS}_{2}\)
* D Properties of \(\phi^{\pm}_{\lambda,J}\) and \(\psi_{p,J}\)
* E Discrete series \(\mathcal{D}_{p}\) in the two-point function of \(\mathcal{O}^{(J)}\) when \(p<J\)
* F Harmonic Analysis in EAdS
  * F.1 Coordinates in Euclidean Anti de Sitter
  * F.2 Harmonic functions
  * F.3 Explicit form of harmonic functions up to \(J=2\)
* G Explicit expressions of the inversion formula
  * G.1 Spin 0
  * G.2 Spin 1
* H Inversion integrals
  * H.1 Free QFTs
    * H.1.1 Scalar Free QFT
    * H.1.2 Spin 1 Free QFT
  * H.2 CFTs
    * H.2.1 Scalar CFT
    * H.2.2 Spin 1 CFT
    * H.2.3 Spin 2 CFT
* I Diagrammatics of de Sitter
  * I.1 In-in formalism
  * I.2 EAdS-dS dictionary
  * I.3 Details of the anomalous dimensions computation

## 1 Introduction
de Sitter (dS) spacetime is the simplest model of an expanding universe. Therefore, understanding Quantum Field Theory (QFT) in dS spacetime is the first step towards a description of quantum effects in Cosmology.
The main ingredients of QFT are states in the Hilbert space and local operators labeled by points in spacetime. We have recently given a systematic account of the Hilbert space of free QFTs and Conformal Field Theories (CFTs) in dS [1] and its decomposition in Unitary Irreducible Representations (UIRs) of the isometry group \(SO(d+1,1)\) of dS\({}_{d+1}\) (see section 2.1 for a brief review). In general, this leads to the decomposition of the identity as a sum/integral over projectors into UIRs, \[\mathbb{1}=\sum_{\ell=0}\int_{\mathbb{R}}d\lambda\ \mathbb{1}_{\mathcal{P}_{ \Delta,\ell}}+\cdots \tag{1}\] where we show explicitly the contribution from principal series with dimension \(\Delta=\frac{d}{2}+i\lambda\) and \(SO(d)\) spin \(\ell\) and the dots stand for other UIRs. In this article, we continue the groundwork and derive the Kallen-Lehmann representation of two-point functions of bulk local operators in the Bunch-Davies vacuum of dS. We systematize and extend the results of previous works [2; 3; 4; 5; 6; 7; 8; 9; 10]. In particular, we employ the embedding space formalism to efficiently treat the case of bosonic traceless symmetric operators in arbitrary spacetime dimensions. The Kallen-Lehmann decomposition of a two-point function is simply obtained by inserting the resolution of the identity (1) in the middle of a Wightman two-point function. For example, for a two-point function of operators of spin \(J\) in \(d\geq 2\) we find \[\langle\mathcal{O}^{(J)}(Y_{1},W_{1})\mathcal{O}^{(J)}(Y_{2},W_{ 2})\rangle =\sum_{\ell=0}^{J}\int_{\mathbb{R}}d\lambda\,\langle\mathcal{O}^{ (J)}(Y_{1},W_{1})\mathbb{1}_{\mathcal{P}_{\Delta=\frac{d}{2}+i\lambda,\ell}} \mathcal{O}^{(J)}(Y_{2},W_{2})\rangle+\cdots \tag{2}\] \[=\sum_{\ell=0}^{J}\int_{\mathbb{R}}d\lambda\,\rho^{\mathcal{P}, \ell}_{\mathcal{O}^{(J)}}(\lambda)\left[(W_{1}\cdot\nabla_{1})\,(W_{2}\cdot \nabla_{2})\right]^{J-\ell}G_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})+\cdots\] As explained in section 2.2, \(Y\) encodes the position in dS and \(W\) encodes the indices of a tensor field. In section 3, we show that \(\langle\mathcal{O}^{(J)}(Y_{1},W_{1})\mathbb{1}_{\mathcal{P}_{\Delta=\frac{d}{ 2}+i\lambda,\ell}}\mathcal{O}^{(J)}(Y_{2},W_{2})\rangle\) can be written in terms of the propagator \(G_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})\) of a free field of spin \(\ell\) and mass squared \[m^{2}R^{2}=\lambda^{2}+\left(\frac{d}{2}+\ell-2+2\delta_{\ell,0}\right)^{2}\,, \tag{3}\] with \(R\) the curvature radius of dS and \(\delta_{\ell,0}\) is a Kronecker delta. Therefore, all the dynamical information is encoded in the spectral densities \(\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\) associated to intermediate states in principal series UIRs. The dots in (2) stand for the contribution of other UIRs. In particular, we also determine the contributions from the complementary series1 and the discrete series in the case of dS\({}_{2}\). This completes the picture in dS\({}_{2}\) and dS\({}_{3}\)2 where we have derived all the contributions to the Kallen-Lehmann representation. In section 3, we prove the positivity of the dS spectral densities and show how they morph into the standard flat space spectral densities in the limit \(R\to\infty\). In section 4, we present explicit inversion formulae that give the spectral densities \(\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\) as integrals over the associated two-point functions. 
To derive these formulae we analytically continue the two-point functions to Euclidean Anti-de Sitter (EAdS) space and then use harmonic analysis. The inversion formulae imply a strip of analyticity in the \(\lambda\) complex plane centered around the integration contour in (2).3 In addition, we predict the presence of spurious (or kinematical) poles with fixed residues in the spectral densities. Assuming meromorphicity in \(\lambda\), we derive the late time expansion of two-point functions and interpret it as a Boundary Operator Expansion (BOE). It would be interesting to understand the convergence properties of this BOE, and whether the same BOE can be used inside all correlation functions.

Footnote 3: The width of the strip is fixed by the asymptotic behavior of the two-point function or, equivalently, by the leading boundary operator in the late time expansion. See sections 4.3 and 4.4.

In section 5, we study many examples (CFTs, free and weakly coupled QFTs) and always find spectral densities that are meromorphic functions of \(\lambda\in\mathbb{C}\) and have the predicted spurious poles. For weakly coupled QFTs, we show how the Kallen-Lehmann decomposition can be used to compute anomalous dimensions of late time boundary operators. In section 6, we discuss possible future directions, and in the appendices we elaborate on many technical details. Throughout the paper, we will guide the eye of the reader by highlighting important equations. Here we list some of them as a summary of our main results:

* Kallen-Lehmann decomposition of spin \(J\) operators in
  * dS\({}_{2}\) (3.56),
  * dS\({}_{d+1}\) with \(d\geq 2\) (3.31);
* Flat space limit of the spectral densities (3.63);
* Inversion formula for the spectral densities of
  * principal and discrete series in dS\({}_{2}\) (4.61),
  * principal series in dS\({}_{d+1}\) with \(d\geq 2\) (4.18);
* Boundary operator expansion (4.51).

This work fits within the recent efforts to constrain QFT observables in dS by using general principles such as unitarity and symmetries [5, 6, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20].

## 2 Preliminaries
In this section, we review two mathematical tools that will be very useful in the derivation of the Kallen-Lehmann decomposition and the computation of spectral densities. The first topic concerns UIRs of the de Sitter isometry group \(SO(d+1,1)\), and the second topic is the embedding space formalism.

### Representation theory of de Sitter isometry group
The \((d+1)\)-dimensional de Sitter spacetime is a hypersurface in the embedding space \(\mathbb{R}^{d+1,1}\),

\[-Y_{0}^{2}+Y_{1}^{2}+\cdots+Y_{d+1}^{2}=R^{2}, \tag{1}\]

where \(R\) is the de Sitter radius. The embedding (1) manifests the isometry group \(SO(d+1,1)\) of dS\({}_{d+1}\), which is generated by \(L_{AB}=-L_{BA}\), \(0\leq A,B\leq d+1\), satisfying the commutation relations

\[[L_{AB},L_{CD}]=\eta_{BC}L_{AD}-\eta_{AC}L_{BD}+\eta_{AD}L_{BC}-\eta_{BD}L_{AC}, \tag{2}\]

where \(\eta_{AB}=\text{diag}(-1,1,\cdots,1)\) is the metric on \(\mathbb{R}^{d+1,1}\). In a unitary representation, \(L_{AB}\) are realized as anti-hermitian operators on some Hilbert space.
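As a quick sanity check of (2), one can realize the generators in the vector representation, \((L_{AB})^{C}{}_{D}=\delta^{C}_{A}\eta_{BD}-\delta^{C}_{B}\eta_{AD}\), and verify the algebra numerically. The short Python sketch below is ours, and the index conventions are an assumption chosen to be consistent with (2).

```python
import numpy as np
from itertools import product

d = 2                                    # dS_{d+1} with embedding R^{d+1,1}
n = d + 2
eta = np.diag([-1.0] + [1.0] * (d + 1))

def L(A, B):
    """Vector representation: (L_AB)^C_D = delta^C_A eta_BD - delta^C_B eta_AD."""
    m = np.zeros((n, n))
    m[A, :] += eta[B, :]
    m[B, :] -= eta[A, :]
    return m

# check [L_AB, L_CD] = eta_BC L_AD - eta_AC L_BD + eta_AD L_BC - eta_BD L_AC
for A, B, C, D in product(range(n), repeat=4):
    lhs = L(A, B) @ L(C, D) - L(C, D) @ L(A, B)
    rhs = (eta[B, C] * L(A, D) - eta[A, C] * L(B, D)
           + eta[A, D] * L(B, C) - eta[B, D] * L(A, C))
    assert np.allclose(lhs, rhs)
print("commutation relations (2) verified in the vector representation")
```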
The isomorphism between \(\mathfrak{so}(d+1,1)\) and the \(d\)-dimensional Euclidean conformal algebra is realized as \[L_{ij}=M_{ij}\,\ \ \ L_{0,d+1}=D\,\ \ \ L_{d+1,i}=\frac{1}{2}(P_{i}+K_{i})\,\ \ \ L_{0,i}=\frac{1}{2}(P_{i}-K_{i}) \tag{3}\] where \(D\) is the dilatation, \(P_{i}\) (\(i=1,2,\cdots d\)) are translations, \(K_{i}\) are special conformal transformations and \(M_{ij}=-M_{ji}\) are rotations. The commutation relations of the conformal algebra following from (2) and (3) are \[[D,P_{i}]=P_{i}\,\ \ [D,K_{i}]=-K_{i}\,\ \ [K_{i},P_{j}]=2 \delta_{ij}D-2M_{ij}\,\] \[[M_{ij},P_{k}]=\delta_{jk}P_{i}-\delta_{ik}P_{j}\,\ \ [M_{ij},K_{k}]= \delta_{jk}K_{i}-\delta_{ik}K_{j}\,\] \[[M_{ij},M_{k\ell}]=\delta_{jk}M_{i\ell}-\delta_{ik}M_{j\ell}+ \delta_{i\ell}M_{jk}-\delta_{j\ell}M_{ik}. \tag{4}\] The quadratic Casimir of \(SO(d+1,1)\), which commutes with all \(L_{AB}\), is chosen to be \[C^{SO(d+1,1)}=\frac{1}{2}L_{AB}L^{AB}=D(d-D)+P_{i}K_{i}+\frac{1}{ 2}M_{ij}^{2}. \tag{5}\] Here \(\frac{1}{2}M_{ij}^{2}\equiv\frac{1}{2}M_{ij}M^{ij}\) is the quadratic Casimir of \(SO(d)\) and it is negative-definite for a unitary representation since \(M_{ij}\) are anti-hermitian. For example, for a spin-\(s\) representation of \(SO(d)\), it takes the value of \(-s(s+d-2)\). #### 2.1.1 Classification of UIRs An irreducible infinite dimensional representation of \(SO(d+1,1)\) is fixed by a complex parameter \(\Delta\)4 and a highest-weight vector \(\mathbf{S}\) of \(SO(d)\). Throughout the paper, we will only consider \(\mathbf{S}=(s,0,\cdots,0)\), i.e. spin \(s\) representation of \(SO(d)\). Such representations corresponds to the single-particle Hilbert space of a free spin \(s\) field in dS\({}_{d+1}\). More general \(\mathbf{S}\) describes fields of mixed symmetry, including form fields, spinors, tensor spinors, etc. See [21; 22; 23; 24] for recent discussions on these fields. Fixing \(\Delta\) and \(s\), the quadratic Casimir is equal to \(\Delta(d-\Delta)-s(d+s-2)\). For any \(d\geq 3\), there are four types of UIRs apart from the trivial representation [21; 25; 26]: Footnote 4: We will often call this parameter a scaling dimension. But it does not have the same group theoretical meaning as scaling dimensions in unitary CFT, since it is not associated to any operator bounded from below. * **Principal series \(\mathcal{P}_{\Delta,s}\)**: \(\Delta\in\frac{d}{2}+i\mathbb{R}\) and \(s\geq 0\). * **Complementary series \(\mathcal{C}_{\Delta,s}\)**: \(0<\Delta<d\) when \(s=0\) and \(1<\Delta<d-1\) when \(s\geq 1\). Both principal and complementary series describe free massive particles in dS\({}_{d+1}\). * **Type I exceptional series \(\mathcal{V}_{p,0}\)**: \(\Delta=d+p-1\) and \(s=0\) for \(p\in\mathbb{Z}_{>0}\). They correspond to shift symmetric scalars in dS\({}_{d+1}\)[27]. * **Type II exceptional series \(\mathcal{U}_{s,t}\)**: \(\Delta=d+t-1\) and \(s\geq 1\) with \(t=0,1,2\cdots,s-1\). The single-particle Hilbert space of a partially massless field of spin \(s\) and depth \(t\) in dS\({}_{d+1}\) furnishes the representation \(\mathcal{U}_{s,t}\). When \(d=2\), there are only principal series and complementary series up to isomorphism [28, 29, 30, 31], and the complementary series representations always have \(s=0\). When \(d=1\), since the \(SO(d)\) group becomes degenerate, the Casimir of \(SO(2,1)\) can always be written as \(\Delta(1-\Delta)\). The classification of UIRs is as follows: * **Principal series \(\mathcal{P}_{\Delta}\)**: \(\Delta\in\frac{1}{2}+i\mathbb{R}\). 
Its restriction to \(SO(2)\) yields [32, 33]

\[\left.\mathcal{P}_{\Delta}\right|_{SO(2)}=\bigoplus_{n\in\mathbb{Z}}(n) \tag{6}\]

where \((n)\) denotes the (one-dimensional) spin \(n\) representation of \(SO(2)\).

* **Complementary series \(\mathcal{C}_{\Delta}\)**: \(0<\Delta<1\). It has the same \(SO(2)\) content as \(\mathcal{P}_{\Delta}\).
* **Lowest-weight discrete series \(\mathcal{D}_{p}^{+}\)**: \(C^{\mathrm{SO}(2,1)}=p(1-p),\,p\in\mathbb{Z}_{+}\). Its \(SO(2)\) spectrum has a lower bound \(p\).
* **Highest-weight discrete series \(\mathcal{D}_{p}^{-}\)**: \(C^{\mathrm{SO}(2,1)}=p(1-p),\,p\in\mathbb{Z}_{+}\). Its \(SO(2)\) spectrum has an upper bound \(-p\).

There is an isomorphism between representations of scaling dimension \(\Delta\) and \(\bar{\Delta}=d-\Delta\) in the principal and complementary series, which is established by the shadow transformation. To remove such redundancy, one can further impose, for example, \(\mathrm{Im}\,(\Delta)\geq 0\) in the principal series and \(\Delta>\frac{d}{2}\) in the complementary series.

#### 2.1.2 Hilbert spaces of the UIRs
As we will see in section 3, the derivation of the Kallen-Lehmann representation in de Sitter spacetime requires a detailed knowledge of the Hilbert space of each UIR listed above. The complementary series can be treated as a simple analytical continuation of the principal series in the derivation of the Kallen-Lehmann representation (see Appendix B for more details). The two exceptional series are absent in all examples considered in this paper. So we will only briefly review the Hilbert space of the principal series representation \(\mathcal{P}_{\Delta,s}\) in any dimension (including \(d=1\)), and the discrete series representation \(\mathcal{D}_{p}^{\pm}\) in dS\({}_{2}\), by following [26]. Given a principal series representation \(\mathcal{P}_{\Delta,s}\), its Hilbert space is spanned by a continuous family of \(\delta\)-function normalized kets \(|\Delta,\mathbf{y}\,\rangle_{i_{1}\cdots i_{s}}\). Here \(\mathbf{y}\) labels a point in \(\mathbb{R}^{d}\) and the indices \(\{i_{1},i_{2},\cdots i_{s}\}\), being symmetric and traceless, carry the spin \(s\) representation of \(SO(d)\). The action of the \(\mathfrak{so}(d+1,1)\) algebra on these states is realized by

\[\begin{split}& P_{i}|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}=\partial_{i}|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}\,\\ & D|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}=({\bf y}\cdot\partial_{\bf y}+\Delta)|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}\,\\ & M_{k\ell}|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}=\left(y_{\ell}\partial_{k}-y_{k}\partial_{\ell}+{\cal M}^{(s)}_{k\ell}\right)|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}\,\\ & K_{k}|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}=\left(2y_{k}({\bf y}\cdot\partial_{\bf y}+\Delta)-y^{2}\partial_{k}-2y^{\ell}{\cal M}^{(s)}_{k\ell}\right)|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}\,\end{split} \tag{7}\]

where \({\cal M}^{(s)}_{k\ell}\) denotes the spin-\(s\) representation of \(\mathfrak{so}(d)\)

\[{\cal M}^{(s)}_{k\ell}|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}=\sum_{j=1}^{s}|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{j-1}k\,i_{j+1}\cdots i_{s}}\delta_{\ell i_{j}}-|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{j-1}\ell\,i_{j+1}\cdots i_{s}}\delta_{ki_{j}}.
\tag{2.8}\]
By introducing an auxiliary null vector \(z^{i}\in\mathbb{C}^{d}\), we define \(|{\bf y},{\bf z}\,\rangle_{\Delta,s}\equiv|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}\,z^{i_{1}}\cdots z^{i_{s}}\), which packages all the tensor components of \(|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}\) into a generating function and also allows us to state the normalization condition of \(|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}\) concisely
\[{}_{\Delta,s}\langle{\bf y}_{1},{\bf z}_{1}|{\bf y}_{2},{\bf z}_{2}\rangle_{\Delta,s}=\delta^{d}({\bf y}_{1}-{\bf y}_{2})({\bf z}_{1}\cdot{\bf z}_{2})^{s}. \tag{2.9}\]
Fixing this normalization, the resolution of the identity of \({\cal P}_{\Delta,s}\) is given by
\[\mathbb{1}_{{\cal P}_{\Delta,s}}=\int d^{d}{\bf y}\,|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}\,{}^{i_{1}\cdots i_{s}}\langle\Delta,{\bf y}\,|=\frac{1}{\left(\frac{d}{2}-1\right)_{s}}\int d^{d}{\bf y}\,|{\bf y},D_{\bf z}\rangle_{\Delta,s}\,{}_{\Delta,s}\langle{\bf y},{\bf z}|\, \tag{2.10}\]
where \(D_{\bf z}\) is the analogue of the ordinary derivative while preserving the nullness condition of \({\bf z}\)
\[D_{z^{i}}\equiv\left(\frac{d}{2}-1+{\bf z}\cdot\partial_{\bf z}\right)\partial_{z^{i}}-\frac{1}{2}z_{i}\,\partial_{z}^{2}. \tag{2.11}\]
A generic normalizable state in the Hilbert space of \({\cal P}_{\Delta,s}\) can be expressed as a linear combination of \(|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}\)
\[|\psi\rangle\equiv\int_{\mathbb{R}^{d}}\,d^{d}{\bf y}\,\psi_{i_{1}\cdots i_{s}}({\bf y})\,|\Delta,{\bf y}\,\rangle_{i_{1}\cdots i_{s}}\, \tag{2.12}\]
where \(\psi_{i_{1}\cdots i_{s}}({\bf y}\,)\) is a smooth tensor-valued wavefunction on \(\mathbb{R}^{d}\), satisfying a certain fall-off condition at \(\infty\)[26]. In the \(d=1\) case, it is easier to describe the UIRs by using the following basis of \(SO(2,1)\)
\[L_{0}=-\frac{i}{2}(P+K),\ \ L_{\pm}=-\frac{i}{2}(P-K)\mp D\, \tag{2.13}\]
where \(L_{0}\) is the (hermitian) generator of the \(SO(2)\) subgroup, and hence has integer eigenvalues in any single-valued representation of \(SO(2,1)\). The new basis elements satisfy the commutation relations
\[[L_{0},L_{\pm}]=\pm L_{\pm},\ \ \left[L_{-},L_{+}\right]=2L_{0}\, \tag{2.14}\]
and reality conditions \(L_{0}^{\dagger}=L_{0},L_{\pm}^{\dagger}=L_{\mp}\). The principal series representation \(\mathcal{P}_{\Delta}\) is spanned by eigenstates \(\{|n\rangle_{\Delta},n\in\mathbb{Z}\}\) of \(L_{0}\), on which \(L_{\pm}\) act as
\[L_{\pm}|n\rangle_{\Delta}=(n\pm\Delta)|n\pm 1\rangle_{\Delta}. \tag{2.15}\]
The inner product compatible with the reality conditions and the action (2.15) is of the form \({}_{\Delta}\langle n|m\rangle_{\Delta}=c\,\delta_{nm}\), where \(c\) is a positive constant. We can simply choose \(c=1\). With this choice fixed, the continuous \(|y\rangle\) basis reviewed above is related to the discrete \(|n\rangle_{\Delta}\) basis via the wavefunction \(\psi_{n}(y)=\left(\frac{1-iy}{1+iy}\right)^{n}\frac{\pi^{-\frac{1}{2}}}{(1+y^{2})^{\Delta}}\). When \(\Delta\) is a positive integer, say \(\Delta=p\), the action of \(L_{\pm}\) is truncated at \(n=\mp p\), leading to two irreducible representations. These two representations are actually \(\mathcal{D}_{p}^{\pm}\):
\[\mathcal{D}_{p}^{+}=\mathrm{Span}\{|n\rangle_{p},\,n\geq p\},\quad\mathcal{D}_{p}^{-}=\mathrm{Span}\{|n\rangle_{p},\,n\leq-p\}. \tag{2.16}\]
In this case, with the action (2.15) being fixed, the simple normalization \({}_{p}\langle n|m\rangle_{p}=\delta_{nm}\) is not consistent with the reality condition \(L_{\pm}^{\dagger}=L_{\mp}\).
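To see the tension explicitly, write \({}_{\Delta}\langle n|m\rangle_{\Delta}=N_{n}\,\delta_{nm}\) and take the matrix element of the reality condition \(L_{+}^{\dagger}=L_{-}\) between \(\langle n+1|\) and \(|n\rangle\); the action (2.15) then requires
\[(n+\Delta)\,N_{n+1}=(n+1-\Delta^{*})\,N_{n}\,.\]
For the principal series, \(\Delta^{*}=1-\Delta\) and the two sides agree for constant \(N_{n}\), consistent with the choice \(c=1\) above. For \(\Delta=p\in\mathbb{Z}_{+}\), however, the recursion forces \(N_{n+1}/N_{n}=(n+1-p)/(n+p)\), whose solution (up to an overall constant) is the ratio of Gamma functions in the normalization below.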
Instead, we need to use
\[{}_{p}\langle n|m\rangle_{p}=\frac{\Gamma(|n|+1-p)}{\Gamma(|n|+p)}\delta_{n,m}\,. \tag{2.17}\]
So the resolution of the identity of \(\mathcal{D}_{p}^{\pm}\) becomes
\[\mathbb{1}_{\mathcal{D}_{p}^{\pm}}=\sum_{\pm n\geq p}\frac{\Gamma(|n|+p)}{\Gamma(|n|+1-p)}|n\rangle_{p\,p}\langle n|\,. \tag{2.18}\]

### Embedding space formalism

In this paper, we study symmetric traceless tensor fields in dS. The embedding space formalism turns out to be very useful in the derivation of the Kallen-Lehmann representation in section 3 and the inversion formula for the spectral densities in sections 4 and 5. In this section, we briefly describe the embedding space formalism for tensor fields in dS\({}_{d+1}\)[7; 23] following a similar construction in EAdS\({}_{d+1}\)[34], and for the principal series representations of \(SO(d+1,1)\) adapting a similar construction in CFT\({}_{d}\)[35]. We also notice that the construction in [34] is degenerate when \(d=1\). So we will give a separate and self-contained discussion about the embedding space formalism in this case.

#### 2.2.1 Coordinate systems

As mentioned in eq. (1), de Sitter spacetime can be seen as a hypersurface in embedding space \(\mathbb{R}^{d+1,1}\). Among the different slicings and coordinate systems, we will use (conformal) global coordinates and planar coordinates throughout this paper. Global coordinates are defined as
\[Y^{0}=R\,\sinh t\,\qquad Y^{a}=R\,\Omega^{a}\cosh t \tag{2.19}\]
in which \(t\in\mathbb{R}\), \(a=1,\ldots,d+1\) and \(\Omega^{a}\in S^{d}\subset\mathbb{R}^{d+1}\) is a unit vector (\(\Omega^{a}\Omega_{a}=1\)). The induced metric in global coordinates is given by
\[ds^{2}=R^{2}\left(-dt^{2}+\cosh^{2}t\,d\Omega_{d}^{2}\right) \tag{2.20}\]
where \(d\Omega_{d}^{2}\) denotes the standard metric of the unit \(S^{d}\). With a change of coordinate \(\sinh t=\tan\tau\), the metric eq. (2.20) becomes
\[ds^{2}=R^{2}\frac{-d\tau^{2}+d\Omega_{d}^{2}}{\cos^{2}\tau} \tag{2.21}\]
which is conformally equivalent to a finite cylinder \((-\frac{\pi}{2},\frac{\pi}{2})\times S^{d}\). The coordinates \((\tau,\Omega^{a})\) are called conformal global coordinates. The planar coordinates \(y^{\mu}=(\eta,{\bf y})\in\mathbb{R}_{-}\times\mathbb{R}^{d}\) cover the causal future of an observer at the south pole of the global \(S^{d}\) (i.e. \(Y^{i}=0\) for \(i=1,2,\cdots,d\) with \(Y^{d+1}<0\)). They are given by
\[Y^{0}=R\,\frac{\eta^{2}-{\bf y}^{2}-1}{2\eta}\,\qquad Y^{i}=-R\,\frac{y^{i}}{\eta}\,\qquad Y^{d+1}=R\,\frac{\eta^{2}-{\bf y}^{2}+1}{2\eta} \tag{2.22}\]
for which the induced metric is
\[ds^{2}=R^{2}\frac{-d\eta^{2}+d{\bf y}^{2}}{\eta^{2}}. \tag{2.23}\]
The surface \(\eta\to 0^{-}\), where the metric blows up, is the future boundary of dS\({}_{d+1}\). The region covered by \(y^{\mu}\) corresponds to \(Y^{-}\equiv Y^{0}-Y^{d+1}>0\), and hence it covers half of de Sitter spacetime. In figure 2.1, we draw the Penrose diagram of de Sitter space with Cauchy slices of constant \(\eta\). From now on, we set the de Sitter radius to \(R=1\), and will restore it when discussing the flat space limit in section 3.3.

#### 2.2.2 Fields in embedding space

Consider a spin-\(J\) symmetric traceless tensor5\(T_{A_{1}\cdots A_{J}}(Y)\) in the embedding space \(\mathbb{R}^{d+1,1}\). Imposing \(Y^{2}=1\) and the tangential condition

Footnote 5: Let us emphasize that we use \(\ell\) for the \(SO(d)\) spin and \(J\) for the \(SO(d+1,1)\) spin. States carry \(SO(d)\) spin while operators have \(SO(d+1,1)\) indices.
\[Y^{A_{1}}T_{A_{1}\cdots A_{J}}(Y)=0 \tag{2.24}\]
defines a traceless symmetric tensor in de Sitter. The projection
\[T_{\mu_{1}\cdots\mu_{J}}(y)=\frac{\partial Y^{A_{1}}}{\partial y^{\mu_{1}}}\cdots\frac{\partial Y^{A_{J}}}{\partial y^{\mu_{J}}}T_{A_{1}\cdots A_{J}}(Y) \tag{2.25}\]
pulls back this tensor to the desired local coordinates \(y^{\mu}\). Moreover, we can represent a symmetric and traceless tensor in the index-free formalism as a polynomial by contracting its indices with a null vector \(W^{A}\)
\[T(Y,W)=W^{A_{1}}\ldots W^{A_{J}}T_{A_{1}\cdots A_{J}}(Y). \tag{2.26}\]
Due to the tangential condition (2.24), we can restrict to \(W^{A}\) such that \(Y\cdot W=0\). Altogether, a spin \(J\) tensor \(T_{\mu_{1}\cdots\mu_{J}}(y)\) is uniquely encoded in a degree \(J\) homogeneous polynomial \(T(Y,W)\), with \(W^{A}\) satisfying \(W^{2}=Y\cdot W=0\). The above discussion extends to differential operators. For example, the embedding space realization of the Levi-Civita connection \(\nabla_{\mu}\) is given by
\[\nabla_{A}=\partial_{Y^{A}}-Y_{A}\,Y\cdot\partial_{Y}-W_{A}\,Y\cdot\partial_{W}. \tag{2.27}\]
To recover the tensor \(T_{A_{1}\cdots A_{J}}\) with indices in \(d\geq 2\), one needs to act with the differential operator
\[\begin{split} K_{A}&=\left(\frac{d-1}{2}\right)\left[\frac{\partial}{\partial W^{A}}-Y_{A}\left(Y\cdot\frac{\partial}{\partial W}\right)\right]+\left(W\cdot\frac{\partial}{\partial W}\right)\frac{\partial}{\partial W^{A}}\\ &-Y_{A}\left(Y\cdot\frac{\partial}{\partial W}\right)\left(W\cdot\frac{\partial}{\partial W}\right)-\frac{1}{2}\,W_{A}\left[\frac{\partial^{2}}{\partial W\cdot\partial W}-\left(Y\cdot\frac{\partial}{\partial W}\right)\left(Y\cdot\frac{\partial}{\partial W}\right)\right]\end{split} \tag{2.28}\]
on the polynomial \(T(Y,W)\), which is defined interior to the submanifold \(Y^{2}-1=W\cdot Y=W^{2}=0\). Given this definition, it is straightforward to check that \(K_{[A}K_{B]}=K\cdot K=Y\cdot K=0\), and hence its action induces a symmetric and traceless tensor on dS\({}_{d+1}\). More precisely, it acts on any monomial of \(W^{A}\) as
\[K_{A_{1}}\cdots K_{A_{J}}W^{B_{1}}\cdots W^{B_{J}}=J!\left(\frac{d-1}{2}\right)_{J}\left[\frac{1}{J!}\sum_{\pi}G_{A_{\pi_{1}}}^{\;B_{1}}\cdots G_{A_{\pi_{J}}}^{\;B_{J}}-\text{traces}\right] \tag{2.29}\]
where \(G_{AB}=\eta_{AB}-Y_{A}Y_{B}\), and the sum is over all permutations \(\pi\) of the indices \(A_{j}\). As a simple application of \(K_{A}\), the divergence of a tensor is implemented by \(\nabla\cdot K\).

Figure 2.1: Penrose diagram of de Sitter spacetime. We represent global conformal time \(\tau\) on the vertical axis and the azimuthal angle \(\theta\) on the horizontal axis. We indicate with S the south pole and N the north pole of the Cauchy slices of constant \(\tau\), which are spheres. We represent Cauchy slices of constant planar time \(\eta\in(-\infty,0)\) in dark gray. Planar coordinates only cover the top right half of global de Sitter.

Treating \(T(Y,W)\) as a quantum field, the action of \(\mathfrak{so}(d+1,1)\) is defined as
\[[L_{AB},T(Y,W)]=-\left(Y_{A}\partial_{Y_{B}}-Y_{B}\partial_{Y_{A}}+W_{A}\partial_{W_{B}}-W_{B}\partial_{W_{A}}\right)T(Y,W) \tag{2.30}\]
where the overall minus sign is introduced to ensure that it is a left action. When \(\mathbf{d=1}\), the differential operator \(K_{A}\) becomes purely second-order, and thus annihilates any vector field in \(\mathrm{dS}_{2}\).
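As a quick check of this last claim, act with \(K_{A}\) at \(d=1\) on a degree-one polynomial \(T(Y,W)=T_{B}(Y)W^{B}\). The first bracket of (2.28) drops out because \(\frac{d-1}{2}=0\), the terms quadratic in \(\partial_{W}\) annihilate anything linear in \(W\), and the only surviving piece is
\[K_{A}\,T(Y,W)=-Y_{A}\,(Y\cdot\partial_{W})(W\cdot\partial_{W})\,T_{B}W^{B}=-Y_{A}\,Y^{B}T_{B}\,\]
which indeed vanishes on tangential tensors by (2.24).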
The failure of \(K_{A}\) to recover tensor indices in this case is related to some subtleties of \(SO(2)\) representations in contrast to higher dimensional \(SO(d)\). We will discuss such subtleties and show how to modify the embedding space formalism of \(\mathrm{dS}_{2}\) accordingly. First, it is well-known that the spin \(s\) representation of \(SO(n)\) with \(n\geq 3\), carried by a symmetric and traceless tensor \(F_{i_{1}\cdots i_{s}}\), can be encoded in a degree \(s\) polynomial \(F(z)\equiv F_{i_{1}\cdots i_{s}}z^{i_{1}}\cdots z^{i_{s}}\), where \(z^{i}\) is a null vector in \(\mathbb{C}^{n}\). When \(n=2\), proceeding as in higher dimensions, the nullness condition yields \((z^{1}+iz^{2})(z^{1}-iz^{2})=0\). So there are two different types of \(z\), namely \(z_{\pm}=(1,\pm i)\), which are related by \(O(2)\) but not \(SO(2)\). \(F(z_{\pm})\) capture the only independent components of \(F_{i_{1}\cdots i_{s}}\). Altogether, a symmetric and traceless tensor \(F\) in 2D carries a two-dimensional representation of \(O(2)\), corresponding to its two chiral components. Each chirality furnishes an irreducible representation of \(SO(2)\). In the index-free formalism, the two chiralities are encoded in \(F(z_{\pm})\), where \(z_{\pm}\) are two \(SO(2)\)-inequivalent null vectors. Similarly in \(\mathrm{dS}_{2}\), a spin \(J\) tensor \(T_{\mu_{1}\cdots\mu_{J}}(y)\) also has two independent components, which should correspond to two \(SO(2,1)\)-inequivalent \(W^{A}\) in embedding space. Indeed, we find that when \(d=1\), the conditions \(Y\cdot W=W^{2}=0\) are equivalent to \(\epsilon_{ABC}Y^{B}W^{C}\pm W_{A}=0\), where \(\epsilon_{ABC}\) is the totally antisymmetric tensor in \(\mathbb{R}^{2,1}\) normalized as \(\epsilon_{012}=1\). Define \(W^{A}_{\pm}\) such that
\[\epsilon^{A}_{\ BC}Y^{B}W^{C}_{\pm}\pm W^{A}_{\pm}=0. \tag{2.31}\]
They are the analogue of \(z_{\pm}\) defined above, in the sense that the two chiral components of \(T_{\mu_{1}\cdots\mu_{J}}(y)\) are encoded in \(T(Y,W_{\pm})\) respectively. To prove this statement more precisely, let's consider the tensor \(T_{\mu_{1}\cdots\mu_{J}}(y)\) in conformal global coordinates \(Y^{A}=(\tan\tau,\frac{\cos\varphi}{\cos\tau},\frac{\sin\varphi}{\cos\tau})\). Define lightcone coordinates \(y^{\pm}=\tau\pm\varphi\), and then the two linearly independent components of \(T_{\mu_{1}\cdots\mu_{J}}\) are
\[T_{\pm\cdots\pm}=\partial_{\pm}Y^{A_{1}}\cdots\partial_{\pm}Y^{A_{J}}T_{A_{1}\cdots A_{J}}(Y)=T(Y,\partial_{\pm}Y). \tag{2.32}\]
It can be checked by direct computation that \(W^{A}_{\pm}=\partial_{\pm}Y^{A}\) solves eq. (2.31), which verifies our statement. Altogether, the tensor \(T_{\mu_{1}\cdots\mu_{J}}\) in \(\mathrm{dS}_{2}\) is encoded in \(T(Y,W_{\pm})\), with \(W^{A}_{\pm}\) satisfying (2.31). The pull-back to the conformal global coordinates is easily implemented by the substitution \(W^{A}_{\pm}\rightarrow\partial_{\pm}Y^{A}\). Eq. (2.31) can lead to some useful identities. For example, given two distinct points \(Y_{1}\) and \(Y_{2}\) in \(\mathbb{R}^{2,1}\), it implies
\[\left(Y_{1}\cdot W^{\pm}_{2}\right)\left(Y_{2}\cdot W^{\pm}_{1}\right) =\left(Y_{1}\cdot Y_{2}+1\right)\left(W^{\pm}_{1}\cdot W^{\pm}_{2}\right)\,\]
\[\left(Y_{1}\cdot W^{\mp}_{2}\right)\left(Y_{2}\cdot W^{\pm}_{1}\right) =\left(Y_{1}\cdot Y_{2}-1\right)\left(W^{\pm}_{1}\cdot W^{\mp}_{2}\right).
\tag{2.33}\]

#### 2.2.3 States in embedding space

Now let us proceed to describe the embedding space definition of the states \(\left|\Delta,\mathbf{y}\,\right\rangle_{i_{1}\cdots i_{s}}\), which are defined in section 2.1.2 as a basis of the principal series representation \(\mathcal{P}_{\Delta,s}\). For this purpose, we'd like to make a detour and review the physical realization of these abstractly defined states, focusing on the \(s=0\) and \(s=1\) cases [26]. First consider a free scalar \(\phi\) of mass \(m^{2}=\Delta(d-\Delta)\) in dS\({}_{d+1}\) in planar coordinates. Its leading late time behavior is given by
\[\phi(\eta,\mathbf{y}\,)\stackrel{{\eta\to 0^{-}}}{{\sim}}(-\eta)^{\Delta}\mathcal{O}(\mathbf{y}\,)+(-\eta)^{\bar{\Delta}}\widetilde{\mathcal{O}}(\mathbf{y}\,)\, \tag{2.34}\]
where \(\mathcal{O}(\mathbf{y}\,)\) and \(\widetilde{\mathcal{O}}(\mathbf{y}\,)\) are different linear combinations of the bulk creation and annihilation operators. More importantly, they are primary operators in the sense that
\[[K_{i},\mathcal{O}(0)]=[K_{i},\widetilde{\mathcal{O}}(0)]=0,\;\;\;[D,\mathcal{O}(0)]=\Delta\mathcal{O}(0),\;\;\;[D,\widetilde{\mathcal{O}}(0)]=\bar{\Delta}\widetilde{\mathcal{O}}(0). \tag{2.35}\]
These facts allow us to identify \(\left|\Delta,\mathbf{y}\right\rangle\) as the (single-particle) state created by \(\mathcal{O}(\mathbf{y}\,)\) on the free Bunch-Davies vacuum \(\left|0\right\rangle\), i.e. \(\left|\Delta,\mathbf{y}\,\right\rangle=\mathcal{O}(\mathbf{y}\,)|0\rangle\). Indeed, the action (2.7) is consistent with this identification. Similarly, for a free spin 1 field \(V_{\mu}\) of mass \(m^{2}=(\Delta-1)(\bar{\Delta}-1)\), the late time behavior of its spatial components \(V_{i}\) is
\[V_{i}(\eta,\mathbf{y}\,)\stackrel{{\eta\to 0^{-}}}{{\sim}}(-\eta)^{\Delta-1}\mathcal{A}_{i}(\mathbf{y}\,)+(-\eta)^{\bar{\Delta}-1}\widetilde{\mathcal{A}}_{i}(\mathbf{y}\,). \tag{2.36}\]
It can be checked that \(\mathcal{A}_{i}(\mathbf{y}\,)\) and \(\widetilde{\mathcal{A}}_{i}(\mathbf{y}\,)\) are spin 1 primary operators of scaling dimension \(\Delta\) and \(\bar{\Delta}\) respectively. In addition, \(\mathcal{A}_{i}(\mathbf{y}\,)|0\rangle\) can be identified as \(\left|\Delta,\mathbf{y}\,\right\rangle_{i}\), whose transformation under the dS isometry group is given by (2.7). Altogether, the physical picture discussed here implies that the embedding space formalism for \(\left|\Delta,\mathbf{y}\,\right\rangle_{i_{1}\cdots i_{s}}\) is essentially the same as that of primary operators in conformal field theory [35]. Based on this observation, we define \(\left|\Delta,P\right\rangle_{A_{1}\cdots A_{s}}\), the embedding space realization of \(\left|\Delta,\mathbf{y}\,\right\rangle_{i_{1}\cdots i_{s}}\) as follows:

* _Nullness:_\(P^{A}\in\mathbb{R}^{d+1,1}\) is a null vector, i.e. \(P^{A}P_{A}=0\). We will focus on the \(P^{0}>0\) part of the lightcone.
* _Spin \(s\) condition_: \(\left|\Delta,P\right\rangle_{A_{1}\cdots A_{s}}\) is a symmetric and traceless tensor of \(SO(d+1,1)\), on which \(SO(d+1,1)\) acts as Lie derivatives 6.

Footnote 6: More explicitly, \(L_{AB}|\Delta,P\rangle_{A_{1}\cdots A_{s}}\equiv-\mathcal{L}_{AB}|\Delta,P\rangle_{A_{1}\cdots A_{s}}\), where \(\mathcal{L}_{AB}\) denotes the derivative along the vector \(P_{A}\partial_{P^{B}}-P_{B}\partial_{P^{A}}\).

* _Homogeneity:_\(\left|\Delta,\lambda P\right\rangle_{A_{1}\cdots A_{s}}=\lambda^{-\Delta}|\Delta,P\rangle_{A_{1}\cdots A_{s}}\) with \(\lambda>0\).
* _Tangential condition:_\(P^{A_{1}}|\Delta,P\rangle_{A_{1}\cdots A_{s}}=0\).

Due to the homogeneity condition, \(\left|\Delta,P\right\rangle_{A_{1}\cdots A_{s}}\) is completely fixed by its value on a section of the lightcone. We choose this section to be the future boundary of de Sitter (in planar coordinates)
\[P^{0}_{\mathbf{y}}=\frac{1}{2}\left(1+\mathbf{y}^{2}\right),\;\;\;P^{i}_{\mathbf{y}}=y^{i},\;\;\;P^{d+1}_{\mathbf{y}}=\frac{1}{2}\left(\mathbf{y}^{2}-1\right)\, \tag{2.37}\]
since it realizes the state \(\left|\Delta,{\bf y}\right\rangle_{i_{1}\cdots i_{s}}\) as the pull-back of \(\left|\Delta,P\right\rangle_{A_{1}\cdots A_{s}}\)
\[\left|\Delta,{\bf y}\right\rangle_{i_{1}\cdots i_{s}}=\frac{\partial P_{\bf y}^{A_{1}}}{\partial y^{i_{1}}}\cdots\frac{\partial P_{\bf y}^{A_{s}}}{\partial y^{i_{s}}}|\Delta,P_{\bf y}\,\rangle_{A_{1}\cdots A_{s}}. \tag{2.38}\]
In particular, the \(SO(d+1,1)\) action on \(\left|\Delta,P\right\rangle_{A_{1}\cdots A_{s}}\) induces eq. (2.7) via this pull-back. Because of the nullness of \(P^{A}\), there is a class of states that satisfy all the requirements of \(\left|\Delta,P\right\rangle_{A_{1}\cdots A_{s}}\) but vanish when pulled back to the local coordinates \({\bf y}\). They are of the form \(P_{(A_{1}}|\Delta+1,P\rangle_{A_{2}\cdots A_{s})}\). We can kill these states, and implement the spin \(s\) condition at the same time, by introducing an auxiliary vector \(Z^{A}\in\mathbb{R}^{d+1,1}\) satisfying \(Z^{2}=Z\cdot P=0\). Given such a vector \(Z^{A}\), the state \(\left|\Delta,P\right\rangle_{A_{1}\cdots A_{s}}\) is encoded in a degree \(s\) polynomial in \(Z\)
\[\left|P,Z\right\rangle_{\Delta,s}\equiv Z^{A_{1}}\cdots Z^{A_{s}}\left|\Delta,P\right\rangle_{A_{1}\cdots A_{s}}. \tag{2.39}\]
In this index-free formalism, the tangential condition takes the form
\[P\cdot\partial_{Z}\left|P,Z\right\rangle_{\Delta,s}=0\,. \tag{2.40}\]
The resolution of the identity of \({\cal P}_{\Delta,s}\), c.f. (2.10), can be rewritten as a conformal integral defined in [36]
\[\mathbb{1}_{{\cal P}_{\Delta,s}}=\frac{2}{s!\left(\frac{d}{2}-1\right)_{s}\text{Vol GL}(1,\mathbb{R})^{+}}\int_{P^{0}>0}d^{d+2}P\,\delta(P^{2})\,|P,D_{Z}\rangle_{\Delta,s}\,{}_{\Delta,s}\langle P,Z|\, \tag{2.41}\]
where
\[D_{Z}^{A}=\left(\frac{d}{2}-1+Z\cdot\frac{\partial}{\partial Z}\right)\frac{\partial}{\partial Z_{A}}-\frac{1}{2}Z^{A}\frac{\partial^{2}}{\partial Z\cdot\partial Z} \tag{2.42}\]
is the interior derivative used to strip off \(Z^{A}\). We will use the shorthand notation "\(\int_{P}\)" to denote the integral measure
\[\int_{P}(\dots)\equiv\frac{2}{\text{Vol GL}(1,\mathbb{R})^{+}}\int_{P^{0}>0}d^{d+2}P\,\delta(P^{2})(\dots) \tag{2.43}\]
in the remainder of this paper. We also want to mention that although the states \(\left|\Delta,{\bf y}\right\rangle\) or equivalently \(\left|\Delta,P\right\rangle\) are introduced as boundary excitations in this section, they do not have to "live" on the future boundary. Given a generic interacting theory, it is sometimes more convenient to think of them as special states in the Hilbert space which transform in a particular way under the isometry group, with \({\bf y}\) being an abstract label of the states that does not necessarily have the meaning of boundary coordinates.

## 3 The Kallen-Lehmann decomposition in de Sitter

In this section, we give a derivation of the Kallen-Lehmann representation in de Sitter spacetime.
Our starting assumptions are the following

* **Completeness of the Hilbert space**: we assume that the full Hilbert space \(\mathcal{H}\) of a unitary quantum field theory in a fixed dS\({}_{d+1}\) background can be decomposed into a direct sum/integral of \(SO(d+1,1)\) UIRs. In other words, we assume that there is a resolution of the identity in \(\mathcal{H}\), which takes the following form schematically 7

Footnote 7: In principle, the direct integral over \(\Delta\) should only be defined on the fundamental domain \(\frac{d}{2}+i\mathbb{R}_{\geq 0}\), since there is an isomorphism between \(\mathcal{P}_{\Delta,\ell}\) and \(\mathcal{P}_{\bar{\Delta},\ell}\). Here, the equation (3.1) can be understood as a doubling trick. With that being said, \(\mathbb{1}_{\mathcal{P}_{\Delta,\ell}}\) and \(\mathbb{1}_{\mathcal{P}_{\bar{\Delta},\ell}}\) are identified, and the overcounting is absorbed into the measure \([d\Delta]_{\ell}\), which is also invariant under the shadow symmetry by construction.

\[\mathbb{1}_{\mathcal{H}}=|\Omega\rangle\langle\Omega|+\sum_{\ell\geq 0}\int_{\frac{d}{2}+i\mathbb{R}}\left[d\Delta\right]_{\ell}\int_{P}\,|\Delta,P\rangle_{A_{1}\cdots A_{\ell}}\,^{A_{1}\cdots A_{\ell}}\langle\Delta,P|+\text{other UIRs}\, \tag{3.1}\]
where \(|\Omega\rangle\) is the interacting BD vacuum and \(\int_{P}\,|\Delta,P\rangle_{A_{1}\cdots A_{\ell}}\,^{A_{1}\cdots A_{\ell}}\langle\Delta,P|\) gives the identity operator \(\mathbb{1}_{\mathcal{P}_{\Delta,\ell}}\) in \(\mathcal{P}_{\Delta,\ell}\) as discussed in section 2.2.3. The symbol \([d\Delta]_{\ell}\) denotes some unknown measure over the principal series, which roughly speaking, counts the "multiplicity" of \(\mathcal{P}_{\Delta,\ell}\) in \(\mathcal{H}\). Of course, this is still an oversimplification because there can be multiple copies of \(\mathcal{P}_{\Delta,\ell}\) that are distinguished by other quantum numbers. Then, in principle, we should integrate or sum over these quantum numbers. To avoid clutter, we suppress labels of such quantum numbers, since it is easy to adapt our derivation to include them and the final expression of the Kallen-Lehmann decomposition will not be changed.

* **Analyticity of two-point functions**: we assume that any Wightman function
\[G_{\mathcal{O}^{(J)}}(Y_{1},Y_{2};W_{1},W_{2})=\langle\Omega|\mathcal{O}^{(J)}(Y_{1},W_{1})\mathcal{O}^{(J)}(Y_{2},W_{2})|\Omega\rangle \tag{3.2}\]
has a larger domain of analyticity in the complexified de Sitter spacetime dS\({}_{d+1}^{\mathbb{C}}\), which is defined as
\[{\rm dS}_{d+1}^{\mathbb{C}}=\left\{\mathbf{Y}^{A}\in\mathbb{C}^{d+2}:-(\mathbf{Y}^{0})^{2}+(\mathbf{Y}^{1})^{2}+\ldots+(\mathbf{Y}^{d+1})^{2}=1\right\}\,. \tag{3.3}\]
More precisely, we define the following two regions in dS\({}_{d+1}^{\mathbb{C}}\):
\[\mathcal{T}_{\pm}=\left\{Y^{A}+iX^{A}\in{\rm dS}_{d+1}^{\mathbb{C}}:X^{0}\gtrless\pm\sqrt{(X^{1})^{2}+\ldots+(X^{d+1})^{2}}\right\}\, \tag{3.4}\]
and postulate that, in analogy with flat space, the Wightman function \(G_{\mathcal{O}^{(J)}}\) is the boundary value of a complex function \(\mathbf{G}_{\mathcal{O}^{(J)}}(\mathbf{Y}_{1},\mathbf{Y}_{2};\mathbf{W}_{1},\mathbf{W}_{2})\) which is analytic for \(\mathbf{Y}_{1}\in\mathcal{T}_{-}\) and \(\mathbf{Y}_{2}\in\mathcal{T}_{+}\).
Here \(\mathbf{W}^{A}\) is the complexification of \(W^{A}\), satisfying \(\mathbf{W}^{2}=\mathbf{Y}\cdot\mathbf{W}=0\), and the analyticity in \(\mathbf{W}^{A}\) is trivial since \(G_{\mathcal{O}^{(J)}}\) only depends polynomially on the dot products \((W_{1}\cdot W_{2})\) and \((W_{1}\cdot Y_{2})(W_{2}\cdot Y_{1})\). From this postulate, together with locality and de Sitter covariance, it was shown in [37] that \(\mathbf{G}_{\mathcal{O}^{(0)}}(\mathbf{Y}_{1},\mathbf{Y}_{2})\) can be analytically continued in the whole _cut domain_ \(\Sigma\)
\[\Sigma=\left\{(\mathbf{Y}_{1},\mathbf{Y}_{2})\in{\rm dS}_{d+1}^{\mathbb{C}}\times{\rm dS}_{d+1}^{\mathbb{C}}\,:\,\mathbf{Y}_{1}\cdot\mathbf{Y}_{2}\notin[1,\infty)\right\}\,, \tag{3.5}\]
which is basically the product of two copies of dS\({}_{d+1}^{\mathbb{C}}\) with the cut at timelike separation removed. Notice that the cut domain includes the Euclidean sphere \(S^{d+1}\) and the Euclidean Anti de Sitter space EAdS\({}_{d+1}\). The range that \(\mathbf{Y}_{1}\cdot\mathbf{Y}_{2}\) takes in these Euclidean spaces is reported in Figure 3.1. Importantly, the Wick rotation which we will make use of in this paper, discussed in section 4.1, moves points in the complex \(\mathbf{Y}_{1}\cdot\mathbf{Y}_{2}\) plane from de Sitter to EAdS without crossing the cut. Moreover, this assumption will let us exclude the presence of exceptional or discrete series states in the Kallen-Lehmann decomposition of scalar two-point functions as well as chiral-antichiral components for spinning two-point functions in dS\({}_{2}\) (i.e. the second line in (3.49)).8 We have strong evidence for this analytic structure of two-point functions, and for the fact that it is implied by reflection positivity when Wick rotating the correlators from de Sitter to the sphere. We will elaborate on this in future work [38]. For in-depth discussions on these assumptions, we refer the reader to [3, 37, 39].

Footnote 8: We could also simply assume the absence of cuts at \(\sigma\in(-\infty,-1]\) to exclude the exceptional and discrete series, derive the Kallén-Lehmann decomposition and then finally argue for the analyticity of two-point functions in the whole cut domain. We decide to take the more conservative approach but we expect that this assumption may be relaxed.

With these assumptions in mind, we start in \(d\geq 2\), where we will mainly focus on the contribution of the principal series to the Kallen-Lehmann representation. The complementary series part can be derived similarly, as is discussed in Appendix B. In particular, for dS\({}_{3}\), principal and complementary series representations lead to a full Kallen-Lehmann decomposition since they are the only UIRs of \(SO(3,1)\). In higher dimensional de Sitter, there are many more UIRs, apart from the principal and complementary series, as reviewed in section 2.1. For two-point functions of scalar operators, we are able to show that such representations do not contribute. For two-point functions of spinning operators, we do not have a general formula to incorporate all these representations, but we do not see them contributing to any example of two-point functions considered in this work.

### Dimension \(d\geq 2\)

Consider the Wightman two-point function \(G_{\mathcal{O}^{(J)}}\) of a generic spin \(J\) operator \(\mathcal{O}^{(J)}\) in dS\({}_{d+1}\) (\(d\geq 2\)) in the embedding space formalism
\[G_{\mathcal{O}^{(J)}}(Y_{1},Y_{2};W_{1},W_{2})=\langle\Omega|\mathcal{O}^{(J)}(Y_{1},W_{1})\mathcal{O}^{(J)}(Y_{2},W_{2})|\Omega\rangle.
\tag{3.6}\]
Inserting the resolution of the identity (3.1) into (3.6) yields
\[G_{\mathcal{O}^{(J)}}(Y_{1},Y_{2};W_{1},W_{2}) =\langle\Omega|\mathcal{O}^{(J)}(Y_{1},W_{1})|\Omega\rangle\langle\Omega|\mathcal{O}^{(J)}(Y_{2},W_{2})|\Omega\rangle\]
\[+\sum_{\ell\geq 0}\int_{\frac{d}{2}+i\mathbb{R}}[d\Delta]_{\ell}\int_{P}\langle\Omega|\mathcal{O}^{(J)}(Y_{1},\!W_{1})|\Delta,P\rangle_{A_{1}\cdots A_{\ell}}{}^{A_{1}\cdots A_{\ell}}\langle\Delta,P|\mathcal{O}^{(J)}(Y_{2},\!W_{2})|\Omega\rangle\]
\[+\text{possible contributions from other UIRs}. \tag{3.7}\]
Since \(|\Omega\rangle\) is a dS invariant vacuum, the one-point function \(\langle\Omega|\mathcal{O}^{(J)}(Y,W)|\Omega\rangle\) has to be an \(SO(d+1,1)\) scalar. Using \(Y^{2}-1=W^{2}=Y\cdot W=0\), one can easily conclude that the one-point function vanishes when \(J\geq 1\), and has to be a constant when \(J=0\). In the latter case, we redefine the operator \(\mathcal{O}\) by a constant shift such that its vacuum expectation value vanishes. Altogether, we always consider the case \(\langle\Omega|\mathcal{O}^{(J)}(Y,W)|\Omega\rangle=0\). For the second line in eq. (3.7), the problem is reduced to computing the matrix elements \(\langle\Omega|\mathcal{O}^{(J)}(Y,W)|\Delta,P\rangle_{A_{1}\cdots A_{\ell}}\), or equivalently \(\langle\Omega|\mathcal{O}^{(J)}(Y,W)|P,Z\rangle_{\Delta,\ell}\) in the index-free formalism. We will show that such matrix elements are fixed by symmetry up to a normalization constant, and then use them to derive the Kallen-Lehmann decomposition. Let us start with the \(J=0\) case.

#### 3.1.1 Scalar operators

First we show that for a scalar operator \(\mathcal{O}(Y)\), the matrix element \(\langle\Omega|\mathcal{O}(Y)|P,Z\rangle_{\Delta,\ell}\) vanishes when \(\ell\geq 1\). Because of its \(SO(d+1,1)\) invariance, \(\langle\Omega|\mathcal{O}(Y)|P,Z\rangle_{\Delta,\ell}\) has to be a function of scalar bilinears of the three vectors \(Y^{A},P^{A}\) and \(Z^{A}\). On the other hand, since \(P\cdot Z=P^{2}=Z^{2}=0\) and \(Y^{2}=1\), it can only depend on \(Y\cdot P\) and \(Y\cdot Z\). The dependence is fixed by the homogeneity of \(|P,Z\rangle_{\Delta,\ell}\) up to a constant
\[\langle\Omega|\mathcal{O}(Y)|P,Z\rangle_{\Delta,\ell}\propto\frac{(Y\cdot Z)^{\ell}}{(-2\,Y\cdot P)^{\Delta}}. \tag{3.8}\]
We then impose the tangential condition (2.40) of the state \(|P,Z\rangle_{\Delta,\ell}\). Noticing that \(P\cdot\partial_{Z}(Y\cdot Z)^{\ell}\neq 0\) for any \(\ell\geq 1\), the proportionality constant has to be zero for the tangential condition to be satisfied. Therefore, \(\langle\Omega|\mathcal{O}(Y)|P,Z\rangle_{\Delta,\ell}\) vanishes identically when \(\ell\geq 1\). When \(\ell=0\), by a similar argument, we find that
\[\langle\Omega|\mathcal{O}(Y)|P\rangle=c_{\mathcal{O}}(\Delta)\mathcal{K}_{\Delta}(Y,P),\quad\mathcal{K}_{\Delta}(Y,P)=\frac{\Gamma(\Delta)}{2\pi^{\frac{d+1}{2}}}\frac{1}{(-2Y\cdot P)^{\Delta}}\, \tag{3.9}\]
where \(c_{\mathcal{O}}(\Delta)\) is a \(\Delta\)-dependent constant. Plugging (3.9) into (3.7) yields
\[G_{\mathcal{O}}(Y_{1},Y_{2})=\int_{\mathbb{R}}d\lambda\,\rho_{\mathcal{O}}^{\mathcal{P},0}(\lambda)\,\int_{P}\mathcal{K}_{\frac{d}{2}+i\lambda}(Y_{1},P)\mathcal{K}_{\frac{d}{2}-i\lambda}(Y_{2},P)+\cdots \tag{3.10}\]
where \(\rho_{\mathcal{O}}^{\mathcal{P},0}(\lambda)\) is a _nonnegative_ function defined by absorbing \(|c_{\mathcal{O}}(\Delta)|^{2}\) into the measure \([d\Delta]_{0}\), i.e.
\([d\Delta]_{0}|c_{\mathcal{O}}(\Delta)|^{2}\equiv d\lambda\,\rho_{\mathcal{O}}^{\mathcal{P},0}(\lambda)\), with \(\Delta=\frac{d}{2}+i\lambda\). It is also an even function of \(\lambda\) by construction, because of the reason mentioned in footnote 7. The function \(\mathcal{K}_{\Delta}(Y,P)\) is the analogue of the bulk-to-boundary propagator in EAdS, but with a singularity at \(Y\cdot P=0\). Therefore, we have to specify an \(i\epsilon\) prescription to make sense of the \(P\)-integral in (3.10). The \(i\epsilon\) prescription is chosen such that \(G_{\mathcal{O}}\) given by (3.10) reproduces the standard Wightman two-point function of a free field \(\phi\) when \(\mathcal{O}=\phi\), which is reviewed in appendix A:
\[G_{\lambda,0}\,(Y_{1},Y_{2})=(\eta_{1}\eta_{2})^{\frac{d}{2}}\int\frac{d^{d}{\bf k}}{(2\pi)^{d}}e^{-i{\bf k}\cdot({\bf y}_{1}-{\bf y}_{2})}\bar{h}_{i\lambda}(|{\bf k}|\eta_{1})h_{i\lambda}(|{\bf k}|\eta_{2})\,. \tag{3.11}\]
To match (3.11), we first write \(\mathcal{K}_{\Delta}\) in local coordinates
\[\mathcal{K}_{\Delta}(\eta_{1},{\bf y}_{1};{\bf y}\,)=\frac{\Gamma(\Delta)}{2\pi^{\frac{d+1}{2}}}\frac{(-\eta_{1})^{\Delta}}{(({\bf y}_{1}-{\bf y})^{2}-\eta_{1}^{2})^{\Delta}}. \tag{3.12}\]
Then add a small imaginary part to the planar patch time \(\eta_{1}\), i.e. \(\eta_{1}\to e^{\pm i\epsilon}\eta_{1}\), and perform a Fourier transformation for the boundary coordinates \({\bf y}\). This Fourier transformation for both \(\pm i\epsilon\) can be obtained by analytic continuation of the corresponding Wick rotated integral 9

Footnote 9: Here we assume \(z>0\). It can be thought of as the radial component of the Poincaré coordinates of EAdS.

\[\int\,d^{d}{\bf y}\frac{z^{\Delta}\,e^{-i{\bf k}\cdot{\bf y}}}{(z^{2}+{\bf y}^{2})^{\Delta}}=\frac{2(\pi z)^{\frac{d}{2}}}{\Gamma(\Delta)}\left(\frac{2}{k}\right)^{-i\lambda}K_{i\lambda}(kz)\, \tag{3.13}\]
where \(K\) is the Bessel K function. Put \(z=-e^{i\theta}\eta_{1}\), and analytically continue from \(\theta=0\) to \(\theta=\epsilon-\frac{\pi}{2}\), i.e. \(z=ie^{i\epsilon}\eta_{1}\). Using the relation \(K_{\nu}(-i\xi)=\frac{i^{\nu+1}\pi}{2}H_{\nu}^{(1)}(\xi)\) between the Bessel K functions and Hankel functions, this gives
\[\int d^{d}{\bf y}\,e^{-i{\bf k}\cdot{\bf y}}\,\mathcal{K}_{\Delta}(e^{i\epsilon}\eta_{1},{\bf y}_{1};{\bf y})=e^{-i\frac{\pi(d-2)}{4}}(-\eta_{1})^{\frac{d}{2}}\left(\frac{k}{2}\right)^{i\lambda}\,\bar{h}_{i\lambda}(|{\bf k}|\eta_{1})e^{-i{\bf k}\cdot{\bf y}_{1}}. \tag{3.14}\]
Similarly, by the Wick rotation \(z=-ie^{-i\epsilon}\eta_{1}\), we obtain
\[\int d^{d}{\bf y}\,e^{-i{\bf k}\cdot{\bf y}}\,\mathcal{K}_{\Delta}(e^{-i\epsilon}\eta_{1},{\bf y}_{1};{\bf y})=e^{i\frac{\pi(d-2)}{4}}(-\eta_{1})^{\frac{d}{2}}\left(\frac{k}{2}\right)^{i\lambda}\,h_{i\lambda}(|{\bf k}|\eta_{1})e^{-i{\bf k}\cdot{\bf y}_{1}}. \tag{3.15}\]
Thus, the only \(i\epsilon\) prescription consistent with (3.11) is \(\eta_{1}\to e^{i\epsilon}\eta_{1}\) and \(\eta_{2}\to e^{-i\epsilon}\eta_{2}\), which in embedding space is equivalent to \(Y_{1}\in\mathcal{T}_{-}\) and \(Y_{2}\in\mathcal{T}_{+}\) for \(\epsilon\in(0,\frac{\pi}{2}]\). This choice of \(i\epsilon\) prescription should be understood in any Wightman function in this paper, although we will suppress \(i\epsilon\) most of the time to avoid cluttering the notation. As a byproduct of the discussion of the \(i\epsilon\) prescription, eq. (3.14) and eq.
(3.15) also lead to the dS split representation [40; 41], namely
\[\int_{P}\mathcal{K}_{\frac{d}{2}+i\lambda}(Y_{1},P)\mathcal{K}_{\frac{d}{2}-i\lambda}(Y_{2},P)=G_{\lambda,0}(Y_{1},Y_{2})\, \tag{3.16}\]
where we have used the fact that \(\int_{P}\) reduces to the flat measure \(\int d^{d}\mathbf{y}\) on \(\mathbb{R}^{d}\). Altogether, plugging eq. (3.16) into eq. (3.10), we obtain the Kallen-Lehmann decomposition of the scalar operator \(\mathcal{O}(Y)\)
\[G_{\mathcal{O}}(Y_{1},Y_{2})=\int_{\mathbb{R}}d\lambda\,\rho_{\mathcal{O}}^{\mathcal{P},0}(\lambda)\,G_{\lambda,0}(Y_{1},Y_{2})+\cdots\, \tag{3.17}\]
where \(\rho_{\mathcal{O}}^{\mathcal{P},0}(\lambda)\) is a nonnegative (even) function, and "\(\cdots\)" denotes possible contributions from other UIRs. For example, the contribution of the complementary series is computed explicitly in appendix B. For the two exceptional series, we can argue that they do not contribute to scalar two-point functions. In the \(\mathcal{U}_{s,t}\) case, it suffices to use the fact that the \(SO(d+1)\) content of \(\mathcal{U}_{s,t}\) is [25; 26]
\[\mathcal{U}_{s,t}|_{SO(d+1)}=\bigoplus_{n\geq s}\bigoplus_{t+1\leq m\leq s}\mathbb{Y}_{n,m}\, \tag{3.18}\]
where \(\mathbb{Y}_{n,m}\) denotes the two-row Young diagram with \(n\) boxes in the first row and \(m\) boxes in the second row. For example, \(\mathbb{Y}_{2,1}\) is the hook diagram with two boxes in the first row and one box in the second row. On the other hand, it is clear that a scalar operator \(\mathcal{O}\) in dS\({}_{d+1}\) cannot generate any such two-row representation of \(SO(d+1)\) when acting on the vacuum. This means that the matrix element of \(\mathcal{O}(Y)\) between \(|\Omega\rangle\) and an arbitrary state in \(\mathcal{U}_{s,t}\) vanishes. This excludes all \(\mathcal{U}_{s,t}\). For \(\mathcal{V}_{p}\), let's consider \(\mathcal{G}_{p}(Y_{1},Y_{2})=\langle\Omega|\mathcal{O}(Y_{1})\,\mathbb{1}_{\mathcal{V}_{p}}\,\mathcal{O}(Y_{2})|\Omega\rangle\). Because \(\mathbb{1}_{\mathcal{V}_{p}}\) commutes with \(SO(d+1,1)\) actions, \(\mathcal{G}_{p}(Y_{1},Y_{2})\) is a function of \(\sigma\equiv Y_{1}\cdot Y_{2}\). For the same reason, the \(SO(d+1,1)\) Casimir operator, which is equal to \((1-p)(d+p-1)\) when acting on \(\mathcal{V}_{p}\), yields a second-order differential equation for \(\mathcal{G}_{p}(\sigma)\):
\[(1-\sigma^{2})\partial_{\sigma}^{2}\mathcal{G}_{p}(\sigma)-(d+1)\sigma\partial_{\sigma}\mathcal{G}_{p}(\sigma)=(1-p)(d+p-1)\mathcal{G}_{p}(\sigma)\,. \tag{3.19}\]
The two linearly independent solutions of this equation are
\[f_{p}(\sigma) =F\left(d+p-1,1-p,\frac{d+1}{2},\frac{1-\sigma}{2}\right)\]
\[g_{p}(\sigma) =\left(\frac{2}{1-\sigma}\right)^{d+p-1}F\left(d+p-1,p+\frac{d-1}{2},2p+d-1,\frac{2}{1-\sigma}\right) \tag{3.20}\]
Since \(p\in\mathbb{Z}_{>0}\), the first solution \(f_{p}\) is a polynomial of degree \(p-1\) in \(\sigma\), and hence it blows up as \(\sigma\to-\infty\) (or remains a constant when \(p=1\)). Notice that \(\sigma<1\) corresponds to spacelike separated points, and the limit \(\sigma\to-\infty\) means that the two points are very far separated. However, any physical two-point function should decay in this limit. The other solution \(g_{p}\) decays like \((-\sigma)^{-(d+p-1)}\) for large negative \(\sigma\). But it has a singularity at \(\sigma=-1\), i.e. when \(Y_{1}\) and \(Y_{2}\) are antipodal points. Such antipodal singularities violate our assumption of analyticity in \(\mathcal{T}_{-}\times\mathcal{T}_{+}\). Therefore, eq.
(3.19) does not have a nontrivial solution that is free of singularities at both \(\sigma=-\infty\) and \(\sigma=-1\). The only physically meaningful solution is \(\mathcal{G}_{p}\equiv 0\). So the full Kallen-Lehmann decomposition of the scalar operator \(\mathcal{O}\) in \(d\geq 2\) is
\[G_{\mathcal{O}}(Y_{1},Y_{2})=\int_{\mathbb{R}}d\lambda\,\rho_{\mathcal{O}}^{\mathcal{P},0}(\lambda)\,G_{\lambda,0}(Y_{1},Y_{2})+\int_{-\frac{d}{2}}^{\frac{d}{2}}d\lambda\,\rho_{\mathcal{O}}^{\mathcal{C},0}(\lambda)\,G_{i\lambda,0}(Y_{1},Y_{2})\,, \tag{3.21}\]
where \(\rho_{\mathcal{O}}^{\mathcal{P},0}(\lambda)\) and \(\rho_{\mathcal{O}}^{\mathcal{C},0}(\lambda)\) are the spectral densities corresponding to principal series and complementary series contributions respectively. They are nonnegative by construction. In total generality, we thus expect the appearance of a continuum of states in the principal series and in the complementary series in the Kallen-Lehmann decomposition of a scalar two-point function. What instead we observe in practice, in every example we have explored in section 5, is that the complementary series appears as a discrete sum of states corresponding to specific values of \(\lambda\). Group theory arguments point to the fact that this is the case in free theories and CFTs [1], but we do not have a proof to exclude a continuum of complementary series states in scalar two-point functions of generic interacting QFTs. As a final comment, let us mention that special constructions of the two-point function of a free massless scalar (\(p=1\) in (3.19)) with the zero mode removed are present in the literature (see for example [42; 43; 44; 45]), but these are not true gauge-invariant observables10. In other words, the operators constructed in these examples do not correspond to physical observables and thus we do not expect them to appear in the Kallen-Lehmann decomposition of a physical scalar operator. This is in analogy with the case of free massless scalars in 2D flat space. Just like in that scenario, the two-point function of the derivatives of a free massless scalar in dS is instead a good observable. In fact, we expect it to contribute to the spinning version of the Kallen-Lehmann decomposition in higher dimensions (3.31), and we explicitly see it contributing to the spinning Kallen-Lehmann decomposition in 2D (3.56).

Footnote 10: Here the gauge symmetry is the shift symmetry of the free massless scalar.

#### 3.1.2 Spinning operators

Given a spin \(J\) bulk operator \(\mathcal{O}^{(J)}(Y,W)\), the main step towards its Kallen-Lehmann representation is computing the matrix element \(\mathcal{F}_{J,\ell}(Y,P;W,Z)\equiv\langle\Omega|\mathcal{O}^{(J)}(Y,W)|P,Z\rangle_{\Delta,\ell}\), for any \(\ell\geq 0\). Due to the various constraints imposed on the four vectors \(\{Y^{A},W^{A},P^{A},Z^{A}\}\), the most general form of \(\mathcal{F}_{J,\ell}\) is
\[\mathcal{F}_{J,\ell}(Y,P;W,Z)=\sum_{b=0}^{\min(J,\ell)}f_{\mathcal{O}^{(J)}}^{b}(\Delta,\ell)\,\frac{(Y\cdot Z)^{\ell-b}(2P\cdot W)^{J-b}(W\cdot Z)^{b}}{(-2Y\cdot P)^{\Delta+J-b}}\, \tag{3.22}\]
To find the coefficients \(f^{b}_{\mathcal{O}^{(J)}}\), we use the tangential condition \(P\cdot\partial_{Z}\mathcal{F}_{J,\ell}=0\), which yields the following recurrence relation
\[(b+1)f^{b+1}_{\mathcal{O}^{(J)}}=(\ell-b)f^{b}_{\mathcal{O}^{(J)}},\quad(\ell-\min(J,\ell))f^{\min(J,\ell)}_{\mathcal{O}^{(J)}}=0.
\tag{3.23}\]
When \(\ell>J\), the initial condition \((\ell-J)f^{J}_{\mathcal{O}^{(J)}}=0\) gives \(f^{J}_{\mathcal{O}^{(J)}}=0\), which further implies that all the remaining \(f^{b}_{\mathcal{O}^{(J)}}\) vanish because \(\ell-b\geq\ell-J\) is always nonzero. So principal series representations of spin larger than \(J\) cannot contribute to the two-point function of \(\mathcal{O}^{(J)}\).11 When \(\ell\leq J\), eq. (3.23) has a nontrivial solution instead

Footnote 11: The same argument also works for complementary series.

\[f^{b}_{\mathcal{O}^{(J)}}(\Delta,\ell)=\frac{(\Delta+\ell-1)\Gamma(\Delta)(\Delta+\ell)_{J-\ell}\,c_{\mathcal{O}^{(J)}}(\Delta,\ell)}{2\pi^{\frac{d+1}{2}}(\Delta-1)}\binom{\ell}{b},\ \ b=0,1,\cdots,\ell. \tag{3.24}\]
where the complicated normalization factor is inserted for later convenience. Plugging this solution into eq. (3.22) gives
\[\mathcal{F}_{J,\ell}(Y,P;W,Z)=\frac{(\Delta+\ell-1)\Gamma(\Delta)(\Delta+\ell)_{J-\ell}\,c_{\mathcal{O}^{(J)}}(\Delta,\ell)}{2\pi^{\frac{d+1}{2}}(\Delta-1)}\frac{\Phi^{\ell}\,(2W\cdot P)^{J-\ell}}{(-2Y\cdot P)^{\Delta+J}}\, \tag{3.25}\]
where
\[\Phi(Y,P;W,Z)\equiv 2(Y\cdot Z)(W\cdot P)-2(Y\cdot P)(W\cdot Z). \tag{3.26}\]
For \(J=\ell\), \(\mathcal{F}_{J,J}\) reduces to \(c_{\mathcal{O}^{(J)}}(\Delta,J)\,\mathcal{K}_{\Delta,J}\), with \(\mathcal{K}_{\Delta,J}(Y,P;W,Z)\) being the bulk-to-boundary propagator of a spin \(J\) field, given by
\[\mathcal{K}_{\Delta,J}(Y,P;W,Z)=\frac{(\Delta+J-1)\Gamma(\Delta)}{2\pi^{\frac{d+1}{2}}(\Delta-1)}\frac{\Phi^{J}}{(-2Y\cdot P)^{\Delta+J}}. \tag{3.27}\]
For \(\ell<J\), noticing that \(\Phi\) is annihilated by \(W\cdot\nabla_{Y}=W\cdot\partial_{Y}\), we can realize \(\mathcal{F}_{J,\ell}\) as derivatives of \(\mathcal{K}_{\Delta,\ell}\):
\[\mathcal{F}_{J,\ell}=c_{\mathcal{O}^{(J)}}(\Delta,\ell)\,(W\cdot\nabla_{Y})^{J-\ell}\mathcal{K}_{\Delta,\ell}. \tag{3.28}\]
Finally, using the de Sitter split representation of a spin \(\ell\) Wightman function \(G_{\lambda,\ell}\)12[41]

Footnote 12: It is defined as the symmetric, traceless and transverse Green’s function, satisfying
\[\left(-\nabla^{2}+\frac{d^{2}}{4}+\lambda^{2}-\ell\right)G(Y_{1},Y_{2};W_{1},W_{2})=\delta(Y_{1},Y_{2})(W_{1}\cdot W_{2})^{\ell}. \tag{3.29}\]

\[G_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})=\frac{1}{\ell!\,(\tfrac{d}{2}-1)_{\ell}}\int_{P}\,\mathcal{K}_{\Delta,\ell}(Y_{1},P;W_{1},D_{Z})\mathcal{K}_{\bar{\Delta},\ell}(Y_{2},P;W_{2},Z)\, \tag{3.30}\]
we obtain the Kallen-Lehmann decomposition of \(\mathcal{O}^{(J)}\):
\[G_{\mathcal{O}^{(J)}}(Y_{1},Y_{2};W_{1},W_{2})= \sum_{\ell=0}^{J}\int_{\mathbb{R}}\,d\lambda\,\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\,[\left(W_{1}\cdot\nabla_{1}\right)(W_{2}\cdot\nabla_{2})]^{J-\ell}\,G_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})\]
\[+\cdots \tag{3.31}\]
where \(\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\) is a nonnegative function of \(\lambda\), \(d\lambda\,\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\) is the product of the measure \([d\Delta]_{\ell}\) and the factor \(|c_{\mathcal{O}^{(J)}}(\Delta,\ell)|^{2}\), and the dots stand for contributions coming from the exceptional and complementary series.

### dS\({}_{2}\)

We have derived the Kallen-Lehmann decomposition for generic operators in higher dimensional dS, focusing on the contribution of the principal series. The derivation is based on the resolution of the identity (3.1) in the full Hilbert space. In two-dimensional dS, we need to modify (3.1) in several ways.
First, because the principal series of \(\mathrm{SO}(2,1)\) has only one label, namely the scaling dimension \(\Delta\), the sum over \(\ell\) in eq. (3.1) cannot appear when \(d=1\). The second modification is closely related to the discussion regarding the embedding space formalism of dS\({}_{2}\) in section 2.2. A spin \(J\) tensor operator \(\mathcal{O}^{(J)}_{\mu_{1}\cdots\mu_{J}}\) in dS\({}_{2}\) has two independent components, i.e. chiralities. The two chiralities can be mapped to each other by parity, denoted by \(\Theta\), which belongs to \(O(2,1)\) instead of \(SO(2,1)\). More precisely, \(\Theta\) is defined to flip the sign of \(Y^{1}\) in embedding space, or \(\mathbf{y}\) in planar coordinates. We will focus on parity invariant QFTs. From the representation side, it means that we should decompose the full Hilbert space into UIRs of \(O(2,1)\). It is very easy to describe such UIRs. Given a fixed \(\Delta\), there are two principal series (or complementary series depending on the value of \(\Delta\)) representations \(\mathcal{P}^{\pm}_{\Delta}\), distinguished by the intrinsic parity under \(\Theta\), i.e.
\[\Theta|\Delta,y\rangle_{\pm}\equiv\pm|\Delta,-y\rangle_{\pm} \tag{3.32}\]
where \(|\Delta,y\rangle_{\pm}\) is a basis of \(\mathcal{P}^{\pm}_{\Delta}\). For the discrete series, \(\mathcal{D}^{-}_{p}\) is the image of \(\mathcal{D}^{+}_{p}\) under \(\Theta\), because \(\Theta\) flips the sign of \(L_{0}\). So the direct sum \(\mathcal{D}_{p}\equiv\mathcal{D}^{+}_{p}\oplus\mathcal{D}^{-}_{p}\) furnishes an \(O(2,1)\) representation, while each summand does not. Altogether, the resolution of the identity in dS\({}_{2}\) can be formulated as
\[\mathbb{1}_{\mathcal{H}}=|\Omega\rangle\langle\Omega|+\sum_{\pm}\int_{\frac{1}{2}+i\mathbb{R}}[d\Delta]_{\pm}\int_{P}\,|\Delta,P\rangle_{\pm\,\pm}\langle\Delta,P|+\sum_{p\geq 1}\mathbb{1}_{\mathcal{D}_{p}}+\cdots \tag{3.33}\]
Before using it to derive the Kallen-Lehmann decomposition in dS\({}_{2}\), let us make some remarks on this formula.

* \(\mathbb{1}_{\mathcal{D}_{p}}\) is the identity operator in the representation \(\mathcal{D}_{p}\):
\[\mathbb{1}_{\mathcal{D}_{p}}=\sum_{|n|\geq p}\,\frac{\Gamma(|n|+p)}{\Gamma(|n|+1-p)}\,|n\rangle_{p}\,{}_{p}\langle n|\,. \tag{3.34}\]
where the states \(|n\rangle_{p}\) are introduced in section 2.1.
* "\(\sum_{p\geq 1}\)" is a formal sum of discrete series. It is possible that there are several copies of each \(\mathcal{D}_{p}\) distinguished by other quantum numbers. Sums over such quantum numbers are also implicitly included in \(\sum_{p\geq 1}\).
* The dots correspond to the contribution from the complementary series. It can be derived using the same approach as the principal series, see Appendix B.
* We always shift the operator under consideration such that its vacuum one-point function vanishes. This means that the first term \(|\Omega\rangle\langle\Omega|\) of (3.33) does not contribute.

#### 3.2.1 Scalar operators

Let \(\mathcal{O}(Y)\) be a scalar operator in dS\({}_{2}\). The derivation of the principal and complementary series part of its Kallen-Lehmann decomposition is exactly the same as in higher dimensions, except for an extra sum over two chiralities. For discrete series states, we can show that they do not contribute and the argument is exactly the one we used for the exceptional series \(\mathcal{V}_{p}\) in higher dimensions.
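For instance, the same reasoning applies verbatim here: on \(\mathcal{D}_{p}\), the \(SO(2,1)\) Casimir equals \(p(1-p)\), so the would-be contribution \(\mathcal{G}_{p}(\sigma)=\langle\Omega|\mathcal{O}(Y_{1})\,\mathbb{1}_{\mathcal{D}_{p}}\,\mathcal{O}(Y_{2})|\Omega\rangle\) satisfies the \(d=1\) analogue of the Casimir equation (3.19),
\[(1-\sigma^{2})\partial_{\sigma}^{2}\mathcal{G}_{p}(\sigma)-2\sigma\,\partial_{\sigma}\mathcal{G}_{p}(\sigma)=p(1-p)\,\mathcal{G}_{p}(\sigma)\,\]
which is the Legendre equation of degree \(p-1\). One solution, \(P_{p-1}(\sigma)\), is a polynomial that fails to decay as \(\sigma\to-\infty\); the other, \(Q_{p-1}(\sigma)\), has a logarithmic singularity at the antipodal point \(\sigma=-1\). Neither behavior is physical, so \(\mathcal{G}_{p}\equiv 0\).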
So the full Kallen-Lehmann decomposition of the scalar operator \(\mathcal{O}\) in dS\({}_{2}\) is
\[G_{\mathcal{O}}(Y_{1},Y_{2})=\int_{\mathbb{R}}d\lambda\,\rho^{\mathcal{P}}_{\mathcal{O}}(\lambda)G_{\lambda,0}(Y_{1},Y_{2})+\int_{-\frac{1}{2}}^{\frac{1}{2}}d\lambda\,\rho^{\mathcal{C}}_{\mathcal{O}}(\lambda)G_{i\lambda,0}(Y_{1},Y_{2}) \tag{3.35}\]
The functions \(\rho^{\mathcal{P}}_{\mathcal{O}}(\lambda)\) and \(\rho^{\mathcal{C}}_{\mathcal{O}}(\lambda)\) are nonnegative by construction.

#### 3.2.2 Spinning operators

The distinction between \(|\Delta,P\rangle_{\pm}\) becomes crucial when the bulk operator carries a nonzero spin. For example, let us consider a vector operator \(V_{A}(Y)\). In higher dimensions, the matrix element \(\langle\Omega|V_{A}(Y)|\Delta,P\rangle\) is a linear combination of \(Y_{A}\) and \(P_{A}\), and the former is killed in the index-free formalism. When \(d=1\), there can be one more type of tensor structure in this matrix element, namely \(\epsilon_{ABC}Y^{B}P^{C}\), where \(\epsilon_{ABC}\) is the totally antisymmetric tensor in \(\mathbb{R}^{2,1}\), normalized as \(\epsilon_{012}=1\). It is a pseudo vector in contrast to \(Y_{A}\) and \(P_{A}\). So \(\epsilon_{ABC}Y^{B}P^{C}\) can only appear in \(\langle\Omega|V_{A}(Y)|\Delta,P\rangle_{-}\), while \(Y_{A}\) and \(P_{A}\) can only appear in \(\langle\Omega|V_{A}(Y)|\Delta,P\rangle_{+}\)13. Next, we will generalize this simple example to any spinning operators in dS\({}_{2}\).

Footnote 13: Here we have assumed \(V_{A}\) to be a vector instead of pseudo vector. In the latter case, \(\epsilon_{ABC}Y^{B}P^{C}\) is in \(\langle\Omega|V_{A}(Y)|\Delta,P\rangle_{+}\), while \(Y_{A}\) and \(P_{A}\) are in \(\langle\Omega|V_{A}(Y)|\Delta,P\rangle_{-}\). We will always consider tensors instead of pseudo tensors in the following discussion. It is easy to check that their Kallen-Lehmann representations take the same form.

#### Principal series part

Let \(\mathcal{O}^{(J)}\) be a spin \(J\) operator in dS\({}_{2}\). Deriving its Kallen-Lehmann decomposition amounts to computing the matrix elements \(\mathcal{F}_{J,\pm}(Y,P;W)\equiv\langle\Omega|\mathcal{O}^{(J)}(Y,W)|\Delta,P\rangle_{\pm}\). \(\mathcal{F}_{J,+}\) is a scalar and hence its \(W\) dependence can only be \((P\cdot W)^{J}\). In contrast, \(\mathcal{F}_{J,-}\) is a pseudo scalar, so its \(W\) dependence should be \((P\cdot W)^{J-1}\epsilon(W,Y,P)\), where \(\epsilon(W,Y,P)\equiv\epsilon_{ABC}W^{A}Y^{B}P^{C}\). Altogether, the most general form of \(\mathcal{F}_{J,\pm}\) is
\[\mathcal{F}_{J,+}(Y,P;W) =c^{+}_{{\cal O}^{(J)}}(\Delta)\,(W\cdot\nabla_{Y})^{J}\,{\cal K}_{\Delta}(Y,P)\]
\[\mathcal{F}_{J,-}(Y,P;W) =\frac{c^{-}_{{\cal O}^{(J)}}(\Delta)}{\Delta}\,(W\cdot\nabla_{Y})^{J-1}\,\epsilon(W,Y,\nabla_{Y}){\cal K}_{\Delta}(Y,P)\, \tag{3.36}\]
where we have replaced any \(P\) in the numerator by derivatives of \(Y\).
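As a consistency check of this ansatz, take \(J=1\): on the constraint surface, \(W\cdot\nabla_{Y}=W\cdot\partial_{Y}\), and with the \(d=1\) normalization \(\mathcal{K}_{\Delta}(Y,P)=\frac{\Gamma(\Delta)}{2\pi}(-2Y\cdot P)^{-\Delta}\) one finds
\[(W\cdot\nabla_{Y})\,\mathcal{K}_{\Delta}(Y,P)=\frac{\Gamma(\Delta+1)}{2\pi}\,\frac{2\,W\cdot P}{(-2Y\cdot P)^{\Delta+1}}\,\]
which is proportional to the unique parity-even structure \((P\cdot W)\) and carries the homogeneity degree \(-\Delta\) in \(P\) required by \(|\Delta,P\rangle_{+}\).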
Then, using the \(d=1\) version of the split representation (3.30), we obtain
\[\int_{P}\langle\Omega|{\cal O}^{(J)}(Y_{1},W_{1})|\Delta,P\rangle_{+\,+}\langle\Delta,P|{\cal O}^{(J)}(Y_{2},W_{2})|\Omega\rangle=|c^{+}_{{\cal O}^{(J)}}|^{2}(W_{1}\cdot\nabla_{1})^{J}(W_{2}\cdot\nabla_{2})^{J}G_{\lambda,0}(Y_{1},Y_{2})\, \tag{3.37}\]
and
\[\int_{P}\langle\Omega|{\cal O}^{(J)}(Y_{1},W_{1})|\Delta,P\rangle_{-\,-}\langle\Delta,P|{\cal O}^{(J)}(Y_{2},W_{2})|\Omega\rangle\]
\[=|c^{-}_{{\cal O}^{(J)}}|^{2}(W_{1}\!\cdot\!\nabla_{1})^{J-1}(W_{2}\!\cdot\!\nabla_{2})^{J-1}G_{\lambda,1}(Y_{1},Y_{2};W_{1},W_{2})\, \tag{3.38}\]
where \(G_{\lambda,1}\) is the free two-point function of a Proca field of mass \(m^{2}=\Delta\bar{\Delta}=\frac{1}{4}+\lambda^{2}\), and it is related to the scalar two-point function by a relation reviewed in appendix A. Altogether, the principal series part of the Kallen-Lehmann decomposition of \({\cal O}^{(J)}\) is
\[G_{{\cal O}^{(J)}}(Y_{1},Y_{2};W_{1},W_{2}) =\int_{\mathbb{R}}d\lambda\,\rho^{{\cal P},0}_{{\cal O}^{(J)}}(\lambda)(W_{1}\cdot\nabla_{1})^{J}(W_{2}\cdot\nabla_{2})^{J}G_{\lambda,0}(Y_{1},Y_{2})\]
\[+\int_{\mathbb{R}}d\lambda\,\rho^{{\cal P},1}_{{\cal O}^{(J)}}(\lambda)(W_{1}\cdot\nabla_{1})^{J-1}(W_{2}\cdot\nabla_{2})^{J-1}G_{\lambda,1}(Y_{1},Y_{2};W_{1},W_{2})+\cdots \tag{3.39}\]
In this equation, \(\rho^{{\cal P},0}_{{\cal O}^{(J)}}(\lambda)\) and \(\rho^{{\cal P},1}_{{\cal O}^{(J)}}(\lambda)\) are two nonnegative (even) functions of \(\lambda\), defined by
\[|c^{+}_{{\cal O}^{(J)}}(\Delta)|^{2}[d\Delta]_{+}=\rho^{{\cal P},0}_{{\cal O}^{(J)}}(\lambda)\,d\lambda,\ \ \ |c^{-}_{{\cal O}^{(J)}}(\Delta)|^{2}[d\Delta]_{-}=\rho^{{\cal P},1}_{{\cal O}^{(J)}}(\lambda)\,d\lambda. \tag{3.40}\]
The contribution of the complementary series takes the same form as eq. (3.39), except that the integral domain should be \(-\frac{1}{2}<i\lambda<\frac{1}{2}\).

#### Discrete series part

For scalar operators in dS\({}_{2}\), we have shown that the discrete series cannot appear in the Kallen-Lehmann decomposition. The argument is based on a second-order differential equation satisfied by \(\langle\Omega|{\cal O}(Y_{1})\,\mathbb{1}_{{\cal D}_{p}}\,{\cal O}(Y_{2})|\Omega\rangle\), induced by the \(SO(2,1)\) Casimir. Non-trivial solutions of such differential equations always have unphysical singularities, and hence \(\langle\Omega|{\cal O}(Y_{1})\,\mathbb{1}_{{\cal D}_{p}}\,{\cal O}(Y_{2})|\Omega\rangle\) has to vanish. In the spin \(J\) case, by leveraging this Casimir method, we are able to exclude all \({\cal D}_{p}\) with \(p>J\) in the Kallen-Lehmann decomposition of \({\cal O}^{(J)}\) in a similar way. We leave details of this argument to appendix E. For \(p\leq J\), the Casimir equations have physical solutions, so \({\cal D}_{p}\) does contribute to the two-point function of \({\cal O}^{(J)}\). However, to prove the positivity of this contribution requires some extra input, for example, reflection positivity after a Wick rotation to the sphere [5]. We will not give this type of argument. Instead, we adopt the same strategy as in the principal series case, i.e. using the resolution of the identity operator \(\mathbb{1}_{\mathcal{D}^{\pm}_{p}}=\sum_{\psi}|\psi\rangle\langle\psi|\) in \(\mathcal{D}^{\pm}_{p}\) and computing the matrix elements of \(\mathcal{O}^{(J)}\) between the BD vacuum and \(|\psi\rangle\).
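Schematically, the positivity is manifest in this approach because inserting \(\mathbb{1}_{\mathcal{D}^{\pm}_{p}}\) produces a sum of factorized terms with positive weights: smearing a hermitian \(\mathcal{O}^{(J)}\) with a test function \(f\),
\[\langle\Omega|\mathcal{O}^{(J)}(f)^{\dagger}\,\mathbb{1}_{\mathcal{D}^{\pm}_{p}}\,\mathcal{O}^{(J)}(f)|\Omega\rangle=\sum_{\pm n\geq p}\frac{\Gamma(|n|+p)}{\Gamma(|n|+1-p)}\,\left|{}_{p}\langle n|\mathcal{O}^{(J)}(f)|\Omega\rangle\right|^{2}\geq 0\,\]
since the weights appearing in (2.18) are positive for \(|n|\geq p\geq 1\).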
This method guarantees the positivity automatically, but meanwhile it also leads to certain technical difficulties compared to the principal series case because discrete series representations do not admit a (\(\delta\)-function) normalizable continuous basis such as \(|y\rangle\) or \(|P\rangle\). Instead, their resolution of the identity is formulated in terms of the discrete basis \(|n\rangle_{p}\), c.f. eq. (2.18). This basis diagonalizes \(L_{0}\), so unlike \(|P\rangle\), it is not \(SO(2,1)\) covariant. Due to the loss of manifest covariance, the embedding space formalism stops being an efficient computational tool, so it is much more difficult to calculate the matrix elements, e.g. \(\langle\Omega|\mathcal{O}^{(J)}(Y,W)|n\rangle_{p}\). With that being said, we choose to directly work in the conformal global coordinates \((\tau,\varphi)\) (2.21), since they admit \(L_{0}\sim\partial_{\varphi}\) as a Killing vector. As mentioned in section 2.2.2, we also introduce lightcone coordinates \(y^{\pm}=\tau\pm\varphi\). Then the two nonvanishing components of \(\mathcal{O}^{(J)}\) are \(\mathcal{O}^{(J)}_{++\cdots+}\) and \(\mathcal{O}^{(J)}_{---\cdots-}\). The matrix elements of interest are \(\mathcal{F}^{(n,\pm)}_{J,p}(y^{+},y^{-})\equiv\langle\Omega|\mathcal{O}^{(J)}_{\pm\cdots\pm}(y^{+},y^{-})|n\rangle_{p}\). Let's start with \(\mathcal{F}^{(p,\pm)}_{p,p}\) which corresponds to \(J=n=p\). It should satisfy two first-order differential equations induced by the conditions \(L_{0}|p\rangle_{p}=p|p\rangle_{p}\) and \(L_{-}|p\rangle_{p}=0\), since \(|p\rangle_{p}\) is the lowest-weight state in the representation \(\mathcal{D}^{+}_{p}\). To find such differential equations, we need to know how \(\mathfrak{so}(2,1)\) generators act on bulk operators. Recall that \(\{L_{0},L_{\pm}\}\) are defined by eq. (2.13) and their associated Killing vectors are computed in [26]
\[V_{0}=i\partial_{\varphi},\ \ V_{+}=-i\left(e^{-iy^{+}}\partial_{+}+e^{iy^{-}}\partial_{-}\right),\ \ V_{-}=-i\left(e^{iy^{+}}\partial_{+}+e^{-iy^{-}}\partial_{-}\right)\, \tag{3.41}\]
where \(\partial_{\pm}=\partial_{y^{\pm}}=\frac{1}{2}(\partial_{\tau}\pm\partial_{\varphi})\). Because of the convention (2.30), the action of \(-L_{\alpha}\) on \(\mathcal{O}^{(p)}\) is realized by the Lie derivative along \(V_{\alpha}\), i.e. \([L_{\alpha},\mathcal{O}^{(p)}_{\mu_{1}\cdots\mu_{p}}]=-\mathcal{L}_{V_{\alpha}}\mathcal{O}^{(p)}_{\mu_{1}\cdots\mu_{p}}\), where \(\alpha=0,\pm\). For example, for \(\alpha=0\), it implies
\[i\partial_{\varphi}\mathcal{F}^{(p,\pm)}_{p,p}=-\langle\Omega|[L_{0},\mathcal{O}^{(p)}_{\pm\cdots\pm}]|p\rangle_{p}=p\mathcal{F}^{(p,\pm)}_{p,p}. \tag{3.42}\]
So the \(\varphi\) dependence in \(\mathcal{F}^{(p,\pm)}_{p,p}\) is simply \(e^{-ip\varphi}\). Similarly, for \(\alpha=-\), we obtain \(\partial_{\mp}\mathcal{F}^{(p,\pm)}_{p,p}=0\). Therefore, \(\mathcal{F}^{(p,\pm)}_{p,p}\) are determined up to normalization constants \(\mathcal{F}^{(p,\pm)}_{p,p}(y^{\pm})=c^{\pm}_{p,p}e^{\mp ipy^{\pm}}\). With this lowest-weight mode known, any \(\mathcal{F}^{(n,\pm)}_{p,p}\) with \(n\geq p\) can be obtained by acting \(n-p\) times with \(\mathcal{L}_{V_{+}}\) since \(L^{n-p}_{+}|p\rangle_{p}=(2p)_{n-p}|n\rangle_{p}\) (c.f. eq.
(2.15)): \[\mathcal{F}^{(n,\pm)}_{p,p}(y^{\pm})=\frac{1}{(2p)_{n-p}}\mathcal{L}^{n-p}_{V_{ +}}\mathcal{F}^{(p,\pm)}_{p,p}(y^{\pm})=(\mp)^{n-p}\,c^{\pm}_{p,p}\,e^{\mp iny ^{\pm}}\, \tag{3.43}\] which allows us to compute the contribution of \(\mathcal{D}^{+}_{p}\) in the two-point function of \(\mathcal{O}^{(p)}\). For example, for the \((+,+)\) component, we have \[\langle\Omega|\mathcal{O}^{(p)}_{+\cdots+}(y_{1})|\mathbb{1}_{\mathcal{D}^{+}_ {p}}|\mathcal{O}^{(p)}_{+\cdots+}(y_{2})|\Omega\rangle=|c^{+}_{p,p}|^{2}\, \sum_{n\geq p}\frac{\Gamma(n+p)}{\Gamma(n-p+1)}e^{-iny^{+}_{12}}\, \tag{3.44}\] where \(y^{+}_{12}=y^{+}_{1}-y^{+}_{2}\). The infinite sum over \(n\) in (3.44) looks divergent because we have suppressed the explicit \(i\epsilon\) prescription. Using \(e^{iy^{+}_{j}}=\frac{i-(y_{j}+\eta_{j})}{i+(y_{j}+\eta_{j})}\), which relates the planar coordinates \((\eta,y)\) and conformal global coordinates \((\tau,\varphi)\) or equivalently \((y^{+},y^{-})\), and restoring the \(i\epsilon\) prescription (\(\eta_{1}\to e^{i\epsilon}\eta_{1},\eta_{2}\to e^{-i\epsilon}\eta_{2}\)), then \(e^{-iy^{+}_{12}}\) in eq. (3.44) should be replaced by \[r(\epsilon)\equiv\frac{i-(y_{2}+e^{-i\epsilon}\eta_{2})}{i+(y_{ 2}+e^{-i\epsilon}\eta_{2})}\frac{i+(y_{1}+e^{i\epsilon}\eta_{1})}{i-(y_{1}+e^{ i\epsilon}\eta_{1})}. \tag{3.45}\] It is straightforward to check that \(|r(\epsilon)|<1\) for small \(\epsilon\), and hence the sum in (3.44) is convergent given the \(i\epsilon\) prescription (\(\eta_{1}\to e^{i\epsilon}\eta_{1},\eta_{2}\to e^{-i\epsilon}\eta_{2}\)). Evaluating the sum yields \[\langle\Omega|\mathcal{O}^{(p)}_{+\dots+}(y_{1})|\mathbb{1}_{ \mathcal{D}^{+}_{p}}|\mathcal{O}^{(p)}_{+\dots+}(y_{2})|\Omega\rangle=\frac{ \Gamma(2p)|c^{+}_{p,p}|^{2}}{(-4)^{p}}\left(\sin\frac{y^{+}_{12}}{2}\right)^ {-2p}. \tag{3.46}\] Similarly, for other components, we have \[\langle\Omega|\mathcal{O}^{(p)}_{+\dots+}(y_{1})|\mathbb{1}_{ \mathcal{D}^{+}_{p}}|\mathcal{O}^{(p)}_{-\dots-}(y_{2})|\Omega\rangle =\frac{\Gamma(2p)c^{+}_{p,p}(c^{-}_{p,p})^{*}}{4^{p}}\left(\cos \frac{y^{+}_{1}+y^{-}_{2}}{2}\right)^{-2p}\] \[\langle\Omega|\mathcal{O}^{(p)}_{-\dots-}(y_{1})|\mathbb{1}_{ \mathcal{D}^{+}_{p}}|\mathcal{O}^{(p)}_{+\dots+}(y_{2})|\Omega\rangle =\frac{\Gamma(2p)c^{-}_{p,p}(c^{+}_{p,p})^{*}}{4^{p}}\left(\sin \frac{y^{-}_{1}+y^{+}_{2}}{2}\right)^{-2p}\] \[\langle\Omega|\mathcal{O}^{(p)}_{-\dots-}(y_{1})|\mathbb{1}_{ \mathcal{D}^{+}_{p}}|\mathcal{O}^{(p)}_{-\dots-}(y_{2})|\Omega\rangle =\frac{\Gamma(2p)|c^{-}_{p,p}|^{2}}{(-4)^{p}}\left(\sin\frac{y^{ -}_{12}}{2}\right)^{-2p}. \tag{3.47}\] The contribution of \(\mathcal{D}^{-}_{p}\) does not require any extra computation since it is the image of \(\mathcal{D}^{+}_{p}\) under the parity \(\Theta\). Noticing that \(\Theta\) also flips chiralities, it is easy to obtain relations like \[\langle\Omega|\mathcal{O}^{(p)}_{+\dots+}(y^{+}_{1},y^{-}_{1})| \mathbb{1}_{\mathcal{D}^{-}_{p}}|\mathcal{O}^{(p)}_{+\dots+}(y^{+}_{2},y^{-}_{ 2})|\Omega\rangle =\langle\Omega|\mathcal{O}^{(p)}_{-\dots-}(y^{-}_{1},y^{+}_{1})| \mathbb{1}_{\mathcal{D}^{+}_{p}}|\mathcal{O}^{(p)}_{-\dots-}(y^{-}_{2},y^{+}_ {2})|\Omega\rangle. 
\tag{3.48}\] Altogether, the contribution of \(\mathcal{D}_{p}=\mathcal{D}^{+}_{p}\oplus\mathcal{D}^{-}_{p}\) to the \(\mathcal{O}^{(p)}\) two-point function can be summarized as \[\langle\Omega|\mathcal{O}^{(p)}_{\pm\dots\pm}(y_{1})|\mathbb{1}_{ \mathcal{D}_{p}}|\mathcal{O}^{(p)}_{\pm\dots\pm}(y_{2})|\Omega\rangle =\frac{\Gamma(2p)\left(|c^{+}_{p,p}|^{2}+|c^{-}_{p,p}|^{2}\right) }{(-4)^{p}}\left(\sin\frac{y^{\pm}_{12}}{2}\right)^{-2p}\] \[\langle\Omega|\mathcal{O}^{(p)}_{\pm\dots\pm}(y_{1})|\mathbb{1}_{ \mathcal{D}_{p}}|\mathcal{O}^{(p)}_{\mp\dots\mp}(y_{2})|\Omega\rangle =\frac{\Gamma(2p)\left(c^{\pm}_{p,p}\left(c^{\mp}_{p,p}\right)^ {*}+\left(c^{\pm}_{p,p}\right)^{*}c^{\mp}_{p,p}\right)}{4^{p}}\left(\cos \frac{y^{\pm}_{1}+y^{\mp}_{2}}{2}\right)^{-2p}. \tag{3.49}\] The \((\pm,\mp)\) component blows up when \(\cos\frac{y^{\pm}_{1}+y^{\mp}_{2}}{2}\) vanishes. On the other hand, we have \(1+Y_{1}\cdot Y_{2}\propto\prod_{\pm}\cos\frac{y^{\pm}_{1}+y^{\mp}_{2}}{2}\), which implies that the (+, -) and (-, +) components in (3.49) have an antipodal singularity. So these components have to vanish, and this requirement imposes a nontrivial constraint on the coefficients \(c^{\pm}_{p,p}\), namely \(c^{+}_{p,p}(c^{-}_{p,p})^{*}+c^{-}_{p,p}(c^{+}_{p,p})^{*}=0\). Comparing eq. (3.49) with (A.29) and (A.30), we can make the following identification \[\langle\Omega|\mathcal{O}^{(p)}_{\alpha\dots\alpha}(y_{1})| \mathbb{1}_{\mathcal{D}_{p}}|\mathcal{O}^{(p)}_{\beta\dots\beta}(y_{2})| \Omega\rangle =4\pi\left(|c^{+}_{p,p}|^{2}+|c^{-}_{p,p}|^{2}\right)\left( \nabla^{(1)}_{\alpha}\right)^{p}\left(\nabla^{(2)}_{\beta}\right)^{p}G_{-i(p- \frac{1}{2})}(y_{1},y_{2}) \tag{3.50}\] where \(\alpha,\beta\in\{+,-\}\), and in embedding space it means \[\langle\Omega|\mathcal{O}^{(p)}(Y_{1},W_{1})|\mathbb{1}_{\mathcal{D}_ {p}}|\mathcal{O}^{(p)}(Y_{2},W_{2})|\Omega\rangle=4\pi\left(|c^{+}_{p,p}|^{2}+ |c^{-}_{p,p}|^{2}\right)\left(W_{1}\cdot\nabla_{1}\right)^{p}\left(W_{2}\cdot \nabla_{2}\right)^{p}G_{-i(p-\frac{1}{2})}(Y_{1},Y_{2}) \tag{3.51}\] The remaining task is to generalize the computation above to the case of \(J>p\). As before, we start with building the lowest-weight modes \(\mathcal{F}^{(p,\pm)}_{J,p}\), which is fixed up to normalization by the defining properties of \(|p\rangle_{p}\). With a short computation, we find \[\mathcal{F}^{(p,+)}_{J,p}=(2p)_{J-p}\,c^{+}_{J,p}\,\frac{e^{-iJt- ip\varphi}}{(2i\cos t)^{J-p}},\ \ \mathcal{F}^{(p,-)}_{J,p}=(2p)_{J-p}\,c^{-}_{J,p}\,\frac{e^{iJt- ip\varphi}}{(-2i\cos t)^{J-p}}\, \tag{3.52}\] where \(c^{\pm}_{J,p}\) are unknown normalization factors. Unlike in the \(J=p\) case, \(\mathcal{F}^{(p,\pm)}_{J,p}\) are not chiral or anti-chiral functions. This fact makes it hard to compute the repeated action of \(\mathcal{L}_{V_{+}}\) on these modes. How we deal with this technical difficulty is based on several important observations. First we notice that \(\mathcal{F}^{(p,\pm)}_{J,p}\) can be realized as covariant derivatives of \(\mathcal{F}^{(p,\pm)}_{p,p}\). \[\mathcal{F}^{(p,+)}_{J,p} =c^{+}_{J,p}\left(\partial_{+}-(J-1)\tan t\right)\cdots\left( \partial_{+}-p\tan t\right)e^{-ipy^{+}}=c^{+}_{J,p}\nabla^{J-p}_{+}e^{-ipy^{+}}\] \[\mathcal{F}^{(p,-)}_{J,p} =c^{-}_{J,p}\left(\partial_{-}-(J-1)\tan t\right)\cdots\left( \partial_{-}-p\tan t\right)e^{ipy^{-}}=c^{-}_{J,p}\nabla^{J-p}_{-}e^{ipy^{-}}\, \tag{3.53}\] where the Christoffel symbols \(\Gamma^{+}_{++}=\Gamma^{-}_{--}=\tan t\) have been used. 
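As an aside, the mode sum that produced eq. (3.46) can be verified numerically, with the \(i\epsilon\) prescription implemented as a small negative imaginary part of \(y^{+}_{12}\). The short Python/mpmath sketch below (the test values of \(p\) and \(y^{+}_{12}\) are ours, and the common \(|c^{+}_{p,p}|^{2}\) prefactor is dropped on both sides) compares the truncated sum in (3.44) against the closed form in (3.46):

```python
from mpmath import mp, mpc, gamma, sin, exp

mp.dps = 30
p = 2                          # sample lowest weight (our test value)
y12 = mpc('1.3', '-0.3')       # Im(y12) < 0 implements the i*eps prescription

# LHS: the sum in eq. (3.44), truncated once the tail is negligible
lhs = sum(gamma(n + p) / gamma(n - p + 1) * exp(-1j * n * y12)
          for n in range(p, 400))

# RHS: the closed form of eq. (3.46)
rhs = gamma(2 * p) / (-4)**p * sin(y12 / 2)**(-2 * p)

print(abs(lhs - rhs))          # ~1e-30: the sum indeed resums to (3.46)
```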
Here we want to emphasize that \(e^{-ipy^{+}}\) and \(e^{ipy^{-}}\) are not ordinary functions. They should be treated as the two lightcone components of a symmetric and traceless spin \(p\) tensor. The next observation is that \([\mathcal{L}_{V_{+}},\nabla_{\pm}]\xi_{\pm\cdots\pm}=0\) for any symmetric and traceless \(\xi\). This allows us to commute the Lie derivatives and covariant derivatives when computing \(\mathcal{F}^{(n,\pm)}_{J,p}\). In the end, the Lie derivatives effectively act on \(e^{\mp iy^{\pm}}\), and this action has already been figured out in the \(J=p\) case: \[\mathcal{F}^{(n,\pm)}_{J,p}=c^{\pm}_{J,p}\nabla^{J-p}_{\pm}(\mp)^{n-p}\,e^{\mp iny^{\pm}}. \tag{3.54}\] Compared to the \(J=p\) case, the only difference is the extra covariant derivative \(\nabla^{J-p}_{\pm}\). So the previous analysis can be applied here in exactly the same way, which yields \[\langle\Omega|\mathcal{O}^{(J)}(Y_{1},W_{1})|\mathbb{1}_{\mathcal{D}_{p}}|\mathcal{O}^{(J)}(Y_{2},W_{2})|\Omega\rangle\\ =4\pi\left(|c^{+}_{J,p}|^{2}+|c^{-}_{J,p}|^{2}\right)\left(W_{1}\cdot\nabla_{1}\right)^{J}\left(W_{2}\cdot\nabla_{2}\right)^{J}G_{-i(p-\frac{1}{2})}(Y_{1},Y_{2}). \tag{3.55}\] Altogether, combining (3.33), (3.39) and (3.55), we obtain the full Kallen-Lehmann decomposition of \(\mathcal{O}^{(J)}\) in dS\({}_{2}\): \[G_{\mathcal{O}^{(J)}}(Y_{1},Y_{2};W_{1},W_{2}) =\int_{\mathbb{R}}d\lambda\,\rho^{\mathcal{P},0}_{\mathcal{O}^{(J)}}(\lambda)(W_{1}\cdot\nabla_{1})^{J}(W_{2}\cdot\nabla_{2})^{J}G_{\lambda,0}(Y_{1},Y_{2})\] \[+\int_{\mathbb{R}}d\lambda\,\rho^{\mathcal{P},1}_{\mathcal{O}^{(J)}}(\lambda)(W_{1}\cdot\nabla_{1})^{J-1}(W_{2}\cdot\nabla_{2})^{J-1}G_{\lambda,1}(Y_{1},Y_{2};W_{1},W_{2})\] \[+\int_{-\frac{1}{2}}^{\frac{1}{2}}d\lambda\,\rho^{\mathcal{C},0}_{\mathcal{O}^{(J)}}(\lambda)(W_{1}\cdot\nabla_{1})^{J}(W_{2}\cdot\nabla_{2})^{J}G_{i\lambda,0}(Y_{1},Y_{2})\] \[+\int_{-\frac{1}{2}}^{\frac{1}{2}}d\lambda\,\rho^{\mathcal{C},1}_{\mathcal{O}^{(J)}}(\lambda)(W_{1}\cdot\nabla_{1})^{J-1}(W_{2}\cdot\nabla_{2})^{J-1}G_{i\lambda,1}(Y_{1},Y_{2};W_{1},W_{2})\] \[+\sum_{p=1}^{J}\rho^{\mathcal{D}_{p}}_{\mathcal{O}^{(J)}}\left(W_{1}\cdot\nabla_{1}\right)^{J}\left(W_{2}\cdot\nabla_{2}\right)^{J}G_{-i(p-\frac{1}{2})}(Y_{1},Y_{2})\, \tag{3.56}\] where the nonnegative function \(\rho^{\mathcal{D}_{p}}_{\mathcal{O}^{(J)}}\) is obtained by absorbing \(4\pi\left(|c^{+}_{J,p}|^{2}+|c^{-}_{J,p}|^{2}\right)\) into the formal sum over the discrete series in (3.33). It is worth mentioning that the term with \(p=J\) in the sum in the last line of (3.56) is actually proportional to the CFT two-point function of a spin \(p\) conserved current in a dS\({}_{2}\) background.

### Flat space limit

We have derived the Kallen-Lehmann decomposition for de Sitter spinning two-point functions in \(d\geq 2\) and \(d=1\). Now, let us consider how (3.31) and (3.56) reduce to the Kallen-Lehmann decomposition in Minkowski space when taking the radius of de Sitter to infinity. What we expect to happen is that, given that free scalar fields with \(\Delta\) in the principal series correspond to the range of masses \(m^{2}>\frac{d^{2}}{4R^{2}}\), in the flat space limit the principal series range will be extended to account for all massive representations. The complementary series, accounting for \(0<m^{2}<\frac{d^{2}}{4R^{2}}\), is reduced to only massless representations. The same is true for the discrete series, because keeping \(m^{2}R^{2}\) fixed to some discrete value in the flat space limit necessarily implies \(m\to 0\).
Apart from these distinctions between the various dS unitary irreps, taking the flat space limit of the Kallen-Lehmann decomposition in dS is analogous to how it is done in AdS [46]. In \(d+1\) dimensional Minkowski spacetime, the Kallen-Lehmann decomposition of Wightman two-point functions of traceless symmetric spin \(J\) operators organizes itself in blocks that are labeled by the eigenvalue of \(P^{\mu}P_{\mu}\equiv-m^{2}\) and the spin of the little group \(SO(d)\), denoted by \(\ell\) [47] \[\langle\Omega|\mathcal{O}^{(J)}(x_{1},w_{1})\mathcal{O}^{(J)}(x_{2},w_{2})|\Omega\rangle=\sum_{\ell=0}^{J}\int_{0}^{\infty}dm^{2}\ \rho^{\mathcal{M},\ell}_{\mathcal{O}^{(J)}}(m^{2})\Delta^{(J)}_{m^{2},\ell}(x_{1},x_{2};w_{1},w_{2}), \tag{3.57}\] where \(w_{i},x_{i}\in\mathbb{R}^{d,1}\), \(w_{i}\) are some null vectors to contract all indices, and \(\rho^{\mathcal{M},\ell}_{\mathcal{O}^{(J)}}(m^{2})\) are the positive flat space spectral densities. \(\Delta^{(J)}_{m^{2},\ell}(x_{1},x_{2};w_{1},w_{2})\) are the free Wightman propagators with Lorentz spin \(J\), little group spin \(\ell\) and mass squared \(m^{2}\) \[\Delta^{(J)}_{m^{2},\ell}(x_{1},x_{2};w_{1},w_{2})=(-)^{J-\ell}m^{2J}\int\frac{d^{d+1}p}{(2\pi)^{d+1}}e^{ip\cdot x}(2\pi)\theta(p^{0})\delta(p^{2}+m^{2})\Pi^{(J)}_{\ell}(p,w_{1},w_{2})\,, \tag{3.58}\] where \(\Pi^{(J)}_{\ell}(p,w_{1},w_{2})\) are the projectors on the little group irrep of spin \(\ell\). The prefactor \(m^{2J}\) is inserted following [47], such that \(\Delta^{(J)}_{m^{2},\ell}\) does not diverge in the massless limit when \(d\geq 2\). This way, massless representations are smoothly connected to massive ones, and they appear with spectral densities that are proportional to \(\delta(m^{2})\). In contrast to [47], we also include a spin-dependent sign \((-)^{J-\ell}\) in \(\Delta^{(J)}_{m^{2},\ell}\). This choice is consistent with the positivity of \(\rho^{\mathcal{M},\ell}_{\mathcal{O}^{(J)}}(m^{2})\). In the large \(R\) limit, the conformal dimension \(\Delta=\frac{d}{2}+i\lambda\) is connected to the mass \(m\) (which is kept fixed) as \(\lambda^{2}\approx m^{2}R^{2}\). In Appendix A.4 we argue that, when taking \(R\to\infty\) while keeping \(Y_{1}\cdot Y_{2}-R^{2}\) fixed, the free propagators become \[[(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})]^{J-\ell}G_{Rm,\ell}(Y_{1},Y_{2};W_{1},W_{2})\approx\,\beta_{J,\ell}\,m^{-2\ell}\Delta^{(J)}_{m^{2},\ell}(x_{1},x_{2};w_{1},w_{2})\, \tag{3.59}\] where \(\beta_{J,\ell}\) are normalization factors, known for \(0\leq\ell\leq J\leq 2\). For example, it is equal to \(1\) when \(0\leq J\leq 1\), and is given in appendix A.4 when \(J=2\). Now consider the principal series contribution to the Kallen-Lehmann decomposition after performing the change of variables \(\lambda=Rm\) (the factor \(\frac{R}{m}\) below arises from \(d\lambda=R\,dm=\frac{R}{2m}\,dm^{2}\), together with the evenness of \(\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}\) in \(\lambda\))14 Footnote 14: Notice that, assuming the two-point function we are decomposing has mass dimensions \(2\,\mathbf{\Delta}\), and given that the mass dimensions of the free propagators are \[[[(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})]^{J-\ell}G_{Rm,\ell}(Y_{1},Y_{2};W_{1},W_{2})]=d-1+2(J-\ell)\,, \tag{3.60}\] the correct way to restore dimensions to the spectral densities is to reintroduce an extra factor of \(R^{d-1+2(J-\ell)-2\mathbf{\Delta}}\) in \(\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(Rm)\). \[\sum_{\ell=0}^{J}\int_{0}^{\infty}dm^{2}\frac{R}{m}\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(Rm)[(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})]^{J-\ell}G_{Rm,\ell}(Y_{1},Y_{2};W_{1},W_{2}).
\tag{3.61}\] If the two-point function we are decomposing does not diverge as \(R\to\infty\), then in this limit (3.61) becomes \[\text{(3.61)}\approx\sum_{\ell=0}^{J}\int_{0}^{\infty}dm^{2}\ \rho^{\mathcal{M},\ell}_{\mathcal{O}^{(J)}}(m^{2})\Delta^{(J)}_{m^{2},\ell}(x_{1},x_{2};w_{1},w_{2})\,, \tag{3.62}\] where we read off the connection between de Sitter principal series and flat space spectral densities \[\rho^{\mathcal{M},\ell}_{\mathcal{O}^{(J)}}(m^{2})=\lim_{R\to\infty}\frac{\beta_{J,\ell}\,R}{m^{2\ell+1}}\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(Rm)+\cdots \tag{3.63}\] where the dots stand for contributions coming from other UIRs. In practice this means that, in the large \(R\) limit, de Sitter spectral densities grow with a power of \(R\) which is fixed by dimensional analysis, and its coefficient is the associated flat space spectral density. We check that (3.63) is true for our CFT examples in section 5.2 by comparing with the flat space CFT spectral densities computed in [47], and find perfect agreement.

Let us also discuss the discrete series contributions in \(d=1\): \[\sum_{p=0}^{J}\rho_{\mathcal{O}^{(J)}}^{\mathcal{D}_{p}}(W_{1}\cdot\nabla_{1})^{J}(W_{2}\cdot\nabla_{2})^{J}G_{-i(p-\frac{1}{2})}(Y_{1},Y_{2})\,. \tag{3.64}\] To restore dimensions correctly, we need to redefine \(\rho_{\mathcal{O}^{(J)}}^{\mathcal{D}_{p}}\) with a factor of \(R^{-2\boldsymbol{\Delta}+2J}\) where \(2\boldsymbol{\Delta}\) is the mass dimension of the two-point function we are decomposing. But then, under the flat space limit, the only way for this quantity to survive is if \(\boldsymbol{\Delta}=J\). Then, these contributions can be incorporated in (3.63) as \[\rho_{\mathcal{O}^{(J)}}^{\mathcal{M},0}(m^{2})=\delta(m^{2})\sum_{p=0}^{J}\rho_{\mathcal{O}^{(J)}}^{\mathcal{D}_{p}}\,,\qquad\text{if $d=1$ and $\boldsymbol{\Delta}=J$}\,. \tag{3.65}\] We find agreement between these statements and the massless representations that appear in \(d=1\) in the CFT spectral densities in [47] when the CFT primary being decomposed is a conserved current. We show this explicitly in the examples in section 5.2.

## 4 Inversion formulae

In this section, we find inversion formulae that extract the spectral densities from the Kallen-Lehmann decompositions (3.31) and (3.56). For the scalar two-point function, using the analytic continuation from the sphere, an inversion formula was found in [6]. There, it was shown that the spectral density can be computed by carrying out an integral over the discontinuity of the two-point function in the region where the two points are time-like separated. In this section, we propose an alternative and more convenient procedure to derive the spectral densities for spinning de Sitter two-point functions using harmonic analysis in Euclidean Anti de Sitter (EAdS). We will first derive an inversion formula for the principal series spectral densities in \(d\geq 2\) and then one for the principal series and discrete series contributions in \(d=1\). The main idea is to continue the Kallen-Lehmann decomposition (3.31) from dS to EAdS and to exploit the orthogonality of harmonic functions under integrals over EAdS. We emphasize that this method is a mathematical trick, and that all spectral densities we derive in this way lead to Kallen-Lehmann integral representations which we numerically test directly in de Sitter.
In section 4.3, we argue that for two-point functions satisfying certain criteria, there are no more contributions to the Kallen-Lehmann decomposition other than principal series contributions.

### Wick rotation to EAdS

As mentioned above, the first step to invert the Kallen-Lehmann decomposition is continuing (3.31) to Euclidean Anti de Sitter space, of which we review various coordinate systems and for which we set up notation in Appendix F.1. Here we describe the precise way in which we realize this continuation, inspired by what was done in [5; 48; 49; 50]. By \(SO(d+1,1)\) invariance, Wightman functions only depend on the following dot products \[G_{\mathcal{O}^{(J)}}(Y_{1},Y_{2};W_{1},W_{2})=G_{\mathcal{O}^{(J)}}(Y_{1}\cdot Y_{2},(Y_{1}\cdot W_{2})(Y_{2}\cdot W_{1}),W_{1}\cdot W_{2}) \tag{4.1}\] As discussed in 3.1.1, Wightman functions in de Sitter are defined with an \(i\epsilon\) prescription which is realized in planar coordinates (2.22) as \[Y_{1}=Y_{1}(\eta_{1}e^{i\epsilon},\mathbf{y}_{1})\,,\ Y_{2}=Y_{2}(\eta_{2}e^{-i\epsilon},\mathbf{y}_{2})\,. \tag{4.2}\] We Wick rotate to EAdS by simply taking \(\epsilon\to\frac{\pi}{2}\) and identifying the EAdS Poincare coordinates \(z=|\eta|\) and \(\mathbf{x}=\mathbf{y}\), so that the dot products transform as \[Y_{1}\cdot Y_{2}\underset{\epsilon\to\frac{\pi}{2}}{\longrightarrow}X_{1}\cdot X_{2}\,,\qquad(Y_{1}\cdot W_{2})(Y_{2}\cdot W_{1})\underset{\epsilon\to\frac{\pi}{2}}{\longrightarrow}(X_{1}\cdot W_{2})(X_{2}\cdot W_{1})\,, \tag{4.3}\] where \(X\in\text{EAdS}_{d+1}\) and \(X\cdot W=0\) as is reviewed in F.1. It can be checked that, under this particular Wick rotation, \(Y_{1}\cdot Y_{2}\) will move through the domain of analyticity discussed in section 3 and will not cross the cut at \(Y_{1}\cdot Y_{2}\in[1,\infty)\). In Figure 3.1 we show an example of how \(Y_{1}\cdot Y_{2}\) moves in the complex plane under this rotation (a small numerical illustration of this path is given below). Moreover, this Wick rotation maps Wightman functions in dS for free traceless symmetric tensor fields to harmonic functions in EAdS:15 Footnote 15: For example, this can be derived by analytically continuing (3.30) to EAdS. \[G_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})\underset{\epsilon\to\frac{\pi}{2}}{\longrightarrow}\Gamma(\pm i\lambda)\,\Omega_{\lambda,\ell}(X_{1},X_{2};W_{1},W_{2})\,, \tag{4.4}\] where throughout this paper we use the shorthand convention that, inside gamma functions and Pochhammer symbols, \(\Gamma(a\pm b)\equiv\Gamma(a+b)\Gamma(a-b)\). In Appendix F, we review some of the useful properties of harmonic functions. Among them, the orthogonality relation (F.12) will play a crucial role in the derivation of our inversion formula.

### Inversion formula for \(d\geq 2\)

We start with the spinning Kallen-Lehmann decomposition in \(d\geq 2\) that we proved in section 3. After the Wick rotation to EAdS, it reads \[G_{\mathcal{O}^{(J)}}(X_{1},X_{2};W_{1},W_{2})= \sum_{\ell=0}^{J}\int_{\mathbb{R}}d\lambda\ \rho_{\mathcal{O}^{(J)}}^{\mathcal{P},\ell}(\lambda)\left[(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})\right]^{J-\ell}\Gamma(\pm i\lambda)\Omega_{\lambda,\ell}(X_{1},W_{1};X_{2},W_{2})\] \[+\text{possible contributions from other UIRs} \tag{4.5}\] As we will discuss in section 4.3, the harmonic functions form a complete and orthogonal basis of square-integrable two-point functions [34]. In other words, if a two-point function is square-integrable, its Kallen-Lehmann decomposition only includes principal series contributions.
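As the promised illustration of the Wick rotation path of section 4.1: the sketch below assumes the standard planar-coordinate chordal distance \(Y_{1}\cdot Y_{2}=\frac{\eta_{1}^{2}+\eta_{2}^{2}-|\mathbf{y}_{12}|^{2}}{2\eta_{1}\eta_{2}}\) (our parametrization, with arbitrary sample kinematics) and checks that \(\sigma\) avoids the cut \([1,\infty)\) along the whole rotation and lands on \(X_{1}\cdot X_{2}\):

```python
import cmath

eta1, eta2, y12sq = -0.7, -1.9, 2.5     # sample spacelike kinematics (eta < 0)

def sigma(eps):
    """Y1.Y2 with eta1 -> e^{i eps} eta1 and eta2 -> e^{-i eps} eta2."""
    e1 = cmath.exp(1j * eps) * eta1
    e2 = cmath.exp(-1j * eps) * eta2
    return (e1**2 + e2**2 - y12sq) / (2 * e1 * e2)

path = [sigma(cmath.pi / 2 * k / 200) for k in range(201)]
touches_cut = any(abs(s.imag) < 1e-12 and s.real >= 1 for s in path)
print("path touches the cut [1, oo):", touches_cut)          # False

# The endpoint equals the EAdS chordal product with z = |eta|, x = y
z1, z2 = abs(eta1), abs(eta2)
x1_dot_x2 = -(z1**2 + z2**2 + y12sq) / (2 * z1 * z2)
print(abs(path[-1] - x1_dot_x2))                              # ~0
```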
In \(d\geq 2\), we find that the two-point functions we considered in our examples in section 5 can always be studied in a regime where the principal series contributions reproduce the full two-point function, and then by analytic continuation away from that regime we could recover any complementary series part as poles that cross the contour of integration in (4.5). We have not encountered exceptional series contributions in any of our examples. Given these facts, let us for now focus on inverting the decomposition over the principal series. To exploit the orthogonality of the harmonic functions, we act on both sides of (4.5) with the integro-differential operator \[\int_{X_{1}}\Omega_{\lambda^{\prime},m}(X_{3},X_{1},W_{3},K_{1})[(K_{1}\cdot \nabla_{1})(K_{2}\cdot\nabla_{2})]^{J-m}\,. \tag{4.6}\] where we use the shorthand notation for integrating \(X_{1}\) over EAdS defined in (F.13). The right hand side of (4.5) becomes \[\sum_{\ell=0}^{J}\int d\lambda\,\Gamma(\pm i\lambda)\rho^{\mathcal{P},\ell}_{ \mathcal{O}^{(J)}}(\lambda)\int_{X_{1}}\Omega_{\lambda^{\prime},m}[(K_{1}\! \cdot\!\nabla_{1})(K_{2}\!\cdot\!\nabla_{2})]^{J-m}[(W_{1}\!\cdot\!\nabla_{1}) (W_{2}\!\cdot\!\nabla_{2})]^{J-\ell}\Omega_{\lambda,\ell}\,, \tag{4.7}\] where we are omitting the arguments of the harmonic functions to avoid clutter. Let us focus on the quantity \[(K_{1}\cdot\nabla_{1})^{J-m}(W_{1}\cdot\nabla_{1})^{J-\ell}\Omega_{\lambda, \ell}(X_{1},X_{2};W_{1},W_{2})\,. \tag{4.8}\] The fact that divergences of harmonic functions vanish, implies that we can express (4.8) in terms of commutators \[[(K_{1}\cdot\nabla_{1})^{J-m},(W_{1}\cdot\nabla_{1})^{J-\ell}]\Omega_{\lambda, \ell}(X_{1},X_{2};W_{1},W_{2})\,. \tag{4.9}\] Using basic properties of commutators together with the divergenceless condition, we can write this as \[(K_{1}\cdot\nabla_{1})^{J-m-1}[(K_{1}\cdot\nabla_{1}),(W_{1}\cdot\nabla_{1}) ^{J-\ell}]\Omega_{\lambda,\ell}(X_{1},X_{2};W_{1},W_{2})\,. \tag{4.10}\] Evaluating this commutator (F.19), we get that (4.7) can be written as \[\sum_{\ell=0}^{J}\kappa_{J-\ell,\ell}^{2}\int d\lambda\,\,\Gamma(\pm i \lambda)\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\int_{X_{1}} \Omega_{\lambda^{\prime},m}[(K_{1}\!\cdot\!\nabla_{1})(K_{2}\cdot\nabla_{2}) ]^{J-m-1}[(W_{1}\!\cdot\!\nabla_{1})(W_{2}\cdot\nabla_{2})]^{J-\ell-1}\Omega _{\lambda,\ell}\,, \tag{4.11}\] with \[\kappa_{n,m}\equiv-\frac{n}{2}(d+n+2m-2)\left(\left(\frac{d}{2}+m+n-1\right)^ {2}+\lambda^{2}\right)\,. \tag{4.12}\] By iteration, we find three possible scenarios that can happen to the integral over \(X_{1}\): * If \(\ell>m\), the spatial integral would eventually be proportional to \[\int_{X_{1}}\Omega_{\lambda^{\prime},m}(X_{3},X_{1};W_{3},K_{1})(K_{1}\cdot \nabla_{1})(K_{2}\cdot\nabla_{2})\Omega_{\lambda,\ell}(X_{1},X_{2};W_{1},W_{2 })=0\] (4.13) * If \(\ell<m\), instead, one eventually obtains \[\int_{X_{1}}\Omega_{\lambda^{\prime},m}(X_{3},X_{1};W_{3},K_{1})(W_{1}\cdot \nabla_{1})(W_{2}\cdot\nabla_{2})\Omega_{\lambda,\ell}(X_{1},X_{2};W_{1},W_{2 })=0\,,\] (4.14) which vanishes because integrating by parts the derivative \(\nabla_{1}\), the integrand becomes a divergence on \(\Omega_{\lambda^{\prime},m}\,\). * The case \(\ell=m\) is the only one that does not vanish. 
Instead, it gives for the spatial integral \[\begin{split}\int_{X_{1}}\Omega_{\lambda^{\prime},m}[(K_{1}\cdot\nabla_{1})&(K_{2}\cdot\nabla_{2})]^{J-m}[(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})]^{J-\ell}\Omega_{\lambda,\ell}\\ &=\delta_{\ell,m}\delta(\lambda-\lambda^{\prime})\left(\prod_{n=1}^{J-\ell}\kappa_{n,\ell}^{2}\right)\Omega_{\lambda,\ell}(X_{1},X_{3};W_{1},W_{3})\,.\end{split} \tag{4.15}\] This procedure allows us to isolate the \(\ell\)-th contribution in the sum in (4.5) \[\begin{split}\widetilde{\mathcal{N}}_{J,\ell}&\Omega_{\lambda,\ell}(X_{2},X_{3};W_{2},W_{3})\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\\ &=\int_{X_{1}}\Omega_{\lambda,\ell}(X_{3},X_{1};W_{3},K_{1})[(K_{1}\cdot\nabla_{1})(K_{2}\cdot\nabla_{2})]^{J-\ell}G_{\mathcal{O}^{(J)}}(X_{1},X_{2};W_{1},W_{2})\,,\end{split} \tag{4.16}\] where \[\begin{split}\widetilde{\mathcal{N}}_{J,\ell}&\equiv\Gamma(\pm i\lambda)\prod_{n=1}^{J-\ell}\kappa_{n,\ell}^{2}=\frac{\ell!\Gamma(\pm i\lambda)}{2^{2(J-\ell)}}\left(\frac{d-1}{2}\right)_{\ell}\left((J-\ell)!(d+2\ell-1)_{J-\ell}\left(\frac{d}{2}\pm i\lambda+\ell\right)_{J-\ell}\right)^{2}\,.\end{split} \tag{4.17}\] Equation (4.16) is valid for all \(X_{2}\) and \(X_{3}\) in EAdS as well as any null \(W_{2}\) and \(W_{3}\) that satisfy the tangential condition. We therefore pick the convenient choice of \(X_{2}=X_{3}\). Note that, unlike bulk-to-bulk propagators, harmonic functions are regular at coincident points in EAdS. In addition, we take a trace over the free indices by performing the substitution \(W_{3}\to K_{2}\). After all these operations, we find an **inversion formula** for the principal series spectral densities appearing in the Kallen-Lehmann decomposition of spinning two-point functions in dS \[\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)=\frac{1}{\mathcal{N}_{J,\ell}}\int_{X_{1}}\Omega_{\lambda,\ell}(X_{2},X_{1};K_{2},K_{1})[(K_{1}\cdot\nabla_{1})(K_{2}\cdot\nabla_{2})]^{J-\ell}G_{\mathcal{O}^{(J)}}(X_{1},X_{2};W_{1},W_{2})\,, \tag{4.18}\] with \[\mathcal{N}_{J,\ell}\equiv\widetilde{\mathcal{N}}_{J,\ell}\ \Omega_{\lambda,\ell}(X,X;K,W)\,. \tag{4.19}\] The trace and coincident point limit of \(\Omega_{\lambda,\ell}\) was computed in [34]. Here, we report the result \[\Omega_{\lambda,J}(X,X;K,W)=\frac{J!\left(\frac{d-1}{2}\right)_{J}g(J)}{(4\pi)^{\frac{d+1}{2}}\Gamma(\frac{d+1}{2})\Gamma(\pm i\lambda)}\left(\frac{d}{2}+J-1\pm i\lambda\right)\Gamma\left(\frac{d}{2}-1\pm i\lambda\right) \tag{4.20}\] with \[\begin{split} g(J)&=\frac{(2J+d-2)(J+d-3)!}{(d-2)!J!}\,,\hskip 28.452756ptd\geq 3\,,\\ g(0)&=1\,,\qquad g(J\neq 0)=2\,,\hskip 28.452756ptd=2\,.\end{split} \tag{4.21}\] Altogether, the overall normalization factor \(\mathcal{N}_{J,\ell}\) is equal to \[\mathcal{N}_{J,\ell}=\frac{g(\ell)\left[\ell!\,(J-\ell)!\,(\tfrac{d-1}{2})_{\ell}\,(d+2\ell-1)_{J-\ell}\,\Gamma(\tfrac{d}{2}+J\pm i\lambda)\right]^{2}}{4^{J-\ell}(4\pi)^{\frac{d+1}{2}}\Gamma(\tfrac{d+1}{2})\Gamma(\tfrac{d}{2}+\ell\pm i\lambda)\prod_{t=0}^{\ell-1}\left(\tfrac{d}{2}\pm i\lambda+t-1\right)}. \tag{4.22}\] In practice, one might conveniently evaluate (4.18) by placing \(X_{2}\) at the origin of EAdS. This choice makes the angular part of the integral trivial after carrying out all derivatives and index contractions. Therefore, we will be left with a one-dimensional integral over the EAdS chordal distance. We spell out the explicit formulae of these one dimensional integrals for \(J=0,1\) in Appendix G.
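Equations (4.17), (4.20) and (4.21) involve a fair amount of bookkeeping, so as a sanity check (ours, not from [34]) the product (4.19) can be compared numerically against the closed form (4.22) for sample values:

```python
from mpmath import mp, mpf, gamma, rf, fac, pi

mp.dps = 30
d, J, l, lam = mpf(3), 2, 1, mpf('0.4')       # sample values (ours)
il = 1j * lam

def gpm(z):
    """Shorthand for Gamma(z + i lam) Gamma(z - i lam)."""
    return gamma(z + il) * gamma(z - il)

# g(l) from eq. (4.21), here for d >= 3
g = (2*l + d - 2) * gamma(l + d - 2) / (gamma(d - 1) * fac(l))

# N~_{J,l} from (4.17) times Omega_{lam,l}(X,X;K,W) from (4.20)
Ntilde = (fac(l) * gpm(0) / 2**(2*(J - l)) * rf((d - 1)/2, l)
          * (fac(J - l) * rf(d + 2*l - 1, J - l)
             * rf(d/2 + l + il, J - l) * rf(d/2 + l - il, J - l))**2)
Omega_coinc = (fac(l) * rf((d - 1)/2, l) * g
               / ((4*pi)**((d + 1)/2) * gamma((d + 1)/2) * gpm(0))
               * (d/2 + l - 1 + il) * (d/2 + l - 1 - il) * gpm(d/2 - 1))

# Closed form (4.22)
prod = 1
for t in range(l):
    prod *= (d/2 + t - 1 + il) * (d/2 + t - 1 - il)
N = (g * (fac(l) * fac(J - l) * rf((d - 1)/2, l)
          * rf(d + 2*l - 1, J - l) * gpm(d/2 + J))**2
     / (4**(J - l) * (4*pi)**((d + 1)/2) * gamma((d + 1)/2)
        * gpm(d/2 + l) * prod))

print(abs(Ntilde * Omega_coinc - N) / abs(N))  # ~1e-29: (4.19) reproduces (4.22)
```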
#### 4.2.1 Spurious poles

The inversion formula (4.18) implies that the spectral density \(\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\) may contain \(\mathcal{O}^{(J)}\)-independent poles in the complex \(\lambda\) plane coming from the normalization factor \(\mathcal{N}_{J,\ell}^{-1}\) and the harmonic function \(\Omega_{\lambda,\ell}\), which we will refer to as _spurious poles_. First, we claim that the poles of \(\Omega_{\lambda,\ell}\) actually do not lead to poles in \(\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}\). More precisely, focusing on the \(\lambda\)-dependent part of \(\mathcal{N}_{J,\ell}\), c.f. eq. (4.22) \[\mathcal{N}_{J,\ell}^{-1}\sim\frac{\Gamma(\tfrac{d}{2}\pm i\lambda+\ell)}{\Gamma(\tfrac{d}{2}\pm i\lambda+J)^{2}}\prod_{t=0}^{\ell-1}\left(\left(\frac{d}{2}-1+t\right)^{2}+\lambda^{2}\right)\,, \tag{4.23}\] one can show that the factors in the product cancel out all the poles of \(\Omega_{\lambda,\ell}\). To illustrate this cancellation, we write down the poles and residues of \(\Omega_{\lambda,\ell}\) for \(\ell\) up to \(2\), using the explicit expressions of the harmonic functions given in appendix F.3 \[id\underset{\lambda=-i\tfrac{d-2}{2}}{\text{Res}}\left[\Omega_{\lambda,1}(X_{1},X_{2};W_{1},W_{2})\right]=(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})\Omega_{-i\tfrac{d}{2},0}(X_{1},X_{2})\,, \tag{4.24}\] \[i\frac{d+2}{2}\underset{\lambda=-i\tfrac{d}{2}}{\text{Res}}\left[\Omega_{\lambda,2}(X_{1},X_{2};W_{1},W_{2})\right]=(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})\Omega_{-i\tfrac{d+2}{2},1}(X_{1},X_{2};W_{1},W_{2})\,,\] \[-id(d+2)\underset{\lambda=-i\tfrac{d-2}{2}}{\text{Res}}\left[\Omega_{\lambda,2}(X_{1},X_{2};W_{1},W_{2})\right]=(W_{1}\cdot\nabla_{1})^{2}(W_{2}\cdot\nabla_{2})^{2}\Omega_{-i\tfrac{d+2}{2},0}(X_{1},X_{2})\,.\] These relations have very clear physical meanings. For example, in the first line of (4.24), evaluating the residue of \(\Omega_{\lambda,1}\) at \(\lambda=-i\tfrac{d-2}{2}\) amounts to approaching the massless limit of the free two-point function of a Proca field, recalling that the Proca mass in dS\({}_{d+1}\) is given by \(\sqrt{(\tfrac{d}{2}-1)^{2}+\lambda^{2}}\)16. As in flat space, the longitudinal part of a Proca two-point function diverges in the massless limit and can be removed by a gauge transformation, with the ghost field being a massless scalar. This explains the appearance of \((W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})\Omega_{-i\tfrac{d}{2},0}\). Similarly, the second and third lines of (4.24) correspond to taking the massless and partially massless limit [51; 52; 53; 54; 55] of a free spin 2 field respectively. In the latter case, the ghost field is a tachyonic scalar of mass squared \(m^{2}=-(d+1)\) and that is why we have \((W_{1}\cdot\nabla_{1})^{2}(W_{2}\cdot\nabla_{2})^{2}\Omega_{-i\frac{d+2}{2},0}\). More generally, \(\Omega_{\lambda,\ell}\) has a simple pole at the partially massless point of depth \(t\in\{0,1,\cdots,\ell-1\}\), i.e. \(\lambda=-i(\frac{d}{2}+t-1)\), and at this point the residue is proportional to \((W_{1}\cdot\nabla_{1})^{\ell-t}(W_{2}\cdot\nabla_{2})^{\ell-t}\Omega_{-i(\frac{d}{2}+\ell-1),t}\): \[\underset{\lambda=-i(\frac{d}{2}+t-1)}{\text{Res}}\Omega_{\lambda,\ell}=\frac{1}{\alpha_{\ell,t}}(W_{1}\cdot\nabla_{1})^{\ell-t}(W_{2}\cdot\nabla_{2})^{\ell-t}\Omega_{-i(\frac{d}{2}+\ell-1),t}\, \tag{4.25}\] where \(\alpha_{\ell,t}\) is a constant, e.g. \(\alpha_{1,0}=id,\alpha_{2,1}=i\frac{d+2}{2},\alpha_{2,0}=-id(d+2)\).
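These three constants can be cross-checked against the closed form for \(\alpha_{\ell,t}\) conjectured in eq. (4.26) just below; a short symbolic verification (our sympy script, with the values above hard-coded):

```python
import sympy as sp

d = sp.symbols('d', positive=True)

def alpha(l, t):
    """Closed form conjectured in eq. (4.26)."""
    return (-sp.I * (-2)**(l - t) / sp.binomial(l, t)
            * sp.gamma(d/2 + l) * sp.gamma(l - t) / sp.gamma(d/2 + t))

known = {(1, 0): sp.I*d,                # alpha_{1,0} = i d
         (2, 1): sp.I*(d + 2)/2,        # alpha_{2,1} = i (d+2)/2
         (2, 0): -sp.I*d*(d + 2)}       # alpha_{2,0} = -i d (d+2)

for (l, t), val in known.items():
    assert sp.simplify(sp.gammasimp(alpha(l, t) - val)) == 0
print("eq. (4.26) reproduces all alpha values listed above")
```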
Evidently, the partially massless poles just described are precisely cancelled by the corresponding zeros in \(\mathcal{N}_{J,\ell}^{-1}\). Before proceeding to discuss the poles of \(\mathcal{N}_{J,\ell}^{-1}\), we'd like to make a conjecture about the explicit form of \(\alpha_{\ell,t}\) for generic \(\ell\) and \(t\): \[\alpha_{\ell,t}=-i(-2)^{\ell-t}\binom{\ell}{t}^{-1}\,\frac{\Gamma\left(\frac{d}{2}+\ell\right)\Gamma(\ell-t)}{\Gamma\left(\frac{d}{2}+t\right)}. \tag{4.26}\] It matches the known results of \(\alpha_{\ell,t}\) for \(\ell\leq 2\). The remaining ratio of gamma functions in \(\mathcal{N}_{J,\ell}^{-1}\) has poles at \[\lambda=\pm i\left(\frac{d}{2}+\ell+q-1\right)\,,\qquad q\in\{1,2,\cdots,J-\ell\}\,. \tag{4.27}\] Combining the conjecture (4.26) and the inversion formula (4.18), we can derive a relation between the residue of \(\rho_{\mathcal{O}^{(J)}}^{\mathcal{P},\ell}\) at these spurious poles and the value of \(\rho_{\mathcal{O}^{(J)}}^{\mathcal{P},\ell+q}\) at \(\lambda=-i\left(\frac{d}{2}+\ell-1\right)\): \[\rho_{\mathcal{O}^{(J)}}^{\mathcal{P},\ell+q}\left(-i\left(\frac{d}{2}+\ell-1\right)\right)=i\,2^{q}\,\Gamma(q)\binom{\ell+q}{\ell}^{-1}\left(\frac{d}{2}+\ell-1\right)_{q}\,\underset{\lambda=-i(\frac{d}{2}+\ell+q-1)}{\text{Res}}\rho_{\mathcal{O}^{(J)}}^{\mathcal{P},\ell}(\lambda) \tag{4.28}\] We note that these identities are very similar to relations between conformal blocks and partial amplitudes of different spins and conformal dimension found in the AdS and CFT literature [34, 56, 57]. In all the examples we have tested in section 5, the relations (4.28) are verified to hold. In section 4.4, we will argue that closing the contour of integration over the principal series in (4.5) and taking the late time limit turns the Kallen-Lehmann decomposition into a sum over boundary operators. The identities (4.25) and (4.28) ensure \[\underset{\lambda=-i(\frac{d}{2}+\ell+q-1)}{\text{Res}}\Big{[}\rho_{\mathcal{O}^{(J)}}^{\mathcal{P},\ell}(\lambda)(W_{1}\cdot\nabla_{1}\,W_{2}\cdot\nabla_{2})^{J-\ell}G_{\lambda,\ell}\Big{]}\\ =-\underset{\lambda=-i(\frac{d}{2}+\ell-1)}{\text{Res}}\Big{[}\rho_{\mathcal{O}^{(J)}}^{\mathcal{P},\ell+q}(\lambda)(W_{1}\cdot\nabla_{1}\,W_{2}\cdot\nabla_{2})^{J-\ell-q}G_{\lambda,\ell+q}\Big{]}\, \tag{4.29}\] where the residue on the L.H.S. comes from \(\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\), and the residue on the R.H.S. comes from \(G_{\lambda,\ell+q}\). This relation ensures that the spurious poles picked up when closing the contour of integration do not contribute to the two-point function of \(\mathcal{O}^{(J)}\), implying the absence of boundary operators with the spurious conformal dimensions \(\Delta=d+\ell+q-1\) in the Boundary Operator Expansion of \(\mathcal{O}^{(J)}\).

#### 4.2.2 Relation to the inversion formula from the sphere

In this section, we compare the explicit form of (4.18) in the \(J=0\) case with the inversion formula obtained from analytical continuation from the sphere in [6]. In Appendix G, we show that the inversion formula (4.18) for some scalar operator \(\mathcal{O}\) simplifies to \[\rho^{\mathcal{P},0}_{\mathcal{O}}(\lambda)=\frac{2\pi^{\frac{d+1}{2}}}{\Gamma(\pm i\lambda)}\int_{-\infty}^{-1}d\sigma\,(\sigma^{2}-1)^{\frac{d-1}{2}}\,\mathbf{F}\left(\frac{d}{2}+i\lambda,\frac{d}{2}-i\lambda,\frac{d+1}{2},\frac{1+\sigma}{2}\right)\,G_{\mathcal{O}}(\sigma).
\tag{4.30}\] where \(G_{\mathcal{O}}(\sigma)\) is the two-point function of \(\mathcal{O}\) and by symmetry it can only depend on the \(SO(d+1,1)\) invariant \(\sigma\equiv Y_{1}\cdot Y_{2}\). The spectral density can thus be derived from an integral over a range of \(\sigma\) that corresponds to a part of the spacelike separated region in de Sitter. This means one would be able to reconstruct the whole two-point function just having access to its value in the region \(\sigma\in(-\infty,-1)\).17 Footnote 17: Here we assume the two-point function is well-defined and single-valued and satisfies the appropriate conditions for the completeness of the principal series discussed in section 4.3. Equation (4.30) is another version of the inversion formula [6] \[\rho^{\mathcal{P},0}_{\mathcal{O}}(\lambda)=\frac{(4\pi)^{\frac{d-1}{2}}\Gamma(1-\frac{d}{2}\pm i\lambda)}{i\Gamma(\pm i\lambda)}\int_{1}^{\infty}d\sigma\;\mathbf{F}\left(1-\frac{d}{2}+i\lambda,1-\frac{d}{2}-i\lambda,\frac{3-d}{2},\frac{1-\sigma}{2}\right)\text{Disc}\left[G_{\mathcal{O}}(\sigma)\right] \tag{4.31}\] which was found by analytical continuation from the sphere. Here the discontinuity is defined as \(\text{Disc}\left[G_{\mathcal{O}}(\sigma)\right]=\lim_{\epsilon\to 0}G_{\mathcal{O}}(\sigma+i\epsilon)-G_{\mathcal{O}}(\sigma-i\epsilon)\). The integral in (4.31) is over the timelike separated region (\(\sigma\in[1,\infty)\)) where the two-point function has a branch cut and the integration is over its discontinuity. We now argue that these two formulae are simply equivalent, assuming that the Wightman two-point function \(G_{\mathcal{O}}\) satisfies the analyticity properties discussed at the beginning of Section 3. Consider the integral (4.31). It can be written as a contour integral that goes around the branch cut \(\sigma\in[1,\infty)\). One can deform this contour until it surrounds the region \(\sigma\in(-\infty,-1]\), as is illustrated in Figure 4.1. In this contour deforming process we assumed \(G_{\mathcal{O}}(\sigma)\) is analytic everywhere except for the mentioned branch cut and that it decays sufficiently fast so that the contribution from the arc at infinity vanishes18. Footnote 18: The hypergeometric in (4.31) falls like \(\sigma^{-1+\frac{d}{2}}\), so \(G_{\mathcal{O}}(\sigma)\) has to fall faster than \(\sigma^{-\frac{d}{2}}\) for this contribution to vanish. This is the same condition for the completeness of the principal series discussed in section 4.3. This new contour surrounds the branch cut of the regularized hypergeometric function in (4.31), which is precisely in the region \(\sigma\in(-\infty,-1)\): \[\rho^{\mathcal{P},0}_{\mathcal{O}}(\lambda)=\frac{(4\pi)^{\frac{d-1}{2}}\Gamma(1-\frac{d}{2}\pm i\lambda)}{i\Gamma(\pm i\lambda)}\int_{-\infty}^{-1}d\sigma\;\text{Disc}\Big{[}\mathbf{F}\left(1-\frac{d}{2}+i\lambda,1-\frac{d}{2}-i\lambda,\frac{3-d}{2},\frac{1-\sigma}{2}\right)\Big{]}G_{\mathcal{O}}(\sigma)\,. \tag{4.32}\] The discontinuity of the hypergeometric function around its branch cut is given by \[\text{Disc}\left[\,\mathbf{F}\left(1-\frac{d}{2}+i\lambda,1-\frac{d}{2}-i\lambda,\frac{3-d}{2},\frac{1-\sigma}{2}\right)\right]=\\ \frac{2^{2-d}\pi i}{\Gamma(1-\frac{d}{2}\pm i\lambda)}(\sigma^{2}-1)^{\frac{d-1}{2}}\,\mathbf{F}\left(\frac{d}{2}+i\lambda,\frac{d}{2}-i\lambda,\frac{d+1}{2},\frac{1+\sigma}{2}\right). \tag{4.33}\] Using this, one finds that (4.30) and (4.31) are equivalent.
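Identity (4.33) is also easy to verify numerically by sampling the two sides of the cut; a minimal mpmath check (the sample values of \(d\), \(\lambda\) and \(\sigma\) are ours, and the finite side-shift approximates the \(\epsilon\to 0\) limit):

```python
from mpmath import mp, mpf, gamma, hyp2f1, pi

mp.dps = 25
d, lam, sig = mpf(2), mpf('0.7'), mpf(-3)       # sample point with sigma < -1
il = 1j * lam
a, b, c = 1 - d/2 + il, 1 - d/2 - il, (3 - d)/2
z = (1 - sig)/2                                  # z > 1: on the 2F1 branch cut
eps = mpf('1e-15')

# LHS: discontinuity of the regularized hypergeometric across the cut
lhs = (hyp2f1(a, b, c, z + 1j*eps) - hyp2f1(a, b, c, z - 1j*eps)) / gamma(c)

# RHS of eq. (4.33)
rhs = (2**(2 - d) * pi * 1j / (gamma(1 - d/2 + il) * gamma(1 - d/2 - il))
       * (sig**2 - 1)**((d - 1)/2)
       * hyp2f1(d/2 + il, d/2 - il, (d + 1)/2, (1 + sig)/2) / gamma((d + 1)/2))

print(abs(lhs - rhs))                            # ~1e-14 or smaller
```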
Figure 4.1: The contour deformation that illustrates the equivalence of the two inversion formulae – eq. (4.31) from the sphere and eq. (4.30) from EAdS. In orange, we represent the contour which in the inversion formula derived from the sphere is around the cut at \(\sigma\in[1,\infty)\), represented by the red zigzag line. We deform the contour as indicated by the gray arrows to the \(\sigma\in(-\infty,-1]\) interval where there is the cut of the hypergeometric in eq. (4.31) represented by the blue zigzag line. We assume the two-point function satisfies the analyticity properties discussed in section 3.

### Completeness of principal series and analyticity of the spectral densities

In this section, we will spell out the conditions under which the Kallen-Lehmann decomposition of a spinning two-point function in EAdS\({}_{d+1}\) with \(d\geq 2\) only contains principal series representations. Moreover, by analytical continuation of the inversion formula derived in (4.18), we study analytic properties of the spectral densities. Let us start from the fact that harmonic functions \(\Omega_{\lambda,\ell}(X_{1},X_{2};W_{1},W_{2})\) with \(\lambda\in\mathbb{R}\) are a complete basis for square-integrable two-point functions in EAdS\({}_{d+1}\) with \(d\geq 2\) [58; 34]. In other words, any square-integrable spin-\(J\) two-point function in EAdS can be written as \[G_{\mathcal{O}^{(J)}}(X_{1},X_{2};W_{1},W_{2})=\sum_{\ell=0}^{J}\int_{\mathbb{R}}d\lambda\,c_{\ell,J}(\lambda)\left((W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})\right)^{J-\ell}\Omega_{\lambda,\ell}(X_{1},X_{2};W_{1},W_{2}) \tag{4.34}\] for some coefficients \(c_{\ell,J}(\lambda)\) that do not depend on \(X_{1}\) and \(X_{2}\). The right hand side has exactly the form of the principal series contributions in the Kallen-Lehmann decomposition in de Sitter (3.31). Therefore, if a de Sitter two-point function after the Wick rotation to EAdS is square-integrable, we expect that only contributions from representations in the principal series appear in its Kallen-Lehmann decomposition. Let us see how square-integrability in EAdS translates into specific conditions on two-point functions in de Sitter. A generic spin-\(J\) two-point function in the index-free formalism can be organized as a polynomial in \(W_{1}\) and \(W_{2}\) as follows \[G_{\mathcal{O}^{(J)}}(X_{1},X_{2};W_{1},W_{2})=\sum_{n=0}^{J}\left(W_{1}\cdot W_{2}\right)^{J-n}\left((W_{1}\cdot X_{2})(W_{2}\cdot X_{1})\right)^{n}\mathcal{G}^{(n)}_{\mathcal{O}^{(J)}}(\sigma). \tag{4.35}\] Its square-integrability can be phrased in terms of the convergence of the following integral19 Footnote 19: For instance, in the case of a scalar two-point function (\(J=0\)) this condition simplifies to \[\int_{X_{1}}|G_{\mathcal{O}}(X_{1},X_{2})|^{2}=\int_{0}^{\infty}dr\,\sinh^{d}r\,|\mathcal{G}^{(0)}_{\mathcal{O}}(-\cosh r)|^{2}=\int_{-\infty}^{-1}d\sigma\,(\sigma^{2}-1)^{\frac{d-1}{2}}|\mathcal{G}^{(0)}_{\mathcal{O}}(\sigma)|^{2}<\infty. \tag{4.36}\] \[\int_{X_{1}}G_{\mathcal{O}^{(J)}}(X_{1},X_{2};W_{1},W_{2})=\sum_{n=0}^{J}\left(W_{1}\cdot W_{2}\right)^{J-n}((W_{1}\cdot X_{2})(W_{2}\cdot X_{1}))^{n}\,\mathcal{G}^{(n)}_{\mathcal{O}^{(J)}}(\sigma).
\tag{4.37}\] Substituting (4.35) into this condition, assuming that \(\mathcal{G}^{(n)}_{\mathcal{O}^{(J)}}(\sigma)\) are regular on the interval \(\sigma\in(-\infty,-1)\) (which corresponds to spacelike separation in de Sitter), we can keep the leading terms in the large \(\sigma\) limit and obtain the following inequality20 Footnote 20: This comes from (F.17) and counting powers of \(X_{1}\) and \(X_{2}\) in \[(K_{1}\cdot K_{2})^{J-n}((K_{1}\cdot X_{2})(K_{2}\cdot X_{1}))^{n}(W_{1}\cdot W_{2})^{J-m}((W_{1}\cdot X_{2})(W_{2}\cdot X_{1}))^{m}\,. \tag{4.38}\] \[\int_{-\infty}^{-1}d\sigma\ |\sigma|^{d-1+2J+m+n}\mathcal{G}^{(n)}_{\mathcal{O}^{(J)}}(\sigma)\mathcal{G}^{(m)}_{\mathcal{O}^{(J)}}(\sigma)<\infty\,,\qquad\forall m,n=0,\dots,J. \tag{4.39}\] Now let us assume that, in the large distance limit, these functions decay as power-law21 Footnote 21: As discussed in section 4.4, this statement follows from the existence of the bulk-to-boundary operator expansion. \[\mathcal{G}^{(n)}_{\mathcal{O}^{(J)}}(\sigma)\underset{\sigma\to-\infty}{\sim}|\sigma|^{-\omega_{J,n}-n}\,, \tag{4.40}\] Then, the square-integrability of a spinning two-point function and therefore the completeness of the principal series in its Kallen-Lehmann decomposition is ensured if \[\underset{n}{\text{min}}[\text{Re}(\omega_{J,n})]>\frac{d}{2}+J\,\qquad\text{completeness of principal series} \tag{4.41}\] where by \(\underset{n}{\text{min}}[x_{n}]\) we mean the minimum value of the set \(\{x_{n}\}\). When the fall-offs of a two-point function violate this condition, other representations than the principal series might appear in its Kallen-Lehmann decomposition. In the examples in section 5, we observe that in the limit cases in which this inequality is saturated, the principal series is still enough to reconstruct the full two-point function. Now let us consider a two-point function which satisfies the condition (4.41), so that only the principal series contributes to its Kallen-Lehmann decomposition. Given the inversion formula (4.18), we can analytically continue in \(\lambda\) and study the analyticity properties of the principal series spectral densities by studying the convergence of the inversion integrals. For instance, consider the scalar case, in which the only spectral density is given by the inversion formula (4.30). If we analytically continue this equation in the complex \(\Delta=\frac{d}{2}+i\lambda\) plane, we would see that the integral in (4.30) is convergent if \[d-\text{Re}(\omega_{0,0})<\text{Re}(\Delta)<\text{Re}(\omega_{0,0})\,, \tag{4.42}\] where we used the fact that the hypergeometric in (4.30) has large distance fall-offs with powers \(\Delta\) and \(d-\Delta\). We thus expect the spectral density \(\rho^{\mathcal{P}}_{\mathcal{O}^{(0)}}(\lambda)\) to be fully analytic in the strip defined in (4.42). In the spin 1 case, the explicit inversion formulae for \(\rho^{\mathcal{P},1}_{\mathcal{O}^{(1)}}\) and \(\rho^{\mathcal{P},0}_{\mathcal{O}^{(1)}}\) are given in Appendix G. In the large \(\sigma\) limit the inversion integrals converge if \[\begin{split}\ell=1:& d-\min_{n}\,\text{Re}(\omega_{1,n})<\ \text{Re}(\Delta)<\min_{n}\,\text{Re}(\omega_{1,n})\\ \ell=0:& d+1-\min_{n}\,\text{Re}(\omega_{1,n})<\ \text{Re}(\Delta)<\min_{n}\,\text{Re}(\omega_{1,n})-1\end{split} \tag{4.43}\]

Figure 4.2: The analytic structure of the spectral density of a scalar two-point function with a power-law large distance behavior \(G_{\mathcal{O}^{(0)}}(\sigma)\underset{\sigma\to-\infty}{\sim}|\sigma|^{-\omega}\). There is a strip of analyticity (the blue shaded region) if \(\text{Re}\left(\omega\right)>\frac{d}{2}\). Because of the shadow symmetry of the spectral density, the positions of the possible poles (grey crosses) are also shadow symmetric. Moreover, if the operator in the two-point function is Hermitian, the poles come in complex conjugate pairs, i.e. reflection symmetric with respect to the x-axis.
For arbitrary spin, we conjecture that \(\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\) is analytic in \[d-(\underset{n}{\text{min}}\ \text{Re}(\omega_{J,n})+\ell-J)<\text{Re}\left(\Delta\right)<\underset{n}{\text{min}}\ \text{Re}(\omega_{J,n})+\ell-J\,. \tag{4.44}\] We have explicitly checked this conjecture for \(J=2\). Let us now discuss the appearance of other UIRs than the principal series. If one has control over the fall-offs \(\omega_{J,n}\) of the two-point function \(G_{\mathcal{O}^{(J)}}\) by tuning some parameters of the theory, then one can reach a regime where (4.41) is violated. In the process of this analytic continuation, poles or branch points of the spectral densities cross the principal series integrals in the Kallen-Lehmann decomposition, resulting in additional sums and integrals over other UIRs. Group theory results in [1] as well as the examples in section 5 suggest that additional representations contribute solely as isolated points rather than as a continuum of states, but at the moment we cannot rule out their presence as a continuum in a generic interacting QFT. In some examples in section 5, we tune \(\omega_{J,n}\) by tuning the masses in the theories we are considering and we see how, when (4.41) is violated, poles in the spectral densities cross the contour of integration over the principal series at \(\text{Im}(\Delta)=0\), so that they lead to the appearance of complementary series states. Before the continuation, these poles appear in the complex \(\Delta\) plane in symmetric pairs with respect to the \(\Delta\)-real axis. So either they are on the real line or they come in pairs when they are off the real line, c.f. figure 4.2. In the latter case, since the complementary series corresponds to real \(\Delta\), we perform the analytic continuation in \(\omega_{J,n}\) by first decreasing its imaginary part; in the examples in section 5, the complex conjugate pairs of poles then merge on the real line, where they meet a simple zero. Then, one of them moves towards the contour and ultimately crosses it, introducing a complementary series contribution in the Kallen-Lehmann decomposition, while the other typically moves in the opposite direction. Let us finally remark that the boundary of the strip of analyticity mentioned above is not necessarily saturated by poles. In other words, (4.44) is just the _minimum_ region of analyticity of \(\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\). Moreover, for a fixed \(J\), the thinnest strip is for \(\ell=0\). In this case the strip of analyticity disappears when \(\underset{n}{\text{min}}\ \text{Re}\left(\omega_{J,n}\right)=\frac{d}{2}+J\), which is exactly in agreement with the completeness condition.

### Boundary operator expansion

In this section we assume the following about the spectral densities \(\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\) of a two-point function:

1. Meromorphicity in \(\lambda\).
2. Growth that is at most exponential in the limit \(\text{Im}(\lambda)\to-\infty\).
3. Presence of zeroes at \(\lambda=-in\) for \(n\in\mathbb{N}\).
Then, we can show that the spinning operator \(\mathcal{O}^{(J)}\) appearing in the two-point function can be expanded around the late time surface in terms of boundary operators. These boundary operators transform as primaries and descendants under the \(d\)-dimensional Euclidean conformal group. They will in general have complex scaling dimensions, and as such, the putative Euclidean CFT on the boundary that they define is non-unitary. The discussion in this section is analogous to what was argued in [6] for the scalar Kallen-Lehmann decomposition; we just generalize it to higher spins. We do not claim these are necessary conditions, but they are sufficient. Some of these conditions might be relaxed while maintaining the existence of the Boundary Operator Expansion, but all of them are satisfied by the spectral densities in the examples we studied in section 5. Let us start from the following identity, which should be understood with the \(i\epsilon\) prescription (4.2) \[G_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})=\Gamma(\pm i\lambda)\Omega_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})\,. \tag{4.45}\] At the same time, harmonic functions can be expressed in terms of EAdS bulk-to-bulk propagators [34] \[\Omega_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})=\frac{i\lambda}{2\pi}\left(\Pi_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})-\Pi_{-\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})\right)\,. \tag{4.46}\] We stress that these are just functional relations and that we are still in de Sitter space. Using these relations we can write the principal series contributions to the Kallen-Lehmann decomposition (3.31) as \[G_{\mathcal{O}^{(J)}}=\sum_{\ell=0}^{J}\int_{\mathbb{R}}d\lambda\ \rho_{\mathcal{O}^{(J)}}^{\mathcal{P},\ell}(\lambda)\frac{i\lambda}{\pi}\Gamma(\pm i\lambda)((W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2}))^{J-\ell}\Pi_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})\,, \tag{4.47}\] where we are omitting the arguments of \(G_{\mathcal{O}^{(J)}}\) to avoid clutter. This representation is convenient for our purposes because the \(\Pi_{\lambda,\ell}\) become bulk-to-boundary propagators when we send one of their coordinates to the boundary [34] \[\Pi_{\lambda,\ell}(Y,-P/\eta;W,-Z/\eta)\underset{\eta\to 0^{-}}{\approx}(-\eta)^{\Delta-\ell}\Pi_{\lambda,\ell}(Y,P;W,Z)+O(\eta^{\Delta-\ell+1}) \tag{4.48}\] where the explicit expression of the bulk-to-boundary propagator is given in appendix F, \(\Delta\equiv\frac{d}{2}+i\lambda\) as usual, and \(P\) and \(Z\) are the embedding space realization of boundary vectors; we introduced them in section 2.2.3. Moreover, by using the recursion relations in [34], it is possible to show that the bulk-to-bulk propagators \(\Pi_{\lambda,\ell}\) have the following large \(\text{Re}(\Delta)\) behavior \[\begin{split}\Pi_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})\underset{\text{Re}(\Delta)\to\infty}{\sim}\sum_{n=0}^{\ell}&c_{n}(\sigma)\frac{2^{\Delta}\Delta^{\frac{d}{2}-1}}{(1-\sigma)^{\Delta}}\left[1+\sqrt{\frac{\sigma+1}{\sigma-1}}\right]^{-2\Delta}\\ &\times(W_{1}\cdot W_{2})^{\ell-n}((Y_{1}\cdot W_{2})(Y_{2}\cdot W_{1}))^{n}\,,\end{split} \tag{4.49}\] for some coefficients \(c_{n}(\sigma)\) which are independent of \(\Delta\). Now consider the fact that taking one of the time coordinates to late times \(\eta\to 0^{-}\) corresponds to \(|\sigma|\to\infty\) (c.f. the planar coordinates (2.22)).
Assuming the spectral densities satisfy the properties which we have listed at the beginning of this section, we can consider the two sides of (4.47) at some fixed \(0<|\sigma|^{-1}\ll 1\) and close the contour of integration in the lower half of the complex \(\lambda\) plane; the contribution from the arc at infinity will vanish22. Spurious poles will give contributions that cancel with each other as discussed in section 4.2.1. The poles at \(\lambda=-in\) of the gamma function appearing in (4.47) are canceled by the zeroes in the spectral density. We are thus left with the contributions of the non-spurious (let us call them physical) poles of the spectral densities \[G_{\mathcal{O}^{(J)}}=2\sum_{\ell=0}^{J}\sum_{\lambda_{*}}\underset{\lambda=\lambda_{*}}{\text{Res}}\left[\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\right]\lambda_{*}\Gamma(\pm i\lambda_{*})((W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2}))^{J-\ell}\Pi_{\lambda_{*},\ell}(Y_{1},Y_{2};W_{1},W_{2})\,. \tag{4.50}\] Now, we take \(Y_{2}\) to a point \(P\) on the late time boundary and \(W\) to a null vector \(Z\) such that \(P\cdot Z=0\), as in (4.48). On the right hand side of (4.50), we obtain a sum of bulk-to-boundary propagators and their derivatives. By comparison with the left hand side, this suggests that a spin \(J\) operator \(\mathcal{O}^{(J)}(Y,W)\) in de Sitter satisfies the following late time expansion in terms of boundary operators \[\mathcal{O}^{(J)}(-P/\eta,-Z/\eta)\underset{\eta\to 0^{-}}{\approx}\sum_{\ell=0}^{J}\sum_{\Delta_{*}}c_{\mathcal{O}^{(J)}O^{(\ell)}_{\Delta_{*}}}(-\eta)^{\Delta_{*}-\ell}(Z\cdot\partial_{P})^{J-\ell}O^{(\ell)}_{\Delta_{*}}(P,Z)+\cdots\,, \tag{4.51}\] where \(O^{(\ell)}_{\Delta_{*}}(P,Z)\) are boundary CFT primaries of spin \(\ell\) and we call \(c_{\mathcal{O}^{(J)}O^{(\ell)}_{\Delta_{*}}}\) the Boundary Operator Expansion (BOE) coefficients. The dots stand for descendants, and \(\Delta_{*}\equiv\frac{d}{2}+i\lambda_{*}\) with \(\lambda_{*}\) being the position of the physical poles of the spectral density \(\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\). By comparing with (4.50), we relate the BOE coefficients to the residues of the spectral density \[c_{\mathcal{O}^{(J)}O^{(\ell)}_{\Delta_{*}}}=2\underset{\lambda=\lambda_{*}}{\text{Res}}\left[\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)\right]\lambda_{*}\Gamma(\pm i\lambda_{*})\,. \tag{4.52}\] Let us stress that the existence of this BOE is dependent on the assumptions we stated at the beginning of this section. It would be interesting to understand what its convergence properties are and whether any of our assumptions can be relaxed while maintaining its validity. We leave these questions for future work. In the examples in section 5, where these assumptions are verified, we will draw precise connections between the poles of the spectral densities we will be studying and the associated boundary operators. When extra representations other than the principal series appear in our examples, their contributions are canceled when closing the contour of integration and landing on the sum in (4.50). In practice this means that the BOE, once derived by closing the contour of integration over the principal series, can be trusted even if we continue the two-point function beyond the regime in which it decomposes in principal series representations only. If more representations than the principal series appeared in the Kallen-Lehmann decomposition, then we would expect to find boundary operators with \(\text{Re}(\Delta)<\frac{d}{2}\).
Let us also note that, if this BOE exists, then the bulk two-point function of \(\mathcal{O}^{(J)}\) has to have a power law decay at late times, justifying the discussion in section 4.3. Moreover, given (4.51), the power of this decay corresponds to the conformal dimension of the lowest lying primary in the BOE of \(\mathcal{O}^{(J)}\). Finally, an important open question is whether the same BOE (4.51) of a bulk local operator can be used inside different correlation functions.

### Inversion formula in dS\({}_{2}\)

In this section, we will derive an inversion formula to extract the spectral densities in the dS\({}_{2}\) Kallen-Lehmann decomposition (3.56). For simplicity, we first assume that the spectral density associated with the complementary series is vanishing, and we will later discuss under what conditions such an assumption is valid. In general dimensions, the tensor structure of \(G_{\mathcal{O}^{(J)}}(Y_{1},Y_{2};W_{1},W_{2})\) has two building blocks, namely \(W_{1}\cdot W_{2}\) and \((Y_{1}\cdot W_{2})(Y_{2}\cdot W_{1})\). In dS\({}_{2}\), because of the relations in eq. (2.33), \(G_{\mathcal{O}^{(J)}}\) is actually a scalar function of \(\sigma=Y_{1}\cdot Y_{2}\), multiplied by \((W_{1}\cdot W_{2})^{J}\), and the scalar function depends on whether \(W_{1}\) and \(W_{2}\) have the same chirality. Without loss of generality, fixing \(W_{1}=W_{1}^{+}\), \(G_{\mathcal{O}^{(J)}}\) is encoded in two scalar functions \(G_{\mathcal{O}^{(J)}}^{\pm}(\sigma)\), defined by \[G_{\mathcal{O}^{(J)}}(Y_{1},Y_{2};W_{1}^{+},W_{2}^{\pm})=(W_{1}^{+}\cdot W_{2}^{\pm})^{J}\,G_{\mathcal{O}^{(J)}}^{\pm}(\sigma). \tag{4.53}\] Plugging it into (3.56), we should have \[(W_{1}^{+}\cdot W_{2}^{\pm})^{J}\,G_{\mathcal{O}^{(J)}}^{\pm}(\sigma) =\int_{\mathbb{R}}d\lambda\,\rho_{\mathcal{O}^{(J)}}^{\mathcal{P},0}(\lambda)(W_{1}^{+}\cdot\nabla_{1})^{J}(W_{2}^{\pm}\cdot\nabla_{2})^{J}G_{\lambda,0}(Y_{1},Y_{2})\] \[+\int_{\mathbb{R}}d\lambda\,\rho_{\mathcal{O}^{(J)}}^{\mathcal{P},1}(\lambda)(W_{1}^{+}\cdot\nabla_{1})^{J-1}(W_{2}^{\pm}\cdot\nabla_{2})^{J-1}G_{\lambda,1}(Y_{1},Y_{2};W_{1}^{+},W_{2}^{\pm})\] \[+\sum_{p=0}^{J}\rho_{\mathcal{O}^{(J)}}^{\mathcal{D}_{p}}\left(W_{1}^{+}\cdot\nabla_{1}\right)^{J}\left(W_{2}^{\pm}\cdot\nabla_{2}\right)^{J}G_{-i(p-\frac{1}{2})}(Y_{1},Y_{2}). \tag{4.54}\] The next task is to reduce the tensor structure on the R.H.S. For the first line, it is actually solved in appendix A, c.f. eq. (A.22) and eq. (A.26) \[(W_{1}^{+}\cdot\nabla_{1})^{J}(W_{2}^{\pm}\cdot\nabla_{2})^{J}G_{\lambda,0}(Y_{1},Y_{2})=(W_{1}^{+}\cdot W_{2}^{\pm})^{J}\,\phi_{\lambda,J}^{\pm}(\sigma)\, \tag{4.55}\] where \[\phi_{\lambda,J}^{\pm}(\sigma)\equiv\partial_{\sigma}^{J}((\sigma\pm 1)^{J}\partial_{\sigma}^{J})G_{\lambda,0}(\sigma),\ \ G_{\lambda,0}(\sigma)=\frac{\Gamma(\frac{1}{2}\pm i\lambda)}{4\pi}F\left(\frac{1}{2}+i\lambda,\frac{1}{2}-i\lambda,1,\frac{1+\sigma}{2}\right). \tag{4.56}\] For the second line, using the definition of \(G_{\lambda,1}\) given by eq. (A.17), we have \[\left(\frac{1}{4}+\lambda^{2}\right)G_{\lambda,1}(Y_{1},Y_{2};W_{1}^{+},W_{2}^{\pm})=\pm(W_{1}^{+}\cdot\nabla_{1})(W_{2}^{\pm}\cdot\nabla_{2})G_{\lambda,0}(\sigma). \tag{4.57}\] So it is equivalent to the first line. The reduction of the third line is given by eq. (A.27) and eq. (A.24).
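As a quick sanity check on \(G_{\lambda,0}\) in eq. (4.56): it should solve the dS\({}_{2}\) Klein-Gordon equation \((1-\sigma^{2})G''-2\sigma G'=\left(\frac{1}{4}+\lambda^{2}\right)G\), which is the \(\sigma\)-space form of the Laplacian acting on an \(SO(2,1)\)-invariant function (our convention for this rewriting). A short numerical verification:

```python
from mpmath import mp, mpf, gamma, hyp2f1, diff, pi

mp.dps = 25
lam = mpf('0.6')   # sample principal series value (ours)

def G(sig):
    """G_{lambda,0}(sigma) of eq. (4.56)."""
    a, b = mpf(1)/2 + 1j*lam, mpf(1)/2 - 1j*lam
    return gamma(a) * gamma(b) / (4*pi) * hyp2f1(a, b, 1, (1 + sig)/2)

sig = mpf(-3)      # spacelike separation
residual = ((1 - sig**2) * diff(G, sig, 2) - 2*sig*diff(G, sig)
            - (mpf(1)/4 + lam**2) * G(sig))
print(abs(residual))   # ~0, within numerical-differentiation accuracy
```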
Altogether, the spin \(J\) Kallen-Lehmann decomposition (3.56) is equivalent to the following two scalar equations: \[G_{\mathcal{O}^{(J)}}^{+}(\sigma)=\int_{\mathbb{R}}\,d\lambda\,\rho_{\mathcal{O}^{(J)}}^{\mathcal{P},+}(\lambda)\phi_{\lambda,J}^{+}(\sigma)+\sum_{p=0}^{J}\rho_{\mathcal{O}^{(J)}}^{\mathcal{D}_{p}}\psi_{p,J}(\sigma),\ \ \ \rho_{\mathcal{O}^{(J)}}^{\mathcal{P},+}=\rho_{\mathcal{O}^{(J)}}^{\mathcal{P},0}+\frac{1}{\frac{1}{4}+\lambda^{2}}\rho_{\mathcal{O}^{(J)}}^{\mathcal{P},1}\, \tag{4.58}\] and \[G^{-}_{\mathcal{O}^{(J)}}(\sigma)=\int_{\mathbb{R}}\,d\lambda\,\rho^{\mathcal{P},-}_{\mathcal{O}^{(J)}}(\lambda)\phi^{-}_{\lambda,J}(\sigma),\qquad\rho^{\mathcal{P},-}_{\mathcal{O}^{(J)}}=\rho^{\mathcal{P},0}_{\mathcal{O}^{(J)}}-\frac{1}{\frac{1}{4}+\lambda^{2}}\rho^{\mathcal{P},1}_{\mathcal{O}^{(J)}}\, \tag{4.59}\] where \(\psi_{p,J}(\sigma)\) is defined in eq. (108). To invert these two equations, we introduce \(J\)-dependent inner products for real functions defined on \((-\infty,-1)\): \[(f,g)^{\pm}_{J}=\int_{-\infty}^{-1}d\sigma(\sigma\mp 1)^{2J}f(\sigma)g(\sigma). \tag{4.60}\] In appendix D, we show that \(\{\phi^{+}_{\lambda,J}\}\cup\{\psi_{p,J}\}\) is an orthogonal basis with respect to \((\,,\,)^{+}_{J}\), and \(\{\phi^{-}_{\lambda,J}\}\) is an orthogonal basis with respect to \((\,,\,)^{-}_{J}\). Using the orthogonality relations, cf. eq. (109), (110) and (111), we obtain the following **inversion formulae** for \(\mathrm{dS}_{2}\) \[\rho^{\mathcal{P},\pm}_{\mathcal{O}^{(J)}}(\lambda)=\frac{4\lambda\sinh(2\pi\lambda)}{(\frac{1}{2}+i\lambda)^{2}_{J}(\frac{1}{2}-i\lambda)^{2}_{J}}\int_{-\infty}^{-1}d\sigma(\sigma\mp 1)^{2J}G^{\pm}_{\mathcal{O}^{(J)}}(\sigma)\phi^{\pm}_{\lambda,J}(\sigma)\qquad\rho^{\mathcal{D}_{p}}_{\mathcal{O}^{(J)}}=\frac{8\pi^{2}\,(2p-1)}{\Gamma(J+p)^{2}\Gamma(1+J-p)^{2}}\int_{-\infty}^{-1}d\sigma(\sigma-1)^{2J}G^{+}_{\mathcal{O}^{(J)}}(\sigma)\psi_{p,J}(\sigma) \tag{4.61}\] and \(\rho^{\mathcal{P},0}_{\mathcal{O}^{(J)}}(\lambda),\rho^{\mathcal{P},1}_{\mathcal{O}^{(J)}}(\lambda)\) can be recovered by taking linear combinations of \(\rho^{\mathcal{P},\pm}_{\mathcal{O}^{(J)}}(\lambda)\). The expansions (4.58) and (4.59) are valid and unique when \(G^{\pm}_{\mathcal{O}^{(J)}}\) is integrable with respect to \((\,,\,)^{\pm}_{J}\). Equivalently, the complementary series does not contribute to the two-point function of \(\mathcal{O}^{(J)}\) if \(G^{\pm}_{\mathcal{O}^{(J)}}(\sigma)\) decays faster than \((-\sigma)^{-J-\frac{1}{2}}\) at large \(-\sigma\).

## 5 Applications

In this section, we apply the inversion formulae (4.18) and (4.61) to compute the Kallen-Lehmann decomposition of a variety of two-point functions. We study two-point functions of composite operators in free theories, primary operators in de Sitter Conformal Field Theories and composite operators in a weakly coupled Quantum Field Theory. In the free theory case, studying which terms appear in the Kallen-Lehmann decomposition informs us about the decomposition of tensor products of UIRs of the de Sitter group. In the CFT case, we study how \(SO(d+1,2)\) UIRs decompose into \(SO(d+1,1)\) UIRs. In the weakly coupled case, we use the Kallen-Lehmann representation to compute anomalous dimensions of boundary operators. Throughout, we compare the decomposition in \(d>1\) with the one in \(d=1\), where the discrete series states contribute up to \(\Delta=J\) for spin \(J\) two-point functions. The two exceptional series never appear in our examples.
### Free QFTs One of the uses of the Kallen-Lehmann decomposition is to study the decomposition of multi-particle states into single particle UIRs. By studying the contents of the Kallen-Lehmann decomposition of a two-point function of a composite operator made of products of elementary fields, we infer the complete set of UIRs that is generated by the action of that operator on the Bunch-Davies vacuum. As shown in [1], the totality of UIRs that appears in such a decomposition is almost exclusively composed of the principal series, except for a few isolated complementary series states that we recover by analytic continuation. In this section, to avoid clutter, we will write \[\langle\mathcal{O}(Y_{1})\mathcal{O}(Y_{2})\rangle\equiv\langle\Omega|\mathcal{ O}(Y_{1})\mathcal{O}(Y_{2})|\Omega\rangle\,. \tag{5.1}\] #### 5.1.1 Spin 0 Examples Let us start with the simplest possible case: the two-point function of a free elementary massive scalar field \(\phi\) with \(\Delta_{\phi}=\frac{d}{2}+i\lambda_{\phi}\) in the principal series \[\langle\phi(Y_{1})\phi(Y_{2})\rangle=G_{\lambda_{\phi},0}(Y_{1},Y_{2})\,. \tag{5.2}\] The Kallen-Lehmann decomposition of this two-point function should read \[\langle\phi(Y_{1})\phi(Y_{2})\rangle=\int_{\mathbb{R}}d\lambda\ \rho_{\phi}^{ \mathcal{P},0}(\lambda)G_{\lambda,0}(Y_{1},Y_{2})\,. \tag{5.3}\] It is then immediate to see that, necessarily, \[\begin{split}\rho_{\phi}^{\mathcal{P},0}(\lambda)&= \frac{1}{2}(\delta(\lambda+\lambda_{\phi})+\delta(\lambda-\lambda_{\phi}))\\ &=\lim_{\epsilon\to 0}\frac{\epsilon}{2\pi(\epsilon^{2}+( \lambda^{2}-\lambda_{\phi}^{2})^{2})}\,,\end{split} \tag{5.4}\] which is a manifestly real and positive quantity. It has two poles in the lower half of the complex \(\lambda\) plane, signaling the presence of two primary boundary operators in the BOE of \(\phi\) \[\phi(-P/\eta)\underset{\eta\to 0^{-}}{\approx}(-\eta)^{\Delta_{\phi}} \mathcal{O}(P)+(-\eta)^{\widetilde{\Delta}_{\phi}}\widetilde{\mathcal{O}}(P)+\dots \tag{5.5}\] where \(\mathcal{O}(P)\) and \(\widetilde{\mathcal{O}}(P)\) are CFT\({}_{d}\) primaries with scaling dimensions \(\Delta_{\phi}\) and \(d-\Delta_{\phi}\) respectively, and the dots stand for descendants. The fact that the spectral density is a delta function in this case makes sense, since a free field already falls into a single particle UIR. To see more interesting features, like a decomposition into a continuum of states, one has to instead consider two-point functions of composite operators, such as \(\phi_{1}\phi_{2}(Y)\) in a free theory of two massive scalars \[\langle\phi_{1}\phi_{2}(Y_{1})\phi_{1}\phi_{2}(Y_{2})\rangle=\langle\phi_{1}( Y_{1})\phi_{1}(Y_{2})\rangle\langle\phi_{2}(Y_{1})\phi_{2}(Y_{2})\rangle=G_{ \lambda_{1}}(Y_{1},Y_{2})G_{\lambda_{2}}(Y_{1},Y_{2}) \tag{5.6}\] where we take the two fields to have scaling dimensions \(\Delta_{1}=\frac{d}{2}+i\lambda_{1}\) and \(\Delta_{2}=\frac{d}{2}+i\lambda_{2}\) in the principal series, so \(\lambda_{1},\lambda_{2}\in\mathbb{R}\). This two-point function is free of antipodal singularities and decays at large distances as \[|G_{\phi_{1}\phi_{2}}(\sigma)|\underset{\sigma\to-\infty}{\sim}|\sigma|^{-d} \,,\qquad\sigma\equiv Y_{1}\cdot Y_{2} \tag{5.7}\] Given the discussion in section 4.3, this means the Kallen-Lehmann decomposition of this two-point function will only include states in the principal series, as long as \(d>2\). 
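Before deriving the density itself, let us sketch the kind of numerical check referred to in the next paragraph. The snippet compares the product of propagators (5.6) with the spectral integral (5.13), using the closed-form density (5.12) obtained below; the normalization of \(G_{\lambda,0}\) is our assumption for the \(d\)-dimensional analogue of eq. (4.56), so this should be read as a convention-dependent sketch rather than a definitive test.

```python
import mpmath as mp

d = 3
l1, l2 = mp.mpf('0.7'), mp.mpf('1.3')   # principal series: Delta_i = d/2 + i*l_i

def G(lam, sigma):
    # Free scalar two-point function G_{lambda,0}(sigma); this normalization is
    # our assumed d-dimensional analogue of eq. (4.56).
    a, b, c = d/2 + 1j*lam, d/2 - 1j*lam, (d + 1)/2
    return (mp.gamma(a)*mp.gamma(b) / ((4*mp.pi)**c * mp.gamma(c))
            * mp.hyp2f1(a, b, c, (1 + sigma)/2))

def rho(lam):
    # Principal series density (5.12) for phi_1 phi_2.
    pref = lam*mp.sinh(mp.pi*lam) / (32 * mp.pi**(d/2 + 3) * mp.gamma(d/2)
                                     * mp.gamma(d/2 + 1j*lam) * mp.gamma(d/2 - 1j*lam))
    prod = mp.mpf(1)
    for s0 in (1, -1):
        for s1 in (1, -1):
            for s2 in (1, -1):
                prod *= mp.gamma((d/2 + 1j*(s0*lam + s1*l1 + s2*l2))/2)
    return pref*prod

sigma = mp.mpf('-3')
lhs = (G(l1, sigma) * G(l2, sigma)).real               # eq. (5.6)
# The integrand is even in lambda and G_{lambda,0}(sigma) decays exponentially
# at large real lambda for sigma < -1, so a finite cutoff suffices (slow but ok).
rhs = 2*mp.quad(lambda lam: (rho(lam)*G(lam, sigma)).real, [0, 40])
print(lhs, rhs)   # the two should agree if the conventions match
```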
We observe through numerical checks of (5.12) that even in the limit case \(d=2\) this two-point function decomposes into principal series representations only. To apply the inversion formula (4.18) to this two-point function, we analytically continue it to EAdS as discussed in section 4.1. Under this continuation, (5.6) becomes a product of two harmonic functions \[\langle\phi_{1}\phi_{2}(X_{1})\phi_{1}\phi_{2}(X_{2})\rangle=\Gamma(\pm i \lambda_{1})\Gamma(\pm i\lambda_{2})\Omega_{\lambda_{1},0}(X_{1},X_{2})\Omega_ {\lambda_{2},0}(X_{1},X_{2})\,. \tag{5.8}\] Then, the inversion formula reads \[\rho^{\mathcal{P},0}_{\phi_{1}\phi_{2}}(\lambda)=\frac{\Gamma(\pm i\lambda_{1} )\Gamma(\pm i\lambda_{2})}{\mathcal{N}_{0,0}}\int_{X_{1}}\Omega_{\lambda,0}(X _{1},X_{2})\Omega_{\lambda_{1},0}(X_{1},X_{2})\Omega_{\lambda_{2},0}(X_{1},X_{ 2})\,, \tag{5.9}\] where \(\mathcal{N}_{J,\ell}\) is defined in (4.19). To make progress, we use the split representation (F.14) on the three harmonic functions, following what was first done in [2]. Defining \(\Delta_{3}\equiv\frac{d}{2}+i\lambda\), we have \[\rho^{\mathcal{P},0}_{\phi_{1}\phi_{2}}(\lambda)=\frac{\lambda^{2}\lambda_{1}^ {2}\lambda_{2}^{2}\Gamma(\pm i\lambda_{1})\Gamma(\pm i\lambda_{2})}{\pi^{3} \mathcal{N}_{0,0}}\int_{X_{1}}\prod_{k=1}^{3}\int_{P_{1}}\Pi_{\Delta_{k},0}(X_ {1},P_{k})\Pi_{\bar{\Delta}_{k},0}(X_{2},P_{k})\,, \tag{5.10}\] where \(\Pi_{\Delta,0}(X,P)\) is a EAdS scalar bulk-to-boundary propagator, of which we report the definition in (F.15). The integral over \(X_{1}\) leads to a CFT three point function \[\rho^{\mathcal{P},0}_{\phi_{1}\phi_{2}}(\lambda)=\frac{\lambda^{2}\lambda_{1}^ {2}\lambda_{2}^{2}\Gamma(\pm i\lambda_{1})\Gamma(\pm i\lambda_{2})b(\Delta_{1},\Delta_{2},\Delta,0)}{\pi^{3}\mathcal{N}_{0,0}}\int_{P_{1},P_{2},P_{3}}\frac{ \prod_{k=1}^{3}\Pi_{\bar{\Delta}_{k},0}(X_{2},P_{k})}{(P_{12})^{\Delta_{123}} (P_{13})^{\Delta_{132}}(P_{23})^{\Delta_{231}}}\,, \tag{5.11}\] where the notation and all the coefficients are explicit in the Appendix H.1. There, we also show how to solve the remaining integrals over the three boundary points \(P_{1}\), \(P_{2}\) and \(P_{3}\). Importantly, the spectral density of every free QFT two-point function of composite operators made of two fundamental fields with spin can be reduced to linear combinations of this specific integral, so that in the spinning examples we will make extensive use of it. In Appendix H.1 we show how to eventually obtain \[\rho^{\mathcal{P},0}_{\phi_{1}\phi_{2}}(\lambda)=\frac{\lambda\sinh(\pi\lambda )}{32\pi^{\frac{d}{2}+3}\Gamma(\frac{d}{2})\Gamma(\frac{d}{2}\pm i\lambda)} \prod_{\pm,\pm,\pm}\Gamma\left(\frac{\frac{d}{2}\pm i\lambda\pm i\lambda_{1} \pm i\lambda_{2}}{2}\right)\,. \tag{5.12}\] It can be checked numerically that if \(\lambda_{1},\lambda_{2}\in\mathbb{R}\), the integral of (5.12) fully reproduces (5.6) \[\langle\phi_{1}\phi_{2}(Y_{1})\phi_{1}\phi_{2}(Y_{2})\rangle=\int_{\mathbb{R} }\mathrm{d}\lambda\ \rho^{\mathcal{P},0}_{\phi_{1}\phi_{2}}(\lambda)G_{\lambda,0}(Y_{1},Y_{2})\,, \qquad\text{if }\lambda_{1},\lambda_{2}\in\mathbb{R}\,. 
\tag{5.13}\] Analytically continuing \(\lambda_{1}\) and \(\lambda_{2}\) to the complementary series such that \(i\lambda_{1}\in(0,\frac{d}{2})\) and \(i\lambda_{2}\in(0,\frac{d}{2})\), poles of \(\rho^{\mathcal{P},0}_{\phi_{1}\phi_{2}}(\lambda)\) can cross the contour of integration over the principal series, so that their residues need to be added by hand, introducing some complementary series contributions to the Kallen-Lehmann decomposition of this two-point function. This is in agreement with what is discussed in section 4.3. By studying the gamma functions in (5.12) we see that poles cross the contour if there exists some \(n\in\mathbb{N}\) such that \[\frac{d}{2}+2n<i\lambda_{1}+i\lambda_{2}<d\,, \tag{5.14}\] where the second inequality comes from the fact that \(\lambda_{1}\) and \(\lambda_{2}\) are constrained to be on the complementary series. Let us assume more specifically that \[\frac{d}{2}+2N<i\lambda_{1}+i\lambda_{2}<\frac{d}{2}+2(N+1)\,, \tag{5.15}\] for some \(N<\frac{d}{4}\). Then, the full decomposition reads \[\langle\phi_{1}\phi_{2}(Y_{1})\phi_{1}\phi_{2}(Y_{2})\rangle=\int_{\mathbb{R}}\mathrm{d}\lambda\ \rho^{\mathcal{P},0}_{\phi_{1}\phi_{2}}(\lambda)G_{\lambda,0}(Y_{1},Y_{2})+\sum_{n=0}^{N}\rho^{\mathcal{C},0}_{\phi_{1}\phi_{2}}(n)G_{\lambda_{1}+\lambda_{2}+i\left(\frac{d}{2}+2n\right),0}(Y_{1},Y_{2}) \tag{5.16}\] where \[\rho^{\mathcal{C},0}_{\phi_{1}\phi_{2}}(n)=\frac{(-1)^{n}(\frac{d}{2})_{n}\Gamma(-n+i\lambda_{12})\Gamma(\frac{d}{2}+n-i\lambda_{12})\prod_{j=1,2}\Gamma(-n+i\lambda_{j})\Gamma(\frac{d}{2}+n-i\lambda_{j})}{4\pi^{1+\frac{d}{2}}n!\Gamma(-2n+i\lambda_{12})\Gamma(-\frac{d}{2}-2n+i\lambda_{12})\Gamma(d+2n-i\lambda_{12})\Gamma(\frac{d}{2}+2n-i\lambda_{12})} \tag{5.17}\] with \(\lambda_{12}\equiv\lambda_{1}+\lambda_{2}\). The complementary series densities are simply the residues on the poles of \(\rho^{\mathcal{P},0}_{\phi_{1}\phi_{2}}(\lambda)\) \[\rho^{\mathcal{C},0}_{\phi_{1}\phi_{2}}(n)=4\pi i\operatorname*{Res}_{\lambda=\lambda_{1}+\lambda_{2}+i\left(\frac{d}{2}+2n\right)}\rho^{\mathcal{P},0}_{\phi_{1}\phi_{2}}(\lambda)\,. \tag{5.18}\] As expected from the proof of the Kallen-Lehmann decomposition, by studying the sign of \(\rho^{\mathcal{C},0}_{\phi_{1}\phi_{2}}(n)\), it can be verified that these functions are positive as long as \(\lambda_{1}\) and \(\lambda_{2}\) are in the complementary series and lie in the interval (5.14). The appearance of this discrete sum of complementary series UIRs is in agreement with [1], see Table 1.3 there. Moreover, this sum was derived before in [59] and we checked that our \(\rho_{\phi_{1}\phi_{2}}^{\mathcal{C},0}(n)\) matches their equation (48)23. The sum over \(n\) runs up to \(N<\lfloor\frac{d}{4}\rfloor\) because only in this range is the interval \((\frac{d}{2}+2n,d)\) non-empty. Footnote 23: To verify the matching one needs to substitute \(\lambda_{1}\to-i\alpha\), \(\lambda_{2}\to-i\beta\) and \(d\to D-1\).
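As a spot check of this positivity claim, one can evaluate the residue formula (5.18) numerically, reusing the density (5.12). The parameter choice in the sketch below is our own (\(d=5\), \(i\lambda_{1}=i\lambda_{2}=\frac{3}{2}\), \(n=0\)), picked so that \(i\lambda_{1}+i\lambda_{2}\) lies inside the window (5.14).

```python
import mpmath as mp

d = 5
l1, l2 = -1.5j, -1.5j   # i*l1 = i*l2 = 3/2, so i(l1+l2) = 3 lies in (d/2, d) = (2.5, 5)

def rho(lam):
    # Principal series density (5.12), continued to complex weights.
    pref = lam*mp.sinh(mp.pi*lam) / (32 * mp.pi**(d/2 + 3) * mp.gamma(d/2)
                                     * mp.gamma(d/2 + 1j*lam) * mp.gamma(d/2 - 1j*lam))
    prod = mp.mpf(1)
    for s0 in (1, -1):
        for s1 in (1, -1):
            for s2 in (1, -1):
                prod *= mp.gamma((d/2 + 1j*(s0*lam + s1*l1 + s2*l2))/2)
    return pref*prod

n = 0
lam_star = l1 + l2 + 1j*(d/2 + 2*n)    # the pole that crossed the contour
eps = mp.mpf('1e-12')
residue = eps * rho(lam_star + eps)    # Res = lim (lam - lam_*) rho(lam)
print(4*mp.pi*1j * residue)            # rho^{C,0}(0): real and positive, up to O(eps)
```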
**Boundary Operator Expansion.** The spectral density (5.12) satisfies all the assumptions of section 4.4. From its poles in the lower half of the complex \(\lambda\) plane we can thus read off the primary operators which appear in the BOE of \(\phi_{1}\phi_{2}\), namely \[\phi_{1}\phi_{2}(-P/\eta)\underset{\eta\to 0^{-}}{\approx}\sum_{n=0}^{\infty}\left[c_{\Delta_{1}\Delta_{2}}(-\eta)^{\Delta_{1}+\Delta_{2}+2n}[\mathcal{O}_{1}\mathcal{O}_{2}]_{n}(P)+\,c_{\Delta_{1}\bar{\Delta}_{2}}(-\eta)^{\Delta_{1}+\bar{\Delta}_{2}+2n}[\mathcal{O}_{1}\widetilde{\mathcal{O}}_{2}]_{n}(P)+\cdots\right]+\cdots \tag{5.19}\] where notation like \([\mathcal{O}_{1}\mathcal{O}_{2}]_{n}\) should be understood to stand for all the boundary scalar primaries that can be constructed with \(\mathcal{O}_{1},\mathcal{O}_{2}\) and \(2n\) contracted derivatives while being symmetric under \(1\leftrightarrow 2\). The dots in the square brackets stand for contributions from primaries like \([\widetilde{\mathcal{O}}_{1}\mathcal{O}_{2}]_{n}\) and \([\widetilde{\mathcal{O}}_{1}\widetilde{\mathcal{O}}_{2}]_{n}\), while the dots outside of the brackets stand for descendants. The operators \(\mathcal{O}_{i}(P)\) are defined as the leading late time behavior of the free fields \(\phi_{1}(Y)\) and \(\phi_{2}(Y)\) \[\phi_{1,2}(-P/\eta)\underset{\eta\to 0^{-}}{\sim}(-\eta)^{\Delta_{1,2}}\mathcal{O}_{1,2}(P)+(-\eta)^{\bar{\Delta}_{1,2}}\widetilde{\mathcal{O}}_{1,2}(P)\,, \tag{5.20}\] so that \(\mathcal{O}_{1}(P)\) and \(\widetilde{\mathcal{O}}_{1}(P)\) transform as CFT scalar primaries with scaling dimensions \(\Delta_{1}=\frac{d}{2}+i\lambda_{1}\) and \(\bar{\Delta}_{1}=\frac{d}{2}-i\lambda_{1}\) respectively (and analogously \(\mathcal{O}_{2}(P)\) and \(\widetilde{\mathcal{O}}_{2}(P)\)). An extra comment: when \(\lambda_{1}\) and \(\lambda_{2}\) satisfy (5.14), the poles of (5.12) at \(\lambda=\lambda_{1}+\lambda_{2}+i(\frac{d}{2}+2n)\) can be picked up when closing the contour of integration to find the boundary operators in the late time limit. One could thus expect there to be operators on the boundary with \(\Delta=d-\Delta_{1}-\Delta_{2}-2n\). But the residues on these poles are precisely canceled by the complementary series sum in (5.16), and thus such operators do not actually appear in the bulk-boundary OPE of \(\phi_{1}\phi_{2}(Y)\).

#### 5.1.2 Spin 1 Examples

Consider the correlator \[\langle V\phi(Y_{1};W_{1})V\phi(Y_{2};W_{2})\rangle=\langle V(Y_{1};W_{1})V(Y_{2};W_{2})\rangle\langle\phi(Y_{1})\phi(Y_{2})\rangle \tag{5.21}\] in a free theory of a massive vector with \(\Delta_{V}=\frac{d}{2}+i\lambda_{V}\) and a massive scalar \(\Delta_{\phi}=\frac{d}{2}+i\lambda_{\phi}\), both on the principal series. This two-point function has two scalar components \[G_{V\phi}(Y_{1},Y_{2};W_{1},W_{2})=\mathcal{G}_{V\phi}^{(0)}(\sigma)(W_{1}\cdot W_{2})+\mathcal{G}_{V\phi}^{(1)}(\sigma)(W_{1}\cdot Y_{2})(W_{2}\cdot Y_{1})\,, \tag{5.22}\] which decay at large distances as \[|\mathcal{G}_{V\phi}^{(0)}(\sigma)|\underset{\sigma\to-\infty}{\sim}|\sigma|^{-d}\,,\qquad|\mathcal{G}_{V\phi}^{(1)}(\sigma)|\underset{\sigma\to-\infty}{\sim}|\sigma|^{-d-1}\,. \tag{5.23}\] Following the discussion in section 4.3, we can thus state that the Kallen-Lehmann decomposition of this two-point function will only include principal series contributions, as long as \(d>2\). We verified that this is true also in the limit case \(d=2\). These will be organized in two terms, related to transverse and longitudinal degrees of freedom. In Appendix H.1 we show in detail how to apply the inversion formula to this case and how to express the two spectral densities as linear combinations of (5.11).
Here we report the result \[\rho^{\mathcal{P},0}_{V\phi}(\lambda)=\frac{2^{-1}\pi^{-3-\frac{d}{2}}\lambda\sinh(\pi\lambda)}{(\Delta_{V}-1)(\bar{\Delta}_{V}-1)(d^{2}+4\lambda^{2})\Gamma(\frac{d}{2})\Gamma(\frac{d}{2}\pm i\lambda+1)}\prod_{\pm,\pm,\pm}\Gamma\left(\frac{\frac{d}{2}+1\pm i\lambda\pm i\lambda_{V}\pm i\lambda_{\phi}}{2}\right)\qquad\rho^{\mathcal{P},1}_{V\phi}(\lambda)=\frac{2^{-12}\pi^{-3-\frac{d}{2}}\lambda\sinh(\pi\lambda)f_{\lambda,\lambda_{V},\lambda_{\phi}}}{\Gamma(\frac{d+2}{2})(\Delta_{V}-1)(\bar{\Delta}_{V}-1)\Gamma(\frac{d}{2}\pm i\lambda+1)}\prod_{\pm,\pm,\pm}\Gamma\left(\frac{\frac{d}{2}\pm i\lambda\pm i\lambda_{\phi}\pm i\lambda_{V}}{2}\right)\,, \tag{5.24}\] with \[f_{\lambda,\lambda_{V},\lambda_{\phi}}=16\left(\lambda_{\phi}^{2}-(\lambda^{2}+\lambda_{V}^{2})\right)^{2}+64(d-1)\lambda^{2}\lambda_{V}^{2}+8d(3d-4)\lambda_{\phi}^{2}+8d\left(2d^{2}-5d+4\right)\left(\lambda^{2}+\lambda_{V}^{2}\right)+d^{3}\left(4d^{2}-11d+8\right)\,, \tag{5.25}\] where we see the appearance of the spurious pole predicted in section 4.2.1. When \(\phi\) and \(V\) have scaling dimensions in the principal series, the integrals over the principal series reproduce the full two-point function. We elaborate on how exactly to carry out numerical checks in section F.3. Continuing their scaling dimensions to the complementary series, \(i\lambda_{V}\in\left(0,\frac{d}{2}-1\right)\) and \(i\lambda_{\phi}\in(0,\frac{d}{2})\), instead leads to different poles of \(\rho^{\mathcal{P},0}_{V\phi}(\lambda)\) and \(\rho^{\mathcal{P},1}_{V\phi}(\lambda)\) crossing the contour of integration. This happens when the following conditions are satisfied for some integers \(N_{0}\) and \(N_{1}\): \[\rho^{\mathcal{P},0}_{V\phi}:\quad\frac{d}{2}+2N_{0}+1<i\lambda_{\phi}+i\lambda_{V}<\frac{d}{2}+2(N_{0}+1)+1\,,\qquad\rho^{\mathcal{P},1}_{V\phi}:\quad\frac{d}{2}+2N_{1}<i\lambda_{\phi}+i\lambda_{V}<\frac{d}{2}+2(N_{1}+1)\,, \tag{5.26}\] where the unitarity bounds for the complementary series impose \(N_{0}<\frac{d-4}{4}\) and \(N_{1}<\frac{d-2}{4}\,.\) The complementary series contributions to this two-point function when (5.26) are satisfied, then, are given by the sum over the residues of \(\rho^{\mathcal{P},0}_{V\phi}(\lambda)\) and \(\rho^{\mathcal{P},1}_{V\phi}(\lambda)\) on those poles. Moreover, in \(d=1\) the discrete series contributes as well. We can explicitly derive this extra contribution by analytically continuing in the dimension until \(d=1\), and keeping track of any poles that cross the contour of integration over the principal series. Specifically, what happens is that the poles at \(\lambda=\pm i\frac{d-2}{2}\) in the spin-1 free propagator (see section 4.2.1 for a discussion about these poles) cross the contour of integration. At the precise value \(d=2\) these poles at \(\lambda=\pm i\frac{d-2}{2}\) are canceled by zeroes of the form \(\lambda\sinh(\pi\lambda)\) which are present in the propagator, but when continuing all the way to \(d=1\), the poles need to be taken into account.
The complete decomposition thus reads \[\begin{split}\langle V\phi(Y_{1};W_{1})V\phi(Y_{2};W_{2})\rangle=&\sum_{\ell=0}^{1}\int_{\mathbb{R}}\mathrm{d}\lambda\;\rho_{V\phi}^{\mathcal{P},\ell}(\lambda)[(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})]^{1-\ell}G_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})\\ &+\sum_{n=0}^{N_{0}}\rho_{V\phi,n}^{\mathcal{C},0}(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})G_{\lambda_{\phi}+\lambda_{V}+i(\frac{d}{2}+2n+1),0}(Y_{1},Y_{2})\\ &+\sum_{n=0}^{N_{1}}\rho_{V\phi,n}^{\mathcal{C},1}G_{\lambda_{\phi}+\lambda_{V}+i(\frac{d}{2}+2n),1}(Y_{1},Y_{2};W_{1},W_{2})\\ &+\delta_{d,1}\rho_{V\phi}^{\mathcal{D}_{1}}\left(W_{1}\cdot\nabla_{1}\right)(W_{2}\cdot\nabla_{2})\,G_{-\frac{i}{2},0}(Y_{1},Y_{2})\,,\end{split} \tag{5.27}\] where \(\delta_{d,1}\) is a Kronecker delta, because the discrete series term only contributes in \(d=1\). We stress that the complementary series contributions appear only if (5.26) are satisfied for some \(N_{0}\) and \(N_{1}\), and are absent otherwise. The spectral densities of the complementary series contributions are, specifically, \[\rho_{V\phi,n}^{\mathcal{C},0}=4\pi i\underset{\lambda=\lambda_{\phi}+\lambda_{V}+i(\frac{d}{2}+2n+1)}{\rm Res}\rho^{\mathcal{P},0}_{V\phi}(\lambda)\,,\qquad\rho_{V\phi,n}^{\mathcal{C},1}=4\pi i\underset{\lambda=\lambda_{\phi}+\lambda_{V}+i(\frac{d}{2}+2n)}{\rm Res}\rho^{\mathcal{P},1}_{V\phi}(\lambda)\,, \tag{5.28}\] and we verify that they are positive functions for \(\lambda_{V}\) and \(\lambda_{\phi}\) in the intervals (5.26). The discrete series density is instead given by \[\rho_{V\phi}^{\mathcal{D}_{1}}=\frac{\pi(\lambda_{V}^{2}-\lambda_{\phi}^{2})}{(1+4\lambda_{V}^{2})\sinh(\pi(\lambda_{V}-\lambda_{\phi}))\sinh(\pi(\lambda_{V}+\lambda_{\phi}))}\,. \tag{5.29}\] Now let us discuss the spectrum of **boundary operators** that we can infer from this two-point function. As reviewed in [26] and discussed in section 2.2.3, bulk free vector fields have the following asymptotic behavior \[V_{i}(\eta,{\bf y})\underset{\eta\to 0^{-}}{\sim}(-\eta)^{\Delta_{V}-1}\mathcal{A}_{i}({\bf y})+(-\eta)^{\bar{\Delta}_{V}-1}\widetilde{\mathcal{A}}_{i}({\bf y})\,, \tag{5.30}\] with \(\mathcal{A}_{i}({\bf y})\) and \(\widetilde{\mathcal{A}}_{i}({\bf y})\) transforming as CFT primaries with scaling dimensions \(\Delta_{V}\) and \(\bar{\Delta}_{V}\). Using the fact that \(\nabla_{\mu}V^{\mu}=0\) we can fix the asymptotic behavior of \(V_{\eta}(\eta,{\bf y})\) \[V_{\eta}(\eta,{\bf y})\underset{\eta\to 0^{-}}{\sim}\frac{1}{\bar{\Delta}_{V}-1}(-\eta)^{\Delta_{V}}\partial\cdot\mathcal{A}({\bf y})+\frac{1}{\Delta_{V}-1}(-\eta)^{\bar{\Delta}_{V}}\partial\cdot\widetilde{\mathcal{A}}({\bf y})\,. \tag{5.31}\] We recognize the appearance of these boundary operators in the poles of the two spectral densities. Specifically, we can write the following BOE (cf. eq. (4.51)) \[V\phi(-P/\eta,-Z/\eta)\underset{\eta\to 0^{-}}{\approx}\sum_{n=0}^{\infty}\left[c_{\Delta_{\phi}\Delta_{V}}^{(0)}(-\eta)^{\Delta_{V}+\Delta_{\phi}+1+2n}(Z\cdot\partial_{P})[(D_{Z}\cdot\partial_{P})\mathcal{A}\mathcal{O}]_{n}(P,Z)+\,c_{\Delta_{\phi}\Delta_{V}}^{(1)}(-\eta)^{\Delta_{V}+\Delta_{\phi}-1+2n}[\mathcal{A}\mathcal{O}]_{n}(P,Z)+\cdots\right]+\cdots\,, \tag{5.32}\] where we see both spin \(0\) and spin \(1\) boundary operators appearing and the dots stand for double trace primaries like \([\widetilde{\mathcal{A}}\mathcal{O}]_{n}(P,Z)\) and descendants.
The boundary operator \(\mathcal{O}\) is defined through the late time limit of the free field \[\phi(-P/\eta)\underset{\eta\to 0^{-}}{\approx}(-\eta)^{\Delta_{\phi}}\mathcal{O}(P)+(-\eta)^{\bar{\Delta}_{\phi}}\widetilde{\mathcal{O}}(P)\,. \tag{5.33}\] Finally, we verify that the contributions from the spurious poles at \(\lambda=-i\frac{d}{2}\) and \(\lambda=-i\frac{d-2}{2}\) in (5.24) exactly cancel when closing the contour of integration due to the identities in 4.2.1. That means they are not associated with any boundary operator. We also computed the decomposition of the correlator \[\langle\phi_{1}\nabla\phi_{2}(Y_{1};W_{1})\phi_{1}\nabla\phi_{2}(Y_{2};W_{2})\rangle=\langle\phi_{1}(Y_{1})\phi_{1}(Y_{2})\rangle(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})\langle\phi_{2}(Y_{1})\phi_{2}(Y_{2})\rangle\,. \tag{5.34}\] When \(\lambda_{1},\lambda_{2}\in\mathbb{R}\), the following principal series spectral densities account for the full Kallen-Lehmann decomposition of this two-point function (see appendix H.1 for more details) \[\begin{split}\rho^{\mathcal{P},0}_{\phi_{1}\nabla\phi_{2}}(\lambda)&=\frac{(d^{2}+4(\lambda^{2}-\lambda_{1}^{2}+\lambda_{2}^{2}))^{2}\Gamma(\frac{d+1}{2})\lambda\sinh(\pi\lambda)}{2^{10-d}\pi^{\frac{d+2}{2}}(d^{2}+4\lambda^{2})\Gamma(d)\Gamma(\frac{d}{2}+1\pm i\lambda)}\prod_{\pm,\pm,\pm}\Gamma\left(\frac{\frac{d}{2}\pm i\lambda\pm i\lambda_{1}\pm i\lambda_{2}}{2}\right)\\ \rho^{\mathcal{P},1}_{\phi_{1}\nabla\phi_{2}}(\lambda)&=\frac{\lambda\sinh(\pi\lambda)}{2^{4}\pi^{3+\frac{d}{2}}\Gamma(\frac{d+2}{2})\Gamma(\frac{d}{2}+1\pm i\lambda)}\prod_{\pm,\pm,\pm}\Gamma\left(\frac{\frac{d}{2}+1\pm i\lambda\pm i\lambda_{1}\pm i\lambda_{2}}{2}\right)\end{split} \tag{5.35}\] When analytically continuing the conformal weights of \(\phi_{1}\) and \(\phi_{2}\) to the complementary series \(i\lambda_{1}\in(0,\frac{d}{2})\) and \(i\lambda_{2}\in(0,\frac{d}{2})\), poles of \(\rho^{\mathcal{P},0}_{\phi_{1}\nabla\phi_{2}}(\lambda)\) and \(\rho^{\mathcal{P},1}_{\phi_{1}\nabla\phi_{2}}(\lambda)\) cross the integration contour when the following conditions are satisfied for some \(N_{0}<\frac{d-2}{4}\) and \(N_{1}<\frac{d-4}{4}\) \[\begin{split}\rho^{\mathcal{P},0}_{\phi_{1}\nabla\phi_{2}}:&\quad\frac{d}{2}+2N_{0}<i\lambda_{1}+i\lambda_{2}<\frac{d}{2}+2(N_{0}+1)\,,\\ \rho^{\mathcal{P},1}_{\phi_{1}\nabla\phi_{2}}:&\quad\frac{d}{2}+2N_{1}+1<i\lambda_{1}+i\lambda_{2}<\frac{d}{2}+2(N_{1}+1)+1\,.\end{split} \tag{5.36}\] Notice that these are slightly different poles than those of the \(V\phi\) example, eq. (5.26). Moreover, in \(d=1\) a discrete series state appears, corresponding to a massless scalar with \(\Delta=1\). The full decomposition reads \[\begin{split}\langle\phi_{1}\nabla\phi_{2}(Y_{1};W_{1})\phi_{1}\nabla\phi_{2}(Y_{2};W_{2})\rangle=&\sum_{\ell=0}^{1}\int_{\mathbb{R}}\mathrm{d}\lambda\ \rho^{\mathcal{P},\ell}_{\phi_{1}\nabla\phi_{2}}(\lambda)[(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})]^{1-\ell}G_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})\\ &+\sum_{n=0}^{N_{0}}\rho^{\mathcal{C},0}_{\phi_{1}\nabla\phi_{2},n}(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})G_{\lambda_{1}+\lambda_{2}+i(\frac{d}{2}+2n),0}(Y_{1},Y_{2})\\ &+\sum_{n=0}^{N_{1}}\rho^{\mathcal{C},1}_{\phi_{1}\nabla\phi_{2},n}G_{\lambda_{1}+\lambda_{2}+i(\frac{d}{2}+2n+1),1}(Y_{1},Y_{2};W_{1},W_{2})\\ &+\delta_{d,1}\rho^{\mathcal{D}_{1}}_{\phi_{1}\nabla\phi_{2}}\left(W_{1}\cdot\nabla_{1}\right)(W_{2}\cdot\nabla_{2})\,G_{-\frac{i}{2},0}(Y_{1},Y_{2})\,.\end{split}
\tag{5.37}\] The complementary series densities are once again positive functions, given by the residues of the principal series densities on the poles that cross the contour \[\rho^{\mathcal{C},0}_{\phi_{1}\nabla\phi_{2},n}=4\pi i\underset{\lambda=\lambda_{1}+\lambda_{2}+i(\frac{d}{2}+2n)}{\rm Res}\rho^{\mathcal{P},0}_{\phi_{1}\nabla\phi_{2}}(\lambda)\,,\qquad\rho^{\mathcal{C},1}_{\phi_{1}\nabla\phi_{2},n}=4\pi i\underset{\lambda=\lambda_{1}+\lambda_{2}+i(\frac{d}{2}+2n+1)}{\rm Res}\rho^{\mathcal{P},1}_{\phi_{1}\nabla\phi_{2}}(\lambda)\,, \tag{5.38}\] and their contribution is instead absent when \(\lambda_{1}\) and \(\lambda_{2}\) are real, or imaginary but outside of the intervals (5.36). The discrete series density is again obtainable by analytically continuing in the spacetime dimension and adding the residue on the pole that crosses the contour of integration \[\rho^{\mathcal{D}_{1}}_{\phi_{1}\nabla\phi_{2}}=\frac{\pi(\lambda_{1}^{2}-\lambda_{2}^{2})}{4\sinh(\pi(\lambda_{1}-\lambda_{2}))\sinh(\pi(\lambda_{1}+\lambda_{2}))}\,. \tag{5.39}\] The difference in the pole structure of (5.35) compared to (5.24) is explained when we consider the boundary operators appearing in the BOE of \(\phi_{1}\nabla\phi_{2}(Y)\) \[\begin{split}\phi_{1}(Z\cdot\partial_{P})\phi_{2}(-P/\eta)\underset{\eta\to 0^{-}}{\approx}&\sum_{n=0}^{\infty}\left[c^{(0)}_{\Delta_{1}\Delta_{2}}(-\eta)^{\Delta_{1}+\Delta_{2}+2n}(Z\cdot\partial_{P})[\mathcal{O}_{1}\mathcal{O}_{2}]_{n}(P)\right.\\ &\left.+\,c^{(1)}_{\Delta_{1}\Delta_{2}}(-\eta)^{\Delta_{1}+\Delta_{2}+2n}[\mathcal{O}_{1}(Z\cdot\partial_{P})\mathcal{O}_{2}]_{n}(P)\right]\end{split} \tag{5.40}\] To form a boundary scalar, in fact, \(V\phi\) needs the action of a derivative, such that the scalar boundary operators with the lowest scaling dimension have \(\Delta=\Delta_{V}+\Delta_{\phi}+1\). The operator \(\phi_{1}\nabla\phi_{2}\), instead, can form a boundary scalar operator without the use of derivatives and with scaling dimension \(\Delta=\Delta_{1}+\Delta_{2}\). Vice versa for the boundary vector operators. Finally, we verify that the contributions of the spurious poles exactly cancel also in this case, because of the identities in 4.2.1.

### Conformal Field Theories

We have shown some examples of Kallen-Lehmann decompositions of two-point functions of composite operators in free QFTs. In this section, we use the Kallen-Lehmann decomposition to study how states generated by the action of bulk CFT primaries on the Euclidean vacuum decompose into UIRs of the de Sitter group. That corresponds to decomposing irreps of \(SO(d+1,2)\) into irreps of \(SO(d+1,1)\,.\) We test examples up to spin 2 and recover the fact that for general \(d>1\), CFT states decompose into principal series states and complementary series states, while for \(d=1\) discrete series states appear up to \(\Delta=J\), as in the free theory case. We verify the validity of our results by comparing their flat space limit as described in section 3.3 to the results presented in [47]. **Spinning CFT two-point functions in de Sitter.** To start, let us review the form of CFT two-point functions of traceless symmetric primary operators with spin in the bulk of de Sitter.
The relevant group of symmetries of these correlators is \(SO(d+1,2)\). We thus embed the \(d+1\) dimensional de Sitter CFT in \(\mathbb{R}^{d+1,2}\) with metric \(\eta=\text{diag}(-1,1,\dots,1,-1)\). We denote points in this embedding space \(\mathcal{Y}\in\mathbb{R}^{d+1,2}\), and the invariance under \(SO(d+1,2)\) is enforced by \(\mathcal{Y}^{2}=0\). Explicitly, \[\mathcal{Y}^{2}=Y^{2}-(\mathcal{Y}^{d+2})^{2}=0\,, \tag{5.41}\] where \(Y\in\mathbb{R}^{d+1,1}\). The de Sitter hyperboloid constraint is enforced by \((\mathcal{Y}^{d+2})^{2}=1\). That is the section of the lightcone in \(\mathbb{R}^{d+1,2}\) on which we will focus. We also embed the polarization tensors as \(\mathcal{Z}=(W,0)\), so that \(\mathcal{Y}_{1}\cdot\mathcal{Z}_{2}=Y_{1}\cdot W_{2}\,.\) In [35] it is shown that the CFT two-point function of a spin \(J\) primary operator of conformal dimension \(\mathbf{\Delta}\) is, in embedding space, \[\langle\mathcal{O}^{(J)}(\mathcal{Y}_{1},\mathcal{Z}_{1})\mathcal{O}^{(J)}(\mathcal{Y}_{2},\mathcal{Z}_{2})\rangle=c_{\mathcal{O}^{(J)}}\frac{\left(-2[(\mathcal{Z}_{1}\cdot\mathcal{Z}_{2})(\mathcal{Y}_{1}\cdot\mathcal{Y}_{2})-(\mathcal{Y}_{1}\cdot\mathcal{Z}_{2})(\mathcal{Y}_{2}\cdot\mathcal{Z}_{1})]\right)^{J}}{(-2\mathcal{Y}_{1}\cdot\mathcal{Y}_{2})^{\mathbf{\Delta}+J}}\,. \tag{5.42}\] Projecting to de Sitter by using the lightcone constraint \[\mathcal{Y}_{1}\cdot\mathcal{Y}_{2}=Y_{1}\cdot Y_{2}-1\,, \tag{5.43}\] we obtain the general form of a spin \(J\) CFT two-point function in de Sitter expressed in the embedding space formalism \[\langle\mathcal{O}^{(J)}(Y_{1},W_{1})\mathcal{O}^{(J)}(Y_{2},W_{2})\rangle=c_{\mathcal{O}^{(J)}}\frac{\left[(W_{1}\cdot W_{2})(1-Y_{1}\cdot Y_{2})+(Y_{1}\cdot W_{2})(Y_{2}\cdot W_{1})\right]^{J}}{2^{\mathbf{\Delta}}(1-Y_{1}\cdot Y_{2})^{\mathbf{\Delta}+J}}\,. \tag{5.44}\]

#### 5.2.1 Spin 0 Example

Let us start by reviewing the Kallen-Lehmann decomposition of the CFT two-point function of a scalar primary operator \(\mathcal{O}\) of conformal dimension \(\mathbf{\Delta}\) in de Sitter, which was computed before in [6]. The two-point function has the form \[\langle\mathcal{O}(Y_{1})\mathcal{O}(Y_{2})\rangle=\frac{c_{\mathcal{O}}}{2^{\mathbf{\Delta}}(1-Y_{1}\cdot Y_{2})^{\mathbf{\Delta}}}\,. \tag{5.45}\] It has been argued in section 3 that only scalar principal series and complementary series representations can contribute to such a two-point function. In addition, using the criterion found in section 4.3, we know that the complementary series is also absent when \(\mathbf{\Delta}>\frac{d}{2}\). In this case, the inversion formula for the principal series contribution reads \[\rho_{\mathcal{O}}^{\mathcal{P},0}(\lambda)=\frac{c_{\mathcal{O}}}{2^{\mathbf{\Delta}}\mathcal{N}_{0,0}}\int_{X_{1}}\Omega_{\lambda,0}(X_{1},X_{2})(1-X_{1}\cdot X_{2})^{-\mathbf{\Delta}}\,. \tag{5.46}\] This integral can be solved by using the Mellin-Barnes representation of the hypergeometric function and Barnes' first lemma, as explained in Appendix H.2. We obtain \[\rho_{\mathcal{O}}^{\mathcal{P},0}(\lambda)=c_{\mathcal{O}}\frac{2^{1+d-2\mathbf{\Delta}}\pi^{\frac{d-1}{2}}\Gamma(-\frac{d}{2}+\mathbf{\Delta}\pm i\lambda)}{\Gamma(\mathbf{\Delta})\Gamma(\frac{1-d}{2}+\mathbf{\Delta})}\lambda\sinh(\pi\lambda)\,. \tag{5.47}\] We numerically verified that the Kallen-Lehmann integral over the principal series of this spectral density fully reproduces the two-point function, if \(\mathbf{\Delta}>\frac{d}{2}\).
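The structure of such a check is simple enough to sketch here, in the same spirit as for the free theory: integrate (5.47) against \(G_{\lambda,0}\) and compare with (5.45). As before, the propagator normalization below is our assumption for the \(d\)-dimensional analogue of eq. (4.56), and the values \(d=3\), \(\mathbf{\Delta}=2.2\) are arbitrary choices satisfying \(\mathbf{\Delta}>\frac{d}{2}\).

```python
import mpmath as mp

d, D, c = 3, mp.mpf('2.2'), 1    # D plays the role of bold-Delta, here D > d/2

def G(lam, sigma):
    # Free scalar G_{lambda,0}(sigma); normalization assumed as in the earlier sketch.
    a, b, cc = d/2 + 1j*lam, d/2 - 1j*lam, (d + 1)/2
    return (mp.gamma(a)*mp.gamma(b) / ((4*mp.pi)**cc * mp.gamma(cc))
            * mp.hyp2f1(a, b, cc, (1 + sigma)/2))

def rho(lam):
    # Principal series density (5.47) of the bulk CFT scalar primary.
    return (c * 2**(1 + d - 2*D) * mp.pi**((d - 1)/2)
            * mp.gamma(D - d/2 + 1j*lam) * mp.gamma(D - d/2 - 1j*lam)
            / (mp.gamma(D) * mp.gamma((1 - d)/2 + D))
            * lam * mp.sinh(mp.pi*lam))

sigma = mp.mpf('-3')
lhs = c / (2**D * (1 - sigma)**D)                        # eq. (5.45)
rhs = 2*mp.quad(lambda lam: (rho(lam)*G(lam, sigma)).real, [0, 40])
print(lhs, rhs)   # should agree when D > d/2, if the conventions match
```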
When instead \(\mathbf{\Delta}<\frac{d}{2}\), the poles at \[\lambda=\pm i\left(\mathbf{\Delta}-\frac{d}{2}\right) \tag{5.48}\] cross the contour of integration and need to be added. This corresponds to a complementary series state since \(\lambda\) is imaginary. The contribution of this state when \(\mathbf{\Delta}<\frac{d}{2}\) is in agreement with the decomposition of \(SO(d+1,2)\) irreps into \(SO(d+1,1)\) irreps as shown in table 1.4 of [1]. Unitarity fixes \(\rho_{\mathcal{O}}^{\mathcal{P}}(\lambda)\) to be positive, and from this fact we can infer the usual unitarity bound on \(\mathbf{\Delta}\) \[\mathbf{\Delta}>\frac{d-1}{2}\,. \tag{5.49}\] In general, we can write the full decomposition of a CFT scalar two-point function as \[\langle{\cal O}(Y_{1}){\cal O}(Y_{2})\rangle=\int_{\mathbb{R}}{\rm d}\lambda\ \rho^{{\cal P},0}_{\cal O}(\lambda)G_{\lambda,0}(Y_{1},Y_{2})+\theta\left(\frac{d}{2}-\mathbf{\Delta}\right)\rho^{{\cal C},0}_{\cal O}G_{i\left(\mathbf{\Delta}-\frac{d}{2}\right),0}(Y_{1},Y_{2})\,, \tag{5.50}\] where \(\theta(x)\) is a Heaviside theta, and \[\rho^{{\cal C},0}_{\cal O}=c_{\cal O}\frac{4\pi^{\frac{d}{2}}\Gamma(1-\frac{d}{2}+\mathbf{\Delta})\sin\left(\frac{\pi}{2}(d-2\mathbf{\Delta})\right)}{\Gamma(\mathbf{\Delta})}=4\pi i\underset{\lambda=i(\mathbf{\Delta}-\frac{d}{2})}{\rm Res}\rho^{{\cal P}}_{\cal O}(\lambda)\,. \tag{5.51}\] Notice that the complementary series contribution is the two-point function of a free field with \(\Delta=d-\mathbf{\Delta}\). Furthermore, if \(\mathbf{\Delta}=\frac{d-1}{2}\), we have that \(\rho^{{\cal P},0}_{\cal O}(\lambda)=0\) and \[\langle{\cal O}(Y_{1}){\cal O}(Y_{2})\rangle=\rho^{{\cal C},0}_{\cal O}G_{i\left(\mathbf{\Delta}-\frac{d}{2}\right),0}(Y_{1},Y_{2})\Big{|}_{\mathbf{\Delta}=\frac{d-1}{2}}\,. \tag{5.52}\] This is the two-point function of a conformally coupled scalar in dS\({}_{d+1}\). Finally, we comment on **boundary operators**. The poles in \(\rho^{{\cal P},0}_{\cal O}(\lambda)\) are at \(\Delta=\mathbf{\Delta}+\mathbb{N}\), signaling the appearance of boundary operators with these weights in the Boundary Operator Expansion of the bulk CFT primary \({\cal O}\). Let us explain their origin by considering a \(d+1\) dimensional Lorentzian CFT on the Minkowski cylinder. The discussion is entirely analogous in de Sitter because they are conformally equivalent spacetimes. Consider a scalar primary \({\cal O}(t,\mathbf{x})\) in this CFT with conformal dimension \(\mathbf{\Delta}\) and let us focus on the timeslice at \(t=0\), where the primaries of the smaller \(SO(d+1,1)\) group at that timeslice are those operators \({\cal O}_{n}(\mathbf{x})\) which satisfy \[[D,{\cal O}_{n}(0)]=\Delta_{n}{\cal O}_{n}(0)\,,\qquad[K_{i},{\cal O}_{n}(0)]=0\,,\qquad i=1,\ldots,d\,. \tag{5.53}\] It can be checked that particular linear combinations of operators like \((\mathbf{\partial}^{2})^{m}\partial_{t}^{n}{\cal O}(0,\mathbf{x})\) for different \(m\) and \(n\) such that \(2m+n\) is constant satisfy this condition. For example, the operator \({\cal O}_{0}(\mathbf{x})\equiv{\cal O}(0,\mathbf{x})\) trivially satisfies (5.53) with \(\Delta_{0}=\mathbf{\Delta}\). But also \[[K_{i},[P_{0},{\cal O}(0,0)]]=-2[M_{i0},{\cal O}(0,0)]=0 \tag{5.54}\] so that \({\cal O}_{1}(\mathbf{x})\equiv\partial_{t}{\cal O}(0,\mathbf{x})\) at the timeslice \(t=0\) is a boundary primary with \(\Delta_{1}=\mathbf{\Delta}+1\).
If we go further, we find \[[K_{i},[P_{0}^{2},{\cal O}(0,0)]]=2[P_{i},{\cal O}(0,0)]\,, \tag{5.55}\] but also \[[K_{i},[P_{j}P^{j},{\cal O}(0,0)]]=2(2(\mathbf{\Delta}+1)-d)[P_{i},{\cal O}(0,0)] \tag{5.56}\] so that a good boundary primary is \[{\cal O}_{2}(\mathbf{x})\equiv\partial_{t}^{2}{\cal O}(0,\mathbf{x})-\frac{1}{2(\mathbf{\Delta}+1)-d}\mathbf{\partial}^{2}{\cal O}(0,\mathbf{x})\,, \tag{5.57}\] with \(\Delta_{2}=\mathbf{\Delta}+2\). This can be iterated to find higher and higher primaries with \(\Delta=\mathbf{\Delta}+\mathbb{N}\). By Weyl equivalence, the discussion in de Sitter space is entirely analogous and we can thus explain the presence of the poles at \(\Delta=\mathbf{\Delta}+\mathbb{N}\) in the spectral density (5.47).

**Flat space limit.** To compute the flat space limit of the Kallen-Lehmann decomposition of this two-point function, we follow the discussion in section 3.3 by first restoring the dimensions of the spectral densities by adding the appropriate factors of the de Sitter radius \(R\): \(\rho^{\mathcal{P},0}_{\mathcal{O}}(\lambda)\to R^{-2\boldsymbol{\Delta}+d-1}\rho^{\mathcal{P},0}_{\mathcal{O}}(\lambda)\). Then, after changing variables to \(\lambda=Rm\), we take the limit \[\lim_{R\to\infty}\frac{R}{m}\rho^{\mathcal{P},0}_{\mathcal{O}}(Rm)=c_{\mathcal{O}}\frac{2^{d+1-2\boldsymbol{\Delta}}\pi^{\frac{d+1}{2}}}{\Gamma(\boldsymbol{\Delta})\Gamma(\frac{1-d}{2}+\boldsymbol{\Delta})}m^{2\boldsymbol{\Delta}-d-1}\,, \tag{5.58}\] which precisely matches the flat space CFT scalar spectral density (eq. (4.3) in [47]).24 Footnote 24: To match conventions \(d_{\text{here}}=(d-1)_{\text{there}}\) and \(\boldsymbol{\Delta}_{\text{here}}=\Delta_{\mathcal{O}\text{there}}\). Once adjusted for dimensions, the complementary series spectral density instead reads \[\rho^{\mathcal{C},0}_{\mathcal{O}}=c_{\mathcal{O}}\frac{4\pi^{\frac{d}{2}}\Gamma(1-\frac{d}{2}+\boldsymbol{\Delta})\sin\left(\frac{\pi}{2}(d-2\boldsymbol{\Delta})\right)}{R^{2\boldsymbol{\Delta}-d+1}\Gamma(\boldsymbol{\Delta})}\,. \tag{5.59}\] In the large \(R\) limit it survives only if \(\boldsymbol{\Delta}=\frac{d-1}{2}\). That is in agreement with the fact that in flat space, a free massless scalar is a CFT primary operator. We can thus write that in flat space we have \[\rho^{\mathfrak{M},0}_{\mathcal{O}}(m^{2})=c_{\mathcal{O}}\frac{2^{d+1-2\boldsymbol{\Delta}}\pi^{\frac{d+1}{2}}}{\Gamma(\boldsymbol{\Delta})\Gamma(\frac{1-d}{2}+\boldsymbol{\Delta})}m^{2\boldsymbol{\Delta}-d-1}\,, \tag{5.60}\] and instead, if \(\boldsymbol{\Delta}=\frac{d-1}{2}\), \[\rho^{\mathfrak{M},0}_{\mathcal{O}}(m^{2})=c_{\mathcal{O}}\delta(m^{2})\frac{4\pi^{\frac{d+1}{2}}}{\Gamma(\frac{d-1}{2})}\,. \tag{5.61}\]

#### 5.2.2 Spin 1 Example

For a spin-1 primary operator \(J\) of conformal weight \(\boldsymbol{\Delta}\), the two-point function is \[\langle J(Y_{1};W_{1})J(Y_{2};W_{2})\rangle=\frac{c_{J}}{2^{\boldsymbol{\Delta}}}\left[\frac{W_{1}\cdot W_{2}}{(1-Y_{1}\cdot Y_{2})^{\boldsymbol{\Delta}}}+\frac{(Y_{1}\cdot W_{2})(Y_{2}\cdot W_{1})}{(1-Y_{1}\cdot Y_{2})^{\boldsymbol{\Delta}+1}}\right]\,.
\tag{115}\] In Appendix H.2 we show explicitly how to invert this two-point function and find the principal series spectral densities: \[\begin{split}\rho^{\mathcal{P},0}_{J}(\lambda)&=c_{ J}\frac{2^{3+d-2\boldsymbol{\Delta}}\pi^{\frac{d-1}{2}}(\boldsymbol{\Delta}-d) \Gamma\left(-\frac{d}{2}+\boldsymbol{\Delta}\pm i\lambda\right)}{(d^{2}+4 \lambda^{2})\Gamma(\boldsymbol{\Delta}+1)\Gamma(\frac{1-d}{2}+\boldsymbol{ \Delta})}\lambda\sinh(\pi\lambda)\,,\\ \rho^{\mathcal{P},1}_{J}(\lambda)&=c_{J}\frac{2^{1+d- 2\boldsymbol{\Delta}}\pi^{\frac{d-1}{2}}(\boldsymbol{\Delta}-1)\Gamma(-\frac {d}{2}+\boldsymbol{\Delta}\pm i\lambda)}{\Gamma(\boldsymbol{\Delta}+1) \Gamma(\frac{1-d}{2}+\boldsymbol{\Delta})}\lambda\sinh(\pi\lambda)\,.\end{split} \tag{116}\] For \(d\geq 2\), this is the complete Kallen-Lehmann decomposition of this two-point function, as can be verified numerically by performing the integral over the principal series. In \(d=1\), instead, we see the appearance of discrete series states. We can either compute those directly from the inversion formula in \(d=1\), or just derive them by analytically continuing the higher dimensional result all the way to \(d=1\) while keeping track of any poles that cross the principal series integration contour. The results one obtains in these two ways agree. The full Kallen-Lehmann decomposition for the two-point function of a spin 1 primary CFT operator is thus \[\langle J(Y_{1};W_{1})J(Y_{2};W_{2})\rangle= \sum_{\ell=0}^{1}\int_{\mathbb{R}}\mathrm{d}\lambda\ \rho_{J}^{\mathcal{P},\ell}(\lambda)[(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2} )]^{1-\ell}G_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})\] \[+\delta_{d,1}\rho_{J}^{\mathcal{D}_{1}}(W_{1}\cdot\nabla_{1})(W_{ 2}\cdot\nabla_{2})G_{-i/2,0}(Y_{1},Y_{2}) \tag{111}\] where \[\rho_{J}^{\mathcal{D}_{1}}=c_{J}\frac{2^{3-2\boldsymbol{\Delta}}\pi}{ \boldsymbol{\Delta}}\,. \tag{112}\] When \(d=1\) and \(\boldsymbol{\Delta}=1\), we have that \(\rho_{J}^{\mathcal{P},0}=\rho_{J}^{\mathcal{P},1}=0\) and the left hand side matches exactly the extra contribution from the discrete series. The discrete series term is in fact the two-point function of the operator \(W\cdot\nabla\varphi\) with \(\varphi\) being the massless scalar, corresponding to the \(\Delta=1\) operator in the discrete series \[(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})\langle\varphi(Y_{1})\varphi(Y_{2 })\rangle\,. \tag{113}\] In two-dimensional Minkowski space, the operator \(\partial_{\mu}\varphi\) with \(\varphi\) being a free massless scalar is a CFT primary. It should not surprise us then that the same is true in de Sitter, by Weyl equivalence. One last observation is that from (110) we can recover unitarity bounds for spin 1 CFT primaries \[\boldsymbol{\Delta}\geq d\,. \tag{114}\] Moreover, we observe the expected feature that for a conserved current \(\boldsymbol{\Delta}=d\), only transverse states propagate, since \(\rho_{J}^{\mathcal{P},0}(\lambda)\) vanishes. Finally, we study the boundary operators appearing in the Boundary Operator Expansion of the bulk CFT primary \(J(Y;W).\) The pole structure of the two spectral densities (110), after verifying that the contributions of the spurious poles cancel as expected, signals the appearance of scalar and vector boundary operators with conformal dimensions \(\boldsymbol{\Delta}+\mathbb{N}.\) Analogously to the scalar case, these come from the late time expansion of the bulk operator and of linear combinations of \(\partial_{t}^{n}(\boldsymbol{\partial}^{2})^{m}J^{\mu}(0,\mathbf{x})\). 
Scalar boundary primary operators then come from particular linear combinations of \(\partial_{t}^{n}(\boldsymbol{\partial}^{2})^{m}J^{0}(0,\mathbf{x})\) and \(\partial_{t}^{n}(\boldsymbol{\partial}^{2})^{m}\boldsymbol{\partial}\cdot\mathbf{J}(0,\mathbf{x})\). Vector operators come from \(\partial_{t}^{n}(\boldsymbol{\partial}^{2})^{m}J^{i}(0,\mathbf{x})\).

**Flat space limit.** We start by restoring the dimensions \(\rho_{J}^{\mathcal{P},0}(\lambda)\to R^{-2\boldsymbol{\Delta}+d+1}\rho_{J}^{\mathcal{P},0}(\lambda)\) and \(\rho_{J}^{\mathcal{P},1}(\lambda)\to R^{-2\boldsymbol{\Delta}+d-1}\rho_{J}^{\mathcal{P},1}(\lambda)\). Then, we have \[\begin{split}\lim_{R\to\infty}\frac{R}{m}\rho_{J}^{\mathcal{P},0}(Rm)&=c_{J}\frac{2^{d+1-2\boldsymbol{\Delta}}\pi^{\frac{d+1}{2}}(\boldsymbol{\Delta}-d)}{\Gamma(1+\boldsymbol{\Delta})\Gamma(\frac{1-d}{2}+\boldsymbol{\Delta})}m^{2\boldsymbol{\Delta}-d-3}\,,\\ \lim_{R\to\infty}\frac{R}{m^{3}}\rho_{J}^{\mathcal{P},1}(Rm)&=c_{J}\frac{2^{d+1-2\boldsymbol{\Delta}}\pi^{\frac{d+1}{2}}(\boldsymbol{\Delta}-1)}{\Gamma(1+\boldsymbol{\Delta})\Gamma(\frac{1-d}{2}+\boldsymbol{\Delta})}m^{2\boldsymbol{\Delta}-d-3}\,,\end{split} \tag{5.68}\] which match equations (4.10) in [47]. The discrete series contribution with dimensions restored reads \[\rho_{J}^{\mathcal{D}_{1}}=c_{J}\frac{2^{3-2\boldsymbol{\Delta}}\pi}{R^{2\boldsymbol{\Delta}-2}\boldsymbol{\Delta}}\,. \tag{5.69}\] In the flat space limit, this survives only if \(\mathbf{\Delta}=1\), corresponding to the case in which we are decomposing a two-point function of a conserved current in \(d=1\). We can thus write, for \(\mathbf{\Delta}>1\) \[\begin{split}\rho_{J}^{\mathfrak{M},0}(m^{2})&=c_{J}\frac{2^{d+1-2\mathbf{\Delta}}\pi^{\frac{d+1}{2}}(\mathbf{\Delta}-d)}{\Gamma(1+\mathbf{\Delta})\Gamma(\frac{1-d}{2}+\mathbf{\Delta})}m^{2\mathbf{\Delta}-d-3}\,,\\ \rho_{J}^{\mathfrak{M},1}(m^{2})&=c_{J}\frac{2^{d+1-2\mathbf{\Delta}}\pi^{\frac{d+1}{2}}(\mathbf{\Delta}-1)}{\Gamma(1+\mathbf{\Delta})\Gamma(\frac{1-d}{2}+\mathbf{\Delta})}m^{2\mathbf{\Delta}-d-3}\,,\end{split} \tag{5.70}\] which precisely match eq. (109) in [47]. In the \(d=1\) and \(\mathbf{\Delta}=1\) case, instead, the Kallen-Lehmann decomposition is given by a massless state \[\rho_{J}^{\mathfrak{M},0}(m^{2})=2\pi c_{J}\delta(m^{2})\,,\qquad\rho_{J}^{\mathfrak{M},1}(m^{2})=0\,. \tag{5.71}\]

#### 5.2.3 Spin 2 Example

The two-point function of a spin-2 traceless and symmetric CFT primary with conformal weight \(\mathbf{\Delta}\) is \[\langle T(Y_{1};W_{1})T(Y_{2};W_{2})\rangle=\frac{c_{T}}{2^{\mathbf{\Delta}}}\Big{[}\frac{(W_{1}\cdot W_{2})^{2}}{(1-Y_{1}\cdot Y_{2})^{\mathbf{\Delta}}}+2\frac{(W_{1}\cdot W_{2})(Y_{1}\cdot W_{2})(Y_{2}\cdot W_{1})}{(1-Y_{1}\cdot Y_{2})^{\mathbf{\Delta}+1}}+\frac{[(Y_{1}\cdot W_{2})(Y_{2}\cdot W_{1})]^{2}}{(1-Y_{1}\cdot Y_{2})^{\mathbf{\Delta}+2}}\Big{]} \tag{5.72}\] In Appendix H.2 we show how to apply the inversion formula to this case.
The resulting principal series contributions to the Kallen-Lehmann decomposition are \[\begin{split}\rho_{T}^{\mathcal{P},0}(\lambda)&=c_{T }\frac{2^{5+d-2\mathbf{\Delta}}(d+1)\pi^{\frac{d-1}{2}}(d-\mathbf{\Delta})(d+1-\mathbf{ \Delta})\Gamma(-\frac{d}{2}+\mathbf{\Delta}\pm i\lambda)}{d(d^{2}+4\lambda^{2})((d +2)^{2}+4\lambda^{2})\Gamma(\mathbf{\Delta}+2)\Gamma(\frac{1-d}{2}+\mathbf{\Delta})} \lambda\sinh(\pi\lambda)\,,\\ \rho_{T}^{\mathcal{P},1}(\lambda)&=c_{T}\frac{2^{4+d -2\mathbf{\Delta}}\pi^{\frac{d-1}{2}}(1-\mathbf{\Delta})(d+1-\mathbf{\Delta})\Gamma(- \frac{d}{2}+\mathbf{\Delta}\pm i\lambda)}{((d+2)^{2}+4\lambda^{2})\Gamma(\mathbf{ \Delta}+2)\Gamma(\frac{1-d}{2}+\mathbf{\Delta})}\lambda\sinh(\pi\lambda)\,,\\ \rho_{T}^{\mathcal{P},2}(\lambda)&=c_{T}\frac{2^{1+d -2\mathbf{\Delta}}\pi^{\frac{d-1}{2}}(\mathbf{\Delta}-1)\mathbf{\Delta}\Gamma(-\frac{d}{2 }+\mathbf{\Delta}\pm i\lambda)}{\Gamma(\mathbf{\Delta}+2)\Gamma(\frac{1-d}{2}+\mathbf{ \Delta})}\lambda\sinh(\pi\lambda)\,.\end{split} \tag{116}\] Once again, for \(d>1\) the Kallen-Lehmann integral over the principal series fully reproduces the two-point function, and the structure of spurious poles matches the predictions made in section 4.2.1. In \(d=1\), through the inversion formula for the discrete series we find contributions from \(\Delta=1\) and \(\Delta=2\) states. \[\begin{split}\langle T(Y_{1};W_{1})T(Y_{2};W_{2})\rangle=& \sum_{\ell=0}^{2}\int_{\mathbb{R}}\mathrm{d}\lambda\ \rho_{T}^{\mathcal{P},\ell}(\lambda)[(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{ 2})]^{2-\ell}G_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})\\ &+\delta_{d,1}\sum_{p=1}^{2}\rho_{T}^{\mathcal{D}_{p}}(W_{1}\cdot \nabla_{1})^{2}(W_{2}\cdot\nabla_{2})^{2}G_{i(\frac{1}{2}-p),0}(Y_{1},Y_{2}) \end{split} \tag{117}\] with \[\rho_{T}^{\mathcal{D}_{2}}=c_{T}\frac{2^{3-2\mathbf{\Delta}}\pi}{\mathbf{\Delta}+1}\,, \qquad\rho_{T}^{\mathcal{D}_{1}}=c_{T}\frac{2^{3-2\mathbf{\Delta}}\pi(\mathbf{\Delta}- 2)}{\mathbf{\Delta}(\mathbf{\Delta}+1)}\,. \tag{118}\] The positivity of (116) implies the expected unitarity bound for a spin 2 CFT primary \[\mathbf{\Delta}\geq d+1\,, \tag{119}\] and when \(T\) is a conserved stress tensor, \(\mathbf{\Delta}=d+1\), both \(\rho_{T}^{\mathcal{P},0}\) and \(\rho_{T}^{\mathcal{P},1}\) vanish. When \(d=1\), \(G_{\lambda,2}\) vanishes and the decomposition matches (3.56) with vanishing complementary series contributions. The two discrete series contributions correspond to the two-point functions \[(W_{1}\cdot\nabla_{1})^{2}(W_{2}\cdot\nabla_{2})^{2}\langle\varphi(Y_{1}) \varphi(Y_{2})\rangle\,,\qquad(W_{1}\cdot\nabla_{1})^{2}(W_{2}\cdot\nabla_{2} )^{2}\langle\varphi^{\prime}(Y_{1})\varphi^{\prime}(Y_{2})\rangle\,, \tag{5.77}\] with \(\varphi\) being a massless scalar, corresponding to \(\Delta=1\), and \(\varphi^{\prime}\) being the first tachyon in the discrete series, the \(\Delta=2\) representation. When \(d=1\) and \(\mathbf{\Delta}=2\), all spectral densities vanish except for \(\rho_{T}^{\mathcal{D}_{2}}\), and then \[\langle T(Y_{1};W_{1})T(Y_{2};W_{2})\rangle\Big{|}_{d=1,\mathbf{\Delta}=2}=\rho_{ T}^{\mathcal{D}_{2}}\Big{|}_{\mathbf{\Delta}=2}(W_{1}\cdot\nabla_{1})^{2}(W_{2} \cdot\nabla_{2})^{2}G_{-\frac{3i}{2},0}(Y_{1},Y_{2})\,, \tag{5.78}\] meaning that in \(d=1\) the operator \((W\cdot\nabla)^{2}\varphi^{\prime}\) with \(\varphi^{\prime}\) being a tachyonic free field with \(\Delta_{\varphi^{\prime}}=2\) is a CFT primary. Now let us comment on the boundary operators. 
First of all, we verify that the contributions from the spurious poles cancel. Then, we observe that the physical poles imply the appearance of boundary operators with spins \(\ell=0,1,2\) and conformal weight \(\mathbf{\Delta}+\mathbb{N}\) in the BOE of \(T(Y,W)\). These again come from the late time limit of linear combinations of \(\partial_{t}^{n}(\mathbf{\partial}^{2})^{m}T^{\mu\nu}(0,\mathbf{x})\). The spin 2 boundary operators are of the form \(\partial_{t}^{n}(\mathbf{\partial}^{2})^{m}T^{ij}(0,\mathbf{x})\), the spin 1 operators are combinations of derivatives acting on the mixed components \(T^{0i}(0,\mathbf{x})\), and the scalars are combinations of the trace and of divergences.

**Flat space limit.** As done for the previous examples, we restore the factors of \(R\) in the spectral densities and then we take the large \(R\) limit \[\begin{split}\lim_{R\to\infty}\frac{d}{d+1}\frac{R}{m}\rho_{T}^{\mathcal{P},0}(Rm)&=c_{T}\frac{2^{d+1-2\mathbf{\Delta}}\pi^{\frac{d+1}{2}}(\mathbf{\Delta}-d)(\mathbf{\Delta}-d-1)}{\Gamma(2+\mathbf{\Delta})\Gamma(\frac{1-d}{2}+\mathbf{\Delta})}m^{2\mathbf{\Delta}-d-5}\,,\\ \lim_{R\to\infty}\frac{1}{2}\frac{R}{m^{3}}\rho_{T}^{\mathcal{P},1}(Rm)&=c_{T}\frac{2^{d+1-2\mathbf{\Delta}}\pi^{\frac{d+1}{2}}(\mathbf{\Delta}-1)(\mathbf{\Delta}-d-1)}{\Gamma(2+\mathbf{\Delta})\Gamma(\frac{1-d}{2}+\mathbf{\Delta})}m^{2\mathbf{\Delta}-d-5}\,,\\ \lim_{R\to\infty}\frac{R}{m^{5}}\rho_{T}^{\mathcal{P},2}(Rm)&=c_{T}\frac{2^{d+1-2\mathbf{\Delta}}\pi^{\frac{d+1}{2}}(\mathbf{\Delta}-1)\mathbf{\Delta}}{\Gamma(2+\mathbf{\Delta})\Gamma(\frac{1-d}{2}+\mathbf{\Delta})}m^{2\mathbf{\Delta}-d-5}\,,\end{split} \tag{5.79}\] where we have used \(\beta_{2,0}=\frac{d}{d+1},\beta_{2,1}=\frac{1}{2}\) and \(\beta_{2,2}=1\). These results match equations (4.13) in [47]. Restoring the dimensions in the discrete series densities, instead, gives \[\rho_{T}^{\mathcal{D}_{2}}=c_{T}\frac{2^{3-2\mathbf{\Delta}}\pi}{R^{2\mathbf{\Delta}-4}(\mathbf{\Delta}+1)}\,,\qquad\rho_{T}^{\mathcal{D}_{1}}=c_{T}\frac{2^{3-2\mathbf{\Delta}}\pi(\mathbf{\Delta}-2)}{R^{2\mathbf{\Delta}-4}\mathbf{\Delta}(\mathbf{\Delta}+1)}\,. \tag{5.80}\] In the flat space limit, they have a chance of surviving only when \(\mathbf{\Delta}=2\). The \(p=1\) density vanishes even in this case, because of the factor of \(\mathbf{\Delta}-2\) in the numerator, and so we are left with the following flat space densities for \(\mathbf{\Delta}>2\) \[\begin{split}\rho_{T}^{\mathfrak{M},0}(m^{2})&=c_{T}\frac{2^{d+1-2\mathbf{\Delta}}\pi^{\frac{d+1}{2}}(\mathbf{\Delta}-d)(\mathbf{\Delta}-d-1)}{\Gamma(2+\mathbf{\Delta})\Gamma(\frac{1-d}{2}+\mathbf{\Delta})}\,m^{2\mathbf{\Delta}-d-5}\,,\\ \rho_{T}^{\mathfrak{M},1}(m^{2})&=c_{T}\frac{2^{d+1-2\mathbf{\Delta}}\pi^{\frac{d+1}{2}}(\mathbf{\Delta}-1)(\mathbf{\Delta}-d-1)}{\Gamma(1+\mathbf{\Delta})\Gamma(\frac{1-d}{2}+\mathbf{\Delta})}\,m^{2\mathbf{\Delta}-d-5}\,,\\ \rho_{T}^{\mathfrak{M},2}(m^{2})&=c_{T}\frac{2^{d+1-2\mathbf{\Delta}}\pi^{\frac{d+1}{2}}(\mathbf{\Delta}-1)\mathbf{\Delta}}{\Gamma(2+\mathbf{\Delta})\Gamma(\frac{1-d}{2}+\mathbf{\Delta})}\,m^{2\mathbf{\Delta}-d-5}\,,\end{split} \tag{5.81}\] and the special case \(\mathbf{\Delta}=2\) and \(d=1\) instead being \[\rho_{T}^{\mathfrak{M},0}(m^{2})=\frac{\pi}{12}c_{T}\delta(m^{2})\,,\qquad\rho_{T}^{\mathfrak{M},1}(m^{2})=0\,.
\tag{5.82}\] Equations (5.81) and (5.82) match (4.13) in [47].25 Footnote 25: The placement of the massless state in \(\rho_{T}^{\mathfrak{M},0}\) or \(\rho_{T}^{\mathfrak{M},1}\) is arbitrary since the Wightman propagators \(\Delta_{0,1}^{(2)}\) and \(\Delta_{0,0}^{(2)}\) are the same in \(d=1\) in the massless limit.

#### 5.2.4 Higher spin examples in dS\({}_{2}\)

So far, we have derived the spectral densities for spin \(J\in\{0,1,2\}\) CFT operators \(\mathcal{O}^{(J)}\) in dS\({}_{2}\) by analytic continuation from higher \(d\). This can also be done systematically using the dS\({}_{2}\) inversion formula developed in section 4.5. In general, the two-point function of a spin \(J\) primary is given by eq. (5.44), and its corresponding chiral components as defined in eq. (4.53) are \[G^{+}_{\mathcal{O}^{(J)}}(\sigma)=\frac{2^{J-\mathbf{\Delta}}c_{\mathcal{O}^{(J)}}}{(1-\sigma)^{\mathbf{\Delta}+J}},\quad G^{-}_{\mathcal{O}^{(J)}}(\sigma)=0. \tag{5.83}\] Assuming the unitarity bound \(\mathbf{\Delta}\geq J\), the asymptotic behavior \(G^{\pm}_{\mathcal{O}^{(J)}}(\sigma)\sim(-\sigma)^{-(\mathbf{\Delta}+J)}\) ensures the absence of complementary series in the Kallen-Lehmann decomposition of \(\mathcal{O}^{(J)}\). In addition, as a direct result of eq. (4.57), the vanishing of \(G^{-}_{\mathcal{O}^{(J)}}(\sigma)\) implies \[\rho^{\mathcal{P},1}_{\mathcal{O}^{(J)}}(\lambda)=\left(\frac{1}{4}+\lambda^{2}\right)\rho^{\mathcal{P},0}_{\mathcal{O}^{(J)}}(\lambda). \tag{5.84}\] Then we apply the inversion formula (4.61) to \(G^{+}_{\mathcal{O}^{(J)}}\) given by eq. (5.83) \[\rho^{\mathcal{P},0}_{\mathcal{O}^{(J)}}(\lambda)=\frac{2^{J-\mathbf{\Delta}+1}c_{\mathcal{O}^{(J)}}\lambda\sinh(2\pi\lambda)}{(\frac{1}{2}+i\lambda)_{J}^{2}(\frac{1}{2}-i\lambda)_{J}^{2}}\int_{-\infty}^{-1}d\sigma\,\frac{\phi_{\lambda,J}^{+}(\sigma)}{(1-\sigma)^{\mathbf{\Delta}-J}}\qquad\rho^{\mathcal{D}_{p}}_{\mathcal{O}^{(J)}}=\frac{2^{J-\mathbf{\Delta}+3}c_{\mathcal{O}^{(J)}}\pi^{2}\,(2p-1)}{\Gamma(J+p)^{2}\Gamma(1+J-p)^{2}}\int_{-\infty}^{-1}d\sigma\,\frac{\psi_{p,J}(\sigma)}{(1-\sigma)^{\mathbf{\Delta}-J}}. \tag{5.85}\] The evaluation of this type of integral is extensively discussed in appendix H.2. Here we just report the final results \[\rho^{\mathcal{P},0}_{\mathcal{O}^{(J)}}(\lambda)=\left(\frac{1}{4}+\lambda^{2}\right)^{-1}\rho^{\mathcal{P},1}_{\mathcal{O}^{(J)}}(\lambda)=\frac{c_{\mathcal{O}^{(J)}}\Gamma(\pm i\lambda+\mathbf{\Delta}-\frac{1}{2})}{2^{2\mathbf{\Delta}-J-1}(\frac{1}{2}\pm i\lambda)_{J}\Gamma(\mathbf{\Delta}+J)\Gamma(\mathbf{\Delta}-J)}\lambda\sinh(\pi\lambda)\qquad\rho^{\mathcal{D}_{p}}_{\mathcal{O}^{(J)}}=\frac{c_{\mathcal{O}^{(J)}}\pi(2p-1)\Gamma(\mathbf{\Delta}-p)\Gamma(p+\mathbf{\Delta}-1)}{2^{2\mathbf{\Delta}-J-2}\Gamma(\mathbf{\Delta}-J)\Gamma(\mathbf{\Delta}+J)\Gamma(J-p+1)\Gamma(J+p)}. \tag{5.86}\] When \(\mathbf{\Delta}=J\), i.e. at saturation of the unitarity bound, only the discrete series with \(p=J\) contributes.
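This inversion can also be carried out numerically. The sketch below (our own construction, with the arbitrary choices \(J=1\), \(\mathbf{\Delta}=3\)) assembles \(\phi^{+}_{\lambda,1}\) from (4.56), feeds \(G^{+}_{\mathcal{O}^{(J)}}\) of (5.83) into the "+" inversion formula, and compares with the closed form (5.86). Because the overall constant depends on the \(\rho^{\mathcal{P},\pm}\) versus \(\rho^{\mathcal{P},0},\rho^{\mathcal{P},1}\) bookkeeping of (4.58), (4.59) and (5.84), we only test that the ratio of the two results is \(\lambda\)-independent.

```python
import mpmath as mp

J, D, c = 1, mp.mpf('3'), 1     # spin-1 bulk CFT primary in dS_2, D = bold-Delta

def phi_plus(lam, sigma):
    # phi^+_{lambda,1}(sigma) = d/dsigma[(sigma+1) d/dsigma G_{lambda,0}(sigma)],
    # with G_{lambda,0} as in eq. (4.56); hypergeometric derivatives in closed form.
    a, b = mp.mpf('0.5') + 1j*lam, mp.mpf('0.5') - 1j*lam
    N = mp.gamma(a)*mp.gamma(b) / (4*mp.pi)
    z = (1 + sigma)/2
    h  = N * a*b/2 * mp.hyp2f1(a + 1, b + 1, 2, z)                 # dG/dsigma
    hp = N * a*b*(a + 1)*(b + 1)/8 * mp.hyp2f1(a + 2, b + 2, 3, z) # d^2G/dsigma^2
    return h + (sigma + 1)*hp

def rho_inverted(lam):
    # "+" inversion formula (4.61) applied to G^+(sigma) = c 2^{J-D}(1-sigma)^{-D-J}
    poch2 = (mp.rf(mp.mpf('0.5') + 1j*lam, J) * mp.rf(mp.mpf('0.5') - 1j*lam, J))**2
    integral = mp.quad(lambda s: (s - 1)**(2*J) * c*2**(J - D)*(1 - s)**(-D - J)
                       * phi_plus(lam, s), [-mp.inf, -1])
    return 4*lam*mp.sinh(2*mp.pi*lam)/poch2 * integral

def rho_closed(lam):
    # closed form (5.86) for rho^{P,0}
    return (c * mp.gamma(D - mp.mpf('0.5') + 1j*lam) * mp.gamma(D - mp.mpf('0.5') - 1j*lam)
            / (2**(2*D - J - 1) * mp.rf(mp.mpf('0.5') + 1j*lam, J)
               * mp.rf(mp.mpf('0.5') - 1j*lam, J) * mp.gamma(D + J) * mp.gamma(D - J))
            * lam * mp.sinh(mp.pi*lam))

for lam in [mp.mpf('0.6'), mp.mpf('1.1'), mp.mpf('1.7')]:
    print(mp.re(rho_inverted(lam) / rho_closed(lam)))  # a lambda-independent constant
```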
**Flat space limit.** The spectral densities we have just derived allow us to make a prediction about flat space two dimensional CFTs. After restoring the correct dimensions, under the flat space limit we get \[\begin{split}&\lim_{R\to\infty}\frac{R\,\beta_{J,0}}{m}\rho^{\mathcal{P},0}_{\mathcal{O}^{(J)}}(Rm)=c_{\mathcal{O}^{(J)}}\frac{2^{2-2\mathbf{\Delta}}\pi}{\Gamma(\mathbf{\Delta}-J)\Gamma(\mathbf{\Delta}+J)}m^{2(\mathbf{\Delta}-J-1)}\,,\\ &\lim_{R\to\infty}\frac{R\,\beta_{J,1}}{m^{3}}\rho^{\mathcal{P},1}_{\mathcal{O}^{(J)}}(Rm)=c_{\mathcal{O}^{(J)}}\frac{2^{2-2\mathbf{\Delta}}\pi}{\Gamma(\mathbf{\Delta}-J)\Gamma(\mathbf{\Delta}+J)}m^{2(\mathbf{\Delta}-J-1)}\,,\end{split} \tag{5.87}\] where we have used \(\beta_{J,0}=\beta_{J,1}=2^{1-J}\), which can be easily read off from eq. (108). For the discrete series contribution, since \(R^{2(J-\mathbf{\Delta})}\) has to be inserted in \(\rho^{\mathcal{D}_{p}}_{\mathcal{O}^{(J)}}\) before taking \(R\to\infty\), it can survive in the flat space limit only if \(\mathbf{\Delta}=J\), which itself forces \(p=J\). In total we thus have that in 2 spacetime dimensions, the flat space spectral densities of a CFT primary of spin \(J\) and conformal dimension \(\mathbf{\Delta}\) are, for \(\mathbf{\Delta}>J\), \[\rho^{\mathfrak{M},0}_{\mathcal{O}^{(J)}}(m^{2})=\rho^{\mathfrak{M},1}_{\mathcal{O}^{(J)}}(m^{2})=c_{\mathcal{O}^{(J)}}\frac{2^{2-2\mathbf{\Delta}}\pi}{\Gamma(\mathbf{\Delta}-J)\Gamma(\mathbf{\Delta}+J)}m^{2(\mathbf{\Delta}-J-1)}\,, \tag{5.88}\] and for the special case \(\mathbf{\Delta}=J\), \[\rho^{\mathfrak{M},0}_{\mathcal{O}^{(J)}}(m^{2})=\delta(m^{2})c_{\mathcal{O}^{(J)}}\frac{2^{3-2J}\pi}{\Gamma(2J)}\,,\qquad\rho^{\mathfrak{M},1}_{\mathcal{O}^{(J)}}(m^{2})=0\,. \tag{5.89}\]

### Weakly coupled QFT

The Kallen-Lehmann decomposition is a non-perturbative representation of two-point functions in de Sitter. At the same time, it can be used to decompose two-point functions order by order in a perturbative expansion, when the QFT is weakly coupled. Since poles in the spectral densities can be related to the conformal dimensions of boundary operators [6] (see discussion in section 4.4), we will observe their positions shift as we turn on interactions in the bulk. From this shift, we can read off anomalous dimensions for the boundary operators. At the same time, the fact that we stop at a definite order in the coupling expansion means we lose the positivity of the spectral densities as a side effect. This is just an artifact of perturbation theory and does not mean that the theory is not unitary. We will now review how to derive anomalous dimensions of boundary operators from the spectral densities and then show a practical example in a scalar weakly interacting theory. Consider the Kallen-Lehmann decomposition of a two-point function of a scalar operator \(\Phi\) (which for simplicity we take to include only principal series contributions) in an interacting theory governed by a coupling \(g\). It will have simple poles at the nonperturbative values of the conformal dimensions of the boundary operators that appear in the bulk-boundary OPE of \(\Phi\) [6]. Near a pole at \(\Delta_{*}(g)\), it will behave as \[\rho(\Delta,g)\approx\frac{\operatorname{Res}_{\Delta=\Delta_{*}(g)}\left[\rho(\Delta,g)\right]}{\Delta-\Delta_{*}(g)}\,.
\tag{113}\]

For a weakly coupled theory, we can expand this formula in \(g\) and obtain to first order

\[\rho(\Delta,g) \approx\frac{\operatorname{Res}_{\Delta=\Delta_{*}(0)}\left[\rho(\Delta,0)\right]}{\Delta-\Delta_{*}(0)}+g\left(\frac{\partial_{g}\operatorname{Res}_{\Delta=\Delta_{*}(g)}\left[\rho(\Delta,g)\right]\big|_{g=0}}{\Delta-\Delta_{*}(0)}+\frac{\operatorname{Res}_{\Delta=\Delta_{*}(0)}\left[\rho(\Delta,0)\right]\,\partial_{g}\Delta_{*}(0)}{(\Delta-\Delta_{*}(0))^{2}}\right)+O(g^{2})\]
\[\equiv\frac{c_{0}}{\Delta-\Delta_{*}(0)}+g\left(\frac{c_{1}}{\Delta-\Delta_{*}(0)}+\frac{c_{2}}{(\Delta-\Delta_{*}(0))^{2}}\right)+O(g^{2})\,, \tag{114}\]

where in the second line we simply set up notation. If we consider the series expansion of \(\Delta_{*}(g)\),

\[\Delta_{*}(g)=\Delta_{*}(0)+g\partial_{g}\Delta_{*}(0)+O(g^{2})\,, \tag{115}\]

we recognize that the anomalous dimension of the boundary operator with \(\Delta=\Delta_{*}(g)\) at order \(g\) is given by

\[\gamma_{*}=g\frac{c_{2}}{c_{0}}\,. \tag{111}\]

Simply put, the anomalous shift in the dimension \(\Delta_{*}(0)\) at first order in \(g\) is given by the ratio between the coefficient of the double pole in \(\rho(\Delta,g)\) at the position \(\Delta=\Delta_{*}(0)\) appearing at order \(g\) and the coefficient of the simple pole at the same position but in the free theory.
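This pole-matching rule is easy to verify on a toy spectral density. The sketch below (our own illustrative code; the numbers 3 and 5 are arbitrary choices, not from the paper) expands \(c_{0}(g)/(\Delta-\Delta_{*}(g))\) to first order in \(g\) and recovers \(\partial_{g}\Delta_{*}(0)\) from the ratio \(c_{2}/c_{0}\).

```python
# Toy verification (ours) of gamma_* = g*c2/c0: take rho = c0(g)/(Delta - Delta_*(g))
# with Delta_*(g) = 2 + 5g, expand to first order in g, and recover 5g.
import sympy as sp

Delta, g = sp.symbols('Delta g')
rho = (1 + 3*g)/(Delta - (2 + 5*g))      # residue and pole position both depend on g

order_g = sp.diff(rho, g).subs(g, 0)     # order-g part of the expansion
c2 = sp.limit((Delta - 2)**2*order_g, Delta, 2)      # double-pole coefficient
c0 = sp.limit((Delta - 2)*rho.subs(g, 0), Delta, 2)  # free-theory residue
print(g*c2/c0)                           # -> 5*g, matching dDelta_*/dg
```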
Let us now consider a concrete example.

#### 5.3.1 Anomalous dimensions from quartic interactions

Consider the following weakly coupled theory for a massive real scalar in de Sitter

\[\mathcal{L}=-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}m^{2}\phi^{2}-\frac{g}{4!}\phi^{4}\,, \tag{112}\]

with \(\Delta=\frac{d}{2}+i\lambda_{\phi}\) and \(\lambda_{\phi}\in\mathbb{R}\). We are going to be interested in this theory when the interaction is relevant or marginal, so \(d=1,2,3\). We compute the correction to the free two-point function of the composite operator \(\phi^{2}\) by using the in-in formalism, which we review in Appendix I. We choose to consider the Wightman function

\[\langle\Omega|\phi^{2}(Y_{1})\phi^{2}(Y_{2})|\Omega\rangle\,, \tag{113}\]

where \(|\Omega\rangle\) is the interacting Bunch-Davies vacuum and, as discussed above, we are avoiding the branch cut by taking \(\eta_{1}\to e^{i\epsilon}\eta_{1}\) and \(\eta_{2}\to e^{-i\epsilon}\eta_{2}\,.\) In the notation from [5], which we are going to adopt for the rest of this subsection, this means we are selecting \(Y_{2}\in r\) and \(Y_{1}\in l\) (see Appendix I for more details). When the coupling \(g\) is turned off, the two-point function is given by the free theory contribution, which has the Kallen-Lehmann representation shown previously, with \(\lambda_{1}=\lambda_{2}=\lambda_{\phi}\) and a factor of 2 accounting for symmetry:

\[\rho^{\mathcal{P},0}_{\phi^{2},\text{free}}(\lambda)=\frac{\lambda\sinh(\pi\lambda)}{16\pi^{3+\frac{d}{2}}\Gamma(\frac{d}{2})\Gamma(\frac{d}{2}\pm i\lambda)}\Gamma\left(\frac{\frac{d}{2}\pm i\lambda}{2}\right)^{2}\prod_{\pm,\pm}\Gamma\left(\frac{\frac{d}{2}\pm i\lambda\pm 2i\lambda_{\phi}}{2}\right)\,. \tag{114}\]

Importantly, this spectral density has simple poles at

\[\begin{split}\lambda&=2\lambda_{\phi}-i\left(\frac{d}{2}+2n\right)\longrightarrow\Delta=2\Delta_{\phi}+2n\,,\\ \lambda&=-2\lambda_{\phi}-i\left(\frac{d}{2}+2n\right)\longrightarrow\Delta=2\bar{\Delta}_{\phi}+2n\,,\end{split} \tag{115}\]

due to the fact that boundary operators of the form \([\mathcal{O}\mathcal{O}]_{n}\) and \([\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{n}\) (see footnote 26) appear in the bulk-boundary OPE of \(\phi^{2}\). We expect these operators to inherit anomalous dimensions once we turn on interactions.

Footnote 26: We remind the reader that \([\mathcal{O}_{1}\mathcal{O}_{2}]_{n}\) is a schematic notation to indicate all the scalar double trace operators one can form with \(\mathcal{O}_{1}\), \(\mathcal{O}_{2}\) and \(2n\) derivatives, and \(\phi(\eta,\mathbf{x})\underset{\eta\to 0^{-}}{\sim}(-\eta)^{\Delta_{\phi}}\mathcal{O}(\mathbf{x})+(-\eta)^{\bar{\Delta}_{\phi}}\widetilde{\mathcal{O}}(\mathbf{x})\).

At leading order in the coupling, the two-point function is corrected by the diagram shown in Figure 2, which, following the in-in formalism, corresponds to the following integrals:

\[\langle\phi^{2}(Y_{1})\phi^{2}(Y_{2})\rangle^{lr}_{(g)}=ig\left[\int_{Y^{l}}(G^{ll}_{\lambda_{\phi}}(Y_{1},Y)\,G^{lr}_{\lambda_{\phi}}(Y,Y_{2}))^{2}-\int_{Y^{r}}(G^{lr}_{\lambda_{\phi}}(Y_{1},Y)\,G^{rr}_{\lambda_{\phi}}(Y,Y_{2}))^{2}\right]. \tag{111}\]

In Appendix I we analytically continue these integrals to EAdS and solve them. We obtain that the order \(g\) contribution to the two-point function has the following spectral density:

\[\begin{split}\rho^{\mathcal{P},0}_{\phi^{2},g}(\lambda)=&g\frac{\rho^{\mathcal{P},0}_{\phi^{2},\text{free}}(\lambda)}{4\sinh^{2}(\pi\lambda_{\phi})}\bigg{[}\sin\left(\pi\left(\frac{d}{2}+2i\lambda_{\phi}\right)\right)B_{\Delta_{\phi},\Delta_{\phi}}(\lambda)\\ &+\sin\left(\pi\left(\frac{d}{2}-2i\lambda_{\phi}\right)\right)B_{\bar{\Delta}_{\phi},\bar{\Delta}_{\phi}}(\lambda)-2\sin\left(\frac{d\pi}{2}\right)B_{\Delta_{\phi},\bar{\Delta}_{\phi}}(\lambda)\bigg{]}\,,\end{split} \tag{112}\]

with \(B_{\Delta_{1}\Delta_{2}}(\lambda)\) defined as an infinite series in (I.21). This function is well-defined when \(d<3\) and suffers from a UV divergence when \(d\geq 3\). In \(d=3\), as discussed in appendix I.3, we can make sense of \(B_{\Delta_{1}\Delta_{2}}(\lambda)\) by dimensional regularization, i.e. \(d=3-\epsilon\), and absorbing the divergence into the wavefunction renormalization of \(\phi^{2}\). In the same appendix, we also show how to extract the anomalous dimensions of \([\mathcal{O}\mathcal{O}]_{n}\) and \([\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{n}\) from eq. (112), following the prescription outlined above. More precisely, we did that for \(\lambda_{\phi}\in\mathbb{R}\) with the final expressions given by eq.
(I.26):

\[\begin{split}&\gamma_{[\mathcal{O}\mathcal{O}]_{n}}=-g\frac{(\frac{d}{2})_{n}\Gamma(\frac{1}{2}+n+i\lambda_{\phi})\Gamma(\frac{d}{2}+n+i\lambda_{\phi})\Gamma(\frac{d}{2}+n+2i\lambda_{\phi})\sin(\frac{\pi}{2}(d+4i\lambda_{\phi}))}{2^{d+3}\pi^{\frac{d}{2}}n!\sinh^{2}(\pi\lambda_{\phi})\Gamma(1+n+i\lambda_{\phi})\Gamma(\frac{d+1}{2}+n+i\lambda_{\phi})\Gamma(1+n+2i\lambda_{\phi})}\,,\\ &\gamma_{[\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{n}}=-g\frac{(\frac{d}{2})_{n}\Gamma(\frac{1}{2}+n-i\lambda_{\phi})\Gamma(\frac{d}{2}+n-i\lambda_{\phi})\Gamma(\frac{d}{2}+n-2i\lambda_{\phi})\sin(\frac{\pi}{2}(d-4i\lambda_{\phi}))}{2^{d+3}\pi^{\frac{d}{2}}n!\sinh^{2}(\pi\lambda_{\phi})\Gamma(1+n-i\lambda_{\phi})\Gamma(\frac{d+1}{2}+n-i\lambda_{\phi})\Gamma(1+n-2i\lambda_{\phi})}\,.\end{split} \tag{113}\]

For an elementary field that, in the absence of interactions, is in the principal series (\(\lambda_{\phi}\in\mathbb{R}\)), these anomalous dimensions are complex, satisfying \((\gamma_{[\mathcal{O}\mathcal{O}]_{n}})^{*}=\gamma_{[\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{n}}\), and have a positive real part. The late time boundary operators associated to \(\phi^{2}\) will thus decay faster once interactions are turned on (assuming that \(g>0\), or in other words that the Hamiltonian is bounded from below). In [2, 12, 60, 61], a similar phenomenon was observed for the boundary operators \(\mathcal{O}\) and \(\widetilde{\mathcal{O}}\) themselves. In Figure 5.3 we plot \(\gamma_{[\mathcal{O}\mathcal{O}]_{n}}\) and \(\gamma_{[\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{n}}\) for \(\lambda_{\phi}\in(0,10)\) in \(d=3\) and with \(n=0\). If we naively continue (113) to imaginary values of \(\lambda_{\phi}\) to study the case in which \(\phi\) is in the complementary series, we can match known results in the literature [62] on the anomalous dimension of \([\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{0}\) in dS\({}_{4}\), as shown in Figure 5.4. We believe this analytic continuation should be done with care, since many of the steps in Appendix I.3 do not trivially generalize to the complementary series, but simply making \(\lambda_{\phi}\) imaginary seems to work, at least in this case. One can further compare (5.100) with the anomalous dimensions of the corresponding boundary operators in AdS in a quartic theory [63, 64, 65, 66]:

\[\gamma^{\rm dS}_{[\mathcal{O}\mathcal{O}]_{n}}=-\frac{1}{2}{\rm csch}^{2}(\pi\lambda_{\phi})\sin\left(\pi\left(\frac{d}{2}+2i\lambda_{\phi}\right)\right)\gamma^{\rm AdS}_{[\mathcal{O}\mathcal{O}]_{n}}\,. \tag{5.101}\]

The trigonometric factors appearing in front have two separate origins. The hyperbolic cosecant comes from the different normalizations of bulk-to-boundary propagators in dS and AdS, while the sine factor originates from the interference of the two branches of the in-in contour. Interestingly, its role is to cancel unphysical singularities that are otherwise present in the AdS result when analytically continued to the complementary series (which is partly outside of the unitarity bounds in AdS). The anomalous dimensions of \(\widetilde{\mathcal{O}}^{2}\) in dS diverge as one approaches the two endpoints of the complementary series. The divergence as \(i\lambda_{\phi}\to\frac{d}{2}\) is a symptom of the breakdown of perturbation theory due to the IR divergences associated to massless fields. The divergence as \(i\lambda_{\phi}\to 0\) instead corresponds to the degeneracy between the boundary operators \(\mathcal{O}^{2}\) and \(\widetilde{\mathcal{O}}^{2}\) when \(\Delta_{\phi}=\frac{d}{2}\,\).
## 6 Outlook

In this paper we have derived the Kallen-Lehmann decomposition for spinning traceless symmetric bulk operators in dS (see section 3), and applied it to many examples in section 5. Here we outline potential future applications that we can imagine for this technology.

* In the context of **bootstrapping** QFT in de Sitter [67], it may be useful to study a mixed system involving boundary and bulk correlation functions, analogous to what was recently done in flat space [68, 69] and AdS [46]. In particular, it should be possible to generalize [70, 71] and derive a sum rule for the spectral density of the trace of the stress tensor that gives the central charge of the UV CFT in a two dimensional de Sitter background.
* In this work, we mostly focused on the contributions of principal and complementary series UIRs (except in dS\({}_{2}\), where we also considered the discrete series systematically). It would be interesting to study the effect of the **type II exceptional series** UIRs, which include photons and gravitons in \(d>2\). For example, in upcoming work [72] we show that computing the leading order correction to the two-point function of the conserved current in scalar QED in de Sitter leads to the appearance of a photon in the Kallen-Lehmann decomposition. Understanding the Ward identities of the boundary operators associated to photons and gravitons is the natural next step towards understanding quantum gravity in de Sitter.
* It would be interesting to establish the convergence properties of the Boundary Operator Expansion (BOE) non-perturbatively for QFT in dS. It is likely that the connection between boundary operators and quasi-normal modes of the **static patch** of de Sitter [73, 74, 75, 76, 77, 78] will be useful in this context.

## Acknowledgements

We are grateful to Tarek Anous, Miguel Correia, Frederik Denef, Victor Gorbenko, Aditya Hebbar, Matthijs Hogervorst, Austin Joyce, Shota Komatsu, Fedor Popov, Akhil Premkumar and Jiaxin Qiao for useful discussions. JP and ML are supported by the Simons Foundation grant 488649 (Simons Collaboration on the Nonperturbative Bootstrap) and the Swiss National Science Foundation through the project 200020_197160 and through the National Centre of Competence in Research SwissMAP. ZS is supported by the US National Science Foundation under Grant No. PHY-2209997 and the Gravity Initiative at Princeton University.

## Appendix A Various properties of Green's functions in de Sitter

### Canonical quantization of a free scalar

Let \(\Phi\) be a free massive scalar of mass \(m\) in dS\({}_{d+1}\). We parametrize the mass by \(m^{2}=\frac{d^{2}}{4}+\lambda^{2}\), with \(\lambda\in\mathbb{R}\) (principal series) or \(-\frac{d}{2}<i\lambda<\frac{d}{2}\) (complementary series). The standard bulk mode expansion of \(\Phi\) in planar coordinates is

\[\Phi(\eta,\mathbf{y}\,)=\int\frac{d^{d}\mathbf{k}}{(2\pi)^{\frac{d}{2}}}\left(a_{\mathbf{k}}\,\phi_{\mathbf{k}}(\eta)e^{i\mathbf{k}\cdot\mathbf{y}}+a_{\mathbf{k}}^{\dagger}\,\phi_{\mathbf{k}}(\eta)^{*}e^{-i\mathbf{k}\cdot\mathbf{y}}\right)\, \tag{110}\]

where \(\phi_{\mathbf{k}}(\eta)\) satisfies the equation of motion

\[\left((-\eta)^{d+1}\partial_{\eta}(-\eta)^{1-d}\partial_{\eta}+|\mathbf{k}|^{2}\eta^{2}+m^{2}\right)\phi_{\mathbf{k}}(\eta)=0.
\tag{111}\]

and the Klein-Gordon normalization condition (needed to ensure the canonical commutation relation between \(\Phi\) and \(\Pi_{\Phi}=(-\eta)^{1-d}\partial_{\eta}\Phi\))

\[i(-\eta)^{1-d}\left(\phi_{\mathbf{k}}^{*}\partial_{\eta}\phi_{\mathbf{k}}-\partial_{\eta}\phi_{\mathbf{k}}^{*}\phi_{\mathbf{k}}\right)=1\,. \tag{112}\]

This is solved by

\[\phi_{\mathbf{k}}(\eta)=(-\eta)^{\frac{d}{2}}\bar{h}_{i\lambda}(|\mathbf{k}|\eta),\ \ \ \phi_{\mathbf{k}}(\eta)^{*}=(-\eta)^{\frac{d}{2}}h_{i\lambda}(|\mathbf{k}|\eta)\, \tag{113}\]

where

\[h_{i\lambda}(\xi)=\frac{\sqrt{\pi}}{2}e^{\frac{\pi\lambda}{2}}H_{i\lambda}^{(2)}(-\xi),\ \ \ \bar{h}_{i\lambda}(\xi)=\frac{\sqrt{\pi}}{2}e^{-\frac{\pi\lambda}{2}}H_{i\lambda}^{(1)}(-\xi). \tag{114}\]

The functions \(h_{i\lambda}\) and \(\bar{h}_{i\lambda}\) are invariant under \(\lambda\leftrightarrow-\lambda\). This is consistent with the fact that \(m\) is independent of the sign of \(\lambda\). For light scalars, i.e. \(0<m<\frac{d}{2}\), \(\lambda\) is purely imaginary, and \(e^{\pm\frac{\pi\lambda}{2}}\) becomes a phase. The solution of eq. (111) and eq. (112) is not unique. Different linear combinations of the Hankel functions also satisfy the equation of motion and the Klein-Gordon normalization condition. This is related to the usual ambiguity of choosing the vacuum in a curved spacetime. What singles out the above choice is the early time \(\eta\to-\infty\) asymptotic behavior

\[\phi_{\mathbf{k}}(\eta)\approx e^{-i\frac{\pi}{4}}(-\eta)^{\frac{d-1}{2}}\frac{e^{-i|\mathbf{k}|\eta}}{\sqrt{2|\mathbf{k}|}}. \tag{115}\]

This means that at early times, the corresponding choice of vacuum looks like the canonical Minkowski vacuum. The vacuum selected in this way is called the Bunch-Davies vacuum. Given the mode functions (113), the Wightman two-point function of \(\Phi\) in the Bunch-Davies vacuum can be expressed as

\[G_{\lambda,0}(\eta_{1},\mathbf{y}_{1};\eta_{2},\mathbf{y}_{2}\,)\equiv\langle\Omega|\Phi(\eta_{1},\mathbf{y}_{1})\Phi(\eta_{2},\mathbf{y}_{2}\,)|\Omega\rangle=(\eta_{1}\eta_{2})^{\frac{d}{2}}\int\frac{d^{d}\mathbf{k}}{(2\pi)^{d}}e^{-i\mathbf{k}\cdot(\mathbf{y}_{1}-\mathbf{y}_{2})}\bar{h}_{i\lambda}(|\mathbf{k}|\eta_{1})h_{i\lambda}(|\mathbf{k}|\eta_{2})\,. \tag{116}\]

Evaluating the Fourier transformation in eq. (116) yields the hypergeometric representation of \(G_{\lambda,0}\):

\[G_{\lambda,0}(\eta_{1},{\bf y}_{1};\eta_{2},{\bf y}_{2})=\frac{\Gamma(\Delta)\Gamma(\bar{\Delta})}{(4\pi)^{\frac{d+1}{2}}}{\bf F}\left(\Delta,\bar{\Delta},\frac{d+1}{2},\frac{1+\sigma}{2}\right)\,, \tag{111}\]

where \(\Delta=\frac{d}{2}+i\lambda\), \(\sigma\) is the chordal distance between \((\eta_{1},{\bf y}_{1})\) and \((\eta_{2},{\bf y}_{2})\)

\[\sigma=Y_{1}\cdot Y_{2}=\frac{\eta_{1}^{2}+\eta_{2}^{2}-{\bf y}_{12}^{2}}{2\eta_{1}\eta_{2}},\ \ \ {\bf y}_{12}={\bf y}_{1}-{\bf y}_{2}\, \tag{112}\]

and \({\bf F}(a,b,c,z)\equiv\frac{1}{\Gamma(c)}F(a,b,c,z)\) is the regularized hypergeometric function.
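As a quick sanity check of the mode functions (113), one can verify numerically that they solve the equation of motion (111) and obey the Klein-Gordon normalization (112). A minimal sketch (our own code, at sample values of \(d\), \(k\), \(\lambda\) and \(\eta\)):

```python
# Sanity check (ours): phi_k solves the equation of motion and satisfies the
# Klein-Gordon normalization, at d = 2, k = 1, lambda = 1.5, eta = -0.7.
import mpmath as mp
mp.mp.dps = 25

d, k, lam = 2, mp.mpf(1), mp.mpf('1.5')

def phi(eta):       # (-eta)^{d/2} * hbar_{i lam}(k eta)
    return (-eta)**(mp.mpf(d)/2)*mp.sqrt(mp.pi)/2*mp.exp(-mp.pi*lam/2)*mp.hankel1(1j*lam, -k*eta)

def phistar(eta):   # (-eta)^{d/2} * h_{i lam}(k eta)
    return (-eta)**(mp.mpf(d)/2)*mp.sqrt(mp.pi)/2*mp.exp(mp.pi*lam/2)*mp.hankel2(1j*lam, -k*eta)

eta, m2 = mp.mpf('-0.7'), mp.mpf(d)**2/4 + lam**2

eom = ((-eta)**(d + 1)*mp.diff(lambda e: (-e)**(1 - d)*mp.diff(phi, e), eta)
       + k**2*eta**2*phi(eta) + m2*phi(eta))              # should be ~ 0
kg = 1j*(-eta)**(1 - d)*(phistar(eta)*mp.diff(phi, eta) - mp.diff(phistar, eta)*phi(eta))
print(mp.chop(eom), mp.chop(kg))                          # ~ 0 and ~ 1.0
```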
### Proca fields in dS\({}_{2}\)

In dS\({}_{2}\), the mode expansion of a Proca field \(A_{\mu}\) is closely related to that of \(\Phi\). More precisely, consider the Proca Lagrangian

\[{\cal L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}m^{2}A_{\mu}A^{\mu}\, \tag{113}\]

where \(m^{2}=\frac{1}{4}+\lambda^{2}\). In planar coordinates, the equation of motion of \(A_{\mu}\) is satisfied by

\[A_{\eta}=\alpha\partial_{y}\Phi,\ \ \ A_{y}=\alpha\partial_{\eta}\Phi\, \tag{114}\]

where \(\alpha\) is a normalization constant and \(\Phi\) is a canonically normalized scalar field of the same mass \(m\). We fix \(\alpha\) by requiring that \(A_{y}\) and its conjugate momentum \(\Pi_{y}\) satisfy the standard commutation relation. The canonical momentum \(\Pi_{y}\) is given by

\[\Pi_{y}=\eta^{2}F_{\eta y}=\alpha\,\eta^{2}(\partial_{\eta}^{2}-\partial_{y}^{2})\Phi=-\alpha m^{2}\Phi\, \tag{115}\]

where in the last step we have used the equation of motion of \(\Phi\). Using the fact that \(\partial_{\eta}\Phi=\Pi_{\Phi}\) is the canonical momentum of \(\Phi\), we get

\[[\Pi_{y},A_{y}]=-\alpha^{2}m^{2}[\Phi,\Pi_{\Phi}]=-i\alpha^{2}m^{2}\, \tag{116}\]

which implies \(\alpha^{2}=\frac{1}{m^{2}}\) for canonical quantization. Thus the Green's function of \(A_{\mu}\) can be summarized as

\[d=1:\ \ \langle\Omega|A_{\mu}(\eta_{1},y_{1})A_{\nu}(\eta_{2},y_{2})|\Omega\rangle=\frac{1}{m^{2}}\,{\epsilon_{\mu}}^{\alpha}\partial_{y_{1}^{\alpha}}{\epsilon_{\nu}}^{\beta}\partial_{y_{2}^{\beta}}G_{\lambda,0}(\sigma)\, \tag{117}\]

where \(\epsilon_{\mu\alpha}\) and \(\epsilon_{\nu\beta}\) are totally antisymmetric tensors at \((\eta_{1},y_{1})\) and \((\eta_{2},y_{2})\) respectively. To write the two-point function \(\langle\Omega|A_{\mu}(\eta_{1},y_{1})A_{\nu}(\eta_{2},y_{2})|\Omega\rangle\) in terms of embedding space coordinates, we need the following relation

\[\epsilon_{ABC}\frac{\partial Y^{A}}{\partial y^{\mu}}\frac{\partial Y^{B}}{\partial y^{\nu}}Y^{C}=-\epsilon_{\mu\nu}\, \tag{118}\]

which can be directly checked in any local coordinates \(y^{\mu}\). It implies that the embedding space counterpart of \(\epsilon_{\mu\nu}\) is \(-\epsilon_{ABC}Y^{C}\). Then the uplift of \({\epsilon_{\mu}}^{\alpha}\partial_{y^{\alpha}}\) to embedding space should be

\[{\epsilon_{\mu}}^{\alpha}\partial_{y^{\alpha}}\Longrightarrow-\epsilon_{ABC}Y^{C}(\partial_{Y^{B}}-Y_{B}\,Y\cdot\partial_{Y})=\epsilon_{ABC}Y^{B}\partial_{Y^{C}}. \tag{119}\]

Altogether, the two-point function \(G_{\lambda,1}(Y_{1},Y_{2};W_{1},W_{2})\) of \(A(Y,W)\) is

\[G_{\lambda,1}(Y_{1},Y_{2};W_{1},W_{2})\equiv\langle\Omega|A(Y_{1},W_{1})A(Y_{2},W_{2})|\Omega\rangle=\left(\frac{1}{4}+\lambda^{2}\right)^{-1}\epsilon\left(W_{1},Y_{1},\partial_{Y_{1}}\right)\epsilon\left(W_{2},Y_{2},\partial_{Y_{2}}\right)G_{\lambda,0}(Y_{1},Y_{2})\, \tag{111}\]

where \(\epsilon(U_{1},U_{2},U_{3})\equiv\epsilon_{ABC}U_{1}^{A}U_{2}^{B}U_{3}^{C}\).

### Analytical continuation of \(G_{\lambda,0}\) in dS\({}_{2}\)

In dS\({}_{2}\), the Green's function \(G_{\lambda,0}\) in eq. (107) becomes divergent when \(\Delta\equiv\frac{1}{2}+i\lambda=p\in\mathbb{Z}_{+}\). Such divergences can be removed by acting with derivatives. More precisely, let's first apply the series expansion of hypergeometric functions to \(G_{\lambda,0}\):

\[G_{\lambda,0}(\sigma)=\frac{1}{4\pi}\sum_{n\geq 0}\frac{\Gamma(\Delta+n)\Gamma(\bar{\Delta}+n)}{(n!)^{2}}\left(\frac{1+\sigma}{2}\right)^{n}. \tag{112}\]

Then we take the limit \(\Delta\to p\). The problematic terms in this limit correspond to \(n\leq p-1\), which means that the divergent part is a polynomial in \(\sigma=Y_{1}\cdot Y_{2}\) of degree \(p-1\). This polynomial is obviously annihilated by the differential operator \((W_{1}\cdot\nabla_{1})^{p}(W_{2}\cdot\nabla_{2})^{p}\).
Therefore, the following function has a well-defined \(\Delta\to p\) limit

\[\Psi_{p,\Delta}(Y_{1},Y_{2};W_{1},W_{2})\equiv(W_{1}\cdot\nabla_{1})^{p}(W_{2}\cdot\nabla_{2})^{p}\,G_{-i(\Delta-\frac{1}{2}),0}(\sigma). \tag{113}\]

In the remaining part of the section, we are going to compute \(\Psi_{p,q}\equiv\lim_{\Delta\to q}\Psi_{p,\Delta}\) for any \(1\leq q\leq p\). As discussed in section 2.2.2, it amounts to computing the matrix

\[\begin{pmatrix}\Psi_{p,q}(Y_{1},Y_{2};W_{1}^{+},W_{2}^{+})&\Psi_{p,q}(Y_{1},Y_{2};W_{1}^{+},W_{2}^{-})\\ \Psi_{p,q}(Y_{1},Y_{2};W_{1}^{-},W_{2}^{+})&\Psi_{p,q}(Y_{1},Y_{2};W_{1}^{-},W_{2}^{-})\end{pmatrix}\, \tag{114}\]

where \(W^{\pm}\) are defined by eq. (31) to encode the two chiral components of a symmetric and traceless tensor in dS\({}_{2}\). Let's start with the diagonal entries \(\Psi_{p,\Delta}(Y_{1},Y_{2};W_{1}^{\pm},W_{2}^{\pm})\):

\[\Psi_{p,\Delta}(Y_{1},Y_{2};W_{1}^{\pm},W_{2}^{\pm})=\sum_{n=0}^{p}\begin{pmatrix}p\\ n\end{pmatrix}(W_{1}^{\pm}\cdot\nabla_{1})^{p-n}(W_{2}^{\pm}\cdot Y_{1})^{p}(W_{1}^{\pm}\cdot\nabla_{1})^{n}\partial_{\sigma}^{p}G_{-i(\Delta-\frac{1}{2}),0}(\sigma)\]
\[=\sum_{n=0}^{p}\begin{pmatrix}p\\ n\end{pmatrix}\frac{p!}{n!}(W_{1}^{\pm}\!\cdot\!W_{2}^{\pm})^{p-n}(W_{1}^{\pm}\!\cdot\!Y_{2})^{n}(W_{2}^{\pm}\!\cdot\!Y_{1})^{n}\partial_{\sigma}^{p+n}G_{-i(\Delta-\frac{1}{2}),0}(\sigma). \tag{115}\]

Using the relation \((W_{1}^{\pm}\cdot Y_{2})(W_{2}^{\pm}\cdot Y_{1})=(\sigma+1)W_{1}^{\pm}\cdot W_{2}^{\pm}\) established in eq. (33), we get

\[\Psi_{p,\Delta}(Y_{1},Y_{2};W_{1}^{\pm},W_{2}^{\pm})=(W_{1}^{\pm}\cdot W_{2}^{\pm})^{p}\sum_{n=0}^{p}\begin{pmatrix}p\\ n\end{pmatrix}\frac{p!}{n!}(\sigma+1)^{n}\partial_{\sigma}^{p+n}G_{-i(\Delta-\frac{1}{2}),0}(\sigma)\]
\[=\frac{\Gamma(\Delta)\Gamma(\bar{\Delta})}{4\pi}(W_{1}^{\pm}\cdot W_{2}^{\pm})^{p}\partial_{\sigma}^{p}((\sigma+1)^{p}\partial_{\sigma}^{p})F\left(\Delta,\bar{\Delta},1,\frac{1+\sigma}{2}\right)\, \tag{116}\]

where the \(2p\)-th order differential operator \(\partial^{p}_{\sigma}((\sigma+1)^{p}\partial^{p}_{\sigma})\) acting on \(F\left(\Delta,\bar{\Delta},1,\frac{1+\sigma}{2}\right)\) yields another hypergeometric function:

\[\Psi_{p,\Delta}(Y_{1},Y_{2};W_{1}^{\pm},W_{2}^{\pm})=\frac{\Gamma(\Delta+p)\Gamma(\bar{\Delta}+p)}{2^{p+2}\pi}(W_{1}^{\pm}\cdot W_{2}^{\pm})^{p}F\left(\Delta+p,\bar{\Delta}+p,1,\frac{1+\sigma}{2}\right). \tag{111}\]

It clearly has a finite limit when \(\Delta\) hits any positive integer \(q\) that is not larger than \(p\):

\[\Psi_{p,q}(Y_{1},Y_{2};W_{1}^{\pm},W_{2}^{\pm})=(W_{1}^{\pm}\cdot W_{2}^{\pm})^{p}\,\psi_{p,q}(Y_{1},Y_{2})\, \tag{112}\]

where

\[\psi_{p,q}(Y_{1},Y_{2})=\frac{\Gamma(p+q)\Gamma(p+1-q)}{2^{p+2}\pi}F\left(p+q,p+1-q,1,\frac{1+Y_{1}\cdot Y_{2}}{2}\right). \tag{113}\]

The off-diagonal matrix elements in (110) can be computed similarly:

\[\Psi_{p,\Delta}(Y_{1},Y_{2};W_{1}^{\pm},W_{2}^{\mp})=\frac{\Gamma(\Delta)\Gamma(\bar{\Delta})}{4\pi}(W_{1}^{\pm}\cdot W_{2}^{\mp})^{p}\partial^{p}_{\sigma}((\sigma-1)^{p}\partial^{p}_{\sigma})F\left(\Delta,\bar{\Delta},1,\frac{1+\sigma}{2}\right)\]
\[=\frac{\Gamma(\Delta+p)^{2}\Gamma(\bar{\Delta}+p)^{2}(W_{1}^{\pm}\cdot W_{2}^{\mp})^{p}}{(-2)^{p+2}(2p)!\pi\Gamma(\Delta)\Gamma(\bar{\Delta})}F\left(\Delta+p,\bar{\Delta}+p,2p+1,\frac{1+\sigma}{2}\right). \tag{114}\]

They vanish when \(\Delta\) approaches a positive integer \(q\in\{1,2,\cdots,p\}\), because \(\lim_{\Delta\to q}(\bar{\Delta})_{p}=0\).
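The hypergeometric identity behind eq. (111) is easy to test numerically. For \(p=1\) it reads \(\partial_{\sigma}\big((\sigma+1)\partial_{\sigma}\big)F(\Delta,\bar{\Delta},1,\tfrac{1+\sigma}{2})=\tfrac{\Delta\bar{\Delta}}{2}F(\Delta+1,\bar{\Delta}+1,1,\tfrac{1+\sigma}{2})\); a minimal spot check (our own code, at a sample point):

```python
# Spot check (ours) of the p = 1 case of the identity above.
import mpmath as mp
mp.mp.dps = 30

lam, s0 = mp.mpf('0.8'), mp.mpf(-3)
D, Db = mp.mpf(1)/2 + 1j*lam, mp.mpf(1)/2 - 1j*lam

def F(a, b, s):
    return mp.hyp2f1(a, b, 1, (1 + s)/2)

lhs = mp.diff(lambda s: (s + 1)*mp.diff(lambda u: F(D, Db, u), s), s0)
rhs = D*Db/2*F(D + 1, Db + 1, s0)
print(lhs, rhs)   # the two values should agree
```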
Altogether, we have

\[(W_{1}^{\pm}\cdot\nabla_{1})^{p}(W_{2}^{\pm}\cdot\nabla_{2})^{p}\,G_{-i(q-\frac{1}{2}),0}(Y_{1},Y_{2})=(W_{1}^{\pm}\cdot W_{2}^{\pm})^{p}\,\psi_{p,q}(Y_{1},Y_{2})\,\]
\[(W_{1}^{\pm}\cdot\nabla_{1})^{p}(W_{2}^{\mp}\cdot\nabla_{2})^{p}\,G_{-i(q-\frac{1}{2}),0}(Y_{1},Y_{2})=0. \tag{115}\]

When \(q=p\), \(\psi_{p,p}(Y_{1},Y_{2})\) becomes particularly simple:

\[\psi_{p,p}(Y_{1},Y_{2})=\frac{2^{p-2}\Gamma(2p)}{\pi(1-Y_{1}\cdot Y_{2})^{2p}}. \tag{116}\]

To end this section, we show the pull-back of \(\Psi_{p,p}(Y_{1},Y_{2};W_{1},W_{2})\) to conformal global coordinates. According to the discussion in section 2.2.2, it amounts to replacing \(W_{\pm}^{A}\) by \(\partial_{\pm}Y^{A}\) (where \(\partial_{\pm}\) denotes the ordinary derivative with respect to the local lightcone coordinates \(y^{\pm}=\tau\pm\varphi\)), for example,

\[\left(\nabla_{+}^{(1)}\right)^{p}\left(\nabla_{+}^{(2)}\right)^{p}G_{-i(p-\frac{1}{2})}(Y_{1},Y_{2})=\Psi_{p,p}(Y_{1},Y_{2};\partial_{+}Y_{1},\partial_{+}Y_{2})=\frac{2^{p-2}\Gamma(2p)}{\pi}\left(\frac{\partial_{y_{1}^{+}}\partial_{y_{2}^{+}}\sigma}{(1-\sigma)^{2}}\right)^{p}=\frac{\Gamma(2p)}{4\pi\left(-4\sin^{2}\frac{y_{12}^{+}}{2}\right)^{p}}\, \tag{117}\]

where \(\nabla_{\pm}\) denotes the covariant derivative along \(y^{\pm}\). Similarly, the remaining components are

\[\left(\nabla_{-}^{(1)}\right)^{p}\left(\nabla_{-}^{(2)}\right)^{p}G_{-i(p-\frac{1}{2})}(Y_{1},Y_{2})=\frac{\Gamma(2p)}{4\pi\left(-4\sin^{2}\frac{y_{12}^{-}}{2}\right)^{p}}\,\]
\[\left(\nabla_{+}^{(1)}\right)^{p}\left(\nabla_{-}^{(2)}\right)^{p}G_{-i(p-\frac{1}{2})}(Y_{1},Y_{2})=\left(\nabla_{-}^{(1)}\right)^{p}\left(\nabla_{+}^{(2)}\right)^{p}G_{-i(p-\frac{1}{2})}(Y_{1},Y_{2})=0. \tag{118}\]

### Flat space limit of \(G_{\lambda,\ell}\)

In this subsection, we elaborate on the discussion in section 3.3 and show that the canonically normalized de Sitter Green's functions \([(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})]^{J-\ell}G_{\lambda,\ell}(Y_{1},Y_{2};W_{1},W_{2})\) reduce to the flat space Wightman functions \(m^{-2\ell}\Delta_{m^{2},\ell}^{(J)}(x_{1},x_{2};w_{1},w_{2})\) (which are introduced in [47] for \(0\leq\ell\leq J\leq 2\)), up to numerical normalization factors. For example, in the \(J=0\) case, we expect the flat space limit of the scalar Green's function \(G_{\lambda,0}\) in dS to reproduce \(\Delta_{m^{2},0}^{(0)}\), which is given by [47]

\[\Delta_{m^{2},0}^{(0)}(x_{1},x_{2})=\frac{1}{(2\pi)^{\frac{d+1}{2}}}m^{d-1}\frac{K_{\frac{d-1}{2}}\left(m\sqrt{x_{12}^{2}}\right)}{\left(m\sqrt{x_{12}^{2}}\right)^{\frac{d-1}{2}}}\,, \tag{113}\]

where \(x_{1}^{\mu},x_{2}^{\mu}\) are flat space coordinates of \(\mathbb{R}^{d,1}\), and \(i\epsilon\) is suppressed in \(\sqrt{x_{12}^{2}}\). Let's start by discussing the flat space limit of the coordinates and the metric of dS. The dS metric in planar coordinates is \(ds^{2}=R^{2}\frac{-d\eta^{2}+d\mathbf{y}^{\,2}}{\eta^{2}}\), where the dS radius \(R\) is restored. We consider the large \(R\) limit, with \(R(\eta+1)\equiv x^{0}\) and \(R\,y^{i}\equiv x^{i}\) being fixed. Then the dS metric reduces to the flat space metric \(ds^{2}=\eta_{\mu\nu}dx^{\mu}dx^{\nu}\). It is also useful to show how \(\sigma=Y_{1}\cdot Y_{2}\) is related to the flat space distance \(x_{12}^{2}\) in this limit.
For this purpose, we define a new variable

\[\rho\equiv\frac{1}{2}\left(1-\frac{\sigma}{R^{2}}\right)=\frac{-\eta_{12}^{2}+\mathbf{y}_{12}^{\,2}}{4\eta_{1}\eta_{2}}=\frac{-(x_{12}^{0})^{2}+\mathbf{x}_{12}^{2}}{4\eta_{1}\eta_{2}R^{2}}. \tag{114}\]

In the flat space limit, we can simply replace \(4\eta_{1}\eta_{2}\) by 4, and hence obtain \(\rho\approx\frac{x_{12}^{2}}{4R^{2}}\). In other words, \(\rho\to 0\) in the flat space limit, but \(4\rho R^{2}\) is finite and equal to the flat space distance \(x_{12}^{2}\). Using the large \(R\) relation \(\lambda=mR\), we can also write \(4\rho\lambda^{2}\approx m^{2}x_{12}^{2}\). Next, we consider the de Sitter scalar propagator

\[G_{\lambda,0}(\sigma)=\frac{\Gamma(\frac{d}{2}\pm i\lambda)}{2^{d+1}\pi^{\frac{d+1}{2}}R^{d-1}}\mathbf{F}\left(\frac{d}{2}-i\lambda,\frac{d}{2}+i\lambda,\frac{d+1}{2},1-\rho\right)\,, \tag{115}\]

which is expressed in terms of the new variable \(\rho\). To retrieve the flat space propagator we will thus have to take \(\lambda^{2}\to\infty\) and \(\rho\to 0\) while keeping their product fixed. This limit can be easily implemented if we rewrite \(G_{\lambda,0}\) by using the following Mellin representation of the hypergeometric function

\[\mathbf{F}(a,b,c,z)=\frac{\int_{\mathbb{R}}\,dt\,\Gamma(a+it)\Gamma(b+it)\Gamma(c-a-b-it)\Gamma(-it)(1-z)^{it}}{2\pi\Gamma(a)\Gamma(b)\Gamma(c-a)\Gamma(c-b)}\,, \tag{116}\]

which yields

\[G_{\lambda,0}(\sigma)=\frac{\int_{\mathbb{R}}\,dt\,\Gamma(\frac{d}{2}+it\pm i\lambda)\Gamma(\frac{1-d}{2}-it)\Gamma(-it)\rho^{it}}{2^{d+2}\pi^{\frac{d+3}{2}}\Gamma(\frac{1}{2}\pm i\lambda)R^{d-1}}. \tag{117}\]

Taking the large \(\lambda\) limit in eq. (117),

\[\frac{1}{\Gamma(\frac{1}{2}\pm i\lambda)}\approx\frac{1}{2\pi}\,e^{\pi\lambda},\ \ \Gamma\left(\frac{d}{2}+it\pm i\lambda\right)\approx 2\pi e^{-\pi\lambda}\lambda^{d-1+2it}\, \tag{118}\]

it is then easy to see that \(G_{\lambda,0}(\sigma)\) becomes

\[G_{\lambda,0}(\sigma)\approx\frac{m^{d-1}}{2^{d+2}\pi^{\frac{d+3}{2}}}\int_{\mathbb{R}}\,dt\,\Gamma\left(\frac{1-d}{2}-it\right)\Gamma\left(-it\right)(\lambda^{2}\rho)^{it}\, \tag{110}\]

where we have used \(\lambda/R=m\). The remaining integral can be evaluated by using the integral representation of \(\Gamma\) functions

\[G_{\lambda,0}(\sigma)\approx\frac{m^{d-1}}{2^{d+2}\pi^{\frac{d+3}{2}}}\int_{0}^{\infty}\frac{du}{u}\,u^{\frac{1-d}{2}}\,e^{-u}\int_{0}^{\infty}\frac{dv}{v}\,e^{-v}\int_{\mathbb{R}}dt\,\left(\frac{\lambda^{2}\rho}{uv}\right)^{it}. \tag{111}\]

Performing the \(t\) integral gives a delta function supported on \(uv=\lambda^{2}\rho\), so (111) reduces to a familiar form

\[G_{\lambda,0}(\sigma)\approx\frac{m^{d-1}}{2^{d+1}\pi^{\frac{d+1}{2}}}\int_{0}^{\infty}\frac{du}{u}\,u^{\frac{1-d}{2}}\,e^{-u-\frac{\lambda^{2}\rho}{u}}=\frac{m^{d-1}}{(2\pi)^{\frac{d+1}{2}}}\frac{K_{\frac{d-1}{2}}\left(2\lambda\sqrt{\rho}\right)}{(2\lambda\sqrt{\rho})^{\frac{d-1}{2}}}. \tag{112}\]

This is exactly eq. (113) after using the relation \(4\rho\lambda^{2}\approx m^{2}x_{12}^{2}\). We also mention that an EAdS version of this story was discussed in [79].
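This limiting behavior is easy to test numerically. The sketch below (our own code, with the sample choices \(d=2\), \(m=1\), \(x_{12}^{2}=1\)) evaluates the exact hypergeometric propagator (115) at increasing \(R=\lambda/m\) and compares it with the flat space Bessel formula (113); the ratio should approach 1.

```python
# Numerical illustration (ours): the exact dS propagator approaches the flat
# space Bessel-K form as R = lambda/m grows, at fixed x_{12}^2 = 4*rho*R^2.
import mpmath as mp
mp.mp.dps = 40

d, m, x2 = 2, mp.mpf(1), mp.mpf(1)        # spacelike separation, x_{12}^2 = 1

def G_dS(R):
    lam, rho = m*R, x2/(4*R**2)
    pref = (mp.gamma(mp.mpf(d)/2 + 1j*lam)*mp.gamma(mp.mpf(d)/2 - 1j*lam)
            / (2**(d + 1)*mp.pi**(mp.mpf(d + 1)/2)*R**(d - 1)*mp.gamma(mp.mpf(d + 1)/2)))
    return pref*mp.hyp2f1(mp.mpf(d)/2 - 1j*lam, mp.mpf(d)/2 + 1j*lam, mp.mpf(d + 1)/2, 1 - rho)

flat = (m**(d - 1)/(2*mp.pi)**(mp.mpf(d + 1)/2)
        * mp.besselk(mp.mpf(d - 1)/2, m*mp.sqrt(x2))/(m*mp.sqrt(x2))**(mp.mpf(d - 1)/2))

for R in [5, 20, 80]:
    print(R, mp.re(G_dS(mp.mpf(R))/flat))  # ratio tends to 1
```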
The second example is \(J=\ell=1\). On the flat space side, we have [47]

\[m^{-2}\Delta_{m^{2},1}^{(1)}(x_{1},x_{2};w_{1},w_{2})=(w_{1}\cdot w_{2}+m^{-2}(w_{1}\cdot\partial_{1})(w_{2}\cdot\partial_{2}))\Delta_{m^{2},0}^{(0)}(x_{1},x_{2})\,, \tag{113}\]

where \(w_{1}^{\mu},w_{2}^{\mu}\) are auxiliary null vectors. On the dS side, the spin 1 Green's function \(G_{\lambda,1}\) can be obtained by evaluating the split representation (10) for \(\ell=1\) (see footnote 27):

Footnote 27: One can check that when \(d=1\), eq. (114) is actually equivalent to eq. (117) by using the equation of motion \((1-\sigma^{2})\partial_{\sigma}^{2}G_{\lambda,0}-2\sigma\partial_{\sigma}G_{\lambda,0}=(\frac{1}{4}+\lambda^{2})G_{\lambda,0}\).

\[G_{\lambda,1}(Y_{1},Y_{2};W_{1},W_{2})=\frac{\left[\left(\frac{d^{2}}{4}+\lambda^{2}\right)(W_{1}\cdot W_{2})+\sigma(W_{1}\cdot\partial_{Y_{1}})(W_{2}\cdot\partial_{Y_{2}})+d(W_{1}\cdot Y_{2})(W_{2}\cdot\partial_{Y_{2}})\right]G_{\lambda,0}(\sigma)}{\frac{(d-2)^{2}}{4}+\lambda^{2}} \tag{114}\]

where the \(R\) dependence is implicitly restored in \(G_{\lambda,0}(\sigma)\). To take the flat space limit of \(G_{\lambda,1}\), we should pull \(W^{A}\) back to local coordinates, which is realized by the relation \(W^{A}=\frac{w^{\mu}}{R}\frac{\partial Y^{A}}{\partial y^{\mu}}\) (see footnote 28). Here \(y^{\mu}=(\eta,\mathbf{y})\) denotes the planar coordinates. Applying this pull-back rule to \(W_{1}\cdot W_{2}\) yields \(W_{1}\cdot W_{2}=\frac{w_{1}^{\mu}w_{2}^{\nu}}{R^{2}}\partial_{y_{1}^{\mu}}\partial_{y_{2}^{\nu}}\sigma\). Because of the identification \(x^{\mu}=(R(\eta+1),Ry^{i})\), we can replace \(R^{-1}\partial_{y^{\mu}}\) by \(\partial_{x^{\mu}}\), and then in the flat space limit we get

Footnote 28: In general, the null vector \(w^{\mu}\) here is different from the one used in the flat space two-point function (113), because \(w^{\mu}\) is null with respect to the dS metric \(g_{\mu\nu}\). In this case, since we choose planar coordinates, \(g_{\mu\nu}\) is conformally equivalent to \(\eta_{\mu\nu}\) and hence \(w^{\mu}\) is also null with respect to the flat metric.

\[W_{1}\cdot W_{2}=(w_{1}\cdot\partial_{1})(w_{2}\cdot\partial_{2})(-2\rho R^{2})\approx w_{1}\cdot w_{2}\,, \tag{115}\]

where the substitution \(\sigma=R^{2}-2\rho R^{2}\) is made in the first step, and the relation \(\rho R^{2}=\frac{1}{4}x_{12}^{2}\) is used in the second step. Similarly, the term \(\sigma(W_{1}\cdot\partial_{Y_{1}})(W_{2}\cdot\partial_{Y_{2}})\) reduces to \(\sigma(w_{1}\cdot\partial_{1})(w_{2}\cdot\partial_{2})\). Although \(\sigma\) itself diverges in the flat space limit, the combination \(\sigma/\lambda^{2}\) is actually finite and equal to \(m^{-2}\) because of the large \(R\) relations \(\frac{\sigma}{R^{2}}\approx 1\) and \(mR=\lambda\). It is also easy to check that the remaining term in \(G_{\lambda,1}(Y_{1},Y_{2};W_{1},W_{2})\) does not survive in the flat space limit. Altogether, the flat space limit of \(G_{\lambda,1}\) should be

\[G_{\lambda,1}\approx\left(w_{1}\cdot w_{2}+m^{-2}(w_{1}\cdot\partial_{1})(w_{2}\cdot\partial_{2})\right)\Delta^{(0)}_{m^{2},0}(x_{1},x_{2})=m^{-2}\Delta^{(1)}_{m^{2},1}(x_{1},x_{2};w_{1},w_{2}). \tag{111}\]

Let's also consider the case \(J=1\), \(\ell=0\). In flat space, \(\Delta^{(1)}_{m^{2},0}\) is given by

\[\Delta^{(1)}_{m^{2},0}(x_{1},x_{2};w_{1},w_{2})=(w_{1}\cdot\partial_{1})(w_{2}\cdot\partial_{2})\Delta^{(0)}_{m^{2},0}(x_{1},x_{2})\, \tag{112}\]

and its de Sitter counterpart is \((W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})G_{\lambda,0}(\sigma)\). Since \(W\cdot\nabla=W\cdot\partial_{Y}\) reduces to \(w\cdot\partial_{x}\) in the flat space limit, we have

\[(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})G_{\lambda,0}(\sigma)\approx\Delta^{(1)}_{m^{2},0}(x_{1},x_{2};w_{1},w_{2}). \tag{113}\]

We report the flat space limit in the \(J=2\) case without showing details.
On the flat space side, all \(\Delta^{(2)}_{m^{2},\ell}\) with \(0\leq\ell\leq 2\) are given by [47]

\[\Delta^{(2)}_{m^{2},0}(x_{1},x_{2};w_{1},w_{2})=\frac{d+1}{d}(w_{1}\cdot\partial_{1})^{2}(w_{2}\cdot\partial_{2})^{2}\Delta^{(0)}_{m^{2},0}(x_{1},x_{2})\]
\[\Delta^{(2)}_{m^{2},1}(x_{1},x_{2};w_{1},w_{2})=2((w_{1}\cdot\partial_{1})^{2}(w_{2}\cdot\partial_{2})^{2}+m^{2}(w_{1}\cdot w_{2})(w_{1}\cdot\partial_{1})(w_{2}\cdot\partial_{2}))\Delta^{(0)}_{m^{2},0}(x_{1},x_{2}) \tag{114}\]
\[\Delta^{(2)}_{m^{2},2}(x_{1},x_{2};w_{1},w_{2})=\left((m^{2}\,w_{1}\cdot w_{2}+w_{1}\cdot\partial_{1}\,w_{2}\cdot\partial_{2})^{2}-\frac{1}{d}(w_{1}\cdot\partial_{1})^{2}(w_{2}\cdot\partial_{2})^{2}\right)\Delta^{(0)}_{m^{2},0}(x_{1},x_{2})\,,\]

and on the de Sitter side we find

\[(W_{1}\cdot\nabla_{1})^{2}(W_{2}\cdot\nabla_{2})^{2}G_{\lambda,0}(Y_{1},Y_{2})\approx\frac{d}{d+1}\Delta^{(2)}_{m^{2},0}(x_{1},x_{2};w_{1},w_{2})\]
\[(W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2})G_{\lambda,1}(Y_{1},Y_{2};W_{1},W_{2})\approx\frac{1}{2\,m^{2}}\Delta^{(2)}_{m^{2},1}(x_{1},x_{2};w_{1},w_{2})\]
\[G_{\lambda,2}(Y_{1},Y_{2};W_{1},W_{2})\approx\frac{1}{m^{4}}\Delta^{(2)}_{m^{2},2}(x_{1},x_{2};w_{1},w_{2}). \tag{115}\]

For generic \(J\), the construction of \(\Delta^{(J)}_{m^{2},\ell}\) is much more involved since it requires \(J+1\) projection operators \(\Pi_{J\ell}\) that effectively implement the branching rule from the spin \(J\) representation of \(SO(d,1)\) to spin \(\ell\) representations of \(SO(d)\). The explicit expressions of \(\Pi_{J\ell}\) have been worked out in [25] but they are very complicated, so we will not give these expressions. Instead, we will discuss \(\Delta^{(J)}_{m^{2},\ell}\) when \(d=1\). This is a very degenerate case, because all \(\Pi_{J\ell}\) with \(\ell\geq 2\) vanish. To find \(\Pi_{J0}\) and \(\Pi_{J1}\), let's pick a vector \(p^{\mu}\) in \(\mathbb{R}^{1,1}\). \(\Pi_{J0}\) should only have longitudinal components along \(p^{\mu}\), which in the index free formalism means \(\Pi_{J0}\propto p^{-2J}(p\cdot w_{1})^{J}(p\cdot w_{2})^{J}\). The proportionality constant can be fixed by using the fact that \(\Pi_{J0}\) is a projector. Indeed, we find

\[\Pi_{J0}(p,w_{1},w_{2})=2^{J-1}\frac{(p\cdot w_{1})^{J}(p\cdot w_{2})^{J}}{p^{2J}}. \tag{116}\]

By the completeness of \(\Pi_{J0}\) and \(\Pi_{J1}\), we know immediately that \(\Pi_{J1}=(w_{1}\cdot w_{2})^{J}-\Pi_{J0}\), with \(\Pi_{J0}\) given by eq. (116). In \(\mathbb{R}^{1,1}\), there exists an interesting relation \(p^{2}\,w_{1}\cdot w_{2}-2\,p\cdot w_{1}\,p\cdot w_{2}=0\) for non-proportional null vectors \(w_{1},w_{2}\), which allows us to rewrite \(\Pi_{J1}\) as

\[\Pi_{J1}(p,w_{1},w_{2})=2^{J-1}\frac{(p\cdot w_{1})^{J-1}(p\cdot w_{2})^{J-1}}{p^{2J}}\left(p^{2}\,w_{1}\cdot w_{2}-p\cdot w_{1}\,p\cdot w_{2}\right). \tag{111}\]
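The \(\mathbb{R}^{1,1}\) relation quoted above (our reconstruction from the garbled source) is easy to confirm numerically. A minimal check, with the mostly-plus signature \((-,+)\):

```python
# Quick check (ours) of the R^{1,1} identity: for the two independent null
# directions w1, w2 and any p, p^2 (w1.w2) - 2 (p.w1)(p.w2) = 0.
import random

def dot(u, v):                 # signature (-, +)
    return -u[0]*v[0] + u[1]*v[1]

for _ in range(3):
    a, b = random.uniform(1, 2), random.uniform(1, 2)
    w1, w2 = (a, a), (b, -b)   # the two independent null directions in 2d
    p = (random.uniform(-2, 2), random.uniform(-2, 2))
    print(dot(p, p)*dot(w1, w2) - 2*dot(p, w1)*dot(p, w2))  # ~ 0
```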
With these projectors known, we can define \(\Delta^{(J)}_{m^{2},\ell}\) following the general recipe given in [47]:

\[\Delta^{(J)}_{m^{2},0}=(-)^{J}m^{2J}\Pi_{J0}\left(p^{2}\rightarrow-m^{2},p_{\alpha}\to i\partial_{x_{1}^{\alpha}}\right)\Delta^{(0)}_{m^{2},0}(x_{1},x_{2})=2^{J-1}(w_{1}\cdot\partial_{1})^{J}(w_{2}\cdot\partial_{2})^{J}\Delta^{(0)}_{m^{2},0}(x_{1},x_{2})\, \tag{112}\]

and

\[\Delta^{(J)}_{m^{2},1}=(-)^{J-1}m^{2J}\Pi_{J1}\left(p^{2}\rightarrow-m^{2},p_{\alpha}\to i\partial_{x_{1}^{\alpha}}\right)\Delta^{(0)}_{m^{2},0}(x_{1},x_{2})=2^{J-1}(w_{1}\cdot\partial_{1})^{J-1}(w_{2}\cdot\partial_{2})^{J-1}\left(m^{2}\,w_{1}\cdot w_{2}+w_{1}\cdot\partial_{1}\,w_{2}\cdot\partial_{2}\right)\Delta^{(0)}_{m^{2},0}(x_{1},x_{2})\, \tag{113}\]

where the extra signs \((-)^{J}\) and \((-)^{J-1}\) have been explained in section 3.3. On the de Sitter side, we thus find

\[(W_{1}\cdot\nabla_{1})^{J}(W_{2}\cdot\nabla_{2})^{J}G_{\lambda,0}(Y_{1},Y_{2})\approx 2^{1-J}\Delta^{(J)}_{m^{2},0}(x_{1},x_{2};w_{1},w_{2})\,\]
\[(W_{1}\cdot\nabla_{1})^{J-1}(W_{2}\cdot\nabla_{2})^{J-1}G_{\lambda,1}(Y_{1},Y_{2};W_{1},W_{2})\approx\frac{2^{1-J}}{m^{2}}\Delta^{(J)}_{m^{2},1}(x_{1},x_{2};w_{1},w_{2}). \tag{114}\]

For \(d=1\) and \(J=2\), eq. (111) and eq. (114) are consistent.

## Appendix B Complementary series in the Kallen-Lehmann decompositions

In this appendix, we show how the complementary series contributes to the two-point function of a scalar operator \(\mathcal{O}(Y)\). We will only focus on \(\mathcal{C}_{\Delta,0}\) since \(\mathcal{C}_{\Delta,\ell}\) with \(\ell>0\) cannot contribute to \(\langle\Omega|\mathcal{O}(Y_{1})\mathcal{O}(Y_{2})|\Omega\rangle\). Compared to the principal series, the main difference is the resolution of the identity. For a principal series representation \(\mathcal{P}_{\Delta,0}\), the identity operator in its Hilbert space can be expressed as

\[\mathbb{1}_{\mathcal{P}_{\Delta,0}}=\int d^{d}\mathbf{y}\,|\Delta,\mathbf{y}\,\rangle\langle\Delta,\mathbf{y}\,|=\int_{P}|\Delta,P\rangle\langle\Delta,P|\,, \tag{115}\]

which follows from the inner product \(\langle\Delta,\mathbf{y}_{1}|\Delta,\mathbf{y}_{2}\,\rangle=\delta^{d}(\mathbf{y}_{1}-\mathbf{y}_{2}\,)\). However, for complementary series, we are not allowed to choose such an inner product since it does not respect the reality condition of \(\mathfrak{so}(d+1,1)\) generators [26]. Instead, \(\langle\Delta,\mathbf{y}_{1}|\Delta,\mathbf{y}_{2}\,\rangle\) has to be proportional to the CFT two-point function of a scalar primary of scaling dimension \(\Delta\), i.e. \(\langle\Delta,\mathbf{y}_{1}|\Delta,\mathbf{y}_{2}\,\rangle\propto|\mathbf{y}_{1}-\mathbf{y}_{2}\,|^{-2\Delta}\). From the embedding space point of view, it is obvious that \(\langle\Delta,P_{1}|\Delta,P_{2}\rangle\propto(-2P_{1}\cdot P_{2})^{-\Delta}\) is the only choice that is compatible with the \(SO(d+1,1)\) invariance and the scaling property imposed on \(|\Delta,P\rangle\), when \(\Delta\) is real. Also because of \(-2P_{1}\cdot P_{2}=|\mathbf{y}_{1}-\mathbf{y}_{2}|^{2}\), it is consistent with the expression in local coordinates.
We fix the normalization of inner products in \(\mathcal{C}_{\Delta,0}\) by choosing

\[\langle\Delta,P_{1}|\Delta,P_{2}\rangle=\frac{N_{\Delta}}{(-2P_{1}\cdot P_{2})^{\Delta}},\;\;\;N_{\Delta}=\frac{\Gamma(\Delta)}{\pi^{\frac{d}{2}}\,\Gamma(\frac{d}{2}-\Delta)}\,. \tag{114}\]

It fully determines the resolution of the identity

\[\mathbbm{1}_{\mathcal{C}_{\Delta,0}}=\int_{P_{1},P_{2}}\,|\Delta,P_{1}\rangle\,\frac{N_{\tilde{\Delta}}}{(-2P_{1}\cdot P_{2})^{\tilde{\Delta}}}\,\langle\Delta,P_{2}|. \tag{115}\]

For example, it is straightforward to check \(\langle\Delta,P_{1}|\mathbb{1}_{\mathcal{C}_{\Delta,0}}|\Delta,P_{2}\rangle=\langle\Delta,P_{1}|\Delta,P_{2}\rangle\) by using

\[\int_{P_{1}}\frac{1}{(-2P_{0}\cdot P_{1})^{\Delta}}\frac{1}{(-2P_{1}\cdot P_{2})^{\tilde{\Delta}}}=\int\,d^{d}\mathbf{y}_{1}\frac{1}{|\mathbf{y}_{0}-\mathbf{y}_{1}|^{2\Delta}}\frac{1}{|\mathbf{y}_{1}-\mathbf{y}_{2}|^{2\tilde{\Delta}}}=\frac{\delta^{d}(\mathbf{y}_{0}-\mathbf{y}_{2}\,)}{N_{\Delta}N_{\tilde{\Delta}}}. \tag{116}\]

We insert \(\mathbbm{1}_{\mathcal{C}_{\Delta,0}}\) into the two-point function of \(\mathcal{O}(Y)\):

\[\langle\Omega|\mathcal{O}(Y_{1})\mathbb{1}_{\mathcal{C}_{\Delta,0}}\mathcal{O}(Y_{2})|\Omega\rangle=\int_{P_{3},P_{4}}\,\langle\Omega|\mathcal{O}(Y_{1})|\Delta,P_{3}\rangle\,\frac{N_{\tilde{\Delta}}}{(-2P_{3}\cdot P_{4})^{\tilde{\Delta}}}\,\langle\Delta,P_{4}|\mathcal{O}(Y_{2})|\Omega\rangle\, \tag{117}\]

where \(\langle 0|\mathcal{O}(Y)|\Delta,P\rangle=c_{\mathcal{O}}(\Delta)\,\mathcal{K}_{\Delta}(Y,P)\). Next, we write the remaining double integral in local coordinates and use the Fourier transformation

\[\frac{N_{\tilde{\Delta}}}{x^{2\tilde{\Delta}}}=\int\frac{d^{d}\mathbf{k}}{(2\pi)^{d}}\left(\frac{k}{2}\right)^{d-2\Delta}e^{i\mathbf{k}\cdot\mathbf{x}}\, \tag{118}\]

which leads to

\[\langle\Omega|\mathcal{O}(\eta_{1},\mathbf{y}_{1})\mathbb{1}_{\mathcal{C}_{\Delta,0}}\mathcal{O}(\eta_{2},\mathbf{y}_{2})|\Omega\rangle=|c_{\mathcal{O}}(\Delta)|^{2}\int\frac{d^{d}\mathbf{k}}{(2\pi)^{d}}\left(\frac{k}{2}\right)^{d-2\Delta}\times\int\,d^{d}\mathbf{y}_{3}\,\mathcal{K}_{\Delta}(\eta_{1},\mathbf{y}_{1};\mathbf{y}_{3})e^{i\mathbf{k}\cdot\mathbf{y}_{3}}\int\,d^{d}\mathbf{y}_{4}\,\mathcal{K}_{\Delta}(\eta_{2},\mathbf{y}_{2};\mathbf{y}_{4})e^{-i\mathbf{k}\cdot\mathbf{y}_{4}}. \tag{119}\]

For the integral over \(\mathbf{y}_{3}\) and \(\mathbf{y}_{4}\), we use eq. (3.14) and eq. (3.15) respectively. In the end, we obtain the mode expansion of the free Green's function \(G_{-i(\Delta-\frac{1}{2})}\), cf. eq. (119):

\[\langle\Omega|\mathcal{O}(\eta_{1},\mathbf{y}_{1})\mathbb{1}_{\mathcal{C}_{\Delta,0}}\mathcal{O}(\eta_{2},\mathbf{y}_{2})|\Omega\rangle=|c_{\mathcal{O}}(\Delta)|^{2}G_{-i(\Delta-\frac{1}{2})}(\eta_{1},\mathbf{y}_{1};\eta_{2},\mathbf{y}_{2}). \tag{120}\]

Therefore the complementary series part of the Kallen-Lehmann decomposition of \(\mathcal{O}\) takes the form

\[\int_{-\frac{d}{2}}^{\frac{d}{2}}\,d\lambda\,\rho^{\mathcal{C},0}_{\mathcal{O}}(\lambda)\,G_{i\lambda}(Y_{1},Y_{2}), \tag{121}\]

where \(\rho^{\mathcal{C},0}_{\mathcal{O}}(\lambda)\) is a nonnegative function by construction. For spinning operators, there is a similar result.

## Appendix C Discrete series in free scalar theory in dS\({}_{2}\)

Let \(\Phi(Y)\) be a free scalar field of scaling dimension \(\Delta=\frac{1}{2}+i\nu\).
We have argued that discrete series representations cannot contribute to the two-point function of \(\Phi^{2}(Y)\), because they would always lead to an antipodal singularity, while \(\langle\Omega|\Phi^{2}(Y_{1})\Phi^{2}(Y_{2})|\Omega\rangle\) is clearly free of such a singularity given that \(|\Omega\rangle\) is the BD vacuum. On the other hand, according to the group theoretical analysis in [1], the two-particle Hilbert space \(\mathcal{H}_{2}\) of \(\Phi\) should contain all \(\mathcal{D}_{k}^{\pm}\) for \(k=2,4,6,\cdots\). To resolve this apparent contradiction, we will explicitly compute the matrix elements of \(\Phi^{2}\) between the vacuum \(|\Omega\rangle\) and discrete series states in \(\mathcal{H}_{2}\). More precisely, we will focus on the lowest-weight state \(|k\rangle_{k}\) in each \(\mathcal{D}_{k}^{+}\), because all \(\langle\Omega|\Phi^{2}(Y)|\ell\rangle_{k}\) vanish once \(\langle\Omega|\Phi^{2}(Y)|k\rangle_{k}\) vanishes, as a simple result of the \(SO(2,1)\) symmetry. For \(\mathcal{D}_{k}^{-}\), the argument is exactly the same. Let's first describe the single-particle states of \(\Phi\) in conformal global coordinates, using the following mode expansion in the BD vacuum [26]:

\[\Phi=\sum_{n\in\mathbb{Z}}\phi_{n}a_{n}+\phi_{n}^{*}a_{n}^{\dagger},\;\;\;\phi_{n}=g_{n}(\tau)\frac{e^{-in\varphi}}{\sqrt{2\pi}}\, \tag{108}\]

where (see footnote 29)

\[g_{n}(\tau)=\frac{\Gamma(n+\bar{\Delta})}{\sqrt{2}}e^{-in\tau}\mathbf{F}\left(\Delta,\bar{\Delta},n+1,\frac{1}{1+e^{2i\tau}}\right). \tag{109}\]

Footnote 29: We remind the reader that \(\mathbf{F}\) is our notation for the regularized hypergeometric function.

The canonically normalized single-particle states are \(|n\rangle_{\Delta}\equiv a_{n}^{\dagger}|\Omega\rangle\), and the action of \(\mathfrak{so}(2,1)\) on these states is computed in [26]:

\[L_{\pm}|n\rangle_{\Delta}=(n\pm\Delta)|n\pm 1\rangle_{\Delta},\;\;\;L_{0}|n\rangle_{\Delta}=n|n\rangle_{\Delta}. \tag{110}\]

The two-particle Hilbert space \(\mathcal{H}_{2}\) is spanned by \(|n,m\rangle_{\Delta}\equiv a_{n}^{\dagger}a_{m}^{\dagger}|\Omega\rangle\), and the lowest-weight state \(|k\rangle_{k}\) of \(\mathcal{D}_{k}^{+}\) in \(\mathcal{H}_{2}\) can be written as [1]

\[|k\rangle_{k}=c\sum_{\ell\in\mathbb{Z}}\frac{\Gamma(\Delta+\ell-k)}{\Gamma(\bar{\Delta}+\ell)}|\ell,k-\ell\rangle_{\Delta}\,, \tag{111}\]

where \(c\) is some unimportant normalization constant. With all these ingredients, we can now compute the matrix element of \(\Phi^{2}\) between \(\langle\Omega|\) and \(|k\rangle_{k}\):

\[\langle 0|\Phi^{2}(\tau,\varphi)|k\rangle_{k}=2c\sum_{\ell}\frac{\Gamma(\Delta+\ell-k)}{\Gamma(\bar{\Delta}+\ell)}\phi_{\ell}(\tau,\varphi)\phi_{k-\ell}(\tau,\varphi)=\frac{c\,e^{-ik(\tau+\varphi)}}{2\cosh(\pi\nu)}\sum_{\ell}(-)^{\ell}\mathbf{F}\left(\Delta,\bar{\Delta},\ell+1,\frac{1}{1+e^{2i\tau}}\right)\mathbf{F}\left(\Delta,\bar{\Delta},k-\ell+1,\frac{1}{1+e^{2i\tau}}\right). \tag{112}\]

To evaluate the sum over \(\ell\) in eq. (112), we use the series expansion of the regularized hypergeometric function:

\[\langle 0|\Phi^{2}(\tau,\varphi)|k\rangle_{k}=\frac{c\,e^{-ik(\tau+\varphi)}}{2\cosh(\pi\nu)}\sum_{n,m\geq 0}\frac{(\frac{1}{2}\pm i\nu)_{n}(\frac{1}{2}\pm i\nu)_{m}}{n!\,m!}\,\mathcal{T}(n\!+\!1,m\!+\!k\!+\!1)\left(1\!+\!e^{2i\tau}\right)^{-n-m}\, \tag{113}\]

where

\[\mathcal{T}(a,b)\equiv\sum_{\ell\in\mathbb{Z}}\frac{(-)^{\ell}}{\Gamma(a+\ell)\Gamma(b-\ell)},\ \ \ a,b\in\mathbb{Z}. \tag{110}\]
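One can quickly confirm numerically that this sum vanishes identically for integer \(a,b\), anticipating the argument below. A minimal sketch (our own code; the \((a,b)\) pairs are arbitrary samples):

```python
# Numerical sanity check (ours): T(a,b) = sum_l (-1)^l / (Gamma(a+l) Gamma(b-l))
# vanishes for integer a, b; 1/Gamma truncates the sum to 1-a <= l <= b-1.
import math

def T(a, b):
    total = 0.0
    for l in range(1 - a, b):          # support: 1-a <= l <= b-1
        total += (-1)**l / (math.gamma(a + l) * math.gamma(b - l))
    return total

print([T(a, b) for (a, b) in [(1, 3), (2, 4), (0, 5), (3, 3)]])  # all ~ 0
```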
Since \(a,b\) are integers, \(\mathcal{T}(a,b)\) is actually a finite sum supported on \(1-a\leq\ell\leq b-1\), which implies that \(\mathcal{T}\) vanishes when \(a+b<2\). When \(a+b\geq 2\), the function \(f(z)\equiv\frac{1}{\Gamma(a+z)\Gamma(b-z)}\) decays fast enough at large \(|z|\) that we can use the Sommerfeld-Watson transformation to claim that \(-\mathcal{T}(a,b)\) equals the sum over residues of \(\frac{\pi f(z)}{\sin(\pi z)}\) at the poles of \(f(z)\). On the other hand, it is clear that \(f(z)\) is an entire function, and hence \(\mathcal{T}(a,b)\) should also vanish when \(a+b\geq 2\). Altogether, \(\mathcal{T}(a,b)\) vanishes identically for any integers \(a\) and \(b\), implying that \(\langle 0|\Phi^{2}(\tau,\varphi)|k\rangle_{k}=0\). This simple computation shows explicitly why discrete series states do not appear in the two-point function of \(\Phi^{2}\). In other words, although the two-particle Hilbert space of \(\Phi\) contains irreducible components that furnish discrete series representations, it is impossible to excite states with such symmetry by acting with \(\Phi^{2}\) on the BD vacuum \(|\Omega\rangle\). Instead, if we consider two \(\Phi\) fields that are separated in spacetime, it can be checked similarly that \(\langle\Omega|\Phi(\tau_{1},\varphi_{1})\Phi(\tau_{2},\varphi_{2})|k\rangle_{k}\) does not vanish, which means that the discrete series can have a nonzero contribution to the four-point function of \(\Phi\). This is consistent with the analysis of the four-point function of late time operators in [6].

## Appendix D Properties of \(\phi^{\pm}_{\lambda,J}\) and \(\psi_{p,J}\)

In this appendix, we give various details of the functions \(\phi^{\pm}_{\lambda,J}\) and \(\psi_{p,J}\), focusing on their inner products with respect to \((\,,\,)^{\pm}_{J}\). We list the definitions of these functions,

\[\phi^{+}_{\lambda,J}(\sigma)=\partial^{J}_{\sigma}((\sigma+1)^{J}\partial^{J}_{\sigma})G_{\lambda,0}(\sigma)=\frac{\Gamma(\frac{1}{2}\pm i\lambda+J)}{2^{J+2}\pi}F\left(\frac{1}{2}+i\lambda+J,\frac{1}{2}-i\lambda+J,1,\frac{1+\sigma}{2}\right)\,\]
\[\phi^{-}_{\lambda,J}(\sigma)=\partial^{J}_{\sigma}((\sigma-1)^{J}\partial^{J}_{\sigma})G_{\lambda,0}(\sigma)=\frac{\Gamma(\frac{1}{2}\pm i\lambda+J)^{2}F\left(\frac{1}{2}\!+\!i\lambda\!+\!J,\frac{1}{2}\!-\!i\lambda\!+\!J,2J+1,\frac{1+\sigma}{2}\right)}{(-2)^{J+2}(2J)!\pi\Gamma(\frac{1}{2}\pm i\lambda)}\,\]
\[\psi_{p,J}(\sigma)=\frac{\Gamma(J+p)\Gamma(J+1-p)}{2^{J+2}\pi}F\left(J+p,J+1-p,1,\frac{1+\sigma}{2}\right)\, \tag{112}\]

and the two inner products \((\,,\,)^{\pm}_{J}\) (for real functions defined on \((-\infty,-1)\)):

\[(f,g)^{\pm}_{J}=\int_{-\infty}^{-1}d\sigma\ (1\mp\sigma)^{2J}\ f(\sigma)g(\sigma)\,. \tag{113}\]

\(\phi^{\pm}_{\lambda,J}\) share the same large \(-\sigma\) behavior at leading order:

\[\phi^{\pm}_{\lambda,J}(\sigma)\approx\frac{1}{(-2)^{J+2}\pi}\left(\frac{\Gamma(-2i\lambda)\Gamma(\frac{1}{2}+i\lambda)(\frac{1}{2}+i\lambda)^{2}_{J}}{\Gamma(\frac{1}{2}-i\lambda)}\left(-\frac{1+\sigma}{2}\right)^{-(\frac{1}{2}+i\lambda+J)}+c.c\right). \tag{114}\]

The asymptotic behavior of \(\psi_{p,J}(\sigma)\) can be easily obtained by using the relation

\[\psi_{p,J}(\sigma)=\frac{\Gamma(J+p)\Gamma(J+1-p)}{2^{J+2}\pi}\left(\frac{1-\sigma}{2}\right)^{-2J}F\left(1-J-p,p-J,1,\frac{1+\sigma}{2}\right). \tag{115}\]

Noticing that \(F\left(1-J-p,p-J,1,\frac{1+\sigma}{2}\right)\) is a polynomial of degree \(J-p\) in \(\frac{1+\sigma}{2}\), we find that \(\psi_{p,J}(\sigma)\) decays as \((-\sigma)^{-J-p}\) for large \(-\sigma\).
For each fixed \(J\), define the second order differential operator \(\mathfrak{D}_{+}^{(J)}\equiv(1-\sigma^{2})\partial_{\sigma}^{2}+2(1-(J+1)(\sigma+1))\partial_{\sigma}\), which is hermitian with respect to the inner product \((\,,\,)_{J}^{+}\). It admits \(\phi_{\lambda,J}^{+}\) and \(\psi_{p,J}\) as eigenfunctions, i.e.

\[\mathfrak{D}_{+}^{(J)}\phi_{\lambda,J}^{+}=\left[\left(\frac{1}{2}+J\right)^{2}+\lambda^{2}\right]\phi_{\lambda,J}^{+},\ \ \mathfrak{D}_{+}^{(J)}\psi_{p,J}=(J+p)(J+1-p)\psi_{p,J}\,, \tag{100}\]

which follows from the hypergeometric nature of these functions. The functions \(\{\phi_{\lambda,J}^{+}\}\) constitute the continuous spectrum of \(\mathfrak{D}_{+}^{(J)}\). They are \(\delta\)-function normalizable and their inner product can be easily extracted from the asymptotic behavior (114):

\[(\phi_{\lambda,J}^{+},\phi_{\lambda^{\prime},J}^{+})_{J}^{+}=\frac{\left(\frac{1}{2}\pm i\lambda\right)_{J}^{2}}{8\lambda\sinh(2\pi\lambda)}\left(\delta(\lambda-\lambda^{\prime})+\delta(\lambda+\lambda^{\prime})\right). \tag{101}\]

Similarly, \(\{\psi_{p,J}\}\) is an orthogonal basis of the discrete spectrum of \(\mathfrak{D}_{+}^{(J)}\). We can compute their norm \((\psi_{p,J},\psi_{p,J})_{J}^{+}\) by using eq. (115) for one of the two \(\psi_{p,J}\):

\[(\psi_{p,J},\psi_{p,J})_{J}^{+}=\mathfrak{N}\int_{-\infty}^{-1}d\sigma(1-\sigma)^{2J}F\left(1+J-p,J+p,1,\frac{1+\sigma}{2}\right)^{2}\]
\[=4^{J}\mathfrak{N}\,\int_{-\infty}^{-1}d\sigma F\left(1+J-p,J+p,1,\frac{1+\sigma}{2}\right)F\left(1-J-p,p-J,1,\frac{1+\sigma}{2}\right)\]
\[=2^{2J+1}\mathfrak{N}\,\int_{0}^{\infty}dsF\left(1+J-p,J+p,1,-s\right)F\left(1-J-p,p-J,1,-s\right)\, \tag{102}\]

where \(s=-\frac{1+\sigma}{2}\), and

\[\mathfrak{N}=\frac{1}{4^{2+J}\pi^{2}}\Gamma(J+p)^{2}\Gamma(1+J-p)^{2}. \tag{103}\]

As we have mentioned above, \(F\left(1-J-p,p-J,1,-s\right)\) is a polynomial of degree \(J-p\) in \(s\). One crucial point is that only the leading term of this polynomial contributes to the integral (102), as a result of the Mellin transformation of hypergeometric functions

\[\int_{0}^{\infty}dx\,x^{t-1}F(a,b,c,-x)=\frac{\Gamma(c)\Gamma(a-t)\Gamma(b-t)}{\Gamma(a)\Gamma(b)\Gamma(c-t)}\Gamma(t)\, \tag{104}\]

which holds when \(\min(\operatorname{Re}\left(a\right),\operatorname{Re}\left(b\right))>\operatorname{Re}t>0\).
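The Mellin transform (104) can itself be confirmed by direct quadrature. A minimal sketch (our own code, at sample parameter values inside the stated domain of validity):

```python
# Numerical check (ours) of the Mellin transform (104) at sample values.
import mpmath as mp

a, b, c, t = 3, mp.mpf('2.5'), 2, mp.mpf('1.2')   # min(a, b) > t > 0
lhs = mp.quad(lambda x: x**(t - 1)*mp.hyp2f1(a, b, c, -x), [0, mp.inf])
rhs = mp.gamma(c)*mp.gamma(a - t)*mp.gamma(b - t)*mp.gamma(t)/(mp.gamma(a)*mp.gamma(b)*mp.gamma(c - t))
print(lhs, rhs)   # should agree
```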
More precisely, performing the \(s\) integral against the monomial \(s^{m}\) in \(F\left(1-J-p,p-J,1,-s\right)\) leads to a factor \((-m)_{J-p}\), and hence the integral vanishes when \(m<J-p\). The leading term of \(F\left(1-J-p,p-J,1,-s\right)\) can be easily extracted from the series expansion of the hypergeometric function:

\[F\left(1-J-p,p-J,1,-s\right)=\frac{\Gamma(J+p)}{\Gamma(2p)\Gamma(J-p+1)}(-s)^{J-p}+\cdots \tag{105}\]

Therefore, the integral (102) reduces to

\[(\psi_{p,J},\psi_{p,J})_{J}^{+}=\frac{(-)^{J-p}2^{2J+1}\Gamma(J+p)}{\Gamma(2p)\Gamma(J-p+1)}\mathfrak{N}\,\int_{0}^{\infty}\,dsF\left(1+J-p,J+p,1,-s\right)s^{J-p}\, \tag{106}\]

which can be evaluated as the analytical continuation of (104):

\[\int_{0}^{\infty}\,dsF\,(1+J-p,J+p,1,-s)\,s^{J-p}=\lim_{t\to J-p+1}\frac{\Gamma(J+p-t)\Gamma(J-p+1-t)}{\Gamma(J+p)\Gamma(J-p+1)\Gamma(1-t)}\Gamma(t)=(-)^{J-p}\frac{\Gamma(2p-1)\Gamma(J-p+1)}{\Gamma(J+p)}\,.\]
(D.12)

Altogether, we obtain the norm of \(\psi_{p,J}\) with respect to \((\,,\,)_{J}^{+}\):

\[(\psi_{p,J},\psi_{p,J})_{J}^{+}=\frac{2^{2J+1}}{2p-1}\mathfrak{N}=\frac{\Gamma(J+p)^{2}\Gamma(1+J-p)^{2}}{8\pi^{2}(2p-1)}\,.\]
(D.13)

For \(\phi_{\lambda,J}^{-}\), it is easy to see that they are eigenfunctions of the differential operator \(\mathfrak{D}_{-}^{(J)}\equiv(1-\sigma^{2})\partial_{\sigma}^{2}+2((2J+1)-(J+1)(\sigma+1))\partial_{\sigma}\), which is hermitian with respect to \((\,,\,)_{J}^{-}\):

\[\mathfrak{D}_{-}^{(J)}\phi_{\lambda,J}^{-}=\left[\left(\frac{1}{2}+J\right)^{2}+\lambda^{2}\right]\phi_{\lambda,J}^{-}\,.\]
(D.14)

They are \(\delta\)-function normalizable. Because of eq. (114), their normalization is the same as for \(\phi_{\lambda,J}^{+}\):

\[(\phi_{\lambda,J}^{-},\phi_{\lambda^{\prime},J}^{-})_{J}^{-}=\frac{\left(\frac{1}{2}\pm i\lambda\right)_{J}^{2}}{8\lambda\sinh(2\pi\lambda)}\left(\delta(\lambda-\lambda^{\prime})+\delta(\lambda+\lambda^{\prime})\right)\,.\]
(D.15)

Unlike \(\mathfrak{D}_{+}^{(J)}\), \(\mathfrak{D}_{-}^{(J)}\) does not have a discrete spectrum.
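As a consistency check of (D.13), the norm integral can be computed directly with numerical quadrature. A minimal sketch (our own code, for \(J=2\), \(p=1\), where the exact answer is \(\Gamma(3)^{2}\Gamma(2)^{2}/(8\pi^{2})=1/(2\pi^{2})\)):

```python
# Direct quadrature check (ours) of the norm (D.13) for J = 2, p = 1.
import mpmath as mp

J, p = 2, 1

def psi(sig):
    pref = mp.gamma(J + p)*mp.gamma(J + 1 - p)/(2**(J + 2)*mp.pi)
    return pref*mp.hyp2f1(J + p, J + 1 - p, 1, (1 + sig)/2)

norm = mp.quad(lambda s: (1 - s)**(2*J)*psi(s)**2, [-mp.inf, -1])
exact = mp.gamma(J + p)**2*mp.gamma(1 + J - p)**2/(8*mp.pi**2*(2*p - 1))
print(norm, exact)   # both should give 1/(2 pi^2) ~ 0.0506606
```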
## Appendix E Discrete series \(\mathcal{D}_{p}\) in the two-point function of \(\mathcal{O}^{(J)}\) when \(p>J\)

In this appendix, we prove that including \(\mathcal{D}_{p}\) in the two-point function of \(\mathcal{O}^{(J)}\) in dS\({}_{2}\) leads to antipodal singularities when \(p>J\). More precisely, we are considering

\[\mathcal{G}_{p}^{(J)}(Y_{1},Y_{2};W_{1},W_{2})\equiv\langle\Omega|\mathcal{O}^{(J)}(Y_{1},W_{1})\mathbb{1}_{\mathcal{D}_{p}}\mathcal{O}^{(J)}(Y_{2},W_{2})|\Omega\rangle\,,\]
(E.1)

which has two independent chiral components

\[\mathcal{G}_{p}^{(J)}(Y_{1},Y_{2};W_{1}^{+},W_{2}^{\pm})=(W_{1}^{+}\cdot W_{2}^{\pm})^{J}\,\mathcal{G}_{p}^{(J,\pm)}(\sigma),\ \ \ \sigma=Y_{1}\cdot Y_{2}\,.\]
(E.2)

After a short computation, one can show that (see footnote 30)

\[\nabla_{1}^{2}\mathcal{G}_{p}^{(J)}(Y_{1},Y_{2};W_{1},W_{2})=\left(C_{2}^{SO(2,1)}+J^{2}\right)\,\mathcal{G}_{p}^{(J)}(Y_{1},Y_{2};W_{1},W_{2})\,,\]
(E.3)

where \(\nabla_{1}^{2}=\nabla_{1A}\nabla_{1}^{A}\), with \(\nabla_{1A}=\partial_{Y_{1}^{A}}-Y_{1A}\,Y_{1}\cdot\partial_{Y_{1}}-W_{1A}(Y_{1}\cdot\partial_{W_{1}})\) given by eq. (2.27).

Footnote 30: A similar result has been obtained in AdS [80].

Due to the \(SO(2,1)\) invariance of \(\langle\Omega|\), the Casimir \(C_{2}^{SO(2,1)}\) effectively acts on the projection operator \(\mathbb{1}_{\mathcal{D}_{p}}\) and yields \(p(1-p)\). Therefore, \(\mathcal{G}_{p}^{(J)}(Y_{1},Y_{2};W_{1},W_{2})\) is an eigenmode of \(\nabla_{1}^{2}\) with eigenvalue \(p(1-p)+J^{2}\). Using the explicit expression of \(\nabla_{1}^{2}\), we can further obtain a second order differential equation for \(\mathcal{G}_{p}^{(J,\pm)}(\sigma)\):

\[(1-\sigma^{2})\partial_{\sigma}^{2}\mathcal{G}_{p}^{(J,\pm)}-2\,((1+J)\sigma\pm J)\,\partial_{\sigma}\mathcal{G}_{p}^{(J,\pm)}(\sigma)=(1+J-p)(J+p)\mathcal{G}_{p}^{(J,\pm)}(\sigma) \tag{102}\]

For each fixed sign, i.e. each chiral component, the ODE (102) has two linearly independent solutions, one of which grows for large \(-\sigma\). Such solutions are clearly unphysical. The other, decaying solution is given by

\[\mathcal{G}_{p}^{(J,\pm)}(\sigma)=\left(\frac{2}{1-\sigma}\right)^{J+p}F\left(p\mp J,p+J,2p,\frac{2}{1-\sigma}\right) \tag{103}\]

These solutions have power law decay \((-\sigma)^{-J-p}\) for two points at large spacelike separation. However, they are not regular when the two points are antipodal. Their leading singular behaviors around \(\sigma=-1\) are

\[\mathcal{G}_{p}^{(J,+)}(\sigma)\stackrel{{\sigma\rightarrow-1}}{{\approx}}-\frac{\Gamma(2p)}{\Gamma(p-J)\Gamma(p+J)}\log\left(-\frac{\sigma+1}{2}\right)\,\]
\[\mathcal{G}_{p}^{(J,-)}(\sigma)\stackrel{{\sigma\rightarrow-1}}{{\approx}}-\frac{\Gamma(2p)\Gamma(2J)}{\Gamma(p-J)\Gamma(p+J)}\left(\frac{2}{1+\sigma}\right)^{2J}. \tag{104}\]

Therefore, the discrete series \(\mathcal{D}_{p}\) should not contribute to \(\langle\Omega|\mathcal{O}^{(J)}(Y_{1},W_{1})\mathcal{O}^{(J)}(Y_{2},W_{2})|\Omega\rangle\) when \(p>J\).
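This behavior is easy to visualize numerically: the singular coefficients in eq. (104) contain \(1/\Gamma(p-J)\), which vanishes for integers \(p\leq J\). A minimal sketch (our own code) evaluating the \(+\) component of eq. (103) near the antipodal point for \(p>J\) and \(p<J\):

```python
# Illustration (ours) of the antipodal behavior of eq. (103): for p > J the
# "+" component diverges logarithmically as sigma -> -1, while for p <= J it
# stays finite (the coefficient 1/Gamma(p-J) vanishes).
import mpmath as mp

def Gp(p, J, sig, chir=+1):
    return (2/(1 - sig))**(J + p)*mp.hyp2f1(p - chir*J, p + J, 2*p, 2/(1 - sig))

for eps in [mp.mpf('1e-3'), mp.mpf('1e-6')]:
    print(Gp(3, 1, -1 - eps), Gp(1, 3, -1 - eps))  # first grows ~ log(1/eps), second saturates
```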
Let us also define the two-point invariants \(\sigma\) in both dS and EAdS as \[\sigma^{\rm dS}=\frac{Y_{1}\cdot Y_{2}}{R^{2}}\,\qquad\sigma^{\rm EAdS}=\frac{X_{1}\cdot X_{2}}{R^{2}} \tag{F.6}\] which for instance in planar coordinates and Poincare coordinates are given by respectively \[\sigma^{\rm dS}=\frac{\eta_{1}^{2}+\eta_{2}^{2}-|{\bf y}_{12}|^{2}}{2\eta_{1}\eta_{2}}\,\qquad\sigma^{\rm EAdS}=-\frac{z_{1}^{2}+z_{2}^{2}+|{\bf x}_{12}|^{2}}{2z_{1}z_{2}}. \tag{F.7}\] This shows that the ranges of the two-point invariants are \(\sigma^{\rm dS}\in\mathbb{R}\) and \(\sigma^{\rm EAdS}\in(-\infty,-1)\). In the main text, we drop the superscript of \(\sigma^{\rm dS}\) as we are focusing on de Sitter spacetime. The Wick rotation discussed in section 4.1 transforms \(\sigma^{\rm dS}\to\sigma^{\rm EAdS}\). Similarly to de Sitter, we use the index-free notation to represent the traceless symmetric tensors in EAdS by contracting the spin \(J\) tensor indices with \(J\) auxiliary vectors \(W^{A}\). They satisfy tangential and null conditions in EAdS: \[W\cdot X=W^{2}=0. \tag{F.8}\] In embedding space, the lightcone in \(\mathbb{R}^{d+1,1}\) is the boundary of both EAdS and of dS. We thus use the same symbol \(P\) to indicate null rays: \[P^{2}=0\,\quad P\sim\alpha P \tag{F.9}\] for \(\alpha\in\mathbb{R}\). In Poincare coordinates, this corresponds to the \(z=0\) plane where the EAdS generators reduce to generators of a \(d\)-dimensional conformal theory on a Euclidean plane spanned by \({\bf x}\).

### Harmonic functions

Here we summarize some facts about Euclidean AdS harmonic functions in embedding space, following the notation of [34], e.g. taking \(R=1\). Harmonic functions are defined as the regular divergence-free eigenfunctions of the Laplacian operator in EAdS: \[\begin{split}\left(\nabla_{1}^{2}+\frac{d^{2}}{4}+\lambda^{2}+\ell\right)\ \Omega_{\lambda,\ell}(X_{1},X_{2};W_{1},W_{2})&=0\\ \nabla_{1}\cdot K_{1}\ \Omega_{\lambda,\ell}(X_{1},X_{2};W_{1},W_{2})&=0\.\end{split} \tag{F.10}\] They are proportional to a difference of two EAdS propagators \(\Pi_{\Delta,J}(X_{1},X_{2};W_{1},W_{2})\) with an overall factor \[\Omega_{\lambda,\ell}(X_{1},X_{2};W_{1},W_{2})=\frac{i\lambda}{2\pi}\left(\Pi_{\frac{d}{2}+i\lambda,\ell}(X_{1},X_{2};W_{1},W_{2})-\Pi_{\frac{d}{2}-i\lambda,\ell}(X_{1},X_{2};W_{1},W_{2})\right)\,. \tag{F.11}\] One can explicitly check from the short distance limit of the bulk-to-bulk propagators that the harmonic functions are regular at coincident points. They also satisfy the orthogonality relation \[\begin{split}\frac{1}{\ell!(\frac{d-1}{2})_{\ell}}\int_{X}\Omega_{\lambda,\ell}(X_{1},X;W_{1},K)&\Omega_{\lambda^{\prime},\ell}(X,X_{2};W,W_{2})\\ &=\frac{1}{2}\left[\delta(\lambda-\lambda^{\prime})+\delta(\lambda+\lambda^{\prime})\right]\Omega_{\lambda,\ell}(X_{1},X_{2};W_{1},W_{2})\end{split} \tag{F.12}\] where we introduce the shorthand notation for integrating over EAdS, \[\int_{X}\equiv\int d^{d+2}X\,\delta(X^{2}+1)\theta(X^{0})\,, \tag{F.13}\] in which the term \(\theta(X^{0})\) encodes that we picked the upper hyperboloid (\(X^{0}>0\)) in our definition of EAdS. Harmonic functions can equivalently be defined as a product of bulk-to-boundary propagators integrated over the boundary point \[\Omega_{\lambda,\ell}(X_{1},X_{2};W_{1},W_{2})=\frac{\lambda^{2}}{\pi\ell!(\frac{d}{2}-1)_{\ell}}\int_{P}\ \Pi_{\frac{d}{2}+i\lambda,\ell}(X_{1},P;W_{1},D_{Z})\Pi_{\frac{d}{2}-i\lambda,\ell}(P,X_{2};Z,W_{2})\,. \tag{F.14}\] We refer to this as the split representation.
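On functions of the invariant \(\sigma=X_{1}\cdot X_{2}\) alone, the EAdS Laplacian reduces to \(\nabla^{2}f(\sigma)=(\sigma^{2}-1)f''(\sigma)+(d+1)\sigma f'(\sigma)\), so the defining equation (F.10) can be tested directly on the explicit \(\ell=0\) harmonic function quoted below in (F.24). A minimal numerical sketch with mpmath (the reduction of \(\nabla^{2}\) to \(\sigma\) and the sample values of \(d\), \(\lambda\) are our own choices):

```python
import mpmath as mp

# Sketch of a check that the explicit scalar harmonic function (F.24)
# obeys (F.10) with ell = 0, dropping the lambda-dependent constant prefactor.
mp.mp.dps = 25
d, lam = 3, mp.mpf("0.8")

def Omega0(sigma):
    return mp.hyp2f1(d/2 + 1j*lam, d/2 - 1j*lam, (d + 1)/mp.mpf(2), (1 + sigma)/2)

sigma0 = mp.mpf("-2.5")            # a generic spacelike-separated test point
lap = (sigma0**2 - 1) * mp.diff(Omega0, sigma0, 2) \
      + (d + 1) * sigma0 * mp.diff(Omega0, sigma0)
print(lap + (d**2 / mp.mpf(4) + lam**2) * Omega0(sigma0))   # -> ~0
```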
Bulk-to-boundary propagators are defined and normalized through \[\begin{split}\Pi_{\Delta,\ell}(X,P;W,Z)&=\mathfrak{C}_{\Delta,\ell}\frac{((-2P\cdot X)(W\cdot Z)+2(W\cdot P)(Z\cdot X))^{\ell}}{(-2P\cdot X)^{\Delta+\ell}}\,,\\ \mathfrak{C}_{\Delta,\ell}&=\frac{(\ell+\Delta-1)\Gamma(\Delta)}{2\pi^{\frac{d}{2}}(\Delta-1)\Gamma(\Delta+1-\frac{d}{2})}\,,\end{split} \tag{F.15}\] and the action of the differential operator \(D_{Z}\) that contracts the boundary indices is defined in (2.42). In EAdS, the action of the \(K\) operators which contract the bulk indices is similar to the one in dS (2.28), up to some signs which are necessary to keep its action internal to the manifold defined by \(X^{2}=-1\), \[\begin{split}K_{A}=&\frac{d-1}{2}\left[\frac{\partial}{\partial W^{A}}+X_{A}\left(X\cdot\frac{\partial}{\partial W}\right)\right]+\left(W\cdot\frac{\partial}{\partial W}\right)\frac{\partial}{\partial W^{A}}\\ &+X_{A}\left(X\cdot\frac{\partial}{\partial W}\right)\left(W\cdot\frac{\partial}{\partial W}\right)-\frac{1}{2}\,W_{A}\left[\frac{\partial^{2}}{\partial W\cdot\partial W}+\left(X\cdot\frac{\partial}{\partial W}\right)\left(X\cdot\frac{\partial}{\partial W}\right)\right]\,.\end{split} \tag{F.16}\] The action on \(W\) vectors is analogous to (2.29): \[\frac{1}{J!(\frac{d-1}{2})_{J}}K_{A_{1}}\cdots K_{A_{J}}W^{B_{1}}\cdots W^{B_{J}}=\frac{1}{J!}\sum_{\pi}G_{A_{\pi_{1}}}^{\ B_{1}}\cdots G_{A_{\pi_{J}}}^{\ B_{J}}-\text{traces}\,, \tag{F.17}\] with \[G_{AB}=\eta_{AB}+X_{A}X_{B}\,. \tag{F.18}\] The following commutator will be useful for our computations \[\begin{split}[K\cdot\nabla,(W\cdot\nabla)^{n}]=&\frac{n}{2}(W\cdot\nabla)^{n-1}(d+n+2\mathcal{D}_{W}-2)\\ &\times(1-n-(n+\mathcal{D}_{W}-1)(d+n+\mathcal{D}_{W}-2)+\nabla^{2})\,,\end{split} \tag{F.19}\] where \(\mathcal{D}_{W}=W\cdot\partial_{W}\). Importantly, \(\nabla\cdot K=K\cdot\nabla\). From the embedding space point of view, the harmonic functions in EAdS and the bulk-to-bulk propagators in dS satisfy the same Laplacian equation up to the Wick rotation discussed in section 4.1. They are also divergence-free and regular at \(\sigma\to-1\). This is another way to see that they have to be proportional to each other, as mentioned in section 4.4.

### Explicit form of harmonic functions up to \(J=2\)

We have reviewed the definition of the harmonic functions and how they can be expressed through a split representation. But to carry out numerical checks of the Kallen-Lehmann decomposition, it is useful to have their explicit expressions. By \(SO(d+1,1)\) invariance, any two-point function in the index-free formalism has to organize itself in terms of polynomials of \(W_{1}\cdot W_{2}\) and \((W_{1}\cdot X_{2})(W_{2}\cdot X_{1})\) as follows \[G_{\mathcal{O}^{(J)}}(X_{1},X_{2};W_{1},W_{2})=\sum_{n=0}^{J}(W_{1}\cdot W_{2})^{J-n}((W_{1}\cdot X_{2})(W_{2}\cdot X_{1}))^{n}\mathcal{G}^{(n)}_{\mathcal{O}^{(J)}}(\sigma)\,, \tag{F.20}\] with \(\mathcal{G}^{(n)}_{\mathcal{O}^{(J)}}(\sigma)\) being scalar functions of \(\sigma=X_{1}\cdot X_{2}\,.\) This is true also for derivatives of harmonic functions \[((W_{1}\cdot\nabla_{1})(W_{2}\cdot\nabla_{2}))^{J-\ell}\Omega_{\lambda,\ell}(X_{1},X_{2};W_{1},W_{2})=\sum_{n=0}^{J}(W_{1}\cdot W_{2})^{J-n}((W_{1}\cdot X_{2})(W_{2}\cdot X_{1}))^{n}h^{(J)}_{\ell,n}(\sigma)\,, \tag{F.21}\] for some scalar functions \(h^{(J)}_{\ell,n}(\sigma)\).
If we plug (F.20) and (F.21) into the formula for the Kallen-Lehmann decomposition analytically continued to EAdS (4.5), we can match the coefficients of the tensor structures on both sides and obtain a set of scalar equations \[\mathcal{G}^{(n)}_{\mathcal{O}^{(J)}}(\sigma)=\sum_{\ell=0}^{J}\int_{\mathbb{R}}d\lambda\ \Gamma(\pm i\lambda)\rho^{\mathcal{P},\ell}_{\mathcal{O}^{(J)}}(\lambda)h^{(J)}_{\ell,n}(\sigma)\,, \tag{F.22}\] where we only wrote the principal series part for simplicity. These are the integrals which we carry out numerically and with which we check the validity of the various spectral densities we have derived. Notice that under the Wick rotation described in section 4.1, \(\sigma^{\text{dS}}\to\sigma^{\text{EAdS}}\), so that (F.22) is valid both in EAdS (\(\sigma\in(-\infty,-1)\)) and in de Sitter (\(\sigma\in(-\infty,\infty)\)) provided one gives a small imaginary part to \(\sigma\) when going above the cut at timelike separation. For example, consider the case of a spin 2 CFT primary of conformal weight \(\mathbf{\Delta}\). As argued in section 5.2.3, we have \(\mathcal{G}^{(2)}_{T}(\sigma)=c_{T}(1-\sigma)^{-\mathbf{\Delta}}\). (F.22) then reads \[c_{T}(1-\sigma)^{-\mathbf{\Delta}}=\int_{\mathbb{R}}d\lambda\ \Gamma(\pm i\lambda)\left(\rho^{\mathcal{P},0}_{T}(\lambda)h^{(2)}_{0,0}(\sigma)+\rho^{\mathcal{P},1}_{T}(\lambda)h^{(2)}_{1,0}(\sigma)+\rho^{\mathcal{P},2}_{T}(\lambda)h^{(2)}_{2,0}(\sigma)\right)\,, \tag{F.23}\] with the spectral densities of section 5.2.3, and we choose \(d\geq 2\). In the rest of this subsection, we report the explicit expression of the \(h^{(J)}_{\ell,n}(\sigma)\) functions for \(J=0,1,2\) and \(n,\ell\in[0,J]\), so that all these kinds of integrals can be checked to hold by numerical evaluation. For \(J=0\), we have the well-known expression of the scalar harmonic function \[\Omega_{\lambda,0}(X_{1},X_{2})=h^{(0)}_{0,0}(\sigma)=\frac{\Gamma(\frac{d}{2}\pm i\lambda)}{2^{d+1}\pi^{\frac{d+1}{2}}\Gamma(\pm i\lambda)}\mathbf{F}\left(\frac{d}{2}-i\lambda,\frac{d}{2}+i\lambda,\frac{d+1}{2},\frac{1+\sigma}{2}\right) \tag{F.24}\] Let us explicitly show how to compute \(h^{(1)}_{1,n}\) for \(n=0\) and \(n=1\). The other functions are obtained with the same mechanical steps. We start by considering the relevant harmonic function and its split representation \[\Omega_{\lambda,1}(X_{1},X_{2};W_{1},W_{2})=\frac{\lambda^{2}}{\pi(\frac{d-2}{2})}\int_{P}\Pi_{\Delta,1}(X_{1},P;W_{1},D_{Z})\Pi_{\bar{\Delta},1}(X_{2},P;W_{2},Z)\,, \tag{F.25}\] where the apparent \(d=2\) pole is actually canceled by the action of \(D_{Z}\) on \(Z\). We use the notation \(\Delta=\frac{d}{2}+i\lambda\) in this appendix, and bulk-to-boundary propagators are defined in (F.15). Contracting the boundary indices we obtain \[\Omega_{\lambda,1}(X_{1},X_{2};W_{1},W_{2})= \frac{\lambda^{2}\mathfrak{C}_{\Delta,1}\mathfrak{C}_{\bar{\Delta},1}}{\pi}\int_{P}\left[\frac{W_{1}\cdot W_{2}}{(-2P\cdot X_{1})^{\Delta}(-2P\cdot X_{2})^{\bar{\Delta}}}+\frac{(P\cdot W_{2})(W_{1}\cdot X_{2})}{(-2P\cdot X_{1})^{\Delta}(-2P\cdot X_{2})^{\bar{\Delta}+1}}\right.
\tag{F.26}\] \[\left.\qquad+\frac{(P\cdot W_{1})(W_{2}\cdot X_{1})}{(-2P\cdot X_{1})^{\Delta+1}(-2P\cdot X_{2})^{\bar{\Delta}}}+\frac{(P\cdot W_{1})(P\cdot W_{2})(X_{1}\cdot X_{2})}{(-2P\cdot X_{1})^{\Delta+1}(-2P\cdot X_{2})^{\bar{\Delta}+1}}\right].\] We can trade factors of \(P\) for derivatives with respect to \(X_{1}\) and \(X_{2}\) and obtain \[\Omega_{\lambda,1}(X_{1},X_{2};W_{1},W_{2})= \frac{\mathfrak{C}_{\Delta,1}\mathfrak{C}_{\bar{\Delta},1}}{\Delta\bar{\Delta}}\Bigg{(}\Delta\bar{\Delta}W_{1}\cdot W_{2}+\frac{\Delta}{2}(W_{1}\cdot X_{2})(W_{2}\cdot\partial_{X_{2}})+\frac{\bar{\Delta}}{2}(W_{2}\cdot X_{1})(W_{1}\cdot\partial_{X_{1}}) \tag{F.27}\] \[\left.\qquad+\frac{1}{4}(X_{1}\cdot X_{2})(W_{1}\cdot\partial_{X_{1}})(W_{2}\cdot\partial_{X_{2}})\Bigg{)}\frac{\lambda^{2}}{\pi}\int_{P}\frac{1}{(-2P\cdot X_{1})^{\Delta}(-2P\cdot X_{2})^{\bar{\Delta}}}\,.\] We can now undo the split representation such that this becomes just a sum of derivatives of the scalar harmonic function \[\Omega_{\lambda,1}(X_{1},X_{2};W_{1},W_{2}) =\frac{\mathfrak{C}_{\Delta,1}\mathfrak{C}_{\bar{\Delta},1}}{\Delta\bar{\Delta}\mathfrak{C}_{\Delta,0}\mathfrak{C}_{\bar{\Delta},0}}\Bigg{(}\Delta\bar{\Delta}W_{1}\cdot W_{2}+\frac{\Delta}{2}(W_{1}\cdot X_{2})(W_{2}\cdot\partial_{X_{2}}) \tag{F.28}\] \[+\frac{\bar{\Delta}}{2}(W_{2}\cdot X_{1})(W_{1}\cdot\partial_{X_{1}})+\frac{1}{4}(X_{1}\cdot X_{2})(W_{1}\cdot\partial_{X_{1}})(W_{2}\cdot\partial_{X_{2}})\Bigg{)}\Omega_{\lambda,0}(X_{1},X_{2})\,.\] By carrying out these derivatives explicitly, and collecting the coefficients of \(W_{1}\cdot W_{2}\) and \((X_{1}\cdot W_{2})(X_{2}\cdot W_{1})\) we can read off \[h^{(1)}_{1,0}(\sigma) =\mathcal{N}^{(1)}_{1}\left[2\mathbf{F}^{(0)}(\sigma)+\sigma\mathbf{F}^{(1)}(\sigma)\right]\,, \tag{F.29}\] \[h^{(1)}_{1,1}(\sigma) =\mathcal{N}^{(1)}_{1}\Bigg{[}d\mathbf{F}^{(1)}(\sigma)+\frac{\sigma}{8}((d+2)^{2}+4\lambda^{2})\mathbf{F}^{(2)}(\sigma)\Bigg{]}\,,\] where we introduced the shorthand notation \[\mathbf{F}^{(a)}(\sigma)\equiv\frac{1}{\Gamma(\frac{d+1}{2}+a)}\ F\left(\frac{d}{2}+i\lambda+a,\frac{d}{2}-i\lambda+a,\frac{d+1}{2}+a,\frac{\sigma+1}{2}\right)\,. \tag{F.30}\] and defined \[\mathcal{N}_{1}^{(1)}=\frac{\Gamma(\frac{d}{2}\pm i\lambda+1)\lambda\sinh(\pi\lambda)}{2^{d}\pi^{\frac{d+3}{2}}((d-2)^{2}+4\lambda^{2})} \tag{F.31}\] We report here the other functions for \(J=1\), which can be computed with the same procedure \[\begin{split} h^{(1)}_{0,0}(\sigma)&=\mathcal{N}_{0}^{(1)}\left(\frac{(d-2)^{2}}{4}+\lambda^{2}\right)\mathbf{F}^{(1)}(\sigma)\,,\\ h^{(1)}_{0,1}(\sigma)&=\frac{1}{2}\mathcal{N}_{0}^{(1)}\left(\frac{(d-2)^{2}}{4}+\lambda^{2}\right)\left(\frac{(d+2)^{2}}{4}+\lambda^{2}\right)\mathbf{F}^{(2)}(\sigma)\,.\end{split} \tag{F.32}\] with \(\mathcal{N}_{0}^{(1)}=\mathcal{N}_{1}^{(1)}\). In the rest of this subsection we report the functions for \(J=2\). We start from the functions \(h^{(2)}_{\ell,n}\) with \(\ell=2\).
\[\begin{split}\frac{h^{(2)}_{2,0}(\sigma)}{\mathcal{N}_{2}^{(2)}}=& 2d\left(\mathbf{F}^{(0)}(\sigma)+\sigma\mathbf{F}^{(1)}(\sigma)\right)+(d\sigma^{2}-1)\mathbf{F}^{(2)}(\sigma)\\ \frac{h^{(2)}_{2,1}(\sigma)}{\mathcal{N}_{2}^{(2)}}=& 2(d)_{2}\mathbf{F}^{(1)}(\sigma)+d\sigma\left(5+3d+\frac{d^{2}}{4}+\lambda^{2}\right)\mathbf{F}^{(2)}(\sigma)+(d\sigma^{2}-1)\left(\frac{d}{2}\pm i\lambda+2\right)\mathbf{F}^{(3)}(\sigma)\\ \frac{h^{(2)}_{2,2}(\sigma)}{\mathcal{N}_{2}^{(2)}}=& \frac{(d)_{3}}{2}\mathbf{F}^{(2)}(\sigma)+\left(\frac{d}{2}\pm i\lambda+2\right)\left[\frac{d(d+2)\sigma}{2}\mathbf{F}^{(3)}(\sigma)+\frac{(d\sigma^{2}-1)}{8}\left(\frac{d}{2}\pm i\lambda+3\right)\mathbf{F}^{(4)}(\sigma)\right]\end{split} \tag{F.33}\] with \[\mathcal{N}_{2}^{(2)}=\frac{(d+2)^{2}+4\lambda^{2}}{d(d^{2}+4\lambda^{2})}\mathcal{N}_{1}^{(1)}\,, \tag{F.34}\] and \((d)_{n}\) is the Pochhammer symbol. For \(J=2\) and \(\ell=1\), instead, we have \[\begin{split}\frac{h^{(2)}_{1,0}}{\mathcal{N}_{1}^{(2)}}=&\mathbf{F}^{(1)}(\sigma)+\sigma\mathbf{F}^{(2)}(\sigma)\,,\\ \frac{h^{(2)}_{1,1}}{\mathcal{N}_{1}^{(2)}}=&\frac{1}{8}\bigg{[}(d(d+12)+4(\lambda^{2}+5))\mathbf{F}^{(2)}(\sigma)+2\sigma((d+4)^{2}+4\lambda^{2})\mathbf{F}^{(3)}(\sigma)\bigg{]}\,,\\ \frac{h^{(2)}_{1,2}}{\mathcal{N}_{1}^{(2)}}=&\frac{(d+4)^{2}+4\lambda^{2}}{128}\bigg{[}8(d+2)\mathbf{F}^{(3)}(\sigma)+\sigma((d+6)^{2}+4\lambda^{2})\mathbf{F}^{(4)}(\sigma)\bigg{]}\,,\end{split} \tag{F.35}\] with \[\mathcal{N}_{1}^{(2)}=\frac{\Gamma(\frac{d}{2}\pm i\lambda+2)\lambda\sinh(\pi\lambda)}{2^{d}\pi^{\frac{d+3}{2}}((d-2)^{2}+4\lambda^{2})} \tag{F.36}\] and for \(J=2\) with \(\ell=0\) \[\begin{split} h^{(2)}_{0,0}=&\frac{\Gamma(\frac{d}{2}\pm i\lambda+2)\lambda\sinh(\pi\lambda)}{2^{d+2}\pi^{\frac{d+3}{2}}}\mathbf{F}^{(2)}(\sigma)\,,\quad h^{(2)}_{0,1}=\frac{\Gamma(\frac{d}{2}\pm i\lambda+3)\lambda\sinh(\pi\lambda)}{2^{d+2}\pi^{\frac{d+3}{2}}}\mathbf{F}^{(3)}(\sigma)\,,\\ h^{(2)}_{0,2}=&\frac{\Gamma(\frac{d}{2}\pm i\lambda+4)\lambda\sinh(\pi\lambda)}{2^{d+5}\pi^{\frac{d+3}{2}}}\mathbf{F}^{(4)}(\sigma)\,.\end{split} \tag{F.37}\]

## Appendix G Explicit expressions of the inversion formula

### Spin 0

The Kallen-Lehmann decomposition of scalar two-point functions only has the \(\ell=0\) term. The inversion formula (4.18) then takes the simple form \[\rho^{\mathcal{P},0}_{\mathcal{O}^{(0)}}(\lambda)=\frac{2^{d+1}\pi^{\frac{d+1}{2}}\Gamma(\frac{d+1}{2})}{\Gamma(\frac{d}{2}\pm i\lambda)}\int_{X_{1}}\Omega_{\lambda,0}(X_{2},X_{1})G_{\mathcal{O}^{(0)}}(X_{1},X_{2})\.\] (G.1) The scalar two-point function \(G_{\mathcal{O}^{(0)}}(Y_{1},Y_{2})\) and its Wick rotation to EAdS \(G_{\mathcal{O}^{(0)}}(X_{1},X_{2})\) only depend on the two-point invariants \(\sigma^{\rm dS}\equiv Y_{1}\cdot Y_{2}\) and \(\sigma^{\rm EAdS}\equiv X_{1}\cdot X_{2}\), which we discuss in more detail in Appendix F. We can thus use \(G_{\mathcal{O}^{(0)}}(\sigma)\) as a short-hand notation for the scalar two-point function of some bulk scalar operator \(\mathcal{O}^{(0)}\). Since \(\rho^{\mathcal{P},0}_{\mathcal{O}^{(0)}}(\lambda)\) does not depend on \(X_{2}\), we are free to place \(X_{2}\) anywhere in EAdS, and in particular we can pick the origin, which makes the angular part of the integral over \(X_{1}\) trivial.
We choose global coordinates in EAdS, given by (F.2), for which we have \[\sigma^{\rm EAdS}=-\cosh r\,\qquad\int_{X_{1}}=\text{vol}(S^{d})\int_{0}^{\infty}dr\,(\sinh r)^{d}\] (G.2) where we performed the integration over the angular coordinates, which leads to a factor of \(\text{vol}(S^{d})=\frac{2\pi^{\frac{d+1}{2}}}{\Gamma(\frac{d+1}{2})}\). With the change of variable \(r\to\sigma^{\rm EAdS}\) and replacing the explicit value of \(\Omega_{\lambda,0}\) from eq. (F.24), \[\rho^{\mathcal{P},0}_{\mathcal{O}^{(0)}}(\lambda)=\frac{2\pi^{\frac{d+1}{2}}}{\Gamma(\pm i\lambda)}\int_{-\infty}^{-1}\!d\sigma\,(\sigma^{2}-1)^{\frac{d-1}{2}}\ \mathbf{F}\left(\frac{d}{2}+i\lambda,\frac{d}{2}-i\lambda,\frac{d+1}{2},\frac{1+\sigma}{2}\right)\,G_{\mathcal{O}^{(0)}}(\sigma)\.\] (G.3) Here, we omit the label EAdS in \(\sigma\), not only to avoid clutter but also to emphasise that this formula can be seen as an integration over the two-point function in de Sitter as well.

### Spin 1

We state the generic form of a spinning two-point function in (4.35). In particular for spin-1 fields we have: \[G_{\mathcal{O}^{(1)}}(Y_{1},Y_{2};W_{1},W_{2})=(W_{1}\cdot W_{2})\mathcal{G}^{(0)}_{\mathcal{O}^{(1)}}(\sigma)+(W_{1}\cdot Y_{2})(W_{2}\cdot Y_{1})\mathcal{G}^{(1)}_{\mathcal{O}^{(1)}}(\sigma)\,,\] (G.4) where again we used shorthand notation for the two-point invariant \(\sigma=Y_{1}\cdot Y_{2}\). After Wick rotating this two-point function as discussed in section 4.1, we plug it into (4.18) and carry out all the index contractions through the application of the \(K\) operators on the \(W\) vectors (F.16). Using the explicit expressions for \(\Omega_{\lambda,1}\) and \(\Omega_{\lambda,0}\) from appendix F.3, we then place \(X_{2}\) at the origin of EAdS as done in the scalar case, and using elementary hypergeometric identities we end up obtaining \[\rho^{\mathcal{P},1}_{\mathcal{O}^{(1)}}(\lambda) =\pi^{\frac{d-1}{2}}\lambda\sinh(\pi\lambda)\int_{-\infty}^{-1}d\sigma\,(\sigma^{2}-1)^{\frac{d-1}{2}}\ \left(2\mathbf{F}^{(0)}(\sigma)\mathcal{G}^{(0)}_{\mathcal{O}^{(1)}}(\sigma)-(\sigma^{2}-1)\mathbf{F}^{(1)}(\sigma)\mathcal{G}^{(1)}_{\mathcal{O}^{(1)}}(\sigma)\right)\,\] \[\rho^{\mathcal{P},0}_{\mathcal{O}^{(1)}}(\lambda) =\frac{32\pi^{\frac{d-1}{2}}\lambda\sinh(\pi\lambda)}{(d^{2}+4\lambda^{2})}\int_{-\infty}^{-1}d\sigma\,(\sigma^{2}-1)^{\frac{d-1}{2}}\ \mathbf{F}^{(0)}(\sigma)\,f(\sigma,\mathcal{G}^{(0)}_{\mathcal{O}^{(1)}}(\sigma),\mathcal{G}^{(1)}_{\mathcal{O}^{(1)}}(\sigma))\,\] (G.5) where \[f(\sigma,\mathcal{G}^{(0)}_{\mathcal{O}^{(1)}}(\sigma),\mathcal{G}^{(1)}_{\mathcal{O}^{(1)}}(\sigma))= \left[\left(d+1\right)^{2}\sigma+\left((2d+3)\sigma^{2}-(d+2)\right)\partial_{\sigma}+\sigma(\sigma^{2}-1)\partial_{\sigma}^{2}\right]\mathcal{G}^{(0)}_{\mathcal{O}^{(1)}}(\sigma) \tag{G.6}\] \[+\left[(d+2)\left((d+2)\sigma^{2}{-}1\right)+(2d+5)\sigma(\sigma^{2}{-}1)\partial_{\sigma}+(\sigma^{2}{-}1)^{2}\partial_{\sigma}^{2}\right]\mathcal{G}^{(1)}_{\mathcal{O}^{(1)}}(\sigma)\;.\] We stress the fact that these integrals can now be interpreted as being carried out in a physical region of spacelike separation in de Sitter.

## Appendix H Inversion integrals

In this appendix, we show how to carry out the integrals that are encountered when applying the inversion formula to the examples we have considered throughout this work. We will start with some general remarks about these integrals and then show all the specific examples.
### Free QFTs

In the main text, we have considered two-point functions of composite operators made of two fundamental fields with \(\Delta_{1}=\frac{d}{2}+i\lambda_{1}\) and spin \(m\) and \(\Delta_{2}=\frac{d}{2}+i\lambda_{2}\) and spin \(J-m\). In full generality, in a free theory these two-point functions factorize \[\langle\phi_{1}^{(m)}\phi_{2}^{(J-m)}(Y_{1};W_{1})\,\phi_{1}^{(m)}\phi_{2}^{(J-m)}(Y_{2};W_{2})\rangle\\ =\langle\phi_{1}^{(m)}(Y_{1};W_{1})\phi_{1}^{(m)}(Y_{2};W_{2})\rangle\langle\phi_{2}^{(J-m)}(Y_{1};W_{1})\phi_{2}^{(J-m)}(Y_{2};W_{2})\rangle\,. \tag{H.1}\] In every case we studied, after carrying out derivatives and index contractions, we can reduce the inversion formulas for the spectral densities to linear combinations of integrals of the following form \[\begin{split}\rho^{\mathcal{P},\ell}_{\phi_{1}^{(m)}\phi_{2}^{(J-m)}}(\lambda)&=\sum_{k_{1},k_{2},k_{3}}c_{k_{1},k_{2},k_{3}}\int_{P_{1},P_{2},P_{3}}\frac{\prod_{j=1}^{3}\Pi_{\bar{\Delta}_{j}-k_{j},0}(X_{2},P_{j})}{(P_{12})^{\Delta_{123,k}}(P_{13})^{\Delta_{132,k}}(P_{23})^{\Delta_{231,k}}}\\ &\equiv\sum_{k_{1},k_{2},k_{3}}c_{k_{1},k_{2},k_{3}}\mathcal{I}^{\text{QFT}}_{k_{1},k_{2},k_{3}}(\lambda_{1},\lambda_{2})\,,\end{split} \tag{H.2}\] where \[\begin{split}\Delta_{ijl,k}&\equiv\frac{\Delta_{i}+k_{i}+\Delta_{j}+k_{j}-\Delta_{l}-k_{l}}{2}\,,\hskip 28.452756ptP_{ij}\equiv-2P_{i}\cdot P_{j}\,,\\ \Delta_{3}&\equiv\frac{d}{2}+i\lambda_{3}\,,\hskip 56.905512pt\lambda_{3}\equiv\lambda\,,\end{split} \tag{H.3}\] and \(c_{k_{1},k_{2},k_{3}}\) are some coefficients which we determined case by case, and which we will show how to find in the following subsections. For now, let us focus on the integral \(\mathcal{I}^{\text{QFT}}_{0,0,0}\). Other \(\mathcal{I}^{\text{QFT}}_{k_{1},k_{2},k_{3}}\) can be easily obtained by making the shift \(\Delta_{i}\to\Delta_{i}+k_{i}\) in \(\mathcal{I}^{\text{QFT}}_{0,0,0}\). The seed integral \(\mathcal{I}^{\text{QFT}}_{0,0,0}\) was first solved in [2] by a brute force computation in local coordinates of the boundary. It was also computed in [81] by using multiple Schwinger parameterizations. We report here a more covariant approach in the modern language of harmonic analysis, which exploits the underlying representation structure. Since we focus on the \(k_{i}=0\) case, it is convenient to use the simplified notation \(\Delta_{ijl}=\Delta_{ijl,0}=\frac{\Delta_{i}+\Delta_{j}-\Delta_{l}}{2}\). Following the conventions of [34; 82], we also define \[\langle\mathcal{O}_{\Delta_{1}}(P_{1})\mathcal{O}_{\Delta_{2}}(P_{2})\mathcal{O}_{\Delta_{3}}(P_{3})\rangle=\frac{1}{(P_{12})^{\Delta_{123}}(P_{13})^{\Delta_{132}}(P_{23})^{\Delta_{231}}}\, \tag{H.4}\] which represents the standard CFT three-point function of scalar primaries. Let us first consider the \(P_{1}\) and \(P_{2}\) integrals in \(\mathcal{I}^{\rm QFT}_{0,0,0}\), namely \[\mathcal{J}(X_{2},P_{3})\equiv\int_{P_{1},P_{2}}\langle\mathcal{O}_{\Delta_{1}}(P_{1})\mathcal{O}_{\Delta_{2}}(P_{2})\mathcal{O}_{\Delta_{3}}(P_{3})\rangle\Pi_{\bar{\Delta}_{1}}(X_{2},P_{1})\Pi_{\bar{\Delta}_{2}}(X_{2},P_{2}). \tag{H.5}\] \(\mathcal{J}(X_{2},P_{3})\) defined in this way is a scalar function of \(P_{3}\) and \(X_{2}\), and is homogeneous in \(P_{3}\) of degree \(-\Delta_{3}\). So it has to take the form \[\mathcal{J}(X_{2},P_{3})=c(\Delta_{1},\Delta_{2},\Delta_{3})\,\Pi_{\Delta_{3}}(X_{2},P_{3})\, \tag{H.6}\] where \(c(\Delta_{1},\Delta_{2},\Delta_{3})\) is a constant to be fixed.
To extract this constant, we integrate \(\mathcal{J}(X_{2},P_{3})\) against another bulk-to-boundary propagator \(\Pi_{\bar{\Delta}_{4}}(X_{2},P_{4})\) \[\widetilde{\mathcal{J}}(P_{3},P_{4})\equiv\int_{X_{2}}\,\mathcal{J}(X_{2},P_{3})\Pi_{\bar{\Delta}_{4}}(X_{2},P_{4}),\ \ \ \Delta_{4}=\frac{d}{2}+i\lambda_{4}. \tag{H.7}\] If we treat the CFT three-point function \(\langle\mathcal{O}_{\Delta_{1}}(P_{1})\mathcal{O}_{\Delta_{2}}(P_{2})\mathcal{O}_{\Delta_{3}}(P_{3})\rangle\) in \(\mathcal{J}(X_{2},P_{3})\) as a bulk integral of three bulk-to-boundary propagators \(\prod_{j=1}^{3}\Pi_{\Delta_{j}}(X_{1},P_{j})\), then \(\widetilde{\mathcal{J}}(P_{3},P_{4})\) can be diagrammatically represented by the left panel of fig. 1, with integrations over \((X_{1},X_{2},P_{1},P_{2})\) understood. Performing the \(P_{1}\) and \(P_{2}\) integrals first would yield a one-loop Witten diagram, as shown in the right panel of fig. 1. Such a Witten diagram was considered in [83]. Here, we are effectively considering a different order, namely integrating out the bulk points \(X_{1}\) and \(X_{2}\) first. In eq. (H.7), the bulk integral over \(X_{2}\) gives another CFT three-point function \(\langle{\cal O}_{\bar{\Delta}_{1}}(P_{1}){\cal O}_{\bar{\Delta}_{2}}(P_{2}){\cal O}_{\bar{\Delta}_{4}}(P_{4})\rangle\), multiplied by a constant [34] \[b(\bar{\Delta}_{1},\bar{\Delta}_{2},\bar{\Delta}_{4},0)=\frac{\pi^{\frac{d}{2}}\Gamma\left(\frac{\bar{\Delta}_{1}+\bar{\Delta}_{2}+\bar{\Delta}_{4}-d}{2}\right)\Gamma\left(\frac{\bar{\Delta}_{1}+\bar{\Delta}_{2}-\bar{\Delta}_{4}}{2}\right)\Gamma\left(\frac{\bar{\Delta}_{1}+\bar{\Delta}_{4}-\bar{\Delta}_{2}}{2}\right)\Gamma\left(\frac{\bar{\Delta}_{2}+\bar{\Delta}_{4}-\bar{\Delta}_{1}}{2}\right)}{2\Gamma(\bar{\Delta}_{1})\Gamma(\bar{\Delta}_{2})\Gamma(\bar{\Delta}_{4})}\,\mathfrak{C}_{\bar{\Delta}_{1},0}\mathfrak{C}_{\bar{\Delta}_{2},0}\mathfrak{C}_{\bar{\Delta}_{4},0} \tag{H.8}\] and hence \(\widetilde{\cal J}(P_{3},P_{4})\) reduces to \[\widetilde{\cal J}(P_{3},P_{4})=b(\bar{\Delta}_{1},\bar{\Delta}_{2},\bar{\Delta}_{4},0)\int_{P_{1},P_{2}}\langle{\cal O}_{\Delta_{1}}(P_{1}){\cal O}_{\Delta_{2}}(P_{2}){\cal O}_{\Delta_{3}}(P_{3})\rangle\langle{\cal O}_{\bar{\Delta}_{1}}(P_{1}){\cal O}_{\bar{\Delta}_{2}}(P_{2}){\cal O}_{\bar{\Delta}_{4}}(P_{4})\rangle. \tag{H.9}\] This integral (and also its higher spin generalization) was originally computed in [25], and was recently reviewed in [82]. Without loss of generality, we assume \(\lambda_{3},\lambda_{4}>0\), and then get \[\widetilde{\cal J}(P_{3},P_{4})=\frac{2\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2})}\frac{2\pi^{d+1}\Gamma(\pm i\lambda_{3})}{\Gamma(\Delta_{3})\Gamma(\bar{\Delta}_{3})}b(\bar{\Delta}_{1},\bar{\Delta}_{2},\bar{\Delta}_{3},0)\delta(\lambda_{3}-\lambda_{4})\delta(P_{3},P_{4}). \tag{H.10}\] On the other hand, combining eq. (H.6) and eq. (H.7) yields \[\widetilde{\cal J}(P_{3},P_{4})=c(\Delta_{1},\Delta_{2},\Delta_{3})\,\int_{X_{2}}\Pi_{\Delta_{3}}(X_{2},P_{3})\Pi_{\bar{\Delta}_{4}}(X_{2},P_{4}) \tag{H.11}\] where the bulk integral over \(X_{2}\) has been carried out in [34]: \[\int_{X_{2}}\Pi_{\Delta_{3}}(X_{2},P_{3})\Pi_{\bar{\Delta}_{4}}(X_{2},P_{4})=\mathfrak{C}_{\Delta_{3}}\mathfrak{C}_{\bar{\Delta}_{4}}\frac{2\pi^{d+1}\Gamma(\pm i\lambda_{3})}{\Gamma(\Delta_{3})\Gamma(\bar{\Delta}_{3})}\delta(\lambda_{3}-\lambda_{4})\delta(P_{3},P_{4}). \tag{H.12}\] Comparing eq. (H.10) and eq.
(H.11), we obtain \[c(\Delta_{1},\Delta_{2},\Delta_{3})=\frac{2\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2})}\frac{b(\bar{\Delta}_{1},\bar{\Delta}_{2},\bar{\Delta}_{3},0)}{\mathfrak{C}_{\Delta_{3}}\mathfrak{C}_{\bar{\Delta}_{3}}}. \tag{H.13}\] Once the constant \(c(\Delta_{1},\Delta_{2},\Delta_{3})\) is fixed, the remaining integral over \(P_{3}\) in \({\cal I}_{0,0,0}^{\rm QFT}\) is of the form \[\int_{P}\frac{1}{(-2P\cdot Y)^{d}}=\frac{\pi^{\frac{d}{2}}\Gamma(\frac{d}{2})}{\Gamma(d)}\frac{1}{(-Y^{2})^{\frac{d}{2}}}. \tag{H.14}\] Altogether, the final expression of \({\cal I}_{0,0,0}^{\rm QFT}\) is \[{\cal I}_{0,0,0}^{\rm QFT}=\frac{2\pi^{d}}{\Gamma(d)}b(\bar{\Delta}_{1},\bar{\Delta}_{2},\bar{\Delta}_{3},0)=\frac{\Gamma(\frac{d}{2}-\Delta_{123})\Gamma(\frac{d}{2}-\Delta_{132})\Gamma(\frac{d}{2}-\Delta_{231})\Gamma(d-\sum_{j=1}^{3}\frac{\Delta_{j}}{2})}{8\Gamma(d)\Gamma(1-i\lambda_{1})\Gamma(1-i\lambda_{2})\Gamma(1-i\lambda_{3})}. \tag{H.15}\] For arbitrary \({\cal I}_{k_{1},k_{2},k_{3}}^{\rm QFT}\), it suffices to make the substitution \(\Delta_{i}\to\Delta_{i}+k_{i}\) in eq. (H.15) \[{\cal I}_{k_{1},k_{2},k_{3}}^{\rm QFT}=\frac{\Gamma(\frac{d}{2}-\Delta_{123,k})\Gamma(\frac{d}{2}-\Delta_{132,k})\Gamma(\frac{d}{2}-\Delta_{231,k})\Gamma(d-\sum_{j=1}^{3}\frac{\Delta_{j}+k_{j}}{2})}{8\Gamma(d)\Gamma(1-i\lambda_{1}-k_{1})\Gamma(1-i\lambda_{2}-k_{2})\Gamma(1-i\lambda_{3}-k_{3})}\,. \tag{H.16}\]

#### H.1.1 Scalar Free QFT

The first nontrivial case we have explored in section 5 is the two-point function of the composite operator \(\phi_{1}\phi_{2}\) in a free theory. We showed that (eq. (111)) \[\rho^{\mathcal{P}}_{\phi_{1}\phi_{2}}(\lambda)=\frac{\Gamma(\pm i\lambda_{1})\Gamma(\pm i\lambda_{2})}{\mathcal{N}_{0,0}}b(\Delta_{1},\Delta_{2},\Delta,0)\frac{\lambda_{1}^{2}\lambda_{2}^{2}\lambda^{2}}{\pi^{3}}\mathcal{I}^{\text{QFT}}_{0,0,0}(\lambda_{1},\lambda_{2})\,. \tag{H.17}\] Applying our integral identity to this case is thus immediate. After simplifying, we obtain \[\rho^{\mathcal{P}}_{\phi_{1}\phi_{2}}(\lambda)=\frac{\lambda\sinh(\pi\lambda)\Gamma(\frac{d+1}{2})}{2^{6-d}\pi^{\frac{d+7}{2}}\Gamma(d)\Gamma(\frac{d}{2}\pm i\lambda)}\prod_{\pm,\pm,\pm}\Gamma\left(\frac{\frac{d}{2}\pm i\lambda\pm i\lambda_{1}\pm i\lambda_{2}}{2}\right)\,. \tag{H.18}\]

#### H.1.2 Spin 1 Free QFT

We studied two spin 1 correlators of composite operators in free theory. For the operator \(V\phi\) made of a vector and a scalar, the principal series contributions are given by \[\rho^{\mathcal{P},0}_{V\phi}(\lambda) =\frac{\Gamma(\pm i\lambda_{\phi})\Gamma(\pm i\lambda_{V})}{\mathcal{N}_{1,0}}\int_{X_{1}}\Omega_{\lambda,0}(X_{2},X_{1})(K_{1}\cdot\nabla_{1})(K_{2}\cdot\nabla_{2})\Omega_{\lambda_{V},1}(X_{1},X_{2};W_{1},W_{2})\Omega_{\lambda_{\phi},0}(X_{1},X_{2})\,, \tag{H.19}\] \[\rho^{\mathcal{P},1}_{V\phi}(\lambda) =\frac{\Gamma(\pm i\lambda_{\phi})\Gamma(\pm i\lambda_{V})}{\mathcal{N}_{1,1}}\int_{X_{1}}\Omega_{\lambda,1}(X_{2},X_{1};K_{2},K_{1})\Omega_{\lambda_{V},1}(X_{1},X_{2};W_{1},W_{2})\Omega_{\lambda_{\phi},0}(X_{1},X_{2})\,. \tag{H.20}\] Let us start from \(\rho^{\mathcal{P},0}_{V\phi}\). We begin by using the split representation for the harmonic functions and carrying out the integral over \(X_{1}\). Integrals of this kind are solved in [34]; see eq.
(126) there: \[\begin{split}\frac{1}{J!\left(\frac{d-1}{2}\right)_{J}}\int_{X}\Pi_{\Delta_{2}}(X,P_{1})&\Pi_{\Delta,J}(X,P_{2};K,Z)(W\cdot\nabla)^{J}\Pi_{\Delta_{1}}(X,P_{3})\\ &=b(\Delta_{1},\Delta_{2},\Delta,J)\frac{((Z\cdot P_{3})P_{12}-(Z\cdot P_{1})P_{23})^{J}}{P_{13}^{\frac{\Delta_{1}+\Delta_{2}-\Delta+J}{2}}P_{23}^{\frac{\Delta_{1}-\Delta_{2}+\Delta+J}{2}}P_{12}^{\frac{-\Delta_{1}+\Delta_{2}+\Delta+J}{2}}}\,,\end{split} \tag{H.21}\] with \[b(\Delta_{1},\Delta_{2},\Delta,J)=\frac{\Gamma\left(\frac{\Delta_{1}+\Delta_{2}+\Delta-d+J}{2}\right)\Gamma\left(\frac{\Delta_{1}+\Delta_{2}-\Delta+J}{2}\right)\Gamma\left(\frac{\Delta+\Delta_{1}-\Delta_{2}+J}{2}\right)\Gamma\left(\frac{\Delta+\Delta_{2}-\Delta_{1}+J}{2}\right)}{2^{1-J}\pi^{-\frac{d}{2}}\Gamma(\Delta_{1})\Gamma(\Delta_{2})\Gamma(\Delta+J)}\mathfrak{C}_{\Delta_{1},0}\mathfrak{C}_{\Delta_{2},0}\mathfrak{C}_{\Delta,J} \tag{H.22}\] being the generalization of (H.8). Carrying out all derivatives and index contractions, we can write \[\rho^{\mathcal{P},0}_{V\phi}(\lambda)=\widetilde{\mathcal{N}}^{V\phi}_{0}\int_{P_{1},P_{2},P_{3}}\frac{P_{13}(-2P_{2}\cdot X)+P_{12}(-2P_{3}\cdot X)-P_{23}(-2P_{1}\cdot X)}{(-2P_{1}\cdot X)^{\Delta_{\phi}}(-2P_{2}\cdot X)^{\Delta_{V}+1}(-2P_{3}\cdot X)^{\bar{\Delta}+1}(P_{12})^{\alpha}(P_{13})^{\beta}(P_{23})^{\gamma}}\,, \tag{H.23}\] with \[\alpha=\frac{\Delta_{V}+\Delta_{\phi}-\Delta+1}{2}\,,\qquad\beta=\frac{\Delta+\Delta_{\phi}-\Delta_{V}+1}{2}\,,\qquad\gamma=\frac{\Delta+\Delta_{V}-\Delta_{\phi}-1}{2}\,. \tag{H.24}\] and \[\widetilde{\mathcal{N}}^{V\phi}_{0}=\frac{\lambda^{2}\lambda_{\phi}^{2}\lambda_{V}^{2}\Gamma(\pm i\lambda_{V})\Gamma(\pm i\lambda_{\phi})}{4\pi^{3}\mathcal{N}_{1,0}}(d-1)^{2}\bar{\Delta}_{\lambda}\mathfrak{C}_{\bar{\Delta}_{\lambda},0}\mathfrak{C}_{\bar{\Delta}_{V},1}\mathfrak{C}_{\bar{\Delta}_{\phi},0}b(\Delta,\Delta_{\phi},\Delta_{V},1) \tag{H.25}\] The three terms in the sum in (H.23) are exactly of the form (H.2), so that we can write \[\rho^{\mathcal{P},0}_{V\phi}(\lambda) =\widetilde{\mathcal{N}}^{V\phi}_{0}\left(\mathcal{I}^{\text{QFT}}_{-1,0,0}(\lambda_{V},\lambda_{\phi})+\mathcal{I}^{\text{QFT}}_{0,0,-1}(\lambda_{V},\lambda_{\phi})-\mathcal{I}^{\text{QFT}}_{-1,1,-1}(\lambda_{V},\lambda_{\phi})\right)\] (H.26) \[=\frac{\pi^{-3-\frac{d}{2}}\lambda\sinh(\pi\lambda)}{2(\Delta_{V}-1)(\bar{\Delta}_{V}-1)(d^{2}+4\lambda^{2})\Gamma(\frac{d}{2})\Gamma(\frac{d}{2}\pm i\lambda+1)}\prod_{\pm,\pm,\pm}\Gamma\left(\frac{\frac{d}{2}+1\pm i\lambda\pm i\lambda_{V}\pm i\lambda_{\phi}}{2}\right)\,.\] Then, we continue with \(\rho^{\mathcal{P},1}_{V\phi}\). After using the split representation on the harmonic functions in (H.20), let us focus on the resulting \(X_{1}\) integral \[\int_{X_{1}}\Pi_{\Delta_{\phi}}(X_{1},P_{1})\Pi_{\Delta,1}(X_{1},P_{2};K_{1},Z_{2})\Pi_{\Delta_{V},1}(X_{1},P_{3};W_{1},Z_{3})\] (H.27) \[\propto\int_{X_{1}}\frac{((-2P_{2}\cdot X_{1})(Z_{2}\cdot K_{1})+2(X_{1}\cdot Z_{2})(P_{2}\cdot K_{1}))\left((-2P_{3}\cdot X_{1})(W_{1}\cdot Z_{3})+2(X_{1}\cdot Z_{3})(P_{3}\cdot W_{1})\right)}{(-2X_{1}\cdot P_{1})^{\Delta_{\phi}}(-2X_{1}\cdot P_{2})^{\Delta+1}(-2X_{1}\cdot P_{3})^{\Delta_{V}+1}}\,.\] We can trade all factors of \(X_{1}\) in the numerator for derivatives with respect to boundary points.
In this way, the \(X_{1}\) integral becomes an integral over a product of three scalar bulk-to-boundary propagators, again leading to a CFT three-point function \[\int_{X_{1}}\Pi_{\Delta_{\phi}}(X_{1},P_{1})\Pi_{\Delta,1}(X_{1},P_{2};K_{1},Z_{2})\Pi_{\Delta_{V},1}(X_{1},P_{3};W_{1},Z_{3})\] (H.28) \[=\frac{1}{\Delta\Delta_{V}}\left((P_{2}\cdot P_{3})Z_{2}\cdot\partial_{P_{2}}Z_{3}\cdot\partial_{P_{3}}+\Delta_{V}Z_{3}\cdot P_{2}Z_{2}\cdot\partial_{P_{2}}+\Delta Z_{2}\cdot P_{3}Z_{3}\cdot\partial_{P_{3}}+\Delta\Delta_{V}Z_{2}\cdot Z_{3}\right)\] \[\qquad\qquad\times\frac{b(\Delta_{\phi},\Delta,\Delta_{V},0)}{(P_{12})^{\Delta_{123}}(P_{13})^{\Delta_{132}}(P_{23})^{\Delta_{231}}}\,.\] Carrying out the derivatives and substituting this back into (H.20), we can write the result as a linear combination of \(\mathcal{I}^{\text{QFT}}\): \[\rho^{\mathcal{P},1}_{V\phi}(\lambda)=\widetilde{\mathcal{N}}^{V\phi}_{1}\Big{[} ((\Delta-\Delta_{V})^{2}-\Delta_{\phi}^{2})\Big{(}2\mathcal{I}_{0,1,-1}+2\mathcal{I}_{-1,1,0}-\mathcal{I}_{1,0,-1}-\mathcal{I}_{-1,0,1}-\mathcal{I}_{-1,2,-1}\Big{)}\] (H.29) \[\qquad+2(\Delta_{\phi}+\Delta(2\Delta_{V}-1)-\Delta_{V})\Big{(}(d-2)\mathcal{I}_{0,0,0}+2\mathcal{I}_{-1,0,-1}\Big{)}\Big{]}\] where we kept the notation abbreviated and the subscripts of \(\mathcal{I}_{k_{1},k_{2},k_{3}}\) indicate, in order, the integers \(k_{j}\) to add to \(\Delta_{V}\), \(\Delta_{\phi}\) and \(\Delta\equiv\frac{d}{2}+i\lambda\,.\) Moreover, \[\widetilde{\mathcal{N}}^{V\phi}_{1}=\frac{\Gamma(\pm i\lambda_{\phi})\Gamma(\pm i\lambda_{V})}{\mathcal{N}_{1,1}}\frac{\lambda^{2}\lambda_{V}^{2}\lambda_{\phi}^{2}}{16\pi^{3}\Delta\Delta_{V}}(d-1)^{2}b(\Delta_{\phi},\Delta,\Delta_{V},0)\frac{\mathfrak{C}_{\Delta,1}\mathfrak{C}_{\Delta_{V},1}}{\mathfrak{C}_{\Delta,0}\mathfrak{C}_{\Delta_{V},0}}\,.\] (H.30) Assembling all the pieces together and simplifying, we obtain \[\rho^{\mathcal{P},1}_{V\phi}(\lambda)=\frac{2^{-12}\pi^{-3-\frac{d}{2}}\lambda\sinh(\pi\lambda)f_{\lambda,\lambda_{V},\lambda_{\phi}}}{\Gamma(\frac{d+2}{2})(\Delta_{V}-1)(\bar{\Delta}_{V}-1)\Gamma(\frac{d}{2}\pm i\lambda+1)}\prod_{\pm,\pm,\pm}\Gamma\left(\frac{\frac{d}{2}\pm i\lambda\pm i\lambda_{\phi}\pm i\lambda_{V}}{2}\right)\,,\] (H.31) with \[f_{\lambda,\lambda_{V},\lambda_{\phi}}= 16\left(\lambda_{\phi}^{2}-(\lambda^{2}+\lambda_{V}^{2})\right)^{2}+64(d-1)\lambda^{2}\lambda_{V}^{2}\] (H.32) \[+8d(3d-4)\lambda_{\phi}^{2}+8d\left(2d^{2}-5d+4\right)\left(\lambda^{2}+\lambda_{V}^{2}\right)+d^{3}\left(4d^{2}-11d+8\right)\,.\] For the operator \(\phi_{1}\nabla\phi_{2}\), the steps are analogous, so we only report the linear combination in terms of the standard master integral.
We have \[\rho^{\mathcal{P},0}_{\phi_{1}\nabla\phi_{2}}(\lambda) =\widetilde{\mathcal{N}}_{0}^{\phi_{1}\nabla\phi_{2}}\left(2\bar{\Delta}_{1}\mathcal{I}^{\text{QFT}}_{-1,-1,0}+(\Delta_{1}+\Delta_{2}-d)\mathcal{I}^{\text{QFT}}_{0,0,0}\right) \tag{H.33}\] \[=\frac{(d^{2}+4(\lambda^{2}-\lambda_{1}^{2}+\lambda_{2}^{2}))^{2}\Gamma(\frac{d+1}{2})}{2^{8-d}\pi^{\frac{d+7}{2}}(d^{2}+4\lambda^{2})^{2}\Gamma(d)\Gamma(\frac{d}{2}\pm i\lambda)}\lambda\sinh(\pi\lambda)\prod_{\pm,\pm,\pm}\Gamma\left(\frac{\frac{d}{2}\pm i\lambda\pm i\lambda_{1}\pm i\lambda_{2}}{2}\right)\,,\] with \[\widetilde{\mathcal{N}}_{0}^{\phi_{1}\nabla\phi_{2}}=\frac{\Gamma(\pm i\lambda_{1})\Gamma(\pm i\lambda_{2})}{2\pi^{3}\mathcal{N}_{1,0}}(d-1)^{2}\lambda^{2}\lambda_{1}^{2}\lambda_{2}^{2}\Delta_{2}\bar{\Delta}_{2}((\bar{\Delta}_{1}-\Delta_{2})b(\Delta,\Delta_{1},\Delta_{2},0)+2\Delta_{1}b(\Delta,\Delta_{1}+1,\Delta_{2}+1,0)) \tag{H.34}\] and \[\rho^{\mathcal{P},1}_{\phi_{1}\nabla\phi_{2}}(\lambda) =\widetilde{\mathcal{N}}_{1}^{\phi_{1}\nabla\phi_{2}}\left(\mathcal{I}^{\text{QFT}}_{0,0,-1}-\mathcal{I}^{\text{QFT}}_{1,-1,-1}+\mathcal{I}^{\text{QFT}}_{0,-1,0}\right) \tag{H.35}\] \[=\frac{\Gamma(-\frac{d}{2}\pm i\lambda)(\cosh(2\pi\lambda)-(-1)^{d})}{2^{5}\pi^{5+\frac{d}{2}}\Gamma(\frac{d+2}{2})}\lambda\sinh(\pi\lambda)\prod_{\pm,\pm,\pm}\Gamma\left(\frac{\frac{d}{2}+1\pm i\lambda\pm i\lambda_{1}\pm i\lambda_{2}}{2}\right)\,.\] with \[\widetilde{\mathcal{N}}_{1}^{\phi_{1}\nabla\phi_{2}}=\frac{\Gamma(\pm i\lambda_{1})\Gamma(\pm i\lambda_{2})}{4\pi^{3}\Delta\mathcal{N}_{1,1}}(d-1)^{2}\Delta_{2}\bar{\Delta}_{2}(\Delta+\Delta_{1}-\Delta_{2}-1)\lambda^{2}\lambda_{1}^{2}\lambda_{2}^{2}\frac{\mathfrak{C}_{\Delta,1}}{\mathfrak{C}_{\Delta,0}}b(\Delta,\Delta_{1},\Delta_{2}+1,0)\,. \tag{H.36}\]

### CFTs

Another class of two-point functions we considered in this work are two-point functions of spin \(J\) primary bulk CFT operators with conformal dimension \(\mathbf{\Delta}\) in de Sitter, which as argued in section 5.2, are of the form \[\begin{split}\langle\mathcal{O}^{(J)}(Y_{1},W_{1})\mathcal{O}^{(J)}(Y_{2},W_{2})\rangle&=c_{\mathcal{O}}\frac{[(W_{1}\cdot W_{2})(1-Y_{1}\cdot Y_{2})+(Y_{1}\cdot W_{2})(Y_{2}\cdot W_{1})]^{J}}{2^{\mathbf{\Delta}}(1-Y_{1}\cdot Y_{2})^{\mathbf{\Delta}+J}}\\ &=\frac{c_{\mathcal{O}}}{2^{\mathbf{\Delta}}}\sum_{m=0}^{J}\binom{J}{m}\frac{(W_{1}\cdot W_{2})^{m}[(Y_{1}\cdot W_{2})(Y_{2}\cdot W_{1})]^{J-m}}{(1-Y_{1}\cdot Y_{2})^{\mathbf{\Delta}+J-m}}\end{split} \tag{H.37}\] Applying the inversion formula (4.18) we can retrieve the principal series contribution \[\rho^{\mathcal{P},\ell}_{O^{(J)}}(\lambda)=\frac{1}{\mathcal{N}_{J,\ell}}\int_{X_{1}}\Omega_{\lambda,\ell}(X_{2},X_{1};K_{2},K_{1})[(K_{1}\cdot\nabla_{1})(K_{2}\cdot\nabla_{2})]^{J-\ell}\langle\mathcal{O}^{(J)}(X_{1},W_{1})\mathcal{O}^{(J)}(X_{2},W_{2})\rangle\,.
\tag{H.38}\] After carrying out all the index contractions and the derivatives, in all the examples we explored the result can always be written as \[\begin{split}\rho^{\mathcal{P},\ell}_{O^{(J)}}(\lambda)&=\sum_{n=-\ell}^{\ell}\sum_{k=0}^{J-|n|}c_{n,k}(\mathbf{\Delta},\lambda)\int_{X_{1}}\Omega_{\lambda+in,0}(X_{2},X_{1})(1-X_{1}\cdot X_{2})^{-\mathbf{\Delta}-J+k}\\ &\equiv\sum_{n=-\ell}^{\ell}\sum_{k=0}^{J-|n|}c_{n,k}(\mathbf{\Delta},\lambda)\mathcal{I}^{(J)}_{\text{CFT},n,k}(\mathbf{\Delta},\lambda)\,,\end{split} \tag{H.39}\] with some coefficients \(c_{n,k}(\mathbf{\Delta},\lambda)\) which we determined case by case and which we will show in the following subsections of this Appendix. They appear to satisfy a symmetry \(c_{n,k}(\mathbf{\Delta},\lambda)=c_{-n,k}(\mathbf{\Delta},\lambda)\,.\) Let us focus on the integral \(\mathcal{I}^{(J)}_{\text{CFT},n,k}(\mathbf{\Delta},\lambda)\). By conformal invariance, we can fix \(X_{2}\) in the origin of EAdS, which in global coordinates is given by \(r_{2}=0\). In the same coordinates, we have \(\sigma=X_{1}\cdot X_{2}=-\cosh r_{1}\) and \[\int_{X}=\int_{0}^{\infty}\mathrm{d}r\sinh^{d}r\int\mathrm{d}\Omega_{d}=S^{d}\int_{-\infty}^{-1}\mathrm{d}\sigma(\sigma^{2}-1)^{\frac{d-1}{2}}\,. \tag{H.40}\] The integral is thus \[\mathcal{I}^{(J)}_{\text{CFT},n,k}(\mathbf{\Delta},\lambda)=C_{\Omega}S^{d}\int_{-\infty}^{-1}\mathrm{d}\sigma(\sigma^{2}-1)^{\frac{d-1}{2}}\,\,{}_{2}F_{1}\left(\Delta-n,\bar{\Delta}+n,\frac{d+1}{2},\frac{1+\sigma}{2}\right)(1-\sigma)^{-\mathbf{\Delta}-J+k}\,, \tag{H.41}\] where \(\Delta\equiv\frac{d}{2}+i\lambda\) and \(C_{\Omega}\) denotes the \(\sigma\)-independent prefactor of \(\Omega_{\lambda+in,0}\) in (F.24). To solve this, we resort to the Mellin-Barnes representation of the hypergeometric function (which is the inverse Mellin transformation of (108)) \[{}_{2}F_{1}(a,b,c,z)=\frac{\Gamma(c)}{\Gamma(a)\Gamma(b)}\int_{-i\infty}^{i\infty}\frac{ds}{2\pi i}\frac{\Gamma(a+s)\Gamma(b+s)\Gamma(-s)}{\Gamma(c+s)}(-z)^{s}\,, \tag{H.42}\] and we change variables to \(u=\frac{\sigma+1}{2}\): \[\mathcal{I}^{(J)}_{\text{CFT},n,k}(\mathbf{\Delta},\lambda)=\tilde{c}\int_{-i\infty}^{i\infty}\frac{ds}{2\pi i}\frac{\Gamma(\Delta-n+s)\Gamma(\bar{\Delta}+n+s)\Gamma(-s)}{\Gamma(\frac{d+1}{2}+s)}\int_{0}^{\infty}\mathrm{d}u(1+u)^{\frac{d-1}{2}+k-J-\mathbf{\Delta}}u^{\frac{d-1}{2}+s}\,, \tag{H.43}\] with \[\tilde{c}=\frac{2^{k-J-\mathbf{\Delta}}(i\lambda-n)\sin(\pi(n-i\lambda))}{\pi\Gamma(\frac{d+1}{2})}\,. \tag{H.44}\] The integral over \(u\) gives some gamma functions, one of which crucially cancels the denominator in the Mellin integral, giving \[\mathcal{I}^{(J)}_{\text{CFT},n,k}(\mathbf{\Delta},\lambda)=\tilde{c}\int\frac{ds}{2\pi i}\frac{\Gamma(-s)\Gamma(\bar{\Delta}+n+s)\Gamma(s+\Delta-n)\Gamma(-d-k-s+J+\mathbf{\Delta})}{\Gamma(\frac{1-d}{2}-k+J+\mathbf{\Delta})}\,. \tag{H.45}\] The resulting Mellin integral can be carried out with Barnes' first lemma, which states \[\int_{-i\infty}^{i\infty}\frac{ds}{2\pi i}\Gamma(a+s)\Gamma(b+s)\Gamma(c-s)\Gamma(d-s)=\frac{\Gamma(a+c)\Gamma(a+d)\Gamma(b+c)\Gamma(b+d)}{\Gamma(a+b+c+d)}\,. \tag{H.46}\] Applying this to (H.45), we obtain \[\mathcal{I}^{(J)}_{\text{CFT},n,k}(\mathbf{\Delta},\lambda)=\tilde{c}\,\frac{\Gamma(\bar{\Delta}+n)\Gamma(\Delta-n)\Gamma(-k+J+n+\mathbf{\Delta}-\Delta)\Gamma(-k+J-n+\mathbf{\Delta}-\bar{\Delta})}{\Gamma(-k+J+\mathbf{\Delta})\Gamma(\frac{1}{2}-\frac{d}{2}-k+J+\mathbf{\Delta})}\,. \tag{H.47}\] Substituting this into (H.39) we can find the spectral densities for any spin \(J\) CFT two-point function.
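Since the step from (H.45) to (H.47) rests entirely on Barnes' first lemma, (H.46) is easy to spot-check numerically along the contour \(s=it\). A minimal mpmath sketch of our own (the parameter values are arbitrary, chosen so that all poles of the Gamma functions stay on the correct sides of the contour):

```python
import mpmath as mp

# Numerical check (sketch) of Barnes' first lemma (H.46) on s = i*t,
# where ds/(2*pi*i) becomes dt/(2*pi).
mp.mp.dps = 20
a, b, c, e = mp.mpf("0.7"), mp.mpf("1.1"), mp.mpf("0.4"), mp.mpf("0.9")

def integrand(t):
    s = 1j * t
    return mp.gamma(a + s) * mp.gamma(b + s) * mp.gamma(c - s) * mp.gamma(e - s)

lhs = mp.quad(integrand, [-mp.inf, mp.inf]) / (2 * mp.pi)
rhs = mp.gamma(a + c) * mp.gamma(a + e) * mp.gamma(b + c) * mp.gamma(b + e) \
      / mp.gamma(a + b + c + e)
print(lhs, rhs)   # the two values should agree
```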
To avoid clutter in the next subsections, we introduce the convenient shorthand notation \[\bar{\mathcal{I}}^{(J)}_{\text{CFT},n,k}\equiv\frac{\mathcal{I}^{(J)}_{\text{CFT},n,k}(\mathbf{\Delta},\lambda)}{\mathfrak{C}_{\frac{d}{2}+i\lambda-n,0}\mathfrak{C}_{\frac{d}{2}-i\lambda+n,0}(\lambda+in)^{2}} \tag{H.48}\]

#### H.2.1 Scalar CFT

Let us start from the scalar case \[\langle\mathcal{O}(Y_{1})\mathcal{O}(Y_{2})\rangle=\frac{c_{\mathcal{O}}}{2^{\boldsymbol{\Delta}}(1-Y_{1}\cdot Y_{2})^{\boldsymbol{\Delta}}}\,.\] (H.49) The inversion formula for the principal series contribution reads \[\rho_{\mathcal{O}}^{\mathcal{P}}(\lambda)=\frac{c_{\mathcal{O}}}{2^{\boldsymbol{\Delta}}\mathcal{N}_{0,0}}\int_{X_{1}}\Omega_{\lambda,0}(X_{1},X_{2})(1-X_{1}\cdot X_{2})^{-\boldsymbol{\Delta}}\,.\] (H.50) This is already of the form in (H.39), with \(n=0\) and \(k=0\) and the coefficient being \(c_{0,0}(\boldsymbol{\Delta},\lambda)=\frac{c_{\mathcal{O}}}{2^{\boldsymbol{\Delta}}\mathcal{N}_{0,0}}\). We can thus simply apply (H.47) and obtain the spectral density \[\rho_{\mathcal{O}}^{\mathcal{P}}(\lambda)=\frac{c_{\mathcal{O}}}{2^{\boldsymbol{\Delta}}\mathcal{N}_{0,0}}\mathcal{I}_{\text{CFT},0,0}^{(0)}(\boldsymbol{\Delta},\lambda)=c_{\mathcal{O}}\frac{2^{1+d-2\boldsymbol{\Delta}}\pi^{\frac{d-1}{2}}\Gamma\left(-\frac{d}{2}+\boldsymbol{\Delta}\pm i\lambda\right)}{\Gamma(\boldsymbol{\Delta})\Gamma\left(\frac{1-d}{2}+\boldsymbol{\Delta}\right)}\lambda\sinh(\pi\lambda)\] (H.51)

#### H.2.2 Spin 1 CFT

Moving on to the spin 1 case, we have \[\langle J(Y_{1};W_{1})J(Y_{2};W_{2})\rangle=\frac{c_{J}}{2^{\boldsymbol{\Delta}}}\left[\frac{W_{1}\cdot W_{2}}{(1-Y_{1}\cdot Y_{2})^{\boldsymbol{\Delta}}}+\frac{(Y_{1}\cdot W_{2})(Y_{2}\cdot W_{1})}{(1-Y_{1}\cdot Y_{2})^{\boldsymbol{\Delta}+1}}\right]\,.\] (H.52) We start by inverting \(\rho_{J}^{\mathcal{P},1}(\lambda)\) \[\rho_{J}^{\mathcal{P},1}(\lambda)=\frac{1}{\mathcal{N}_{1,1}}\int_{X_{1}}\Omega_{\lambda,1}(X_{2},X_{1};K_{2},K_{1})\langle J(X_{1};W_{1})J(X_{2};W_{2})\rangle\,.\] (H.53) First, we use the split representation (F.14) on \(\Omega_{\lambda,1}\) and carry out the contraction of the boundary indices following the action of \(D_{Z}\) on \(Z\) (2.42) \[\begin{split}&\Omega_{\lambda,1}(X_{2},X_{1};K_{2},K_{1})=\frac{\lambda^{2}}{\pi(\frac{d-2}{2})}\int_{P}\Pi_{\Delta}(X_{1},P;K_{1},D_{Z})\Pi_{\bar{\Delta}}(X_{2},P;K_{2},Z)\\ &=\frac{\mathfrak{C}_{\Delta,1}\mathfrak{C}_{\bar{\Delta},1}\lambda^{2}}{\pi(\frac{d-2}{2})}\int_{P}\frac{((K_{1}\cdot P)(X_{1}\cdot D_{Z})-(P\cdot X_{1})(K_{1}\cdot D_{Z}))((K_{2}\cdot P)(X_{2}\cdot Z)-(P\cdot X_{2})(K_{2}\cdot Z))}{(-2P\cdot X_{1})^{\Delta+1}(-2P\cdot X_{2})^{\bar{\Delta}+1}}\\ &=\frac{\lambda^{2}\mathfrak{C}_{\Delta,1}\mathfrak{C}_{\bar{\Delta},1}}{\pi}\int_{P}\frac{P\cdot X_{1}(K_{1}\cdot K_{2}P\cdot X_{2}-P\cdot K_{2}X_{2}\cdot K_{1})+P\cdot K_{1}(K_{2}\cdot PX_{1}\cdot X_{2}-P\cdot X_{2}X_{1}\cdot K_{2})}{(-2P\cdot X_{1})^{\Delta+1}(-2P\cdot X_{2})^{\bar{\Delta}+1}}\,.\end{split}\] (H.54) Plugging this into (H.53) and computing the action of the \(K\) operators over the \(W\) vectors (F.16) we obtain \[\begin{split}\rho_{J}^{\mathcal{P},1}(\lambda)&=\widetilde{\mathcal{N}}_{1,1}^{\text{CFT}}\int_{X_{1},P}\frac{((P\cdot X_{1})^{2}+(P\cdot X_{2})^{2}+P\cdot X_{1}\,P\cdot X_{2}\,(d-(d-2)X_{1}\cdot X_{2}))}{(1-X_{1}\cdot X_{2})^{\boldsymbol{\Delta}+1}(-2P\cdot X_{1})^{\Delta+1}(-2P\cdot X_{2})^{\bar{\Delta}+1}}\\
&=\frac{1}{2}\widetilde{\mathcal{N}}_{1,1}^{\text{CFT}}\left(\bar{\mathcal{I}}_{\text{CFT},1,0}^{(1)}+\bar{\mathcal{I}}_{\text{CFT},-1,0}^{(1)}+2\bar{\mathcal{I}}_{\text{CFT},0,0}^{(1)}+(d-2)\bar{\mathcal{I}}_{\text{CFT},0,1}^{(1)}\right)\end{split}\] (H.55) where \[\widetilde{\mathcal{N}}_{1,1}^{\text{CFT}}\equiv\frac{c_{J}(d-1)^{2}\mathfrak{C}_{\Delta,1}\mathfrak{C}_{\bar{\Delta},1}\lambda^{2}}{2^{\boldsymbol{\Delta}}\pi\mathcal{N}_{1,1}}\,.\] (H.56) To get to the second line of (H.55), we carried out the \(P\) integral and retrieved scalar harmonic functions, and then organized the sum as a polynomial in \((1-X_{1}\cdot X_{2})\,.\) This puts the expression in a form where (H.47) is applicable to each term in the sum. By substituting the expression for \(\mathcal{I}_{\text{CFT}}\) we obtain \[\rho_{J}^{\mathcal{P},1}(\lambda)=c_{J}\frac{2^{1+d-2\boldsymbol{\Delta}}\pi^{\frac{d-1}{2}}(\boldsymbol{\Delta}-1)\Gamma(-\frac{d}{2}+\boldsymbol{\Delta}\pm i\lambda)}{\Gamma(\boldsymbol{\Delta}+1)\Gamma(\frac{1-d}{2}+\boldsymbol{\Delta})}\lambda\sinh(\pi\lambda)\,.\] (H.57) We follow the analogous steps for \(\rho_{J}^{\mathcal{P},0}(\lambda)\), starting from the inversion formula \[\rho_{J}^{\mathcal{P},0}(\lambda)=\frac{1}{\mathcal{N}_{1,0}}\int_{X_{1}}\Omega_{\lambda,0}(X_{2},X_{1})(\nabla_{1}\cdot K_{1})(\nabla_{2}\cdot K_{2})\langle J(X_{1};W_{1})J(X_{2};W_{2})\rangle\,.\] (H.58) We carry out the derivatives and the contractions with the \(K\) operators and obtain \[\begin{split}\rho_{J}^{\mathcal{P},0}(\lambda)&=\frac{c_{J}(d-1)^{2}(\boldsymbol{\Delta}-d)}{2^{2+\boldsymbol{\Delta}}\mathcal{N}_{1,0}}\int_{X_{1}}\frac{\Omega_{\lambda,0}(X_{2},X_{1})}{(1-X_{1}\cdot X_{2})^{\boldsymbol{\Delta}+1}}(1+\boldsymbol{\Delta}+(\boldsymbol{\Delta}-d)X_{1}\cdot X_{2})\\ &=\frac{c_{J}(d-1)^{2}(\boldsymbol{\Delta}-d)}{2^{2+\boldsymbol{\Delta}}\mathcal{N}_{1,0}}\left((1-d+2\boldsymbol{\Delta})\mathcal{I}^{(1)}_{\text{CFT},0,0}+(d-\boldsymbol{\Delta})\mathcal{I}^{(1)}_{\text{CFT},0,1}\right)\end{split}\] (H.59) Substituting the expression (H.47) and simplifying, we obtain what we presented in the main text \[\rho_{J}^{\mathcal{P},0}(\lambda)=c_{J}\frac{2^{3+d-2\boldsymbol{\Delta}}\pi^{\frac{d-1}{2}}(\boldsymbol{\Delta}-d)\Gamma\left(-\frac{d}{2}+\boldsymbol{\Delta}\pm i\lambda\right)}{(d^{2}+4\lambda^{2})\Gamma(\boldsymbol{\Delta}+1)\Gamma(\frac{1-d}{2}+\boldsymbol{\Delta})}\lambda\sinh(\pi\lambda)\,.\] (H.60)

#### H.2.3 Spin 2 CFT

To treat the spin 2 case, the logic is the same. \[\langle T(Y_{1};W_{1})T(Y_{2};W_{2})\rangle=\frac{c_{T}}{2^{\boldsymbol{\Delta}}}\Big{[}\frac{(W_{1}\cdot W_{2})^{2}}{(1-Y_{1}\cdot Y_{2})^{\boldsymbol{\Delta}}}+2\frac{(W_{1}\cdot W_{2})(Y_{1}\cdot W_{2})(Y_{2}\cdot W_{1})}{(1-Y_{1}\cdot Y_{2})^{\boldsymbol{\Delta}+1}}+\frac{[(Y_{1}\cdot W_{2})(Y_{2}\cdot W_{1})]^{2}}{(1-Y_{1}\cdot Y_{2})^{\boldsymbol{\Delta}+2}}\Big{]}\,.\] (H.61) We start from \(\rho_{T}^{\mathcal{P},2}(\lambda)\) \[\rho_{T}^{\mathcal{P},2}(\lambda)=\frac{1}{\mathcal{N}_{2,2}}\int_{X_{1}}\Omega_{\lambda,2}(X_{2},X_{1};K_{2},K_{1})\langle T(X_{1};W_{1})T(X_{2};W_{2})\rangle\,.\] (H.62) We follow the same steps as in the spin 1 example: we use the split representation on \(\Omega_{\lambda,2}\), carry out the contractions between \(D_{Z}\) and \(Z\) and between \(K\) and \(W\).
We land on a linear combination of scalar harmonic functions which we can express in terms of \(\mathcal{I}_{\text{CFT}}\) \[\begin{split}\rho_{T}^{\mathcal{P},2}(\lambda)=&\frac{\lambda^{2}(d+1)^{2}(d-1)^{3}\mathfrak{C}_{\Delta,2}\mathfrak{C}_{\bar{\Delta},2}}{2^{3+\boldsymbol{\Delta}}d\;\mathcal{N}_{2,2}}\sum_{\pm}\left(2\bar{\mathcal{I}}^{(2)}_{\text{CFT},\pm 2,0}+8\bar{\mathcal{I}}^{(2)}_{\text{CFT},\pm 1,0}+2(d-2)\bar{\mathcal{I}}^{(2)}_{\text{CFT},\pm 1,1}\right.\\ &\left.+12\bar{\mathcal{I}}^{(2)}_{\text{CFT},0,0}+4(d-2)\bar{\mathcal{I}}^{(2)}_{\text{CFT},0,1}+d(d-2)\bar{\mathcal{I}}^{(2)}_{\text{CFT},0,2}\right),\end{split}\] (H.63) which gives \[\rho_{T}^{\mathcal{P},2}(\lambda)=c_{T}\frac{2^{1+d-2\boldsymbol{\Delta}}\pi^{\frac{d-1}{2}}(\boldsymbol{\Delta}-1)\boldsymbol{\Delta}\Gamma(-\frac{d}{2}+\boldsymbol{\Delta}\pm i\lambda)}{\Gamma(\boldsymbol{\Delta}+2)\Gamma(\frac{1-d}{2}+\boldsymbol{\Delta})}\lambda\sinh(\pi\lambda)\,.\] (H.64) For \(\rho_{T}^{\mathcal{P},1}(\lambda)\) instead, we have \[\rho_{T}^{\mathcal{P},1}(\lambda)=\frac{1}{\mathcal{N}_{2,1}}\int_{X_{1}}\Omega_{\lambda,1}(X_{2},X_{1};K_{2},K_{1})(K_{1}\cdot\nabla_{1})(K_{2}\cdot\nabla_{2})\langle T(X_{1};W_{1})T(X_{2};W_{2})\rangle\,.\] (H.65) After applying the split representation and carrying out derivatives and index contractions, we obtain \[\begin{split}\rho_{T}^{\mathcal{P},1}(\lambda)=\widetilde{\mathcal{N}}_{2,1}^{\text{CFT}}\sum_{\pm}\Big{(}&(4(\boldsymbol{\Delta}-1)+d(6\boldsymbol{\Delta}-4+d(d-2\boldsymbol{\Delta}-5)))\bar{\mathcal{I}}_{\text{CFT},0,1}^{(2)}\\ &+(d+1+2d^{2}-\boldsymbol{\Delta}-3d\boldsymbol{\Delta})\left(2\bar{\mathcal{I}}_{\text{CFT},0,0}^{(2)}+\bar{\mathcal{I}}_{\text{CFT},\pm 1,0}^{(2)}\right)\\ &-(d+1)(d+1-\boldsymbol{\Delta})\left((d-2)\bar{\mathcal{I}}_{\text{CFT},0,2}^{(2)}-\bar{\mathcal{I}}_{\text{CFT},\pm 1,1}^{(2)}\right)\Big{)}\,.\end{split}\] (H.66) with \[\widetilde{\mathcal{N}}_{2,1}^{\text{CFT}}\equiv\frac{\lambda^{2}(d+1-\boldsymbol{\Delta})(d+1)(d-1)^{2}\mathfrak{C}_{\Delta,1}\mathfrak{C}_{\bar{\Delta},1}}{2^{3+\boldsymbol{\Delta}}\mathcal{N}_{2,1}}\,.\] (H.67) Explicitly, \[\rho_{T}^{\mathcal{P},1}(\lambda)=c_{T}\frac{2^{4+d-2\boldsymbol{\Delta}}\pi^{\frac{d-1}{2}}(1-\boldsymbol{\Delta})(d+1-\boldsymbol{\Delta})\Gamma(-\frac{d}{2}+\boldsymbol{\Delta}\pm i\lambda)}{((d+2)^{2}+4\lambda^{2})\Gamma(\boldsymbol{\Delta}+2)\Gamma(\frac{1-d}{2}+\boldsymbol{\Delta})}\lambda\sinh(\pi\lambda)\,,\] (H.68) Finally, we have \[\begin{split}\rho_{T}^{\mathcal{P},0}(\lambda)&=\frac{1}{\mathcal{N}_{2,0}}\int_{X_{1}}\Omega_{\lambda,0}(X_{2},X_{1})(K_{1}\cdot\nabla_{1})^{2}(K_{2}\cdot\nabla_{2})^{2}\langle T(X_{1};W_{1})T(X_{2};W_{2})\rangle\\ &=\widetilde{\mathcal{N}}_{2,0}^{\text{CFT}}\Big{(}(d^{2}+3+8\boldsymbol{\Delta}+4\boldsymbol{\Delta}^{2}-4d(\boldsymbol{\Delta}+1))\mathcal{I}_{\text{CFT},0,0}^{(2)}\\ &\qquad\quad-(d-\boldsymbol{\Delta})\left(2(d-1-2\boldsymbol{\Delta})\mathcal{I}_{\text{CFT},0,1}^{(2)}+(\boldsymbol{\Delta}-d-1)\mathcal{I}_{\text{CFT},0,2}\right)\Big{)}\\ &=c_{T}\frac{2^{5+d-2\boldsymbol{\Delta}}(d+1)\pi^{\frac{d-1}{2}}(d-\boldsymbol{\Delta})(d+1-\boldsymbol{\Delta})\Gamma(-\frac{d}{2}+\boldsymbol{\Delta}\pm i\lambda)}{d(d^{2}+4\lambda^{2})((d+2)^{2}+4\lambda^{2})\Gamma(\boldsymbol{\Delta}+2)\Gamma(\frac{1-d}{2}+\boldsymbol{\Delta})}\lambda\sinh(\pi\lambda)\,,\end{split}\] (H.69) where \[\widetilde{\mathcal{N}}_{2,0}^{\text{CFT}}\equiv\frac{(d-1)^{2}d(d+1)(d+d^{2}-2d\boldsymbol{\Delta}+\boldsymbol{\Delta}(\boldsymbol{\Delta}-1))}{2^{2+\boldsymbol{\Delta}}\mathcal{N}_{2,0}}\.\] (H.70)

## Appendix I Diagrammatics of de Sitter

In this section we review the in-in formalism
and we show the details of the computation in section 5.3. To perform computations in the in-in formalism, we find it convenient to analytically continue to EAdS, as done in [40, 48, 5], such that we can exploit the large body of mathematical results that are already known for Witten diagrams. In this subsection, we will only be interested in scalar fields, and as such we will omit spin labels. \(G_{\lambda}(Y_{1},Y_{2})\) will indicate a spin 0 free propagator, which we otherwise refer to as \(G_{\lambda,0}(Y_{1},Y_{2})\).

### In-in formalism

The in-in (or Schwinger-Keldysh) formalism [84] has been used to compute physical observables in QFT in de Sitter since the seminal works [85; 86]. We are interested in using it to compute bulk two-point functions in the interacting Bunch-Davies vacuum \[\langle\Omega|\mathcal{O}(\eta_{1},\mathbf{y}_{1})\mathcal{O}(\eta_{2},\mathbf{y}_{2})|\Omega\rangle\,. \tag{I.1}\] More explicitly, in the interaction picture, we are computing \[\langle\Omega|\mathcal{O}(\eta_{1},\mathbf{y}_{1})\mathcal{O}(\eta_{2},\mathbf{y}_{2})|\Omega\rangle=\frac{\langle 0|U_{I}^{\dagger}(\eta_{1},-\infty)\mathcal{O}_{I}(\eta_{1},\mathbf{y}_{1})U_{I}^{\dagger}(0,\eta_{1})U_{I}(0,\eta_{2})\mathcal{O}_{I}(\eta_{2},\mathbf{y}_{2})U_{I}(\eta_{2},-\infty)|0\rangle}{\langle 0|U_{I}^{\dagger}(0,-\infty)U_{I}(0,-\infty)|0\rangle} \tag{I.2}\] where \[U_{I}(\eta_{1},\eta_{2})=T\left[\exp\left(-i\int_{\eta_{2}(1-i\epsilon)}^{\eta_{1}(1-i\epsilon)}d\eta\ H_{I}(\eta)\right)\right]\,, \tag{I.3}\] is the time evolution operator with the interacting part of the Hamiltonian \(H_{I}(\eta)\) (which in de Sitter explicitly depends on time), \(\mathcal{O}_{I}(\eta,\mathbf{y})\) is the operator \(\mathcal{O}(\eta,\mathbf{y})\) in the interaction picture and \(|0\rangle\) is the free Bunch-Davies vacuum. Concretely, we will be interested in the case where \(\mathcal{O}_{I}=\phi^{2}\) and \(\phi\) is an elementary field in the following theory \[\mathcal{L}=-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}m^{2}\phi^{2}-\frac{g}{4!}\phi^{4}\,. \tag{I.4}\] The full contour time integral in (I.2) is represented pictorially in fig. 1.

Figure 1: The contour integral of the in-in formalism, when computing the Wightman function \(G^{lr}(Y_{1},Y_{2})\). The ordering in time of \(\eta_{1}\) and \(\eta_{2}\) does not matter.

Expanding the exponentials in (I.2) for weak couplings and carrying out all the possible Wick contractions results in a set of diagrammatic rules, which are for instance explained in the appendix of [85]. We review them here for completeness, and to introduce the notation that we use in section 5.3:

* There are "right" (\(r\)) and "left" (\(l\)) vertices coming from operators in the interacting Hamiltonian, depending on whether they are, respectively, in the time ordered or the anti-time ordered part of the contour. The right vertices are multiplied by \(-i\), while the left vertices are multiplied by \(+i\). One then sums over each vertex being \(l\) or \(r\).
* Wick contractions between operators on the time ordered part of the contour lead to time ordered propagators \(G^{rr}_{\lambda_{\phi}}(Y_{1},Y_{2})=\langle T\phi(Y_{1})\phi(Y_{2})\rangle\), with \(\phi\) being some free field with \(\Delta=\frac{d}{2}+i\lambda_{\phi}\) and where we are now using embedding space notation for the coordinates in de Sitter.
* Wick contractions between operators on the anti-time ordered part of the contour lead to anti-time ordered propagators \(G^{ll}_{\lambda_{\phi}}(Y_{1},Y_{2})=\langle\bar{T}\phi(Y_{1})\phi(Y_{2})\rangle\).
* Wick contractions between operators inserted on two different branches of the time contour lead to Wightman functions \[G^{lr}_{\lambda_{\phi}}(Y_{1},Y_{2})=\langle\phi(Y_{1})\phi(Y_{2})\rangle\,,\qquad G^{rl}_{\lambda_{\phi}}(Y_{1},Y_{2})=\langle\phi(Y_{2})\phi(Y_{1})\rangle.\] (I.5)

These definitions imply the following relations \[\begin{split} G^{ll}_{\lambda_{\phi}}(Y_{1},Y_{2})&=\theta(\eta_{1}-\eta_{2})G^{rl}_{\lambda_{\phi}}(Y_{1},Y_{2})+\theta(\eta_{2}-\eta_{1})G^{lr}_{\lambda_{\phi}}(Y_{1},Y_{2})\,,\\ G^{rr}_{\lambda_{\phi}}(Y_{1},Y_{2})&=\theta(\eta_{1}-\eta_{2})G^{lr}_{\lambda_{\phi}}(Y_{1},Y_{2})+\theta(\eta_{2}-\eta_{1})G^{rl}_{\lambda_{\phi}}(Y_{1},Y_{2})\,.\end{split} \tag{I.6}\]

### EAdS-dS dictionary

In this section, we review the dictionary between in-in de Sitter diagrams and Witten diagrams in EAdS. We translate the rules discussed in [5; 48] into our own embedding space notation. The Wick rotation chosen to open up the in-in contour and analytically continue to EAdS is the following, in planar coordinates (see section F.1 for a discussion on coordinate systems in dS and EAdS) \[\eta^{l}\to e^{i\frac{\pi}{2}}\eta^{l}\,,\qquad\eta^{r}\to e^{-i\frac{\pi}{2}}\eta^{r}\,,\] (I.7) and then identifying the absolute value of \(\eta\) with the radial coordinate \(z\) in EAdS. The authors of [5; 48] have shown that, under this continuation, \[\begin{split} G^{ll}_{\lambda}(Y_{1},Y_{2})&\to\frac{i\lambda}{2\pi}\Gamma(\pm i\lambda)\left(e^{i\pi\Delta_{\lambda}}\Pi_{\Delta_{\lambda}}(X_{1},X_{2})-e^{i\pi\bar{\Delta}_{\lambda}}\Pi_{\bar{\Delta}_{\lambda}}(X_{1},X_{2})\right)\,,\\ G^{rr}_{\lambda}(Y_{1},Y_{2})&\to\frac{i\lambda}{2\pi}\Gamma(\pm i\lambda)\left(e^{-i\pi\Delta_{\lambda}}\Pi_{\Delta_{\lambda}}(X_{1},X_{2})-e^{-i\pi\bar{\Delta}_{\lambda}}\Pi_{\bar{\Delta}_{\lambda}}(X_{1},X_{2})\right)\,,\\ G^{lr}_{\lambda}(Y_{1},Y_{2})&\to\Gamma(\pm i\lambda)\Omega_{\lambda}(X_{1},X_{2})\,,\\ G^{rl}_{\lambda}(Y_{1},Y_{2})&\to\Gamma(\pm i\lambda)\Omega_{\lambda}(X_{1},X_{2})\,,\end{split}\] (I.8) where \(\Pi_{\Delta}(X_{1},X_{2})\) is an EAdS bulk-to-bulk propagator and \(\Delta_{\lambda}\equiv\frac{d}{2}+i\lambda\). Under this continuation, the integrals appearing in perturbative computations also rotate accordingly \[i\int_{Y_{l}}(\cdots)\to e^{-i\frac{\pi}{2}(d-1)}\int_{X}(\cdots)\,,\qquad-i\int_{Y_{r}}(\cdots)\to e^{i\frac{\pi}{2}(d-1)}\int_{X}(\cdots)\,,\] (I.9) where \[\int_{Y_{\alpha}}(\cdots)\equiv\int\frac{d\eta^{\alpha}d^{d}y}{(-\eta^{\alpha})^{d+1}}(\cdots)\] (I.10) with \(\alpha=l\) or \(\alpha=r\) and \(\int_{X}\) is defined in (F.10).

### Details of the anomalous dimensions computation

We report here the details of the computation of the anomalous dimensions presented in section 5.3.1. Let us start from the in-in formalism sum for the order \(g^{2}\) contribution to the Wightman two-point function of \(\phi^{2}\), where we selected \(Y_{1}\in l\) and \(Y_{2}\in r\).
### Details of the anomalous dimensions computation

We report here the details of the computation of the anomalous dimensions presented in section 5.3.1. Let us start from the in-in formalism sum for the order \(g\) contribution to the Wightman two-point function of \(\phi^{2}\), where we selected \(Y_{1}\in l\) and \(Y_{2}\in r\). We have only one vertex, so we obtain two terms, since we have to consider the case in which this vertex comes from the interaction Hamiltonian in the time ordered \((r)\) and in the anti-time ordered \((l)\) part of (I.2):
\[\langle\phi^{2}(Y_{1})\phi^{2}(Y_{2})\rangle^{lr}_{(g)}=ig\left[\int_{Y^{l}}(G^{ll}_{\lambda_{\phi}}(Y_{1},Y))^{2}(G^{lr}_{\lambda_{\phi}}(Y,Y_{2}))^{2}-\int_{Y^{r}}(G^{lr}_{\lambda_{\phi}}(Y_{1},Y))^{2}(G^{rr}_{\lambda_{\phi}}(Y,Y_{2}))^{2}\right]\,,\] (I.11)
see Figure 5.2 for the associated diagram. Analytically continuing to EAdS, we apply the rules (I.8) and obtain
\[\begin{split}\langle\phi^{2}(X_{1})\phi^{2}(X_{2})\rangle^{lr}_{(g)}=&\ \mathcal{N}_{(g)}\Big{[}e^{-i\frac{\pi}{2}(d-1)}\int_{X}\Big{(}2e^{i\pi d}\Pi_{\Delta_{\phi}}(X_{1},X)\Pi_{\bar{\Delta}_{\phi}}(X_{1},X)-e^{2i\pi\Delta_{\phi}}\Pi_{\Delta_{\phi}}^{2}(X_{1},X)\\&\qquad\qquad\qquad\qquad-e^{2i\pi\bar{\Delta}_{\phi}}\Pi_{\bar{\Delta}_{\phi}}^{2}(X_{1},X)\Big{)}\Omega_{\lambda_{\phi}}^{2}(X,X_{2})\\&\qquad+e^{i\frac{\pi}{2}(d-1)}\int_{X}\Big{(}2e^{-i\pi d}\Pi_{\Delta_{\phi}}(X,X_{2})\Pi_{\bar{\Delta}_{\phi}}(X,X_{2})-e^{-2i\pi\Delta_{\phi}}\Pi_{\Delta_{\phi}}^{2}(X,X_{2})\\&\qquad\qquad\qquad\qquad-e^{-2i\pi\bar{\Delta}_{\phi}}\Pi_{\bar{\Delta}_{\phi}}^{2}(X,X_{2})\Big{)}\Omega_{\lambda_{\phi}}^{2}(X_{1},X)\Big{]}\,,\end{split}\] (I.12)
where the normalization factor is
\[\mathcal{N}_{(g)}\equiv\frac{g\lambda_{\phi}^{2}}{(2\pi)^{2}}\Gamma(\pm i\lambda_{\phi})^{4}\,.\] (I.13)
Therefore, we need to evaluate a bulk integral that involves two bulk-to-bulk propagators and two harmonic functions. To make progress, we express \(\Omega_{\lambda_{\phi}}^{2}\) appearing in (I.12) as an integral over a single harmonic function. This was effectively done in section 5, and takes the following form
\[(\Omega_{\lambda_{\phi}}(X_{1},X_{2}))^{2}=\int_{\mathbb{R}}d\lambda\ \rho_{\Omega}^{\mathcal{P}}(\lambda)\Omega_{\lambda}(X_{1},X_{2})+\sum_{n=0}^{N}\rho_{\Omega}^{\mathcal{C}}(n)\Omega_{2\lambda_{\phi}+i(\frac{d}{2}+2n)}(X_{1},X_{2})\,,\] (I.14)
where the sum appears only if \(\lambda_{\phi}\) is imaginary and \(\frac{d}{4}+N<i\lambda_{\phi}<\frac{d}{4}+N+1\), and
\[\begin{split}\rho_{\Omega}^{\mathcal{P}}(\lambda)&=\frac{\Gamma(\pm i\lambda)}{2\Gamma(\pm i\lambda_{\phi})^{2}}\rho_{\phi^{2},\text{free}}^{\mathcal{P},0}(\lambda)=\frac{\lambda_{\phi}^{2}\sinh^{2}(\pi\lambda_{\phi})}{32\pi^{4+\frac{d}{2}}\Gamma(\frac{d}{2})\Gamma(\frac{d}{2}\pm i\lambda)}\Gamma\left(\frac{\frac{d}{2}\pm i\lambda\pm 2i\lambda_{\phi}}{2}\right)^{2}\prod_{\pm,\pm}\Gamma\left(\frac{\frac{d}{2}\pm i\lambda\pm 2i\lambda_{\phi}}{2}\right)\,,\\ \rho_{\Omega}^{\mathcal{C}}(n)&=\frac{\lambda_{\phi}^{2}(\frac{d}{2})_{n}\Gamma(\frac{d}{2}+n-i\lambda_{\phi})^{2}\Gamma(-n+i\lambda_{\phi})^{2}\Gamma(\frac{d}{2}+n-2i\lambda_{\phi})\Gamma(-n+2i\lambda_{\phi})\sinh^{2}(\pi\lambda_{\phi})}{4n!(-1)^{n}\Gamma(d+2n-2i\lambda_{\phi})\Gamma(-2n+2i\lambda_{\phi})}\,.\end{split}\] (I.15)
The density \(\rho_{\phi^{2},\text{free}}^{\mathcal{P},0}(\lambda)\) for the free theory is given by eq. (5.96). The remaining integral to be evaluated is then of the type
\[\int_{X}\ \Pi_{\Delta_{1}}(X_{1},X)\Pi_{\Delta_{2}}(X_{1},X)\Omega_{\lambda}(X,X_{2})\,,\] (I.16)
where \(\Delta_{1}\) and \(\Delta_{2}\) are equal to either \(\Delta_{\phi}\) or \(\bar{\Delta}_{\phi}\).
This integral is convergent for \(d<3\), and has a UV divergence when \(d\geq 3\), which can be easily seen from the coincident limit \(\Pi_{\Delta}(X_{1},X_{2})\sim|1+X_{1}\cdot X_{2}|^{\frac{1-d}{2}}\). We will regularize this UV divergence in dS\({}_{4}\) by using dimensional regularization, namely taking \(d=3-\epsilon\). With the regularization scheme specified, we proceed to compute the integral in (I.16) with the help of the Kallen-Lehmann decomposition in AdS [87]
\[\begin{split}&\Pi_{\Delta_{1}}(X_{1},X)\Pi_{\Delta_{2}}(X_{1},X)=\sum_{n\geq 0}a_{\Delta_{1},\Delta_{2}}(n)\Pi_{\Delta_{1}+\Delta_{2}+2n}(X_{1},X)\,,\\&a_{\Delta_{1},\Delta_{2}}(n)\equiv\frac{(\frac{d}{2})_{n}\,(\Delta_{1}+\Delta_{2}+n+1-d)_{n}\,(\Delta_{1}+\Delta_{2}+2n)_{\frac{2-d}{2}}}{2\pi^{\frac{d}{2}}\,n!\,(\Delta_{1}+n)_{\frac{2-d}{2}}\,(\Delta_{2}+n)_{\frac{2-d}{2}}\,(\Delta_{1}+\Delta_{2}+n-\frac{d}{2})_{n}}\,.\end{split}\] (I.17)
The resulting integral involves only one bulk-to-bulk propagator and one harmonic function, i.e. \(\int_{X}\Pi_{\Delta}(X_{1},X)\Omega_{\lambda}(X,X_{2})\). Such an integral can be evaluated using the harmonic decomposition of \(\Pi_{\Delta}\) [34]
\[\Pi_{\Delta}(X_{1},X)=\int_{\mathbb{R}}\,d\lambda\,\frac{\Omega_{\lambda}(X_{1},X)}{\lambda^{2}+\left(\Delta-\frac{d}{2}\right)^{2}}\,,\] (I.18)
where the real part of \(\Delta\) should be larger than \(\frac{d}{2}\). Applying the orthogonality relation (F.12) of the harmonic functions to (I.18) yields
\[\int_{X}\Pi_{\Delta}(X_{1},X)\Omega_{\lambda}(X,X_{2})=\frac{\Omega_{\lambda}(X_{1},X_{2})}{\lambda^{2}+\left(\Delta-\frac{d}{2}\right)^{2}},\ \ \operatorname{Re}\Delta>\frac{d}{2}\,.\] (I.19)
Putting all the ingredients together, we obtain
\[\int_{X}\Pi_{\Delta_{1}}(X_{1},X)\Pi_{\Delta_{2}}(X_{1},X)\Omega_{\lambda_{\phi}}^{2}(X,X_{2})=\int_{\mathbb{R}}d\lambda\,\rho_{\Omega}^{\mathcal{P}}(\lambda)\,B_{\Delta_{1},\Delta_{2}}(\lambda)\,\Omega_{\lambda}(X_{1},X_{2})\,,\] (I.20)
where
\[B_{\Delta_{1},\Delta_{2}}(\lambda)\equiv\sum_{n=0}^{\infty}\frac{a_{\Delta_{1},\Delta_{2}}(n)}{\lambda^{2}+\left(\Delta_{1}+\Delta_{2}+2n-\frac{d}{2}\right)^{2}}\] (I.21)
is known as the bubble function [48; 5]. The infinite sum defining \(B_{\Delta_{1},\Delta_{2}}\) is divergent when \(d\geq 3\), because the leading large \(n\) behavior of its summand is \(2^{-d-1}\pi^{-\frac{d}{2}}n^{d-4}\). This is a UV divergence. In the convergence region, the sum can be performed in closed form, giving a \({}_{7}F_{6}\) hypergeometric function [48], and when \(d=2\), the result can be further simplified in terms of \(\psi\) functions [79]. However, these re-summed expressions are not directly useful for our purposes, since we are mainly interested in the residues of \(B_{\Delta_{1},\Delta_{2}}\). For dS\({}_{4}\), the dimensional regularization \(d=3-\epsilon\) is implicitly implemented. In dimensional regularization, \(B_{\Delta_{1},\Delta_{2}}(\lambda)\) has a \(\lambda\)-independent \(\frac{1}{\epsilon}\) divergence, and the analytic properties of its finite part are insensitive to the renormalization scheme. Therefore, we will still use eq. (I.21) formally in dS\({}_{4}\), without specifying any renormalization.
Altogether, combining eq. (I.12) and eq. (I.20) and rotating back to de Sitter, we get the leading order correction to the Kallen-Lehmann decomposition of \(\phi^{2}\):
\[\langle\phi^{2}(Y_{1})\phi^{2}(Y_{2})\rangle^{lr}_{(g)}=\int_{\mathbb{R}}d\lambda\ \rho^{\mathcal{P}}_{\phi^{2},g}(\lambda)G^{lr}_{\lambda}(Y_{1},Y_{2})\,,\] (I.22)
where
\[\begin{split}\rho^{\mathcal{P}}_{\phi^{2},g}(\lambda)=&\ g\frac{\rho^{\mathcal{P},0}_{\phi^{2},\text{free}}(\lambda)}{4\sinh^{2}(\pi\lambda_{\phi})}\Bigg{[}\sin\left(\pi\left(\frac{d}{2}+2i\lambda_{\phi}\right)\right)B_{\Delta_{\phi},\Delta_{\phi}}(\lambda)\\&+\sin\left(\pi\left(\frac{d}{2}-2i\lambda_{\phi}\right)\right)B_{\bar{\Delta}_{\phi},\bar{\Delta}_{\phi}}(\lambda)-2\sin\left(\frac{d\pi}{2}\right)B_{\Delta_{\phi},\bar{\Delta}_{\phi}}(\lambda)\Bigg{]}\,.\end{split}\] (I.23)
Now, to compute the anomalous dimensions of \([\mathcal{O}\mathcal{O}]_{n}\) and \([\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{n}\), we need to extract the coefficient of the double poles at \(\Delta=2\Delta_{\phi}+2n\) and \(\Delta=2\bar{\Delta}_{\phi}+2n\) in (I.23), where \(\Delta\equiv\frac{d}{2}+i\lambda\), as explained in section 5.3. From eq. (I.21), we know that the bubble function \(B_{\Delta_{\phi},\Delta_{\phi}}\) (\(B_{\bar{\Delta}_{\phi},\bar{\Delta}_{\phi}}\)) has a single pole at \(2\Delta_{\phi}+2n\) (\(2\bar{\Delta}_{\phi}+2n\)). In addition, \(\rho^{\mathcal{P},0}_{\phi^{2},\text{free}}(\lambda)\) also has single poles at these points. Therefore, in this case, the coefficient \(c_{2}\) defined by eq. (5.91) should be
\[\begin{split} c_{2}^{[\mathcal{O}\mathcal{O}]_{n}}&=\frac{\sin\left(\pi\left(\frac{d}{2}+2i\lambda_{\phi}\right)\right)}{4\sinh^{2}(\pi\lambda_{\phi})}\frac{a_{\Delta_{\phi},\Delta_{\phi}}(n)}{d-4n-4\Delta_{\phi}}c_{0}^{[\mathcal{O}\mathcal{O}]_{n}}\,,\\ c_{2}^{[\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{n}}&=\frac{\sin\left(\pi\left(\frac{d}{2}-2i\lambda_{\phi}\right)\right)}{4\sinh^{2}(\pi\lambda_{\phi})}\frac{a_{\bar{\Delta}_{\phi},\bar{\Delta}_{\phi}}(n)}{d-4n-4\bar{\Delta}_{\phi}}c_{0}^{[\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{n}}\,,\end{split}\] (I.24)
where
\[c_{0}^{[\mathcal{O}\mathcal{O}]_{n}}=\underset{\Delta=2\Delta_{\phi}+2n}{\text{Res}}\rho^{\mathcal{P},0}_{\phi^{2},\text{free}}(\lambda),\qquad c_{0}^{[\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{n}}=\underset{\Delta=2\bar{\Delta}_{\phi}+2n}{\text{Res}}\rho^{\mathcal{P},0}_{\phi^{2},\text{free}}(\lambda)\,.\] (I.25)
Plugging eq. (I.24) into eq. (5.93), we obtain the anomalous dimensions of \([\mathcal{O}\mathcal{O}]_{n}\) and \([\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{n}\) respectively
\[\begin{split}\gamma^{[\mathcal{O}\mathcal{O}]_{n}}&=g\frac{c_{2}^{[\mathcal{O}\mathcal{O}]_{n}}}{c_{0}^{[\mathcal{O}\mathcal{O}]_{n}}}=g\,\frac{\sin\left(\pi\left(\frac{d}{2}+2i\lambda_{\phi}\right)\right)}{4\sinh^{2}(\pi\lambda_{\phi})}\frac{a_{\Delta_{\phi},\Delta_{\phi}}(n)}{d-4n-4\Delta_{\phi}}\,,\\ \gamma^{[\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{n}}&=g\,\frac{c_{2}^{[\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{n}}}{c_{0}^{[\widetilde{\mathcal{O}}\widetilde{\mathcal{O}}]_{n}}}=g\,\frac{\sin\left(\pi\left(\frac{d}{2}-2i\lambda_{\phi}\right)\right)}{4\sinh^{2}(\pi\lambda_{\phi})}\frac{a_{\bar{\Delta}_{\phi},\bar{\Delta}_{\phi}}(n)}{d-4n-4\bar{\Delta}_{\phi}}\,,\end{split}\] (I.26)
where \(a_{\Delta_{1},\Delta_{2}}(n)\) is given by eq. (I.17).
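For readers who want to put numbers to these expressions, the following short script (our own illustrative sketch, not code from this paper) evaluates the coefficient \(a_{\Delta_{1},\Delta_{2}}(n)\) of eq. (I.17) and the anomalous dimension \(\gamma^{[\mathcal{O}\mathcal{O}]_{n}}\) of eq. (I.26); the sample values \(g=1\), \(d=2\), \(\lambda_{\phi}=1\) are arbitrary choices in the convergent regime.

```python
import mpmath as mp

def poch(a, k):
    # Pochhammer symbol (a)_k = Gamma(a + k) / Gamma(a), for generic complex a, k
    return mp.gamma(a + k) / mp.gamma(a)

def a_coeff(d, D1, D2, n):
    # coefficient a_{Delta1,Delta2}(n) of eq. (I.17)
    num = poch(d/2, n) * poch(D1 + D2 + n + 1 - d, n) * poch(D1 + D2 + 2*n, (2 - d)/2)
    den = (2 * mp.pi**(mp.mpf(d)/2) * mp.factorial(n)
           * poch(D1 + n, (2 - d)/2) * poch(D2 + n, (2 - d)/2)
           * poch(D1 + D2 + n - mp.mpf(d)/2, n))
    return num / den

def gamma_OO(g, d, lam_phi, n):
    # leading anomalous dimension of [OO]_n, eq. (I.26)
    D = mp.mpc(mp.mpf(d)/2, lam_phi)  # Delta_phi = d/2 + i*lambda_phi
    pref = mp.sin(mp.pi*(mp.mpf(d)/2 + 2j*lam_phi)) / (4*mp.sinh(mp.pi*lam_phi)**2)
    return g * pref * a_coeff(d, D, D, n) / (d - 4*n - 4*D)

for n in range(3):
    print(n, mp.chop(gamma_OO(1.0, 2, 1.0, n)))
```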
2309.10360
OccluTrack: Rethinking Awareness of Occlusion for Enhancing Multiple Pedestrian Tracking
Multiple pedestrian tracking faces the challenge of tracking pedestrians in the presence of occlusion. Existing methods suffer from inaccurate motion estimation, appearance feature extraction, and association due to occlusion, leading to inadequate Identification F1-Score (IDF1), excessive ID switches (IDSw), and insufficient association accuracy and recall (AssA and AssR). We found that the main reason is abnormal detections caused by partial occlusion. In this paper, we suggest that the key insight is explicit motion estimation, reliable appearance features, and fair association in occlusion scenes. Specifically, we propose an adaptive occlusion-aware multiple pedestrian tracker, OccluTrack. We first introduce an abnormal motion suppression mechanism into the Kalman Filter to adaptively detect and suppress outlier motions caused by partial occlusion. Second, we propose a pose-guided re-ID module to extract discriminative part features for partially occluded pedestrians. Last, we design a new occlusion-aware association method towards fair IoU and appearance embedding distance measurement for occluded pedestrians. Extensive evaluation results demonstrate that our OccluTrack outperforms state-of-the-art methods on MOT-Challenge datasets. Particularly, the improvements on IDF1, IDSw, AssA, and AssR demonstrate the effectiveness of our OccluTrack on tracking and association performance.
Jianjun Gao, Yi Wang, Kim-Hui Yap, Kratika Garg, Boon Siew Han
2023-09-19T06:43:18Z
http://arxiv.org/abs/2309.10360v1
# OccluTrack: Rethinking Awareness of Occlusion for Enhancing Multiple Pedestrian Tracking

###### Abstract

Multiple pedestrian tracking faces the challenge of tracking pedestrians in the presence of occlusion. Existing methods suffer from inaccurate motion estimation, appearance feature extraction, and association due to occlusion, leading to inadequate Identification F1-Score (IDF1), excessive ID switches (IDSw), and insufficient association accuracy and recall (AssA and AssR). We found that the main reason is abnormal detections caused by partial occlusion. In this paper, we suggest that the key insight is explicit motion estimation, reliable appearance features, and fair association in occlusion scenes. Specifically, we propose an adaptive occlusion-aware multiple pedestrian tracker, OccluTrack. We first introduce an abnormal motion suppression mechanism into the Kalman Filter to adaptively detect and suppress outlier motions caused by partial occlusion. Second, we propose a pose-guided re-ID module to extract discriminative part features for partially occluded pedestrians. Last, we design a new occlusion-aware association method towards fair IoU and appearance embedding distance measurement for occluded pedestrians. Extensive evaluation results demonstrate that our OccluTrack outperforms state-of-the-art methods on _MOT-Challenge_ datasets. Particularly, the improvements on IDF1, IDSw, AssA, and AssR demonstrate the effectiveness of our OccluTrack on tracking and association performance.

Index Terms: Multiple pedestrian tracking, tracking by detection, Kalman filter, re-identification, data association

## I Introduction

Multiple pedestrian tracking is a challenging task aiming to form trajectories for detected pedestrians over time. This process involves detecting pedestrians and associating identical pedestrians across sequential frames. It is important in various real-world applications such as surveillance [1, 2], robotics [3, 4], and autonomous driving [5, 6]. Existing state-of-the-art multiple pedestrian tracking methods [7, 8, 9, 10] associate detected pedestrians by combining cues like motion and appearance features to address the occlusion problem. The Kalman Filter, a commonly used motion estimator, provides motion cues by formulating multiple pedestrian tracking as a linear estimation problem, which recursively predicts and updates trajectories from noisy observations (detections) over frames. Re-ID modules extract appearance features from multiple frames to identify the same pedestrian. However, these methods, even with re-ID, are insufficient to resolve occlusion problems because of inaccurate motion estimates, unreliable appearance features, and unfair association.

Recently, occlusion has drawn much attention in multiple pedestrian tracking, and various approaches have been proposed to address this issue, e.g., tracking by attention [11, 12, 13], graph neural networks [14], self- and cross-attention mechanisms [15, 16, 17, 18], and hierarchical feature extraction [19]. These approaches address the occlusion problem by modeling motion patterns via advanced deep learning models over adjacent or multiple frames [11, 15, 16, 18] or by understanding the context information in the scene [14, 17, 19]. However, they require significant computational resources compared with the Kalman Filter, and they ignore the effects caused by partial occlusion. Through visualization of experimental results, we found that partial occlusion is the missing key to resolving the occlusion problem.
Partial occlusion creates abnormal bounding boxes that cover only some body parts. These abnormal bounding boxes change suddenly in center point and aspect ratio, which causes "inaccurate motion estimates", "unreliable appearance features", and "unfair associations". As shown in Fig. 1(a), abnormal detections caused by partial occlusion mislead the motion estimator into predicting wrong trajectories. In particular, the errors accumulate and are amplified during full occlusion, when no accurate observations are available. As for re-ID modules, the inputs are images cropped according to the detections. During partial occlusion, however, a cropped image covers two persons (part of the obscured person and part of the visible person), introducing "unreliable and noisy appearance features". Moreover, the predictions from the motion estimator during occlusion are not as accurate as those in normal situations because of the error accumulation caused by partial occlusion. Hence, the association method should not treat occluded persons as strictly as visible persons; doing so results in an "unfair association". To alleviate the three problems caused by partial occlusion, we propose OccluTrack, an adaptive occlusion-aware multiple pedestrian tracker with three strategies, as shown in Fig. 1(b). First, we propose an abnormal motion suppression mechanism for stabilizing parameter updates in the Kalman Filter. In particular, the abnormal motion suppression mechanism leverages the history of a tracked person's observations to detect and suppress abnormal motions, so that motion can be predicted more accurately when partial occlusion occurs. Second, we introduce a pose-guided re-ID strategy for robust part-feature extraction. The pose-guided re-ID strategy utilizes a pose estimator to guide the feature extractor towards more discriminative features. Lastly, we adopt an occlusion-aware distance measurement for occluded person association. This strategy combines IoU distance and appearance embedding distance under adaptive thresholds based on the level of occlusion, which is fair to occluded pedestrians.
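To illustrate the general idea of damping Kalman updates from abnormal detections, here is a minimal one-dimensional sketch. It is our own illustration, not the paper's mechanism: the outlier test (a z-score of the innovation against a short history window) and the noise-inflation rule are assumptions made for the example.

```python
import numpy as np
from collections import deque

class SuppressedKalman1D:
    """Constant-velocity Kalman filter for one box coordinate,
    with a simple abnormal-motion suppression heuristic."""
    def __init__(self, x0, q=1e-2, r=1e-1, window=10, z_thresh=3.0):
        self.x = np.array([x0, 0.0])              # state: [position, velocity]
        self.P = np.eye(2)
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])
        self.H = np.array([[1.0, 0.0]])
        self.Q = q * np.eye(2)
        self.r = r
        self.hist = deque(maxlen=window)          # recent innovations
        self.z_thresh = z_thresh

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # innovation and its history-based z-score
        nu = z - (self.H @ self.x)[0]
        sigma = float(np.std(np.array(self.hist))) if len(self.hist) >= 3 else np.inf
        r_eff = self.r
        if np.isfinite(sigma) and abs(nu) > self.z_thresh * sigma:
            # abnormal motion (e.g. partial occlusion): inflate the measurement
            # noise so the update trusts the detection less
            r_eff = self.r * (abs(nu) / (self.z_thresh * sigma)) ** 2
        # update with the (possibly inflated) measurement noise
        S = (self.H @ self.P @ self.H.T)[0, 0] + r_eff
        K = (self.P @ self.H.T / S).ravel()
        self.x = self.x + K * nu
        self.P = (np.eye(2) - np.outer(K, self.H)) @ self.P
        self.hist.append(nu)
        return self.x[0]
```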
2308.16438
Model Selection for Ordinary Differential Equations: a Statistical Testing Approach
Ordinary differential equations (ODEs) are foundational in modeling intricate dynamics across a gamut of scientific disciplines. Yet, a possibility to represent a single phenomenon through multiple ODE models, driven by different understandings of nuances in internal mechanisms or abstraction levels, presents a model selection challenge. This study introduces a testing-based approach for ODE model selection amidst statistical noise. Rooted in the model misspecification framework, we adapt foundational insights from classical statistical paradigms (Vuong and Hotelling) to the ODE context, allowing for the comparison and ranking of diverse causal explanations without the constraints of nested models. Our simulation studies validate the theoretical robustness of our proposed test, revealing its consistent size and power. Real-world data examples further underscore the algorithm's applicability in practice. To foster accessibility and encourage real-world applications, we provide a user-friendly Python implementation of our model selection algorithm, bridging theoretical advancements with hands-on tools for the scientific community.
Itai Dattner, Shota Gugushvili, Oleksandr Laskorunskyi
2023-08-31T03:57:08Z
http://arxiv.org/abs/2308.16438v1
# Model Selection for Ordinary Differential Equations: a Statistical Testing Approach

###### Abstract

Ordinary differential equations (ODEs) are foundational in modeling intricate dynamics across a gamut of scientific disciplines. Yet, a possibility to represent a single phenomenon through multiple ODE models, driven by different understandings of nuances in internal mechanisms or abstraction levels, presents a model selection challenge. This study introduces a testing-based approach for ODE model selection amidst statistical noise. Rooted in the model misspecification framework, we adapt foundational insights from classical statistical paradigms (Vuong and Hotelling) to the ODE context, allowing for the comparison and ranking of diverse causal explanations without the constraints of nested models. Our simulation studies validate the theoretical robustness of our proposed test, revealing its consistent size and power. Real-world data examples further underscore the algorithm's applicability in practice. To foster accessibility and encourage real-world applications, we provide a user-friendly Python implementation of our model selection algorithm, bridging theoretical advancements with hands-on tools for the scientific community.

## 1 Background and Motivation

### Mechanistic Modelling with Ordinary Differential Equations

Differential equations have proven to be a powerful modeling tool in science and engineering. They are widely used for modelling purposes, e.g., in mathematical biology, see Edelstein-Keshet (2005) and Murray (2002); in the theory of chemical reaction networks, see Feinberg (1979); in biochemistry, see Voit (2000); and in compartmental models in epidemiology, see Anderson et al. (1992). On an abstract level, differential equations comprise a class of mechanistic models. Mechanistic models are typically developed based both on empirical knowledge and on the fundamental laws of nature (first principles), and harness some information on causal mechanisms governing a system of interest. Upon their calibration, mechanistic models can be leveraged in applications where experiments are either impossible or costly to perform, ideally yielding new and valuable insights into a phenomenon under study, cf. Baker et al. (2018), and leading to better prediction and control of dynamic processes, cf. Strogatz (2018).

### Aims and Contribution

In many situations one wants to compare several ODE models for a given phenomenon. This multiplicity of models arises, e.g., when some internal mechanisms governing the process are known only approximately. Other times there is contradictory scientific knowledge on underlying causal relationships, resulting in differing ODE model formulations. Finally, a possibility to choose the abstraction or resolution level at which to represent mathematically a given phenomenon may also lead to competing ODE models. In fact, often the detail level in ODE modelling is dictated by computational and feasibility considerations. See Dattner et al. (2017) and van Voorn et al. (2023) for some examples of the above considerations. In this paper, we focus on the common and practically important case of complex dynamic processes observed with statistical noise. Extensive lists of references on statistical modelling and inference for dynamical systems that cover a wide selection of areas can be found in Ramsay (2006) and Ramsay and Hooker (2017).
For a recent review of the role played by differential equations in data analysis, with a focus on parameter estimation for ODE models, see Dattner (2021). Under this statistical setup, we study model selection issues. In particular, we devise an applicable methodology that sheds additional light on the modelling questions at hand and that has a potential to translate into practical recommendations or actions.

### Approaches to Model Selection for ODEs

This section reviews the model selection problem for ODEs, while drawing on general literature on model selection, such as Ripley (2004) and Wit et al. (2012). Simultaneously, it provides some motivation for our take on model selection for ODEs. To avoid possible misunderstanding, we recall that a good explanatory model is not always the best predictive model (Shmueli (2010), Perretti et al. (2013a), Hartig and Dormann (2013), and Perretti et al. (2013b)). The primary goals of ODE modelling are to explain the phenomenon under study and make predictions from the model, see Murray (2002). For prediction, alternative approaches to mechanistic modelling, e.g. machine learning methods or discrete time models may also be considered (Michailidis and d'Alche Buc (2013), Lindsey (2001), Ellner et al. (1998), Kendall et al. (1999), and Thakur (1991)). Our work targets the explanatory behavior of ODE models, and as such assumes that their simplicity, robustness, and basis in natural laws will result in reasonable predictive behavior. Rigorous study of the latter, which may involve cross-validation techniques, is a topic on its own. Our approach is based on the following premises, which will be illustrated and expanded upon below.

* ODE models are a stylised representation of reality. They are not the 'truth'.
* Scientific knowledge may dictate more than one ODE model for a given phenomenon. None of these models is 'true'.
* Only a handful of ODE models needs to be compared at any time.
* Penalty-based statistical model selection approaches for ODEs require resolution of a number of conceptual challenges.
* Information-theoretic criteria do not provide rigorous assessment of statistical significance of the model selection results.

The first statement above hardly warrants a discussion: the fact that ODE models are derived based on simplifying assumptions and as such cannot be thought to be 'true' is widely acknowledged in the modelling literature; cf. Murray (2002) and Lindsey (2001).

**Example 1.1**.: The simplest growth model for a bacterial population under controlled conditions (constant temperature, sufficient supply of the nutrient, and others) in the lab environment is given by the Malthus law:
\[x^{\prime}(t)=\psi x(t),\qquad x(0)=\xi.\]
Here \(x(t)=\xi\exp(\psi t)\) is the bacterial density at time \(t\), \(\xi\) is the initial value, and \(\psi\) is the growth constant; see, e.g., Edelstein-Keshet (2005), Section 4.1. Derivation of this equation is based on several simplifying assumptions, e.g. that \(x\) is sufficiently large so that addition of several individuals to the population is of negligible consequence, growth of individuals is not correlated, and death can be neglected (Edelstein-Keshet (2005), p. 117). None of these assumptions can be thought as absolutely true in all circumstances. Yet the model has been proven to be adequate in practice under specific conditions; see, e.g., Edelstein-Keshet (2005), p. 120. Typically, ODE model selection arises when there are competing explanations for a phenomenon under study.
**Example 1.2**.: Dattner et al. (2017) study the following model for interaction of two bacterial populations, the predatory bacterium _Bdellovibrio bacteriovorus_ and its prey, _Burkholderia stabilis st.2._, in a lab experiment:
\[\begin{split} x^{\prime}_{1}(t)&=\psi_{1}\psi_{2}x_{3}(t)-\psi_{3}x_{1}(t),\\ x^{\prime}_{2}(t)&=-\psi_{4}(x_{2}(t)-\psi_{5})x_{1}(t),\\ x^{\prime}_{3}(t)&=\psi_{4}(x_{2}(t)-\psi_{5})x_{1}(t)-\psi_{2}x_{3}(t).\end{split}\]
Here \(x_{1}\), \(x_{2}\) and \(x_{3}\) are concentrations of the predator, prey and the predator-prey complex (bdelloplast), respectively. The complex is not observed and is introduced into the ODE system to account for the fact that the time it takes the predator to handle its prey is of the same order as the time it takes for the consumed prey items to be converted into new predators (Dattner et al. (2017), page 2). An alternative here could have been some form of the classical Lotka-Volterra model, which only involves the \(x_{1}\) and \(x_{2}\) components. The role of the refuge parameter \(\psi_{5}\) is interesting, in that it models the fact that due to spatial inhomogeneities, at any given time not all prey are available for predation to the predator. It goes without saying that reduction of the spatial effects to a single parameter \(\psi_{5}\) is a serious simplifying assumption. See, e.g., Edelstein-Keshet (2005), pp. 87-89 for an additional discussion.

The best model needs to be chosen based on scientific knowledge, available data, and model complexity, see Ripley (2004). Statistics provides a suitable and principled formalism for informed decision-making for model selection. In turn, statistical model selection encompasses model discrimination and model testing; see, e.g. Fisher and McAleer (1979/80), McAleer and Bera (1983), Dastoor (1981), and Dastoor (1990) for additional details. According to Dastoor (1990), model testing arises when it is desired to test the 'truth' of a model of interest. On the other hand, model discrimination applies when two or more models are ranked and compared according to some criterion. The latter can be 'deterministic'\({}^{1}\) and based on quantities like information criteria, Mallows's \(C_{p}\) and adjusted \(R^{2}\), or 'probabilistic' and involve a significance test.

Footnote 1: Strictly speaking, the term 'deterministic' is a misnomer. We interpret it loosely as a model discrimination approach that does not involve a significance test.

At present, there is limited literature focussing specifically on model selection for ODEs. Important references include, among others, Miao et al. (2009), Zhang et al. (2015), and Wu et al. (2019). For Bayesian methodologies, see, e.g., Girolami (2008), Girolami and Calderhead (2011), Oates et al. (2016) and Hug et al. (2016). A systematic study of scientific papers from 1990-2023 revealed that out of 91 articles discussing 'model selection' and 'differential equations', approximately 60% mentioned information criteria, 22% mentioned Bayesian-like criteria, and 25% mentioned cross-validation criteria. Besides these, at least 15 novel methods were introduced to deal with the selection of the 'best' dynamical system describing a given natural phenomenon (see Supplementary Material). In general, for model discrimination one can use information criteria, e.g. the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), see Akaike (1973), Akaike (1974) and Schwarz (1978). However, expert opinions are divided as to which criterion, if any, is the most suitable in practice.
Next, AIC and BIC rely on the trade-off between the goodness-of-fit term and the penalty term penalising model complexity. The latter is difficult to interpret for competing ODE models, which may be nonnested, have a differing number of state variables, differ in the severity of their nonlinearity, and involve different external forcing functions or covariates.

**Example 1.3**.: Revisit Example 1.2. The classical Lotka-Volterra model and the model proposed in Dattner et al. (2017) have differing numbers of state variables (two and three, respectively). How the two models should be nested within a single all-encompassing ODE model is unclear.

**Example 1.4**.: In Section 2.2 ahead, several Lotka-Volterra type systems are considered to model interaction of two populations: predators and their prey. Each one attempts to improve upon the basic Lotka-Volterra system by addressing one of its unrealistic consequences based on the knowledge of the phenomenon under study. Relative weights of these consequences cannot be objectively assessed by simply counting the corresponding parameters.

In the linear regression setting, prior to model selection the covariates or features are standardized. This allows a fair assessment of their relative contributions to the response. The same tool is not available in the ODE setting. Cf. the discussion in Vissing Mikkelsen and Hansen (2017), pp. 6 and 29. We note in passing that the experts disagree as far as applicability of information criteria to model selection for nonnested statistical models is concerned; cf. Ripley (2004) and Burnham and Anderson (2002). Importantly, model discrimination approaches such as AIC and BIC are criticised for not providing probabilistic quantification of the significance of their results; see Vuong (1989) and Amemiya (1980). In fact, it is often the case that AIC and BIC assign similar scores to several models, and while one model is still chosen as the best, whether it is significantly better than its competitors in probabilistic terms remains elusive. Various rules of thumb used in practice to assess statistical significance of relative differences between competing models based on information criteria (see, e.g., Burnham and Anderson (2002)) lack formal theoretical justification and moreover are not universally applicable.
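For reference (these standard definitions are ours, not quoted from the above sources), for a model with \(k\) free parameters, maximised likelihood \(\hat{L}\) and sample size \(n\), the two criteria read
\[\mathrm{AIC}=2k-2\log\hat{L}\,,\qquad\mathrm{BIC}=k\log n-2\log\hat{L}\,,\]
so that both trade the goodness-of-fit term \(-2\log\hat{L}\) against a complexity penalty, and only the form of the penalty differs.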
### Overview of our approach

A general alternative to 'deterministic' model discrimination is testing. It is the approach we employ for the model selection problem for ODEs. The aim is to leverage peculiarities of ODE modelling in a targeted way and not to treat the question as a routine model selection problem. Importantly, we adopt the model misspecification framework, where the researcher's principal goal is finding the best possible explanation of the data-generating mechanism given a parametric model; see, e.g., Cox (1961), Cox (1962), White (1981) and Vuong (1989). Our testing approach builds upon Vuong (1989), which in turn has its roots in Hotelling (1940). Importantly, it does not require artificial nesting of competing models. As already mentioned in Section 1.3, in the context of ODE model selection, nesting is not always possible, and at any rate it leads to consideration of additional and irrelevant models, thereby resulting in entirely avoidable computational difficulties and increased computational cost; see Zhang et al. (2015), Ramsay et al. (2007) and Voit and Almeida (2004) for a discussion of computational issues associated with parameter estimation in ODE models.

With testing, the case of more than two models can be handled through pairwise comparison, taking suitable care of multiple testing issues; cf. Schennach and Wilhelm (2017). The nature of ODE modelling is such that one typically needs to compare only a handful of competing models, so that eventual multiple testing corrections do not result in overly conservative tests. Since with Vuong's test nesting is not a requirement, one can consider, compare and rank genuinely different causal explanations of an empirical phenomenon. Note that here we do not discuss the network reconstruction problems using ODEs, as studied, e.g., in Henderson and Michailidis (2014) and Chen et al. (2017). Though the distinction is not entirely clear-cut, the latter are in their character closer to exploratory analysis, whereas the problems we have in mind lean towards confirmatory analysis; cf. Snedecor and Cochran (1989), p. 64. Vuong's approach starts with the null hypothesis that the two models are equally close to the true data-generating mechanism, which is not required to be contained among either of the competing models. The alternatives to the null hypothesis are that either the first or the second model is closer to the true data-generating distribution. Testing is based on the likelihood ratio statistic, and the test is directional. Critical values for Vuong's test are obtained from the asymptotic distribution of the likelihood ratio statistic, which varies depending on whether the models are nested or non-nested. Determining nestedness for two ODE models can be difficult, and is in fact oftentimes impossible. The latter difficulty in Vuong's approach is bypassed via the use of a pre-test. However, such a two-step approach to testing can lead to a considerable size distortion (Shi (2015), Schennach and Wilhelm (2017)). Schennach and Wilhelm's modification of Vuong's approach, referred to as the S-W test, addresses the issue of test size distortion and works regardless of whether the models are nested or not; see Schennach and Wilhelm (2017). Testing approaches in Vuong (1989) and Schennach and Wilhelm (2017) were not developed with ODE models in mind, but aimed at classical statistical and econometric models. In the ensuing sections we demonstrate how their work can be adapted to the ODE framework. The extension is nontrivial, but we show that all the necessary details can be worked out.

## 2 Real Data Examples

Prior to delving into technical details, in order to give a taste of how our testing approach works, in this section we present results of its application to several real data examples.

### Agricultural Trial Data

Welham et al. (2014), Example 17.1A, provide data from a field trial at Rothamsted Research that studied the relationship between crop yields and applications of soil fertilizer. The data are yields of spring barley from 20 fields in 1986, and the available soil phosphorus content, measured as Olsen P\({}^{2}\). We plot the data in Figure 1. Welham et al. (2014), Example 17.1C, propose several models for the functional relationship between the (mean) yield and phosphorus, of which we focus on the standard exponential model with an asymptote (equation (17.7) in Welham et al. (2014)) and the inverse linear model (equation (17.8) there).
Upon rewriting these 3-parameter nonlinear functions in the ODE form and reparametrizing, we obtain two ODE models (henceforth model A and model B):
\[x^{\prime}(s)=\psi_{1}(\psi_{2}-x(s)),\qquad x(0)=\xi,\qquad\text{(model A)}\]
and
\[x^{\prime}(s)=-\psi_{1}(-\psi_{2}+x(s))^{2},\qquad x(0)=\xi.\qquad\text{(model B)}\]
The state variable \(x\) is yield and is a function of the phosphorus content \(s\). Furthermore, \(\psi_{1},\psi_{2}\) are the model parameters, and \(\xi\) is the initial value. To fit the models, we used the nonlinear least squares method. This is essentially the maximum likelihood estimation approach under the assumption of Gaussian measurement errors. As seen in Figure 1, visually both models fit the data well. Nevertheless, the two models can be rigorously compared via the testing framework that we develop in Sections 3 and 4 below. In our software implementation, little beyond reading the data in and specifying the ODE models via straightforward syntax is required from the user. The value of the test statistic is \(-0.359\). Under the commonly used significance level \(\alpha=0.05\), the critical value to reject the null hypothesis that both models are equally distant (in the Kullback-Leibler divergence sense) from the true data-generating mechanism in favor of model B is \(-1.96\), and in favor of model A is \(1.96\). As our test statistic is between these two values, we retain the null hypothesis. Welham et al. (2014) compare the models based on \(R^{2}\), AIC and BIC, and conclude that "there is little statistical difference in the fit of the two non-linear models, so either might reasonably be selected" (page 475). Our results formally corroborate their conclusions.
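To make the fitting step concrete, here is a minimal sketch (our own illustration, not the paper's implementation) of nonlinear least squares for model A; since the Rothamsted yields are not reproduced here, the data below are synthetic stand-ins generated for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# synthetic stand-ins for (Olsen P, yield) pairs
s_obs = np.linspace(2.0, 30.0, 20)
y_obs = 7.5 - 4.0 * np.exp(-0.12 * s_obs)
y_obs += np.random.default_rng(1).normal(0.0, 0.2, s_obs.size)

def model_A(s, x, psi1, psi2):
    # x'(s) = psi1 * (psi2 - x(s))
    return psi1 * (psi2 - x)

def residuals(theta):
    psi1, psi2, xi = theta
    sol = solve_ivp(model_A, (0.0, s_obs[-1]), [xi], args=(psi1, psi2),
                    t_eval=s_obs, rtol=1e-8, atol=1e-10)
    return sol.y[0] - y_obs

fit = least_squares(residuals, x0=[0.1, 8.0, 3.0])
print("psi1, psi2, xi =", fit.x)
```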
### Paramecium Aurelia vs Saccharomyces Exiguus Data

This example deals with results of a lab experiment on interaction of two species: _Paramecium aurelia_ (predator) and _Saccharomyces exiguus_ (prey). For details of the experiment, we refer to Gause (1935). We obtained data by means of WebPlotDigitizer (see Rohatgi (2022)) by reading the point coordinates of Figure 3 in Gause (1935). The graphical quality of the original figure is modest, and hence we rounded off the measurement readings to one digit after zero. As seen from Figure 2, both populations exhibit characteristic oscillations with an approximately constant period and lend themselves to modelling via a Lotka-Volterra type ODE system. We used the following ODE model formulations:

* Model 1 (basic Lotka-Volterra):
\[\begin{split} x^{\prime}_{1}(t)&=\psi_{2}\psi_{3}x_{1}(t)x_{2}(t)-\psi_{4}x_{1}(t),\\ x^{\prime}_{2}(t)&=\psi_{1}x_{2}(t)-\psi_{2}x_{1}(t)x_{2}(t),\\ \mathbf{x}(0)&=[\xi_{1},\xi_{2}];\end{split}\]
* Model 2 (logistic prey):
\[\begin{split} x^{\prime}_{1}(t)&=\psi_{2}\psi_{3}x_{1}(t)x_{2}(t)-\psi_{4}x_{1}(t),\\ x^{\prime}_{2}(t)&=\psi_{1}x_{2}(t)\left(1-\frac{x_{2}(t)}{\psi_{5}}\right)-\psi_{2}x_{1}(t)x_{2}(t),\\ \mathbf{x}(0)&=[\xi_{1},\xi_{2}];\end{split}\]
* Model 3 (type 2 functional response):
\[\begin{split} x^{\prime}_{1}(t)&=\frac{\psi_{2}\psi_{3}x_{1}(t)x_{2}(t)}{1+\psi_{2}\psi_{5}x_{1}(t)}-\psi_{4}x_{1}(t),\\ x^{\prime}_{2}(t)&=\psi_{1}x_{2}(t)-\frac{\psi_{2}x_{1}(t)x_{2}(t)}{1+\psi_{2}\psi_{5}x_{1}(t)},\\ \mathbf{x}(0)&=[\xi_{1},\xi_{2}];\end{split}\]
* Model 4 (density-dependent predator death):
\[\begin{split} x^{\prime}_{1}(t)&=\psi_{2}\psi_{3}x_{1}(t)x_{2}(t)-\psi_{4}x_{1}(t)-\psi_{5}x^{2}_{1}(t),\\ x^{\prime}_{2}(t)&=\psi_{1}x_{2}(t)-\psi_{2}x_{1}(t)x_{2}(t),\\ \mathbf{x}(0)&=[\xi_{1},\xi_{2}].\end{split}\]

Here Model 1 is the basic Lotka-Volterra model, whereas each of the successive ones attempts to address one of its shortcomings. For instance, under the basic Lotka-Volterra model, the prey population can increase exponentially. This is reasonable for the prey at low density. However, in real populations as the density becomes higher, the per-head rate of increase declines. Model 2 attempts to account for this by introducing a logistic limitation on the prey growth. In a similar fashion, Model 4 lets the predator vitality rate be a function of the predator density. This appears reasonable, in that predators lacking territories might start infighting or suffer higher death rate. For additional information on each model, we refer to Murdoch et al. (2003); cf. Edelstein-Keshet (2005), pp. 214-217. In the present context, results of model selection can be interpreted in a twofold fashion. Firstly, we may ask whether there is enough information in the observed time series to discern the refinements of the basic Lotka-Volterra model. Secondly, we may ask which of these extensions is statistically the most significant. Figure 2(a) shows the dynamics of _Paramecium aurelia_ (\(Y_{1}\)) and the four fitted models (the first number in the subscript of \(\hat{x}_{11},\ldots,\hat{x}_{41}\) refers to the model and the second number to the state). In the same manner, Figure 2(b) gives the dynamics of _Saccharomyces exiguus_ and the respective fits. Visually the model fits appear to be similar enough. Results of our testing procedure are reported in Table 1. The table implies that no model is shown to be closer to the true data-generating process than others. No multiple testing correction has been applied when presenting the results, as none of the pairwise comparisons turned out to be significant at the conventional \(\alpha=0.05\) level. Upon a closer look at the four ODE systems, we find that Model 2 equals Model 1 when \(\psi_{5}=\infty\), and Models 3 and 4 equal Model 1 when \(\psi_{5}=0\). It is instructive to examine the estimated parameters shown in Table 2. We see that parameter estimates for Models 1, 3 and 4 are nearly identical, and in Models 3 and 4 \(\psi_{5}\approx 0\). The same goes for Model 2, except that \(\psi_{5}\) is now large. The latter is not surprising, given that \(\psi_{5}\) plays a different role in Models 3 and 4, on one hand, and Model 2 on the other.

Figure 2: Dynamics of the predator and prey populations, with four fitted ODE models from Section 2.2. Note that as some fits are nearly identical, not all four lines are visible. The first index of the variable \(x\) indicates the model. The second index indicates the state.
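For concreteness, here is a direct Python transcription of the four right-hand sides above (with \(x_{1}\) the predator and \(x_{2}\) the prey); this is our own illustration, not the authors' code.

```python
# psi is (psi1, ..., psi5); psi5 is unused in Model 1
def model1(t, x, psi):                       # basic Lotka-Volterra
    p1, p2, p3, p4, _ = psi
    return [p2*p3*x[0]*x[1] - p4*x[0],
            p1*x[1] - p2*x[0]*x[1]]

def model2(t, x, psi):                       # logistic prey
    p1, p2, p3, p4, p5 = psi
    return [p2*p3*x[0]*x[1] - p4*x[0],
            p1*x[1]*(1 - x[1]/p5) - p2*x[0]*x[1]]

def model3(t, x, psi):                       # type 2 functional response
    p1, p2, p3, p4, p5 = psi
    fr = p2*x[0]*x[1] / (1 + p2*p5*x[0])     # saturating predation rate
    return [p3*fr - p4*x[0],
            p1*x[1] - fr]

def model4(t, x, psi):                       # density-dependent predator death
    p1, p2, p3, p4, p5 = psi
    return [p2*p3*x[0]*x[1] - p4*x[0] - p5*x[0]**2,
            p1*x[1] - p2*x[0]*x[1]]
```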
### Paramecium Bursaria vs Schizosaccharomyces Pombe Data

In the previous example, among four suggested predator-prey ODE systems, no model has shown statistically significant superiority over the others. Here we show a case in which the test works in favor of a model. We fitted the same models as in Section 2.2 to a different dataset, namely the dataset **gause_1934_book_f39.1** from the **R** package **gauseR**, see Muhlbauer et al. (2020). This deals with interaction of the predator _Paramecium bursaria_ and the prey _Schizosaccharomyces pombe_ (see Gause (1935) for details of the experiment). Figure 3 gives the dynamics of the observed data and plots the fitted models. On purely visual grounds, it is hard to conclude which model is better. The test results are reported in Table 3. At the 95% significance level, Model 2, which is 'Logistic prey', is closer to the truth in comparison to Models 1 and 4 (see Table 3 for details). In the pair of Models 2 and 3, the shift 0.908 towards Model 2 is not enough to obtain a statistically significant result. Nevertheless, one may still prefer Model 2 over Model 3, because the latter does not achieve a statistically significant improvement over Models 1 and 4. In this case, an increased number of samples could have given additional information. At any rate, the dataset is somewhat unusual, in that the magnitude of periodic fluctuations in Figure 3 diminishes as the time progresses (compare the first and the second cycles). Gause (1935) notes this, but refrains from giving an explanation. This example illustrates well the challenges associated with ODE model selection.

\begin{table} \begin{tabular}{c c c c} \hline \hline **Model A** & **Model B** & **S-W statistic** & **In favor** \\ \hline 1 & 2 & 0.987 & \(-\) \\ 1 & 3 & 0.791 & \(-\) \\ 1 & 4 & 0.680 & \(-\) \\ 2 & 3 & -0.706 & \(-\) \\ 2 & 4 & -0.706 & \(-\) \\ 3 & 4 & 0.727 & \(-\) \\ \hline \hline \end{tabular} \end{table} Table 1: Test outcomes for Models 1–4 in Section 2.2.

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Parameter** & **Model 1** & **Model 2** & **Model 3** & **Model 4** \\ \hline \(\xi_{1}\) & 101.2 & 102.8 & 101.2 & 101.2 \\ \(\xi_{2}\) & 116.0 & 121.2 & 116.0 & 116.0 \\ \(\psi_{1}\) & 0.660 & 0.685 & 0.660 & 0.660 \\ \(\psi_{2}\) & 0.012 & 0.012 & 0.012 & 0.012 \\ \(\psi_{3}\) & 1.450 & 1.531 & 1.490 & 1.490 \\ \(\psi_{4}\) & 1.122 & 1.126 & 1.122 & 1.122 \\ \(\psi_{5}\) & \(-\) & 1566.4 & 5.93E-15 & 3.10E-18 \\ \hline \hline \end{tabular} \end{table} Table 2: Parameter estimates for Models 1–4 in Section 2.2.

\begin{table} \begin{tabular}{c c c c} \hline \hline **Model A** & **Model B** & **S-W statistic** & **In favor** \\ \hline 1 & 2 & -4.433 & 2 \\ 1 & 3 & -1.827 & - \\ 1 & 4 & -1.374 & - \\ 2 & 3 & 0.908 & - \\ 2 & 4 & 5.802 & 2 \\ 3 & 4 & 1.680 & - \\ \hline \hline \end{tabular} \end{table} Table 3: Test outcomes for Models 1–4 in Section 2.3.

Figure 3: Dynamics of the predator and prey populations, with four fitted ODE models from Section 2.2.

## 3 Problem Formulation

We start by introducing a number of concepts and notions to formally define our statistical framework.
### Statistical Modeling

As we make a distinction between the true data generating process and ODE-based approximations to it, we first need to discuss the former. We assume the data collected on the phenomenon of interest are pairs \((t_{i},Y_{i})\), \(i=1,\ldots,n\). The \(t_{i}\)'s can typically be thought of as times at which measurements \(Y_{i}\)'s are collected\({}^{3}\). As a specific example, \(Y_{i}\)'s may represent some quantitative estimates of the number of individuals infected by a certain disease at times \(t_{i}\)'s. In this paper, for the sake of clarity of exposition, we suppose that \((t_{i},Y_{i})\) are independent and identically distributed random vectors and follow a common (unknown) probability distribution \(P_{0}\) with density \(p_{0}\). This setup can be generalised to a more abstract one, but we do not attempt this here. The assumption is flexible enough to cover numerous situations of practical interest; see Remark 3.1 below. Our approach is thus a probabilistic approach to the description of empirical phenomena and is a point of view taken by researchers in statistics and related fields, such as econometrics and machine learning (see, e.g., Wasserman (2004), pp. ix and 19, and White (1994), pp. 5-6). The distribution \(P_{0}\) (equivalently, its density \(p_{0}\)) gives a complete probabilistic description of the data-generating mechanism. A researcher's goal is inference on this unknown distribution.

Footnote 3: As illustrated in Section 2.1, 'time' is not the only possible interpretation of \(t_{i}\)'s.

Often the distribution of times \(t_{i}\)'s is of little relevance or can be assumed to be known (e.g., uniform), and in that case, the primary object of interest is the conditional distribution of \(Y_{i}\) given \(t_{i}\), say \(P_{0}(\cdot|\cdot).\) We assume it has a density \(p_{0}(\cdot|\cdot).\) The first step in inference on \(P_{0}(\cdot|\cdot),\) or equivalently \(p_{0}(\cdot|\cdot)\), is the formulation of a certain approximation to it, termed a statistical model. Next, in the second step, the model is optimised based on available observational or experimental data, and thereby the best approximation to \(P_{0}(\cdot|\cdot)\) or \(p_{0}(\cdot|\cdot)\) is obtained (White (1994), Chapter 2). The departure point for our statistical models is the following generic observational structure,
\[Y_{j}(t_{i})=x_{0j}(t_{i})+\epsilon_{ij},\quad i=1,\ldots,n,\quad j=1,\ldots,d_{0}, \tag{3.1}\]
where \(Y_{j}(t_{i})\) is a scalar random variable; \(x_{0j}(t)\), \(t\in[0,T]\) is an unknown deterministic function; \(t_{1},\ldots,t_{n}\) are design points; and the unobserved random variables \(\epsilon_{ij}\) are independent measurement errors having zero expectation and finite variance. Such modelling assumptions are standard in the literature dealing with statistical inference for ODE systems; see, e.g., Ramsay et al. (2007), Hooker (2009), Gugushvili and Klaassen (2012), and Dattner and Klaassen (2015), to name just a few references. We do not assume that (3.1) is the structure matching the 'true' data generating distribution \(P_{0}\). In particular, model misspecification can occur in the additive error assumption, distributional assumptions on the error terms, and the form of the mean function. Note that our derivations in further sections are under the assumption of Gaussian noise with zero mean and variance \(\sigma_{j}^{2}\), yet the overall approach is general and can be adapted to alternative likelihood functions as well.
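For concreteness (this display is ours, simply spelling out the Gaussian assumption in (3.1)), the conditional log-density that enters all likelihoods below is
\[\log p(Y_{i}|t_{i};\theta)=-\sum_{j=1}^{d_{0}}\left[\frac{1}{2}\log\left(2\pi\sigma_{j}^{2}\right)+\frac{\left(Y_{j}(t_{i})-x_{j}(t_{i};\xi,\psi)\right)^{2}}{2\sigma_{j}^{2}}\right]\,,\]
with \(x_{j}(t;\xi,\psi)\) the \(j\)-th component of the candidate ODE solution and \(\theta\) collecting \((\sigma^{2},\xi,\psi)\), as formalised in (3.5) below.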
Now for each \(i\), we aggregate \(Y_{j}(t_{i})\)'s into vectors \(Y_{i}\)'s and hence assume that the pairs \((t_{i},Y_{i})\) are independent and identically distributed.

_Remark 3.1_.: Suppose that the observation times \(t_{i}\) are independent and identically distributed, and furthermore that they are independent of the measurement errors \(\epsilon_{ij}\)'s. Then the pairs \((t_{i},Y_{i})\) will be independent and identically distributed as well. In practice observation times \(t_{i}\) are typically deterministic, for instance they could be daily. But if their empirical distribution stabilizes to a limiting distribution, \(t_{i}\)'s can be reasonably assumed to be independent and identically distributed. In particular, this is the case when \(t_{i}\)'s form a regular and dense grid on the time interval \([0,T]\). See, e.g. Tsybakov (2009) for the use of similar ideas to establish asymptotic properties of nonparametric regression estimators in the fixed design setting, and compare to Gasser and Muller (1979).

Let \(\top\) stand for the transpose of a vector. In the sequel we use the notation
\[x_{0}(t)=(x_{01}(t),\ldots,x_{0d_{0}}(t))^{\top}\]
and denote the vector of derivatives of \(x_{0}(t)\) w.r.t. \(t\) by
\[f_{0}(t)=(x^{\prime}_{01}(t),\ldots,x^{\prime}_{0d_{0}}(t))^{\top},\quad t\in[0,T]. \tag{3.2}\]
The scientific question studied in this work is essentially a question of finding a parametric description for \(f_{0}(\cdot)\) defined in Equation (3.2), one that expresses \(f_{0}(t)\) in terms of \(x_{0}(t)\) and describes the process mechanistically, in the sense that the current rate-of-change depends on the current state. Suppose that we have \(N\) models for describing a dynamic process. We denote such models by \(F_{k}\), \(k=1,\ldots,N\) and assume with some innocuous abuse of notation that
\[\begin{cases}x^{\prime}_{k}(t)=F_{k}(x_{k}(t),\psi_{k}),\quad t\in[0,T],\\ x_{k}(0)=\xi_{k}.\end{cases} \tag{3.3}\]
Here \(x_{k}(0)=\xi_{k}\) is a column \(d_{k}\)-vector of initial conditions. The parameter vector is given by
\[\psi_{k}=(\psi_{k1},\ldots,\psi_{kp_{k}})^{\top}, \tag{3.4}\]
where \(\psi_{k}\) is an element of a \(p_{k}\)-dimensional parameter space \(\Psi_{k}\). Let
\[x_{k}(t):=x(t;\xi_{k},\psi_{k})=(x_{k1}(t;\xi_{k},\psi_{k}),\ldots,x_{kd_{k}}(t;\xi_{k},\psi_{k}))^{\top}\]
stand for the solution of the initial value problem defined by the system of ODEs and initial values given in Equation (3.3) and a parameter \(\psi_{k}\) defined in (3.4). Furthermore, define
\[\theta=(\sigma^{2},\xi,\psi) \tag{3.5}\]
where \(\sigma^{2}\) is the \(d\)-vector of variances of the noise of each state, and we assume \(\theta\in\Theta\) for some subset \(\Theta\) of the Euclidean space. Now the question is: given competing models \(F_{k}\), which one should we prefer? The answer obviously is the one that gets us closest to \(P_{0}\). Hence we need to discuss the distance between the true data generating distribution \(P_{0}\) and the one implied by (3.1) for each competing model under consideration. We denote the latter distribution of the pair \((t_{i},Y_{i})\) by \(P(\cdot,\cdot;\theta)\), and assume that the marginal distribution of \(t_{i}\)'s is fixed, e.g. uniform on \([0,T]\). As argued e.g. in White (1994), pp.
9-10, and in Akaike (1973), Sections 2-3, specifically in the model discrimination context, a sensible and natural discrepancy measure between probability distributions \(P\) and \(Q\) is the Kullback-Leibler divergence (see Kullback and Leibler (1951))
\[\mathrm{KL}(P,Q)=\begin{cases}\int\frac{\mathrm{d}P}{\mathrm{d}Q}\log\left(\frac{\mathrm{d}P}{\mathrm{d}Q}\right)\mathrm{d}Q,&P\ll Q,\\ \infty,&\text{otherwise}.\end{cases}\]
When \(P\) and \(Q\) possess densities \(p\) and \(q\), as is our case, the Kullback-Leibler divergence can be equivalently written as
\[\mathrm{KL}(P,Q)=\mathrm{KL}(p,q)=\int p(y)\log\frac{p(y)}{q(y)}\mathrm{d}y.\]
The Kullback-Leibler divergence has the natural property of being nonnegative, and it equals zero if and only if \(P=Q\) (equivalently, when \(p=q\), almost everywhere). Many other useful properties of the Kullback-Leibler divergence are collected in Cover and Thomas (2006). The Kullback-Leibler divergence admits a fundamental information-theoretic interpretation, in that \(\mathrm{KL}(P,Q)\) can be interpreted as the 'surprise' experienced on average when one believes that \(Q\) describes a given probabilistic phenomenon and is then told that it is in fact described by \(P\) (White (1994), p. 9). The Kullback-Leibler divergence admits a straightforward generalisation to conditional distributions. For each competing model, under mild regularity conditions there will be a parameter value \(\theta^{*}\) that minimises the Kullback-Leibler divergence between the conditional probability densities \(p_{0}(\cdot|\cdot)\) and \(p(\cdot|\cdot;\theta),\) i.e.
\[\mathbb{E}_{0}\left[\log\frac{p_{0}(Y_{i}|t_{i})}{p(Y_{i}|t_{i};\theta)}\right]. \tag{3.6}\]
Here the expectation is under the true joint distribution \(P_{0}\) of the pair \((t_{i},Y_{i}).\) Thus the density \(p(\cdot|\cdot;\theta^{*})\) constitutes the best approximation to \(p_{0}(\cdot|\cdot)\) among the densities \(p(\cdot|\cdot;\theta).\) The parameter value \(\theta^{*}\) is referred to as the pseudo-true value of \(\theta,\) while \(p(\cdot|\cdot;\theta^{*})\) is called the pseudo-true model (see Sawa (1978), p. 1276 and Vuong (1989), p. 308). Minimisation of Equation (3.6) over \(\theta\) is equivalent to maximisation of
\[\mathbb{E}_{0}\left[\log p(Y_{i}|t_{i};\theta)\right], \tag{3.7}\]
over \(\theta\) and yields the pseudo-true value \(\theta^{*};\) note that this latter implicitly depends on \(P_{0}.\) Unfortunately, since (3.7) depends on the unknown distribution \(P_{0},\) the Kullback-Leibler divergence is not computable in practice and thus neither can be minimized over \(\theta.\) However, it can be estimated by the sample average
\[\frac{1}{n}\sum_{i=1}^{n}\log p(Y_{i}|t_{i};\theta),\]
which can be maximized instead (see White (1994), Section 2.3). Equivalently, one can maximize \(\sum_{i=1}^{n}\log p(Y_{i}|t_{i};\theta).\) This amounts to nothing else but employing the well-known maximum likelihood method devised by R. A. Fisher as a general parameter estimation technique in statistical problems; see Fisher (1922) and Fisher (1925). The latter is a popular and, under rather general assumptions, statistically optimal approach to parameter estimation; see, e.g., van der Vaart (1998) and White (1994) for modern accounts of the theory. The (conditional) maximum likelihood estimator (MLE) of the parameter \(\theta\) is defined as
\[\hat{\theta}_{n}=\mathrm{argmax}_{\theta\in\Theta}\sum_{i=1}^{n}\log p(Y_{i}|t_{i};\theta).
\tag{3.8}\]
Under rather general conditions, which we do not list here, referring instead to e.g. White (1994), Chapters 3-7, the MLE \(\hat{\theta}_{n}\) exists, converges to the pseudo-true value \(\theta^{*},\) and is in fact asymptotically normal and optimal in a specific sense. Thus the maximum likelihood estimator \(\hat{\theta}_{n}\) can be used as a proxy for the pseudo-true value \(\theta^{*},\) and consequently the conditional density \(p(\cdot|\cdot;\hat{\theta}_{n})\) can serve as a proxy for \(p(\cdot|\cdot;\theta^{*}),\) and eventually for the true conditional density \(p_{0}(\cdot|\cdot).\)

### Testing Framework

Suppose that, next to \(p(\cdot|\cdot;\theta)\), we have another family of conditional densities \(q(\cdot|\cdot;\gamma),\) parametrised by \(\gamma\in\Gamma\) (defined according to Equation 3.5), as a possible approximation to \(p_{0}(\cdot|\cdot).\) We denote by \(\gamma^{*}\) the pseudo-true value corresponding to the family \(q(\cdot|\cdot;\gamma)\) and by \(\hat{\gamma}_{n}\) the maximum likelihood estimator. Vuong (1989) considered the following formal framework for model discrimination in this context:
\[H_{0}:\mathbb{E}_{0}\left[\log p(Y_{i}|t_{i};\theta^{*})\right]=\mathbb{E}_{0}\left[\log q(Y_{i}|t_{i};\gamma^{*})\right],\]
meaning the models \(p(\cdot|\cdot;\theta)\) and \(q(\cdot|\cdot;\gamma)\) are equivalent, versus an alternative hypothesis
\[H_{p}:\mathbb{E}_{0}\left[\log p(Y_{i}|t_{i};\theta^{*})\right]>\mathbb{E}_{0}\left[\log q(Y_{i}|t_{i};\gamma^{*})\right],\]
meaning the model \(p(\cdot|\cdot;\theta)\) is better than the model \(q(\cdot|\cdot;\gamma)\), and another alternative
\[H_{q}:\mathbb{E}_{0}\left[\log p(Y_{i}|t_{i};\theta^{*})\right]<\mathbb{E}_{0}\left[\log q(Y_{i}|t_{i};\gamma^{*})\right],\]
meaning the model \(q(\cdot|\cdot;\gamma)\) is better than the model \(p(\cdot|\cdot;\theta)\). The choice between \(p(\cdot|\cdot;\theta)\) and \(q(\cdot|\cdot;\gamma)\) is called model testing. It results in selection of one model as the best (in the sense that it is closer to \(p_{0}(\cdot|\cdot)\)), or retaining the null hypothesis that both models are equally accurate (or inaccurate). The reader may find Figure 4 and other similar ones helpful when trying to visualise schematically various quantities and concepts mentioned throughout this section. The above-displayed formulae suggest a natural way to proceed with testing: replace the information quantities with their sample analogues and base a decision on the log-likelihood ratio
\[\text{LR}_{n}=\sum_{i=1}^{n}\log\frac{p(Y_{i}|t_{i};\hat{\theta}_{n})}{q(Y_{i}|t_{i};\hat{\gamma}_{n})}. \tag{3.9}\]
This latter (or equivalently, the likelihood ratio) has been used extensively and with great success in various testing problems (see, e.g., van der Vaart (1998), Chapter 16 for a modern account). Vuong derived the asymptotic distribution of (3.9) under \(H_{0}\), as well as its limits under \(H_{p}\) and \(H_{q}\), thereby obtaining critical values, and also concluding that the test is directional: the rejection of the null \(H_{0}\) occurs in the direction of either \(H_{p}\) or \(H_{q}\). Unfortunately, the limiting distribution of the log-likelihood ratio in Vuong's framework depends in a complicated way on the relationship between the models \(p(\cdot|\cdot;\theta)\) and \(q(\cdot|\cdot;\gamma)\), which is not easy to ascertain for ODE models. The key difficulty is nestedness of the models.
Unfortunately, the limiting distribution of the log-likelihood ratio in Vuong's framework depends in a complicated way on the relationship between the models \(p(\cdot|\cdot;\theta)\) and \(q(\cdot|\cdot;\gamma)\), which is not easy to ascertain for ODE models. The key difficulty is the nestedness of the models. The two models

\[P_{\theta}=\{p(\cdot|\cdot;\theta):\theta\in\Theta\},\quad Q_{\gamma}=\{q(\cdot|\cdot;\gamma):\gamma\in\Gamma\}\]

are said to be strictly non-nested if \(P_{\theta}\cap Q_{\gamma}=\varnothing\). They are said to be nested if either \(P_{\theta}\subset Q_{\gamma}\) or \(P_{\theta}\supset Q_{\gamma}\). Finally, they are said to be overlapping if \(P_{\theta}\cap Q_{\gamma}\neq\varnothing\), while \(P_{\theta}\not\subset Q_{\gamma}\) and \(P_{\theta}\not\supset Q_{\gamma}\). See Vuong (1989) and cf. Pesaran (1987) and McAleer and Pesaran (1986) for additional details and examples.

Figure 4: Schematic depiction of two non-overlapping statistical models.

For ODE-based statistical systems, verification of the relationship between models reduces to verification of the relationship between the ODE systems. Since these typically do not admit closed-form solutions, the exact relationship between the two ODE models will remain unclear. As such, in many if not most cases, Vuong's test will necessarily involve a pre-test step, and the resulting two-stage test may exhibit a significant size distortion. The latter is a serious defect, and its extent has been demonstrated in Shi (2015), Section 3. Methods to address this issue have been proposed by Schennach and Wilhelm (2017) and Shi (2015). The former is arguably simpler to present and implement, and moreover works without modification irrespective of the relationship between the models \(p(\cdot|\cdot;\theta)\) and \(q(\cdot|\cdot;\gamma)\). Hence our decision to concentrate on the Schennach-Wilhelm test in this research.

## 4 Methodological Approach

### Schennach-Wilhelm test

Let \(h_{n}>0\) denote a data-dependent regularisation parameter; we will discuss its choice below. Assume for simplicity that the number of observations \(n\) is even, and introduce the weights

\[w_{k}(h_{n})=\begin{cases}1,&\text{$k$ odd},\\ 1+h_{n},&\text{$k$ even},\end{cases}\quad k=1,\ldots,n+1.\]

Define the reweighted log-likelihood ratio

\[\widetilde{\text{LR}}_{n}=\frac{1}{n}\sum_{i=1}^{n}\left(w_{i}(h_{n})\log p(Y_{i}|t_{i};\hat{\theta}_{n})-w_{i+1}(h_{n})\log q(Y_{i}|t_{i};\hat{\gamma}_{n})\right),\]

and let (see the Supplementary Material for the derivation)

\[\widetilde{\sigma}_{n}^{2}=(1+h_{n})\hat{\sigma}^{2}+\frac{h_{n}^{2}}{2}(\hat{\sigma}_{p}^{2}+\hat{\sigma}_{q}^{2})\]

be an estimator of the asymptotic variance of the reweighted log-likelihood ratio.
Here

\[\hat{\sigma}^{2}=\hat{\sigma}_{p}^{2}-2\hat{\sigma}_{pq}+\hat{\sigma}_{q}^{2}\]

with

\[\hat{\sigma}_{p}^{2} =\frac{1}{n}\sum_{i=1}^{n}\left(\log p(Y_{i}|t_{i};\hat{\theta}_{n})-\overline{\log p}\right)^{2},\]
\[\hat{\sigma}_{pq} =\frac{1}{n}\sum_{i=1}^{n}\left(\log p(Y_{i}|t_{i};\hat{\theta}_{n})-\overline{\log p}\right)\left(\log q(Y_{i}|t_{i};\hat{\gamma}_{n})-\overline{\log q}\right),\]
\[\hat{\sigma}_{q}^{2} =\frac{1}{n}\sum_{i=1}^{n}\left(\log q(Y_{i}|t_{i};\hat{\gamma}_{n})-\overline{\log q}\right)^{2},\]

and we have used the notation

\[\overline{\log p}=\frac{1}{n}\sum_{i=1}^{n}\log p(Y_{i}|t_{i};\hat{\theta}_{n}),\quad\overline{\log q}=\frac{1}{n}\sum_{i=1}^{n}\log q(Y_{i}|t_{i};\hat{\gamma}_{n}).\]

The Schennach-Wilhelm test statistic is defined as

\[\widetilde{T}_{n}=\frac{\sqrt{n}\,\widetilde{\text{LR}}_{n}}{\widetilde{\sigma}_{n}}.\]

It is shown in Schennach and Wilhelm (2017), Theorem 1, that under regularity assumptions on the statistical models under consideration, which we do not list here, referring instead to the original paper, the statistic \(\widetilde{T}_{n}\) is asymptotically standard normal under \(H_{0}\), diverges to \(+\infty\) under \(H_{p}\), and diverges to \(-\infty\) under \(H_{q}\). This asymptotic result readily yields a test for model selection. Namely, fix a level \(0<\alpha<1\) and let \(z_{1-\alpha/2}\) be the \(1-\alpha/2\) quantile of the standard normal distribution. Then retain \(H_{0}\) if \(|\widetilde{T}_{n}|\leq z_{1-\alpha/2}\), and otherwise reject it. Rejection occurs in favour of \(H_{p}\) if \(\widetilde{T}_{n}>z_{1-\alpha/2}\), and in favour of \(H_{q}\) if \(\widetilde{T}_{n}<-z_{1-\alpha/2}\).

Schennach and Wilhelm (2017), Section 5, establish favourable theoretical properties of their test. They also conduct a simulation study and apply their method to a real data set. However, the practical examples they consider are limited to classical statistical models such as normal location and linear regression. Application of the Schennach-Wilhelm test to ODE models is a novel contribution.
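For concreteness, here is a hedged sketch assembling the statistic from the quantities above; it assumes the pointwise fitted log-likelihood arrays and a value of \(h_{n}\) are given, and it is not the implementation released with this paper.

```python
import numpy as np
from scipy.stats import norm

def sw_test(logp, logq, h_n, alpha=0.05):
    """Schennach-Wilhelm statistic and decision from pointwise log-likelihoods."""
    logp, logq = np.asarray(logp, float), np.asarray(logq, float)
    n = logp.size
    k = np.arange(1, n + 2)                     # indices k = 1, ..., n+1
    w = np.where(k % 2 == 1, 1.0, 1.0 + h_n)    # w_k = 1 (k odd), 1 + h_n (k even)
    lr_tilde = np.mean(w[:n] * logp - w[1:] * logq)

    sp2 = np.var(logp)                          # sigma_p^2 (1/n convention)
    sq2 = np.var(logq)                          # sigma_q^2
    spq = np.mean((logp - logp.mean()) * (logq - logq.mean()))
    s2 = sp2 - 2 * spq + sq2                    # sigma^2
    var_tilde = (1 + h_n) * s2 + 0.5 * h_n**2 * (sp2 + sq2)

    T = np.sqrt(n) * lr_tilde / np.sqrt(var_tilde)
    z = norm.ppf(1 - alpha / 2)
    verdict = "H0" if abs(T) <= z else ("Hp" if T > z else "Hq")
    return T, verdict
```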
### Regularisation Parameter

In this subsection, following Schennach and Wilhelm (2017), we present a methodology for an optimal choice of the regularisation parameter \(h_{n}\). This involves a step that is specific to ODE-based models. Obviously, the parameter \(h_{n}>0\) can also be chosen subjectively, according to a researcher's preferences, but Schennach and Wilhelm (2017) present the following objective methodology: they remark that their test has a size distortion only if the two models are nested or overlapping. This distortion can be controlled by choosing the regularisation parameter \(h_{n}\) large. On the other hand, taking \(h_{n}\) large makes the test lose power in the case of strictly non-nested models. Schennach and Wilhelm suggest choosing an \(h_{n}\) that balances the worst cases under these two scenarios. They achieve this by expanding the test size and power in these two cases in terms of \(h_{n}\), and then balancing the terms in the two expansions by choosing an appropriate \(h_{n}\) (and estimating some constants from the data).

Instead of reporting the details of these computations here, which can be found in Schennach and Wilhelm (2017), Section 6, we directly provide the formula for the optimal \(\hat{h}_{n}\) they derived,

\[\hat{h}_{n}=\left(\frac{\hat{C}_{SD}}{\hat{C}_{PL}}\right)^{1/3}n^{-1/6}(\log\log n)^{1/3}.\]

Here the two constants \(\hat{C}_{SD},\hat{C}_{PL}\) are

\[\hat{C}_{SD} =\phi\left(z_{\alpha/2}-\frac{\hat{\delta}}{\hat{\sigma}}\right)\frac{\hat{\delta}(\hat{\sigma}^{2}-2(\hat{\sigma}_{p}^{2}+\hat{\sigma}_{q}^{2}))}{4\hat{\sigma}^{3}},\]
\[\hat{C}_{PL} =2\phi(z_{\alpha/2})\frac{\max\{|\operatorname{tr}(\hat{H}_{p}^{-1}\hat{V}_{p})|,|\operatorname{tr}(\hat{H}_{q}^{-1}\hat{V}_{q})|\}}{\sqrt{(\hat{\sigma}_{p}^{2}+\hat{\sigma}_{q}^{2})/2}},\]

and we have used the notation

\[\hat{\delta} =\frac{\hat{\sigma}}{2}(z_{\alpha/2}-\sqrt{4+z_{\alpha/2}^{2}}),\]
\[\hat{H}_{p} =\frac{1}{n}\sum_{i=1}^{n}\nabla_{\theta}^{2}\log p(Y_{i}|t_{i};\hat{\theta}_{n}),\]
\[\hat{V}_{p} =\frac{1}{n}\sum_{i=1}^{n}\nabla_{\theta}\log p(Y_{i}|t_{i};\hat{\theta}_{n})\left(\nabla_{\theta}\log p(Y_{i}|t_{i};\hat{\theta}_{n})\right)^{\prime},\]

and similarly for \(\hat{H}_{q}\) and \(\hat{V}_{q}\). The operator \(\nabla_{\theta}\) gives the gradient with respect to \(\theta\), while \(\nabla_{\theta}^{2}\) gives the Hessian. The matrices \(\hat{H}_{p},\hat{V}_{p}\) (as well as those corresponding to the conditional density \(q(\cdot|\cdot;\gamma)\)) are needed for the usual sandwich variance estimator in potentially misspecified models (see, e.g., White (1994), Section 8.3), and hence the requirement to compute them does not go beyond what is done anyway when using the maximum likelihood method (see Schennach and Wilhelm (2017), Section 6). On the other hand, evaluation of the derivatives \(\nabla_{\theta}\log p(Y_{i}|t_{i};\hat{\theta}_{n})\) and \(\nabla_{\theta}^{2}\log p(Y_{i}|t_{i};\hat{\theta}_{n})\) has some peculiarities in the ODE context, but is nevertheless conceptually straightforward, in that it reduces to numerical integration of the sensitivity and variational equations associated with the ODE system.

Here we concentrate on the case where the ODE systems under consideration are one-dimensional, for simplicity of exposition. A generalisation of the arguments to the multidimensional case is straightforward, but notationally much more involved (see the Supplementary Material for details). What we are interested in are the derivatives \(\nabla_{\theta}\log p(y|t;\theta)\) and \(\nabla_{\theta}^{2}\log p(y|t;\theta)\). We aggregate the pair \(\xi,\psi\) into a vector \(\eta=(\xi,\psi)\), where \(\psi\) is the vector of ODE system parameters and \(\xi\) is the vector of initial state values. At this stage we need to make the distributional assumptions on the likelihood concrete, and we assume it is Gaussian. Nevertheless, the same roadmap can be adapted to alternative likelihood functions as well. Under Gaussianity,

\[\log p(y|t;\theta)=-\frac{1}{2}\log(2\pi\sigma^{2})-\frac{(y-x(t;\eta))^{2}}{2\sigma^{2}}.\]

Calculating the first- and second-order partial derivatives with respect to \(\sigma^{2}\) is straightforward. Here we concentrate on derivatives with respect to \(\eta\). This boils down to evaluating the derivatives

\[\nabla_{\eta}(y-x(t;\eta))^{2},\quad\nabla_{\eta}^{2}(y-x(t;\eta))^{2}.\]

Now

\[\nabla_{\eta}(y-x(t;\eta))^{2}=-2(y-x(t;\eta))\times\nabla_{\eta}x(t;\eta).\]

We should thus find a means of computing \(\nabla_{\eta}x(\cdot;\eta)\). This is, however, standard via numerical integration.
Differentiate both sides of Equation (3.3) with respect to \(\eta\), interchange the order of the \(\eta\)- and \(t\)-derivatives on the left-hand side, define \(s(t)=\nabla_{\eta}x(t;\eta)\), and get

\[\frac{\mathrm{d}s}{\mathrm{d}t} =\frac{\partial}{\partial x}F(x(t;\eta),\eta)s(t)+\frac{\partial}{\partial\eta}F(x(t;\eta),\eta), \tag{4.1}\]
\[s(0) =(1,0)^{\prime}.\]

Here \(1\) and \(0\) in the initial condition are understood as vectors of \(1\)'s and \(0\)'s of appropriate dimensions, and the notation \(\frac{\partial}{\partial x}F\) and \(\frac{\partial}{\partial\eta}F\) stands for the derivatives of the function \(F\) with respect to its first and second arguments, \(x\) and \(\eta\). As \(x\) is a known function from Equation (3.3), (4.1) is a linear system with known time-dependent coefficients, and hence is relatively easy to integrate. It is referred to as the system of sensitivity equations. Some care needs to be taken when integrating Equation (4.1), as it is typically a stiff system, but numerical integration techniques for such systems have by now been well studied; see, e.g., Hairer and Wanner (2010).

In a similar manner, setting \(z(t)=\nabla_{\eta}^{2}x(t;\eta)\), we can get the matrix differential equation, called the system of variational equations,

\[\frac{\mathrm{d}z}{\mathrm{d}t} =\frac{\partial^{2}}{\partial\eta^{2}}F(x(t;\eta),\eta)+\frac{\partial^{2}}{\partial\eta\partial x}F(x(t;\eta),\eta)s(t) \tag{4.2}\]
\[+\left\{\frac{\partial^{2}}{\partial\eta\partial x}F(x(t;\eta),\eta)+\frac{\partial^{2}}{\partial x^{2}}F(x(t;\eta),\eta)s(t)\right\}s(t)\]
\[+\frac{\partial}{\partial x}F(x(t;\eta),\eta)z(t),\]
\[z(0) =0,\]

where the initial condition is a zero matrix of appropriate dimensions, and the partial derivatives are derivatives of the function \(F\) with respect to its first and/or second arguments. Again, \(x\) and \(s\) in Equation (4.2) are known, and the system is linear with time-dependent coefficients. Although the system might be stiff, its integration is a well-studied task.

### Algorithm

With all the technical steps worked out, our model selection procedure is summarised as follows (a code sketch of Step 3 is given after the list):

1. Estimate the parameters \(\eta=(\xi,\psi)\) for each model using \(\hat{\eta}_{n}=\operatorname*{argmin}_{\eta}\sum_{j=1}^{d}\sum_{i=1}^{n}(Y_{ji}-x_{j}(t_{i};\eta))^{2}\).
2. Estimate the variance parameters \(\sigma^{2}\) for each model using \(\hat{\sigma}_{j}^{2}=\frac{1}{n}\sum_{i=1}^{n}(Y_{ji}-x_{j}(t_{i};\hat{\eta}))^{2}\).
3. Obtain the numerical derivatives (the \(\hat{V}_{n}\) and \(\hat{H}_{n}\) matrices) according to Equations (4.1) and (4.2), or more generally the equations in the Supplementary Material.
4. Create \(\frac{N^{2}-N}{2}\) model pairs to test, where \(N\) is the total number of competing models.
5. Calculate the regularisation parameter according to Section 4.2, using the matrices \(\hat{V}_{n}\) and \(\hat{H}_{n}\).
6. Calculate the S-W test statistic for each pair of models according to Section 4.1.
7. Compare the outcomes of the S-W test statistic to the critical values from the standard normal distribution.
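As an illustration of Step 3, here is a minimal sketch of integrating the sensitivity system (4.1) for a hypothetical one-dimensional model \(x^{\prime}=\psi x\) with \(\eta=(\xi,\psi)\); the names are placeholders, and the variational system (4.2) would be handled analogously by further augmenting the state.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical scalar ODE x' = F(x, eta) with eta = (xi, psi): F(x, eta) = psi * x.
def rhs_with_sensitivities(t, u, psi):
    x, s_xi, s_psi = u                 # state and sensitivities dx/dxi, dx/dpsi
    dx = psi * x                       # the original ODE (3.3)
    # Sensitivity system (4.1): s' = (dF/dx) s + dF/deta, with dF/dx = psi,
    # dF/dxi = 0 and dF/dpsi = x.
    ds_xi = psi * s_xi
    ds_psi = psi * s_psi + x
    return [dx, ds_xi, ds_psi]

def solve_sensitivities(xi, psi, t_grid):
    """Return x(t; eta) and grad_eta x(t; eta) on t_grid."""
    u0 = [xi, 1.0, 0.0]                # s(0) = (1, 0)': dx0/dxi = 1, dx0/dpsi = 0
    sol = solve_ivp(rhs_with_sensitivities, (t_grid[0], t_grid[-1]), u0,
                    t_eval=t_grid, args=(psi,), method="LSODA")  # LSODA copes with stiffness
    return sol.y[0], sol.y[1:]
```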
## 5 Simulation Study: S-W Test for ODEs

In this section, we present the results of simulations aimed at showing the size and power properties of the S-W test when applied to ODE model selection. For the asymptotic properties of the test we refer the reader to Section 5 in Schennach and Wilhelm (2017).

### Size of the Test

Nominally the test size is \(\alpha\), where \(\alpha\) is chosen by the user. However, due to the various approximations involved, in practice the size may deviate significantly from the nominal level. This is undesirable behaviour, but its extent cannot be fully studied through theoretical means alone. In order to assess whether the test controls the size at the desired level, we designed a simulation study. The details are as follows: let the data generation process (DGP) correspond to the ODE system

\[\begin{cases}x_{0}^{\prime}(t)&=-0.05x_{0}(t)+1,\\ x_{0}(0)&=100,\end{cases}\]

and let observations be sampled as

\[y(t_{i})=x_{0}(t_{i})+\epsilon_{i},\]

where \(\epsilon_{i}\sim N(0,7^{2})\). Define model A (left) and model B (right):

\[\begin{cases}x_{A}^{\prime}(t)=-0.05x_{A}(t)+(1-\delta),\\ x_{A}(0)=\xi_{A},\end{cases}\qquad\begin{cases}x_{B}^{\prime}(t)=-0.05x_{B}(t)+(1+\delta),\\ x_{B}(0)=\xi_{B}.\end{cases}\]

Here \(\delta\) is a known constant that shifts the models away from the DGP. The only unknown parameters are the initial values \(\xi\). As an illustration, Figure 5(a) shows simulated observations, as well as the curves of the truth and of models A and B. Here \(\delta=0.3\), \(\xi_{0}=\xi_{A}=\xi_{B}=100\), \(\psi_{0}=\psi_{A}=\psi_{B}=-0.05\), the number of observations is \(300\), and the range of time points extends up to \(\tau=150\).

Figure 5: Plots describing the model setups and results of the test size simulations.

Both models and the DGP can be written as

\[\begin{cases}x_{0}^{\prime}(t)=\psi_{1}x_{0}(t)+\psi_{2},\\ x_{0}(0)=\xi_{0}.\end{cases}\]

The solution to this nonhomogeneous linear ODE is

\[x_{0}(t)=\xi_{0}e^{\psi_{1}t}+\frac{\psi_{2}}{\psi_{1}}e^{\psi_{1}t}-\frac{\psi_{2}}{\psi_{1}}.\]

The DGP corresponds to \(\psi_{2}=1\), while Models A and B are determined, respectively, by \(\psi_{2}=1-\delta\) and \(\psi_{2}=1+\delta\). With Gaussian errors, and under the assumption that the errors are independent of the observation times, the Kullback-Leibler divergence between the DGP and either Model A or B is

\[\frac{1}{2\sigma^{2}}\frac{\delta^{2}}{\psi_{1}^{2}}\int_{0}^{T}\left(1-e^{\psi_{1}t}\right)^{2}f_{T}(t)\mathrm{d}t,\]

where \(f_{T}(\cdot)\) is the density of the \(t_{i}\)'s, with support on \([0,T]\). The factor

\[\frac{1}{2\sigma^{2}}\frac{1}{\psi_{1}^{2}}\int_{0}^{T}\left(1-e^{\psi_{1}t}\right)^{2}f_{T}(t)\mathrm{d}t\]

is a constant, and hence the divergence as a function of \(\delta\) scales as \(\delta^{2}\), meaning that whenever \(\delta\) is the same in Models A and B, the KL divergences between them and the DGP are equal.

To calculate the actual level at which the test controls the size (the probability of rejecting \(H_{0}\) when it is true), we ran \(1000\) simulations for \(\delta=0.03,0.06,\ldots,0.3\) and \(\alpha=0.05\). The results of this experiment can be observed in Figure 5(b): as we see, the test controls the size around the desired level \(\alpha=0.05\) for every \(\delta\). Furthermore, Figure 5(c) shows that the S-W test asymptotically controls the size for any level \(\alpha\) (the simulation setup is the same as above, with \(\delta=0.1\) and \(n\in\{10,20,50,100,150,200,250,300,400,500\}\)). Additionally, we provide results for the size of the test run on an equally spaced time grid, to show that deterministic time in our approach behaves similarly to uniform sampling (compare Figure 5(d) to Figure 5(c) with \(\alpha=0.05\)).
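A hedged sketch of the data-generating step of this experiment, using the closed-form solution displayed above; the helper names and the seeding mechanism are our own choices, not taken from the study.

```python
import numpy as np

def x_lin(t, xi, psi1, psi2):
    """Closed-form solution of x' = psi1*x + psi2, x(0) = xi."""
    return xi * np.exp(psi1 * t) + (psi2 / psi1) * (np.exp(psi1 * t) - 1.0)

def simulate_dgp(n=300, tau=150.0, xi=100.0, psi1=-0.05, sigma=7.0, rng=None):
    """Draw (t_i, y_i) from the DGP of Section 5.1."""
    rng = np.random.default_rng(rng)
    t = rng.uniform(0.0, tau, size=n)          # random observation times on [0, tau]
    y = x_lin(t, xi, psi1, psi2=1.0) + rng.normal(0.0, sigma, size=n)
    return t, y

# Candidate model curves: model A uses psi2 = 1 - delta, model B uses psi2 = 1 + delta.
```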
### Power of the Test

We conducted two sets of simulations to assess the power of the S-W test. We considered Model 1 and Model 2 from Section 2.2, reparameterised as follows:

\[x_{1}^{\prime}(t) =\psi_{2}\psi_{3}x_{1}(t)x_{2}(t)-\psi_{4}x_{1}(t),\]
\[x_{2}^{\prime}(t) =\psi_{1}x_{2}(t)\left(1-\psi_{5}x_{2}(t)\right)-\psi_{2}x_{1}(t)x_{2}(t),\]
\[\mathbf{x}(0) =[\xi_{1},\xi_{2}].\]

Hence the old parameter \(\psi_{5}\) became \(1/\psi_{5}\) in the new parametrisation. The motivation here is that now with \(\psi_{5}=0\) the models are equivalent, while with \(\psi_{5}>0\) they are not. In the old parametrisation, equivalence would require \(\psi_{5}\to\infty\), which would numerically destabilise estimation.

For the first set of simulations, we set Model 2 as the DGP with \(\theta=(\sigma_{1}^{2}=0.1^{2},\xi=[1,2],\psi=[1,1,1,1,\psi_{5}])\), where \(\psi_{5}\) varies between \(0.0025\) and \(0.25\). The number of time points is \(20\), \(\tau=40\), and the number of simulations for each \(\psi_{5}\) is 100. The simulated power of the test is presented in Figure 6(a). For the second set of simulations, the DGP is Model 2 with \(\theta=(\sigma_{1}^{2}=0.2^{2},\xi=[1,2],\psi=[1,1,1,1,0.1])\), and we let the sample size \(n\) vary from 20 to 110 in steps of 10. Figure 6(b) shows the simulated power of the test.

Figure 6: Simulated power of the S-W test. For details, see the text in Section 5.2.

From the two figures, it can be concluded that the test works as expected, because:

* with a small number of observations, discrimination between models requires a larger KL distance between them and the truth (Figure 6(a));
* when both models are close to the DGP, a larger number of observations is required to reveal the model that is closer to the truth (Figure 6(b)).
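For completeness, a sketch of how the Model 2 trajectories used in these experiments could be produced numerically; the parameter ordering follows the display above and the function names are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model2_rhs(t, x, psi):
    """Right-hand side of the reparameterised Model 2 (psi5 = 0 recovers Model 1)."""
    x1, x2 = x
    psi1, psi2, psi3, psi4, psi5 = psi
    dx1 = psi2 * psi3 * x1 * x2 - psi4 * x1
    dx2 = psi1 * x2 * (1.0 - psi5 * x2) - psi2 * x1 * x2
    return [dx1, dx2]

def simulate_model2(psi, xi=(1.0, 2.0), tau=40.0, n=20, sigma=0.1, rng=None):
    """Noisy observations of both components on an equally spaced grid."""
    rng = np.random.default_rng(rng)
    t = np.linspace(0.0, tau, n)
    sol = solve_ivp(model2_rhs, (0.0, tau), xi, t_eval=t, args=(psi,))
    return t, sol.y + rng.normal(0.0, sigma, size=sol.y.shape)
```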
## 6 Discussion and Outlook

### Importance of Assumptions

In situations involving uncertainty, optimal solutions are often elusive. The S-W test, like in fact any other statistical test, is derived under specific theoretical assumptions, and it is imperative to bear them in mind when applying the test in practice. Among these, the most significant is the assumption of independent and identically distributed observations, alongside various regularity conditions on the statistical model. In line with the advice in Snedecor and Cochran (1989), p. 273, a researcher using the S-W test for ODE model selection must either be confident that these assumptions hold or take supplementary steps to verify them.

### From Theory to Practice

Another important consideration is the step from a theoretical or algorithmic description of an estimation or testing method to its implementation and performance in practice. As is well documented in the literature, statistical inference in ODE models presents unique numerical challenges. Hence, also when using the S-W test, the user must ensure that optimisation routines and numerical integrators perform as required and do not invalidate the inferential conclusions through a failure. Here the best practices described, e.g., in Ramsay and Hooker (2017), but also in the general nonlinear regression literature, and, above all, practical experience with fitting ODE models, are most helpful.

### Desirable Extensions

Although Schennach and Wilhelm (2017) mention a potential extension of their method to time series data, they do not furnish a comprehensive theoretical validation for this claim. The details can be worked out, however. This would take care of issues such as correlated errors, which arise in estimation problems for ODEs. What is less clear is the possibility of extending the S-W test to the mixed-model setting (see, e.g., Pinheiro and Bates (2000)), which would be of great practical relevance.

### Outlook

Modelling with ordinary differential equations (ODEs) serves as an indispensable tool across a myriad of scientific and engineering domains. From mathematical biology to compartmental models in epidemiology, ODEs offer a comprehensive understanding of intricate dynamics. Grounded in the foundational laws of nature, these mechanistic models aim at shedding light on the causal underpinnings of systems, proving invaluable especially when direct experimentation is constrained by feasibility or cost considerations.

However, the ODE landscape is intricate. Often, a single phenomenon can be expressed through multiple models, originating from approximations of internal mechanisms or from variations in the level of abstraction. When faced with this plurality, the quandary arises: how do we select the best-fitting model amidst the obscurities of statistical noise? Rather than delving into deterministic model discrimination, our study champions a testing-based approach. Anchored in the model misspecification framework, our focus narrows to identifying the most apt explanation of the data-generating mechanism within a given parametric model. Drawing on the foundational insights of Vuong and Hotelling, our methodology sidesteps the oft-tedious nesting requirements, streamlining the comparison and ranking of distinct causal explanations.

However, the allure of a two-step testing mechanism is not without its pitfalls. Schennach and Wilhelm's critiques highlight potential distortions. Their modification, the S-W test, stands as a solution, offering robustness irrespective of the nestedness of the statistical models. While Vuong's and Schennach and Wilhelm's paradigms were primarily designed for traditional statistical frameworks, our research has tailored them to the ODE context. This adaptation, though intricate, has been addressed with meticulous attention to detail.

To fortify the claimed theoretical stance, we undertook rigorous simulation studies. These were instrumental in illustrating both the size and the power of our proposed test, closely aligning with theoretical predictions. Moreover, through a series of real-data examples, we showcased the practical applicability of our algorithm, underscoring its versatility and adaptability across various scenarios.

Beyond the realm of theory and experimentation, we believe in the democratisation of knowledge. Recognising the importance of accessibility and hands-on application, we have provided a user-friendly Python implementation of our model selection algorithm (S-W test on GitHub). This not only fosters a deeper understanding but also empowers researchers and practitioners to implement our findings directly, paving the way for further innovations in the field. To conclude, as the mathematical modelling landscape continues to evolve, our findings and contributions seek to continually refine and advance ODE model selection methodologies for the broader scientific community.
2309.12577
Distributed Optimal Control and Application to Consensus of Multi-Agent Systems
This paper develops a novel approach to the consensus problem of multi-agent systems by minimizing a weighted state error with neighbor agents via linear quadratic (LQ) optimal control theory. Existing consensus control algorithms only utilize the current state of each agent, and the design of the distributed controller depends on the nonzero eigenvalues of the communication topology. The presented optimal consensus controller is obtained by solving Riccati equations and designing appropriate observers to account for agents' historical state information. It is shown that the corresponding cost function under the proposed controllers is asymptotically optimal. Simulation examples demonstrate the effectiveness of the proposed scheme and a much faster convergence speed than conventional consensus methods. Moreover, the new method avoids computing the nonzero eigenvalues of the communication topology, as required in traditional consensus methods.
Liping Zhang, Juanjuan Xu, Huanshui Zhang, Lihua Xie
2023-09-22T02:13:01Z
http://arxiv.org/abs/2309.12577v2
# Distributed Optimal Control and Application to Consensus of Multi-Agent Systems

###### Abstract

This paper develops a novel approach to the consensus problem of multi-agent systems by minimizing a weighted state error with neighbor agents via linear quadratic (LQ) optimal control theory. Existing consensus control algorithms only utilize the current state of each agent, and the design of the distributed controller depends on the nonzero eigenvalues of the communication topology. The presented optimal consensus controller is obtained by solving Riccati equations and designing appropriate observers to account for agents' historical state information. It is shown that the corresponding cost function under the proposed controllers is asymptotically optimal. Simulation examples demonstrate the effectiveness of the proposed scheme and a much faster convergence speed than conventional consensus methods. Moreover, the new method avoids computing the nonzero eigenvalues of the communication topology, as required in traditional consensus methods.

Consensus, Distributed control, Observer, Heterogeneous multi-agent system

## I Introduction

The cooperative control problem for multi-agent systems has attracted considerable attention from different scientific communities in recent years. Multiple agents can coordinate with each other via a communication topology to accomplish tasks that may be difficult for a single agent, with potential applications including unmanned aerial vehicles, satellite formation, distributed robotics and wireless sensor networks [1, 2, 3]. In the area of cooperative control of multi-agent systems, consensus is a fundamental and crucial problem, which refers to designing an appropriate distributed control protocol to steer all agents to an agreement on a certain variable [4]. Thus, the consensus problem has been widely studied by numerous researchers from various perspectives.

In homogeneous multi-agent systems, all agents have identical system dynamics; work in this setting mainly covers leaderless consensus and leader-follower consensus. [5] proposed a general framework for the consensus problem for networks of first-order integrator agents with switching topologies based on the relative states. [6] derived a sufficient condition, which was more relaxed than that in [5], for achieving consensus of multi-agent systems. Extending the first-order consensus protocols in [6], the author in [7] further studied distributed leader-following consensus algorithms for second-order integrators. [8] considered the consensusability of discrete-time multi-agent systems, and an upper bound of the eigenratio (the ratio of the second smallest to the largest eigenvalue of the graph Laplacian matrix) was derived to characterize the convergence rate. [4] proposed a consensus region approach to designing distributed adaptive consensus protocols with undirected and directed graphs for general continuous-time linear dynamics. [9] recently derived an optimal consensus region over directed communication graphs with a diagonalizable Laplacian matrix. Besides, variants of these algorithms are also currently applied to tackle communication uncertainties, such as fading communication channels [10], packet loss [11] and communication time-delays [12]. It should be pointed out that the aforementioned consensus control protocols merely use each agent's and its neighbors' current state information, and ignore their historical state information.
Additionally, the solvability condition for the consensus gain matrix \(K\) depends on the nonzero eigenvalues of the Laplacian matrix, or even requires the communication topology to be a complete graph [13]. In particular, when the number of agents is large, the eigenvalues of the corresponding Laplacian matrix are difficult to determine; even when they are computable, their calculation still imposes a significant computational burden. [14] presented a distributed consensus algorithm based on optimal control theory, although the state weight matrix in the given performance index takes a special form. On the other hand, many practical systems are heterogeneous, i.e., the agents' dynamics differ. So far, distributed feedforward control [15, 16, 17] and the internal model principle [18] have been commonly used to solve the cooperative output regulation problem. These tools have also been generalized to deal with robust output regulation, switching networks and cooperation-competition networks [19, 20, 21, 22, 23]. In fact, the essence of both classes of algorithms can be attributed to two aspects: first, the reference generator [24] or the distributed observer estimating the reference system's state is a critical technology for designing distributed controllers; second, the solvability conditions of the output regulator equations or the transmission zero conditions of the system are also necessary for solving the output consensus problem.

Motivated by the above analyses, in this paper, we study the consensus problem of discrete-time linear heterogeneous multi-agent systems with a novel consensus control protocol based on LQ optimal control theory. Compared with the existing results, the main contributions of this work are: 1) We develop a novel consensus algorithm by minimizing the weighted state errors of different neighbor agents. An optimal consensus controller, with an observer incorporating each agent's historical state information, is designed by solving Riccati equations. The corresponding global cost function under the proposed controllers is shown to be asymptotically optimal. 2) The proposed consensus controller can achieve a much faster consensus speed than the traditional consensus method, and avoids computing the nonzero eigenvalues of the Laplacian matrix associated with the communication topology.

The following notation will be used throughout this paper: \(\mathbb{R}^{n\times m}\) represents the set of \(n\times m\)-dimensional real matrices. \(I\) is the identity matrix of a given dimension. \(\text{diag}\{a_{1},a_{2},\cdots,a_{N}\}\) denotes the diagonal matrix with diagonal elements \(a_{1},\cdots,a_{N}\). \(\rho(A)\) is the spectral radius of matrix \(A\). \(\otimes\) denotes the Kronecker product. Let the interaction among \(N\) agents be described by a directed graph \(\mathcal{G}=\{\mathcal{V},\mathcal{E},\mathcal{A}\}\), where \(\mathcal{V}=\{1,2,\cdots,N\}\) is the set of vertices (nodes), \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of edges, and \(\mathcal{A}=[a_{ij}]\in\mathbb{R}^{N\times N}\) is the weighted adjacency matrix of \(\mathcal{G}\); \(a_{ij}\neq 0\) if and only if the edge \((v_{j},v_{i})\in\mathcal{E}\), and we assume that the graph has no self-loops, i.e., \(a_{ii}=0\). The neighbor set of \(v_{i}\) is denoted by \(\mathcal{N}_{i}=\{j|(v_{j},v_{i})\in\mathcal{E}\}\).
The Laplacian matrix \(\mathcal{L}=[l_{ij}]_{N\times N}\) associated with the adjacency matrix \(\mathcal{A}\) is defined as \(l_{ii}=\sum\limits_{j\in N_{i}}a_{ij}\), \(l_{ij}=-a_{ij}\) for \(i\neq j\). A directed path from \(v_{i}\) to \(v_{j}\) is represented by a sequence of edges \((v_{i},v_{i1}),(v_{i1},v_{i2}),\cdots,(v_{im},v_{j})\). A directed graph is strongly connected if there exists a directed path between any pair of distinct nodes.

## II Preliminary and Problem Formulation

### _Problem Formulation_

We consider a heterogeneous discrete-time multi-agent system consisting of \(N\) agents over a directed graph \(\mathcal{G}\), with the dynamics of each agent given by

\[x_{i}(k+1)=Ax_{i}(k)+B_{i}u_{i}(k),\quad i=1,2,\cdots,N, \tag{1}\]

where \(x_{i}(k)\in\mathbb{R}^{n}\) and \(u_{i}(k)\in\mathbb{R}^{m_{i}}\) are the state and the input of each agent, and \(A\in\mathbb{R}^{n\times n}\) and \(B_{i}\in\mathbb{R}^{n\times m_{i}}\) are the coefficient matrices. The cost function of the multi-agent system (1) is given by

\[J(s,\infty) =\sum_{k=s}^{\infty}\left(\sum_{i=1}^{N}\sum_{j\in N_{i}}(x_{i}(k)-x_{j}(k))^{T}Q(x_{i}(k)-x_{j}(k))+\sum_{i=1}^{N}u_{i}^{T}(k)R_{i}u_{i}(k)\right), \tag{2}\]

where \(Q\geq 0\) and \(R_{i}>0\) are weighting matrices. We aim to design a distributed control protocol \(u_{i}(k)\) based on the available information from neighbors in (1) to minimize the performance (2). Based on optimal control theory, it is clear that if the optimal controller exists, it must hold that

\[\lim_{k\rightarrow\infty}\|x_{i}(k)-x_{j}(k)\|=0,\quad i=1,\cdots,N. \tag{3}\]

In other words, the multi-agent system (1) achieves consensus, and the protocol is termed an optimal-control-based protocol, which is completely different from classical approaches. In fact, the commonly used consensus protocol [5, 6] for multi-agent systems is designed as:

\[u_{i}(k)=F\sum_{j\in N_{i}}a_{ij}(x_{j}(k)-x_{i}(k)), \tag{4}\]

where \(F\) is a feedback gain matrix, which actually depends on \(\lambda_{2}(\mathcal{L})\) and \(\lambda_{N}(\mathcal{L})\) [8]. That is to say, one needs to compute the nonzero eigenvalues of the Laplacian matrix \(\mathcal{L}\) associated with the communication topology to determine the feedback gain \(F\). Different from the commonly used consensus protocol, where the protocol is artificially defined, the protocol in this paper is derived by optimizing a given LQ performance, and the performance index (2) is more general, with a positive semi-definite weight matrix \(Q\geq 0\).

### _Preliminary_

Define the neighbor error variable among agents as \(e_{ij}(k)=x_{i}(k)-x_{j}(k)\). Then, it can be obtained from (1) that

\[e_{ij}(k+1)=Ae_{ij}(k)+B_{i}u_{i}(k)-B_{j}u_{j}(k). \tag{5}\]

Let \(\delta_{i}(k)=\begin{bmatrix}e_{ij_{1}}^{T}(k)&e_{ij_{2}}^{T}(k)&\cdots&e_{ij_{\ell}}^{T}(k)\end{bmatrix}^{T}\) be the error vector between the \(i\)-th agent and its neighbor agents \(\mu\), with \(\mu=j_{1},\cdots,j_{\ell}\). By stacking the error vectors, the global error dynamics for the multi-agent system (1) has the form

\[e(k+1) =\tilde{A}e(k)+\sum_{i=1}^{N}\bar{B}_{i}u_{i}(k), \tag{6}\]
\[\mathcal{Y}_{i}(k) =H_{i}e(k), \tag{7}\]

where \(e(k)=\begin{bmatrix}\delta_{1}^{T}(k)&\delta_{2}^{T}(k)&\cdots&\delta_{N}^{T}(k)\end{bmatrix}^{T}\) is the global error vector and \(\mathcal{Y}_{i}(k)\) is the measurement.
\(\tilde{A}=I_{N}\otimes A\), \(\bar{B}_{i}\) consists of \(0_{n\times m_{i}}\), \(B_{i}\) and \(-B_{i}\), and \(H_{i}\) is composed of \(0\) and \(I_{n}\); their specific forms depend on the interaction among agents. Denote \(\bar{B}=\begin{bmatrix}\bar{B}_{1}&\bar{B}_{2}&\cdots&\bar{B}_{N}\end{bmatrix}\) and \(u(k)=\begin{bmatrix}u_{1}^{T}(k),\cdots,u_{N}^{T}(k)\end{bmatrix}^{T}\). The cost function (2) is rewritten as

\[J(s,\infty) =\sum_{k=s}^{\infty}\left(\sum_{i=1}^{N}\sum_{j\in N_{i}}e_{ij}^{T}(k)Qe_{ij}(k)+\sum_{i=1}^{N}u_{i}^{T}(k)R_{i}u_{i}(k)\right)=\sum_{k=s}^{\infty}[e^{T}(k)\tilde{Q}e(k)+u^{T}(k)Ru(k)] \tag{8}\]

with \(\tilde{Q}=diag\{Q,Q,\cdots,Q\}\geq 0\) and \(R=diag\{R_{1},R_{2},\cdots,R_{N}\}>0\).

**Assumption 1**.: _The directed graph \(\mathcal{G}\) is strongly connected._

In the ideal (complete graph) case, the error information \(e(k)\) is available to all agents, and the solvability of the optimal control problem for system (6) with the cost function (8) is equivalent to that of the following standard LQ optimal control problem [25]. Moreover, under the centralized optimal controller (9), the multi-agent system (1) is able to achieve consensus.

**Lemma 1**.: _[_25_]_ _Suppose that the error information \(e(k)\) is available to all agents. Then the optimal controller with respect to the cost function (8) is given by_

\[u^{*}(k)=K_{e}e(k), \tag{9}\]

_where the feedback gain \(K_{e}\) is given by_

\[K_{e}=-(R+\bar{B}^{T}P_{e}\bar{B})^{-1}\bar{B}^{T}P_{e}\tilde{A} \tag{10}\]

_and \(P_{e}\) is the solution of the following ARE_

\[P_{e}=\tilde{A}^{T}P_{e}\tilde{A}+\tilde{Q}-\tilde{A}^{T}P_{e}\bar{B}(R+\bar{B}^{T}P_{e}\bar{B})^{-1}\bar{B}^{T}P_{e}\tilde{A}. \tag{11}\]

_The corresponding optimal cost function is_

\[J^{*}(s,\infty)=e^{T}(s)P_{e}e(s). \tag{12}\]

_Moreover, if \(P_{e}\) is the unique positive definite solution to (11), then \(\tilde{A}+\bar{B}K_{e}\) is stable._

## III Main Results

### _Consensus of multi-agent systems (1) based on relative error feedback_

In this subsection, we design a novel distributed observer-based controller using only the relative error information from neighbors. We design the following distributed controllers as

\[u_{i}^{*}(k)=K_{ei}\hat{e}_{i}(k),\quad i=1,2,\cdots,N, \tag{13}\]

where \(\hat{e}_{i}(k),i=1,2,\cdots,N\), are distributed observers that estimate the global error \(e(k)\) in system (6), based on the available information of agent \(i\). Thus,

\[\hat{e}_{1}(k+1) =\tilde{A}\hat{e}_{1}(k)+\bar{B}_{1}u_{1}^{*}(k)+\bar{B}_{2}K_{e2}\hat{e}_{1}(k)+\cdots+\bar{B}_{N}K_{eN}\hat{e}_{1}(k)+\Upsilon_{1}(\mathcal{Y}_{1}(k)-H_{1}\hat{e}_{1}(k)), \tag{14a}\]
\[\hat{e}_{i}(k+1) =\tilde{A}\hat{e}_{i}(k)+\bar{B}_{1}K_{e1}\hat{e}_{i}(k)+\cdots+\bar{B}_{i-1}K_{e_{i-1}}\hat{e}_{i}(k)+\bar{B}_{i}u_{i}^{*}(k)+\bar{B}_{i+1}K_{e_{i+1}}\hat{e}_{i}(k)+\cdots+\bar{B}_{N}K_{eN}\hat{e}_{i}(k)+\Upsilon_{i}(\mathcal{Y}_{i}(k)-H_{i}\hat{e}_{i}(k)), \tag{14b}\]
\[\hat{e}_{N}(k+1) =\tilde{A}\hat{e}_{N}(k)+\bar{B}_{1}K_{e1}\hat{e}_{N}(k)+\cdots+\bar{B}_{N-1}K_{e_{N-1}}\hat{e}_{N}(k)+\bar{B}_{N}u_{N}^{*}(k)+\Upsilon_{N}(\mathcal{Y}_{N}(k)-H_{N}\hat{e}_{N}(k)), \tag{14c}\]

with \(K_{ei}=\begin{bmatrix}0&\cdots&I&0&\cdots&0\end{bmatrix}K_{e}\), which is obtained by solving the ARE (11), and the observer gain \(\Upsilon_{i}\) is to be determined later to ensure the stability of the observers.
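Before stating the stability result, we note that the centralized gain \(K_{e}\) of Lemma 1 is readily computed numerically. The following is a hedged sketch (not the authors' code), where the block extraction of \(K_{ei}\) assumes the input ordering \(u=[u_{1}^{T},\cdots,u_{N}^{T}]^{T}\) used above.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def centralized_gain(A_tilde, B_bar, Q_tilde, R):
    """K_e = -(R + B' P B)^{-1} B' P A, with P solving the ARE (11)."""
    P = solve_discrete_are(A_tilde, B_bar, Q_tilde, R)
    K_e = -np.linalg.solve(R + B_bar.T @ P @ B_bar, B_bar.T @ P @ A_tilde)
    return K_e, P

def agent_gain(K_e, m_sizes, i):
    """Extract K_ei: the block of rows of K_e belonging to agent i's input u_i."""
    start = int(np.sum(m_sizes[:i]))
    return K_e[start:start + m_sizes[i], :]
```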
**Theorem 1**.: _Consider the global error system (6) and the distributed control laws (13) and (14). If there exist observer gains \(\Upsilon_{i},i=1,\cdots,N\), such that the matrix_

\[\tilde{A}_{ec}=\begin{bmatrix}\Theta_{1}&-\bar{B}_{2}K_{e2}&\cdots&-\bar{B}_{N}K_{eN}\\ -\bar{B}_{1}K_{e1}&\Theta_{2}&\cdots&-\bar{B}_{N}K_{eN}\\ \vdots&\vdots&\ddots&\vdots\\ -\bar{B}_{1}K_{e1}&\cdots&-\bar{B}_{N-1}K_{e_{N-1}}&\Theta_{N}\end{bmatrix} \tag{15}\]

_is stable, where \(\Theta_{i}=\tilde{A}+\bar{B}K_{e}-\bar{B}_{i}K_{ei}-\Upsilon_{i}H_{i}\), then the observers (14) are stable under the controller (13), i.e.,_

\[\lim_{k\to\infty}\|\hat{e}_{i}(k)-e(k)\|=0. \tag{16}\]

_Moreover, if the Riccati equation (11) has a positive definite solution \(P_{e}\), then under the distributed feedback controllers (13) the multi-agent system (1) achieves consensus._

Proof.: Denote the observer error vectors

\[\tilde{e}_{i}(k)=e(k)-\hat{e}_{i}(k). \tag{17}\]

Then, combining system (6) with the observers (14), one obtains

\[e(k+1) =(\tilde{A}+\bar{B}K_{e})e(k)-\bar{B}_{1}K_{e1}\tilde{e}_{1}(k)-\bar{B}_{2}K_{e2}\tilde{e}_{2}(k)-\cdots-\bar{B}_{N}K_{eN}\tilde{e}_{N}(k) \tag{18}\]

and

\[\tilde{e}_{1}(k+1) =(\tilde{A}+\bar{B}K_{e}-\bar{B}_{1}K_{e1}-\Upsilon_{1}H_{1})\tilde{e}_{1}(k)-\bar{B}_{2}K_{e2}\tilde{e}_{2}(k)-\cdots-\bar{B}_{N}K_{eN}\tilde{e}_{N}(k), \tag{19a}\]
\[\tilde{e}_{i}(k+1) =(\tilde{A}+\bar{B}K_{e}-\bar{B}_{i}K_{ei}-\Upsilon_{i}H_{i})\tilde{e}_{i}(k)-\bar{B}_{1}K_{e1}\tilde{e}_{1}(k)-\cdots-\bar{B}_{i-1}K_{e_{i-1}}\tilde{e}_{i-1}(k)-\bar{B}_{i+1}K_{e_{i+1}}\tilde{e}_{i+1}(k)-\cdots-\bar{B}_{N}K_{eN}\tilde{e}_{N}(k), \tag{19b}\]
\[\tilde{e}_{N}(k+1) =(\tilde{A}+\bar{B}K_{e}-\bar{B}_{N}K_{eN}-\Upsilon_{N}H_{N})\tilde{e}_{N}(k)-\bar{B}_{1}K_{e1}\tilde{e}_{1}(k)-\cdots-\bar{B}_{N-1}K_{e_{N-1}}\tilde{e}_{N-1}(k). \tag{19c}\]

According to (19), we have

\[\tilde{e}(k+1)=\tilde{A}_{ec}\tilde{e}(k), \tag{20}\]

where \(\tilde{e}(k)=\begin{bmatrix}\tilde{e}_{1}^{T}(k),\tilde{e}_{2}^{T}(k),\cdots,\tilde{e}_{N}^{T}(k)\end{bmatrix}^{T}\). Obviously, if there exist matrices \(\Upsilon_{i}\) such that \(\tilde{A}_{ec}\) is stable, then the observer errors \(\tilde{e}(k)\) converge to zero as \(k\to\infty\), i.e., Eq. (16) holds. Furthermore, it follows from (18) and (19) that

\[\begin{bmatrix}e(k+1)\\ \tilde{e}(k+1)\end{bmatrix}=\bar{A}_{ec}\begin{bmatrix}e(k)\\ \tilde{e}(k)\end{bmatrix}, \tag{21}\]

where \(\bar{A}_{ec}=\begin{bmatrix}\tilde{A}+\bar{B}K_{e}&\Omega_{e}\\ 0&\tilde{A}_{ec}\end{bmatrix}\) and \(\Omega_{e}=\begin{bmatrix}-\bar{B}_{1}K_{e1}&\cdots&-\bar{B}_{N}K_{eN}\end{bmatrix}\). Since \(P_{e}\) is the positive definite solution to the Riccati equation (11), \(\tilde{A}+\bar{B}K_{e}\) is stable; hence, by LQ control theory, consensus of the multi-agent system (1) is achieved. The proof is completed.

Observe from (21) that, since \(K_{e}\) has been given in (10), the consensus error dynamics depends on \(\tilde{A}_{ec}\), which is determined by the observer gains \(\Upsilon_{i}\). Thus, to speed up the convergence of consensus, the remaining problem is to choose the matrices \(\Upsilon_{i},i=1,2,\cdots,N\), such that the maximum eigenvalue of \(\tilde{A}_{ec}\) is as small as possible.
To this end, let \(\rho>0\) be such that

\[\tilde{A}_{ec}^{T}\tilde{A}_{ec}\leq\rho I,\]

or

\[\begin{bmatrix}-\rho I&\tilde{A}_{ec}^{T}\\ \tilde{A}_{ec}&-I\end{bmatrix}\leq 0, \tag{22}\]

where \(\tilde{A}_{ec}\) is as in (15). The optimal gain matrices \(\Upsilon_{i}\) are then chosen by solving

\[\min_{\Upsilon_{i}}\rho\qquad\text{s.t.}\quad(22). \tag{23}\]

By the optimization in (23), we can appropriately select \(\Upsilon_{i}\) such that the upper bound of the spectral radius \(\rho(\tilde{A}_{ec})\) is as small as possible; from this perspective, \(\rho(\tilde{A}_{ec})\) is made small. This is in contrast with the conventional consensus algorithms, where the maximum eigenvalue of the matrix \(\tilde{A}_{ec}\) is not minimized and is determined by the eigenvalues of the Laplacian matrix \(\mathcal{L}\). Therefore, it can be expected that the proposed approach achieves faster convergence than the conventional algorithms, as demonstrated in the simulation examples in Section IV.

Second, the cost difference \(\Delta J(s,\infty)\) between the new distributed controller (13) and the centralized optimal controller (9) is provided in Theorem 2, and it tends to zero as \(s\rightarrow\infty\). That is to say, the corresponding cost function under the proposed distributed controllers (13) is asymptotically optimal.
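As an illustration only: since \(\tilde{A}_{ec}\) in (15) is affine in the gains, the optimization (23) is a semidefinite program in \((\rho,\Upsilon_{1},\cdots,\Upsilon_{N})\). Below is a hedged sketch using the cvxpy modelling package, where the assembly function `blocks`, mirroring (15), is assumed to be supplied by the user; this is our own simplification, not the authors' implementation.

```python
import cvxpy as cp
import numpy as np

def design_observer_gains(blocks, dims):
    """Minimize rho s.t. [[-rho*I, A_ec'], [A_ec, -I]] << 0, cf. (22)-(23).

    `blocks(Upsilons)` must assemble A_ec as a cvxpy affine expression from
    the list of gain variables, mirroring (15); `dims[i]` is the shape of
    Upsilon_i (this interface is a hypothetical simplification).
    """
    Upsilons = [cp.Variable(d) for d in dims]
    rho = cp.Variable(nonneg=True)
    A_ec = blocks(Upsilons)                      # affine in the Upsilon_i
    n = A_ec.shape[0]
    M = cp.bmat([[-rho * np.eye(n), A_ec.T],
                 [A_ec, -np.eye(n)]])
    prob = cp.Problem(cp.Minimize(rho), [M << 0])  # LMI constraint (22)
    prob.solve(solver=cp.SCS)
    return [U.value for U in Upsilons], rho.value
```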
### _Special case: consensus of multi-agent systems (1) via state feedback controller_

By stacking the state vectors, the multi-agent system (1) can be rewritten as

\[X(k+1)=\tilde{A}X(k)+\tilde{B}u(k), \tag{31}\]

where \(X(k)=\left[x_{1}^{T}(k),\cdots,x_{N}^{T}(k)\right]^{T}\) is the global state variable, \(\tilde{A}=I_{N}\otimes A\) and \(\tilde{B}=diag\{B_{1},B_{2},\cdots,B_{N}\}\). The cost function (2) is rewritten as

\[J(s,\infty)=\sum_{k=s}^{\infty}[X^{T}(k)\mathcal{Q}X(k)+u^{T}(k)Ru(k)], \tag{32}\]

where \(\mathcal{Q}=[\mathcal{Q}]_{ij}\geq 0\) with \(\mathcal{Q}_{ii}=(N-1)Q\), \(\mathcal{Q}_{ij}=-Q\) for \(i\neq j\), and \(R=diag\{R_{1},R_{2},\cdots,R_{N}\}>0\).

**Lemma 2**.: _Assume the state information \(X(k)\) is available to all agents subject to (1). Then the optimal controller with respect to the cost function (2) is given by:_

\[u^{*}(k)=KX(k), \tag{33}\]

_where the feedback gain matrix \(K=\mathcal{L}\otimes F\) is given as_

\[K=-(R+\tilde{B}^{T}P\tilde{B})^{-1}\tilde{B}^{T}P\tilde{A} \tag{34}\]

_and \(P\) is the solution of the following ARE_

\[P=\tilde{A}^{T}P\tilde{A}+\mathcal{Q}-\tilde{A}^{T}P\tilde{B}(R+\tilde{B}^{T}P\tilde{B})^{-1}\tilde{B}^{T}P\tilde{A}. \tag{35}\]

_The corresponding optimal cost function is_

\[J^{*}(s,\infty)=X^{T}(s)PX(s). \tag{36}\]

_Moreover, if \(P\) is the unique positive definite solution to (35), then \(\tilde{A}+\tilde{B}K\) is stable._

The system (31) is rewritten as

\[X(k+1) =\tilde{A}X(k)+\sum_{i=1}^{N}\tilde{B}_{i}u_{i}(k), \tag{37}\]
\[Y_{i}(k) =C_{i}X(k), \tag{38}\]

where \(\tilde{B}_{i}=\begin{bmatrix}0&\cdots&B_{i}^{T}&0&\cdots&0\end{bmatrix}^{T}\), \(Y_{i}(k)\) is the measurement, and \(C_{i}\) is composed of \(0\) and \(I_{n}\), determined by the interaction among agents.

We design the distributed observer-based state feedback controllers:

\[u_{i}^{*}(k)=K_{i}\hat{X}_{i}(k),\quad i=1,2,\cdots,N, \tag{39}\]

where the distributed observers \(\hat{X}_{i}(k)\) are given by

\[\hat{X}_{i}(k+1) =\tilde{A}\hat{X}_{i}(k)+\tilde{B}_{1}K_{1}\hat{X}_{i}(k)+\cdots+\tilde{B}_{i-1}K_{i-1}\hat{X}_{i}(k)+\tilde{B}_{i}u_{i}^{*}(k)+\cdots+\tilde{B}_{N}K_{N}\hat{X}_{i}(k)+L_{i}(Y_{i}(k)-C_{i}\hat{X}_{i}(k)), \tag{40}\]

with \(L_{i}\) the observer gains to be designed, and \(K_{i}=\begin{bmatrix}0&0&\cdots&I&\cdots&0\end{bmatrix}K\).

**Theorem 3**.: _Let Assumption 1 hold. Consider the global system (37) for the multi-agent system (1) and the control laws (39). If there exist observer gains \(L_{i}\) such that the matrix_

\[\tilde{A}_{c}=\begin{bmatrix}W_{1}&-\tilde{B}_{2}K_{2}&\cdots&-\tilde{B}_{N}K_{N}\\ -\tilde{B}_{1}K_{1}&W_{2}&\cdots&-\tilde{B}_{N}K_{N}\\ \vdots&\vdots&\ddots&\vdots\\ -\tilde{B}_{1}K_{1}&-\tilde{B}_{2}K_{2}&\cdots&W_{N}\end{bmatrix} \tag{41}\]

_is stable, with \(W_{i}=\tilde{A}+\tilde{B}K-\tilde{B}_{i}K_{i}-L_{i}C_{i}\), then the observers (40) are stable, that is,_

\[\lim_{k\rightarrow\infty}\|\hat{X}_{i}(k)-X(k)\|=0,\quad i=1,\cdots,N. \tag{42}\]

_Moreover, if \(P\) is the positive definite solution of (35), then under the feedback controller (39) the multi-agent system (1) achieves consensus._

Proof.: This proof is similar to that of Theorem 1, so we do not repeat the details.

By Theorem 3, under the distributed state feedback controllers (39) and (40), all agents achieve consensus and the state of each agent converges to zero, consistent with the single-system result [26]. Similarly to Theorem 2, we also discuss the asymptotic optimality of the new distributed controllers (39).

**Theorem 4**.: _Under the proposed distributed controllers (39) and (40), with \(L_{i},i=1,2,\cdots,N\), chosen as in Theorem 3, the cost function is given by_

\[J(s,\infty)=X^{T}(s)PX(s)+\sum_{k=s}^{\infty}\begin{bmatrix}X(k)\\ \tilde{X}(k)\end{bmatrix}^{T}\begin{bmatrix}0&M_{1}\\ M_{1}^{T}&M_{2}\end{bmatrix}\begin{bmatrix}X(k)\\ \tilde{X}(k)\end{bmatrix}, \tag{43}\]

_where_

\[M_{1} =(\tilde{A}+\tilde{B}K)^{T}P\Omega-\begin{bmatrix}K_{1}^{T}R_{1}K_{1}&\cdots&K_{N}^{T}R_{N}K_{N}\end{bmatrix},\]
\[M_{2} =\begin{bmatrix}K_{1}^{T}R_{1}K_{1}&0&\cdots&0\\ 0&K_{2}^{T}R_{2}K_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&K_{N}^{T}R_{N}K_{N}\end{bmatrix}+\Omega^{T}P\Omega.\]

_Moreover, the cost difference between the cost function (43) and the optimal cost (36) is given by_

\[\Delta J(s,\infty) =J(s,\infty)-J^{*}(s,\infty)=\sum_{k=s}^{\infty}\begin{bmatrix}X(k)\\ \tilde{X}(k)\end{bmatrix}^{T}\begin{bmatrix}0&M_{1}\\ M_{1}^{T}&M_{2}\end{bmatrix}\begin{bmatrix}X(k)\\ \tilde{X}(k)\end{bmatrix}. \tag{44}\]

_In particular, when \(s\) is sufficiently large, the cost difference vanishes, i.e., the proposed consensus controller (39) achieves the optimal cost asymptotically._

Proof.: This proof is similar to that of Theorem 2, so the details are omitted.

## IV Numerical Simulation

In this section, we validate the proposed theoretical results through the following numerical examples.

**Example 1**.: _Consider the multi-agent system consisting of four homogeneous agents with the system matrices taken from [27],_

\[A=\begin{bmatrix}1.1&0.3\\ 0&0.8\end{bmatrix},\quad B=\begin{bmatrix}1\\ 0.5\end{bmatrix}. \tag{45}\]
_The interactions of agents are given in Fig. 1, in which each agent receives neighbor error information. Then we can determine \(H_{i},i=1,2,3,4\), as_

\[H_{1} =\begin{bmatrix}I_{2}&0&0\end{bmatrix},H_{2}=\begin{bmatrix}0&I_{2}&0\end{bmatrix},\]
\[H_{3} =\begin{bmatrix}0&I_{2}&0\end{bmatrix},H_{4}=0.\]

_We choose_

\[Q=I_{2},\quad R_{1}=R_{2}=R_{3}=R_{4}=1.\]

_According to the ARE (11) and the optimization in (23), the feedback gains \(K_{ei}\) and the observer gains can be obtained, respectively. Fig. 2 displays the evolution of each agent's state under the proposed consensus algorithm (M2); all agents' states reach the consensus value after 12 steps. The corresponding observer error vector \(e_{1}(k)-\hat{e}_{1}(k)\) under the proposed controller (13) converges to zero. With the same initial conditions, Fig. 3 shows the state trajectories under the traditional state feedback method (M1) [5]. One can see that the second state of each agent reaches consensus after 25 steps. To further compare the convergence performance of the different consensus algorithms, quantitative calculations based on the spectral radius \(\rho(\tilde{A}_{ec})\) and the norm of the first agent's state at different instants of time are shown in Table I. It can be observed that the proposed consensus algorithm reduces the maximum eigenvalue of \(\tilde{A}_{ec}\) and the norm of each agent's state \(\|x_{i}(k_{0})\|\). Therefore, the proposed distributed observer-based consensus algorithm (13) ensures that all agents reach consensus with a faster convergence speed._

_In the second example, with three agents, the communication topology is given in Fig. 5. The system dynamic parameters are set as_

\[A =1,B_{1}=\begin{bmatrix}1.5&0.5\end{bmatrix},B_{2}=\begin{bmatrix}0.8&1\end{bmatrix},B_{3}=\begin{bmatrix}1&-0.2\end{bmatrix},\]
\[C_{1} =\begin{bmatrix}1&0&0\\ 0&0&1\end{bmatrix},C_{2}=\begin{bmatrix}1&0&0\\ 0&1&0\end{bmatrix},C_{3}=\begin{bmatrix}0&1&0\\ 0&0&1\end{bmatrix},\]
\[Q =1,R_{1}=R_{2}=R_{3}=1.\]

_By solving the ARE (35) and applying Theorem 3, the feedback gains in (34) are obtained as_

\[K_{1} =\begin{bmatrix}-0.3935&-0.0000&-0.0000\\ -0.1312&-0.0000&-0.0000\end{bmatrix},\]
\[K_{2} =\begin{bmatrix}-0.0000&-0.2849&0.0000\\ -0.0000&-0.3561&0.0000\end{bmatrix},\]
\[K_{3} =\begin{bmatrix}-0.0000&0.0000&-0.4871\\ 0.0000&-0.0000&0.0974\end{bmatrix}.\]

_We obtain the observer gain matrices from the optimization in (23) as_

\[L_{1} =\begin{bmatrix}1.0000&-0.0000\\ -0.0000&0.0000\\ -0.0000&0.4934\end{bmatrix},L_{2}=\begin{bmatrix}0.3441&-0.0000\\ 0.0000&1.0000\\ -0.0000&0.0000\end{bmatrix},\]
\[L_{3} =\begin{bmatrix}-0.0000&-0.0000\\ 0.4160&0.0000\\ -0.0000&1.0000\end{bmatrix}.\]

_The observer error systems are stable, as shown in Fig. 6. Meanwhile, Fig. 7 shows the evolution of the multi-agent system (1) under the distributed state feedback controllers (39), where one can see that all agents' states converge to zero within ten steps, which indicates that consensus can be achieved rapidly._

## V Conclusions

In this paper, we have studied the consensus problem for discrete-time linear multi-agent systems via LQ optimal control theory. Different from the existing consensus algorithms, we designed a novel distributed controller based on observers that incorporate each agent's historical state information, obtained by solving Riccati equations. It was shown that the corresponding global cost function under the proposed controller is asymptotically optimal.
The new consensus algorithm does not require computing the eigenvalues of the communication topology, and can achieve a much faster consensus speed than the traditional consensus methods. Finally, simulation examples and comparisons with an existing consensus algorithm were provided to demonstrate the feasibility and effectiveness of the proposed control algorithms.
2303.00128
Representation Disentanglement via Regularization by Causal Identification
In this work, we propose the use of a causal collider structured model to describe the underlying data generative process assumptions in disentangled representation learning. This extends the conventional i.i.d. factorization assumption model $p(\mathbf{y}) = \prod_{i} p(\mathbf{y}_i )$, which is inadequate to handle learning from biased datasets (e.g., with sampling selection bias). The collider structure explains that conditional dependencies between the underlying generating variables may exist, even when these are in reality unrelated, complicating disentanglement. Under the rubric of causal inference, we show this issue can be reconciled under the condition of causal identification, attainable from data and a combination of constraints aimed at controlling the dependencies characteristic of the \textit{collider} model. For this, we propose regularization by identification (ReI), a modular regularization engine designed to align the behavior of large scale generative models with the disentanglement constraints imposed by causal identification. Empirical evidence on standard benchmarks demonstrates the superiority of ReI in learning disentangled representations in a variational framework. On a real-world dataset, we additionally show that our framework results in interpretable representations that are robust to out-of-distribution examples and that align with the true expected effect from domain knowledge.
Juan Castorena
2023-02-28T23:18:54Z
http://arxiv.org/abs/2303.00128v3
# Representation Disentanglement via Regularization by Identification

###### Abstract

This work focuses on the problem of learning disentangled representations from observational data. Given observations \(\{\mathbf{x}^{(i)}\}_{i=1}^{N}\) drawn from \(p(\mathbf{x}|\mathbf{y})\), with generative variables \(\mathbf{y}\) admitting the distribution factorization \(p(\mathbf{y})=\prod_{c}p(\mathbf{y}_{c})\), we ask whether learning disentangled representations matching the space of observations, with identification guarantees on the posterior \(p(\mathbf{z}|\mathbf{x},\hat{\mathbf{y}}_{c})\) for each \(c\), is plausible. We argue that modern deep representation learning models are ill-posed, exhibiting collider bias behaviour: a source of bias producing entanglement between generating variables. Under the rubric of causality, we show this issue can be explained and reconciled under the condition of identifiability, attainable under supervision or a weak form of it. For this, we propose regularization by identification (ReI), a regularization framework defined by the identification of the causal queries involved in the learning problem. Empirical evidence shows that enforcing ReI in a variational framework results in disentangled representations equipped with generalization capabilities to out-of-distribution examples and that align nicely with the true expected effect between generating variables and measurement apparatus.

Machine Learning, ICML

## 1 Introduction

One of the principal objectives of learning representations has been that of detecting/recognizing patterns or signatures in measurements to represent the qualitative and quantitative characteristics of the underlying physical processes being sensed. Most of the time, sensing as dictated, for example, by the Nyquist rate (Shannon, 1948) acquires sufficient information for detection, but leaves potentially unnecessary and redundant information in its measurements. Ideas to reduce such redundancies by representing information as concepts, patterns or features to achieve an economy of information have been the focus of study since the field's early days (Pearson, 1901a). Classical examples include principal component analysis (Pearson, 1901b), which assumes linear mapping functions along with orthogonality in its parametrizations. Independent component analysis (Bell & Sejnowski, 1995), on the other hand, restricts the elements of the parametrizations to be independent using mutual information. However, both of these methods assume all measurements \(\mathbf{x}\) live in a low-dimensional space, an assumption not applicable in all task contexts.

The standard variational formulation of representation learning frameworks consists in learning from the observables \(\mathbf{x}\in\mathbb{R}^{N}\) a generative model \(p_{\boldsymbol{\theta}}(\mathbf{x},\mathbf{z})=p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})p_{\boldsymbol{\theta}}(\mathbf{z})\) whose learned marginal likelihood \(p_{\boldsymbol{\theta}}(\mathbf{x})\) approximates the true \(p_{\boldsymbol{\theta}^{*}}(\mathbf{x})\). The latent variables \(\mathbf{z}\in\mathbb{R}^{N}\), distributed as \(p(\mathbf{z})\), and the parameters \(\boldsymbol{\theta}\) are assumed unknown, and focus on this problem has concentrated on finding priors that make the marginal and the posterior \(p_{\boldsymbol{\theta}}(\mathbf{z}|\mathbf{x})\) tractable.
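To fix ideas before reviewing specific models, below is a minimal sketch of the generic variational objective (the evidence lower bound) for a Gaussian encoder/decoder pair; shapes and names are illustrative assumptions only, and scaling the KL term by a factor \(\beta\) yields the \(\beta\)-VAE variant discussed below.

```python
import torch

def elbo(x, mu_z, logvar_z, decoder):
    """Monte-Carlo ELBO: E_q[log p(x|z)] - KL(q(z|x) || N(0, I)).

    mu_z, logvar_z parametrize the Gaussian approximate posterior q(z|x);
    `decoder` maps z to the mean of a unit-variance Gaussian p(x|z).
    """
    std = torch.exp(0.5 * logvar_z)
    z = mu_z + std * torch.randn_like(std)              # reparameterization trick
    recon = -0.5 * ((x - decoder(z)) ** 2).sum(dim=-1)  # log p(x|z), up to a constant
    kl = -0.5 * (1 + logvar_z - mu_z**2 - logvar_z.exp()).sum(dim=-1)
    return (recon - kl).mean()
```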
Variational auto-encoders (VAEs) (Kingma & Welling, 2013), for example, build an efficient optimization approach that maximizes the conditional likelihood \(p(\mathbf{x}|\mathbf{z})\) subject to similarity constraints quantified by the Kullback-Leibler (KL) divergence between an approximate posterior \(q_{\boldsymbol{\theta}}(\mathbf{z}|\mathbf{x})\) and a family of induced latent distribution priors \(p_{\boldsymbol{\theta}}(\mathbf{z})\) (typically an isotropic Gaussian). The problem with this group of methods, as regards disentanglement, is their focus on learning approximations of the true marginal data distribution without any restrictions or guarantees that the prior models the true generative mechanisms (Khemakhem et al., 2020). This not only disconnects the meaning of the learned latent representations from clear explanations and understanding (Lake et al., 2017) of the true generative processes, but also raises problems of trust through their lack of robustness to out-of-distribution (OOD) examples (D'Amour et al., 2020), while keeping the adjustments necessary for diagnosis and repair (Szegedy et al., 2013) obscure (Pearl, 2019).

Recent trends (Bengio et al., 2013; Higgins et al., 2018; Locatello et al., 2019; Van Steenkiste et al., 2019; Khemakhem et al., 2020) are in consensus that disentanglement of the generating factors leads to more robust representations that are less susceptible to the appearance of subsets of entangled factor variables in the intended tasks. Recent efforts along this line of work include unsupervised learning methods that exploit the enormous amounts of data available without requiring labels for each generating factor. Among the most popular methods are \(\beta\)-VAE (Higgins et al., 2016), Annealed VAE (Burgess et al., 2018), Factor VAE (Kim & Mnih, 2018), and DIP-VAE (Kumar et al., 2018), all imposing specific structure on the latent prior through modifications of the KL term. The \(\beta\)-VAE (Higgins et al., 2016), for example, includes a scalar \(\beta\) that controls the strength with which the latent prior is enforced. Increasing this scalar promotes the structure of the prior at the cost of divergence from the true marginal likelihood, so \(\beta\) acts as a balancing term. Rooted in the concept of model identifiability, (Locatello et al., 2019), however, challenged this line of work, arguing unsupervised learning of disentanglement to be impossible.

Weakly-supervised learning methods, on the other hand, have exploited weak labels, which have been shown to facilitate some form of disentanglement. (Mitrovic et al., 2020) proposes a method that consists in learning representations explaining the causal data generation mechanisms by promoting invariance to augmented data transformations, under the principle of independent causal mechanisms (Peters et al., 2016, 2017). Invariant risk minimization (IRM) of (Arjovsky et al., 2019) seeks representations that produce predictions invariant to environment contexts. (Bouchacourt et al., 2018) seeks to learn representations from grouped observations (i.e., a factor of variation shared between observations within a group) and proposes a multi-level VAE for learning group representations, tackling the limitations of VAEs that assume i.i.d. observations. (Locatello et al., 2020) shows that disentangled representations can be obtained under weak supervision when pairs of measurements share a factor of variation.
Their approach modifies the \(\beta\)-VAE objective by enforcing similarities between the shared generative factors of variation and decoupling those that are not shared. Worth noting is (Trauble et al., 2021), whose findings depict the method's capability to also disentangle when training on datasets constructed with unknown correlations between the generative factors (i.e., at least two factors that vary together) even though the factors are modeled as independent; a problem known to be one of the root causes affecting DL OOD robustness and fairness (D'Amour et al., 2020). Identifiable VAE (iVAE) (Khemakhem et al., 2020) proposes a method enabling a weak form of model identification. The definition therein consists in requiring that, for all \(\mathbf{x},\mathbf{z}\) and \(\forall\mathbf{\theta},\mathbf{\theta}^{\prime}\), equality of the marginal data distributions \(p_{\mathbf{\theta}}(\mathbf{x})=p_{\mathbf{\theta}^{\prime}}(\mathbf{x})\) imply equality of the parameters; in other words, no two distinct \(\mathbf{\theta}\) and \(\mathbf{\theta}^{\prime}\) may produce the same marginal data distribution. This, however, is done without using and testing (Pearl, 2009) the constraints readable from the graphical generative model. In light of these problems, the breadth of work in this research focuses on contributing methods that seek to reconcile the issue of entanglement in existing deep representation learning models. The contributions of this work can be summarized as: * "Regularization by Identification" (ReI), a regularization method defined by identification of the causal queries (Pearl, 1995) involved in the learning problem. The identification criteria are anchored in analysis of directed acyclic graphs (DAGs), which clearly stipulates the generative model assumed and states the conditions required for identification from observations valid under the DAG model. Derivation of ReI in a given learning problem applies the rules of the _do_ calculus of (Pearl, 1995) to delete or exchange actions for observations and provide the adjustments necessary to guarantee identification of the involved queries from observational data. * We show, under a pre-specified DAG model \(G\) encoding a typical data generative process, that standard representation learning frameworks present collider bias behaviour; a type of bias producing entanglement between the effects of the generating factors, and an issue which, to the best of our knowledge, has been either overlooked or not examined at all in representation learning. For this, we reformulate the representation learning problem from a variational inference standpoint and derive under ReI the conditions that guarantee identification and that ultimately produce disentanglement. * We provide empirical evidence that shows the potential of ReI on a dataset with joint variability between the generating factors; an issue recognized by (Trauble et al., 2021) as the problem of correlated data. In this case, ReI offers the potential to produce representations that: (1) disentangle the effects of the generating factors, with results well aligned with true expected behavior (e.g., from physics principles), supporting interpretation and understanding, and (2) are more generalizable in the presence of out-of-distribution examples in comparison to the standard non-identifiable DL model counterparts. 
## 2 Approach ### Generative Model Consider the problem of approximate posterior \(p(\mathbf{z}|\mathbf{x},\mathbf{y}_{c})\) inference as in the VAE framework of (Kingma and Welling, 2013), with the distinction that the unknown latent variable \(\mathbf{z}\) is conditioned on a generative factor \(\mathbf{y}_{c}\). Measurements \(\{\mathbf{x}^{(i)}\}_{i=1}^{N}\) are drawn from the marginal data distribution \(p(\mathbf{x})\) generated by factor variables \(\mathbf{y}\), which admit factorizations of the form \(p(\mathbf{y})=\prod_{c}p(\mathbf{y}_{c})\). The true posterior \(p(\mathbf{z}|\mathbf{x},\mathbf{y}_{c})\) has powerful representational properties: if well approximated, it can produce disentangled representations, remove potential sources of undesired spurious correlations, generalize better to OOD cases, and provide clear paths for explaining and understanding the effects of the underlying generative mechanisms; a much needed requirement for scientific discovery. In the variational inference framework this generative model can be encoded by the directed acyclic graph (DAG) \(G\) in Fig.1. Variable \(\mathbf{x}\in\mathbb{R}^{M}\) represents the sensor measurements, and \(\mathbf{y}\in\mathbb{R}^{n}\) with elements \(\mathbf{y}_{c},c\in\{1,...,n\}\) are the ground truth generating factors (e.g., class membership scores). Sensor noise is denoted by the unmeasured \(\mathbf{u}_{x}\in\mathbb{R}^{M}\), and \(\mathbf{z}\in\mathbb{R}^{M}\) is a learned latent representation. Arrows emanating from \(\mathbf{y}_{c}\) to \(\mathbf{z}\) and from \(\mathbf{z}\) to \(\mathbf{x}\) are aligned with the causality of the generation mechanisms. In other words, there is causal precedence of the \(\mathbf{y}_{c}\)'s, and they are a direct cause (i.e., implied by the direct arrows between variables) of \(\mathbf{z}\). Inspection of Fig. 1 reveals potential problems with the connections \(\rightarrow\mathbf{z}\leftarrow\), indicative of a collider (Berkson, 1946; Kim and Pearl, 1983; Pearl, 2009); a source of potential bias. Conditioning on either the consequence \(\mathbf{z}\) or its descendant \(\mathbf{x}\) unblocks information flow from the causes \(\mathbf{y}_{c}\rightarrow\mathbf{z}\leftarrow\mathbf{y}_{j}\) for \(c\neq j\) in Fig.1. For example, given that a restaurant is popular, learning that its service is bad makes it more likely that the popularity is due to good food. This collider bias phenomenon occurs because information on one of the causes makes the other causes involved more or less likely given that the consequence has occurred; even when the causes are independent (Pearl, 1995). In the variational framework, training by enforcing the likelihood term \(p(\mathbf{x}|\mathbf{z})p(\mathbf{z}|\mathbf{y}_{c})\) will, we argue, open the free flow of information between the colliding generating factors \(\mathbf{y}_{c}\), producing entangled representations if it remains untreated. Related works based on variational inference fundamentals like the VAE (Kingma and Welling, 2013) and even the supervised conditional VAE (Sohn et al., 2015) are susceptible to this collider problem when conditioning on \(\mathbf{z}\), as required by the likelihood function, since no adjustment for the collider is provided. 
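The collider phenomenon described above is easy to reproduce numerically. The following toy simulation (ours, purely illustrative) shows two independent causes becoming strongly dependent once one conditions on their common consequence:

```python
# Collider bias in a nutshell: y1 and y2 are independent causes of z, but
# conditioning on z induces a strong (negative) dependence between them.
import numpy as np

rng = np.random.default_rng(0)
y1, y2 = rng.normal(size=100_000), rng.normal(size=100_000)  # independent causes
z = y1 + y2 + 0.1 * rng.normal(size=100_000)                 # common consequence

print(np.corrcoef(y1, y2)[0, 1])               # ~0: marginally independent
mask = np.abs(z) < 0.1                         # condition on the collider z
print(np.corrcoef(y1[mask], y2[mask])[0, 1])   # strongly negative: entangled
```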
We recognize this as an important issue here and provide possible avenues for its resolution via the ReI framework, while also providing connections to extend some of the analysis in identifiability-based learning methods (Higgins et al., 2016; Locatello et al., 2020; Trauble et al., 2021) to integrate ReI in a modularized fashion. ### Identification of Causal Effect Queries Identification here follows Pearl's causal analysis framework (Pearl, 2009) to control/adjust for potential sources of bias (e.g., confounding, collider bias) between the participating variables. A DAG \(G\) (Pearl, 1995) (see Appendix A) encodes domain knowledge of the data generation mechanisms and serves as a graphical tool to formulate the adjustments necessary for identification, controlling for bias or the free flow of information between variables with causally-unsupported dependencies. Identification of a causal query \(p(y|\dot{x})=p(y|do(X=x))\) enables inference of an intervention \(do(X=x)\) from observed quantities alone and a model DAG \(G\). Deriving such identifiability enables estimation of the quantities of interest unambiguously, as \(G\) provides the model under which the distribution holds (Pearl, 1995). This is unlike nonidentifiable quantities, which can only be defined ambiguously and are likely to produce entangled representations. The VAE framework, known to be very effective in approximating the marginal data distribution \(p(\mathbf{x})\), does not provide any identification guarantees on the posterior distribution, rendering latent representations meaningless (in the sense of providing information about the true generative factors or their effects). Alternatively, the breadth of recent work most successful in representation disentanglement of the generative factors has relied on some form of identification (Peters et al., 2016; Bouchacourt et al., 2018; Mitrovic et al., 2020; Locatello et al., 2020; Khemakhem et al., 2020; Trauble et al., 2021). In light of this discussion, we seek to lift the conventional usage of the approximate posterior \(q(\mathbf{z}|\mathbf{x})\) in the VAE framework to the causal query \(p(\mathbf{z}|\mathbf{x},\hat{\mathbf{y}}_{c})\) invoking the intervention \(do(\mathbf{Y}_{c}=\mathbf{y}_{c})\). Inspection of the assumed DAG \(G\) model in Fig.1 reveals the presence of a collider. Adjustment for the participating variables in this collider is resolved by identification of the causal query \(p(\mathbf{z}|\hat{\mathbf{y}}_{c})\) involved in the derivation of the posterior. The latter (see Appendix B for the full derivation) is given in Eq.(1) as: \[p(\mathbf{z}|\hat{\mathbf{y}}_{c})=\sum_{\mathbf{w}_{c}}p(\mathbf{z}|\mathbf{ y})p(\mathbf{w}_{c})=\mathbb{E}_{p(\mathbf{w}_{c})}\left[p(\mathbf{z}| \mathbf{y})\right] \tag{1}\] where, for ease of exposition, we denote the generating factors not in the query by the set \(\mathbf{w}_{c}=\{\mathbf{y}_{j}:j\neq c\}\), with \(\mathbf{y}=\mathbf{w}_{c}\cup\{\mathbf{y}_{c}\}\). Note that the right hand side of Eq. (1) includes terms involving only standard probabilities obtained from observations. As Eq.(1) implies, the query can be computed as the expectation, over the generating factors not in the query, of the conditional latent distribution. Leaving one of the participating variables without control while conditioning on a collider unblocks the flow of information across the paths colliding at the common consequence. 
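Since the right hand side of Eq.(1) involves only observational quantities, it can be approximated straightforwardly by Monte Carlo. A minimal sketch follows, where `cond_density` (the conditional latent density \(p(\mathbf{z}|\mathbf{y})\)) and `sample_wc` (a sampler from \(p(\mathbf{w}_{c})\)) are hypothetical stand-ins for learned or empirical components, not functions from this paper:

```python
# Monte-Carlo reading of Eq.(1): average p(z | y_c, w_c) over draws of the
# factors not in the query, w_c ~ p(w_c).
import numpy as np

def do_query_density(z, y_c, cond_density, sample_wc, n_samples=1000):
    """Estimate p(z | do(Y_c = y_c)) = E_{p(w_c)}[ p(z | y_c, w_c) ]."""
    vals = [cond_density(z, y_c, sample_wc()) for _ in range(n_samples)]
    return float(np.mean(vals))
```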
Not every query is identifiable under \(G\), however. In the case of Fig.1, for example, the query \(p(\mathbf{z}|\hat{\mathbf{u}}_{\mathbf{x}})\) is unidentified, as the factor \(\mathbf{u}_{\mathbf{x}}\) is unmeasured and so the adjustment of Eq.(1) is not possible. Bias from the unmeasured \(\mathbf{u}_{\mathbf{x}}\) under model \(G\) of Fig.1 in the causal queries \(p(\mathbf{z}|\hat{\mathbf{y}}_{c})\) will result in entanglement with the effects of \(p(\mathbf{z}|\hat{\mathbf{u}}_{\mathbf{x}})\). Alternative remedies exist to mitigate this problem, but these have to rely on a parametrization enabling some form of approximation to \(p(\mathbf{z}|\hat{\mathbf{u}}_{\mathbf{x}})\). When \(\mathbf{u}_{\mathbf{x}}\) represents sensor noise, for example, assumed to be stationary and uncorrelated with \(\mathbf{y}\) as depicted in the DAG model \(G\) of Fig.1, possibilities like the denoising methods of (Lehtinen et al., 2018; Laine et al., 2019) exist, which have shown to be highly effective.

Figure 1: DAG \(G\) encoding the generative process.

### Regularization by Identification Regularization by identification (ReI) is a regularization engine defined by identification of the causal queries involved in the learning problem. The key step is that identification of the causal queries is anchored in analysis of a pre-specified DAG \(G\) model encoding the particular data generation mechanisms under which the identifications are valid. ReI reformulates traditional learning frameworks to make the necessary adjustments for causal identification of the involved queries, made explicit by regularization. This is altogether different from the weakly-supervised setting of (Mitrovic et al., 2020; Peters et al., 2017; Arjovsky et al., 2019), which aims at finding the generating mechanisms by imposing some form of invariance to real or augmented data variability, or from (Bouchacourt et al., 2018; Locatello et al., 2020), which impose invariance to generative factors shared between at least pairs of observations while keeping those detected as varying free. In addition, distinct from (Khemakhem et al., 2020), our method relies on identification of the causal queries valid under a specified DAG \(G\) model rather than a statistical notion of model identification. Lastly, throughout this work the latent space lives in the same space as the observations (this is done without restricting the analysis to latents living in lower-dimensional spaces). This characteristic is shared with inverse denoising problems (Zhang et al., 2017; Lehtinen et al., 2018) and denoising diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020), as examples. We assume this to yield representations more suitable to support interpretation, explanation and understanding of the effects of generating factors on data, as later verified empirically in Section 3.2. The learning problem of the variational framework of (Kingma and Welling, 2013) optimizes the evidence lower bound (ELBO) to approximate the true posterior \(p^{*}(\mathbf{z}|\mathbf{x})\) given measurements \(\{\mathbf{x}^{(i)}\}_{i=1}^{N}\). 
The ELBO can be formulated by two terms: a likelihood \(\mathcal{L}_{\ell}(\mathbf{\theta},\mathbf{\phi};\mathbf{x}^{(i)})\) and a regularizer \(\mathcal{L}_{\rho}(\mathbf{\theta},\mathbf{\phi};\mathbf{x}^{(i)})\) as: \[\mathcal{L}(\mathbf{\theta},\mathbf{\phi};\mathbf{x}^{(i)})=\mathcal{L}_{\ell}(\mathbf{ \theta},\mathbf{\phi};\mathbf{x}^{(i)})-\lambda\mathcal{L}_{\rho}(\mathbf{\theta},\bm {\phi};\mathbf{x}^{(i)}) \tag{2}\] where the likelihood term of most VAE variants is given by Eq.(3) as: \[\mathcal{L}_{\ell}(\mathbf{\theta},\mathbf{\phi};\mathbf{x}^{(i)})=\mathbb{E}_{q( \mathbf{z}|\mathbf{x}^{(i)})}\left[\log p(\mathbf{x}^{(i)}|\mathbf{z})\right] \tag{3}\] and the regularizer \(\mathcal{L}_{\rho}(\mathbf{\theta},\mathbf{\phi};\mathbf{x}^{(i)})\) is given, in the case of the standard VAE, by the Kullback-Leibler (KL) divergence \[\mathcal{L}_{\rho}(\mathbf{\theta},\mathbf{\phi};\mathbf{x}^{(i)})=D_{KL}(q(\mathbf{z }|\mathbf{x}^{(i)})||p(\mathbf{z})) \tag{4}\] imposing a prior \(p(\mathbf{z})\), typically a standard Gaussian, on the approximate posterior. The \(\mathbf{\theta}\), \(\mathbf{\phi}\) are the parameters of the encoder and decoder models, respectively, and are optimized over the training dataset. The scalar \(\lambda\) in Eq.(2) is the regularizer strength balancing tradeoffs between the likelihood and priors, a parameter utilized by the \(\beta\)-VAE to promote the prior structure. Another extension without identification guarantees, structuring the latent space by class under supervision, is the conditional VAE of (Sohn et al., 2015), which instead approximates the conditional posterior \(p(\mathbf{z}|\mathbf{x},\mathbf{y})\) given data pairs \(\{\mathbf{x}^{(i)},\mathbf{y}^{(i)}\}_{i=1}^{N}\). ReI reformulates the VAE and CVAE by lifting, as in Section 2.2, the posterior \(p(\mathbf{z}|\mathbf{x},\mathbf{y})\) to the causal query \(p(\mathbf{z}|\mathbf{x},\hat{\mathbf{y}}_{c})\). The reformulated posterior, adjusted to render it identifiable under the DAG \(G\) in Fig.1, is equivalent to: \[p(\mathbf{z}|\mathbf{x},\hat{\mathbf{y}}_{c})=p(\mathbf{x}|\mathbf{z})\, \mathbb{E}_{p(\mathbf{w}_{c})}\left[p(\mathbf{z}|\mathbf{y})\right]/p(\mathbf{ x},\mathbf{y}_{c}) \tag{5}\] involving the observables \(\{\mathbf{x}^{(i)},\mathbf{y}^{(i)}\}_{i=1}^{N}\). The adjustments in Eq.(5) block the flow of information between the variables \(\mathbf{y}_{c}\rightarrow\mathbf{z}\leftarrow\mathbf{w}_{c}\) when conditioning on \(\mathbf{z}\), as required by the variational framework. The full derivation of Eq.(5), showing it to be an identifiable quantity, can be found in Appendix B. The ELBO is then adjusted to reflect the causal query \(p(\mathbf{z}|\mathbf{x},\hat{\mathbf{y}}_{c})\) instead of the standard posterior \(p(\mathbf{z}|\mathbf{x},\mathbf{y}_{c})\). As derived in Appendix C, the adjustments required for identification of \(p(\mathbf{z}|\mathbf{x},\hat{\mathbf{y}}_{c})\) result in the reformulated regularizer of the ELBO given in Eq.(6) as: \[\mathcal{L}_{\rho}(\mathbf{\theta},\mathbf{\phi};\mathbf{x}^{(i)},\mathbf{y}^{(i)})=D_{KL}\Big{(}q(\mathbf{z}|\mathbf{x}^{(i)},\mathbf{y}_{c}^{(i)})||\,\mathbb{E}_{p(\mathbf{w}_{c})}\left[p(\mathbf{z}|\mathbf{y}^{(i)})\right]\Big{)} \tag{6}\] Note that the necessary adjustments that render the causal query identifiable impose constraints only on the ELBO regularizer. The likelihood function in learning problems remains, at least to the best of our knowledge, without modification in general. 
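As a rough illustration of how the regularizer in Eq.(6) could be realized in practice, the sketch below computes a Monte-Carlo KL between a Gaussian posterior \(q(\mathbf{z}|\mathbf{x},\mathbf{y}_{c})\) and the adjustment term \(\mathbb{E}_{p(\mathbf{w}_{c})}[p(\mathbf{z}|\mathbf{y})]\), approximated as a uniform mixture over sampled \(\mathbf{w}_{c}\). The `prior_net` module and the mixture approximation are our own assumptions, not the paper's implementation:

```python
import torch

def gaussian_log_prob(z, mu, logvar):
    # Diagonal-Gaussian log-density; constants dropped (they cancel in the KL).
    return (-0.5 * (logvar + (z - mu) ** 2 / logvar.exp())).sum(dim=-1)

def rei_regularizer(mu_q, logvar_q, y_c, w_c_samples, prior_net, n_z=64):
    # Sample z ~ q(z | x, y_c) with the reparametrization trick.
    z = mu_q + torch.randn(n_z, *mu_q.shape) * (0.5 * logvar_q).exp()
    log_q = gaussian_log_prob(z, mu_q, logvar_q)
    # E_{p(w_c)}[ p(z | y_c, w_c) ] as a uniform mixture over sampled w_c.
    comps = []
    for w_c in w_c_samples:
        mu_p, logvar_p = prior_net(torch.cat([y_c, w_c], dim=-1))
        comps.append(gaussian_log_prob(z, mu_p, logvar_p))
    log_mix = torch.logsumexp(torch.stack(comps), dim=0) - torch.log(
        torch.tensor(float(len(w_c_samples))))
    return (log_q - log_mix).mean()  # Monte-Carlo estimate of the KL in Eq.(6)
```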
This regularizer-only modification is also consistent with the causal literature (Spirtes et al., 1996; Pearl, 2009), where free parameters abide by the likelihood function (e.g., least squares) while the adjustments imposed by identification constrain it (e.g., to be zero in the case of linear models (Spirtes, 1995)). Given these characteristics, we term our method ReI, as the constraints required to make the causal queries involved in the learning problem identifiable can be directly imposed by a regularizer. One of the motivating characteristics of ReI observed in the aforementioned reformulation of the ELBO is its modularity. This enables the constraints imposed by ReI, like the regularizer derived in the VAE reformulation, to be added in a plug-and-play manner to other frameworks such as (Higgins et al., 2016, 2018; Kim and Mnih, 2018; Bouchacourt et al., 2018; Locatello et al., 2020), similar in spirit to the work of (Venkatakrishnan et al., 2013) on plug-and-play regularization with denoisers. In terms of the functional encoder/decoder approximators, deep model capacity is assumed to satisfy the data processing inequality with equality constraints. In other words, the mutual information \(I\) between \(\mathbf{Y}_{c}\) and \(\mathbf{Z}\) is preserved relative to \(\mathbf{Y}_{c}\) and \(\mathbf{X}\) (i.e., \(I(\mathbf{Y}_{c},\mathbf{Z})=I(\mathbf{Y}_{c},\mathbf{X})\)). This assumption has been used in other works (Locatello et al., 2020; Mao et al., 2022) and is justified by the VAE's objective to faithfully approximate the marginal data distribution. ### Linear Parametrization Parametrization of the encoding and decoding functionals \(f_{\boldsymbol{\theta}}(\mathbf{z})\) and \(h_{\boldsymbol{\phi}}(\mathbf{x})\), respectively, implies interesting simplifications. In the simplest case, assuming a linear model, i.e., \(f_{\boldsymbol{\theta}}(\mathbf{z})=\boldsymbol{\theta}\mathbf{z}\) and \(h_{\boldsymbol{\phi}}(\mathbf{x})=\boldsymbol{\phi}\mathbf{x}\), adjustments of the type of Eq.(1) simplify, as these are equivalent to vanishing correlations. In a DAG \(G\) of disjoint variable sets \(X,Y,Z\), a set \(S\) of correlation coefficients is set to zero or vanishes (i.e., \(\rho_{YX.Z}=0\)) if and only if \(X\) and \(Y\) are \(d\)-separated given set \(Z\) (Spirtes, 1995; Spirtes et al., 1996). Definitions of \(d\)-separation are included in Appendix A, but in summary, this is the fundamental tool to derive the conditions for identification of causal queries (Pearl, 1995). The adjustments in Eq.(1) thus imply, in the linear parametrization, constraining the partial correlations between latent representations conditional on a given generating factor and those from the others to zero in advance while keeping all others free (i.e., \(r_{\mathbf{x}\mathbf{y}.\mathbf{w}_{c}}=0\)). One way of imposing these constraints is by parametrizing the regularizer in Eq.(6), which can be equipped with these restrictions through minimization of the partial correlations. However, as there are no guarantees that the standard optimizers used in the DL world attain vanishing partial correlations, a better approach is to note that the vanishing partial correlations can be implemented via the following orthogonality constraints: 
\[\begin{split}(\tilde{\boldsymbol{\phi}}\tilde{\boldsymbol{\theta }})^{T}\tilde{\boldsymbol{\phi}}\tilde{\boldsymbol{\theta}}=\text{I}\\ \tilde{\boldsymbol{\phi}}=\prod_{d=D}^{1}\boldsymbol{\phi}_{d}, \qquad\tilde{\boldsymbol{\theta}}=\prod_{d=D}^{1}\boldsymbol{\theta}_{d}\end{split} \tag{7}\] where I is the identity matrix and the index order in \(d\) respects matrix multiplication order. Equipped with these constraints, the ELBO regularizer in the linear case restricts the free flow of information between the variables, preventing entanglement under the DAG \(G\) in Fig.1. ## 3 Experimentation Experiments were conducted in applications of spectroscopic sensors, specifically using data from a laser-induced breakdown spectroscopy (LIBS) instrument. LIBS is a remote sensing technology used for prediction of the chemical composition of geomaterials (e.g., rocks, soil) based on their spectral signatures. On Mars, the ChemCam/SuperCam LIBS-based instruments are capable of investigating \(<1\) mm size samples, excited by laser from distances of up to 7 m. The instrument is equipped with a 1064 nm laser and ultraviolet (UV), visible (VIO) and near-infrared (NIR) band spectrometers, which altogether are capable of collecting the sample's spectral signatures occurring between 240-905 nm. It is through the analysis of such spectral signatures that chemical composition information of a sample can be readily extracted. The focus here is applying the ReI framework to three tasks: (1) representation disentanglement, (2) prediction and (3) transfer. The first consists in learning representations that characterize the spectral signatures of specific chemical elements. The second uses the learned representations to predict the chemical content of sampled materials, while the third tests the generalization capabilities to dataset shifts by training on data collected in a controlled laboratory setting on Earth while deployment occurs in the wild on the Martian surface. ### Dataset The ChemCam LIBS instrument (Wiens et al., 2012) datasets contain raw and denoised spectra obtained from a variety of targets (e.g., rocks, soil) and from reference calibration standards of known and certified chemical composition. The specific datasets we employ consist of spectrally resolved LIBS signal measurements collected on Earth in a laboratory setting from a set of \(\sim\) 585 reference calibration standards (Clegg et al., 2017) and on Mars from a set of 10 reference standards of known true composition. Each target is repeatedly shot (e.g., 50 times), each shot followed by a measurement of the full 240-905 nm LIBS signal. After collection, wavelengths within the bands [240.811, 246.635], [338.457, 340.797], [382.13, 387.859], [473.184, 492.427], [849, 905.574] were ignored, consistent with the practices of the ChemCam team (Clegg et al., 2017). ### Representation disentanglement The ability of ReI to learn disentangled representations is evaluated here. Training utilizes example pairs \(\{\mathbf{x}^{(i)},\mathbf{y}^{(i)}\}_{i=1}^{N}\) of LIBS signal measurements \(\mathbf{x}\) and corresponding chemical composition ground truth scores \(\mathbf{y}\). 
Scores \(\mathbf{y}_{c}\) represent \(\%\) oxide composition for \(c\in\{1,...,11\}\) indexing \(\{\text{SiO}_{2},\text{TiO},\text{Al}_{2}\text{O}_{3},\)\(\text{FeO}_{\text{T}},\text{MgO},\text{MnO},\text{CaO},\text{Na}_{2}\text{O},\text{K}_{2} \text{O},\text{CO}_{2},\text{H}_{2}\text{O}\}\), which, together with sensor noise \(\mathbf{u}_{\mathbf{x}}\in\mathbb{R}^{M}\), produce the measured LIBS signal \(\mathbf{x}\in\mathbb{R}^{M}\) with \(M=5485\). For this, qualitative comparative evaluations of the representations produced by our ReI framework were performed in light of the known characteristic spectral response of each chemical oxide. We trained the VAE in an MLP architecture to produce representations which are compared in three cases: (1) the standard unidentified (causally-unconstrained) model, (2) the model constrained by ReI as in Eq.(2) with all generating factors identified except for sensor noise \(\mathbf{u}_{\mathbf{x}}\), and (3) ReI with all generating factors identified. Training uses the \(585\) reference calibration targets under leave-one-out, while testing was done on data from the left-out standard until all targets were covered. Hyperparameters of the DL model were set to an initial learning rate of 1.0, decayed after 75 epochs with cosine annealing (Loshchilov and Hutter, 2017), with 300 epochs in total. Batches were constructed at each epoch from a set of 64 shot-averaged examples randomized over the whole training set without replacement. The shot-averages are computed by averaging the LIBS signal representations over an individual target and laser shot location. This averaging is consistent with common practices of the ChemCam team (Wiens et al., 2013; Clegg et al., 2017). A representative example of the learned representations corresponding to the chemical oxide K\({}_{2}\)O is shown in Fig.2. These were generated by sampling from \(q(\mathbf{z}|\mathbf{x}^{(i)},\mathbf{y}_{c}^{(i)})\), querying \(\mathbf{y}_{c}\) as K\({}_{2}\)O and averaging over \(L=100\) samples. Figs. 2a, 2c and 2e show the learned representations for K\({}_{2}\)O in: (1) the standard causally-unconstrained case, (2) ReI with sensor noise \(\mathbf{u}_{\mathbf{x}}\) unidentified and (3) ReI with identification of all generating factors. The vertical axis of each plot shows the normalized magnitude and the horizontal axis represents spectral wavelength importance. Figs. 2b, 2d and 2f illustrate the corresponding prediction performance \(\tilde{\mathbf{y}}_{c}\) of the three cases using the representations \(\mathbf{z}\) along with a trained linear prediction head. Prediction performance, judged by the point distribution along the 1:1 line and as measured by the root mean squared error (RMSE), is similar in all three cases, with a marginal advantage for the standard VAE (i.e., (1) 0.8, (2) 1.21, (3) 1.30). However, these all come from distinct learned representations, with key observations supporting evidence of collider behavior. First, note that K\({}_{2}\)O (potassium oxide) is known and expected to respond to wavelengths around \(\sim 770\) nm, as shown in Fig.3, which illustrates the ground truth expected spectral responses of a variety of chemical elements. The standard VAE in Fig. 2a resulted in a representation with spectral peaks deemed important spread throughout the entire spectrum. 
This is indicative of information flow from the generative causes \(\mathbf{w}_{c}=\{\mathbf{y}_{j}:j\neq c\}\) not in the query to \(\mathbf{y}_{c}\) (i.e., K\({}_{2}\)O) through the unblocked paths \(\mathbf{w}_{c}\rightarrow\mathbf{z}\leftarrow\mathbf{y}_{c}\) opened by conditioning on \(\mathbf{z}\) or its descendant \(\mathbf{x}\). Fig. 2c, in contrast, shows the resulting representation obtained by ReI with all generative factors except for sensor noise \(\mathbf{u}_{\mathbf{x}}\) identified. Although most of the wavelengths previously deemed important were flattened, some spectral peak patterns from Fig. 2a still persist, although at a lower scale. This, we argue, is indicative of the paths \(\mathbf{w}_{c}\rightarrow\mathbf{z}\leftarrow\mathbf{u}_{\mathbf{x}}\) and \(\mathbf{u}_{\mathbf{x}}\rightarrow\mathbf{z}\leftarrow\mathbf{y}_{c}\) producing information bridges between these variables. Finally, Fig. 2e illustrates the representation by ReI with all generative factors identified. Note that in this case, most wavelengths were brought down to zero except for the two strong peaks at \(\sim 770\) nm. This is in alignment with the expected characteristic spectral response for K\({}_{2}\)O, as corroborated by Fig.3. Identification of the causal queries involved in the variational learning framework thus produced representations well aligned with the expected effects of the generating factors. The empirical evidence provided thus supports our claim that DL models suffer from collider bias. Although downstream tasks, as presented here in the prediction of in-distribution examples, can obscure the aforementioned illness, the task of representing the effects of generating factors clearly shows evidence of this problem, with effects supporting collider behavior.

Figure 2: Comparison of the learned representations for chemical oxide K\({}_{2}\)O.

These findings thus provide a plausible alternative explanation to the spurious association problems between factors found in (Glocker et al., 2019; Geirhos et al., 2020; Pezeshki et al., 2021; Banerjee et al., 2021) and to fairness issues (Zhao et al., 2017), and provide a venue for analysis and remediation through causality as viewed by (Pearl, 2010) and tackled here by ReI. We would also like to note that the learned representations from ReI, visualized pictorially, are amenable to interpretation, explain the effects of generating factors as they relate to the effects of the measuring apparatus, and can support the understanding much needed for scientific discovery. ### Prediction and Transfer Quantitative evaluations comparing the generalization of the learned representations in chemical composition prediction tasks in the presence of dataset shifts are presented here. The problem of dataset shift originates, within the scope of the LIBS application, from training on collections of LIBS measurements of targets on Earth in a laboratory setting while deployment occurs on data collected in the wild from Mars. (Clegg et al., 2017) found that the Martian environment has effects that shift the distribution of measurements in relation to those on Earth and provided a manually engineered approach for its correction. This task then seeks to investigate the transferability of the representations in the presence of out-of-distribution shifts caused by deployment in a distinct environment. 
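A schematic of this evaluation protocol, under our own simplifying assumptions (any regressor exposing a fit/predict interface; the ChemCam arrays as plain matrices), could look as follows:

```python
# Earth-to-Mars transfer evaluation sketch: train on laboratory standards,
# score per-oxide RMSE on the Martian (out-of-distribution) standards.
import numpy as np

def transfer_rmse(model, X_earth, Y_earth, X_mars, Y_mars):
    model.fit(X_earth, Y_earth)        # train in the laboratory setting
    Y_pred = model.predict(X_mars)     # deploy on the OOD Mars data
    # RMSE in % oxide, one value per chemical oxide column.
    return np.sqrt(np.mean((np.asarray(Y_pred) - np.asarray(Y_mars)) ** 2, axis=0))
```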
Representative example results are included in Fig.4, which shows ground truth versus prediction plots for two element oxides, Al\({}_{2}\)O\({}_{3}\) and MgO. The four leftmost panels, Figs. 4a, 4e, 4b, 4f, correspond to performance results of the learned representation using the standard VAE+FC linear head, whereas the rightmost four, Figs. 4c, 4g, 4d, 4h, show those from ReI. Figs. 4a, 4e, 4c, 4g show that both the standard model and ReI present similar performance for in-distribution example testing (under leave-one-out), with the standard being marginally better in RMSE. In contrast, Figs. 4b, 4f, 4d, 4h show significant differences in performance for out-of-distribution (OOD) example testing. ReI presents better-behaved performance and outperforms the standard by large margins in all cases. Note that even though the standard methods present a performance advantage over ReI for in-distribution examples, this is not the case for out-of-distribution examples collected in the wild from Mars. ReI learns representations with disentangled variables that show better generalization on the tested out-of-distribution examples. This behavior is consistent with findings by (Tsipras et al., 2018) showing that highly predictive non-robust features in the data tend to reduce learner performance when presented with out-of-distribution examples. ### DL model architectures at transfer Table 1 provides additional results comparing performance on Earth-to-Mars transfer for a variety of DL architectures, averaged over all elements \(\mathbf{y}\in\mathbb{R}^{n}\) with \(n=11\). Comparisons include fully connected (FC), multilayer perceptron (MLP), MLP Mixer (Tolstikhin et al., 2021), ResNet (He et al., 2016), U-Net (Ronneberger et al., 2015), Transformers (Dosovitskiy et al., 2020) and again the standard VAE (Kingma and Welling, 2013). Note that some of the architectures do not produce a latent representation explicitly; these are instead trained end-to-end for prediction. The number in parentheses next to each architecture name (e.g., FC(10)) expresses the corresponding depth in layers. The results of Table 1 show that ReI outperforms standard architectures on OOD examples regardless of the inductive biases implied by the architectural designs in comparison. Fig.5 instead shows transfer performance as a function of DL model depth. In this case, the FC, MLP, MLPMixer and ResNet+FC networks were compared. This plot shows that ReI is capable of outperforming standard DL models, which in this case did not exhibit generalization capabilities to OOD examples regardless of depth. As a final remark, we would like to highlight that gains in task performance may not necessarily translate into more generalizable DL models. In fact, as evidenced by the experiments, these may sometimes trick one's belief in a better model. In our case, these issues were settled through experiments evaluating the resulting representations of the effects between the generating factors and the measuring apparatus against the expectations from domain knowledge. \begin{table} \begin{tabular}{l r} \hline \hline Architecture & RMSE (\% oxide) \\ \hline FC(10) & 5.19 \\ MLP(10) & 5.06 \\ MLPMixer(8) & 5.23 \\ ResNet(18) + FC(10) & 6.23 \\ U-Net + FC(5) & 5.12 \\ Transformer + FC(10) & 6.12 \\ VAE + FC(10) & 4.51 \\ ReI-VAE + FC(1) & **2.45** \\ \hline \hline \end{tabular} \end{table} Table 1: Transfer Performance Comparison

Figure 3: Known reference characteristic spectral response. 
## 4 Conclusions In this work, we proposed ReI: a regularization method defined by the identification of causal queries anchored in analysis of graphical generative models of data. We argued that standard non-identifiable DL models are biased by collider behaviour and showed supporting empirical evidence of this. In a variational framework, we showed how analysis of the graphical data generative model under the lens of causality can be used to adjust for collider bias via ReI in representation learning problems. Empirical evidence shows ReI is capable of learning the effects between the individual generating factors and the sensor, removing collider bias, producing representations in disentangled form that are generalizable to OOD example cases and that support interpretation, explanation and understanding of both the factor effects and manipulations of these for posterior sampling and generation.
2309.14106
Performance study update of observations in divergent mode for the Cherenkov Telescope Array
Due to the limited field of view (FoV) of Cherenkov telescopes, the time needed to achieve target sensitivity for surveys of the extragalactic and Galactic sky is large. To optimize the time spent to perform such surveys, a so-called "divergent mode" of the Cherenkov Telescope Array Observatory (CTAO) was proposed as an alternative observation strategy to the traditional parallel pointing. In the divergent mode, each telescope points to a position in the sky that is slightly offset, in the outward direction, from the original center of the field of view. This brings the advantage of increasing the array's total instantaneous FoV. The search for very-high-energy transient sources also benefits from an enlarged field of view, making it possible to cover large sky regions in follow-up observations, or to quickly cover the probability sky map in case of Gamma Ray Bursts (GRB), Gravitational Waves (GW), and other transient events. In this contribution, we present the proposed implementation of the divergent pointing mode and its first preliminary performance estimation for the southern CTAO array.
A. Donini, I. Burelli, O. Gueta, F. Longo, E. Pueschel, D. Tak, A. Vigliano, T. Vuillamme, O. Sergijenko, A. Sarkar
2023-09-25T13:00:00Z
http://arxiv.org/abs/2309.14106v1
# Performance study update of observations in divergent mode for the Cherenkov Telescope Array ###### Abstract: Due to the limited field of view (FoV) of Cherenkov telescopes, the time needed to achieve target sensitivity for surveys of the extragalactic and Galactic sky is large. To optimize the time spent to perform such surveys, a so-called "divergent mode" of the Cherenkov Telescope Array Observatory (CTAO) was proposed as an alternative observation strategy to the traditional parallel pointing. In the divergent mode, each telescope points to a position in the sky that is slightly offset, in the outward direction, from the original center of the field of view. This brings the advantage of increasing the array's total instantaneous FoV. The search for very-high-energy transient sources also benefits from an enlarged field of view, making it possible to cover large sky regions in follow-up observations, or to quickly cover the probability sky map in case of Gamma Ray Bursts (GRB), Gravitational Waves (GW), and other transient events. In this contribution, we present the proposed implementation of the divergent pointing mode and its first preliminary performance estimation for the southern CTAO array. ## 1 Introduction The Cherenkov Telescope Array Observatory (CTAO) is going to be the major next-generation observatory for ground-based very-high-energy gamma-ray astronomy [1]. A significant improvement in angular resolution, energy resolution and sensitivity with respect to existing IACT experiments will be achieved by building a large number of telescopes of three different sizes across two sites, one in the Northern Hemisphere, at La Palma in the Canary Islands, and one in the Southern Hemisphere, at Cerro Paranal in Chile. The Small-Sized Telescopes (SSTs) will have a primary mirror of about 4 meters in diameter, the Medium-Sized Telescopes (MSTs) a primary mirror of 12 meters, and the Large-Sized Telescopes (LSTs) will be the largest, with a 23-meter primary mirror. The work presented here shows for the first time the performance of different divergent pointing configurations for the southern CTA site. The dataset used is based on Monte Carlo (MC) simulations tailored to each divergent configuration. ## 2 Divergent pointing Divergent pointing was first introduced as a possible pointing strategy for CTAO to optimize the extragalactic survey task by Dubus et al. in 2013 [2]. This study analysed the behavior of an array of H.E.S.S.-like MSTs arranged to cover a region of the sky of \(20^{\circ}\times 20^{\circ}\). The idea underlying the divergent mode is to slightly incline each telescope in the outward direction by an angle increasing with the telescope distance from the center of the array, as shown in Figure 1. The advantage of this configuration is an increased FoV, which reduces the time needed to cover large areas of the sky and also increases the probability of observing a transient source inside this enlarged FoV. The drawback is that reduced energy and angular resolution are expected. The goal should be to maximize the size of the field of view while maintaining a good performance. However, different divergent pointing configurations are possible, and the most suitable for the science goals one intends to achieve should be chosen. 
Following the work presented by Dubus et al., Szanecki et al. [3] analysed the performance of an array of MSTs, showing that, for the configuration considered, divergent pointing could lead to a gain of a factor 2.3 in time for the extragalactic survey at a defined sensitivity level. A more recent study [4] analysed the performance of the CTAO northern array in the so-called Omega or Baseline configuration (4 LSTs + 15 MSTs) using the simulations from the third massive MC production (prod3 [5]). The configurations studied were optimized for the northern site, increasing the array FoV by a variable factor going from 1.5 to 5. The need to cover a large area of the sky is a requirement not only for the extragalactic survey. Since Gravitational Waves are poorly localized, the search for their electromagnetic counterparts is performed inside a large uncertainty region (\(\sim\) 100-1000 deg\({}^{2}\)), making it a topic that can benefit from the divergent pointing strategy. The same applies to GRBs, which are also poorly localized.

Figure 1: Two modes of configuration of the telescope system: a) normal (parallel) mode; b) divergent mode. From [3].

As mentioned before, an enlarged FoV also means an enhanced probability for a transient source to fall inside the region of the sky observed by the instrument. This could hopefully lead to the observation of the onset phase of Gamma-Ray Bursts, also known as prompt emission, which has never been observed with IACTs. ### The divtel library The main code that calculates the optimal pointing direction for each telescope is hosted under the official CTAO GitHub page [6] and should be used by the Array Control And Data Acquisition system (ACADA) group, which takes care of the implementation of the divergent pointing strategy in the control system of CTAO. The main idea of the algorithm is to have a single parameter, called div, through which the divergent configuration of the array can be modified. However, as of now, this implementation does not allow control of the radial symmetry of the FoV and should soon be updated to give better control of the final geometry of the chosen configuration. The current version of the code draws an imaginary line aligned with the telescope pointing direction and connecting each telescope ground position, represented by **T** in Figure 2 (left), to an axis perpendicular to the ground and passing through the Center of Gravity (CoG) of the array, labeled z in the same figure. These lines are defined so that they all meet at the same point, called the ground point (**G**), whose position along the z axis determines the pointing direction of each telescope. In fact, the parameter div is directly related to the angle between the CoG axis and the telescope pointing direction by the equations \[div=\sin(\alpha),\qquad|\widehat{GB}|=\frac{f}{\tan(\arcsin(div))} \tag{1}\] where f is a normalization factor taking into account the real distance of the telescope from **B**. Notice that our goal is to define \(|\widehat{GB}|\), which can be obtained from the div (or \(\alpha\)) value of a single telescope. Once the position of the ground point is defined, the array pointing directions can be computed. A useful analogy, already introduced in previous works, is that of an umbrella: moving the runner up and down modifies the inclination of the stretchers accordingly. 
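For illustration, a minimal geometric sketch of this recipe is given below, with the normalization factor f set to 1 and the ground point G assumed to lie below the CoG on the z axis; this is our reading of the construction, not the divtel implementation itself:

```python
# Divergent-pointing geometry sketch: place G on the CoG axis at the depth
# set by div (Eq. 1), then point each telescope along the line from G
# through its ground position T.
import numpy as np

def divergent_pointings(tel_positions, cog, div, f=1.0):
    """tel_positions: (N, 3) positions T; cog: (3,) array CoG; div = sin(alpha)."""
    tel_positions = np.asarray(tel_positions, dtype=float)
    if div <= 0.0:
        # div -> 0 pushes G to infinite depth: parallel (vertical) pointing.
        return np.tile(np.array([0.0, 0.0, 1.0]), (len(tel_positions), 1))
    gb = f / np.tan(np.arcsin(div))                  # |GB| from Eq.(1)
    G = np.asarray(cog, dtype=float) - np.array([0.0, 0.0, gb])
    v = tel_positions - G                            # line G -> T gives the pointing
    return v / np.linalg.norm(v, axis=1, keepdims=True)
```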
## 3 Simulations and data analysis The response of the telescope array to Extensive Air Showers (EAS) induced by gamma-rays and proton background is simulated thanks to two packages: CORSIKA and sim_telarray [7]. Both packages are the standard for CTAO simulations. The former takes care of simulating the development of the shower in the atmosphere; the latter simulates the array response to the shower. For this contribution, gamma-rays have been simulated both for a point-like source with an incoming direction of Zd=20\({}^{\circ}\) and Az=0\({}^{\circ}\) and for a diffuse component with incoming directions isotropically distributed inside a cone of aperture 20\({}^{\circ}\) centered around the same direction as the point-like gammas. This component is used both to train the random forests and to compute the array response to a diffuse source. The parameters related to the array, such as telescope ground positions and camera and telescope types, are taken from an updated version of the Prod5 model [8]. The simulated array configuration contains globally 87 telescopes, of which only 60 are used - 4 LSTs, 14 MSTs and 42 SSTs - in order to analyse a subarray more similar to the so-called Alpha configuration. The main difference is the addition of the LSTs to the configuration. This class of telescopes is not included in the Alpha configuration, but since funding for at least two of them has been allocated, we added them to our simulations. Both point-like and diffuse gamma-rays were simulated. The latter are used to train the energy regression model and the particle classification one, while point-like gammas are used to produce IRFs for point-like sources. The div values considered in this study are the same as those defined in [4], and they are reported in Table 1 together with other relevant values of the simulated configurations. FoV represents the global, geometric Field of View of the configuration, while FoV\({}_{eff}\) represents the area geometrically covered by at least three telescopes. At this stage no trigger information is available, so the information on FoV\({}_{eff}\) is only geometrical. Figure 2 (right) shows, as an example, the geometric FoV and multiplicity for cfg2. The choice of at least three telescopes is arbitrary, but we think it might be a good starting point in order to guarantee a proper shower reconstruction. The values reported show that the effective FoV is enlarged by up to a factor \(\sim\)8 with respect to the parallel mode. The div values chosen allow spanning from the maximum value of m\({}_{ave}\), obtained in parallel mode, to the value set as a threshold (m\({}_{ave}\sim 3\)). The simulations have been analysed using ctapipe, the Python package developed for the processing of CTA low-level data [10][11]. The reconstruction method for divergent mode is already available in ctapipe [12]. From the image recorded by each telescope, a plane is defined by the projection of the shower axis on the camera and the telescope position. Those planes, belonging to a 3D reference frame common to all telescopes, are then intersected pair-wise, and the angle between them is used as a weight for the computation of the final reconstructed direction. This is computed as a weighted average over all pair-wise directions. The strength of this method is that no correction is needed in the direction reconstruction with respect to the parallel pointing case. 
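A bare-bones sketch of this pair-wise intersection scheme (ours, not the ctapipe code) is given below; the intersection line of two planes is the cross product of their normals, and we use the sine of the angle between the planes as a stand-in for the angle weight:

```python
# Pair-wise plane-intersection direction reconstruction sketch.
import numpy as np
from itertools import combinations

def reconstruct_direction(plane_normals):
    """plane_normals: (N, 3) unit normals of the per-telescope shower planes."""
    dirs, weights = [], []
    for n1, n2 in combinations(np.asarray(plane_normals, dtype=float), 2):
        d = np.cross(n1, n2)          # the two planes intersect along this line
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue                  # near-parallel planes: poorly constrained pair
        d = d / norm
        if d[2] < 0:
            d = -d                    # flip to a common (upward) hemisphere
        dirs.append(d)
        weights.append(norm)          # sin of the angle between the planes
    avg = np.average(dirs, axis=0, weights=weights)
    return avg / np.linalg.norm(avg)  # final weighted-average shower direction
```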
The obvious condition is the array performance: sensitivity, effective area, angular resolution, gamma-hadron separation, point-spread function, ecc. Among possible criteria there are the number of triggered and reconstructed telescopes at various energies and the number of truncated images we obtain for the different configurations. Both studies are at the moment in a preliminary phase. The results reported here refer to a point-like gamma-ray sources. The performance for diffuse sources will also be computed but in this case some of the symmetry assumptions that are valid for parallel pointing might not be satisfied anymore when dealing to divergent pointing. This means that radial symmetry in the FoV must be checked for effective Figure 3: **Left:** Preliminary angular resolution for cfg2. **Right:** Preliminary energy bias and resolution for cfg2 area, energy dispersion and PSF. The performance presented here is obtained for the configuration named cfg2, see Table 1. As can be seen from Figure 3 and Figure 4 the performance of this mildly divergent configuration is in line with CTAO requirements. This is a promising result, telling us that, at least up to the divergent value here analyzed, the expected drop in the array performance is not that intense. This condition confirms that divergent pointing is a suitable observational strategy. The expected behaviour for more divergent configurations is a growing worsening in the performance. Since the pipeline for data analysis has been recently changed only one configuration has been analysed at the moment, in order to make sure that everything is working properly code-wise. The analysis and performance plots of the other configurations will be produced soon. ## 5 Conclusions Divergent pointing is a promising pointing strategy for the Cherenkov Telescope Array Observatory. Its main objective is to increase the array instantaneous FoV thus reducing the time needed to cover a large area of the sky and increasing the probability to detect transient sources. The drawback of the technique are a reduced angular sensitivity and energy reconstruction capability of the array. The goal is thus to find a set of configurations that allow to exploit the enlarged FoV while maintaining an acceptable performance. With this study the performance of CTAO-South has been analysed for some preliminary configurations. This analysis allowed to test the analysis pipeline, which changed recently and to look into the performance of the array not only at the center of the FoV but also for several offset positions. Only one of the simulated configurations has been analysed at the moment, in the next months the other configurations will be analysed as well. The results obtained so far are promising, since the performance is consistent with the requirements and we don't observe a significant worsening in the array sensitivity. The configurations simulated so far have been selected to better understand the applicability and limits of the method. Next step will be to optimize the configurations to specific science goals. Figure 4: Sensitivity curve for the divergent configuration named cfg2, compared with CTA requirements (black line) and southern array parallel pointing sensitivity (orange). The latter is referred to alpha configuration, where no LSTs are included. ## 6 Acknowledgments This work was conducted in the context of the CTA Consortium and CTA Observatory. 
We gratefully acknowledge financial support from the agencies and organizations listed here: [https://www.cta-observatory.org/consortium_acknowledgments/](https://www.cta-observatory.org/consortium_acknowledgments/)
2309.07634
Coupling Constants as Conserved Charges in Black Hole Thermodynamics
In a generic theory of gravity coupled to matter fields, the Smarr formula for black holes does not work properly if the contributions of the coupling constants defining the theory are not incorporated. However, these couplings, such as the cosmological constant or the dimensionful parameters that appear in the Lagrangian, are fixed parameters defining the theory, and they cannot be varied. Here, we present a robust method, applicable to any covariant Lagrangian, that upgrades the role of the couplings from being constants in the theory to being free parameters of the solutions. To this end, for each one of the couplings in a theory, a pair of auxiliary scalar and gauge fields is introduced. The couplings are shown to be conserved charges of the global part of the implemented gauge symmetry. Besides, their conjugate chemical potentials are defined as the electric potential of the corresponding gauge fields on the black hole horizon. Using this method, we systematically extend the first law and the Smarr formula by coupling conserved charges and their conjugate potentials. The thermodynamics of a black hole solution in a quadratic gravity theory is given as an example.
Kamal Hajian, Bayram Tekin
2023-09-14T11:58:09Z
http://arxiv.org/abs/2309.07634v2
###### Abstract In a generic theory of gravity coupled to matter fields, the Smarr formula and the first law of thermodynamics of black holes do not work properly if the contributions of the coupling constants defining the theory are not incorporated. However, these couplings, such as the cosmological constant or the dimensionful parameters that appear in the Lagrangian, are fixed parameters defining the theory, and they cannot be varied. Here, we present a robust method, applicable to any covariant Lagrangian, that changes the role of the couplings from constants in the theory to free parameters in the solutions. To this end, for each one of the couplings in a theory, a pair of auxiliary scalar and gauge fields is introduced. The couplings are shown to be conserved charges of the global part of the implemented gauge symmetry. Besides, their conjugate chemical potentials are defined as the electric potential of the corresponding gauge fields on the black hole horizon. Using this method, we systematically extend the first law and the Smarr formula by coupling conserved charges and their conjugate potentials. The thermodynamics of a black hole solution in a quadratic gravity theory is given as an example. **Coupling Constants as Conserved Charges** **in Black Hole Thermodynamics** Kamal Hajian\({}^{\dagger}\)\({}^{*}\)12 and Bayram Tekin\({}^{*}\)3 Footnote 1: [email protected] Footnote 2: [email protected] Footnote 3: [email protected] \({}^{\dagger}\)_Institute of Physics, University of Oldenburg, P.O.Box 2503, D-26111 Oldenburg, Germany_ \({}^{*}\)_Department of Physics, Middle East Technical University, 06800, Ankara, Turkey_ ## 1 Introduction In the last decades, the cosmological constant \(\Lambda\)[1] has been the focus of astronomical observations [2, 3] as well as theoretical research such as the AdS/CFT correspondence [4, 5], black hole thermodynamics [6, 7, 8], and black hole chemistry [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. It has been understood in the black hole chemistry context that its variation has to be included in the black hole first law in order to have a consistent Smarr formula [23]. However, by fiat, it is a constant in the Lagrangian and not a property of a particular solution (such as the mass or the charge or angular momentum of the solution). Thus, it is not clear how it can be varied to fit into the first law of black hole thermodynamics. Besides, its conjugate chemical potential lacks a firm geometrical prescription as a volume. As a remedy, the Lagrangian can be modified by an auxiliary gauge field [24, 25, 26, 27] such that \(\Lambda\) changes its role to be the conserved charge of the induced gauge symmetry and hence a parameter in the solution. In addition, its conjugate is defined naturally as the horizon gauge potential [28, 29]. The Smarr formula is democratic: the coupling constants other than \(\Lambda\) enter on the same footing, and it cannot be satisfied unless the other couplings are also included in black hole thermodynamics. This suggests a general formulation that can capture all couplings as conserved charges in black hole physics. In this paper, we provide a modification of the Lagrangian that can put black hole chemistry with all couplings on firm ground. The paper is organized as follows: In the next section, pairs of auxiliary fields are introduced in the Lagrangian, which allows one to interpret the coupling constants as parameters in the solutions. 
In Section 3, we show that the couplings are conserved charges. Section 4 is devoted to introducing conjugate chemical potentials. The first law of black hole thermodynamics and the Smarr formula are extended in Section 5. In the last section, a clarifying example, which is a well-studied 3D higher curvature gravity, is given to show the validity of our approach. ## 2 Couplings as solution parameters In a \(D\)-dimensional space-time, we consider the action \[I=\int\mathrm{d}^{D}x\sqrt{-g}\mathcal{L},\qquad\mathcal{L}=\mathcal{L}_{0}- \sum_{i}\alpha_{i}\mathcal{L}_{i}, \tag{1}\] where \(\mathcal{L}_{0}\) is the Lagrangian that includes the kinetic term and comes with no coupling constant, such as the Einstein-Hilbert term (Newton's constant is set to unity, or it multiplies the whole action); and the \(\mathcal{L}_{i}\)'s are some number of other Lagrangian densities, labeled by the index \(i\), that are coupled with the corresponding coupling constants \(\alpha_{i}\). The densities can be, e.g., the (cosmological) constant term, higher curvatures, scalar tensor theories, Maxwell gauge fields, and in general, any covariant term that is built from the metric \(g_{\mu\nu}\), curvature \(R^{\alpha}_{\ \beta\mu\nu}\), covariant derivative \(\nabla_{\mu}\), and other dynamical fields in the theory. The action (1) can be conventionally re-written in terms of the volume \(D\)-form \(\boldsymbol{\epsilon}\), i.e., \[\boldsymbol{\epsilon}=\frac{\sqrt{-g}}{D!}\epsilon_{\mu_{1}\ldots\mu_{D}} \mathrm{d}x^{\mu_{1}}\wedge\cdots\wedge\mathrm{d}x^{\mu_{D}},\qquad I=\int \mathbf{L}=\int\left(\mathbf{L}_{0}-\sum_{i}\alpha_{i}\mathbf{L}_{i}\right), \tag{2}\] where we defined the \(D\)-forms as \[\mathbf{L}\equiv\mathcal{L}\boldsymbol{\epsilon},\qquad\mathbf{L}_{0}\equiv \mathcal{L}_{0}\boldsymbol{\epsilon},\qquad\mathbf{L}_{i}\equiv\mathcal{L}_{i }\boldsymbol{\epsilon} \tag{3}\] with \(\epsilon_{01\ldots D-1}=+1\) for the Levi-Civita symbol. Variation of \(\mathbf{L}\) with respect to all dynamical fields, collectively denoted by \(\Phi(x)\) including the metric \(g_{\mu\nu}\), yields \[\delta\mathbf{L}=\mathbf{E}^{\Phi}\delta\Phi+\mathrm{d}\boldsymbol{\Theta} \tag{4}\] in which the summation convention over the fields should be understood. Setting \(\delta\mathbf{L}=0\) yields the field equations \(\mathbf{E}^{\Phi}=0\) associated with each one of the fields in the set \(\Phi\); appropriate boundary conditions must also be provided for the well-posedness of the problem. We note that \(\mathbf{E}^{\Phi}\) and \(\mathbf{\Theta}\) are linear in terms of the Lagrangian components: namely \[\mathbf{E}^{\Phi}\delta\Phi=\left(\mathbf{E}_{0}^{\Phi}-\sum_{i}\alpha_{i} \mathbf{E}_{i}^{\Phi}\right)\delta\Phi,\qquad\mathbf{\Theta}=\mathbf{\Theta}_{0} -\sum_{i}\alpha_{i}\mathbf{\Theta}_{i}. \tag{5}\] It is possible to introduce pairs of auxiliary fields in the Lagrangian in order to promote the \(\alpha_{i}\)'s to be free parameters of the solution, not of the theory. Each pair, labeled also by the index \(i\), is composed of one scalar field denoted by \(\alpha_{i}(x)\) and one \((D-1)\)-form gauge field \(\mathbf{A}_{i}\). The field strength \(\mathbf{F}_{i}\equiv\mathrm{d}\mathbf{A}_{i}\) is a top-form that is invariant under the gauge transformation \[\mathbf{A}_{i}\rightarrow\mathbf{A}_{i}+\mathrm{d}\boldsymbol{\lambda}_{i}. 
Equipped with the auxiliary field pairs, a given action \(I\) in (2) can be modified to an extended action \(\tilde{I}\) as \[\tilde{I}=\int\tilde{\mathbf{L}}\equiv\int\Big{(}\mathbf{L}_{0}-\sum_{i} \alpha_{i}(x)(\mathbf{L}_{i}-\mathbf{F}_{i})\Big{)}. \tag{7}\] This action is symmetric under the gauge transformation (6) since \(\mathbf{F}_{i}\) is gauge invariant by construction. Besides, it reproduces the dynamics of the fields in the original action (2) on-shell. To see this, we denote the collection of the fields \(\{\Phi,\alpha,\mathbf{A}_{i}\}\) by \(\tilde{\Phi}\). Then, the variation of (7) followed by the standard integration by parts gives \[\delta\tilde{\mathbf{L}}=\mathbf{E}^{\tilde{\Phi}}\delta\tilde{\Phi}+\mathrm{ d}\tilde{\mathbf{\Theta}}, \tag{8}\] in which the modified variations read as \[\mathbf{E}^{\tilde{\Phi}}\delta\tilde{\Phi}=\mathbf{E}_{0}^{\Phi}\ \delta\Phi-\sum_{i}\left[\Big{(}\alpha_{i}(x)\mathbf{E}_{i}^{\Phi}-\mathrm{d} \alpha_{i}(x)\frac{\partial\mathbf{L}_{i}}{\partial(\mathrm{d}\Phi)}\Big{)} \delta\Phi+(\mathbf{L}_{i}-\mathbf{F}_{i})\delta\alpha_{i}(x)+\mathrm{d}\alpha _{i}(x)\delta\mathbf{A}_{i}\right], \tag{9}\] \[\tilde{\mathbf{\Theta}}=\mathbf{\Theta}_{0}-\sum_{i}\alpha_{i}(x)\mathbf{ \Theta}_{i}+\sum_{i}\alpha_{i}(x)\delta\mathbf{A}_{i}. \tag{10}\] For clarity, the \(x\)-dependency of the scalar fields \(\alpha_{i}\) is shown explicitly. In order to find the field equations by the action principle \(\delta\tilde{\mathbf{L}}=0\), the coefficients of \(\delta\Phi\), \(\delta\alpha_{i}(x)\), and \(\delta\mathbf{A}_{i}\) in (8) should vanish independently. The equations that arise from the last two terms yield the on-shell relations \[\mathbf{F}_{i}=\mathbf{L}_{i},\qquad\mathrm{d}\alpha_{i}(x)=0, \tag{11}\] respectively. The last equality above implies \[\alpha_{i}(x)=\mathrm{const.}, \tag{12}\] which means that the \(\alpha_{i}\)'s are _free solution parameters_ that are constant over space-time. Inserting this crucial result into the overall coefficient of \(\delta\Phi\) in (8), the original equations of motion in (5) are recovered. The argument above shows that, as far as the dynamics of the fields \(\Phi\) are concerned, one can use the Lagrangians \(\mathbf{L}\) and \(\tilde{\mathbf{L}}\) interchangeably. However, the main advantage of \(\tilde{\mathbf{L}}\) is to promote the coupling constants \(\alpha_{i}\) in \(\mathbf{L}\) to free parameters of the solutions \(\tilde{\Phi}\). This vantage point is important for the rest of this paper, where we need the variations of the \(\alpha_{i}\)'s as solution parameters to be considered in the first law of black hole thermodynamics. Interestingly, one can go further and show that not only can the \(\alpha_{i}\)'s be considered as free parameters in the solutions, but they are also conserved charges, as we discuss next.
## 3 Coupling constants as conserved charges
Motivated by the analysis in [28, 29], where the cosmological constant was re-interpreted as a conserved charge, this section is devoted to showing that in a theory described by the action \(I\) in (2), or equivalently by \(\tilde{I}\) in (7), the coupling constants, i.e., the solution parameters \(\alpha_{i}\) in (12), are conserved charges associated with the _global_ part of the gauge symmetries in (6). For this purpose, the "covariant phase space" formulation of charges, also known as the Iyer-Wald formulation [30, 31, 32, 33] (initiated and followed in [34, 35, 36, 37]), is apt.
The formalism is reviewed, e.g., in [38, 39, 40] and applied to various theories (e.g., see [41]). Let us first focus on the action \(I\) in (2). In the covariant phase space method, the symplectic current \(\boldsymbol{\omega}\) is defined by taking an exterior derivative of \(\boldsymbol{\Theta}\) on the field configuration space, i.e., \[\boldsymbol{\omega}(\delta_{1}\Phi,\delta_{2}\Phi,\Phi)=\delta_{1}\boldsymbol {\Theta}(\delta_{2}\Phi,\Phi)-\delta_{2}\boldsymbol{\Theta}(\delta_{1}\Phi, \Phi)\,. \tag{13}\] If the fields \(\Phi\) and their variations \(\delta\Phi\) satisfy the field equations and their linearized versions respectively, then the symplectic current is locally conserved, i.e., \(\mathrm{d}\boldsymbol{\omega}=0\). As a result, it is possible to define the symplectic 2-form \[\Omega(\delta_{1}\Phi,\delta_{2}\Phi,\Phi)\equiv\int_{\Sigma}\boldsymbol{ \omega}(\delta_{1}\Phi,\delta_{2}\Phi,\Phi)\,, \tag{14}\] which makes the field configuration space a phase space. The hypersurface \(\Sigma\) is a Cauchy surface, and the result in (14) is independent of its choice by the conservation of \(\boldsymbol{\omega}\) and the appropriate boundary conditions. Having the symplectic form in hand, one can associate a charge variation \(\delta H_{\epsilon}\) to a symmetry generator \(\epsilon\equiv\{\xi^{\mu},\lambda\}\) that is composed of a diffeomorphism \(x^{\mu}\to x^{\mu}-\xi^{\mu}\) and some Maxwell (or Yang-Mills) gauge transformation \(A_{\mu}\to A_{\mu}+\partial_{\mu}\lambda\). The charge variation is defined as \(\delta H_{\epsilon}\equiv\delta_{\epsilon}\Phi\cdot\Omega\), which yields \[\delta H_{\epsilon}(\Phi)=\int_{\Sigma}\big{(}\delta\boldsymbol{\Theta}( \delta_{\epsilon}\Phi,\Phi)-\delta_{\epsilon}\boldsymbol{\Theta}(\delta\Phi, \Phi)\big{)}=\int_{\Sigma}\big{(}\delta\boldsymbol{\Theta}(\delta_{\epsilon }\Phi,\Phi)-\mathcal{L}_{\xi}\boldsymbol{\Theta}(\delta\Phi,\Phi)\big{)}, \tag{15}\] where in the last equation the gauge invariance of \(\boldsymbol{\Theta}\) is used. By the Cartan identity, \(\mathcal{L}_{\xi}\boldsymbol{\Theta}=\xi\cdot\mathrm{d}\boldsymbol{\Theta}+ \mathrm{d}(\xi\cdot\boldsymbol{\Theta})\), and the on-shell relation \(\mathrm{d}\boldsymbol{\Theta}=\delta\mathbf{L}\), the charge variation in (15) is equal to \[\int_{\Sigma}\Big{(}\delta(\boldsymbol{\Theta}(\delta_{\epsilon}\Phi,\Phi)- \xi\cdot\mathbf{L})-\mathrm{d}\big{(}\xi\cdot\boldsymbol{\Theta}(\delta\Phi, \Phi)\big{)}\Big{)}=\int_{\Sigma}\mathrm{d}\big{(}\delta\mathbf{Q}_{\epsilon} (\Phi)-\xi\cdot\boldsymbol{\Theta}(\delta\Phi,\Phi)\big{)}. \tag{16}\] The last equality follows from the celebrated Noether current \[\mathbf{J}_{\epsilon}=\boldsymbol{\Theta}(\delta_{\epsilon}\Phi,\Phi)-\xi \cdot\mathbf{L}(\Phi),\qquad\mathrm{d}\mathbf{J}_{\epsilon}=0\quad\Rightarrow \quad\mathbf{J}_{\epsilon}=\mathrm{d}\mathbf{Q}_{\epsilon}, \tag{17}\] in which the Poincaré lemma is used to introduce \(\mathbf{Q}\) as the Noether charge density, and \(\delta\mathrm{d}=\mathrm{d}\delta\) was also used. By Stokes' theorem, the last term in (16) can be written as a surface integral \[\delta H_{\epsilon}=\oint_{\partial\Sigma}\mathbf{k}_{\epsilon}\,,\qquad\qquad\mathbf{k} _{\epsilon}(\delta\Phi,\Phi)\equiv\delta\mathbf{Q}_{\epsilon}(\Phi)-\xi\cdot \mathbf{\Theta}(\delta\Phi,\Phi)\,. \tag{18}\]
Notice that, similar to the field equations and \(\mathbf{\Theta}\) in (5), \(\mathbf{k}\) is also linear in the Lagrangian components of an arbitrary action; e.g., for the action \(I\) in (2), \[\mathbf{k}=\mathbf{k}_{0}-\sum_{i}\alpha_{i}\mathbf{k}_{i}. \tag{19}\] Now, we are ready to follow these steps verbatim, this time for the action \(\tilde{I}\) in (7). Considering the additional gauge symmetry in (6), the charge generator \(\epsilon\) is extended to capture this feature, namely \[\tilde{\epsilon}\equiv\{\xi^{\mu},\lambda,\{\mathbf{\lambda}_{i}\}\} \tag{20}\] for the set of gauge generators \(\{\mathbf{\lambda}_{i}\}\). Then, replacing \(\mathbf{L}\rightarrow\tilde{\mathbf{L}}\) and \(\mathbf{\Theta}\rightarrow\tilde{\mathbf{\Theta}}\) in (17) and (18), with multiple uses of the on-shell condition (12), we find \[\delta H_{\tilde{\epsilon}}=\oint_{\partial\Sigma}\tilde{\mathbf{k}}_{\tilde{\epsilon}}\,, \qquad\qquad\tilde{\mathbf{k}}_{\tilde{\epsilon}}=\mathbf{k}_{\epsilon}+\sum_{i} \big{(}\xi\cdot\mathbf{A}_{i}\delta\alpha_{i}+\delta(\alpha_{i}\mathbf{\lambda}_{ i})\big{)}. \tag{21}\] This relation can be used to calculate the charges of the diffeomorphism and gauge symmetries, and we will use it to find the mass, angular momentum, entropy, and other black hole charges in the last section. However, here we focus on a very specific symmetry: the global part of the gauge transformation (6). The generator that we choose is proportional to \(\{0,0,\{\mathbf{\lambda}_{i}\}\}\) with only one non-zero gauge generator, call it \(\mathbf{\lambda}_{j}\), that satisfies \(\mathrm{d}\mathbf{\lambda}_{j}=0\). The rest of the gauge generators in \(\{\mathbf{\lambda}_{i}\}\), i.e., all \(\mathbf{\lambda}_{i}\) for \(i\neq j\), vanish. To fix the normalization of the generator, we can divide it by the factor \(|\mathbf{\lambda}_{j}|\equiv\oint_{\partial\Sigma}\mathbf{\lambda}_{j}\), which is a constant, i.e., independent of the arbitrarily chosen \(\partial\Sigma\) as well as of \(x^{\mu}\) (as explained below). So, we define \[\hat{\mathbf{\lambda}}_{j}=\frac{\mathbf{\lambda}_{j}}{|\mathbf{\lambda}_{j}|},\qquad \tilde{\epsilon}_{j}\equiv\{0,0,\hat{\mathbf{\lambda}}_{j}\}. \tag{22}\] For such generators, which are solely composed of the gauge transformations (6), the only relevant part of the charge variation is the last term in (21). So, \[\delta H_{\tilde{\epsilon}_{j}}=\oint_{\partial\Sigma}\delta(\alpha_{j}\hat{ \mathbf{\lambda}}_{j})=\delta\alpha_{j}, \tag{23}\] where the last equation is a result of the on-shell \(x^{\mu}\)-independence of the \(\alpha_{i}\), \(\delta\hat{\mathbf{\lambda}}_{j}=0\), and the normalization convention in (22). This result is one of the main achievements of this paper, because it shows that the coupling \(\alpha_{j}\) is the conserved charge \(H_{\tilde{\epsilon}_{j}}\), \[H_{\tilde{\epsilon}_{j}}=\alpha_{j}. \tag{24}\] In order to complete the argument, we clarify why \(\oint_{\partial\Sigma}\boldsymbol{\lambda}_{j}\) is a constant if \(\mathrm{d}\boldsymbol{\lambda}_{j}=0\). By the last term in (21), the charge variation \(\delta H\) for the generator \(\boldsymbol{\lambda}_{j}\) is proportional to this surface integral. However, the transformation is an exact symmetry, i.e., \(\delta_{\boldsymbol{\lambda}_{j}}\tilde{\Phi}=0\), whose symplectic current \(\boldsymbol{\omega}\) vanishes. This feature makes the charges not only conserved, i.e., independent of a time coordinate, but also independent of the choice of \(\partial\Sigma\).
The remaining \(D-2\) coordinates that parameterize \(\partial\Sigma\) are integrated out, so no space-time or \(\partial\Sigma\) dependency remains.
## 4 Conjugate chemical potentials for the coupling constants
The electric charge in Maxwell's electrodynamics is the charge of the global part of the \(U(1)\) gauge symmetry \(A\to A+\mathrm{d}\lambda\) with \(\mathrm{d}\lambda=0\). For a black hole, its conjugate chemical potential is the electric potential, i.e., \(\,\varPhi_{{}_{\rm H}}\equiv\xi_{{}_{\rm H}}\cdot A\) calculated on the event horizon, in which \(\xi_{{}_{\rm H}}\) is the horizon generating Killing vector field. Motivated by this potential, for the action \(\tilde{I}\) in (7), which has the gauge fields \(\mathbf{A}_{i}\) with associated conserved charges \(\alpha_{i}\), we can define their conjugate chemical potentials on the event horizon as \[\varPsi_{{}_{\rm H}}^{i}\equiv\oint_{\rm H}\xi_{{}_{\rm H}}\cdot\mathbf{A}_{i}. \tag{25}\] Such a definition of a chemical potential for a coupling constant was first introduced in the context of upgrading the cosmological constant to a conserved charge in [28]. It has been proven to reproduce the conjugate thermodynamic volume and has been examined in various examples [29]. In the next section, we use the coupling constants as conserved charges (23) and their conjugates (25) to extend the first law of black hole thermodynamics.
## 5 Extension of the first law of black hole thermodynamics
Let us consider a stationary black hole in the coordinates where the horizon generating Killing vector is given as \(\xi_{{}_{\rm H}}=\partial_{t}+\varOmega^{n}\partial_{\varphi^{n}}\), in which \(n\) runs over the axial isometries, and the \(\varOmega^{n}\) are the corresponding horizon angular velocities. In [31, 32], Iyer and Wald showed that the entropy of non-extremal black holes is a conserved charge of this vector normalized by the Hawking temperature \(T_{{}_{\rm H}}=\frac{\kappa_{{}_{\rm H}}}{2\pi}\)[8], where \(\kappa_{{}_{\rm H}}\) is the surface gravity of the Killing horizon H. Later, in [42, 43], an infinite number of horizon Killing vectors whose charges are the entropy of extremal black holes were found in their near-horizon region. However, in the presence of electromagnetic gauge fields, integrability shows that the proposed Killing vectors (both for extremal and non-extremal black holes) miss a contribution from the gauge fields. In [44, 45] it was shown that, in order to have integrable and gauge- as well as diffeomorphism-invariant conserved charges, their vector field generator should be augmented by some gauge transformations (the reader is invited to read [46, 47, 48, 49] for reviews and applications). Here, we focus on the action \(\tilde{I}\) in (7), which, in addition to a possible Maxwell field \(A_{\mu}{\rm d}x^{\mu}\), also has the auxiliary gauge fields \({\bf A}_{i}\). Then, in appropriately chosen gauges, the generator of the integrable entropy is \[\tilde{\epsilon}_{{}_{S}}=\frac{2\pi}{\kappa_{{}_{\rm H}}}\{\xi_{{}_{\rm H}},- \,\varPhi_{{}_{\rm H}},\{-\,\varPsi^{i}_{{}_{\rm H}}\hat{\boldsymbol{\lambda}}_ {i}\}\}. \tag{26}\] Let us denote the symplectic symmetry generators of the mass \(M\), angular momenta \(J_{n}\), and electric charge \(Q\) by \(\tilde{\epsilon}_{{}_{M}}=\{\partial_{t},0,\{0\}\}\), \(\tilde{\epsilon}_{{}_{J_{n}}}=\{-\partial_{\varphi^{n}},0,\{0\}\}\) and \(\tilde{\epsilon}_{{}_{Q}}=\{0,1,\{0\}\}\), respectively.
They are assumed to be "_exact_" symmetries, i.e., they satisfy \(\delta_{\tilde{\epsilon}}\tilde{\Phi}=0\). Then, (26) reads \[\frac{\kappa_{{}_{\rm H}}}{2\pi}\tilde{\epsilon}_{{}_{S}}=\tilde{\epsilon}_{{} _{M}}-\varOmega^{n}_{{}_{\rm H}}\tilde{\epsilon}_{{}_{J_{n}}}-\,\varPhi_{{}_{ \rm H}}\tilde{\epsilon}_{{}_{Q}}-\sum_{i}\varPsi^{i}_{{}_{\rm H}}\tilde{ \epsilon}_{i}, \tag{27}\] where the definition of \(\tilde{\epsilon}_{i}\) in (22) was also used. However, by the linearity of charge variations, \(\delta H_{a\tilde{\epsilon}_{1}+b\tilde{\epsilon}_{2}}=a\,\delta H_{\tilde{ \epsilon}_{1}}+b\,\delta H_{\tilde{\epsilon}_{2}}\) in (21), the first law of black hole thermodynamics is derived as \[T_{{}_{\rm H}}\delta S=\delta M-\varOmega^{n}_{{}_{\rm H}}\delta J_{n}-\, \varPhi_{{}_{\rm H}}\delta Q-\sum_{i}\varPsi^{i}_{{}_{\rm H}}\delta\alpha_{i}. \tag{28}\] This method of proving the first law was first introduced in [44], where the non-extended version of (28) was derived directly from the local identity (27), i.e., without addressing the surfaces of integration on the horizon and at infinity (which are used in the Iyer-Wald proof of the first law). It works because the generators in (27) are all "_exact_" symmetries, and as a result, their charge variations in (21) are independent of the surface of integration [50]. By dimensional analysis and a scaling argument, the Smarr formula can be deduced from the first law (see, e.g., [18, 51]). The same analysis for the extended first law yields \[(D-3)M=(D-2)T_{{}_{\rm H}}S+(D-2)\varOmega^{n}_{{}_{\rm H}}J_{n}+(D-3)\, \varPhi_{{}_{\rm H}}Q+k^{(i)}\,\varPsi^{i}_{{}_{\rm H}}\alpha_{i}. \tag{29}\] Details of the derivation can be found in Section 6 of Ref. [29]. The factor \(k^{(i)}\) is the scaling weight of \(\alpha_{i}\), i.e., if the length \(l\) is scaled by a factor \(z\) as \(l\to z\times l\), then \(\alpha_{i}\to z^{k^{(i)}}\times\alpha_{i}\).
## 6 Example: Thermodynamics of the rotating BTZ black hole in the New Massive Gravity
In [29], some black hole solutions were studied for which the Smarr formula is not satisfied if the contribution of the couplings is not included. Here, we provide one of them as an example: the rotating BTZ black hole solution of the New Massive Gravity (NMG) theory [52] in the coordinates \(x^{\mu}=(t,r,\varphi)\)[53, 54] \[\mathcal{L}=\frac{1}{16\pi}\left(R-2\Lambda-\beta\left(\frac{3}{8}R^{2}-R_{ \mu\nu}R^{\mu\nu}\right)\right). \tag{30}\] The metric is given as \[{\rm d}s^{2}=-\Delta{\rm d}t^{2}+\frac{{\rm d}r^{2}}{\Delta}+r^{2}({\rm d} \varphi-\omega{\rm d}t)^{2},\qquad\Delta\equiv-m+\frac{r^{2}}{\ell^{2}}+\frac{ j^{2}}{4r^{2}},\qquad\omega\equiv\frac{j}{2r^{2}}, \tag{31}\] for \(\Lambda=\frac{-1}{\ell^{2}}+\frac{\beta}{4\ell^{4}}\), \(\Lambda<0\) and \(\beta>0\). The black hole outer and inner horizons are at the radii \(r_{+}\) and \(r_{-}\), which satisfy \(2r_{\pm}^{2}=\ell^{2}(m\pm\sqrt{m^{2}-j^{2}/\ell^{2}})\), where \(m\) and \(j\) are free parameters of the solution. The thermodynamic properties of this solution are [53, 55] \[M=(1+\frac{\beta}{2\ell^{2}})\frac{m}{8},\qquad J=(1+\frac{ \beta}{2\ell^{2}})\frac{j}{8},\] \[\Omega_{\pm}=\frac{r_{\mp}}{\ell r_{\pm}},\qquad T_{\pm}=\frac{r_ {\pm}^{2}-r_{\mp}^{2}}{2\pi\ell^{2}r_{\pm}},\qquad S_{\pm}=(1+\frac{\beta}{2 \ell^{2}})\frac{\pi r_{\pm}}{2}, \tag{32}\] with the horizon Killing vectors \(\xi_{\pm}=\partial_{t}+\Omega_{\pm}\partial_{\varphi}\). The mass and angular momentum follow from the general construction of conserved charges in higher derivative theories of gravity [56, 57].
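Incidentally, for this example the scaling weights \(k^{(i)}\) entering (29) follow at once from dimensional analysis: under \(l\to z\,l\) one has \(\Lambda\to z^{-2}\Lambda\) and \(\beta\to z^{2}\beta\), so \(k^{(\Lambda)}=-2\) and \(k^{(\beta)}=+2\). With \(D=3\), the mass and electric-charge terms in (29) drop out, and the Smarr formula reduces to \[0=T_{{}_{\rm H}}S+\varOmega_{{}_{\rm H}}J-2\,\varPsi^{\Lambda}_{{}_{\rm H}} \Big{(}\frac{\Lambda}{8\pi}\Big{)}+2\,\varPsi^{\beta}_{{}_{\rm H}}\Big{(}\frac {\beta}{16\pi}\Big{)},\] in the normalization of the coupling charges used in (40)-(41) below.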
It is easy to check that the quantities in (32) do not satisfy the Smarr formula if the last term in (29) is omitted. Examples such as this one led us to the current attempt to rescue the Smarr formula. To remedy this issue, we can apply the procedure described above. The couplings \(\Lambda\) and \(\beta\) are promoted to the scalars \(\Lambda(x)\) and \(\beta(x)\), and their paired field strengths \(F_{{}_{\!\Lambda}}(x)\) and \(F_{{}_{\!\beta}}(x)\) (which are related to \({\bf F}_{{}_{\!\Lambda}}(x)\) and \({\bf F}_{{}_{\!\beta}}(x)\) in (7) by a Hodge dual transformation) are implemented in the Lagrangian (30): \[\tilde{\mathcal{L}}=\frac{1}{16\pi}\left(R-2\Lambda(x)\Big{(}1-F_{{}_{\! \Lambda}}(x)\Big{)}-\beta(x)\Big{(}\frac{3}{8}R^{2}-R_{\mu\nu}R^{\mu\nu}-F_{{} _{\!\beta}}(x)\Big{)}\right). \tag{33}\] It is clear that there is a conventional normalization in defining the couplings; e.g., instead of \(\Lambda\) one can consider \(\frac{\Lambda}{8\pi}\) as the coupling. Nonetheless, as expected, such a convention does not affect the physical thermodynamic laws, because these factors are compensated in the conjugate potentials. By variation of the Lagrangian with respect to the new pairs of fields, the equations of motion in (11) are derived, which imply the following on-shell relations: \[F_{{}_{\!\Lambda}}(x)=1,\qquad F_{{}_{\!\beta}}(x)=\frac{3}{8}R^{2}-R_{\mu\nu }R^{\mu\nu}, \tag{34}\] as well as the constancy of \(\Lambda\) and \(\beta\). Therefore, \[{\bf F}_{{}_{\!\Lambda}}(x) =\boldsymbol{\epsilon}=\sqrt{-g}\ {\rm d}t\wedge{\rm d}r\wedge{\rm d}\varphi=r\ {\rm d}t\wedge{\rm d}r\wedge{\rm d}\varphi, \tag{35}\] \[{\bf F}_{{}_{\!\beta}}(x) =\Big{(}\frac{3}{8}R^{2}-R_{\mu\nu}R^{\mu\nu}\Big{)}\boldsymbol{ \epsilon}=\frac{3r}{2\ell^{4}}\ {\rm d}t\wedge{\rm d}r\wedge{\rm d}\varphi. \tag{36}\] The gauge fields whose field strengths are calculated above are \[{\bf A}_{{}_{\!\Lambda}}(x) =-\Big{(}\frac{r^{2}}{2}-\frac{\beta m\ell^{2}}{2\beta-4\ell^{2}} \Big{)}{\rm d}t\wedge{\rm d}\varphi, \tag{37}\] \[{\bf A}_{{}_{\!\beta}}(x) =-\Big{(}\frac{3r^{2}}{4\ell^{4}}-\frac{m(\beta-4\ell^{2})}{4 \ell^{2}(\beta-2\ell^{2})}\Big{)}{\rm d}t\wedge{\rm d}\varphi. \tag{38}\] Notice that the second term in each one of the parentheses is a pure gauge, which can be fixed by different methods; e.g., we have required that the integrability of the black hole charges be respected by the new contribution \(\xi\cdot{\bf A}_{i}\delta\alpha_{i}\) in (21) (\({\mathbf{k}}_{\epsilon}\) can be found in [29, 47]). Now, we can insert the gauge fields into (25) to find the chemical potentials, \[\Psi^{\Lambda}_{\pm}=-\pi\Big{(}r_{\pm}^{2}-\frac{\beta m\ell^{2}}{\beta-2\ell^ {2}}\Big{)},\qquad\Psi^{\beta}_{\pm}=-\pi\Big{(}\frac{3r_{\pm}^{2}}{2\ell^{4}} -\frac{m(\beta-4\ell^{2})}{2\ell^{2}(\beta-2\ell^{2})}\Big{)}. \tag{39}\] One can check that the first law and the Smarr formula are satisfied for each one of the horizons as \[\delta M=T_{\pm}\delta S_{\pm}+\varOmega_{\pm}\delta J+\,\Psi^{ \Lambda}_{\pm}\delta(\frac{\Lambda}{8\pi})+\,\Psi^{\beta}_{\pm}\delta(\frac{ \beta}{16\pi}), \tag{40}\] \[0=T_{\pm}S_{\pm}+\varOmega_{\pm}J-2\,\Psi^{\Lambda}_{\pm}( \frac{\Lambda}{8\pi})+2\,\Psi^{\beta}_{\pm}(\frac{\beta}{16\pi}), \tag{41}\] respectively. The numerical factors \(\frac{1}{8\pi}\) and \(\frac{1}{16\pi}\) in the coupling charges are conventional, and come from how \(\alpha_{i}\) and \(\mathcal{L}_{i}\) are defined from the combination \(\alpha_{i}\mathcal{L}_{i}\) in the Lagrangian (33).
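The check is purely algebraic, and (40)-(41) can be verified symbolically from (31), (32) and (39). A minimal SymPy sketch (our own notation; the outer horizon is shown, the inner case is analogous) treats \((r_{+},r_{-},\ell,\beta)\) as independent solution parameters:

```python
import sympy as sp

rp, rm, ell, beta = sp.symbols("r_p r_m ell beta", positive=True)

# Solution parameters of the rotating BTZ black hole in NMG, eqs. (31)-(32)
m = (rp**2 + rm**2) / ell**2          # from 2 r_pm^2 = ell^2 (m pm sqrt(...))
j = 2 * rp * rm / ell
Lam = -1 / ell**2 + beta / (4 * ell**4)
pref = 1 + beta / (2 * ell**2)

# Thermodynamic quantities on the outer horizon, eq. (32)
M = pref * m / 8
J = pref * j / 8
Omega = rm / (ell * rp)
T = (rp**2 - rm**2) / (2 * sp.pi * ell**2 * rp)
S = pref * sp.pi * rp / 2

# Chemical potentials conjugate to the couplings, eq. (39)
PsiL = -sp.pi * (rp**2 - beta * m * ell**2 / (beta - 2 * ell**2))
PsiB = -sp.pi * (3 * rp**2 / (2 * ell**4)
                 - m * (beta - 4 * ell**2) / (2 * ell**2 * (beta - 2 * ell**2)))

# Coupling charges in the normalization of eq. (40)
aL, aB = Lam / (8 * sp.pi), beta / (16 * sp.pi)

# Smarr formula (41): should simplify to zero
smarr = T * S + Omega * J - 2 * PsiL * aL + 2 * PsiB * aB
assert sp.simplify(smarr) == 0

# First law (40), checked direction by direction in (r_p, r_m, ell, beta)
for p in (rp, rm, ell, beta):
    lhs = sp.diff(M, p)
    rhs = (T * sp.diff(S, p) + Omega * sp.diff(J, p)
           + PsiL * sp.diff(aL, p) + PsiB * sp.diff(aB, p))
    assert sp.simplify(lhs - rhs) == 0
```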
Independent of this convention, however, the relation \[\Psi^{\Lambda}_{\pm}\delta(\frac{\Lambda}{8\pi})=V_{\pm}\delta P \tag{42}\] reproduces the volume-pressure term in black hole chemistry, in which \(V\) is the "thermodynamic volume" introduced in [18] (note the conventional minus signs that cancel each other in \(V\) and \(P\)). This result is not accidental, and a proof of it can be found in Section 2 of Ref. [29].
## 7 Conclusions
When various dimensionful couplings enter a theory of gravity that has matter fields, a cosmological constant, and higher derivative terms, the first casualties of black hole thermodynamics are the first law and the beautiful Smarr formula, expressing the relations between the variations of the conserved charges and between their total values, respectively. These two laws are simply not valid anymore for black holes in such a generic theory of gravity. One has two options: accept that the Smarr formula and the first law of thermodynamics are an accident of Einstein's gravity, or try to rescue them by upgrading the coupling constants of the theory to conserved charges of the corresponding solution. But this requires a crucial step: the coupling constants are first assumed to be space-time-dependent fields, which are then set to constants as a consequence of the field equations. This can be done by the introduction of auxiliary abelian gauge fields, as described in this work. The vantage point presented here, that is, enlarging the theory by considering the coupling constants as space-time fields that take constant values as a result of the field equations, saves the first law of black hole mechanics and the Smarr formula. We have given a detailed account of this above and applied the new formalism to the rotating BTZ black hole in the New Massive Gravity (a quadratic theory much studied in the last decade).
**Acknowledgements:** KH is thankful to Jutta Kunz for her support, and to the members of the HEP group at IPM for the useful discussions in the weekly meetings. This work has been supported by TUBITAK International Researchers Program No. 2221.
2309.08507
Efficient proton arc optimization and delivery through energy layer pre-selection and post-filtering
Proton arc therapy (PAT) has emerged as a promising approach for improving dose distribution, but also enabling simpler and faster treatment delivery in comparison to conventional proton treatments. However, the delivery speed achievable in proton arc relies on dedicated algorithms, which currently do not generate plans with a clear speed-up and sometimes even result in increased delivery time. This study aims to address the challenge of minimizing delivery time through a hybrid method combining a fast geometry-based energy layer (EL) pre-selection with a dose-based EL filtering. Three methods of EL filtering were developed: the unrestricted method filters the lowest weighted EL, while the SU gap filtering removes the EL around a new SU to minimize the gantry rotation braking. The SU filtering removes the lowest weighted group of EL that includes a SU. These filters were combined with the RayStation dynamic proton arc optimization framework (ELSA). Four bilateral oropharyngeal and four lung cancer patients' data were used for evaluation. Objective function values, target coverage robustness, organ-at-risk doses and NTCP evaluations, as well as comparisons to IMPT plans, were used to assess plan quality. The SU gap filtering algorithm performed best in five out of the eight cases, maintaining plan quality within tolerance while reducing beam delivery time, in particular for the oropharyngeal cohort. It achieved up to approximately 22% and 15% reduction in delivery time for the oropharyngeal and lung treatment sites, respectively. The unrestricted filtering algorithm followed closely. In contrast, the SU filtering showed limited improvement, suppressing one or two SU without substantial delivery time shortening. Robust target coverage was kept within 1% of variation compared to the PAT baseline plan, while organs-at-risk doses slightly decreased or remained about the same for all patients.
S. Wuyckens, V. Wase, O. Marthin, J. Sundström, G. Janssens, E. Borderias-Villarroel, K. Souris, E. Sterpin, E. Engwall, J. A. Lee
2023-09-15T16:10:25Z
http://arxiv.org/abs/2309.08507v2
# Accelerating proton arc therapy delivery through dosimetry-guided energy layer filtering
###### Abstract
_Introduction._ Proton arc therapy has emerged as a promising approach for faster treatment delivery in comparison to conventional proton treatments. However, unexpected prolonged treatment times in several proton arc planning studies have raised concerns. In this study, we aim to address the challenge of minimizing delivery time through dosimetry-guided energy layer filtering, comparing its performance to a baseline approach without filtering. _Approach._ We developed three methods of energy layer (EL) filtering: unrestricted, switch-up (SU), and switch-up gap (SU gap) filtering. The unrestricted method filters the lowest weighted EL, while the SU gap filtering removes in priority the EL around a new SU to minimize the gantry rotation braking. The SU filtering removes the lowest weighted group of EL that includes a SU. These post-processing filters were used in conjunction with the RayStation dynamic proton arc optimization framework (ELSA). Four bilateral oropharyngeal cancer patients' data were used for evaluation. Objective function values, target coverage robustness, organ-at-risk doses and NTCP evaluations, as well as comparisons to IMPT plans, were used to assess plan quality. _Results._ The SU gap filtering algorithm performed best in three out of the four cases, maintaining plan quality within tolerance while significantly reducing beam delivery time. It achieved up to approximately 22% reduction in delivery time. The unrestricted filtering algorithm followed closely. In contrast, the SU filtering showed limited improvement, suppressing only a few SU without substantially shortening the delivery time. Robust target coverage was kept within 1% of variation compared to the ELSA baseline plan, while OAR doses slightly decreased in all four patients. Both arc plans present large reductions in NTCP values for dysphagia and xerostomia compared to the reference IMPT plan. _Significance._ This study provides insights to accelerate proton arc therapy delivery without compromising plan quality. These advancements could enhance treatment efficiency and patient throughput.
Keywords: Proton Therapy; Treatment Planning; Pareto; ELSA; Proton Arc Therapy; Optimization; Delivery time
## 1 Introduction
Over the past decade, proton arc therapy has attracted significant attention in the field of radiation oncology. This treatment technique involves continuous rotation of the gantry while delivering radiation, similar to Volumetric Modulated Arc Therapy (VMAT) in photon therapy [19]. The literature has witnessed a substantial increase in publications related to proton arc therapy between the first one in 1997 [21] and today, with a particular focus on clinical proofs of concept across various disease sites. Additionally, researchers have been developing treatment planning algorithms to effectively utilize the new degrees of freedom brought by the arc delivery. One of the anticipated advantages of proton arc therapy is the potential for faster treatment delivery compared to conventional proton treatments, as was seen with VMAT. Several studies have demonstrated that proton arc therapy can reduce overall treatment time [4, 16, 9, 15].
Importantly, though, reported delivery times are often estimated with rough approximations, relying solely on the total energy layer switching time (ELST), without accounting for the distinction between upward and downward energy switching, likely due to the limitations of earlier delivery systems. The arc timing advantage was actually only evident when ELST fell between 0.5 and 2 seconds, but this assumption for upward energy switching is not realistic in most of the currently available hardware. This makes it challenging to draw definitive conclusions based on these estimates alone. Furthermore, contradictory findings have also been reported, where proton arc therapy exhibited prolonged treatment delivery durations compared to Intensity-Modulated Proton Therapy (IMPT) [7, 25], especially in the discrete arc mode, where the delivery entails a large number of static fields for which the gantry needs to come to a full stop at each discrete direction to deliver a number of stacked energy layers [5]. In the dynamic mode, the prolonged delivery times stem from a complex combination of several factors, including energy layer switching time, spot irradiation, gantry rotation, a large number of layers and spots, and the requirement for precise delivery from the right directions. Consequently, the gantry needs to undergo frequent deceleration and acceleration, contributing to the overall prolonged delivery times. The scientific community has therefore dedicated efforts to devise approaches to minimize the delivery time of proton arc plans. One such approach is to formulate additional objectives to be minimized, which are surrogate measures of the delivery time, while simultaneously optimizing the spot weights within an integrated framework [12, 27, 24, 29]. While the concept is elegant in theory, this approach encounters practical challenges. These include the requirement for simplified mathematical models to estimate beam delivery time, the need for substantial computational resources and suitable solvers, as well as the necessity for careful tuning of objective weights. These limitations hinder its practical use and effectiveness in achieving significant time savings in proton arc therapy planning and delivery. Another approach involves the pre-selection of a user-defined number of energy layers using heuristics based on dosimetry [8, 17] or on geometrical considerations [10], with the intention of controlling the maximum number of upward energy switches. The spot-scanning proton arc (SPArc) therapy algorithm has demonstrated efficacy across various disease sites, primarily due to its dose-based approach. However, its main drawbacks are long optimization times and limited flexibility to select energy switch locations. On the other hand, ELSA has been noted for its fast execution time but also exhibits limitations in terms of flexibility, primarily stemming from its energy pre-selection based on geometry only. To bridge the gap between the dose-based and geometry-based approaches, we propose a hybrid method that combines the strengths of both. Our hybrid approach extends ELSA to create an initial set of energy layers, which are subsequently filtered to a final set using dose-based objectives, thereby enabling accelerated delivery. This approach is inspired by the Reverse Greedy algorithm [1], originally proposed for discrete proton arcs, which successively suppresses the energy layers contributing the least to the solution according to a score.
We have developed and compared three distinct methods capable of energy layer filtering. These methods act as post-processing tools that complement the spot weight optimization process. Through the implementation of these filtering techniques, our aim is to enhance the overall efficiency of proton arc therapy delivery.
## 2 Methods and Materials
### Energy layer filtering algorithms
Unlike IMPT plans, where each beam angle is optimized to cover the target in depth using a decreasing energy sequence, proton arc plans typically adopt a sector-based approach. Arc beams are divided into sectors, with energy decreasing within each sector and increasing only between sectors. This design stems from the anticipated increased time required for upward energy switches. Delivery occurs within fixed intervals (centered around a discrete direction) known as control points across each sector. Although more than one energy layer per discrete direction could be delivered in practice, doing so would slow down gantry rotation and compromise treatment efficiency. The number of sectors in a proton arc plan, which defines the number of switch-ups (SU) between energy levels, is a carefully studied topic, as it directly correlates with the delivery time for most machines [12, 24, 27, 26]. Researchers have sought to develop methods of minimizing the number of SU in these plans. The basic assumption of this conceptual model is that delivery time is proportional to the irradiation time of each energy layer and the number of switch-ups in the plan. Therefore, minimizing the delivery time can be achieved by filtering energy layers and/or reducing the number of SU. In this study, we developed and optimized proton arc plans in a modified research build of the RayStation dynamic proton arc optimizer. While not exactly the same as the one showcased in the original publication [10], our approach still follows the ELSA framework for selecting energy layers based on geometry, followed by the classical spot weight optimization. Considering an optimized arc plan as a starting point (Fig. 1(a)), we have developed three distinct methods that can filter energy layers and/or reduce the number of SU as a post-processing step. Assuming \(N\) initial energy layers and \(M\) initial SU in the plan, we aim to keep at most \(X\) energy layers or \(Y\) SU. The three methods are summarized below:
1. **Unrestricted filtering**: This algorithm filters the \(N-X\) energy layers with the lowest corresponding monitor units (MU) from the plan. By removing the lowest-MU energy layers, we aim to reduce the overall number of energy layers in the plan while preserving the important dose contributions. In Fig. 1, from (a) to (b), individual holes appear where the lowest ELs have been suppressed.
2. **SU filtering**: This algorithm filters \(M-Y\) SU by deactivating the partial sectors with the lowest corresponding MU. It iterates through two nested loops over the energy layers in the plan. It checks whether at least one SU occurs between the two indexed energy layers and verifies that the energy at the second indexed angle is lower than the energy at the first indexed angle. When these conditions are met, the algorithm accumulates the MUs of the energy layers between the two indexes, forming a partial sector. These partial sectors, containing SU, are then ordered based on increasing MU values. The energy layers included in each partial sector are subsequently removed from the plan.
By eliminating the energy layers associated with the partial sectors that contain SU and have the lowest MUs, the SU filter effectively reduces the overall number of SU in the plan. Fig. 1(d) displays one large hole between gantry angles \(\sim\)100\({}^{\circ}\) and 125\({}^{\circ}\), effectively reducing the arc beam from 5 to 4 SU.
3. **SU gap filtering**: This algorithm removes the energy layers with the lowest MU, similarly to the unrestricted filtering algorithm. The difference is that this algorithm is restricted to only choose from the energy layers that are located just before or just after a SU. Note that after an energy layer is removed to create a gap, the adjacent energy layers on either side of the gap become the ones positioned just before and just after a SU. The rationale behind this approach is that the gantry is often forced to decelerate to a rather low velocity in conjunction with a SU, and increasing the corresponding angular span might mitigate this effect. The algorithm contains a maximum size for this gap, since at a certain point the gantry will be able to keep a constant speed and still perform a SU. We establish this maximum gap size, measured in terms of the number of removed layers, as follows: \[\left\lfloor\frac{t_{\text{SU}}+t_{\text{EL}}^{\text{mean}}}{t_{\text{SD}}+t_ {\text{EL}}^{\text{mean}}}\right\rfloor\] (1) where \(t_{\text{SU}}\) and \(t_{\text{SD}}\) are the times to switch the energy upward and downward, respectively, and \(t_{\text{EL}}^{\text{mean}}\) is the average irradiation time per energy layer. That is, this is the ratio between the delivery time of a layer with an SU and that of a layer with an SD. Note that this means that different plans may have different maximum gap sizes.
Each algorithm, depicted in Fig. 1, is run multiple times, monotonically decreasing the allotted budget of energy layers (for the unrestricted and SU gap filters) or switch-ups (for the SU filter) in the plan, hereinafter denoted as the MaxBoundSequence. Following each filtering operation, we re-run the spot weight optimization with a reduced number of iterations to compensate for the loss of monitor units (MU) and to maintain plan quality. Each filtering run is independent from the others and starts from the initial optimized spot weights of the optimized arc plan. This strategy allows us to compute Pareto fronts, which relate the attainable dosimetric objective values to the necessary dynamic delivery times. By examining the trade-off between these two sides of the problem, one can select the most suitable trade-off plan from the Pareto front. Algorithm 1 summarizes the workflow of the methodology (a simplified sketch in code is given at the end of Section 2.2).
### Patient data and treatment planning
To demonstrate the efficiency of the algorithms, we selected four patients with oropharyngeal cancer from an anonymized database obtained from Cliniques Universitaires Saint-Luc in Brussels, Belgium. The local ethics committee approved the retrospective use of this database. The choice of oropharyngeal cancer as the disease site follows from the expectation that proton arc therapy would benefit those patients, as compared to conventional treatments, with a lower integral dose, reduced doses to OARs, and overall less toxicity [16, 5, 6]. Three out of the four patients were selected from the database based on their higher NTCPs in the IMPT plans for the side effects under consideration.
Patient 1 showed the highest probability for dysphagia grade 2, patient 2 had the highest probability for dysphagia grade 3, and patient 3 had the worst NTCPs for xerostomia grade 2 and 3. The fourth patient was selected randomly from the database. The patient database comprises IMPT plans manually generated by the same experienced dosimetrist using a treatment planning protocol to ensure consistency (see Table SM1 from [2]). These treatments were planned retrospectively in the treatment planning system (TPS) RayStation 11B (RaySearch Laboratories, AB). With the four selected patients being bilateral cases, the proton plans were designed using a simultaneous integrated boost IMPT technique with a constant relative biological effectiveness (RBE) factor of 1.1. The prescribed dose was 70 Gy(RBE) to the high-risk clinical target volume (CTV) and 54.25 Gy(RBE) to the low-risk CTV nodal regions, delivered in 35 fractions. The plan was designed with four beams at couch and gantry angles of (10\({}^{\circ}\), 60\({}^{\circ}\)), (10\({}^{\circ}\), 120\({}^{\circ}\)), (350\({}^{\circ}\), 240\({}^{\circ}\)), and (350\({}^{\circ}\), 300\({}^{\circ}\)), respectively. A range shifter was systematically used for each beam. Monte Carlo dose calculation and robust worst-case optimization [11] were employed for the proton plans, considering 21 scenarios (2.6% proton range error and 4 mm systematic error in all three spatial directions) for the CTV volume, the spinal cord, and the brainstem.
Figure 1: Strategies to filter energy layers (EL) for a randomly generated case. (a) Nominal arc plan used as a starting point. (b) Unrestricted filtering removes the 15 lowest-MU EL. (c) SU gap filtering removes the 15 EL around the SUs that reduce the gantry braking before each SU. (d) SU filtering removes the lowest-MU partial sector that includes at least one SU.
The proton arc plans, referred to as ELSA plans according to [10], were designed with a single full arc beam with one-degree spacing between each control point, resulting in 360 initial energy layers available for optimization. It is worth noting that the version of ELSA used here is adapted with a predefined sector length, which also explains the consistency in the number of SU across the four patients. The plans were generated for the study in a research version of RayStation 2023B using a total of 250 iterations, with the spot filtration iteration set at 150. Filtered ELSA plans employ the same set of objectives and robust settings as the original ELSA plans, which were initially obtained from the IMPT plans. However, in order to expedite the planning process, the total number of iterations for optimization is reduced to 50. All plans are normalized to the median dose (D50). The Monte Carlo (MC) dose calculation was carried out until it reached an uncertainty of 0.5%. Going back to Algorithm 1, given \(R=10\) runs, the 360 initial EL and the 16 initial SU in each ELSA plan, we chose the MaxBoundSequence to be 10 evenly spaced numbers over the interval \([350,200]\) for the maximum number of allowed energy layers (Methods 1 & 3) and the sequence of numbers decremented by 1 over the interval \([15,5]\) for the maximum number of allowed SU (Method 2). These specific numbers were chosen intentionally to ensure a smooth transition towards the number of energy layers present in the IMPT plans, which is approximately 160 energy layers.
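To make the three filters and the Algorithm-1-style sweep concrete, a self-contained Python sketch is given below. The data structures and function names are our own simplification, not the RayStation API: an energy layer is reduced to its control point angle, energy, and total MU, the SU filter removes the layers strictly between the two indexed layers (our reading of the description above), and the default \(t_{\text{EL}}^{\text{mean}}\) is an assumed average irradiation time, which is plan-dependent in practice.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    angle: float   # gantry angle of the control point (deg)
    energy: float  # nominal beam energy of the layer (MeV)
    mu: float      # total monitor units of the layer

def switch_ups(layers):
    """Indices i where layer i+1 has a higher energy than layer i (an SU)."""
    return [i for i in range(len(layers) - 1)
            if layers[i + 1].energy > layers[i].energy]

def unrestricted_filter(layers, max_layers):
    """Method 1: drop the lowest-MU layers until at most max_layers remain."""
    kept = sorted(layers, key=lambda l: l.mu, reverse=True)[:max_layers]
    return sorted(kept, key=lambda l: l.angle)

def su_filter(layers, max_su):
    """Method 2: repeatedly remove the lowest-MU partial sector that
    contains at least one SU, until at most max_su switch-ups remain."""
    layers = sorted(layers, key=lambda l: l.angle)
    while len(switch_ups(layers)) > max_su:
        best = None  # (sector MU, start, stop) of the cheapest candidate
        sus = switch_ups(layers)
        for i in range(len(layers)):
            for k in range(i + 2, len(layers)):
                if (layers[k].energy < layers[i].energy
                        and any(i <= s < k for s in sus)):
                    cost = sum(l.mu for l in layers[i + 1:k])
                    if best is None or cost < best[0]:
                        best = (cost, i + 1, k)
        if best is None:
            break
        layers = layers[:best[1]] + layers[best[2]:]  # drop the partial sector
    return layers

def su_gap_filter(layers, max_layers, t_su=5.0, t_sd=0.5, t_el_mean=2.0,
                  spacing=1.0):
    """Method 3: drop the lowest-MU layer adjacent to an SU, as long as the
    gap carved around that SU stays below the bound of eq. (1). With the
    one-degree control point spacing, the current gap size can be recovered
    from the angular distance across the SU."""
    max_gap = int((t_su + t_el_mean) // (t_sd + t_el_mean))  # eq. (1)
    layers = sorted(layers, key=lambda l: l.angle)
    while len(layers) > max_layers:
        candidates = []
        for s in switch_ups(layers):
            gap = round((layers[s + 1].angle - layers[s].angle) / spacing) - 1
            if gap < max_gap:  # there is still room to widen this gap
                candidates += [layers[s], layers[s + 1]]
        if not candidates:
            break
        layers.remove(min(candidates, key=lambda l: l.mu))
    return layers

def pareto_sweep(initial_plan, bound_sequence, filt, reoptimize):
    """Algorithm-1-style sweep: each run independently filters the *initial*
    optimized plan down to a tighter bound (the MaxBoundSequence) and
    re-optimizes the spot weights, yielding one Pareto point per run."""
    return [reoptimize(filt(initial_plan, bound)) for bound in bound_sequence]
```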
### Treatment plan evaluation
#### 2.3.1 Delivery time
Since the objective of this study is to achieve faster delivery rather than improving plan quality, it is noteworthy that suppressing energy layers from the initial set of energy layers is not expected to enhance plan quality. The aim is merely to minimize the delivery time, with precautions to ensure that plan quality remains comparable to the ELSA baseline plan. Dynamic beam delivery time (BDT) was calculated using the ATOM (Arc Trajectory Optimization Method) algorithm, an open-source tool [20] published by RaySearch Laboratories. ATOM is designed to determine a fast plan delivery while adhering to the mechanical constraints specified by the user. The machine parameters utilized in this study are detailed in Table 1. The maximum window size specifies the angular safety tolerance around each irradiation direction. The selection of ATOM was driven by its utility, given that no proton arc machine has yet been established and commissioned in clinical settings; ATOM serves as a reliable estimator at present.
#### 2.3.2 Dosimetry
In addition to our primary objective of achieving faster delivery, we have included a comprehensive dosimetric evaluation to assess the selected trade-off from the generated Pareto front. Target coverage is evaluated in the IMPT, ELSA, and filtered ELSA plans employing the worst-case scenario robustness evaluation approach. This approach considers 14 different isocenter shifts, comprising 6 points along the main axes and 8 points along the diagonals, which are defined by the setup uncertainty. Additionally, three density shifts were taken into account, resulting in a total of 42 scenarios. The assessment of robust target coverage was based on the criterion of achieving a minimum dose of at least 95% of the prescribed dose at 98% of the target volumes in a worst-case scenario dose distribution (D98\(\%\geq 95\%\) Dp). This criterion was applied to both the high-risk and prophylactic lymph nodal regions, ensuring sufficient dose coverage for these target areas. Both arc plans (ELSA and filtered ELSA) are compared by evaluating the deviations in target coverage from the values achieved by the IMPT plan in the nominal and worst-case scenarios. We also assess the nominal average doses received by organs-at-risk in the head-and-neck region using the same comparative approach. Finally, we have incorporated the evaluation of NTCPs for xerostomia and dysphagia using the models from the Dutch model-based approach [14]. The \(\Delta\)NTCP is calculated by taking the difference between the arc plans and the IMPT plan in the nominal scenario for each patient.
\begin{table}
\begin{tabular}{|l|l|}
\hline Maximum gantry velocity & 5\({}^{\circ}\)/s \\
\hline Maximum gantry acceleration & 0.5 \({}^{\circ}\)/s\({}^{2}\) \\
\hline Maximum gantry jerk & 0.5 \({}^{\circ}\)/s\({}^{3}\) \\
\hline Downward energy switching time & 0.5 s \\
\hline Upward energy switching time & 5 s \\
\hline Spot switching time & 2 ms \\
\hline Spot delivery time (per MU) & 5 ms \\
\hline Maximum window size & 1.99\({}^{\circ}\) \\
\hline
\end{tabular}
\end{table}
Table 1: Machine constraint parameters used for BDT estimation.
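For intuition, the per-layer timing model that motivates the filters can be assembled directly from the parameters in Table 1. The sketch below (our own simplification) only accumulates irradiation and energy-switching time; it ignores the gantry velocity, acceleration, jerk, and window constraints that ATOM does model, and therefore only lower-bounds the dynamic BDT:

```python
# Crude dynamic-delivery surrogate built from the Table 1 parameters.
T_SD, T_SU = 0.5, 5.0      # downward / upward energy switching time (s)
T_SPOT_SWITCH = 2e-3       # spot switching time (s)
T_PER_MU = 5e-3            # spot delivery time per MU (s)

def layer_time(n_spots: int, mu_total: float) -> float:
    """Irradiation time of a single energy layer."""
    return n_spots * T_SPOT_SWITCH + mu_total * T_PER_MU

def surrogate_bdt(layers) -> float:
    """layers: sequence of (energy, n_spots, mu_total) in delivery order."""
    energies = [e for e, _, _ in layers]
    total = sum(layer_time(n, mu) for _, n, mu in layers)
    total += sum(T_SU if e2 > e1 else T_SD
                 for e1, e2 in zip(energies, energies[1:]))
    return total
```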
## 3 Results
### Energy layer filtering algorithms comparison
The plan quality of the three algorithms is compared for each of the four patients, using the objective function value as a global scalar surrogate of the plan quality. Fig. 2 illustrates the evolution of the objective value with respect to the remaining number of energy layers in the plan, where the maximum number of allowed EL or SU decreases linearly from right to left. The red star represents the optimized plan from which EL or SU are filtered. The filtering process is performed independently for each run, starting from the red star (i.e., the baseline ELSA plan). The multiple runs for each algorithm form a front, with a pink band denoting the tolerance range around the initial objective value that should not be exceeded from above. A tolerance value of 10% is employed.
Figure 2: Objective function value versus **number of energy layers** for the three energy layer filtering methods for the four oropharyngeal patients. Plan quality deteriorates as more energy layers are filtered, from right to left.
From the plotted data, it appears that the unrestricted filtering algorithm performs best, with a majority of data points falling within the tolerance band compared to the other two methods. The SU gap filtering also shows favorable results, closely following the unrestricted filtering for patients 1 & 2, with a behavior indicating the importance of preserving energy layers for optimal plan quality. SU filtering struggles more, as it experiences a rapid increase in the objective function value. Notably, SU filtering suppresses many more layers than the other two filters. Interestingly, the assumption that suppressing more energy layers at once would lead to a stronger decrease in BDT is proven incorrect in Fig. 3, which illustrates the true Pareto front. The SU filtering, despite suppressing more energy layers simultaneously, actually exhibits the smallest reduction in delivery time. On the contrary, the SU gap filtering effectively reduces the beam delivery time with just a small number of suppressed energy layers. For instance, in patient 1, implementing the SU gap filtering and suppressing 44 energy layers in the third run resulted in a significant reduction of the BDT from 422 seconds to 330 seconds. In contrast, the unrestricted filter required 77 energy layers to be suppressed in the fifth run to achieve a comparable decrease in BDT (336 seconds). We also observe a decrease of approximately 30 seconds by only filtering 10 energy layers with the SU gap method in patient 4. Upon a closer look at the Pareto front data points within the tolerance band in Fig. 3, the optimal trade-off between plan quality and delivery time can be determined for each of the four patients. The SU gap filter proves to be the most efficient for patients 1 and 2, achieving an effective reduction of approximately 22% in beam delivery time by suppressing 44 energy layers in the third run, while still maintaining plan quality within the tolerance band. It is closely followed by the simplistic unrestricted filter, which reduces beam delivery time by up to 18% and 15% for patients 1 and 2, respectively. SU gap filtering was also the best compromise found for patient 4, although it could only decrease the BDT by 6%. Conversely, the SU filter demonstrates limited improvement, as it could only suppress 1, 2, 1 and 1 SU for patients 1, 2, 3 and 4, respectively, without exceeding the tolerance zone for plan quality. Consequently, the SU filter shows the smallest gain in beam delivery time. Patient 3 presented a bigger overall CTV compared to the three other cases and raised tougher planning challenges. The filtering for this patient exhibited better results with
the unrestricted filtering method. It achieved a 10% reduction in beam delivery time in the third run. This smaller reduction in beam delivery time, like the one observed for patient 4, compared to the first two patients may indicate that more energy layers are required to meet the clinical goals effectively for these particular patients. Nevertheless, the SU gap Pareto front for patient 4 suggests that a few more energy layers could have been filtered while staying within the tolerance band. The final best compromises for each patient are circled in red in Figs. 2-3. Figure 4 provides an illustration of the energy layers retained after determining the best trade-offs for each filtering method, compared to the ELSA baseline plan. Additionally, relevant metrics are presented to assess the timing results. Notably, the SU filtering demonstrates a consistent pattern of energy layer suppression around 270\({}^{\circ}\) for all four patients. Furthermore, the second SU removed for patient 2 appears to be suppressed symmetrically relative to the first one. One possible explanation for this behavior is that the optimizer tends to avoid shooting through the shoulders, which may require higher energy levels and deliver a higher dose to the body. Another explanation could be related to the position of the primary tumor and also the relative position of the OARs with respect to the tumor. The primary tumor is centrally located for the first three patients, while it is slightly shifted to the right for patient 4. In contrast, the unrestricted filter sporadically suppresses energy layers, resulting in holes across each sector. Moreover, this filter tends to remove high energy layers. On the other hand, the SU gap filtering displays a cleaner pattern by sequentially removing energy layers around the SU points. This method generates gaps primarily in between sector transitions. After optimizing filtered proton arc plans (filtered ELSA plans) for each patient and keeping the best compromise found, the next question that arises is whether these plans can be delivered more rapidly than IMPT plans. In the case of the IMPT plans, the delivery time is computed for each beam individually, resulting in what we refer to as static delivery time, as opposed to the dynamic delivery time of proton arc plans. To enable a fair comparison between static and dynamic delivery timings, one should consider the manual interventions required between each beam delivery in the control room and/or treatment room. These include beam loading at the console and rotation of the gantry and/or the couch, with a potential re-imaging step. Moreover, given that range shifters are used for each IMPT beam, extra caution is needed to avoid collisions. After consulting with two separate proton centers, we therefore introduce a simulated delay of 60 seconds between each beam, plus a unique additional time of 120 seconds required for the couch kick, for each of the IMPT plans. Figure 5 illustrates this comparison. Including the simulated delay, all arc plans are faster to deliver than the IMPT plans. The filtered ELSA plans lead to a significant improvement compared to the IMPT delivery time. The biggest improvement is observed for patients 1 and 2, where the filtered ELSA plans speed up the delivery by 25% and 31%, respectively, compared to IMPT. The ELSA baseline plans, in comparison, were only able to speed up the delivery by 5% and 9% for these patients.
Patients 3 and 4 show more resistance to the delivery speed-up, but the filtered ELSA plans still show an improvement of 15% and 6% compared to IMPT.
Figure 3: Pareto fronts comparing the objective function value versus **dynamic beam delivery time** for the three energy layer filtering methods for the four oropharyngeal patients. The best compromise is circled in red, showing the shortest delivery time with a reasonable objective value.
Figure 4: Comparison of the energy layers kept in the plan after the best filtering compromise found for the different algorithms. Rows display each patient, whereas columns showcase the filtering algorithm.
Figure 5: Comparison of beam delivery times between IMPT, ELSA and filtered ELSA plans for the best compromises found for each patient. Filtered ELSA employed the SU gap filtering algorithm for patients 1, 2 and 4, whereas unrestricted filtering was applied for patient 3.
### Treatment plan evaluation
The previous results can be complemented with a detailed dosimetric evaluation for each of the best compromises found for each patient and a comparison with the ELSA baseline plan and the IMPT plan. Figure 6 reports the evaluation results for the CTV targets in terms of D98 in the nominal and worst-case scenarios for each patient. The D98 is slightly reduced in the filtered ELSA plans compared to the ELSA baseline, although the deviation is minimal, not exceeding 1% for all targets in the nominal and worst-case scenarios (corresponding to at most 0.6 Gy). These small reductions are therefore not considered significant. Both ELSA plans yield comparable or better results compared to the IMPT plans, with most of the data points displayed on the right-hand side of the plot, indicating higher CTV coverage for the arc plans.
Figure 6: Differences in target coverage (D98, the dose received by 98% of the CTV) between the evaluated arc plans and the IMPT plan in nominal and worst-case scenarios. Negative differences imply that the IMPT plan was better than the arc plan for that metric.
The clinical goals for target coverage (i.e., D98\(\%\geq 95\)% Dp) in both high-dose and low-dose CTVs were met for all the patients in the nominal scenario and the worst-case scenario. Figure 7 illustrates the DVH metric differences with respect to the IMPT plan for the OARs included in the clinical goals. Comparing ELSA and its faster-delivered version, it is reassuring that OAR sparing and similar target coverage can be maintained while reducing the number of energy layers. In the transition from IMPT to the proton arc modality, only patient 3 shows a slight increase in doses to the spinal cord and brainstem, though still under the limit. However, average doses to organs generally decrease (85% of all data points lie on the left-hand side of the plot), which is highly favorable for the arc modality in oropharyngeal patients. Figure 8 displays the difference in NTCP relative to the IMPT plan (\(\Delta\)NTCP = NTCP\({}_{\text{arc}}\) - NTCP\({}_{\text{IMPT}}\)). The arc modality significantly decreases the NTCPs for all reported side effects, including grade 2 and 3 xerostomia and dysphagia. This finding further supports the adoption of proton arc therapy for the treatment of this particular cancer site. Moreover, patients 1 and 2, selected for their high NTCPs in the IMPT plan, show a reduction of approximately 5 and 7.5 percentage points (p.p.) in NTCPs for dysphagia grade 2 and 3, respectively. Similarly, patient 3, chosen for its elevated xerostomia NTCP value, demonstrates a decrease of around 2.5 p.p. in NTCP for the proton arc plans, further highlighting the advantages of proton arc therapy.
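The worst-case coverage check used above is straightforward to express in code. A minimal sketch follows (our own function names; per-scenario CTV dose samples are assumed to be available, and the worst per-scenario D98 is taken, rather than a voxel-wise worst-case composite):

```python
import numpy as np

def d98(ctv_doses: np.ndarray) -> float:
    """Dose received by at least 98% of the CTV (the 2nd dose percentile)."""
    return float(np.percentile(ctv_doses, 2))

def robust_coverage_ok(scenario_doses: np.ndarray, prescription: float) -> bool:
    """scenario_doses: (n_scenarios, n_voxels) CTV doses over the 42
    perturbed scenarios (14 isocenter shifts x 3 density shifts).
    Criterion: worst-case D98 >= 95% of the prescribed dose."""
    worst_d98 = min(d98(doses) for doses in scenario_doses)
    return worst_d98 >= 0.95 * prescription
```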
## 4 Discussion
In this study, we presented three post-processing energy layer filters to accelerate the delivery of proton arc treatment plans for oropharyngeal cancers. These filters were designed to suppress the low-weighted energy layers or partial sectors in the arc plans optimized with ELSA. Among the three methods, the SU gap filter proved to be highly efficient in reducing the dynamic delivery time by suppressing a minimal number of energy layers. The unrestricted filter also achieved similar results, although it required more energy layers to be suppressed for the same reduction in BDT. On the other hand, the SU filter, despite removing a significant number of energy layers (partial sectors), was the least effective in reducing the BDT. Pareto plots, showcasing the objective value against the BDT, revealed that the SU gap filtering algorithm, which suppressed 44 energy layers out of 360 for patients 1 and 2, provided the best trade-off between plan quality and delivery time, accelerating the delivery by \(\sim\)100 s. Further removal of energy layers resulted in a significant increase in the objective value, correlating with a degradation in plan quality. ELSA plans for patients 3 and 4 were harder to accelerate: the best speed-up gains were about 50 and 30 seconds while removing as few as 44 and 10 energy layers, respectively. Both patients had a much higher initial number of spots, attesting to their more complex tumor geometry. Energy layers could therefore hardly be filtered.
Figure 7: Differences in terms of clinical planning criteria for the OARs between the arc plans and the IMPT plan. Negative differences imply that the arc plan was better than the IMPT plan for that metric (i.e., less dose to the OARs). SMG, submandibular gland; PCM, pharyngeal constrictor muscle.
Figure 8: Differences in terms of NTCP between the arc plans and the IMPT plan. Negative differences imply that the arc plan was better than the IMPT plan for that metric (i.e., a reduced probability of developing the side effect). IMPT NTCP values for the four patients, in order: grade 2 xerostomia = [43.5, 48.7, 53.1, 2.1], grade 3 xerostomia = [12.1, 14.1, 16.1, 11.6], grade 2 dysphagia = [44.2, 43.8, 19.3, 17.2], grade 3 dysphagia = [22.6, 24.4, 3.2, 7.1].
Although the choice of a 10% tolerance criterion for the objective values was arbitrary, further investigation might determine the actual threshold past which plan degradation becomes unacceptable. Moreover, additional iterations may be necessary after the post-processing filter, depending on the number of energy layers removed from the plan. Ultimately, the Pareto graphs could be used for MCO-like optimization and thus help the treating physician to find the best trade-off for each patient. For the patients treated with IMPT plans, four beams were used with about 40 energy layers each, resulting in a total of 160 energy layers. In comparison, the ELSA baseline proton arc plans needed 360 energy layers, one for each beam direction, to achieve a comparable plan quality. This discrepancy might arise because it is more challenging to combine beamlets from each energy layer in the arc while ensuring an adequately spread-out Bragg peak to cover the targets. It is also important to understand that both the plan quality and the BDT heavily rely on the preliminary energy selection.
The angular sampling and sector count may not be optimal and could potentially benefit from adjustments that are not purely geometry-driven. Yet, incorporating dose-based objectives into the initial energy layer choice can be complex due to the size of the proton arc problem [24]. Hence, dose-based filtering methods paired with rapid energy layer selection become pivotal to address the shortcomings of purely geometry-based energy layer choices. Despite using a significantly higher number of energy layers than the IMPT plans, the proton arc plans still achieved shorter BDTs for the four evaluated patients. This outcome was reached under certain assumptions regarding IMPT beam delivery. We factored in extra time between each static beam delivery to ensure a fairer comparison. After consulting two separate proton centers about this additional time, we chose the lowest values (i.e., the fastest IMPT delivery). This suggests that the difference in BDT between IMPT and arc plans could be even more pronounced. Moreover, the filtered arc plans further reduced the BDT, attaining up to a 30% speed-up in comparison to IMPT, emphasizing the utility of the proposed method. The plan evaluation and robustness analysis demonstrated that the filtered ELSA plans maintained target coverage with insignificant deviations from the ELSA baseline plans in both nominal and worst-case scenarios. Comparisons with the 'clinical' IMPT plans, generated by an experienced dosimetrist, showed that both the non-filtered and filtered ELSA plans outperformed the IMPT plans in most clinical goals, as observed across all four patients. Additionally, the study of NTCPs for dysphagia and xerostomia further supported the superiority of proton arc therapy for oropharyngeal cancer. These findings concur with existing publications on proton arc therapy in this target region [16, 6]. Methods of layer filtering have already been extensively studied in the context of IMPT [22, 3, 23, 13]. However, recent advancements in beam energy layer selection hardware have significantly reduced the impact of suppressing energy layers in IMPT plans. This raises the question of whether the same would apply to proton arc plans. Proton arc plans actually involve more energy jumps, and they must be delivered dynamically. Most machines cannot achieve the same efficiency for upward energy switching as for downward energy switching, owing to physical constraints of the magnets [28], a limitation common to many proton therapy machines. However, there is potential for future improvements in energy layer systems [18]. Such advancements would simplify proton arc optimization by reducing the burden of dealing with costly SUs in the plan, which are currently a major bottleneck in proton arc treatment planning. Nevertheless, despite the complexity of proton arc plans, the implementation of a post-processing algorithm that filters out the least significant energy layers can still have a positive impact on treatment delivery. Even a small reduction in BDT could potentially enhance patient throughput in clinical settings. Furthermore, the proposed delivery-accelerating strategies could be seamlessly integrated into existing treatment planning systems. It is noteworthy that the post-processing filter, including the subsequent short re-optimization, takes only around 10 minutes for the patients evaluated in this study (with 5 minutes dedicated to pre-dose calculation, which could be cached). This relatively short running time might fall within an acceptable range of delay for proton treatment planning.
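The BDT comparisons above rest on a simple accounting of spot-delivery and energy-switching times, in which upward switches (SUs) are far more expensive than downward ones. The sketch below captures only that qualitative structure; all timing constants are placeholders and do not come from the machines discussed here.

```python
# Rough dynamic beam-delivery-time (BDT) model for an arc plan. All timing
# constants are illustrative placeholders, not machine-measured values; the
# only qualitative feature carried over from the text is that upward energy
# switches (SUs) cost far more than downward ones.
T_SPOT = 0.005        # seconds per spot (placeholder)
T_SWITCH_DOWN = 0.6   # seconds per downward energy switch (placeholder)
T_SWITCH_UP = 5.0     # seconds per upward energy switch (placeholder)

def dynamic_bdt(layers):
    """Estimate BDT from an ordered list of (energy_MeV, n_spots) layers."""
    total, prev_energy = 0.0, None
    for energy, n_spots in layers:
        if prev_energy is not None:
            total += T_SWITCH_UP if energy > prev_energy else T_SWITCH_DOWN
        total += n_spots * T_SPOT
        prev_energy = energy
    return total

# Removing the interior 155 MeV layer eliminates an expensive SU, which is
# the intuition behind SU-oriented filters.
plan = [(160, 40), (150, 35), (145, 30), (155, 20), (140, 25)]
print(f"full plan BDT:     {dynamic_bdt(plan):.2f} s")
print(f"filtered plan BDT: {dynamic_bdt(plan[:3] + plan[4:]):.2f} s")
```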
This study selected oropharyngeal cancer patients to showcase the filtering algorithms, as these patients appear to benefit significantly in terms of plan quality, even though the delivery time remains long. However, the proposed post-processing filters could potentially be applied to other treatment sites with long treatment times, such as esophageal cancer, for which a recent institutional study showed a very large difference compared to the IMPT plan counterpart [25]. It is important to acknowledge the limitations of this study, including the small sample size of four patients and the absence of statistical analysis. Nevertheless, this study provides a proof of concept and lays the groundwork for future investigations. ## 5 Conclusion In conclusion, this study introduces energy layer filters as a post-processing step to accelerate the delivery of proton arc treatment plans. The results demonstrate that these filters effectively reduce beam delivery times while maintaining plan quality. The findings support the use of proton arc therapy in reducing organ-at-risk toxicity and improving local tumor control in oropharyngeal cancers. Further research is needed to optimize proton arc planning and understand the underlying requirements for energy layer selection. The proposed filters have the potential to enhance clinic efficiency and patient throughput without compromising treatment quality. ## 6 Acknowledgments Sophie Wuyckens is funded by the Walloon Region as part of the Arc Proton Therapy convention (Pôles Mecatech et Biowin). Computational resources have been provided by the supercomputing facilities of the Université catholique de Louvain (CISM/UCL) and the Consortium des Équipements de Calcul Intensif en Fédération Wallonie-Bruxelles (CECI) funded by the F.R.S.-FNRS under convention 2.5020.11. John A. Lee is a Research Director with the F.R.S.-FNRS.
2309.10348
Language Guided Adversarial Purification
Adversarial purification using generative models demonstrates strong adversarial defense performance. These methods are classifier and attack-agnostic, making them versatile but often computationally intensive. Recent strides in diffusion and score networks have improved image generation and, by extension, adversarial purification. Another highly efficient class of adversarial defense methods known as adversarial training requires specific knowledge of attack vectors, forcing them to be trained extensively on adversarial examples. To overcome these limitations, we introduce a new framework, namely Language Guided Adversarial Purification (LGAP), utilizing pre-trained diffusion models and caption generators to defend against adversarial attacks. Given an input image, our method first generates a caption, which is then used to guide the adversarial purification process through a diffusion network. Our approach has been evaluated against strong adversarial attacks, proving its effectiveness in enhancing adversarial robustness. Our results indicate that LGAP outperforms most existing adversarial defense techniques without requiring specialized network training. This underscores the generalizability of models trained on large datasets, highlighting a promising direction for further research.
Himanshu Singh, A V Subramanyam
2023-09-19T06:17:18Z
http://arxiv.org/abs/2309.10348v1
# Language Guided Adversarial Purification ###### Abstract Adversarial purification using generative models demonstrates strong adversarial defense performance. These methods are classifier and attack-agnostic, making them versatile but often computationally intensive. Recent strides in diffusion and score networks have improved image generation and, by extension, adversarial purification. Another highly efficient class of adversarial defense methods known as adversarial training requires specific knowledge of attack vectors, forcing them to be trained extensively on adversarial examples. To overcome these limitations, we introduce a new framework, namely Language Guided Adversarial Purification (LGAP), utilizing pre-trained diffusion models and caption generators to defend against adversarial attacks. Given an input image, our method first generates a caption, which is then used to guide the adversarial purification process through a diffusion network. Our approach has been evaluated against strong adversarial attacks, proving its effectiveness in enhancing adversarial robustness. Our results indicate that LGAP outperforms most existing adversarial defense techniques without requiring specialized network training. This underscores the generalizability of models trained on large datasets, highlighting a promising direction for further research. Himanshu Singh, A V Subramanyam Indraprastha Institute of Information Technology, Delhi, India Adversarial purification, Language guidance, Diffusion ## 1 Introduction The use of deep neural networks, especially within the realm of computer vision, has ushered in transformative advancements in various applications. Despite these strides, a consistent vulnerability is the susceptibility of such models to adversarial perturbations [1]. These perturbations, often imperceptible, can fool even the most sophisticated neural networks, causing them to misclassify inputs. Addressing this alarming vulnerability has become a research imperative, leading to a rapidly growing body of literature dedicated to understanding and defending against these adversarial threats [2, 3]. Historically, adversarial training, introduced by Goodfellow _et al._[1], has been posited as an effective defense strategy. This approach, which integrates adversarial examples into the training phase, aims to strengthen models against specific adversarial attacks. However, its efficacy is often limited to the spectrum of attacks encountered during training, thereby leaving models vulnerable to novel adversarial strategies. This constraint underscores the necessity for alternative defensive paradigms. Given their inherent capability to generate or transform data, generative models have recently been explored as potential tools for adversarial purification [4, 5, 6]. Within this domain, diffusion models have emerged as particularly promising candidates. Recent studies, as exemplified by Nie _et al._[7] and Carlini _et al._[8], have harnessed the potential of score-based and diffusion models towards purification of adversarial samples. Primarily, adversarial purification techniques have focused only on the image modality, despite the promising performance of diffusion models in multi-modal tasks such as text-to-image generation [9]. Thus, in our work, we investigate the impact of language on the robustness of vision models. Figure 1: Illustration of LGAP. A pre-trained image-captioning model (BLIP) generates captions for input images, providing a textual representation of the visual content. Leveraging the generated captions, purified images are created via the diffusion model. The red dashed lines represent the adversarial image input, while the green dotted lines indicate the resulting purified image.
Our research focuses on a defensive strategy based on vision and language models trained on large datasets. By leveraging the capabilities of such models trained jointly on language and vision tasks, we propose a novel framework of **L**anguage **G**uided **A**dversarial **P**urification (LGAP), as illustrated in Figure 1. This novel framework, which seamlessly integrates a caption generator and a pre-trained diffusion model with a classifier, leverages the inherent generalizability of these models to purify an adversarial input. To the best of our knowledge, language-based adversarial purification has not been addressed in the literature. We conduct elaborate empirical evaluations across benchmark datasets, including ImageNet [10], CIFAR-10 [11] and CIFAR-100 [11]. The results of evaluation against \(L_{\infty}\) norm attacks corroborate the robustness of our framework. Notably, for ImageNet, our method reveals better performance compared to previous techniques. ## 2 Related Works **Diffusion models in image generation:** The landscape of image generation has been revolutionized by diffusion models. Rooted in the foundational works of Sohl-Dickstein _et al._[12] and later extended by Song _et al._[13] and Ho _et al._[14], these models have exhibited unparalleled powers in generating high-quality image samples. Song _et al._[15] further advanced this domain by combining generative learning mechanisms with stochastic differential equations, thereby broadening the horizon of diffusion models. **Language-image pretraining:** A significant milestone in deep learning, language-image pretraining bridges the gap between textual and visual data. Pioneering models such as CLIP [16] and BLIP [17] have leveraged vast amounts of text and image data to jointly train vision and language models, demonstrating tremendous progress in multi-modal tasks. **Adversarial training:** The foundational work of Madry _et al._[2] established adversarial training as a robust method for safeguarding neural networks from known adversarial attacks. While the effectiveness of the method is well-recognized, its scalability and adaptability have been enhanced through inspirations from metric learning [18] and self-supervised paradigms [19]. However, the computational demands of adversarial training have spurred research into more efficient training methods [20, 21]. **Adversarial purification**: Generative models have emerged as a cornerstone of the adversarial purification realm. Initial endeavors, such as those by Samangouei _et al._[4], harnessed GANs for purification. Subsequent innovations leaned on energy-based models (EBMs) to refine the purification process using Langevin dynamics [22]. Notably, the intersection of score networks and diffusion models with adversarial purification has been explored recently, with promising results against benchmark adversarial attacks [6, 7]. ## 3 Proposed Method We propose a novel defense strategy against adversarial attacks on classification models by leveraging language guidance in diffusion models for adversarial purification.
For a clean sample \(\mathbf{x}\) with label \(y\), and a target neural network \(f_{\boldsymbol{\theta}}\), the adversary aims to produce \(\mathbf{x}_{\text{adv}}\) by introducing adversarial perturbations. This results in a prediction \(f_{\boldsymbol{\theta}}(\mathbf{x}_{\text{adv}})\) that differs from the original prediction \(f_{\boldsymbol{\theta}}(\mathbf{x})=y\). The underlying premise of the proposed method is to preprocess the input \(\mathbf{x}\) through a diffusion model conditioned on a caption to remove any adversarial perturbations before feeding it to \(f_{\boldsymbol{\theta}}\). We first discuss the caption generation, followed by purification using the diffusion model. ### Image captioning For image captioning, we use the caption generator from BLIP [17]. BLIP has a multi-modal encoder-decoder architecture which consists of three major components: a unimodal encoder for generating image and text embeddings; an image-grounded text encoder that computes cross-attention and self-attention between the two encodings to give a multimodal representation of the image-text pair; and an image-grounded text decoder that uses causal self-attention to produce the text caption. We use the unimodal encoder and image-grounded text decoder to generate the captions. Given an input \(\mathbf{x}\), the captions are generated as, \[\text{\emph{Caption}}_{\text{BLIP}}=\text{\emph{BLIP}}(\mathbf{x}).\] We show some sample captions in Figure 2. We can see that the captions for the clean samples (top row) contain the true label. In the second row, adversarial samples are given and the classifier's prediction is incorrect. Here, _truck_ is classified as _ship_. However, the BLIP caption still contains the true label _truck_, though the caption is not the same as that of the clean sample. Thus, using these captions can condition the diffusion model on the true semantics, which can enhance purification of the adversarial images. Next, we discuss the diffusion-based purification. ### Diffusion purification process **Latent diffusion process** In a standard diffusion model [14], the diffusion process can be defined as: \[\mathbf{x_{t}}=\sqrt{1-\beta_{t}}\cdot\mathbf{x_{t-1}}+\sqrt{\beta_{t}}\cdot \boldsymbol{\epsilon}_{t}\] where \(\beta_{t}\in(0,1)\) is the variance schedule, \(\mathbf{x_{t}}\) is the noisy sample, and \(\boldsymbol{\epsilon}_{t}\) is the noise at time step \(t\). In Latent Diffusion Models [9], this process is applied in latent space: \[\mathbf{z_{0}}=\mathcal{E}(\mathbf{x})\] \[\mathbf{z_{t}}=\sqrt{1-\beta_{t}}\cdot\mathbf{z_{t-1}}+\sqrt{ \beta_{t}}\cdot\boldsymbol{\epsilon}_{t}\] where \(\mathbf{z}_{0}\) is the latent vector obtained from the encoder \(\mathcal{E}\) and \(\mathbf{z}_{t}\) is the noisy latent vector at time step \(t\). **Reverse process in latent space** In the reverse process, the aim is to recover \(\mathbf{z}_{0}\) from \(\mathbf{z}_{T}\) given a sequence of noise terms \(\boldsymbol{\epsilon}_{t}\). Mathematically, this is defined as: \[\mathbf{z}_{t}=g_{\theta}(\mathbf{z}_{t+1},t,\boldsymbol{\epsilon}_{t})\] where \(g_{\theta}\) is a parameterized model. Additionally, \(g_{\theta}\) is conditioned on text by augmenting the \(g_{\theta}\) architecture with cross-attention layers. Since our goal is to leverage the BLIP-generated captions, we condition the diffusion model as: \[\mathbf{z}_{t}=g_{\theta}(\mathbf{z}_{t+1},t,\boldsymbol{\epsilon}_{t}, \mathbf{C})\] where \(\mathbf{C}=\tau_{\theta}(\textit{Caption}_{\text{BLIP}})\), and \(\tau_{\theta}\) is the text encoder.
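A minimal end-to-end sketch of this caption-then-purify pipeline can be assembled from off-the-shelf Hugging Face components. The specific checkpoints (BLIP base captioning and Stable Diffusion v1.5 img2img) and the use of the img2img `strength` parameter to play the role of the noise level \(t\) are our assumptions for illustration; this is not the authors' released implementation.

```python
# Illustrative caption-guided purification pipeline (assumed checkpoints;
# not the authors' code). Requires: pip install transformers diffusers torch pillow
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Caption the (possibly adversarial) input with BLIP.
blip_proc = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
blip = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base").to(device)

# 2) Purify with a text-conditioned latent diffusion model; `strength`
#    stands in for the noise level t (0.5 in the CIFAR experiments).
sd = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5").to(device)

def purify(image: Image.Image, t: float = 0.5) -> Image.Image:
    inputs = blip_proc(image, return_tensors="pt").to(device)
    caption_ids = blip.generate(**inputs, max_new_tokens=30)
    caption = blip_proc.decode(caption_ids[0], skip_special_tokens=True)
    # Noise the latent to level t, then denoise conditioned on the caption.
    out = sd(prompt=caption, image=image.resize((512, 512)),
             strength=t, guidance_scale=7.5)
    return out.images[0]

purified = purify(Image.open("adversarial_input.png").convert("RGB"))
purified.save("purified.png")
```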
Since BLIP is a powerful model, the likelihood that it correctly identifies the image is high. This gives better guidance to the diffusion model compared to the image-only case. **Final image reconstruction and training** Finally, the reconstructed image \(\hat{\mathbf{x}}\) can be obtained from the reconstructed latent representation \(\mathbf{z}_{0}\) as, \(\hat{\mathbf{x}}=\mathcal{D}(\mathbf{z}_{0})\), where \(\mathcal{D}\) is the decoder. Given model \(f_{\boldsymbol{\theta}}\), clean image \(\mathbf{x}\), its corresponding pre-processed sample \(\hat{\mathbf{x}}\) and labels \(y\), we optimize \[\min_{\boldsymbol{\theta}}\ \frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_{CE}\left(f_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{i}),y_{i}\right),\] where \(\mathcal{L}_{CE}\) is the cross-entropy loss and \(n\) is the number of samples. In contrast to adversarial training over several epochs with adversarial samples, we only need a few epochs of fine-tuning with pre-processed clean samples. Further, compared to score- or diffusion-based purification, which extensively trains these models, we only need minimal training of the classifier. ## 4 Experiments and Results ### Experimental settings **Datasets and network architectures:** Our experimental evaluation involves three datasets, namely CIFAR-10 [11], CIFAR-100 [11] and ImageNet [10]. We utilize the base models from the RobustBench [23] model zoo for CIFAR-10 and ImageNet. For CIFAR-100, we train the model following Yoon et al. [6]. We compare our approach against other adversarial purification strategies on CIFAR-10, adhering to their experimental configurations. We also evaluate our method against preprocessor-blind attacks on ImageNet. Regarding classifier architectures, we opt for two prevalent models: ResNet-50 [24] for ImageNet and WideResNet-28-10 [25] for CIFAR-10 and CIFAR-100. We fine-tune the WideResNet on images generated from the diffusion network for 15 epochs. We utilize the Adam optimizer with a \(10^{-3}\) learning rate. For generating captions, we use pre-trained BLIP [17] with default hyperparameters, and for the diffusion process, we use a pre-trained latent diffusion model from [9] with default parameters except for the noise parameter \(t\). We set \(t\) to 0.5 for CIFAR-10 and CIFAR-100, and 0.1 for ImageNet. We will be releasing the code soon. **Adversarial attacks:** We test our algorithm against preprocessor-blind PGD attacks, in which the adversary has complete visibility into the classifier but is uninformed about the purification model. We also evaluate our algorithm against strong adaptive attacks, which involve more complex scenarios: our purification algorithm iterates through neural networks, potentially leading to obfuscated gradients. To rigorously test our defense mechanism, we use potent adaptive attacks, such as Backward Pass Differentiable Approximation (BPDA) [29] and its variations. We experiment with the basic form of BPDA, where the purification function is approximated as the identity function. We further validate its robustness using Expectation over Transformation (EOT) attacks [29]. \begin{table} \begin{tabular}{l c c c} \hline \hline **Methods** & \multicolumn{2}{c}{**Accuracy(\%)**} & **Architecture** \\ & Natural & Robust & \\ \hline Raw WideResNet & 95.80 & 0.00 & WRN-28-10 \\ Adv. purification methods & & & \\ **LGAP** & 90.03 & 71.68 & WRN-28-10 \\ Yoon et al. [6]* & & & \\ (\(\sigma=0.1\))* & 93.09 & **85.45** & WRN-28-10 \\ (\(\sigma=0.25\))* & 86.14 & 80.24 & WRN-28-10 \\ Hill et al. [26]* & 84.12 & 78.91 & WRN-28-10 \\ Shi et al. [5]* & **96.93** & 63.10 & WRN-28-10 \\ Du et al. [27]* & 48.7 & 37.5 & WRN-28-10 \\ Grathwohl et al.
[22]* & 75.5 & 23.8 & WRN-28-10 \\ Song et al. [3]* & & & \\ Natural + PixelCNN & 82 & 61 & ResNet-62 \\ AT + PixelCNN & 90 & 70 & ResNet-62 \\ \hline Adv training methods & & & \\ Madry et al. [2]* & 87.3 & 70.2 & ResNet-56 \\ Dong et al. [28]* & 84.98 & 51.29 & ResNet-18 \\ \hline \hline \end{tabular} \end{table} Table 1: Results for the preprocessor-blind PGD attack on CIFAR-10, within an \(L_{\infty}\) \(\epsilon\)-ball, where \(\epsilon=8/255\). Data sourced from existing literature is indicated by an asterisk *. Figure 2: Purified samples given by LGAP. The first, second, and third rows contain clean, adversarial, and purified samples. The BLIP-generated captions are given on the right, and the predicted label is on top of the image. ### Comparison with state of the art The results for the preprocessor-blind setup on CIFAR-10, shown in Table 1, indicate that our method gives better robust performance than most previous methods, specifically adversarial training methods, while maintaining comparable performance on natural images. Our method achieves a robust accuracy of 71.68%, which clearly outperforms seven out of ten methods, including two adversarial training methods and five adversarial purification methods. A snapshot of adversarial samples and their corresponding purified images is given in Figure 2. We further extend our evaluation to the CIFAR-100 dataset, with the robust performance comparisons listed in Table 2. Unlike other methods, such as the one by Yoon _et al._, which demands training a score network and tuning the noise parameter, our method, LGAP, delivers competitive results with substantially lower computational overhead. Table 3 shows the robust accuracy of our method against the BPDA attack for CIFAR-10. Our method outperforms most previous techniques of adversarial purification and adversarial training. The gap in accuracy between our method and some recent techniques remains because those methods train the purification model on CIFAR-10. Yoon _et al._ and Hill _et al._, which show better robust performance, train diffusion and EBM networks on CIFAR-10 for 200,000 iterations [6, 26], whereas our method requires no such training. Table 4 shows the robust performance of our method for ImageNet. Due to the high computational cost of some attacks, we evaluate on a fixed set of 2048 images, as robust accuracy does not change much on the sampled subset compared to the whole set [7]. We can see that even against a strong adaptive attack such as BPDA-40, LGAP attains an accuracy of 45.31%, demonstrating the efficacy of the proposed method. The enhanced performance of the method can be attributed to the diffusion model being trained on ImageNet. Similarly, a diffusion model trained on CIFAR-10 is expected to yield improved results when applied to CIFAR-10 classification. ## 5 Conclusion Our method addressed key limitations in adversarial defense by introducing a language-guided purification approach. Unlike traditional methods, which require extensive computational resources and specific attack knowledge, our method leverages pre-trained diffusion models and caption generators. This reduces computational overhead and enhances scalability. Empirical tests show our approach is robust, outperforming conventional methods in several metrics, despite trailing some diffusion-based methods. Notably, this performance is achieved with minimal training and does not require adversarial samples or training the score or diffusion networks, thus broadening the method's applicability and setting a new efficiency standard.
Our method underscores the generalizability of deep learning models trained on large datasets and points to avenues for future research, especially in model generalization. \begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & \multicolumn{2}{c}{**Accuracy**} & **Architecture** \\ Attacks & Natural & Robust & \\ \hline Undefended & 76.76 & 0 & ResNet-50 \\ **LGAP** & **69.09** & & ResNet-50 \\ AA & & **57.12** & \\ BPDA-40 & & 45.31 & \\ PGD-10 & & 52.73 & \\ \hline Adv training methods & & & \\ Salman et al. [33]* & 64.02 & & ResNet-50 \\ AA & & 34.96 & \\ Wong et al. [21]* & 55.62 & & ResNet-50 \\ AA & & 26.24 & \\ \hline \hline \end{tabular} \end{table} Table 4: Preprocessor-blind attacks for ImageNet, \(\epsilon=4/255\). \begin{table} \begin{tabular}{l c c} \hline \hline **Methods** & \multicolumn{2}{c}{**Accuracy**} \\ & Natural & Robust \\ \hline **LGAP** & 58.71 & 39.82 \\ Yoon et al. [6]* & **77.83** & **43.21** \\ \hline Adversarial training methods & & \\ Madry et al. [2]* & 59.58 & 25.47 \\ Li et al. [30]* & 61.01 & 28.88 \\ \hline \hline \end{tabular} \end{table} Table 2: Preprocessor-blind PGD attack for CIFAR-100, \(\epsilon=8/255\). Data sourced from existing literature is indicated by an asterisk *. \begin{table} \begin{tabular}{l c c} \hline \hline **Method** & \multicolumn{2}{c}{**Accuracy**} \\ & Natural & Robust \\ \hline **LGAP** & 58.71 & 39.82 \\ Yoon et al. [6]* & **77.83** & **43.21** \\ \hline Adversarial training methods & & \\ Madry et al. [2]* & 59.58 & 25.47 \\ Li et al. [30]* & 61.01 & 28.88 \\ \hline \hline \end{tabular} \end{table} Table 3: Adaptive attacks for CIFAR-10, \(\epsilon=8/255\).
2305.19907
Too small to fail: characterizing sub-solar mass black hole mergers with gravitational waves
The detection of a sub-solar mass black hole could yield dramatic new insights into the nature of dark matter and early-Universe physics, as such objects lack a traditional astrophysical formation mechanism. Gravitational waves allow for the direct measurement of compact object masses during binary mergers, and we expect the gravitational-wave signal from a low-mass coalescence to remain within the LIGO frequency band for thousands of seconds. However, it is unclear whether one can confidently measure the properties of a sub-solar mass compact object and distinguish between a sub-solar mass black hole or other exotic objects. To this end, we perform Bayesian parameter estimation on simulated gravitational-wave signals from sub-solar mass black hole mergers to explore the measurability of their source properties. We find that the LIGO/Virgo detectors during the O4 observing run would be able to confidently identify sub-solar component masses at the threshold of detectability; these events would also be well-localized on the sky and may reveal some information on their binary spin geometry. Further, next-generation detectors such as Cosmic Explorer and the Einstein Telescope will allow for precision measurement of the properties of sub-solar mass mergers and tighter constraints on their compact-object nature.
Noah E. Wolfe, Salvatore Vitale, Colm Talbot
2023-05-31T14:40:35Z
http://arxiv.org/abs/2305.19907v2
# Too small to fail: characterizing sub-solar mass black hole mergers with gravitational waves ###### Abstract The detection of a sub-solar mass black hole could yield dramatic new insights into the nature of dark matter and early-Universe physics, as such objects lack a traditional astrophysical formation mechanism. Gravitational waves allow for the direct measurement of compact object masses during binary mergers, and we expect the gravitational-wave signal from a low-mass coalescence to remain within the LIGO frequency band for thousands of seconds. However, it is unclear whether one can confidently measure the properties of a sub-solar mass compact object and distinguish between a sub-solar mass black hole or other exotic objects. To this end, we perform Bayesian parameter estimation on simulated gravitational-wave signals from sub-solar mass black hole mergers to explore the measurability of their source properties. We find that the LIGO/Virgo detectors during the O4 observing run would be able to confidently identify sub-solar component masses at the threshold of detectability; these events would also be well-localized on the sky and may reveal some information on their binary spin geometry. Further, next-generation detectors such as Cosmic Explorer and the Einstein Telescope will allow for precision measurement of the properties of sub-solar mass mergers and tighter constraints on their compact-object nature. ## 1 Introduction If some fraction of dark matter is composed of black holes, or gravitationally collapses to form black holes, gravitational waves (GWs) may offer the opportunity to directly probe the nature of dark matter. Recent work has proposed that previous GW signals consistent with stellar-mass black holes can be sourced from black holes with a primordial origin [1; 2; 3; 4]; however, there is currently no preference for those formation models over astrophysical channels [2; 3; 5; 6; 7; 8; 9; 10; 11]. For current- and next-generation ground-based gravitational-wave detectors like Advanced LIGO [12], Advanced Virgo [13], KAGRA [14], Cosmic Explorer [15], and the Einstein Telescope [16], cleaner targets for dark matter searches may be compact object mergers involving a black hole with mass \(\lesssim 1\)\(M_{\odot}\). The signals emitted by these events lie firmly in the frequency range accessible to ground-based detectors _and_ are immediately distinguished from traditional astrophysical formation channels by their mass alone1. While microlensing surveys have placed constraints on the fraction of dark matter composed of \(\mathcal{O}(1\)\(M_{\odot})\) black holes, the constraints may depend on the assumed mass distribution of the objects as well as the distribution of mass in the halo of the Milky Way [18]. Thus, it remains possible for some dark matter to be found in sub-solar mass black holes. There are two categories of hypothesized sub-solar mass black holes: primordial black holes and dark matter black holes. Footnote 1: As their mass lies below the Chandrasekhar limit of \(\sim\)1.4 \(M_{\odot}\)[17]. Primordial black holes (PBHs) could have formed from the gravitational collapse of overdensities in the early Universe [19; 20]. Such objects have been proposed as a population of cold, collisionless dark matter [21]. The existence and mass distribution of primordial black holes depend strongly on their formation mechanism and underlying density power spectrum.
Based on horizon scale considerations, PBHs with a sharply peaked mass distribution near \(\mathcal{O}(1\)\(M_{\odot})\) could have formed during the radiation-dominated era near the quark-hadron phase transition; the abundance of PBH masses may depend on the background cosmology [22; 23; 24], the equation of state of the early Universe [25], and the (non-)Gaussianity of the density fluctuations [26; 27; 28] (for a review of PBH formation in the radiation-dominated era, see [18]). PBH formation could have been driven by other physics during or beyond the radiation-dominated era (for a review of such scenarios, see [21]). Fluctuations generated by inflation could seed primordial black holes [29; 30; 31; 32; 33], which would be directly sensitive to the dynamics of inflation. Other structures, like cosmic loops [34], bubbles of broken symmetry [35], and domain walls [36], could also collide or collapse to form primordial black holes. Additionally, the spectrum of PBH masses would be sensitive to accretion physics [18], and their formation into binaries depends on their clustering dynamics [37; 38; 39; 40; 41; 42] and their merger rate, which are theoretically uncertain [43; 4]. Alternatively, black holes could form directly from the collapse of particle dark matter in certain dissipative dark matter models [44; 45; 46; 47; 48; 49; 50; 51; 52]. The precise mass distribution of such "dark matter black holes" (DBHs) is strongly dependent on the microphysical details of the dark matter particles, such as the possible dark matter species, their masses, and interaction cross sections. The merger rate of DBHs would also depend on the larger-scale dynamics of dark matter halos. Both the microphysical details and galactic-scale dynamics of dark matter are open questions [53]. Ref. [45] studied a set of atomic dark matter scenarios that lower the Chandrasekhar mass with a heavier dark-proton analog, forming black holes \(\lesssim 1\)\(M_{\odot}\). There, the dark matter microphysics is encoded in the dark matter cooling rates, and halo dynamics are encoded in the fraction of dark matter available to dynamically cool. Thus, the discovery of a sub-solar mass black hole would probe the nature of dark matter and provide critical constraints on a rich theoretical landscape of physics in the early- and modern-Universe. If such objects can form binaries and merge within a Hubble time, gravitational waves are the most promising method of detection. Even the non-detection of sub-solar mass compact objects is already providing unique constraints on the parameter space of dark matter [54; 55; 56; 57]. However, fully realizing this promise hinges on positively identifying a compact object involved in a merger as (1) \(\lesssim 1~{}M_{\odot}\) in mass and (2) a black hole. This first question has previously been studied for astrophysical, super-solar mass black holes in next-generation detectors [58, for example], as well as for sub-solar mass neutron stars [59]. Given a gravitational-wave signal from the coalescence of compact objects, the identification of a sub-solar mass black hole could be complicated by "mimickers" such as sub-solar mass neutron stars or boson stars [60]. While such objects would themselves be astrophysically exotic [61; 62; 63] and potentially sourced from dark matter [64], it is not yet clear how well current methods of gravitational-wave data analysis will distinguish between sub-solar mass black holes and these alternatives. For example, when analyzing a low-significance sub-solar mass trigger, Ref.
[65] could not exclude a neutron star origin for the sub-solar mass object.2 Footnote 2: See Ref. [66] for an analysis focusing on distinguishing light super-solar mass black holes from neutron stars. In this work, we estimate parameters for a set of simulated signals from sub-solar mass black hole mergers in current- and next-generation gravitational wave detectors to understand the feasibility of identifying sub-solar mass black holes with gravitational-wave signals across a range of binary black hole parameters. First, we inspect the constraints we can achieve on the component black hole masses, to determine if we can confidently identify that a compact object is sub-solar mass in nature at all. Then, we inspect two additional parameter sets for these signals (the spins of the compact objects and the sky location of the binary), both of which may rule out neutron stars in such signals. We conclude with a discussion of our main findings. ## 2 Methods ### Gravitational-Wave Parameter Estimation To measure the source properties of gravitational-wave signals, we perform Bayesian parameter estimation. Bayes' Theorem states that \[p(\theta|d)=\frac{\mathcal{L}(d|\theta)\pi(\theta)}{\mathcal{Z}(d)} \tag{1}\] where \(p(\theta|d)\) is the posterior probability that the data \(d\) contains a signal described by source parameters \(\theta\), \(\mathcal{L}\) is the likelihood of observing the signal given some source parameters, \(\pi(\theta)\) is the prior probability of \(\theta\), and \(\mathcal{Z}\) is a normalization commonly referred to as the evidence. In the case of gravitational-wave parameter estimation for emission from a quasi-circular black hole binary, \(\theta\) includes 8 intrinsic parameters of the component black holes (their masses and spin vectors) and 7 extrinsic parameters of the binary (including its luminosity distance and location on the sky), for 15 parameters total. We use the Whittle likelihood approximation in the frequency domain for the residual of the data minus the astrophysical signal3. Here, the goal of our analysis will be to compute the posterior probability of \(\theta\) for a series of simulated signals. In this work, we use the nested sampling algorithm [68; 69] implemented by dynesty [70] to estimate \(p(\theta|d)\). Footnote 3: In other words, we treat the noise as stationary and Gaussian-distributed. Note this may not necessarily be true in next-generation detectors, which will have many overlapping signals, but in principle our results hold, as, e.g., one could subtract out this astrophysical “noise” [67]. As the nested sampling algorithm iterates, we must evaluate \(\mathcal{L}(d|\theta)\), which requires evaluating the waveform model at some proposal \(\theta\). Signals from merging binaries with relatively low total mass or mass ratio will be relatively long; for the component masses listed in Table 1, signals will be \(\mathcal{O}(10^{3})\) seconds in length, compared to \(\mathcal{O}(10^{2})\) seconds for the larger total mass, near equal-mass binary neutron star GW170817 [71]. This dramatically increases the number of frequencies at which we need to evaluate proposal waveforms for a given \(\theta\) when evaluating \(\mathcal{L}\), which is computationally expensive. Instead, we use a "heterodyned" likelihood [72], an approximation also known as "relative binning" [73; 74], which well-approximates \(\mathcal{L}\) at far fewer frequencies by expanding it around its value at some fiducial parameters.
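For reference, the Whittle log-likelihood itself reduces to a short numpy expression under the stationary Gaussian-noise assumption stated above; the sketch below uses placeholder PSD and waveform arrays and is a generic transcription, not the bilby implementation.

```python
import numpy as np

def whittle_log_likelihood(data_fd, template_fd, psd, df):
    """Whittle log-likelihood for frequency-domain strain data (up to a
    constant): ln L = -2 * df * sum |d(f) - h(f)|^2 / S_n(f), where d is the
    data, h the proposal waveform, and S_n the one-sided noise PSD."""
    residual = data_fd - template_fd
    return -2.0 * df * float(np.sum(np.abs(residual) ** 2 / psd))

# Placeholder arrays standing in for detector data, a proposal waveform, and
# the detector PSD on a common frequency grid.
freqs = np.linspace(20.0, 1024.0, 4096)
df = freqs[1] - freqs[0]
psd = 1e-46 * (freqs / 100.0) ** -4 + 1e-47           # toy noise curve
template = 1e-24 * np.exp(2j * np.pi * freqs * 0.01)  # toy waveform
data = template.copy()                                 # zero-noise realization
print(whittle_log_likelihood(data, template, psd, df))  # 0.0 at the truth
```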
For simulated signals, we choose the fiducial parameters to be the true source parameters. We also forego effects due to the rotation of the Earth, which would cause the antenna pattern to vary in time and increase the computational expense of \(\mathcal{L}\). We carry out parameter estimation using bilby[75; 76], which implements a heterodyned likelihood as in Ref. [77]; for details on the heterodyned likelihood, our priors, and sampler settings, see Appendix A. ### Simulated Signals We consider a set of simulated gravitational-wave signals from binary black hole mergers involving both sub-solar and super-solar mass components. In Table 1, we show the pairs of detector-frame component masses \((\tilde{m}_{1},\tilde{m}_{2})\) of the signals studied in this work; here, \(\tilde{m}_{1}\geq\tilde{m}_{2}\). For each of these pairs, we consider mergers where the more massive component has a dimensionless spin magnitude \(a_{1}\) of 0.6, 0.8, and 0.9. These spins are chosen to be greater than the observed maximum spin for a neutron star of \(a_{\rm NS}=0.4\)[78]. The spin of the less massive component, \(a_{2}\), is always chosen as zero. At each of these masses and spins, we also consider different orientations of the spin vector of the more massive component, \(\vec{S}_{1}\), characterized by the "tilt" zenith angle \(\theta_{1}\) between \(\vec{S}_{1}\) and the orbital angular momentum \(\vec{L}\). When the black hole spins are misaligned with \(\vec{L}\) (e.g. \(\theta_{1}>0\)), all angular momenta in the system precess, driven by inertial frame dragging [79]. Here, we consider each of two cases, \(\theta_{1}=0\) and \(\theta_{1}=\pi/2\). For each set of intrinsic parameters \((\tilde{m}_{1},\tilde{m}_{2},a_{1},\theta_{1})\), we vary the luminosity distance of the source, \(d_{L}\), to achieve one of the network signal-to-noise ratios (SNRs) listed for each \((\tilde{m}_{1},\tilde{m}_{2})\) in Table 1. \begin{table} \begin{tabular}{c c c c c c c c} \(\tilde{m}_{1}\) [\(M_{\odot}\)] & \(\tilde{m}_{2}\) [\(M_{\odot}\)] & \(q\) & O4 SNRs & 3G SNRs & Redshift & \(m_{1}\) [\(M_{\odot}\)] & \(m_{2}\) [\(M_{\odot}\)] \\ \hline 0.5 & 0.5 & 1.00 & 10.0 & 314.7 & 0.009 & 0.495 & 0.495 \\ & & & 29.8 & 934.1 & 0.028 & 0.486 & 0.486 \\ 0.9 & 0.9 & 1.00 & 7.5 & 235.5 & 0.028 & 0.875 & 0.875 \\ & & & 16.0 & 502.4 & 0.059 & 0.850 & 0.850 \\ 0.9 & 0.5 & 0.56 & 12.5 & 392.0 & 0.009 & 0.891 & 0.495 \\ & & & 37.1 & 1163.4 & 0.028 & 0.875 & 0.486 \\ 1.4 & 0.5 & 0.36 & 21.2 & 665.5 & 0.009 & 1.387 & 0.495 \\ & & & 42.4 & 1331.6 & 0.019 & 1.374 & 0.491 \\ 1.0 & 0.1 & 0.10 & 14.3 & 447.5 & 0.009 & 0.991 & 0.099 \\ 1.4 & 0.1 & 0.07 & 13.7 & 429.0 & 0.009 & 1.387 & 0.099 \\ \end{tabular} \end{table} Table 1: Detector-frame component masses \(\tilde{m}_{1},\tilde{m}_{2}\) and the mass ratio \(q\) of the simulated signals studied in this work. At each row \((\tilde{m}_{1},\tilde{m}_{2})\) listed here, we simulated a signal for each of \(a_{1}\) chosen from \(\{0.6,0.8,0.9\}\) and the tilt angle \(\theta_{1}\) chosen as 0 (no precession) or \(\pi/2\) (precessing). For each set of intrinsic parameters \((\tilde{m}_{1},\tilde{m}_{2},a_{1},\theta_{1})\), we simulate a source in an O4 and 3G detector network with each of the SNRs shown by varying the distance. We include representative redshifts and source-frame component masses \(m_{1},m_{2}\) for each combination of detector-frame masses and SNR; the redshifts may change by as much as \(\pm 0.001\) to keep the SNR constant as the spin magnitude and tilt are varied.
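The redshift bookkeeping in Table 1 can be checked with a few lines using astropy's Planck15 cosmology, the cosmology assumed in this work; this is illustrative only and not part of the analysis pipeline.

```python
import astropy.units as u
from astropy.cosmology import Planck15, z_at_value

def source_frame_masses(m1_det, m2_det, d_lum_mpc):
    """Convert detector-frame masses [Msun] to source-frame masses, given a
    luminosity distance [Mpc] and the Planck15 cosmology."""
    z = float(z_at_value(Planck15.luminosity_distance, d_lum_mpc * u.Mpc))
    return m1_det / (1.0 + z), m2_det / (1.0 + z), z

# Nearest source in the text: d_L = 42 Mpc corresponds to z ~ 0.009, so a
# 0.5 + 0.5 Msun detector-frame pair is ~0.495 + 0.495 Msun at the source.
m1, m2, z = source_frame_masses(0.5, 0.5, 42.0)
print(f"z = {z:.3f}, m1 = {m1:.3f} Msun, m2 = {m2:.3f} Msun")
```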
We also note that the detector-frame black hole masses are larger than the source-frame component masses \(m_{1},m_{2}\), due to cosmological redshift, by at most 6%. This results in a total of 60 different sources. Our nearest source is at \(d_{L}=42\) Mpc (a redshift \(z\sim 0.009\) assuming the Planck15 cosmology [80]), and our furthest is at \(d_{L}=270.7\) Mpc (\(z\sim 0.059\)). Additional source parameters are listed in Table 3. We simulate each signal in a current-generation network of LIGO-Hanford, LIGO-Livingston, and Virgo at design sensitivities for their fourth observing run (O4)4, and a next-generation (XG) network of one Cosmic Explorer at the current site of LIGO-Hanford and the Einstein Telescope at the site of Virgo5. We use zero-noise realizations of the detector sensitivities; posteriors estimated in zero noise will be equivalent to the average posterior estimated in many Gaussian-noise realizations [83; 84; 85; 86]. For all signals considered, we model the gravitational waveform with the phenomenological spin-precessing frequency-domain model IMRPhenomXP[87]. For computational expediency, we forego higher-order gravitational-wave modes in our analysis; these are expected to be measurable during the merger and ringdown of systems with large total mass or extreme mass ratios.6 Thus, our constraints without higher-order modes are conservative upper limits. Our choices of component masses complement the parameter space studied by recent LIGO-Virgo-KAGRA (LVK) sub-solar mass compact object searches as well as that of Ref. [55]. The most recent LVK search considered source-frame masses \(m_{1}\in[0.2,10]\ M_{\odot}\) and \(m_{2}\in[0.2,1]\ M_{\odot}\)[54]; Ref. [55] considered detector-frame masses \(\tilde{m}_{1}\in[0.1,100]\ M_{\odot}\), with \(\tilde{m}_{2}\in[0.01,1]\ M_{\odot}\) for \(\tilde{m}_{1}>20\ M_{\odot}\) and \(\tilde{m}_{2}\in[0.1,1]\ M_{\odot}\) for \(\tilde{m}_{1}<20\ M_{\odot}\). As IMRPhenomXP is only valid for mass ratios \(q=m_{2}/m_{1}>1/20\)[89; 90], we choose our minimum \(\tilde{m}_{2}=0.1\ M_{\odot}\), consistent with Ref. [55] and below the LVK search boundary. Footnote 4: Using sensitivity curves from [81]. Footnote 5: Using sensitivity curves from [82]. Footnote 6: We note that [88] recently showed that higher-order modes can improve the measurement of distance and inclination angle for binary neutron star mergers, comparable to the sub-solar mass black hole mergers studied in this work. ## 3 Results ### Component masses #### 3.1.1 Current-generation Detectors Our aim is to determine when we can identify that one or both of the component compact objects has mass \(<1\ M_{\odot}\). In general, one expects the estimation of mass parameters to improve as the total mass of a binary decreases and the number of inspiral cycles increases; this is why the chirp mass of binary neutron stars is measured much more precisely than that of binary black holes [91; 92; 93] (see also Tab. IV of Ref. [94]). In Figure 1, we show marginal posteriors on the source-frame mass \(m_{1}\) for the simulated signals described in Section 2.2, sorted by mass ratio and whether or not the system is precessing.
Posteriors are also colored by network SNR; as expected, as the SNR goes from lower values (cooler tones) to higher values (warmer tones), the width of the posteriors decreases. In all cases, we recover the simulated value (thin black lines), and do so more confidently with decreasing \(q\). This is simply due to the greater distinguishability between \(m_{1}\) and \(m_{2}\) at lower mass ratios. We achieve the best measurement of \(m_{1}\) for the precessing, \(q=0.07\) source with \(a_{1}=0.9\), with a 90% credible interval of \(1.7\times 10^{-2}\ M_{\odot}\). The worst measurement occurs with the \(q=1\), \(a_{1}=0.9\), non-precessing source with an SNR of 7.5, with a 90% credible interval of 0.84 \(M_{\odot}\). In general, the network SNR, rather than \(a_{1}\), drives the measurement of \(m_{1}\); however, we observe that the width of the credible interval depends weakly on \(a_{1}\), at the \(\sim\)10% level, for non-precessing sources. In the most extreme example, the 90% credible interval for the non-precessing \(q=1\) source at an SNR of 7.5 decreases from 0.84 \(M_{\odot}\) to 0.73 \(M_{\odot}\) as \(a_{1}\) decreases from 0.9 to 0.6 (whereas for the precessing system with the same mass parameters, the credible interval only shrinks from 0.68 \(M_{\odot}\) to 0.61 \(M_{\odot}\)). We can also quantify the "efficiency" of measuring \(m_{1}\), which we evaluate with the ratio between the width of the 90% credible interval and the SNR; we call this ratio \(\alpha\). As for the width of the credible interval, we find that this ratio typically changes at most at the \(\sim\)10% level as we vary \(a_{1}\) and keep the SNR the same. It varies noticeably as the SNR changes, indicating a nonlinear improvement in our measurement of \(m_{1}\) with increasing SNR. For example, for non-precessing, \(q=0.36\) sources, \(\alpha\) improves from \(\sim\)\(1.5\times 10^{-2}\)\(M_{\odot}\) to \(\sim\)\(1.8\times 10^{-3}\)\(M_{\odot}\) as the SNR roughly doubles from 21.2 to 42.4. Figure 1: Marginal posterior distributions on the source-frame mass of the heavier black hole, \(m_{1}\), for the simulated signals injected into an O4-design sensitivity network of LIGO-Hanford, LIGO-Livingston, and Virgo. Results for non-precessing (\(\theta_{1}=0\)) sources are shown on the left and those for precessing sources (\(\theta_{1}=\pi/2\)) on the right. The posteriors are colored by the network SNR of the signal. The linestyle of the posterior reflects the dimensionless spin magnitude of the more massive black hole, \(a_{1}=0.6\) (dotted), 0.8 (dashed), or 0.9 (solid). Posteriors are organized by increasing mass ratio of the source, from top to bottom. Thin black lines rising from the \(m_{1}\)-axis indicate the true value of \(m_{1}\), and a grey line is included at the fiducial mass scale of 1 \(M_{\odot}\). We note that these posteriors are not normalized so that they may be visualized together. Comparing each posterior to the vertical grey line at \(1\ M_{\odot}\), we are not able to confidently exclude \(m_{1}\geq 1\ M_{\odot}\) for sources with \(m_{1}<1\ M_{\odot}\) (bottom three rows). Among sources with \(q=0.56\), the weakest exclusion of a super-solar mass object occurs for the non-precessing source with \(a_{1}=0.9\), at an SNR of 12.5, where \(1\ M_{\odot}\) occurs at the 60.9% percentile of the marginal posterior distribution.
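The summary statistics quoted throughout this section (90% credible-interval widths and the percentile at which \(1\ M_{\odot}\) falls) are simple operations on posterior samples; the sketch below uses made-up Gaussian samples purely for illustration.

```python
import numpy as np

def summarize_mass_posterior(samples, threshold=1.0):
    """Width of the symmetric 90% credible interval and the percentile
    at which `threshold` (e.g. 1 Msun) falls in the marginal posterior."""
    lo, hi = np.percentile(samples, [5.0, 95.0])
    pct_at_threshold = 100.0 * np.mean(samples < threshold)
    return hi - lo, pct_at_threshold

# Made-up posterior samples for a sub-solar m1, roughly centered at 0.9 Msun.
rng = np.random.default_rng(0)
m1_samples = rng.normal(0.90, 0.08, size=50_000)
width, pct = summarize_mass_posterior(m1_samples)
print(f"90% CI width = {width:.2f} Msun; 1 Msun at the {pct:.1f}% percentile")
```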
At \(q=1\), with \(\tilde{m}_{1}=\tilde{m}_{2}=0.9\ M_{\odot}\), this exclusion is weaker, with \(1\ M_{\odot}\) occurring between the 23.3% (\(a_{1}=0.9\), non-precessing source with an SNR of 7.5) and 47.0% (\(a_{1}=0.9\), precessing source with an SNR of 16) percentiles. Turning to Figure 2, we show marginal posteriors on the source-frame mass \(m_{2}\). This figure is organized in the same manner as Figure 1. The relationships between the width of the credible intervals, \(q\), and SNR are qualitatively much the same as for \(m_{1}\). However, these posteriors are noticeably narrower than those for \(m_{1}\) (so much so that for the lowest mass ratios, we provide insets to resolve detail in the histograms). Figure 2: Marginal posterior distributions on the source-frame mass of the lighter black hole, \(m_{2}\), for the simulated signals injected into an O4-design sensitivity network of LIGO-Hanford, LIGO-Livingston, and Virgo. Results for non-precessing (\(\theta_{1}=0\)) sources are shown on the left and those for precessing (\(\theta_{1}=\pi/2\)) on the right. The posteriors are colored by the network SNR of the signal. The linestyle of the posterior reflects the dimensionless spin magnitude of the more massive black hole, \(a_{1}=0.6\) (dotted), 0.8 (dashed), or 0.9 (solid). Posteriors are organized by increasing mass ratio of the source, from top to bottom. Thin black lines rising from the \(m_{2}\)-axis indicate the true value of \(m_{2}\), and a grey line is included at the fiducial mass scale of \(1\ M_{\odot}\). We note that these posteriors are not normalized so that they may be visualized together. We observe that \(m_{2}\geq 1\ M_{\odot}\) is excluded for every simulated signal. In particular, our best (worst) measurement occurs for the precessing (non-precessing) source with \(q=0.07,a_{1}=0.8\) (\(q=1\), \(\tilde{m}_{1}=\tilde{m}_{2}=0.9\)\(M_{\odot}\), \(a_{1}=0.8\), and network SNR of 7.5), achieving a 90% credible interval of \(9.1\times 10^{-4}\)\(M_{\odot}\) (\(0.36\)\(M_{\odot}\)). This improvement follows from the definition of the mass ratio; since \(q\) is linear in \(m_{2}\) but \(\propto 1/m_{1}\), at low mass ratios \(q\) is more sensitive to \(m_{2}\) than to \(m_{1}\). Importantly, we observe that _every_ posterior excludes \(m_{2}\geq 1\)\(M_{\odot}\). We also stress that the hard edge observed towards 1 \(M_{\odot}\) in the \(q=1\) results reflects the posteriors railing against the prior constraint that \(m_{1}\geq m_{2}\). So, at least for systems equivalent to the injections studied in this work, we could confidently report the discovery of a sub-solar mass compact object at SNRs as low as 7.5. For completeness, we show marginal posteriors on the source-frame chirp mass \(\mathcal{M}\) and mass ratio \(q\) in the O4 network in Appendix B. #### 3.1.2 Next-generation Detectors Figure 3: Marginal posterior distributions on the source-frame mass of the heavier black hole, \(m_{1}\), for the simulated signals injected into a network of Cosmic Explorer and the Einstein Telescope. Results for non-precessing (\(\theta_{1}=0\)) sources are shown on the left and those for precessing sources (\(\theta_{1}=\pi/2\)) on the right. The posteriors are colored by the network SNR of the signal. The linestyle of the posterior reflects the dimensionless spin magnitude of the more massive black hole, \(a_{1}=0.6\) (dotted), \(0.8\) (dashed), or \(0.9\) (solid).
Posteriors are organized by increasing mass ratio of the source, from top to bottom. Thin black lines rising from the \(m_{1}\)-axis indicate the true value of \(m_{1}\), and a grey line is included at the fiducial mass scale of 1 \(M_{\odot}\). We note that these posteriors are not normalized so that they may be visualized together. In Figures 3 and 4, we show marginal posteriors on the source-frame component masses \(m_{1}\) and \(m_{2}\), respectively, for signals simulated in the XG network of one Cosmic Explorer and the Einstein Telescope. These figures are organized in the same manner as Figures 1 and 2 in the previous subsection. For the more massive component, \(m_{1}\), we correctly characterize the compact object as sub- or super-solar in mass for all the signals studied here, except in the case of non-precessing sources with detector-frame masses \(\tilde{m}_{1}=\tilde{m}_{2}=0.9\;M_{\odot}\). There, \(1\;M_{\odot}\) occurs at worst at the \(\gtrsim 99.9\%\) percentile, for the \(a_{1}=0.9\) source. For the less massive component, \(m_{2}\), we correctly characterize the compact object as sub-solar in nature in all cases, as in current-generation detectors. Except for signals with \(q=1\), we are also able to constrain both \(m_{1}\) and \(m_{2}\) away from the prior and, in particular, without observing any railing against the mass ratio prior. Again, we stress that the hard edge on the \(q=1\) posteriors is a result of the definition \(m_{1}\geq m_{2}\). Figure 4: Marginal posterior distributions on the source-frame mass of the lighter black hole, \(m_{2}\), for the simulated signals injected into a network of Cosmic Explorer and the Einstein Telescope. Results for non-precessing (\(\theta_{1}=0\)) sources are shown on the left and those for precessing (\(\theta_{1}=\pi/2\)) on the right. The posteriors are colored by the network SNR of the signal. The linestyle of the posterior reflects the dimensionless spin magnitude of the more massive black hole, \(a_{1}=0.6\) (dotted), \(0.8\) (dashed), or \(0.9\) (solid). Posteriors are organized by increasing mass ratio of the source, from top to bottom. Thin black lines rising from the \(m_{2}\)-axis indicate the true value of \(m_{2}\), and a grey line is included at the fiducial mass scale of \(1\;M_{\odot}\). We note that these posteriors are not normalized so that they may be visualized together. We achieve our best (worst) measurement of \(m_{1}\) for the precessing (non-precessing) source with \(q=0.07\), \(a_{1}=0.8\), and a network SNR of 476.0 (\(q=1\), \(\tilde{m}_{1}=\tilde{m}_{2}=0.9\)\(M_{\odot}\), \(a_{1}=0.9\), and SNR of 235.5), with a 90% credible interval of \(3.5\times 10^{-4}\)\(M_{\odot}\) (0.11 \(M_{\odot}\)). For \(m_{2}\), we achieve our best and worst measurements with the same sources, which have credible intervals of \(1.7\times 10^{-5}\)\(M_{\odot}\) (\(q=0.07\), \(a_{1}=0.8\), and SNR of 476.0) and 0.10 \(M_{\odot}\) (\(q=1\), \(\tilde{m}_{1}=\tilde{m}_{2}=0.9\)\(M_{\odot}\), \(a_{1}=0.9\), and SNR of 235.5), respectively. This improvement may be driven by the same nonlinear improvement in the measurement efficiency \(\alpha\) seen in the O4 detectors, which continues in the XG network. Previously, we saw that for the non-precessing, \(q=0.36\) source, \(\alpha\) for \(m_{1}\) is quartered as the SNR doubles; for the same source in an XG network, \(\alpha\) decreases from \(\sim\)\(1.7\times 10^{-5}\)\(M_{\odot}\) to \(\sim\)\(5.8\times 10^{-6}\)\(M_{\odot}\) as the SNR increases from \(\sim\)666 to \(\sim\)1333.
We see a similar pattern in the measurement efficiency of \(m_{2}\). Overall, we observe an order-of-magnitude improvement in our ability to measure the component masses in XG detectors compared to the O4 network. We show marginal posteriors on the source-frame chirp mass and mass ratio in the XG network in Appendix B. ### Spins #### 3.2.1 Effective Spins in Current-generation Detectors Figure 5: Prior distribution \(\pi(\chi_{\rm eff})\) (top row) and marginal posterior distributions on the effective spin \(\chi_{\rm eff}\) for the quietest signals for a given set of intrinsic parameters \((\tilde{m}_{1},\tilde{m}_{2},a_{1},\theta_{1})\), injected into an O4-design sensitivity network of LIGO-Hanford, LIGO-Livingston, and Virgo (remaining rows). These are sorted, from top to bottom, by increasing dimensionless spin magnitude on the heavier black hole, \(a_{1}\). We color the posterior distributions by the true mass ratio of the source. In the left column, we show results for non-precessing (\(\theta_{1}=0\)) sources, where the truth is indicated with a dashed line colored according to the true \(q\). In the right column, we show results for precessing (\(\theta_{1}=\pi/2\)) sources, where the truth is indicated with a dashed black line. We also include two critical values of \(\chi_{\rm eff}\) above which \(a_{2}\) is inconsistent with a neutron star spin, as derived in Appendix C. These constraints are constructed assuming the spins are aligned (solid grey) and anti-aligned (dashed grey). We observe that \(\chi_{\rm eff}\) is best constrained for non-precessing systems, and at lower mass ratios. The spin of a compact object may distinguish it from a neutron star. The maximum observed neutron star spin is that of a pulsar in a binary, with \(a_{\rm NS}=0.4\)[78]. Whether this value is the theoretical maximal neutron star spin depends on the unknown nuclear equation of state; however, we adopt it as a fiducial value, above which a compact object is inconsistent with a neutron star description. Even at SNRs as high as \(\sim\)40, we do not expect to measure the component spin magnitudes or spin tilt angles well on their own [92, 95, 96]. However, the leading-order contributions of spins to the gravitational inspiral are measurable and can be parameterized with the effective spin \(\chi_{\rm eff}\) and the effective spin precession \(\chi_{p}\). The effective spin is the mass-weighted projection of the component black hole spins along the orbital angular momentum of the system [97, 98, 99], \[\chi_{\rm eff}=\frac{m_{1}a_{1}\cos\theta_{1}+m_{2}a_{2}\cos\theta_{2}}{m_{1}+ m_{2}}=\frac{a_{1}\cos\theta_{1}+qa_{2}\cos\theta_{2}}{1+q}. \tag{3.1}\] The effective spin \(\chi_{\rm eff}\) is usually measured better than either component spin, even though it is known to be partially degenerate with the mass ratio for inspiral-dominated (i.e., low mass) systems [100, 101]. When \(\chi_{\rm eff}\) is zero, the black holes may be non-spinning, or spinning entirely in the plane of the orbit. When \(\chi_{\rm eff}\) is +1 (-1), the black holes are spinning parallel to and in the direction of (opposite the direction of) the orbital angular momentum. In Figure 5, we show marginal posteriors on \(\chi_{\rm eff}\) for a subset of our runs, sorted by \(a_{1}\) and colored by the mass ratio. On the left, we show results for non-precessing signals, and on the right, results for precessing signals. For clarity, we only include the quieter signals at each pair of masses shown in Table 1.
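Equation (3.1) can be evaluated directly; the short sketch below computes \(\chi_{\rm eff}\) for two of the simulated configurations, with parameter values taken from the text and a helper function that is our own illustration.

```python
import numpy as np

def chi_eff(m1, m2, a1, a2, tilt1, tilt2):
    """Effective inspiral spin, Eq. (3.1): the mass-weighted projection of
    the component spins onto the orbital angular momentum."""
    q = m2 / m1
    return (a1 * np.cos(tilt1) + q * a2 * np.cos(tilt2)) / (1.0 + q)

# A non-precessing source from the text: m1 = 1.4, m2 = 0.1 Msun (q = 0.07),
# a1 = 0.9 aligned with L (tilt1 = 0), and a non-spinning secondary.
print(chi_eff(1.4, 0.1, 0.9, 0.0, 0.0, 0.0))        # ~0.84
# The same spins tilted into the orbital plane (tilt1 = pi/2) give chi_eff ~ 0.
print(chi_eff(1.4, 0.1, 0.9, 0.0, np.pi / 2, 0.0))  # ~0 (precessing case)
```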
The marginal posteriors on \(\chi_{\rm eff}\) look nearly identical at different SNRs with all else held equal (see Appendix E for the marginal posteriors for the louder events). Instead, the mass ratio drives the measurement of spin parameters as observed in e.g. [95]. For non-precessing signals, we confidently recover \(\chi_{\rm eff}\) at all mass ratios and values of \(a_{1}\). We find our best result for the source with \(q=0.07\), \(a_{1}=0.6\), with a 90% credible interval of 0.029, and the worst result for the source with \(q=1\), \(\tilde{m}_{1}=\tilde{m}_{2}=0.9\)\(M_{\odot}\), \(a_{1}=0.9\) with a credible interval of 0.72. For precessing signals, we find that \(\chi_{\rm eff}=0\) is confidently recovered for all sources, with a better measurement of \(\chi_{\rm eff}\) at lower \(q\). Quantitatively, the posterior distributions for precessing sources are comparable to their non-precessing counterparts; our best (worst) recovery of \(\chi_{\rm eff}\) occurs for the precessing source with \(q=0.07\), \(a_{1}=0.8\) (\(q=1\), \(\tilde{m}_{1}=\tilde{m}_{2}=0.9\)\(M_{\odot}\), \(a_{1}=0.9\)), yielding a credible interval of 0.025 (0.18). We also observe that, for some precessing sources with lower spin, we would preferentially report the incorrect \(\chi_{\rm eff}\). In particular, the marginal posterior on \(\chi_{\rm eff}\) for the precessing, \(q=0.56\) (green-blue color) signal at all spin magnitudes is consistent with zero, but peaks away from zero. Worse, at \(a_{1}=0.6\), the \(q=0.1\) signal exhibits bimodality and the \(q=0.07\) result is inconsistent with \(\chi_{\rm eff}=0\). However, these features are the result of posterior support for \(q\) slightly away from its true value, combined with large uncertainties in the spin magnitudes and tilts. We detail this behavior in Appendix D. Although it would be easier to directly measure the spin magnitudes as consistent or inconsistent with the maximal neutron star spin, we can indirectly constrain \(a_{2}\) through \(\chi_{\rm eff}\) if we assume some prior knowledge of the geometry of the system (\(a_{2}\) being the spin of the sub-solar mass object in all of our simulated signals). In Appendix C we find critical values of \(\chi_{\rm eff}\) for which \(a_{2}>a_{\rm NS}\), assuming that we know the alignment of the black hole spins. We include these constraints as grey lines in both columns of Figure 5. These constraints are constructed with conservative assumptions on the tilts of the black holes, the spin of the more massive black hole, and the mass ratio of the system. In particular, if \(|\chi_{\rm eff}|\) is larger than the spin aligned constraint (solid grey), then the spin of the lighter object (which is a sub-solar mass black hole in all of our simulated sources) _must_ be larger than the maximal neutron star spin. For non-precessing signals, we can exclude \(|\chi_{\rm eff}|\) below both constraints at the very lowest mass ratios. Otherwise, we would need to infer that the spins are anti-aligned to exclude spins consistent with a neutron star. For precessing signals, our inference of \(\chi_{\rm eff}\) provides no constraint on \(a_{2}\). This is due to the lossy nature of the \(\chi_{\rm eff}\) parameterization, as we are trying to recover zero which is degenerate with precessing signals (as we have simulated) as well as systems without any spin. Since the tilt angles are not well measured, it is difficult to break this degeneracy. 
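For reference, the two critical values drawn as grey lines in Figure 5 follow directly from the bounds derived in Appendix C (Equations C.4 and C.8). A minimal sketch, reproducing the thresholds quoted in the text at \(q=1\):

```python
def chi_eff_crit_aligned(q, a_ns=0.4):
    # Equation (C.4): above this |chi_eff|, a2 must exceed a_NS even
    # under the most conservative choice a1 = 1 (spins aligned).
    return (a_ns * q + 1) / (1 + q)

def chi_eff_crit_antialigned(q, a_ns=0.4):
    # Equation (C.8): analogous bound for anti-aligned spins, with the
    # conservative choice a1 = 0.
    return a_ns * q / (1 + q)

print(chi_eff_crit_aligned(1.0))      # 0.7, as quoted in Appendix C
print(chi_eff_crit_antialigned(1.0))  # 0.2, as quoted in Appendix C
```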
These constraints may be relevant depending on the binary formation mechanism of sub-solar mass black holes and their associated spin distributions. The effective spin precession is the average magnitude of the leading-order contribution to the precession of the orbital angular momentum of the binary, and can be expressed to leading order as [103], \[\chi_{p}=\max\left(a_{1}\sin\theta_{1},q\frac{4q+3}{4+3q}a_{2}\sin\theta_{2} \right). \tag{3.2}\] While this quantity does not capture all of the relativistic dynamics of precessing binaries (see e.g. [104]), it best captures precession in systems like those we have simulated, where a single component drives the orbital precession [103].

Figure 6: In the top row, we show the prior distribution \(\pi(\chi_{p})\) (black) and conditional priors \(\pi(\chi_{p}|q)\) from [102] at the mass ratios listed in Table 1. In the remaining rows, we show marginal posterior distributions on \(\chi_{p}\) for the quietest signals for a given set of intrinsic parameters \((\tilde{m}_{1},\tilde{m}_{2},a_{1},\theta_{1})\), injected into an O4-design sensitivity network of LIGO-Hanford, LIGO-Livingston, and Virgo. These are sorted, from top to bottom, by increasing dimensionless spin magnitude on the heavier black hole, \(a_{1}\). We color the posterior distributions by the true mass ratio of the source. In the left column, we show results for non-precessing (\(\theta_{1}=0\)) sources, and in the right column, we show results for precessing (\(\theta_{1}=\pi/2\)) sources. The truth is indicated with a dashed black line. We observe that \(\chi_{p}\) is best constrained for precessing systems, and at lower mass ratios.

In Figure 6, we plot marginal posteriors on \(\chi_{p}\) for the same subset of runs shown in Figure 5, organized in the same manner. We also include the prior \(\pi(\chi_{p})\) and conditional priors \(\pi(\chi_{p}|q)\), derived in [102]. First, we note that we make a weak (though incorrect) measurement of \(\chi_{p}\) for non-precessing systems at the most extreme mass ratio we studied, \(q=0.07\). For each non-precessing \(q=0.07\) source, we recovered a posterior without the same "tail" shown in the priors on \(\chi_{p}\) in Figure 6. Further, for these sources, the 90% credible interval on the mass ratio is \(\sim\)0.01, with the true value at the \(\sim\)65% percentile; i.e. most of the marginal posterior in \(q\) is in smaller mass ratios. Referring to the conditional priors \(\pi(\chi_{p}|q)\), we see that lower \(q\) tends to increase support near \(\chi_{p}\) of zero, but instead, we measure \(\chi_{p}\sim 0.1\). Thus, the uncertainty on our estimate of \(q\) for these sources cannot explain the bias on our measurement of \(\chi_{p}\). For these systems, we may instead be dominated by the uncertainty in the tilt angle, explaining some of the observed spin in the system with slightly misaligned compact object spins. For the remaining simulated signals, we can easily compare the posteriors on \(\chi_{p}\) to \(\pi(\chi_{p})\) and \(\pi(\chi_{p}|q)\), and we find that we only measure \(\chi_{p}\) for precessing systems at the lowest mass ratios and the highest (\(a_{1}=0.8,0.9\)) spins. This can be explained by remembering that this slice of the binary parameter space is where precession will modify the inspiral of the binary the most.
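As with \(\chi_{\rm eff}\), Equation (3.2) is simple to evaluate directly; a minimal sketch, again using the fact that all simulated sources here have \(a_{2}=0\):

```python
import math

def chi_p(q, a1, a2, theta1, theta2):
    # Equation (3.2): leading-order effective spin precession; the max
    # selects whichever component dominates the precession.
    return max(a1 * math.sin(theta1),
               q * (4 * q + 3) / (4 + 3 * q) * a2 * math.sin(theta2))

# With a2 = 0, chi_p = a1*sin(theta1) (cf. footnote 7 in Appendix G):
print(chi_p(0.36, 0.9, 0.0, math.pi / 2, 0.0))  # precessing -> 0.9
print(chi_p(0.36, 0.9, 0.0, 0.0, 0.0))          # non-precessing -> 0.0
```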
#### 3.2.2 Component Spin Magnitudes and Tilts in Next-generation Detectors In the previous section, we showed that effective spins may be measurable and provide some constraints on the nature of sub-solar mass compact objects in O4 gravitational-wave detectors. Unfortunately, individual spins cannot be well-constrained at low SNRs. Of the simulated signals studied in this work, the lowest network SNR achieved in the XG network is 236.6.

Figure 7: Marginal posterior distributions on the dimensionless spin magnitude of the heavier black hole \(a_{1}\), for the quietest signals for a given set of intrinsic parameters \((\tilde{m}_{1},\tilde{m}_{2},a_{1},\theta_{1})\), injected into a network of Cosmic Explorer and the Einstein Telescope. These are sorted, from top to bottom, by increasing values of \(a_{1}\). We color the posterior distributions by the true mass ratio of the source. In the left column, we show results for non-precessing (\(\theta_{1}=0\)) sources, and in the right column, we show results for precessing (\(\theta_{1}=\pi/2\)) sources. The truth is indicated with a dashed black line. For reference, we include a solid grey line at the maximal neutron star spin, \(a_{\rm NS}=0.4\). We observe that \(a_{1}\) is measured away from these limits for non-precessing systems and at lower mass ratios.

These signals are orders of magnitude louder than any gravitational-wave event observed to date and enable us to directly measure the magnitude and angles of binary black hole spin vectors. In Figure 7, we show marginal posteriors on the spin magnitude of the more massive component, \(a_{1}\), for a subset of our signals injected into XG detectors, showing results only for the quieter signals. The posteriors are colored by the mass ratio of the source, and organized by the true value of \(a_{1}\), increasing from top to bottom. In all cases, we make a measurement consistent with the true value of \(a_{1}\). Comparing to the maximal neutron star spin \(a_{\rm NS}=0.4\) (solid grey line), we observe that the best exclusion of such spins occurs for non-precessing systems. In addition, the constraints on \(a_{1}\) improve with decreasing mass ratio. Among non-precessing systems, we measure \(a_{1}\) the best (worst) for the source with \(q=0.07\), \(a_{1}=0.9\) (\(q=1\), \(\tilde{m}_{1}=\tilde{m}_{2}=0.9\)\(M_{\odot}\), \(a_{1}=0.6\)), which yields a 90% credible interval of \(7.7\times 10^{-3}\) (0.28). Among precessing systems, we similarly measure \(a_{1}\) the best (worst) for the \(q=0.07\), \(a_{1}=0.8\) (\(q=1\), \(\tilde{m}_{1}=\tilde{m}_{2}=0.9\)\(M_{\odot}\), \(a_{1}=0.8\)) source, which yields a 90% credible interval of \(8.2\times 10^{-4}\) (0.78). For systems approaching equal masses, we may not be able to exclude \(a_{1}\) consistent with a neutron star if the system is highly precessing with a fully in-plane spin. In Figure 8, we show marginal posteriors on \(\theta_{1}\) for simulated signals with mass ratios \(q=0.1\) and \(q=1\) (only the \(\tilde{m}_{1}=\tilde{m}_{2}=0.9\)\(M_{\odot}\) runs, for clarity); posteriors at all \(q\) can be found in Appendix E. These are colored by network SNR, with a linestyle denoting the value of \(a_{1}\) for each system. Our analysis employed isotropic priors on \(\theta_{1}\), shown in the top row of Figure 8.
Figure 8: Marginal posterior distributions on the tilt angle between the spin vector of the heavier black hole and the orbital angular momentum, \(\theta_{1}\), for the quietest signals for a given set of intrinsic parameters \((\tilde{m}_{1},\tilde{m}_{2},a_{1},\theta_{1})\), injected into a network of Cosmic Explorer and the Einstein Telescope. In the top row, we show the prior \(\pi(\theta_{1})\). For brevity, we only include results for sources with \(q=0.10\) (middle row) and \(q=1.0\) (bottom row). In the left column, we show results for non-precessing (\(\theta_{1}=0\)) sources, and in the right column, we show results for precessing (\(\theta_{1}=\pi/2\)) sources. The truth is indicated with a dashed black line. We color the posterior distributions by the network SNR, and their linestyles reflect the true spin magnitude of the heavier black hole, \(a_{1}\). We observe that \(\theta_{1}\) is well-measured for non-precessing systems, and at low mass ratios. In addition, we note that the lack of posterior support _at_ \(\theta_{1}=\pi/2\) for precessing, \(q=0.1\) signals likely reflects heterodyning or waveform accuracy issues at large SNRs.

Comparing the marginal posteriors shown to the priors on \(\theta_{1}\), we find that we can make a measurement of the tilt angle for non-precessing systems, and for precessing systems up until \(q=1\) (for those systems, we make a measurement that excludes a totally aligned or anti-aligned spin with respect to the orbital angular momentum). For example, for the non-precessing, \(q=0.1\), \(a_{1}=0.9\) source, we find a 90% credible interval of 0.01 radians, and for the non-precessing \(q=1\), \(\tilde{m}_{1}=\tilde{m}_{2}=0.9\)\(M_{\odot}\), \(a_{1}=0.9\) source with network SNR of 7.5, we find a credible interval of 0.26 radians. For our precessing sources, the measurement of \(\theta_{1}\) improves at \(q=0.10\) while dramatically broadening at \(q=1.00\); for example, the signal with \(q=0.1\), \(a_{1}=0.9\) yields a credible interval of \(5.7\times 10^{-3}\) radians versus an interval of 0.86 radians for the \(q=1\), \(a_{1}=0.9\) source with network SNR of 7.5. Finally, we note a peculiarity in the posteriors on \(\theta_{1}\) at \(q=0.1\); there, the true value is explicitly excluded, with the distribution peaking to either side of \(\pi/2\). This is a non-physical effect reflecting a breakdown in the numerical approximation to the prior on the spin components aligned with the orbital angular momentum. In Appendix F we demonstrate this prior effect by re-analyzing one simulated signal in the XG network with uniform priors on the tilt angles and magnitudes. Sampling directly in the spin magnitudes and tilts, we recover a posterior with a similar width as shown in Figure 8 which peaks at the true \(\theta_{1}\). While there is no direct constraint on the value of \(\theta_{1}\) for a sub-solar mass black hole mimicker, better measurement of \(\theta_{1}\) improves our measurement of effective spin parameters like \(\chi_{\rm eff}\) and \(\chi_{p}\), which can indirectly constrain the nature of a compact object as discussed for signals in current-generation detectors. In Appendix G, we observe a correlation between \(\chi_{p}\) and the inclination angle of the system relative to the line of sight which also improves our measurement of binary spin physics. ### Sky Localization Binary neutron star mergers and some neutron star-black hole mergers can produce an electromagnetic (EM) signal counterpart to gravitational-wave emission [110].
While the lack of an EM-counterpart does not strictly rule out a neutron star component, it can reduce the allowed binary parameter space. In particular, a neutron star in a low-mass ratio merger is expected to be tidally disrupted [111], and we expect to measure \(q\) particularly well for the low-mass ratio systems in this work (c.f. Section 3.1). So, if a source is well-localized and no counterpart is observed, a gravitational wave event may become easier to explain with a sub-solar mass black hole component. In Figure 9, we show the area on the sky to which each of our simulated signals can be localized as a function of their network SNRs. This localization area is the solid angle subtended by the 90% confidence contour on the marginal joint posterior for the right ascension and declination, computed using ligo.skymap[112]. For comparison, we include two reference scales: the area enclosed by the 90% confidence contour for the binary neutron star merger GW170817 [113] at a network SNR of \(\sim\)32 (blue star), and the field of view of current- and next-generation optical, infrared, and microwave telescopes that could contribute to the electromagnetic follow-up of a sub-solar mass compact object merger (grey, various linestyles). We find that sub-solar mass compact object mergers achieve a higher sky localization "efficiency" than GW170817, in the sense that at a similar network SNR it is possible to localize these sub-solar mass mergers to a region on the sky that is an order of magnitude smaller (similar to the measurement efficiency \(\alpha\) introduced in Section 3.1). This follows from the fact that longer-duration signals are easier to localize and that all of our simulated systems (with either a lower total mass or mass ratio) will merge much more slowly than a binary neutron star merger like GW170817. In addition, we observe that the localization for every event, except those at the edge of detectability (at O4 network SNRs of 7.5), is within the field of view for at least one telescope, and most are within the field of view for multiple. In both scenarios (O4 and XG), we see that a compact object merger involving a sub-solar mass component will be very well-localized compared to our electromagnetic follow-up capabilities, reducing the chance of missing an associated electromagnetic counterpart if something other than black holes is involved in the merger. We also note that our analysis did not include higher-order effects like the time-evolution of the detector antenna pattern due to the rotation of the Earth or the finite size of the detector relative to the wave; however, we expect these to further improve the localization of long-duration signals [114].

Figure 9: Localization areas, i.e. 90% confidence regions as a function of network SNR for all simulated signals studied in this work. Signals injected into an O4-design sensitivity network of LIGO-Hanford, LIGO-Livingston, and Virgo are shown in purple. Those injected into a network of Cosmic Explorer and the Einstein Telescope are shown in gold. Grey lines indicate the field of view for electromagnetic follow-up observatories, including the Zwicky Transient Facility [105, ZTF; dash-dot-dotted], Vera C. Rubin Observatory [106, dotted], next-generation Very Large Array [107, ngVLA; dash-dotted], Dark Energy Camera [108, DECam; wide-spaced dots], and the Thirty Meter Telescope [109, TMT; dashed].

## 4 Discussion Here, we have studied the measurability of the masses, spins, and sky location for a set of spinning, precessing and non-precessing, quasi-circular binary black hole mergers involving at least one sub-solar mass component. Using a spin-precessing waveform and heterodyned likelihood, we performed parameter estimation on these signals injected into a network of LIGO-Hanford, LIGO-Livingston, and Virgo at O4-design sensitivity, and a network of next-generation detectors, Cosmic Explorer and the Einstein Telescope. We found that the long duration (\(\sim\)1000's of seconds) of these signals engendered precise measurement of the black hole masses.
We confidently identify a sub-solar mass compact object in all of the signals we studied, _at current design sensitivities_, down to network signal-to-noise ratios at the threshold of detectability. In particular, we confidently excluded the least massive component as being \(\geq 1\)\(M_{\odot}\) in mass even for an equal-mass 0.9-0.9\(M_{\odot}\) merger with an SNR of 7.5, corresponding to a luminosity distance of \(\sim\)125 Mpc, or roughly three times the distance of the binary neutron star merger GW170817. We observed that the SNR drives the measurement of the component masses over the spins, and noted a nonlinear tightening of the width of the 90% credible interval on \(m_{1}\) or \(m_{2}\) with increasing SNR. Driven by this improvement in measurement efficiency at high SNRs, we found that next-generation detectors will enable exquisite measurements of the source-frame component masses towards the \(10^{-5}\)\(M_{\odot}\) level, enabling confident measurement of the compact objects we studied as super-, sub-, or solar in mass. We then looked at the spins. In current-generation detectors, we do not expect SNRs high enough to often confidently measure the component spin magnitudes nor their tilt angles with respect to the orbital angular momentum. However, we found that the leading-order effective spin parameters \(\chi_{\rm eff}\) and \(\chi_{p}\) can be well-measured for non-precessing and precessing sources, respectively, in the O4 network. With strong assumptions on the binary geometry, we found that for non-precessing signals with the lowest mass ratios (\(q=0.07,0.10\)), we could use \(\chi_{\rm eff}\) to confidently exclude spins consistent with the fastest spinning neutron stars. In next-generation detectors, signals will be orders-of-magnitude louder at or beyond distances comparable to GW170817. This enabled us to directly measure the spin magnitude and tilt of the heavier component, and exclude spins consistent with theoretical maximal neutron star spins for non-precessing systems, and precessing systems at low mass ratios. Binary neutron star and neutron star-black hole mergers may produce an electromagnetic counterpart; if none is found, especially at low mass ratios, that may rule out a neutron star component in a binary merger. Again enabled by the long duration of these signals, we found that nearly all of the sub-solar mass sources we studied in the three-detector, O4-design sensitivity network would be localized within the field of view of at least one current- or next-generation electromagnetic follow-up instrument. We noted that many of these sources are more "efficiently" localized than GW170817, achieving smaller 90% confidence regions on the sky at similar SNRs. All of the sources in the two-detector, next-generation gravitational wave network were localized within the field of view of four current- and next-generation electromagnetic follow-up instruments.
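As a concrete companion to the localization areas of Section 3.3 (computed there with ligo.skymap [112]), the sketch below shows one standard way to turn a pixelated HEALPix probability skymap into a 90% credible area using healpy. It is a simplified stand-in for the ligo.skymap machinery, not the pipeline used in this work.

```python
import numpy as np
import healpy as hp

def credible_area_deg2(prob, level=0.9):
    # prob: HEALPix array of per-pixel posterior probability summing to 1
    # (e.g. read from a skymap FITS file with hp.read_map). Greedily
    # accumulate the highest-probability pixels until the requested
    # credible level is enclosed, then report their total solid angle.
    order = np.argsort(prob)[::-1]
    csum = np.cumsum(prob[order])
    n_pix = np.searchsorted(csum, level) + 1
    nside = hp.npix2nside(len(prob))
    return n_pix * hp.nside2pixarea(nside, degrees=True)
```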
At least among the sources studied here, current gravitational-wave detectors would be able to confidently report the discovery of a sub-solar mass component of a compact object merger plus some measurement of the binary spin geometry and sky location, enabling unique constraints on the properties of dark matter and the physics of the early Universe. However, the "smoking gun" that characterizes a compact object is tidal deformability, which measures how the material of a compact object responds to the gravitational field of its companion. At present, there are no waveform models including tidal deformability that are calibrated for the low mass ratios, or small component masses studied in this work. Recently, [115] performed the first-ever numerical relativity simulation of a neutron star-sub-solar mass black hole merger. They found significant dephasing between the waveform predicted by numerical relativity and phenomenological or surrogate waveform predictions for the same merger. Once waveform models involving tides are developed for sub-solar mass compact object mergers, future work like ours could explore the measurability of those tides.

## Acknowledgements We thank Katerina Chatziioannou, Tim Dietrich, and Carl-Johan Haster for enlightening discussions around the measurement and modeling of tidal deformability in neutron star/sub-solar mass black hole mergers, Geraint Pratten for assistance in understanding the IMRPhenomX family of waveforms, and Leo Singer for guidance on estimating sky localization regions. We additionally thank Divya Singh, Ester Ruiz-Morales, and Aditya Vijaykumar for their helpful comments on this manuscript. This work was inspired by the 2020 "Workshop on Gravitational Wave Constraints on Dark Compact Objects", which was supported by a Gordon and Betty Moore Foundation Fundamental Physics Innovation Convening Award and the American Physical Society. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. The authors are grateful for computing resources provided by the California Institute of Technology and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. SV is partially supported by NSF through the award PHY-2045740. NEW is supported by the La Gattuta Physics Fund and the Henry W. Kendall (1955) Fellowship Fund. CT is supported by an MKI Kavli Fellowship.

_Software_: bilby v2 [75, 76], astropy v5.1 [116], pesummary v0.13.10 [117], healpy v1.16.1 [118, 119], ligo.skymap v1.0.3 [112], corner.py v2.2.1 [120], numpy v1.22.3 [121], scipy v1.8.1 [122], pandas v1.4.2 [123, 124], dynesty v2.1.0 [70], LALSimulation v4.0.2 [125, 126]

## Appendix A Parameter Estimation ### Waveform Model and Heterodyned Likelihood We use the Whittle likelihood approximation in the frequency domain for the residual noise [127], \[\mathcal{L}(d|\theta)=\prod_{i,j}\frac{1}{2\pi S_{ij}}\exp\left(-\frac{4}{T} \frac{|d_{ij}-h(\theta)_{ij}|^{2}}{S_{ij}}\right), \tag{A.1}\] where the products run over the gravitational wave detectors in a network and the frequency bins of the data, \(d_{ij}\) is the data, \(h(\theta)_{ij}\) is the strain of the astrophysical signal, \(S_{ij}\) is the power spectral density, and \(T\) is the duration of the data.
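A direct transcription of Equation (A.1) into code is straightforward; the sketch below evaluates the corresponding log-likelihood for a single detector (illustrative only; the analysis itself uses the bilby implementation).

```python
import numpy as np

def whittle_log_likelihood(data, template, psd, duration):
    # Log of Equation (A.1) for one detector: `data` and `template` are
    # complex arrays over the analysis frequency bins, `psd` the power
    # spectral density, and `duration` the data length T in seconds.
    residual = np.abs(data - template) ** 2
    return np.sum(-(4.0 / duration) * residual / psd - np.log(2 * np.pi * psd))

# The network log-likelihood is the sum of this quantity over detectors.
```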
We inject simulated signals into zero noise realizations of the detectors, so the power spectral densities are (theoretical) design sensitivity curves. The strain is evaluated with the frequency-domain waveform model IMRPhenomXP which includes spin precession effects [87]. We choose a similar description of spin-precession as used in IMRPhenomPv2 [128] by choosing the flag PhenomXPrecVersion = 104 (see Table 3, Appendix F of [87]). In practice, the log-likelihood is commonly expressed in terms of a complex-valued inner product between the data and the waveform strain, weighted by the power spectral density (see Appendix B of [129]). Evaluating this inner product over \(\mathcal{O}(10^{6})\) frequencies (as we have for signals \(\mathcal{O}(10^{3})\) seconds in duration) is computationally expensive. A heterodyned likelihood [72] (an approximation also known as relative binning [73]) approximates Equation A.1 by expanding the inner product between the data and the strain about its value at some fiducial values of \(\theta\), at a few frequencies whose spacing is chosen to minimize the change in waveform phase between frequencies. The bilby implementation of a heterodyned likelihood is described in Ref. [77]. Following the description of [73], the heterodyned likelihood relies on hyperparameters \(\epsilon\) and \(\chi\) which control the tolerance for this inter-frequency dephasing (see Equations 9 and 10 of [73]). In this work, we choose \(\epsilon=0.025\) and \(\chi=1\), which translates to an evaluation of the likelihood at only 1242 frequencies. For each simulated signal we study, we choose the fiducial values to be the true values chosen for each source. ### Priors Binary black hole parameter definitions follow those of Table E1 in [76]. Conceptually, we adopt the following priors: * _Component masses_: We adopt uniform priors on the detector-frame masses \(\tilde{m}_{1},\tilde{m}_{2}\); under these assumptions, we sample in the mass ratio \(q\) and detector-frame chirp mass \(\tilde{\mathcal{M}}\) as these control the leading-order contributions to the gravitational wave strain. We additionally constrain these priors by requiring \(\tilde{m}_{1},\tilde{m}_{2}\leq 2\ M_{\odot}\) in all analyses. In principle, this allows us to recover super-solar component masses while reducing the volume of the parameter space the sampling procedure may explore. * _Luminosity distance_: Uniform prior on the coalescence time in the source frame and the enclosed comoving volume, assuming the Planck15 cosmology [80]. * _Spins_: We assume isotropic priors on the spin vectors of the component black holes, parameterized in terms of the components of the spins aligned with the orbital angular momentum \(\vec{L}\), \(\chi_{1,2}\), the components pointing in the orbital plane \(\chi_{1,2}^{\perp}\), the azimuthal angle (taking \(\vec{L}\) to point along the \(z\)-direction) between the spins, \(\phi_{12}\), and the azimuthal angle between \(\vec{L}\) and the total angular momentum. * _Sky location and orientation_: We assume isotropic priors for the orientation and location of the binary on the sky, parameterized by the time of coalescence, azimuth, and zenith as defined at LIGO-Hanford. Since the time at a given detector is what is measured in practice, this parameterization is more efficient to sample in than directly sampling in the right ascension, declination, and time of coalescence as measured at the center of the Earth. In all of our analyses, we choose priors that encompass the resultant posteriors.
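Note that sampling is performed in detector-frame masses while results are quoted in the source frame. A minimal sketch of that conversion under the assumed Planck15 cosmology, using astropy (the ~125 Mpc example distance is taken from the Discussion; the numerical output is approximate):

```python
import astropy.units as u
from astropy.cosmology import Planck15, z_at_value

def source_frame_mass(m_detector, d_lum_mpc):
    # Detector-frame masses are redshifted by (1 + z) relative to the
    # source frame; invert the Planck15 distance-redshift relation.
    z = z_at_value(Planck15.luminosity_distance, d_lum_mpc * u.Mpc)
    return m_detector / (1 + z)

# At ~125 Mpc, z ~ 0.028, so a 0.9 Msun detector-frame mass corresponds
# to roughly 0.875 Msun in the source frame:
print(source_frame_mass(0.9, 125.0))
```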
### Nested Sampling in bilby The strategy of nested sampling is to estimate the Bayesian evidence \(\mathcal{Z}\) as the integral of the likelihood surface over the prior mass \(X\), \[\mathcal{Z}=\int_{0}^{1}\mathcal{L}(X)dX\approx\sum_{i}w_{i}\mathcal{L}_{i} \tag{A.2}\] where \(i\) enumerates likelihood contours of value \(\mathcal{L}_{i}\) and the weights \(w_{i}\) are the prior mass enclosed by those contours. The likelihood contours are chosen to enclose increasingly narrow regions of the total prior mass; this sorting allows us to estimate \(\mathcal{Z}\) without explicitly referring to the complicated, high-dimensional geometry of the likelihood surface [69]. While evaluating the likelihood is a well-defined computation using Equation A.1 and given a model of the gravitational waveform, determining the prior mass enclosed by some likelihood iso-contour is non-trivial. We can estimate \(w_{i}\) with uncertainty e.g. with Markov Chain Monte Carlo (MCMC) chains sampling within some bounds of the prior mass. Here, we use the nested sampling algorithm implemented by dynesty[70] and employ Differential Evolution-MCMC (DE-MC), as implemented in bilby, to estimate \(w_{i}\)[130]. In Table 2, we record the sampler settings provided to bilby when we conduct parameter estimation with nested sampling for our simulated signals.

\begin{table} \begin{tabular}{|c|c|} \hline Sampler Argument & Value \\ \hline nlive & 500 \\ walks & 100 \\ naccept & 60 \\ sample & ‘acceptance-walk’ \\ proposals & [’diff’] \\ bound & ‘live’ \\ \hline \end{tabular} \end{table} Table 2: Sampler arguments used in our analysis of simulated signals as defined for the bilby implementation of dynesty and Differential Evolution-MCMC. We note that a few runs use nlive = 2000 to achieve convergence.

\begin{table} \begin{tabular}{|c|c|c|} \hline Parameter & Units & Value \\ \hline \multicolumn{3}{|c|}{Intrinsic Parameters} \\ \hline \(m_{1},m_{2}\) & \(M_{\odot}\) & See Table 1 \\ \(a_{1},a_{2}\) & – & \(a_{1}\) is one of \(\{0.6,0.8,0.9\}\) \\ & & \(a_{2}=0\) \\ \(\theta_{1},\theta_{2}\) & radians & \(\theta_{1}\) is one of \(\{0,\pi/2\}\) \\ & & \(\theta_{2}=0\) \\ \(\phi_{12}\) & radians & 1.7 \\ \(\phi_{jl}\) & radians & 0.3 \\ \hline \multicolumn{3}{|c|}{Extrinsic Parameters} \\ \hline \(d_{L}\) & Mpc & Changed to achieve the SNRs in Table 1. \\ \(\theta_{JN}\) & radians & 0.4 \\ Right ascension & radians & 1.375 \\ Declination & radians & -1.2108 \\ \(\psi\) & radians & 2.659 \\ \(t_{c}\) & GPS Time & 1126259642.413 \\ \(\phi_{c}\) & radians & 1.3 \\ \hline \end{tabular} \end{table} Table 3: Complete set of binary black hole source parameters for the simulated signals studied in this work, including the intrinsic parameters of the black holes and the extrinsic parameters of the binary’s location and orientation. Parameters of the angular momenta are evaluated at a reference frequency of 20 Hz. Parameter definitions follow Table E1 of [76].
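A minimal sketch of Equation (A.2) under the standard nested-sampling approximation, in which the prior mass shrinks geometrically with the number of live points (real samplers such as dynesty track the shrinkage statistically and add a final contribution from the remaining live points):

```python
import numpy as np
from scipy.special import logsumexp

def log_evidence(log_likelihoods, n_live):
    # log_likelihoods: sorted log-likelihood values L_i at successive
    # contours i = 1..N. Under the approximation X_i ~ exp(-i/n_live),
    # the weights w_i = X_{i-1} - X_i are the prior mass between
    # contours, and log(Z) ~ logsumexp(log(L_i) + log(w_i)).
    i = np.arange(len(log_likelihoods) + 1)
    x = np.exp(-i / n_live)      # X_0 = 1, X_1, ..., X_N
    w = x[:-1] - x[1:]
    return logsumexp(log_likelihoods + np.log(w))
```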
## Appendix B Marginal Posteriors on Chirp Mass and Mass Ratio

Figure 10: Marginal posterior distributions on the source-frame chirp mass \(\mathcal{M}\) for the simulated signals injected into an O4-design sensitivity network of LIGO-Hanford, LIGO-Livingston, and Virgo. Results for non-precessing (\(\theta_{1}=0\)) sources are shown on the left and those for precessing sources (\(\theta_{1}=\pi/2\)) on the right. The posteriors are colored by the network SNR of the signal. The linestyle of the posterior reflects the dimensionless spin magnitude of the more massive black hole, \(a_{1}=0.6\) (dotted), \(0.8\) (dashed), or \(0.9\) (solid). Posteriors are organized by increasing mass ratio of the source, from top to bottom. Thin black lines rising from the \(\mathcal{M}\)-axis indicate the true value of \(\mathcal{M}\), and a grey line is included at the fiducial mass scale of \(1~{}M_{\odot}\). We note that these posteriors are not normalized so that they may be visualized together.

Figure 11: Marginal posterior distributions on the mass ratio \(q\) for the simulated signals injected into an O4-design sensitivity network of LIGO-Hanford, LIGO-Livingston, and Virgo. Results for non-precessing (\(\theta_{1}=0\)) sources are shown on the left and those for precessing sources (\(\theta_{1}=\pi/2\)) on the right. The posteriors are colored by the network SNR of the signal. The linestyle of the posterior reflects the dimensionless spin magnitude of the more massive black hole, \(a_{1}=0.6\) (dotted), \(0.8\) (dashed), or \(0.9\) (solid). Posteriors are organized by increasing mass ratio of the source, from top to bottom. Dashed black lines indicate the true value of \(q\). We note that these posteriors are not normalized so that they may be visualized together.

Figure 12: Marginal posterior distributions on the source-frame chirp mass \(\mathcal{M}\) for the simulated signals injected into a network of Cosmic Explorer and the Einstein Telescope. Results for non-precessing (\(\theta_{1}=0\)) sources are shown on the left and those for precessing sources (\(\theta_{1}=\pi/2\)) on the right. The posteriors are colored by the network SNR of the signal. The linestyle of the posterior reflects the dimensionless spin magnitude of the more massive black hole, \(a_{1}=0.6\) (dotted), \(0.8\) (dashed), or \(0.9\) (solid). Posteriors are organized by increasing mass ratio of the source, from top to bottom. Thin black lines rising from the \(\mathcal{M}\)-axis indicate the true value of \(\mathcal{M}\), and a grey line is included at the fiducial mass scale of \(1~{}M_{\odot}\). We note that these posteriors are not normalized so that they may be visualized together.

Figure 13: Marginal posterior distributions on the mass ratio \(q\) for the simulated signals injected into a network of Cosmic Explorer and the Einstein Telescope. Results for non-precessing (\(\theta_{1}=0\)) sources are shown on the left and those for precessing sources (\(\theta_{1}=\pi/2\)) on the right. The posteriors are colored by the network SNR of the signal. The linestyle of the posterior reflects the dimensionless spin magnitude of the more massive black hole, \(a_{1}=0.6\) (dotted), \(0.8\) (dashed), or \(0.9\) (solid). Posteriors are organized by increasing mass ratio of the source, from top to bottom. Dashed black lines indicate the true value of \(q\). We note that these posteriors are not normalized so that they may be visualized together.

In Figures 10 and 11 we plot marginal posteriors on the source-frame chirp mass \(\mathcal{M}\) and mass ratio \(q\), respectively, for the signals simulated in the O4-design sensitivity network. In Figures 12 and 13 we similarly plot marginal posteriors on \(\mathcal{M}\) and \(q\) for signals simulated in the XG network. All of these figures are organized in the same manner as the plots of the marginal posteriors on the source-frame component masses in Section 3.1.
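For reference, the chirp mass and mass ratio shown in these figures follow the standard definitions; a minimal sketch:

```python
def chirp_mass(m1, m2):
    # Standard definition: M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5).
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def mass_ratio(m1, m2):
    # q = m2/m1 <= 1 under the convention m1 >= m2.
    return m2 / m1

# e.g. the equal-mass 0.9-0.9 Msun source:
print(chirp_mass(0.9, 0.9))  # ~0.783 Msun
print(mass_ratio(0.9, 0.9))  # 1.0
```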
We note that in Figure 12 the marginal posteriors on \(\mathcal{M}\) for the louder set of non-precessing \(q=0.56\) signals plus all non-precessing, \(q=0.36\) signals do not recover the true values of the chirp mass, peaking away from these values by \(\lesssim 10^{-2}\)\(M_{\odot}\). We emphasize that this is an artifact of marginalizing a 15-dimensional posterior distribution, and not a meaningful bias, as the value of the log-likelihood at the true parameters versus the parameters with highest posterior probability differs by roughly unity. The apparent offset in the source-frame masses is due to the fact that the luminosity distance posterior (needed to convert detector-frame mass to source-frame mass) only includes the true value at the edge of its tail, see Figure 14. In turn, the exact shape of the distance posterior is driven by the interplay between the correlation of some of the binary parameters (specifically: distance, inclination angle and \(\chi_{p}\), see Appendix G below) and their non-trivial priors. As shown in Figure 6, for a system with small mass ratio, the \(\chi_{p}\) prior is rather peaked away from zero, but due to correlations, that pushes \(\theta_{JN}\) to smaller values, which in turn moves the distance to higher values.

Figure 14: Marginal posteriors on the source-frame chirp mass \(\mathcal{M}\), luminosity distance \(d_{L}\), inclination angle relative to the line of sight \(\theta_{JN}\), and effective spin precession \(\chi_{p}\) for the non-precessing simulated signal with \(q=0.36\), \(a_{1}=0.9\), and a network SNR of 665.5 in the XG network. True values are shown with black lines. We observe that the chirp mass is underestimated due to an overestimated distance, driven by correlations between \(d_{L}\), \(\theta_{JN}\), and \(\chi_{p}\).

## Appendix C Effective Spin Constraints on Spin Magnitude ### Spin-aligned Systems If we make assumptions about the spin geometry of the binary black hole system, we can constrain the spin of the lighter object \(a_{2}\) under the most conservative assumptions about the spin of the heavier object, \(a_{1}\). First, we consider two black hole spins both aligned with the orbital angular momentum, or both in the opposite direction of the orbital angular momentum; here, we refer to both cases as "spin-aligned". In these cases, \[|\chi_{\text{eff}}|=\frac{a_{1}+qa_{2}}{1+q}. \tag{C.1}\] Rearranging in terms of \(a_{2}\), \[a_{2}=\frac{|\chi_{\text{eff}}|(1+q)-a_{1}}{q}, \tag{C.2}\] and notice that \(a_{1}\leq 1\), so, \[a_{2}\geq\frac{|\chi_{\text{eff}}|(1+q)-1}{q}. \tag{C.3}\] Therefore, \(a_{2}>a_{\text{NS}}\) when the right-hand side of Equation C.3 is greater than \(a_{\text{NS}}\). This is fulfilled for \[|\chi_{\text{eff}}|>\frac{a_{\text{NS}}q+1}{1+q} \tag{C.4}\] which we recognize as \(|\chi_{\text{eff}}|\) for spin-aligned black holes with \(a_{1}=1,a_{2}=a_{\text{NS}}\). For \(a_{\text{NS}}=0.4\)[78], this critical value of \(\chi_{\text{eff}}\) is maximized at \(q=1\) and we find that \(\chi_{\text{eff,crit}}=0.7\). So, for spin-aligned systems, \(|\chi_{\text{eff}}|>0.7\) implies that \(a_{2}\)_must_ be larger than the maximal neutron star spin, assuming the spins are aligned. ### Spin anti-aligned Systems We can construct a similar constraint in systems where one black hole spin is aligned with the orbital angular momentum and one points in the opposite direction.
Here, we take the more massive object to spin in the direction of the angular momentum and the less massive object to spin opposite it (the constraint we construct turns out to be the same if these are switched). Then, \[\chi_{\text{eff}}=\frac{a_{1}-qa_{2}}{1+q}, \tag{C.5}\] and so, \[a_{2}=\frac{a_{1}-\chi_{\text{eff}}(1+q)}{q}\geq-\frac{\chi_{\text{eff}}(1+q)} {q} \tag{C.6}\] as \(a_{1}\geq 0\). Notice, for \(a_{1}=0\) and with the spin of the lighter object pointing opposite the orbital angular momentum, \(\chi_{\text{eff}}<0\). Thus, \[a_{2}\geq\frac{|\chi_{\text{eff}}|(1+q)}{q}, \tag{C.7}\] and similar to our analysis for spin-aligned systems, we find that \(a_{2}>a_{\text{NS}}\) when \[|\chi_{\text{eff}}|\geq\frac{a_{\text{NS}}q}{1+q} \tag{C.8}\] which we recognize as the value of \(|\chi_{\text{eff}}|\) for \(a_{1}=0,a_{2}=a_{\text{NS}}\). This critical value of \(|\chi_{\text{eff}}|\) is maximized for \(q=1\), yielding \(\chi_{\text{eff,crit}}=0.2\). Again, above this threshold, \(a_{2}\)_must_ be larger than the maximal neutron star spin, assuming the spins are anti-aligned. ## Appendix D Features in Correlated Mass Ratio-Effective Spin Posteriors In Figure 15, we show marginal posteriors for two precessing simulated signals with \(a_{1}=0.6\) in the O4 detector network; the runs shown are at the lower SNRs listed in Table 1. The source with \(q=0.10\), which has a network SNR of 14.3 (left panel, lime color) shows bimodality in the posterior distribution for \(\chi_{\rm eff}\) driven by posterior support for \(q\) away from the true value, shown in black. As \(q\) and \(\chi_{\rm eff}\) are correlated, a small change in \(q\) drives samples to a new iso-likelihood ridge in the space of \(q-\chi_{\rm eff}\), introducing a new mode at larger \(\chi_{\rm eff}\). In the 2D marginal \(q-\chi_{\rm eff}\) posterior for this source, the lower probability mode coincides with the true value. It does not dominate the posterior because the likelihood is especially "peaky" and, in a sense, "underresolved". As \(q\) decreases, the length of the signal increases, and so even relatively minor deviations in the source parameters will result in a large mismatch between a proposal for the signal and the data; thus, the likelihood peaks around an especially narrow range of source parameters. Using dynesty, the number of nested sampling live points \(N_{\rm live}\) can be heuristically related to our resolution of the total prior volume. If we have too few live points, it is less likely that any are initially placed within the narrow width of the maximum likelihood peak. The analysis for the precessing \(q=0.10\), \(a_{1}=0.6\) source in the O4 network employed \(N_{\rm live}=2000\). Using fewer live points, the true value lay at the edge of the \(1\sigma\) level of the marginal posterior on \(q\), and with increasing \(N_{\rm live}\) we observe a (still subdominant) mode emerge centered on the true source parameters. It is likely that increasing \(N_{\rm live}\) further would allow us to fully resolve a unimodal posterior centered on the true \(q\) and \(\chi_{\rm eff}\), although this becomes increasingly computationally expensive. The source with \(q=0.56\), with a network SNR of 12.5 (right panel of Figure 15, green-blue color) underestimates \(\chi_{\rm eff}\), below the true value of zero.
However, inspecting the joint posterior on \(q-\chi_{\rm eff}\) we see that the true value lies along a ridge of similar probability, making it (roughly) equally as likely for \(\chi_{\text{eff}}\) to be slightly negative, and with slightly larger \(q\). Without better measurement of the spin magnitudes and tilts, a larger range of \(\chi_{\text{eff}}\) is allowed. Since the marginal posterior on \(q\) shows additional support at larger values, we underestimate \(\chi_{\text{eff}}\).

Figure 15: Marginal posterior distributions on mass ratio \(q\) and effective spin \(\chi_{\rm eff}\) for two precessing signals (with \(a_{1}=0.6\)) in the O4 detector network that do not peak at the true value of \(\chi_{\rm eff}\), driven by the \(q-\chi_{\rm eff}\) correlation. True values are shown with black lines, and the distributions are colored by the mass ratio with \(q=0.10\) (left, lime) and \(q=0.56\) (right, green-blue), matching the colors of Figure 5. The \(q=0.10\) run has a network SNR of 14.3, and the \(q=0.56\) run has an SNR of 12.5.

## Appendix F Biased Tilt Posteriors at High Signal-to-Noise Ratios Assuming isotropic priors on the spin magnitudes and tilt angles, the prior on the aligned components of the spins \(\pi(\chi_{i})\) formally diverges to infinity at \(\chi_{i}=0\), i.e. \(\theta_{i}=\pi/2\) (see equation A7 of Ref. [131]). Nested sampling in dynesty takes place in a unit hypercube, requiring the inverse of the cumulative distribution function (CDF) of the prior; however, the CDF of \(\pi(\chi_{i})\) is not analytically invertible. Instead, bilby constructs a numerical approximant to the CDF which insufficiently resolves the divergence at \(\chi_{i}=0\) in \(\pi(\chi_{i})\) for very high SNR signals, resulting in little to no posterior support at \(\theta_{i}=\pi/2\) as observed in Figures 8 and 17.

Figure 17: Marginal posterior distributions on the tilt angle between the spin vector of the heavier black hole and the orbital angular momentum, \(\theta_{1}\), for the quietest signals for a given set of intrinsic parameters \((\tilde{m}_{1},\tilde{m}_{2},a_{1},\theta_{1})\), injected into a network of Cosmic Explorer and the Einstein Telescope. In the top row, we show the prior \(\pi(\theta_{1})\). The truth is indicated with a dashed black line. We color the posterior distributions by the network SNR, and their linestyles reflect the true spin magnitude of the heavier black hole, \(a_{1}\). We observe that \(\theta_{1}\) is well-measured for non-precessing systems, and at low mass ratios. As noted in Section 3.2.2, the lack of posterior support at \(\theta_{1}=\pi/2\) for precessing signals reflects the aligned-spin prior effect investigated in Appendix F.

To verify that this bias in \(\theta_{1}\) is indeed a prior effect, we sample directly in the spin magnitudes and tilts to re-analyze the simulated signal with \(q=0.10\), \(a_{1}=0.9\), and \(\theta_{1}=\pi/2\) in the XG network of Cosmic Explorer and the Einstein Telescope, with a network SNR of 490.3. In Figure 18 we compare the marginal posteriors on \(\theta_{1}\) recovered by sampling in the aligned- and in-plane (\(\chi_{i}^{\perp}\)) spin components with isotropic priors (solid line) and in the spin magnitudes \(a_{i}\) and tilt angles \(\theta_{i}\) with uniform priors (dash-dotted line). We note that, in the range of \(\theta_{1}\) considered very near \(\pi/2\), this is nearly equivalent to an isotropic (sine) prior on \(\theta_{1}\).
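The divergence at \(\chi_{i}=0\) can be made explicit: for a spin magnitude uniform on [0, 1] and an isotropic tilt, the aligned component \(\chi=a\cos\theta\) has density \(\frac{1}{2}\ln(1/|\chi|)\) (cf. equation A7 of [131]). A minimal sketch, with a Monte Carlo check:

```python
import numpy as np

def pi_chi(chi):
    # Density of chi = a*cos(theta) for a ~ U(0,1) and cos(theta) ~ U(-1,1);
    # it diverges logarithmically as chi -> 0, i.e. at theta = pi/2.
    return 0.5 * np.log(1.0 / np.abs(chi))

# Monte Carlo check: local density of a*cos(theta) samples vs. pi_chi.
rng = np.random.default_rng(0)
samples = rng.uniform(0, 1, 10**6) * rng.uniform(-1, 1, 10**6)
for chi0 in (0.5, 0.1, 0.01):
    mc = np.mean(np.abs(samples - chi0) < 0.005) / 0.01
    print(chi0, mc, pi_chi(chi0))  # estimates agree, growing as chi0 -> 0
```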
The posterior generated by sampling in \(\chi_{i}\), \(\chi_{i}^{\perp}\) is repeated from the inset panel in the second row, right column of Figure 8. Here, we see that the posterior generated by sampling in \(a_{i}\), \(\theta_{i}\) is of a similar width as the original result; sampling in \(\chi_{i},\chi_{i}^{\perp}\) we found a 90% credible interval on \(\theta_{1}\) of \(3.3\times 10^{-3}\) radians versus a credible interval of \(2.2\times 10^{-3}\) radians when sampling in the spin magnitude and tilt directly. Importantly, we also successfully recover the true value of \(\theta_{1}\) when using uniform priors on \(a_{i}\), \(\theta_{i}\), indicating that the bias in \(\theta_{1}\) we observed in Figure 8 is a prior effect.

Figure 18: Marginal posterior on \(\theta_{1}\) from two analyses of the simulated source with \(q=0.10\), \(a_{1}=0.9\), and \(\theta_{1}=\pi/2\) (dashed black line) injected into the XG network with an SNR of 490.3. Here, we repeat the posterior shown in Figure 8 (solid line) and compare it to the posterior generated when adopting uniform priors on the spin magnitudes \(a_{i}\) and tilt angles \(\theta_{i}\) (dash-dotted line). The uniform prior on \(\theta_{1}\) is shown in the black solid line. We observe that sampling in the spin magnitudes and tilts directly recovers a posterior of similar width as the posterior recovered when sampling in the aligned (\(\chi_{i}\)) and in-plane spins (\(\chi_{i}^{\perp}\)), and it also recovers the true value of \(\theta_{1}\).

## Appendix G Correlation Between Spin Precession and Binary Geometry At the three lowest mass ratios of the simulated precessing signals considered in this work, \(q=0.07,0.10,0.36\), we observed a correlation between the effective spin precession \(\chi_{p}\) and the zenith angles of the orbital angular momentum \(\vec{L}\) and total angular momentum \(\vec{J}\). To our knowledge, this correlation has not been previously reported. In Figure 19 we show marginal posteriors on \(\chi_{p}\), \(\theta_{JN}\) (zenith angle between \(\vec{J}\) and the line of sight \(\hat{N}\)), \(\theta_{JL}\) (zenith angle between \(\vec{J}\) and \(\vec{L}\)), and \(\iota\) (zenith angle between \(\vec{L}\) and \(\hat{N}\)) for the precessing source with \(q=0.36\) and \(a_{1}=0.9\) in the O4-design sensitivity network, which had a network SNR of 21.2. We compute \(\theta_{JL}\) (also referred to as the opening angle, \(\beta\)) at a reference frequency of 20 Hz using pesummary.gw.conversions.spins.opening_angle(), which in turn implements methods from LALSimulation. Along the bottom row, we see linear correlations between \(\chi_{p}\) and each of these angles. Heuristically, this correlation can be understood as follows: consider a gravitational-wave signal consistent with precession. This necessarily implies a non-zero angle \(\theta_{JL}\) between \(\vec{L}\) and \(\vec{J}\). If we infer \(\theta_{1}\neq 0\), we can explain larger (smaller) values of this angle with larger (smaller) values of \(a_{1}\) which pushes \(\vec{J}\) into (away from) the orbital plane and away from (towards) \(\vec{L}\). Note that in all of the signals we simulated with \(a_{1}>0,a_{2}=0\), we have \(\chi_{p}=a_{1}\sin\theta_{1}\) (c.f. Equation 3.2).7 So, larger \(\chi_{p}\) necessitates larger \(\theta_{JL}\), and thus these quantities are positively correlated.
Alternatively, if we are uncertain of the degree to which we observe precession in a gravitational wave signal, we can fix all of the geometry of our system except for \(\theta_{JL}\) and \(\theta_{JN}\), in particular fixing the component black hole spins and \(\iota\). As \(\vec{L}\) precesses about \(\vec{J}\), if we enlarge the cone of precession by increasing \(\theta_{JL}\) then we must bring this cone closer to the line of sight by decreasing \(\theta_{JN}\) to keep \(\iota\) constant. In this way, \(\theta_{JN}\) and \(\theta_{JL}\) are negatively linearly correlated. Footnote 7: Alternatively, we could increase \(\theta_{1}\) if we infer non-zero spin \(a_{1}\) to achieve the same effect, although the change in \(\chi_{p}\) with respect to \(\theta_{1}\) is much smaller near the true value of \(\theta_{1}=\pi/2\) when \(\chi_{p}\propto\sin\theta_{1}\). Combined, we have a negative linear correlation between \(\chi_{p}\) and \(\theta_{JN}\). Similar heuristic arguments can explain the observed correlations with \(\iota\). In detail, this effect is likely because precession for these systems is in the "tropical region" noted by [79], where the plane of the orbit can wobble so violently that both the top and bottom are observed along the line of sight in the course of precession. This injects additional information on the binary geometry of the system into the gravitational wave signal, which has previously been observed to reduce uncertainty in the measurement of \(\theta_{JN}\) (see e.g. Figure 4 of [95] or Figure 20 of [93]). Figure 19: Marginal posteriors on the effective spin precession \(\chi_{p}\) and the zenith angles \(\theta_{JN}\) (between the total angular momentum and line of sight), \(\theta_{JL}\) (between the total and orbital angular momenta), and \(\iota\) (between the orbital angular momentum and line of sight), from our analysis of the simulated \(q=0.36\), \(a_{1}=0.9\) precessing signal in the O4-design sensitivity network. True values are shown in black lines.
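The heuristic above can be illustrated with rigid Newtonian geometry, \(\vec{J}=\vec{L}+\vec{S}_{1}\) with \(S_{1}=a_{1}m_{1}^{2}\) in geometric units. In the sketch below, the magnitude of \(\vec{L}\) is an arbitrary placeholder, so only the monotonic trend (not the measured values) is meaningful:

```python
import math

def theta_jl(a1, theta1, m1, ang_mom_l):
    # Opening angle between J = L + S1 and L for a single spinning
    # component; ang_mom_l, the magnitude of L, is a free parameter here.
    s1 = a1 * m1 ** 2
    return math.atan2(s1 * math.sin(theta1), ang_mom_l + s1 * math.cos(theta1))

# With theta1 = pi/2 (fully in-plane spin, so chi_p = a1), theta_JL grows
# monotonically with a1, matching the positive chi_p-theta_JL correlation:
for a1 in (0.6, 0.8, 0.9):
    print(a1, theta_jl(a1, math.pi / 2, 1.0, 5.0))
```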
2309.11056
A Verified Cost Analysis of Joinable Red-Black Trees
Ordered sequences of data, specified with a join operation to combine sequences, serve as a foundation for the implementation of parallel functional algorithms. This abstract data type can be elegantly and efficiently implemented using balanced binary trees, where a join operation is provided to combine two trees and rebalance as necessary. In this work, we present a verified implementation and cost analysis of joinable red-black trees in $\textbf{calf}$, a dependent type theory for cost analysis. We implement red-black trees and auxiliary intermediate data structures in such a way that all correctness invariants are intrinsically maintained. Then, we describe and verify precise cost bounds on the operations, making use of the red-black tree invariants. Finally, we implement standard algorithms on sequences using the simple join-based signature and bound their cost in the case that red-black trees are used as the underlying implementation. All proofs are formally mechanized using the embedding of $\textbf{calf}$ in the Agda theorem prover.
Runming Li, Harrison Grodin, Robert Harper
2023-09-20T04:29:05Z
http://arxiv.org/abs/2309.11056v2
# A Verified Cost Analysis of Joinable Red-Black Trees ###### Abstract. Ordered sequences of data, specified with a _join_ operation to combine sequences, serve as a foundation for the implementation of parallel functional algorithms. This abstract data type can be elegantly and efficiently implemented using balanced binary trees, where a join operation is provided to combine two trees and rebalance as necessary. In this work, we present a verified implementation and cost analysis of joinable red-black trees in **c**alf, a dependent type theory for cost analysis. We implement red-black trees and auxiliary intermediate data structures in such a way that all correctness invariants are intrinsically maintained. Then, we describe and verify precise cost bounds on the operations, making use of the red-black tree invariants. Finally, we implement standard algorithms on sequences using the simple join-based signature and bound their cost in the case that red-black trees are used as the underlying implementation. All proofs are formally mechanized using the embedding of **c**alf in the Agda theorem prover. ## 1. Introduction Ordered sequences of data are essential to the efficient implementation of parallel functional algorithms (Acar and Blelloch, 2019). One common presentation of the signature for ordered sequences containing elements of type \(\alpha\) is given in Fig. 1. This signature provides an abstract type \(\operatorname{seq}_{\alpha}\) along with three operations: 1. A constructor, empty, that represents the empty sequence containing no data of type \(\alpha\). 2. A constructor, join, that appends two sequences with an element of type \(\alpha\) in between. 3. A destructor, \(\operatorname{rec}_{\rho}\), that recurs over a sequence to produce an element of type \(\rho\). An empty sequence is mapped to the argument of type \(\rho\); a sequence \(\operatorname{join}(s_{1},a,s_{2})\) is destructed using the argument of type \[\operatorname{seq}_{\alpha}\to\rho\to\alpha\to\operatorname{seq}_{\alpha}\to \rho\to\rho,\] plugging \(s_{1}\) and \(s_{2}\) in for the sequence arguments, \(a\) in for the \(\alpha\) argument, and the recursive calls in for the \(\rho\) arguments. These three operations give rise to implementations of all algorithms on ordered sequences of data; some examples are shown in Fig. 2. Many implementations of this signature are possible, using data structures such as lists and trees. When trees are used, the data in the sequence is taken to be the in-order traversal of the tree. For parallel efficiency, balanced trees are a sensible choice (Blelloch and Greiner, 1995): if the recursor \(\operatorname{rec}_{\rho}\) performs both recursive calls in parallel, it is worthwhile to rebalance during a join in preparation for an efficient use of \(\operatorname{rec}_{\rho}\) later. As studied by Blelloch et al. (2016, 2022) and Sun (2019), when sequences are implemented as balanced binary trees, implementations of common auxiliary functions on sequences have efficient sequential and parallel cost. For example, sequences may be used as an implementation of finite sets when the stored data is sorted. Then, using empty, join, and \(\operatorname{rec}_{\rho}\), bulk set operations such as union and intersection can be implemented with polylogarithmic span. ### Red-black trees Here, we consider the _red-black tree_ (RBT) data structure (Guibas and Sedgewick, 1978; Okasaki, 1999), a flavor of binary search tree with an elegant functional description and cost analysis. 
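Before turning to red-black trees, the interface of Figs. 1 and 2 can be illustrated with a deliberately naive Python sketch. Only the shapes of empty, join, and \(\operatorname{rec}_{\rho}\) are faithful to the paper; the representation is unbalanced and cost-free, purely for illustration.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class Seq:
    # Empty when mid is None; otherwise join(left, mid, right).
    left: Optional["Seq"] = None
    mid: Any = None
    right: Optional["Seq"] = None

def empty() -> Seq:
    return Seq()

def join(s1: Seq, a: Any, s2: Seq) -> Seq:
    # A balanced implementation (e.g. red-black trees) would rebalance here.
    return Seq(s1, a, s2)

def rec(s: Seq, z: Any, f: Callable) -> Any:
    # f : (s1, r1, a, s2, r2) -> result, mirroring rec_rho of Fig. 1.
    if s.mid is None:
        return z
    return f(s.left, rec(s.left, z, f), s.mid, s.right, rec(s.right, z, f))

# In-order traversal, one auxiliary function expressible via rec:
s = join(join(empty(), 1, empty()), 2, join(empty(), 3, empty()))
print(rec(s, [], lambda s1, r1, a, s2, r2: r1 + [a] + r2))  # [1, 2, 3]
```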
Figure 1. Signature for ordered sequences containing elements of type \(\alpha\). Figure 2. Sample implementations of auxiliary functions on sequences, in terms of empty, join, and \(\operatorname{rec}_{\rho}\).

For our purposes, a binary tree is an inductive data structure where each inhabitant is either a _leaf_ node carrying no data or a _node_ carrying a key and two other binary tree children. A red-black tree is a binary tree satisfying the following invariants: 1. every node is colored either red or black. 2. every leaf is considered black. 3. both children of a red-colored node must be colored black. 4. the number of black nodes on any path from the root to a leaf (excluding the leaf), called the _black height_ of the tree, is the same. Following Blelloch et al. (2016, 2022), we do not require that the root of a red-black tree be colored black. In Fig. 3, we show a sample red-black tree with black height of 1. Traditionally, red-black trees have been used as binary search trees, storing data in sorted order. Then, the primitive operations are insertion, lookup, and deletion, all of which have similar implementations. However, as discussed by Blelloch et al. (2016, 2022), this causes algorithms implemented using red-black trees to have poor parallel efficiency, since operations must be performed one-at-a-time. Instead, _op. cit._, a join operation for red-black trees is given, combining two trees with a middle key and rebalancing as necessary to meet the red-black invariants and preserve the in-order traversal ordering. In Fig. 4, we show two sample red-black trees \(t_{1}\) and \(t_{2}\) which, when joined with \(x_{5}\) in the middle, produce the tree \(t\). It is well-known that red-black trees intrinsically satisfying the above invariants can be defined inductively (Licata, 2013; Wang et al., 2017; Weirich, 2014): 1. A black-colored RBT with black height 0, a leaf, may always be formed. 2. Let \(t_{1}\) and \(t_{2}\) be black-colored RBTs with black height \(n\), and let \(a\) be a key. Then, a red-colored RBT with black height \(n\) may be formed. 3. Let \(t_{1}\) and \(t_{2}\) be RBTs with black height \(n\), and let \(a\) be a key. Then, a black-colored RBT with black height \(n+1\) may be formed. We will use this presentation of red-black trees in our definitions and analysis. ### Mechanized cost analysis in calf The cost-aware logical framework (**calf**) (Niu et al., 2022) is a dependent type theory for verifying the sequential and parallel cost and correctness of algorithms. **calf** is based on the call-by-push-value paradigm (Levy, 2003), separating computations (which may have an associated cost) from values. Computation types are elements of the universe \(\mathrm{tp}^{\oplus}\), whereas value types are elements of the universe \(\mathrm{tp}^{+}\). Function types are computation types, where the input type is a value type and the output type is a computation type. In this setting, the signature for ordered sequences from Fig. 1 is augmented to include \(\mathrm{U}(-)\) and \(\mathrm{F}(-)\) type constructors, explicitly moving between value and computation types; this change is rendered in Fig. 5. In **calf**, the programmer includes cost annotations within algorithms, denoting an abstract notion of cost to later analyze. In this work, we use the usual sequential-and-parallel cost model (Niu et al., 2022, §6), where a cost is a pair of the sequential work and the parallel span as natural numbers.
To annotate a program with \(c\) (sequential and parallel) cost, we write \(\mathbf{step}\,c\).

Figure 3. Sample red-black tree with black height of 1. Leaves are depicted as black squares, and nodes are depicted as red or black circles annotated with a key.

Figure 4. Two red-black trees \(t_{1}\) and \(t_{2}\) along with the tree \(t\) produced when they are joined with \(a=x_{5}\) in the middle.

Figure 5. Signature for ordered sequences containing elements of value type \(\alpha:\mathrm{tp}^{+}\), for computation types \(\rho:\mathrm{tp}^{\ominus}\).

Originally, Niu et al. (2022) studied the implementation of sequential and parallel algorithms on concrete data structures in **calf**. In subsequent work, Grodin and Harper (2023) consider the analysis of sequential-use data structures in this setting. Here, we begin to investigate the implementation and analysis of parallel data structures in **calf**.

### Contribution

In this work, we present an implementation of sequences using joinable red-black trees in **calf**. The correctness of our implementation is intrinsically verified, and we perform a separate precise cost analysis in terms of the number of recursive calls. Following Blelloch et al. (2016, 2022), we implement a variety of sequence functions generically in the given primitives, and we analyze the cost of a simple function in the case that the underlying implementation of the sequence type is the red-black tree data structure. Our implementation and proofs are fully mechanized in Agda (Norell, 2009), in which **calf** is embedded (Niu et al., 2022). We implement the mechanization of sequences and red-black trees in Examples/Sequence.agda and the corresponding Examples/Sequence directory.

### Related work

Join-based balanced binary trees have been studied extensively by Blelloch et al. (2016, 2022), and the joinable framework is unified by Sun (2019). The correctness of red-black trees with their traditional sequential operations, such as single-element insertion, has been intrinsically (and extrinsically) verified in a variety of verification environments, including Agda (Licata, 2013; Weirich, 2014), Coq (Appel, 2011, 2023), and Isabelle (Nipkow, 2023). However, these systems do not come equipped with a notion of cost, preventing the verification of the efficiency of these algorithms:

> Coq does not have a formal time-cost model for its execution, so we cannot verify [the] logarithmic running time [of insertion and lookup on red-black trees] in Coq. (Appel, 2023)

In another direction, the cost analysis of sequential operations on red-black trees has been verified in a resource-aware type theory (Wang et al., 2017). However, this work does not verify the correctness of the data structure. In this work, we verify both the correctness and cost of joinable red-black trees using an abstract cost model in **calf**; further explanation of and examples in the **calf** framework are presented in the original work of Niu et al. (2022).

## 2. Intrinsically-correct definitions

In this section, we describe a binary tree data type that structurally guarantees that the red-black invariants hold. Then, we describe how it would be used to implement the sequence signature of Fig. 5; of particular interest is the implementation of the join algorithm. Since our definitions will be well-typed, they will be intrinsically correct.
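Before turning to the **calf** definitions, the inductive presentation from Section 1.1 can be approximated in any language with GADTs; the following Haskell sketch is hypothetical and, unlike our actual definition, omits the in-order traversal index.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Color = Red | Black
data Nat   = Z | S Nat

-- RBT c n a: a red-black tree with root color c, black height n, keys a.
data RBT (c :: Color) (n :: Nat) a where
  Leaf   :: RBT 'Black 'Z a
  -- A red node requires two black children of equal black height n.
  RedN   :: RBT 'Black n a -> a -> RBT 'Black n a -> RBT 'Red n a
  -- A black node allows children of any color and increments the height.
  BlackN :: RBT cl n a -> a -> RBT cr n a -> RBT 'Black ('S n) a

-- The "total space": a tree of some color and some black height,
-- corresponding to the type rbt in the paper.
data SomeRBT a where
  SomeRBT :: RBT c n a -> SomeRBT a
```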
We work in **calf**, an extension of call-by-push-value, in which we distinguish value types in universe \(\mathrm{tp}^{+}\) from computation types in universe \(\mathrm{tp}^{\ominus}\). First, we define red-black trees as an indexed inductive type, as described in Section 1.1, guaranteeing that the red-black invariants are maintained; this definition of \(\mathrm{irbt}_{\alpha}\) is given in Fig. 6. We include an index storing the in-order traversal of the tree that we will use to guarantee that well-typed definitions implement the desired behavior, specified in terms of lists. Additionally, we define the type \(\mathrm{rbt}_{\alpha}\) as the total space of the type family \(\mathrm{irbt}_{\alpha}\), storing an arbitrary color, black height, and in-order traversal along with an indexed red-black tree with those parameters.

Figure 6. Definition of indexed red-black trees as an indexed inductive type.

Given these definitions, the goal is to implement the sequence signature of Fig. 5. We choose \(\operatorname{seq}_{\alpha}=\mathrm{rbt}_{\alpha}\), define \(\texttt{empty}=\operatorname{ret}(\texttt{leaf})\), and naturally implement \(\texttt{rec}_{\rho}\) via the induction principle for \(\mathrm{rbt}_{\alpha}\). It remains, then, to define a computation
\[\texttt{join}:\mathrm{rbt}_{\alpha}\to\alpha\to\mathrm{rbt}_{\alpha}\to\mathrm{F}(\mathrm{rbt}_{\alpha}),\]
which we consider in the remainder of this section.

### The join algorithm

The algorithm itself will follow Blelloch et al. (2016), although we must ensure that the intrinsic structural properties are valid.1 We recall its definition in Algorithm 1, adapting to our notation; it is defined in terms of auxiliary functions JoinRight (and the symmetric JoinLeft, which we henceforth elide), which we will consider in the next section. Informally, the algorithm proceeds as follows:

Footnote 1: We omit a case listed by Blelloch et al. (2022) that our verification shows is impossible to reach.

1. If both trees have equal black height, simply construct a new node without rebalancing (Fig. 7). If possible, a red node is preferable.
2. Otherwise, without loss of generality, assume \(t_{1}\) has a larger black height than \(t_{2}\). Then, use the JoinRight auxiliary function to place \(t_{2}\) on the right spine of \(t_{1}\), rebalancing as necessary. The process may cause a single red-red violation at the root of the result tree. In that case, recolor the root to black (Fig. 8); otherwise, return the valid tree.

This algorithm performs no recursive calls aside from those within JoinRight, so no cost annotations are required by our cost model. It remains, then, to define the type and implementation of JoinRight.

### The JoinRight auxiliary algorithm

As discussed previously, the JoinRight algorithm has a relaxed specification: rather than guaranteeing a valid red-black tree, it allows a single red-red violation between the root of the result and its right child to propagate upwards. We allow this violation only in the case that the first tree had a red root to begin with. In order to represent this condition, we define an auxiliary data structure, an _almost-right red-black tree_, abbreviated arrbt, in Fig. 9; our terminology is inspired by the "almost tree" of Weirich (2014). A well-formed red-black tree always counts as an almost-right red-black tree; a red-colored almost-right red-black tree may also be a violation, with a black-colored left child, key data, and another red-colored right child.
Notably, a red-red violation for an almost-right red-black tree can only happen on the right spine, and only when the first tree originally had a red root. We thereby define arrbt to be indexed by another color parameter called leftColor, representing the color of the left tree from which it was created. Therefore, when a violation happens, the leftColor must be red.

Figure 8. Recoloring the root of a result tree from JoinRight due to a red-red violation on the right, indicated by a dashed line.

Figure 9. Definition of almost-right red-black trees, allowing for a red-red violation on the right when the color parameter (the color of the left tree from which it was created) is red, as an indexed inductive type.

Given this definition, we wish to define a computation
\[\begin{array}{l}\textsc{JoinRight}:\\ \quad(\operatorname{irbt}_{\alpha}\;y_{1}\;n_{1}\;l_{1})\;(a:\alpha)\;(\operatorname{irbt}_{\alpha}\;y_{2}\;n_{2}\;l_{2})\to\\ \quad n_{1}>n_{2}\to\\ \quad\mathsf{F}\big(\operatorname{arrbt}_{\alpha}\;y_{1}\;n_{1}\;(l_{1}\mathbin{+\!\!+}[\,a\,]\mathbin{+\!\!+}l_{2})\big),\end{array}\]
where \(\mathbin{+\!\!+}\) denotes list concatenation, so that the in-order traversal of the result is that of \(t_{1}\), followed by \(a\), followed by that of \(t_{2}\). Assuming this specification, the correctness of join follows. If \(n_{1}>n_{2}\), then join invokes JoinRight, obtaining an almost-right red-black tree of black height \(n_{1}\); if a red-red violation occurs at its root, recoloring the root to black (Fig. 8) yields a valid red-black tree.
If \(n_{1}<n_{2}\), then a symmetric argument can be made. If \(n_{1}=n_{2}\), then the two trees may be joined by a red node if both are black or a black node otherwise. In either case, it forms a valid red-black tree.

Now, it remains to give the JoinRight algorithm to fulfill this specification. Here, we diverge slightly from Blelloch et al. (2016, 2022) for ease of verification. The algorithm presented _op. cit._ allows for a triple-red violation on the right spine, albeit only in the base case. Moreover, as noted by Sun (2019, §3.2.2), the triple-red issue must be resolved one recursive call after the base case. Therefore, we trade the more concise code and more complex specification for slightly more verbose code with a simpler specification. We give our definition of JoinRight in Algorithm 2. We claim that JoinRight is a well-typed program with exhaustive casework, by the definitions of \(\operatorname{irbt}_{\alpha}\) and \(\operatorname{arrbt}_{\alpha}\). Although our Agda mechanization verifies this fact, we include an informal proof below.

**Lemma 2.2**.: _For all appropriate inputs \(t_{1}\), \(a\), and \(t_{2}\), the call JoinRight\((t_{1},a,t_{2})\) returns an almost-right red-black tree with black height \(n_{1}\). In other words:_

1. _If \(t_{1}\) is colored black, then JoinRight\((t_{1},a,t_{2})\) is a valid red-black tree with the same black height as \(t_{1}\)._
2. _If \(t_{1}\) is colored red, then JoinRight\((t_{1},a,t_{2})\) is an almost-right red-black tree (valid or with a red-red violation) with the same black height as \(t_{1}\)._

Proof.: We prove both items simultaneously by induction on \(t_{1}\), following the structure of the code.

1. If \(t_{1}\) is colored red, we must prove Item 2, and its children \(t_{1,1}\) and \(t_{1,2}\) must both be colored black. Moreover, \(n_{1}=n_{1,1}=n_{1,2}>n_{2}\). By induction, the result \(t^{\prime}\) of the recursive call JoinRight\((t_{1,2},a,t_{2})\) is a valid red-black tree with black height \(n_{1,2}\). We always return a red node whose left child is the black subtree \(t_{1,1}\) and whose right child is \(t^{\prime}\), which could be either red or black. Depending on the color of \(t^{\prime}\), we will either get a valid red tree or a red-red violation on the right spine, both of which are allowed as the result for Item 2.
2. If \(t_{1}\) is colored black, we must prove Item 1. If \(n_{1}=n_{2}+1\) and \(t_{2}\) is colored red, then \(n_{1}=n_{2,1}+1=n_{2,2}+1\). Therefore, the returned tree is valid with black height \(n_{1}\).
3. This case is similar to the previous case, but \(t_{2}\) is colored black. If \(t_{1,2}\) is colored red, then \(n_{1,1}=n_{1,2}=n_{2,1}=n_{2,2}=n_{2}\). Therefore, the returned tree is valid with black height \(n_{1}\).
4. This case is similar to the previous case, but \(t_{1,2}\) is colored black. Thus, \(n_{1,1}=n_{1,2}=n_{1,2,1}=n_{1,2,2}=n_{2}\), so the returned tree is valid with black height \(n_{1}\).
5. If \(t_{1}\) is colored black, we must prove Item 1. Suppose \(n_{1}>n_{2}+1\). Then, \(n_{1,1}+1=n_{1,2}+1=n_{1}>n_{2}\). Regardless of the color of \(t_{1,2}\), the inductive hypothesis applies. If the result \(r\) of the recursive call is a valid red-black tree, then \(t_{1,1}\), \(a_{1}\), and \(r\) can be combined at a black node to create a valid red-black tree with black height \(n_{1}\).
6. This case is similar to the previous case, but the result \(r\) indicates a red-red violation between the root and its right child. Then, a left-rotation is performed to give back a valid red-colored red-black tree with black height \(n_{1}\).

In every case, the in-order traversal of the tree is clearly preserved, by inspection of the left-to-right order of the subtrees and keys.

Thus, we have described the join algorithm on red-black trees and intrinsically verified its correctness.
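To convey the shape of the algorithm outside the type-theoretic setting, here is a plain, non-indexed Haskell sketch in the style of the classical joinRight of Blelloch et al. (2016). It is not our Algorithm 2 (which restructures the casework to avoid the triple-red situation), and it recomputes black heights at runtime rather than tracking them in types.

```haskell
data Col    = R | B deriving Eq
data Tree a = E | T Col (Tree a) a (Tree a)

isBlack :: Tree a -> Bool
isBlack E           = True
isBlack (T c _ _ _) = c == B

-- Black height, measured along the left spine (well-defined on valid trees).
bht :: Tree a -> Int
bht E           = 0
bht (T c l _ _) = bht l + (if c == B then 1 else 0)

-- May leave one red-red violation at the root of the result.
joinRight :: Tree a -> a -> Tree a -> Tree a
joinRight tl k tr
  | isBlack tl && bht tl == bht tr = T R tl k tr
joinRight (T c l a r) k tr =
  case T c l a (joinRight r k tr) of
    -- Black parent over two reds on the right spine:
    -- recolor and rotate left to restore the invariants.
    T B l' a' (T R rl b (T R x y z)) -> T R (T B l' a' rl) b (T B x y z)
    t'                               -> t'
joinRight E _ _ = error "unreachable: requires bht tl >= bht tr"

join :: Tree a -> a -> Tree a -> Tree a
join tl k tr
  | bht tl > bht tr          = fixRoot (joinRight tl k tr)
  | bht tl < bht tr          = error "joinLeft: symmetric, elided as in the paper"
  | isBlack tl && isBlack tr = T R tl k tr
  | otherwise                = T B tl k tr
  where
    -- Recolor a red root over a red child (the Fig. 8 repair).
    fixRoot (T R l a r) | not (isBlack l) || not (isBlack r) = T B l a r
    fixRoot t = t
```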
Based on the correctness of JoinRight, we also get a straightforward bound on the black height of the tree produced by join, matching the result of Blelloch et al. (2016, 2022).

**Theorem 2.3**.: _Let \(t_{1}\) and \(t_{2}\) be red-black trees with black heights \(n_{1}\) and \(n_{2}\), respectively. Then, the black height of the red-black tree returned by \(\textsc{join}(t_{1},a,t_{2})\) is either \(\max(n_{1},n_{2})\) or \(1+\max(n_{1},n_{2})\)._

Theorem 2.3 does not affect the cost analysis of join, but it does impact the cost analysis of algorithms that use join; therefore, it is also mechanized in the implementation. For the purpose of correctness analysis, the cost annotations did not play a role. In the next section, we will state and prove cost bounds on the join and JoinRight algorithms.

## 3. Cost analysis

To analyze the cost of algorithms in **calf**, we attempt to bound the number of calls to **step**. In the subsequent development, we will count informally; in our mechanization, we use the definition isBounded\((A;e;c)\) and associated lemmas from the **calf** standard library (Niu et al., 2022). From this section onward, we annotate all mechanized results with their name as defined in the Agda implementation using the typewriter font, _e.g._ joinRight/is-bounded.

### Cost of JoinRight

If a red-black tree has black height \(n\), then its true height is at most \(2n+1\): on top of every black node, an additional red node may (optionally) be placed without affecting the black height. Just as an almost-right red-black tree weakens the invariants in the case of a red root, so too must the cost analysis weaken the cost bound given a red root.

**Theorem 3.1** (joinRight/is-bounded).: _Let \(t_{1}\), \(a\), and \(t_{2}\) be valid inputs to JoinRight. Then, the cost of JoinRight\((t_{1},a,t_{2})\) is bounded by \(1+2(n_{1}-n_{2})\)._

Proof.: We prove a strengthened claim:

1. If \(t_{1}\) is colored red, the cost of JoinRight\((t_{1},a,t_{2})\) is bounded by \(1+2(n_{1}-n_{2})\).
2. If \(t_{1}\) is colored black, the cost of JoinRight\((t_{1},a,t_{2})\) is bounded by \(2(n_{1}-n_{2})\).

The desired result follows immediately in both cases. Following the structure of JoinRight in Algorithm 2, we go by induction on \(t_{1}\).

1. Since \(t_{1}\) is colored red, \(t_{1,2}\) is black with \(n_{1}=n_{1,2}\), and we must prove Item 1. This case incurs \(1\) cost in addition to the cost of the recursive call.
The cost of the recursive call is bounded by \(2(n_{1,2}-n_{2})=2(n_{1}-n_{2})\). Therefore, the cost of the entire computation is bounded by \(1+2(n_{1}-n_{2})\), as desired.

2. This case incurs zero cost.
3. This case incurs zero cost.
4. This case incurs zero cost.
5. Since \(t_{1}\) is colored black, \(n_{1}=n_{1,2}+1\), and we must prove Item 2. This case incurs \(1\) cost in addition to the cost of the recursive call. The color of \(t_{1,2}\) is unknown, but in either case the cost of the recursive call is bounded by \(1+2(n_{1,2}-n_{2})\). Therefore, the cost of the entire computation is bounded by \(2+2(n_{1,2}-n_{2})=2((n_{1,2}+1)-n_{2})=2(n_{1}-n_{2})\), as desired.
6. This case is the same as the previous case.

In all cases, the desired result holds.

### Cost of join

Using Theorem 3.1, we may now reason about the cost of the full join implementation of Algorithm 1. For notational convenience, we write
\[\overline{x_{1}}=\max(x_{1},x_{2}),\qquad\overline{x_{2}}=\min(x_{1},x_{2}),\]
since join behaves symmetrically depending on which tree is larger.

**Theorem 3.2** (join/is-bounded).: _For all \(t_{1}\), \(a\), and \(t_{2}\), the cost of \(\textsc{join}(t_{1},a,t_{2})\) is bounded by \(1+2(\overline{n_{1}}-\overline{n_{2}})\)._

Proof.: If \(t_{1}\) and \(t_{2}\) have the same black height, no cost is incurred, so the bound is trivially met. Otherwise, the result follows immediately from Theorem 3.1.

This validates the claim by Blelloch et al. (2022, §4.2) that the cost of join on red-black trees is in \(\mathcal{O}\left(|h(t_{1})-h(t_{2})|\right)\), where \(h(t)\) is the height of tree \(t\). Since black height is a property only understood in the implementation, rather than the abstract sequence interface, we wish to publicly characterize the cost of join in terms of the lengths of the involved sequences. To accomplish this, we bound the black height of a red-black tree in terms of the overall size of the tree, which we write \(|t|\) for a tree \(t\).

**Lemma 3.3** (nodes/upper-bound).: _For any red-black tree \(t\) with black height \(n\), we have_
\[n\leq\left\lceil\log_{2}(1+|t|)\right\rceil.\]

**Lemma 3.4** (nodes/lower-bound).: _For any red-black tree \(t\) with black height \(n\), we have_
\[\left\lfloor\frac{\left\lceil\log_{2}(1+|t|)\right\rceil-1}{2}\right\rfloor\leq n.\]

Using these lemmas, we may give a user-facing description of the cost of join.

**Theorem 3.5** (join/is-bounded/nodes).: _Let \(t_{1}\), \(a\), and \(t_{2}\) be valid inputs to join. Then, the cost of \(\textsc{join}(t_{1},a,t_{2})\) is bounded by_
\[1+2\left(\left\lceil\log_{2}(1+\overline{|t_{1}|})\right\rceil-\left\lfloor\frac{\left\lceil\log_{2}(1+\overline{|t_{2}|})\right\rceil-1}{2}\right\rfloor\right).\]

This matches the expected cost bound,
\[\mathcal{O}\left(\left\lceil\log_{2}\left(\overline{|t_{1}|}/\overline{|t_{2}|}\right)\right\rceil\right).\]

## 4. Case study: algorithms on sequences

An essential part of the work of Blelloch et al. (2016, 2022) and Sun (2019) is showing how an implementation of the sequence signature gives rise to efficient implementations of other common algorithms on sequences when sequences are implemented as balanced trees. Here, we consider the implementation and cost analysis of some such algorithms. We implement each algorithm generically in terms of the sequence interface given in Fig. 5. However, for the purpose of cost analysis, we break abstraction, inlining the sequence definitions. Additionally, for readability, we replace uses of \(\textsc{rec}_{\rho}\) with a more familiar pattern matching notation.

### Sequence sum

One simple algorithm on a sequence of natural numbers is a parallel sum, adding up the elements in linear work and logarithmic span with respect to the length of the sequence when counting recursive calls. We give an implementation
\[\textsc{Sum}:\operatorname{seq}_{\mathrm{nat}}\to\mathrm{F}(\mathrm{nat})\]
in Algorithm 3, adapting the definition from Fig. 2 to the call-by-push-value setting and adding cost instrumentation and parallelism. It goes by recursion using \(\textsc{rec}_{\mathrm{F}(\mathrm{nat})}\). In the base case, \(0\) is returned. In the inductive case, it recursively sums both subsequences _in parallel_ and then returns the sum of the results and the middle datum.
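As an illustration only (our actual Algorithm 3 is a **calf** program over \(\mathrm{rbt}_{\alpha}\)), the same computation can be sketched over the plain Tree type and toy Instr cost monad from the earlier sketches, charging one unit per node visited.

```haskell
-- Parallel sum over the plain Tree sketch: one step per node,
-- with the two subtrees summed using par.
sumTree :: Tree Int -> Instr Int
sumTree E           = pure 0
sumTree (T _ l a r) = do
  step 1
  (x, y) <- par (sumTree l) (sumTree r)
  pure (x + a + y)
```

Under the toy model, the work is the number of nodes and the span is one unit per level along the deepest path, mirroring the bounds of Theorem 4.1 below.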
When the implementation of sequences is specialized to red-black trees, we achieve the desired cost bound.

**Theorem 4.1** (sum/bounded).: _For all red-black trees \(t\), the cost of \(\textsc{Sum}(t)\) is bounded by_

* \(|t|\) _work (sequential cost) and_
* \(1+2\big{\lceil}\log_{2}(1+|t|)\big{\rceil}\) _span (idealized parallel cost)._

Proof.: The sequential bound is immediate by induction. The parallel bound is shown using the black height, showing a bound of \(1+2n\) (and a strengthened bound of \(2n\) in the case that the tree is black) by induction. Then, Lemma 3.3 translates the bound from black height to the size of the tree.

This matches the result of Blelloch et al. (2016, 2022): linear work and logarithmic span.

### Finite set functions

Blelloch et al. (2016, 2022) consider implementations of standard functions on finite sets using balanced trees. Here, we briefly show how such implementations could be provided in terms of the basic sequence signature of Fig. 5. In order to implement a finite set as a sequence, we assume the element type \(\alpha\) is equipped with a total order. Then, standard functions on finite sets may be implemented using the recursor on sequences. In Fig. 10, we provide generic implementations of some examples:

1. The Split function splits a sorted sequence at a designated value, providing the elements of the sequence less than and greater than the value and, if it exists, the equivalent value.
2. The Insert function inserts a new value into the correct position in a sorted sequence, simply splitting the sequence at the desired value and joining the two sides around the new value.
3. The Union function takes the union of two sorted sequences, combining their elements to make a new sorted sequence.

Figure 10. Sample implementations of functions on sequences that use empty, join, and \(\operatorname{rec}_{\rho}\).

Blelloch et al. study the efficiency of these and other similar algorithms, showing that implementations in terms of empty, join, and \(\operatorname{rec}_{\rho}\) have comparable efficiency to bespoke definitions. We include the implementations of these algorithms in our mechanization, but we leave their cost and correctness verification to future work.
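In the same hypothetical Haskell vocabulary as the earlier sketches (the names splitSeq and insertSeq are ours, not those of Fig. 10), Split and Insert can be written against the Seq class. Note that laziness plays the role of the call-by-push-value suspension of recursive results here, so only the branch that is needed at each node is ever computed.

```haskell
-- Split a sorted sequence at x: elements below x, x itself if present,
-- and elements above x. Only one recursive result is forced per node.
splitSeq :: (Ord a, Seq s) => a -> s a -> (s a, Maybe a, s a)
splitSeq x = rec (empty, Nothing, empty) step
  where
    step s1 r1 a s2 r2 = case compare x a of
      EQ -> (s1, Just a, s2)
      LT -> let (lo, m, hi) = r1 in (lo, m, join hi a s2)
      GT -> let (lo, m, hi) = r2 in (join s1 a lo, m, hi)

-- Insert by splitting and rejoining around the new value.
insertSeq :: (Ord a, Seq s) => a -> s a -> s a
insertSeq x s = let (lo, _, hi) = splitSeq x s in join lo x hi
```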
## 5. Conclusion

In this work, we presented an implementation of the join algorithm on red-black trees (Blelloch et al., 2016, 2022) whose correctness is intrinsically verified due to structural invariants within the type definition. Our implementation was given in **calf**, instrumented with cost annotations to count the number of recursive calls performed; using the techniques developed by Niu et al. (2022), we gave a formally verified, precise cost bound for the join algorithm. As noted by Blelloch et al. (2016, 2022), balanced trees are an appealing choice for the implementation of persistent sequences. Since the join-based presentation of sequences provides an induction principle over the underlying balanced trees, where call-by-push-value suspends the results of recursive calls, we were able to implement standard functional algorithms on sequences and, following Blelloch et al., prove their efficient sequential and parallel cost bounds.

### Future work

In this work, we begin to study parallel-ready data structures. This suggests a myriad of directions for future work.

_Full sequence library._ A natural next step following from this work would be the verification of correctness conditions and cost bounds on other algorithms included in persistent sequence libraries.

_Finite sets and dictionaries._ Another common use case of balanced trees, as explored in depth by Blelloch et al. (2016, 2022), is the implementation of finite sets and dictionaries by imposing and maintaining a total order on the data stored in the tree. In Section 4.2, we briefly discuss the implementation of finite sets using sorted sequences; as future work, we hope to extend this development to a full-scale finite set library with cost and correctness verification.

_Amortized complexity._ Although we study the binary join operation on red-black trees in this work, more common historically is the single-element insertion operation. Once the desired location for the new element is found, insertion into the tree along with any necessary rebalancing has asymptotically constant amortized cost [13]. We expect this result could be verified similarly to other amortized analyses in **calf** [14].

_Various balancing schemes._ Blelloch et al. (2016, 2022) study a variety of tree balancing schemes, including AVL trees, weight-balanced trees, and treaps. All of these balancing schemes match the sequence signature, as well; we hope to implement and verify these schemes in future work. Unlike red-black trees, some of these schemes cannot be implemented purely functionally, _e.g._ treaps. This suggests an extension of **calf** that can better take effects into account.

_Modular analysis of large-scale algorithms._ Many functional algorithms are implemented based on sequences, finite sets, and dictionaries [1]. However, in this work, we were forced to reveal the implementation of sequences as red-black trees in order to analyze the efficiency of algorithms implemented generically, such as Sum. In general, such analyses may even depend on particular hidden invariants within an implementation type; thus, we anticipate that analysis of larger-scale algorithms in this fashion would be intractable. Going forward, we hope to further develop a theory of modularity for algorithm cost, allowing algorithms implemented in terms of abstract data types to be analyzed without fully revealing the implementation of the abstraction.

## Acknowledgments

We are grateful to Guy Blelloch for insightful discussions and advice about this work. This work was supported in part by the United States Air Force Office of Scientific Research under grant number FA9550-21-0009 (Tristan Nguyen, program manager) and the National Science Foundation under award number CCF-1901381. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the AFOSR or the NSF.
2303.00100
Asymptotic total ergodicity for actions of $\mathbb{F}[t]$ and polynomial configurations over finite fields and rings
We obtain new combinatorial results about polynomial configurations in large subsets of finite fields and rings by utilizing the phenomenon of asymptotic total ergodicity (previously studied for actions of $\mathbb{Z}$ on modular rings $\mathbb{Z}/N\mathbb{Z}$ in [Bergelson--Best, 2023]) in the context of actions of the polynomial ring $\mathbb{F}[t]$ over a finite field $\mathbb{F}$. Drawing inspiration from the well-understood limiting behavior of polynomial ergodic averages in totally ergodic systems, we show that the natural action of $\mathbb{F}[t]$ on a sequence of quotient rings $\mathbb{F}[t]/Q_n(t)\mathbb{F}[t]$, $Q_n(t) \in \mathbb{F}[t]$, is asymptotically totally ergodic if and only if every polynomial $P(x) \in (\mathbb{F}[t])[x]$ with $P(0) = 0$ asymptotically equidistributes in a subgroup of $\mathbb{F}[t]/Q_n(t)\mathbb{F}[t]$. We then derive several combinatorial consequences: (1) We establish a power saving bound for the Furstenberg--S\'ark\"ozy theorem over finite fields of fixed characteristic, complementing recent work of Li and Sauermann giving power saving bounds using the polynomial method. (2) We prove an enhancement of the Furstenberg--S\'ark\"ozy theorem guaranteeing many pairs $(x,y)$ with $x \in A$ and $x + P(y) \in B$ whenever $A$ and $B$ are large subsets of a quotient ring $\mathbb{F}[t]/Q(t)\mathbb{F}[t]$ that exhibits a sufficiently high level of approximate total ergodicity and the polynomial $P$ satisfies a rather general condition related to equidistributional properties studied in [Bergelson--Leibman, 2016]. We also show that, in the absence of asymptotic total ergodicity and an equidistribution condition on P, one cannot hope for such a refinement of the Furstenberg--S\'ark\"ozy theorem. (3) We produce new families of examples of partition regular polynomial equations over finite fields.
Ethan Ackelsberg, Vitaly Bergelson
2023-02-28T21:59:57Z
http://arxiv.org/abs/2303.00100v2
Asymptotic total ergodicity for actions of \(\mathbb{F}[t]\) and Furstenberg-Sarkozy-type theorems over finite fields and rings

###### Abstract.

A version of the Furstenberg-Sarkozy theorem in the setting of \(\mathbb{F}[t]\), where \(\mathbb{F}\) is a finite field, states that for any subset \(E\subseteq\mathbb{F}[t]\) of positive density and any polynomial \(P(y)\in(\mathbb{F}[t])[y]\) with \(P(0)=0\), the difference set \(E-E\) contains an element \(P(y)\) for some \(y\in\mathbb{F}[t]\setminus\{0\}\). A corollary of this result is that for any \(\delta>0\), any subset \(A\) of the quotient ring \(\mathbb{F}[t]/Q(t)\mathbb{F}[t]\) of density at least \(\delta\) will contain distinct elements \(a,b\in A\) with \(b-a=P(y)\) for some \(y\), whenever \(Q\) is of sufficiently high degree. In this paper, we obtain refinements of this result via the phenomenon of _asymptotic total ergodicity_ for the quotient rings \(\mathbb{F}[t]/Q(t)\mathbb{F}[t]\). (This phenomenon of asymptotic total ergodicity is analogous to the phenomenon explored in [1] in the framework of modular rings \(\mathbb{Z}/N\mathbb{Z}\).) In particular, we show that if the minimal degree of the irreducible factors of \(Q_{n}\) diverges to infinity, then for a wide class of polynomials \(P(y)\in(\mathbb{F}[t])[y]\), the following refinement of the Furstenberg-Sarkozy theorem holds: for any \(\delta>0\), for any two subsets \(A,B\subseteq\mathbb{F}[t]/Q_{n}(t)\mathbb{F}[t]\) with \(|A||B|\geq\delta|\mathbb{F}[t]/Q_{n}(t)\mathbb{F}[t]|^{2}\) and \(n\) sufficiently large, there exist \(x,y\in\mathbb{F}[t]/Q_{n}(t)\mathbb{F}[t]\) such that \(x\in A\) and \(x+P(y)\in B\). Moreover, the values \(x,y\in\mathbb{F}[t]/Q_{n}(t)\mathbb{F}[t]\) for which \(x\in A\) and \(x+P(y)\in B\) obey the "correct" statistical behavior as \(n\to\infty\):
\[\frac{\big{|}\big{\{}(x,y)\in(\mathbb{F}[t]/Q_{n}(t)\mathbb{F}[t])^{2}:x\in A,\ x+P(y)\in B\big{\}}\big{|}}{|\mathbb{F}[t]/Q_{n}(t)\mathbb{F}[t]|^{2}}=\frac{|A||B|}{|\mathbb{F}[t]/Q_{n}(t)\mathbb{F}[t]|^{2}}+o_{n\to\infty}(1).\]
The class of polynomials \(P(y)\) for which this result holds is characterized by equidistributional behavior previously studied in [1]. We also show that, in the absence of asymptotic total ergodicity and a natural equidistribution condition, one cannot hope for such a refinement to the Furstenberg-Sarkozy theorem. Another consequence of our results (in the special case that the polynomials \(Q_{n}\) are irreducible) is a power saving bound for the Furstenberg-Sarkozy theorem over finite fields of fixed characteristic. This complements recent work of Li and Sauermann [1], where a different power saving bound is obtained using the polynomial method.

Key words and phrases: Finite fields, Furstenberg-Sarkozy theorem, total ergodicity, equidistribution

2020 Mathematics Subject Classification: 11B30 (11T06, 37A25)

###### Contents

* 1 Introduction
* 2 Asymptotic total ergodicity
* 3 Asymptotic projection theorem
* 4 Power saving bounds for the Furstenberg-Sarkozy theorem in characteristic \(p\)
* 5 Proof of equivalences

## 1. Introduction

Our starting point is the following classical result:

**Theorem 1.1** (Furstenberg-Sarkozy theorem [10, 11]).: _Let \(P(x)\in\mathbb{Z}[x]\) be a nonzero polynomial with \(P(0)=0\). For any \(\delta\in(0,1)\), there exists \(N=N(P,\delta)\in\mathbb{N}\) such that any subset \(A\subseteq\{1,\ldots,N\}\) of size \(|A|\geq\delta N\) contains distinct elements \(a,b\in A\) with \(b-a=P(n)\) for some \(n\in\mathbb{Z}\)._

The assumption that \(P(0)=0\) can be weakened as follows.
Call a polynomial \(P\)_intersective_ if \(P\) has a root mod \(q\) for every \(q\in\mathbb{N}\). It follows from the work of Kamae and Mendes France [12, Example 3] that the conclusion of Theorem 1.1 holds if and only if \(P\) is intersective. A version of the Furstenberg-Sarkozy theorem holds also in a finite characteristic setting. The following result is a consequence of [1, Theorem 9.2] together with (a corrected version of) the remark1 following Theorem 9.5 in [1]: Footnote 1: In the remark following Theorem 9.5 in [1], intersective polynomials are defined as polynomials \(P(x)\in(\mathbb{F}[t])[x]\) such that for any finite index subgroup \(\Lambda\leq(\mathbb{F}[t],+)\), there exists \(m\in\mathbb{F}[t]\) such that \(P(mn)\in\Lambda\) for every \(n\in\Lambda\). The definition of intersective given in item (i) of Theorem 1.2 is different and deals with a wider class of polynomials but is the correct notion in order to get the desired “if and only if” conclusion. An example of an intersective polynomial that does not fit the condition in [1] is \(P(x)=x+t\). There is no multiple \(m\in\mathbb{F}[t]\) for which \(P(mn)\) always belongs to the subgroup \(t^{2}\mathbb{F}[t]\) of index \(|\mathbb{F}|^{2}\). However, \(P(-t)=0\), so \(P\) is intersective (according to our definition). **Theorem 1.2**.: _Let \(\mathbb{F}\) be a finite field with \(q\) elements. The following are equivalent for a polynomial \(P(x)\in(\mathbb{F}[t])[x]\):_ 1. \(P\) _is_ intersective_: for any_ \(Q(t)\in\mathbb{F}[t]\setminus\{0\}\)_, there exists_ \(n\in\mathbb{F}[t]\) _such that_ \(P(n)\equiv 0\pmod{Q}\)_;_ 2. _for any_ \(\delta\in(0,1)\)_, there exists_ \(N=N(P,\delta)\in\mathbb{N}\) _such that any subset_ \(A\subseteq\mathbb{F}[t]\) _of elements of degree_ \(<N\) _with_ \(|A|\geq\delta q^{N}\) _contains distinct_ \(a,b\in A\) _with_ \(b-a=P(n)\) _for some_ \(n\in\mathbb{F}[t]\)_._ A consequence of Theorem 1.2 is a Furstenberg-Sarkozy type theorem over large finite fields: **Corollary 1.3**.: _Let \(p\) be a prime number, and let \(P(x)\in(\mathbb{F}_{p}[t])[x]\) be an intersective polynomial. Then for any \(\delta\in(0,1)\), there exists \(N=N(P,\delta)\in\mathbb{N}\) such that if \(k\geq N\) and \(A\subseteq\mathbb{F}_{p^{k}}\) with \(|A|\geq\delta p^{k}\), then \(A\) contains distinct \(a,b\in A\) with \(b-a=P(n)\) for some \(n\in\mathbb{F}_{p^{k}}\)._ Recent work of Li and Sauermann [14], building on earlier work of Green [1] using the Croot-Lev-Pach [12] polynomial method, establishes quantitative improvements of Theorem 1.2 and Corollary 1.3 under the assumption \(P(0)=0\). **Theorem 1.4** ([14], Theorem 1.4 and Corollary 1.5).: _For \(q,d\in\mathbb{N}\), let_ \[t_{q,d}=\inf_{0<x<1}\frac{1+x+\cdots+x^{q-1}}{x^{\frac{1}{2}(q-1)(1-1/(dd^{ \prime}))}},\] _where_ \[d^{\prime}=\min\{d,(q-1)(1+\log_{q}d)\}.\] 1. _Fix_ \(q\) _and a polynomial_ \(P(x)\in\mathbb{F}_{q}[x]\) _of degree_ \(d\) _with_ \(P(0)=0\)_. If_ \(A\subseteq\mathbb{F}_{q}[t]\) _is a set of polynomials of degree less than_ \(N\) _and_ \(A\) _does not contain distinct_ \(a,b\in A\) _with_ \(b-a=P(x)\) _for some_ \(x\in\mathbb{F}_{q}[t]\)_, then_2__ Footnote 2: The notation \(a(N)\ll b(N)\) means that there is a constant \(C>0\) such that \(a(N)\leq Cb(N)\) for all \(N\in\mathbb{N}\). Subscripts on \(\ll\) denote on which parameters the constant \(C\) depends. \[|A|\ll_{q,d}t_{q,d}^{N}.\] 2. _Fix a prime_ \(p\) _and a polynomial_ \(P(x)\in\mathbb{F}_{p}[x]\) _of degree_ \(d\) _with_ \(P(0)=0\)_. 
_If_ \(A\subseteq\mathbb{F}_{p^{k}}\) _does not contain distinct_ \(a,b\in A\) _with_ \(b-a=P(x)\) _for some_ \(x\in\mathbb{F}_{p^{k}}\)_, then_
\[|A|\ll_{p,d}t_{p,d}^{k}.\]

In this paper, we produce power saving bounds for the Furstenberg-Sarkozy theorem over finite fields by a different method. The bounds we obtain are different from those in Theorem 1.4. In some cases, our bounds are stronger, while in other cases, ours are weaker; see Remark 1.13 below. Our approach draws inspiration from infinitary sources. These are: equidistributional results for polynomial sequences defined over \(\mathbb{F}[t]\) and the phenomenon of _asymptotic total ergodicity_ for actions of \(\mathbb{F}[t]\). As a consequence of our approach, our results apply in a more general setting than finite fields of characteristic \(p\), including quotient rings \(\mathbb{F}[t]/Q(t)\mathbb{F}[t]\) under some conditions on \(Q(t)\in\mathbb{F}[t]\).

The phenomenon of asymptotic total ergodicity for \(\mathbb{Z}\)-actions was previously explored in [1], where similar Furstenberg-Sarkozy-type results are proved in the setting of modular rings \(\mathbb{Z}/N\mathbb{Z}\) when all prime factors of \(N\) are sufficiently large. The results of this paper are natural analogues of the results in [1]. Where appropriate, we note the correspondences between our setting (dealing with \(\mathbb{F}[t]\)-actions and quotient rings \(\mathbb{F}[t]/Q(t)\mathbb{F}[t]\)) and the more familiar setting of \(\mathbb{Z}\)-actions and modular rings \(\mathbb{Z}/N\mathbb{Z}\). However, some caution is needed, as our finite characteristic setting introduces new complications. Namely, the distributional behavior of a polynomial whose degree \(d\) exceeds the characteristic \(p\) is more sophisticated than the behavior of polynomials over \(\mathbb{Z}\) (see, e.g., Theorem 1.9 below), and this creates additional difficulties in our analysis.

Before stating our results, we fix some notation. Let \(\mathbb{F}\) be a finite field of characteristic \(p\). We denote the set of monic polynomials over \(\mathbb{F}\) by \(\mathbb{F}[t]^{+}\). Every element \(Q(t)\in\mathbb{F}[t]^{+}\) has a unique factorization (up to reordering) into monic irreducibles \(Q(t)=Q_{1}(t)^{s_{1}}\ldots Q_{r}(t)^{s_{r}}\). We denote the quotient ring \(\mathbb{F}[t]/Q(t)\mathbb{F}[t]\) by \(\mathbb{F}[t]_{Q}\), and we have an isomorphism
\[\mathbb{F}[t]_{Q}\cong\mathbb{F}[t]_{Q_{1}^{s_{1}}}\times\cdots\times\mathbb{F}[t]_{Q_{r}^{s_{r}}}\tag{1.1}\]
by the Chinese remainder theorem. Note that when \(Q\) is irreducible, \(\mathbb{F}[t]_{Q}\) is a finite field of characteristic \(p\). Moreover, any finite field of characteristic \(p\) can be obtained as \(\mathbb{F}_{p}[t]_{Q}\) for some irreducible element \(Q(t)\in\mathbb{F}_{p}[t]^{+}\). The decomposition of the ring \(\mathbb{F}[t]_{Q}\) for general \(Q(t)\in\mathbb{F}[t]^{+}\) given in (1.1) is parallel to the situation with modular rings, where the Chinese remainder theorem gives an isomorphism
\[\mathbb{Z}/N\mathbb{Z}\cong\mathbb{Z}/p_{1}^{s_{1}}\mathbb{Z}\times\cdots\times\mathbb{Z}/p_{r}^{s_{r}}\mathbb{Z}\]
for \(N=p_{1}^{s_{1}}\ldots p_{r}^{s_{r}}\). We define an absolute value on \(\mathbb{F}[t]\) by \(\left|c_{d}t^{d}+\cdots+c_{1}t+c_{0}\right|=|\mathbb{F}|^{d}\) if \(c_{d}\neq 0\). Equivalently, for any \(Q(t)\in\mathbb{F}[t]\), \(|Q|\) is the cardinality of the quotient ring \(\mathbb{F}[t]_{Q}\). For \(Q=Q_{1}^{s_{1}}\ldots Q_{r}^{s_{r}}\in\mathbb{F}[t]^{+}\), we set \(\operatorname{lpf}(Q)=\min_{1\leq i\leq r}|Q_{i}|\) to be the size of the least prime factor of \(Q\).
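For instance, over \(\mathbb{F}=\mathbb{F}_{2}\), the monic polynomial \(Q(t)=t^{3}+t\) factors as \(t\,(t+1)^{2}\), so
\[|Q|=2^{3}=8=|\mathbb{F}_{2}[t]_{Q}|,\qquad\operatorname{lpf}(Q)=\min\{|t|,\,|t+1|\}=2.\]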
For a finite set \(S\) and a function \(f:S\to\mathbb{C}\), we write
\[\mathop{\mathbb{E}}_{x\in S}f(x)=\frac{1}{|S|}\sum_{x\in S}f(x).\]
For \(r\geq 1\), we define the \(L^{r}\)-norm on \(S\) by
\[\left\|f\right\|_{L^{r}(S)}:=\left(\mathop{\mathbb{E}}_{x\in S}|f(x)|^{r}\right)^{1/r}.\]

Recall that a measure-preserving \(\mathbb{Z}\)-system3 \((X,\mathcal{X},\mu,T)\) is _totally ergodic_ if for every \(m\in\mathbb{Z}\setminus\{0\}\), \(T^{m}\) is ergodic. By analogy, we say that a measure-preserving \(\mathbb{F}[t]\)-system \(\big{(}X,\mathcal{X},\mu,(T_{n})_{n\in\mathbb{F}[t]}\big{)}\) is _totally ergodic_ if for every \(m\in\mathbb{F}[t]\setminus\{0\}\), the action \((T_{mn})_{n\in\mathbb{F}[t]}\) is ergodic. Our first result provides a finitization of the phenomenon of total ergodicity. A similar result for \(\mathbb{Z}\)-actions and the quotient rings \(\mathbb{Z}/N\mathbb{Z}\) appears in [1].

Footnote 3: Given an abelian group \(\Gamma\), a _measure-preserving \(\Gamma\)-system_ is a quadruple \((X,\mathcal{X},\mu,(T_{g})_{g\in\Gamma})\), where \((X,\mathcal{X},\mu)\) is a probability space, and \((T_{g})_{g\in\Gamma}\) is an action of \(\Gamma\) on \((X,\mathcal{X},\mu)\) by measure-preserving transformations.

**Theorem 1.5**.: _Let \((Q_{n})_{n\in\mathbb{N}}\) be a sequence in \(\mathbb{F}[t]^{+}\). The following are equivalent:_

1. _The sequence of quotient rings \(\mathbb{F}[t]_{Q_{n}}\) is asymptotically totally ergodic: for any \(m\in\mathbb{F}[t]\setminus\{0\}\),_
\[\sup_{f_{n}:\mathbb{F}[t]_{Q_{n}}\to\mathbb{D}}\left\|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q_{n}}}f_{n}(x+my)-\mathop{\mathbb{E}}_{z\in\mathbb{F}[t]_{Q_{n}}}f_{n}(z)\right\|_{L^{2}(\mathbb{F}[t]_{Q_{n}})}\xrightarrow[n\to\infty]{}0.\]
2. \(\operatorname{lpf}(Q_{n})\to\infty\)_._

We prove Theorem 1.5 in Section 2. Using the spectral theorem for unitary actions of \(\mathbb{F}[t]\) and equidistributional results for polynomial sequences over \(\mathbb{F}[t]\), one may show the following:

**Theorem 1.6**.: _Let \(P(y)\in(\mathbb{F}[t])[y]\) be a polynomial with zero constant term. Then for any totally ergodic system \(\left(X,\mathcal{X},\mu,(T_{n})_{n\in\mathbb{F}[t]}\right)\), any Folner sequence4 \((\Phi_{N})_{N\in\mathbb{N}}\), and any \(f\in L^{2}(\mu)\),_

Footnote 4: A _Folner sequence_ in \(\mathbb{F}_{p}[t]\) is a sequence \((\Phi_{N})_{N\in\mathbb{N}}\) of finite subsets of \(\mathbb{F}_{p}[t]\) such that, for any \(n\in\mathbb{F}_{p}[t]\),
\[\lim_{N\to\infty}\frac{|(\Phi_{N}+n)\triangle\Phi_{N}|}{|\Phi_{N}|}=0.\]
Examples include \(\Phi_{N}=\left\{c_{N-1}t^{N-1}+\cdots+c_{1}t+c_{0}:c_{i}\in\mathbb{F}_{p}\right\}\) (the set of all polynomials over \(\mathbb{F}_{p}\) of degree \(<N\)) and its shift \(\Phi_{N}+t^{N}=\left\{t^{N}+c_{N-1}t^{N-1}+\cdots+c_{1}t+c_{0}:c_{i}\in\mathbb{F}_{p}\right\}\) (the set of all monic polynomials of degree \(N\)).
\[\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}T_{P(n)}f=\pi_{P}(f),\]
_where \(\pi_{P}\) is the orthogonal projection onto the space_
\[\left\{g\in L^{2}(\mu):T_{P(n)}g=g\text{ for all }n\in\mathbb{F}[t]\right\}.\]

**Remark 1.7**.: (1) A similar result with \(\mathbb{F}[t]\) replaced by a countably infinite field was obtained by Larick in [10, Theorem 1.1.1].

(2) For \(\mathbb{Z}\)-actions, the corresponding version of Theorem 1.6 is simpler. Namely, for any polynomial \(P(y)\in\mathbb{Z}[y]\), any totally ergodic \(\mathbb{Z}\)-system \((X,\mathcal{X},\mu,T)\), any Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\) in \(\mathbb{Z}\), and any \(f\in L^{2}(\mu)\),
\[\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}T^{P(n)}f=\int_{X}f\ d\mu.\]
The presence of the projection \(\pi_{P}\) in Theorem 1.6 rather than \(\int_{X}f\ d\mu\) is a reflection of the more intricate distributional behavior of polynomials over \(\mathbb{F}[t]\). The situation where the projection \(\pi_{P}f\) is equal to \(\int_{X}f\ d\mu\) can be characterized by an equidistributional assumption on the polynomial \(P(y)\in(\mathbb{F}[t])[y]\); see Proposition 1.14 below.

Our goal is to prove an asymptotic version of Theorem 1.6 and to deduce from it new combinatorial results over finite fields (and rings of the form \(\mathbb{F}[t]_{Q}\)). Before formulating our result, we sketch a proof of Theorem 1.6, which will serve as a model for our finitary results. By the spectral theorem for actions of \(\mathbb{F}[t]\) by unitary operators on a Hilbert space, we may work with the Hilbert space \(\mathcal{H}=L^{2}\left(\widehat{\mathbb{F}[t]},\sigma\right)\), where \(\sigma\) is a positive Borel measure on the dual group \(\widehat{\mathbb{F}[t]}\), and the unitary action \((T_{n})_{n\in\mathbb{F}[t]}\) is represented by the multiplication operators \((U_{n}h)(\chi)=\chi(n)h(\chi)\) for \(h\in\mathcal{H}\) and \(\chi\in\widehat{\mathbb{F}[t]}\).

Rather than working with \(\widehat{\mathbb{F}[t]}\) as the abstract dual group of \(\mathbb{F}[t]\), it will be convenient to work with the dual group in a more concrete form. Let \(\mathbb{F}(t)\) be the field of rational functions \(\mathbb{F}(t)=\left\{\frac{m}{n}:m,n\in\mathbb{F}[t],n\neq 0\right\}\). Extending the absolute value we defined on \(\mathbb{F}[t]\) to \(\mathbb{F}(t)\) by \(\left|\frac{m}{n}\right|=\frac{|m|}{|n|}\), the completion of \(\mathbb{F}(t)\) is the field \(\mathbb{F}((t^{-1}))=\left\{\sum_{j=-\infty}^{N}c_{j}t^{j}:N\in\mathbb{Z},c_{j}\in\mathbb{F}\right\}\). We think of \(\mathbb{F}(t)\) and \(\mathbb{F}((t^{-1}))\) as natural analogues of the rational numbers \(\mathbb{Q}\) and the real numbers \(\mathbb{R}\), respectively.

In the characteristic zero setting, the dual group of the integers is isomorphic to the torus \(\mathbb{T}=\mathbb{R}/\mathbb{Z}\). A similar result is true in our setting: the dual group \(\widehat{\mathbb{F}[t]}\) is isomorphic to \(\mathbb{F}((t^{-1}))/\mathbb{F}[t]\). In particular, every character \(\chi:\mathbb{F}[t]\to\mathbb{C}\) takes the form
\[\chi(n)=e(nx)\]
for some \(x\in\mathbb{F}((t^{-1}))/\mathbb{F}[t]\), where
\[e\left(\sum_{j=-\infty}^{N}c_{j}t^{j}\right)=\chi_{0}(c_{-1})\]
and \(\chi_{0}\) is a fixed nontrivial character on \(\mathbb{F}\). A word of caution: with the objects discussed above, we have an isomorphism \(\mathbb{F}((t^{-1}))\cong\mathbb{F}[t]\oplus\left(\mathbb{F}((t^{-1}))/\mathbb{F}[t]\right)\). The corresponding statement in the more familiar characteristic zero setting is not true: \(\mathbb{R}\not\cong\mathbb{Z}\oplus\mathbb{T}\).
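To illustrate the pairing \(e(nx)\), take \(\mathbb{F}=\mathbb{F}_{p}\) with one standard choice of character, \(\chi_{0}(c)=e^{2\pi ic/p}\), and let \(x=t^{-2}\) and \(n=a_{1}t+a_{0}\). Then
\[nx=a_{1}t^{-1}+a_{0}t^{-2},\qquad e(nx)=\chi_{0}(a_{1})=e^{2\pi ia_{1}/p},\]
so the character corresponding to \(x=t^{-2}\) reads off the degree-one coefficient of \(n\).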
To prove Theorem 1.6, it then suffices to show: for \(\sigma\)-a.e. \(x\in\mathbb{F}((t^{-1}))/\mathbb{F}[t]\),
\[\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}e(P(n)x)=\begin{cases}1,&\text{if $e(P(n)x)=1$ for every $n\in\mathbb{F}[t]$;}\\ 0,&\text{otherwise.}\end{cases}\]
For \(\mathbb{Z}\)-actions, total ergodicity is equivalent to the absence of rational spectrum. Similarly, the assumption that \((T_{n})_{n\in\mathbb{F}[t]}\) is a totally ergodic \(\mathbb{F}[t]\)-action means that
\[\sigma\left((\mathbb{F}(t)/\mathbb{F}[t])\setminus\{0\}\right)=0.\]
Therefore, Theorem 1.6 reduces to studying equidistribution of the sequences \(\left(P(n)\alpha\right)_{n\in\mathbb{F}[t]}\) for irrational \(\alpha\in\mathbb{F}((t^{-1}))\setminus\mathbb{F}(t)\). A general Weyl-type equidistribution theorem for polynomials over \(\mathbb{F}[t]\) was established in [2], and we can use the result to finish the proof of Theorem 1.6. First, we need some definitions for polynomials in finite characteristic:

**Definition 1.8**.: 

1. A polynomial \(P(y)\in\mathbb{F}((t^{-1}))[y]\) is called _separable_ if \(P(y)=a_{0}+\sum_{i=1}^{k}a_{i}y^{r_{i}}\) and \(p\nmid r_{i}\) for \(i\in\{1,\ldots,k\}\).
2. A polynomial \(\eta(y)\in\mathbb{F}((t^{-1}))[y]\) is _additive_ if \(\eta(x+y)=\eta(x)+\eta(y)\) for any \(x,y\in\mathbb{F}((t^{-1}))\).
3. For \(u\in\mathbb{F}((t^{-1}))\) and \(f:\mathbb{F}((t^{-1}))\to\mathbb{F}((t^{-1}))\), define the differencing operator \(\partial_{u}f(x)=f(x+u)-f(x)\). Then define \(\partial_{u_{1},\ldots,u_{k}}\) inductively by \(\partial_{u_{1},\ldots,u_{k}}=\partial_{u_{k}}\partial_{u_{1},\ldots,u_{k-1}}\). The _derivational degree_ (abbreviated d-deg) of a polynomial \(P(y)\in\mathbb{F}((t^{-1}))[y]\) is the minimum \(d\geq 0\) such that \(\partial_{u_{1},\ldots,u_{d+1}}P(y)=0\) for any \(u_{1},\ldots,u_{d+1},y\in\mathbb{F}((t^{-1}))\).

Note that \(\mathrm{d}\)-\(\deg y^{p}=1\), since \((u+v)^{p}=u^{p}+v^{p}\) in characteristic \(p\). More generally, for \(r=\sum_{i=0}^{n}a_{i}p^{i}\) with \(a_{i}\in\{0,\ldots,p-1\}\), we have \(\mathrm{d}\)-\(\deg y^{r}=\sum_{i=0}^{n}a_{i}\). For instance, if \(p=3\) and \(r=7=2\cdot 3+1\), then \(\mathrm{d}\)-\(\deg y^{7}=2+1=3\), even though \(\deg y^{7}=7\). Any polynomial can be written in the form \(P(y)=a_{0}+\sum_{i=1}^{n}\eta_{i}(y^{r_{i}})\), where \(a_{0}\in\mathbb{F}((t^{-1}))\), \(\eta_{1},\ldots,\eta_{n}\in\mathbb{F}((t^{-1}))[y]\) are additive polynomials, and \(y^{r_{i}}\) are distinct separable monomials.

We say that \(a:\mathbb{F}_{p}[t]\to\mathbb{F}_{p}((t^{-1}))\) is _well-distributed_ mod \(\mathbb{F}_{p}[t]\) if
\[\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}f(a(n))=\int_{\mathbb{F}_{p}((t^{-1}))/\mathbb{F}_{p}[t]}f\ dm\]
for every continuous function \(f:\mathbb{F}_{p}((t^{-1}))/\mathbb{F}_{p}[t]\to\mathbb{C}\) and every Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\) in \(\mathbb{F}_{p}[t]\). A more refined notion of equidistribution is as follows. A function \(a:\mathbb{F}_{p}[t]\to\mathbb{F}_{p}((t^{-1}))\) is _well-distributed mod \(\mathbb{F}_{p}[t]\) in a subgroup \(H\subseteq\mathbb{F}_{p}((t^{-1}))/\mathbb{F}_{p}[t]\)_ if
\[\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}f(a(n))=\int_{H}f\ dm_{H}\]
for every continuous function \(f:\mathbb{F}_{p}((t^{-1}))/\mathbb{F}_{p}[t]\to\mathbb{C}\) and every Folner sequence \((\Phi_{N})_{N\in\mathbb{N}}\) in \(\mathbb{F}_{p}[t]\).
For a subgroup \(H\) and a finite set \(F\subseteq\mathbb{F}_{p}((t^{-1}))/\mathbb{F}_{p}[t]\), we say that \(a:\mathbb{F}_{p}[t]\to\mathbb{F}_{p}((t^{-1}))\) is _well-distributed in the components of \(H+F\)_ if there exists \(m\in\mathbb{F}_{p}[t]\setminus\{0\}\) such that, for every \(k\in\mathbb{F}_{p}[t]\), the sequence \(\left(a(mn+k)\right)_{n\in\mathbb{F}_{p}[t]}\) is well-distributed in \(H+x\) for some \(x\in F\).

**Theorem 1.9** ([1], Theorem 0.3).: _An additive polynomial \(\eta(y)\in(\mathbb{F}((t^{-1})))[y]\) is well distributed in the subgroup5 \(\overline{\eta(\mathbb{F}[t])}=\mathcal{F}(\eta)+\eta(K)\), where \(K\subseteq\mathbb{F}((t^{-1}))/\mathbb{F}[t]\) is a finite subgroup. For any polynomial \(P(y)=\alpha_{0}+\sum_{i=1}^{n}\eta_{i}(y^{r_{i}})\), the orbit closure \(\mathcal{O}(P)=\overline{P(\mathbb{F}[t])}\) is of the form \(\mathcal{F}(P)+P(K)\), where \(\mathcal{F}(P)=\sum_{i=1}^{n}\mathcal{F}(\eta_{i})\) and \(K\) is a finite subset of \(\mathbb{F}[t]\), and \(P(y)\) is well-distributed in the components \(\mathcal{F}(P)+P(k)\), \(k\in K\)._

Footnote 5: In [1], the subgroup \(\mathcal{F}(\eta)\) is called a \(\Phi\)-_subtorus of level_ \(\leq\log_{p}d\).

For an additive polynomial \(\eta(y)\in(\mathbb{F}[t])[y]\) and irrational \(\alpha\in\mathbb{F}((t^{-1}))\setminus\mathbb{F}(t)\), the orbit closure \(\overline{\left\{\eta(y)\alpha:y\in\mathbb{F}[t]\right\}}\) is equal to the subtorus \(\mathcal{F}(\eta\alpha)\) rather than a union of finitely many shifts of \(\mathcal{F}(\eta\alpha)\). It follows that for \(P(y)\in(\mathbb{F}[t])[y]\) with \(P(0)=0\) and \(\alpha\in\mathbb{F}((t^{-1}))\setminus\mathbb{F}(t)\), the sequence \(\left(P(n)\alpha\right)_{n\in\mathbb{F}[t]}\) is well-distributed in the subtorus \(\mathcal{F}(P)\); see [1, Theorem 8.1] for more details. Thus, for any \(\alpha\in\mathbb{F}((t^{-1}))\setminus\mathbb{F}(t)\),
\[\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}e\left(P(n)\alpha\right)=\begin{cases}1,&\text{if $e\left(P(n)\alpha\right)=1$ for every $n\in\mathbb{F}[t]$};\\ 0,&\text{otherwise}.\end{cases}\]
This completes the proof of Theorem 1.6.

We can now state our main result, which is an asymptotic version of Theorem 1.6 with quantitative bounds:

**Theorem 1.10**.: _Let \(P(y)\in(\mathbb{F}[t])[y]\) be a nonconstant polynomial of degree \(d\) and derivational degree \(k\). Write \(P(y)=a_{0}+\sum_{i=1}^{n}\eta_{i}(y^{r_{i}})\). Let \(H_{i}=\eta_{i}\left(\mathbb{F}[t]\right)\) and \(H=\sum_{i=1}^{n}H_{i}\). Then for any \(Q(t)\in\mathbb{F}[t]^{+}\) and any \(f:\mathbb{F}[t]_{Q}\to\mathbb{C}\),_
\[\left\|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}f(x+P(y))-\mathop{\mathbb{E}}_{z\in H_{Q}}f(x+a_{0}+z)\right\|_{L^{2}(\mathbb{F}[t]_{Q})}\leq\left(p^{2\lfloor\log_{p}d\rfloor}\frac{k-1}{\operatorname{lpf}(Q)}\right)^{1/2^{k-1}}\|f\|_{L^{2}(\mathbb{F}[t]_{Q})}\,,\]
_where \(H_{Q}=\{z\ (\bmod\,Q):z\in H\}\)._

**Remark 1.11**.: (1) In the case \(a_{0}=P(0)=0\), we get that the average \(\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}f(x+P(y))\) is approximated in \(L^{2}(\mathbb{F}[t]_{Q})\) by the function
\[\widetilde{f}(x)=\mathop{\mathbb{E}}_{z\in H_{Q}}f(x+z),\]
which is the projection of \(f\) onto the space of \(H_{Q}\)-invariant functions, so Theorem 1.10 can indeed be seen as a finitary version of Theorem 1.6.

(2) The phenomenon encompassed by Theorem 1.10 is simpler in the low degree situation (\(d<p\)).
In the context of finite fields (\(\mathbb{F}[t]_{Q}\) with \(Q\) irreducible in our notation), a closely related result was previously established in [1, Lemma 3]. In particular, it is shown that for any \(q\), any subsets \(A,B\subseteq\mathbb{F}_{q}\), and any polynomial \(P(y)\in\mathbb{F}_{q}[y]\) with degree \(d<p\), \[\left|\mathop{\mathbbm{E}}_{x,y\in\mathbb{F}_{q}}\mathbbm{1}_{A}(x)\mathbbm{1}_ {B}(x+P(y))-\frac{|A||B|}{q^{2}}\right|\leq\left(\frac{q^{d-1}-(q-1)^{d-1}}{q ^{d-1}}\right)^{1/2^{d-1}}\ll q^{-1/2^{d-1}};\] see the statement of Lemma 3 in [1] and the formula for \(\mathcal{E}(q,d)\) at the top of page 713. As a consequence of Theorem 1.10, we obtain the following power savings for the Furstenberg-Sarkozy theorem: **Corollary 1.12**.: _Let \(P(y)\in(\mathbb{F}[t])[y]\) be an intersective polynomial of degree \(d\) and derivational degree \(k\). Let \(Q(t)\in\mathbb{F}[t]^{+}\). If \(A\subseteq\mathbb{F}[t]_{Q}\) does not contain distinct \(a,b\in A\) with \(b-a=P(y)\) for some \(y\in\mathbb{F}[t]_{Q}\), then_ \[|A|\ll_{P}|Q|\cdot\operatorname{lpf}(Q)^{-1/2^{k-1}}.\] _In particular, if \(Q\) is irreducible (so that \(\mathbb{F}[t]_{Q}\) is a field with \(|Q|\) elements), then_ \[|A|\ll_{P}|Q|^{1-1/2^{k-1}}.\] **Remark 1.13**.: (1) If we restrict to \(\operatorname{lpf}(Q)\) being sufficiently large (so that \(P(y)\) does not reduce to the zero polynomial \(\operatorname{mod}\,Q\)), then the implicit constant in the conclusion of Corollary 1.12 depends only on the degree \(d\) and the derivational degree \(k\). (2) The bound given in Theorem 1.4 is difficult to compute in general and to compare with the bound in Corollary 1.12. We can, however, highlight some general features of the different bounds. The power savings obtained in Corollary 1.12 depends only on the derivational degree \(k\) of the polynomial \(P\) and applies to all intersective polynomials with coefficients in \(\mathbb{F}[t]\). By contrast, the Li-Sauermann bound depends on the degree \(d\) of the polynomial \(P\) and on the characteristic \(p\) of the field \(\mathbb{F}_{q}\) and applies only to polynomials with zero constant term and with coefficients in \(\mathbb{F}_{p}\). The disadvantage of our bound is that the power saving \(\frac{1}{2^{k-1}}\) decays exponentially with the derivational degree. How the quantity \(t_{p,d}\) appearing in Theorem 1.4 depends on \(p\) and \(d\) is not immediately clear from the definition. However, a related bound due to Green [1, Theorem 1.2] (which Li and Sauermann optimize) gives power savings of \[c^{\prime}(p,d)=\frac{1}{2d^{2}(p-1)^{2}(1+\log_{p}d)^{2}\log p}\] for a degree \(d\) polynomial \(P(y)\in\mathbb{F}_{p}[y]\) with \(P(0)=0\). That is, for any \(q=p^{k}\), the largest subset of \(\mathbb{F}_{q}\) with no nontrivial pattern \(\{x,x+P(y)\}\) has cardinality \[|A|\ll_{p,d}q^{1-c^{\prime}(p,d)}.\] Therefore, for fixed \(p\), there is a regime of sufficiently high degree polynomials for which the Li-Sauermann bound beats ours. On the other hand, the quantity \(c^{\prime}(p,d)\) decays as \(p\to\infty\), so the bound in Corollary 1.12 will outperform this bound in sufficiently high characteristic (for fixed degree \(d\)). Thus, neither of the power saving bounds is universally better than the other, and both methods have their advantages and disadvantages. 
The main goal of our work is not to produce the best possible power saving bounds but to provide a heuristic backing for why any power saving bound should hold at all and to place the Furstenberg-Sarkozy theorem over finite fields within the appropriate general framework.

Combining Theorems 1.9 and 1.10, we may deduce several finitary combinatorial statements from an infinitary statement about equidistribution. Say that a polynomial \(P(y)\in(\mathbb{F}[t])[y]\) is _good for irrational equidistribution_ if \((P(n)\alpha)_{n\in\mathbb{F}[t]}\) is well-distributed for every irrational \(\alpha\in\mathbb{F}((t^{-1}))\setminus\mathbb{F}(t)\).

**Proposition 1.14**.: _A polynomial \(P(y)\) is good for irrational equidistribution if and only if for any totally ergodic system \(\left(X,\mathcal{X},\mu,(T_{n})_{n\in\mathbb{F}[t]}\right)\), any Følner sequence \((\Phi_{N})_{N\in\mathbb{N}}\) in \(\mathbb{F}[t]\), and any \(f\in L^{2}(\mu)\),_ \[\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}T_{P(n)}f=\int_{X}f\ d\mu.\]

Proposition 1.14 can be proved along the lines of the proof of Theorem 1.6 outlined earlier, using the spectral theorem for unitary actions of \(\mathbb{F}[t]\). For the "only if" direction, upon replacing \(T_{n}\) by the multiplication operators \((U_{n}h)(x)=e(nx)h(x)\) on \(L^{2}\left(\mathbb{F}((t^{-1}))/\mathbb{F}[t],\sigma\right)\), we use the fact that \(P(y)\) is good for irrational equidistribution to conclude \[\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}U_{P(n)}h(x)=\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}e(P(n)x)h(x)=\begin{cases}h(0),&\text{if $x=0$};\\ 0,&\text{otherwise}\end{cases}\] in \(L^{2}(\sigma)\). This corresponds to the desired convergence result \[\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}T_{P(n)}f=\int_{X}f\ d\mu.\]

For the "if" direction: suppose \(P(y)\) is not good for irrational equidistribution, and let \(\alpha\in\mathbb{F}((t^{-1}))\setminus\mathbb{F}(t)\) and a Følner sequence \((\Phi_{N})_{N\in\mathbb{N}}\) be such that \(\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}e\left(P(n)\alpha\right)\neq 0\). We then take as our totally ergodic system \(X=\mathbb{F}((t^{-1}))/\mathbb{F}[t]\), \(\mathcal{X}\) the Borel \(\sigma\)-algebra on \(X\), \(\mu=m_{X}\), and \(T_{n}x=x+n\alpha\). For the function \(f(x)=e(x)\), we have \(\int_{X}f(x)\ dx=0\), since \(x\mapsto e(x)\) is a nontrivial character on \(X\), while \[\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}T_{P(n)}f=\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}e(P(n)\alpha)f\neq 0.\]

**Remark 1.15**.: The naive analogue of Proposition 1.14 for \(\mathbb{Z}\)-actions is true. That is, a polynomial \(P(y)\in\mathbb{Z}[y]\) is good for irrational equidistribution (meaning that \((P(n)\alpha)\) is well-distributed mod \(1\) for every irrational \(\alpha\in\mathbb{R}\setminus\mathbb{Q}\)) if and only if for any totally ergodic system \((X,\mathcal{X},\mu,T)\), any Følner sequence \((\Phi_{N})_{N\in\mathbb{N}}\) in \(\mathbb{Z}\), and any \(f\in L^{2}(\mu)\), \[\lim_{N\to\infty}\mathop{\mathbb{E}}_{n\in\Phi_{N}}T^{P(n)}f=\int_{X}f\ d\mu.\] However, this result is far less meaningful in the setting of \(\mathbb{Z}\)-actions, since every nonconstant integer polynomial is good for irrational equidistribution by Weyl's equidistribution theorem. This is far from the case in the setting of \(\mathbb{F}[t]\); see the examples below.

**Example 1.16**.: (1) Every nonconstant separable polynomial is good for irrational equidistribution; see [1, Corollary 0.5].
(2) If \(\eta_{1},\ldots,\eta_{n}\) are additive polynomials such that \(\sum_{i=1}^{n}c_{i}\eta_{i}(y)=ay\) for some \(c_{1},\ldots,c_{n}\in\mathbb{Z}\) and \(a\in\mathbb{F}[t]\), then for any distinct \(r_{1},\ldots,r_{n}\in\mathbb{N}\) not divisible by \(p\), \(P(y)=\sum_{i=1}^{n}\eta_{i}(y^{r_{i}})\) is good for irrational equidistribution. This follows from Theorem 1.9, since the condition \(\sum_{i=1}^{n}c_{i}\eta_{i}(y)=ay\) ensures that the orbit closure \(\sum_{i=1}^{n}\mathcal{F}(\alpha\eta_{i})\) contains the orbit \(\{ay\alpha:y\in\mathbb{F}[t]\}\), which is dense mod \(\mathbb{F}[t]\) for irrational \(\alpha\in\mathbb{F}((t^{-1}))\setminus\mathbb{F}(t)\).

(3) The polynomial \(P(y)=y^{p^{2}}+y^{2p}-y\) is good for irrational equidistribution. This follows from Theorem 1.18 below. Indeed, upon writing \(P(y)=\eta_{1}(y)+\eta_{2}(y^{2})\) with \(\eta_{1}(y)=y^{p^{2}}-y\) and \(\eta_{2}(y)=y^{p}\), we see that \(P(y)\) satisfies condition (iii) of Theorem 1.18 for \(\zeta_{1}(y)=-y\) and \(\zeta_{2}(y)=y^{p}\).

(4) The polynomial \(P(y)=y^{p}\) is not good for irrational equidistribution: for any \(\alpha\) of the form \(\alpha=\beta^{p}\), the orbit closure \(\overline{\{P(y)\alpha:y\in\mathbb{F}[t]\}}\) is contained in the infinite-index subgroup \(\{x^{p}:x\in\mathbb{F}((t^{-1}))/\mathbb{F}[t]\}\subseteq\mathbb{F}((t^{-1}))/\mathbb{F}[t]\).

(5) The polynomial \(P(y)=y^{2p}-y^{2}\) is not good for irrational equidistribution. Write \(P(y)=\eta(y^{2})\) with \(\eta(y)=y^{p}-y\). Then clearly \(P(\mathbb{F}[t])\subseteq\eta(\mathbb{F}[t])\). For any \(\alpha\in\mathbb{F}((t^{-1}))/\mathbb{F}[t]\) of the form \[\alpha=\sum_{j=0}^{\infty}\alpha_{j}t^{-(j+1)}\] satisfying \(\alpha_{pj}=\alpha_{j}\) for \(j\geq 0\), one can check by direct calculation that \(e\left(\eta(y)\alpha\right)=1\) for every \(y\in\mathbb{F}[t]\). Hence, for any such \(\alpha\), \((P(n)\alpha)_{n\in\mathbb{F}[t]}\) is not well-distributed mod \(\mathbb{F}[t]\). Moreover, the set of all such \(\alpha\) is uncountable, so it contains irrational elements.

(6) An additive polynomial \(P(y)=\sum_{i=0}^{k}a_{i}y^{p^{i}}\) is good for irrational equidistribution if and only if \(P(y)=a_{0}y\); see Proposition 5.4.

The following theorem summarizes the main achievements in this paper. In particular, it emphasizes the role of equidistribution properties in obtaining finitary combinatorial results over the rings \(\mathbb{F}[t]_{Q}\). Note that item (v) below strengthens the conclusion of Corollary 1.12 under the assumption that the polynomial \(P(y)\) is good for irrational equidistribution.

**Theorem 1.17**.: _Let \(P(y)\in(\mathbb{F}[t])[y]\) be a nonconstant polynomial. The following are equivalent:_

(i) _for any_ \(Q(t)\in\mathbb{F}[t]^{+}\)_,_ \[\sup_{\|f\|_{L^{2}(\mathbb{F}[t]_{Q})}=1}\left\|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}f(x+P(y))-\mathop{\mathbb{E}}_{z\in\mathbb{F}[t]_{Q}}f(z)\right\|_{L^{2}(\mathbb{F}[t]_{Q})}=o_{\operatorname{lpf}(Q)\to\infty}(1);\]

(ii) _there exist_ \(C_{1},C_{2},\gamma>0\) _such that for any_ \(Q(t)\in\mathbb{F}[t]^{+}\) _with_ \(\operatorname{lpf}(Q)\geq C_{1}\)_, one has_ \[\sup_{\|f\|_{L^{2}(\mathbb{F}[t]_{Q})}=1}\left\|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}f(x+P(y))-\mathop{\mathbb{E}}_{z\in\mathbb{F}[t]_{Q}}f(z)\right\|_{L^{2}(\mathbb{F}[t]_{Q})}\leq C_{2}\cdot\operatorname{lpf}(Q)^{-\gamma};\]
(iii) _there exists_ \(C>0\) _such that if_ \(Q(t)\in\mathbb{F}[t]^{+}\) _and_ \(\operatorname{lpf}(Q)\geq C\)_, then_ \(H+Q\mathbb{F}[t]=\mathbb{F}[t]\)_, where_ \(H\leq(\mathbb{F}[t],+)\) _is the group generated by_ \(\{P(y)-P(0):y\in\mathbb{F}[t]\}\)_;_

(iv) _for any_ \(\delta>0\)_, there exists_ \(N>0\) _such that if_ \(Q(t)\in\mathbb{F}[t]^{+}\) _has_ \(\operatorname{lpf}(Q)\geq N\) _and_ \(A,B\subseteq\mathbb{F}[t]_{Q}\) _are subsets with_ \(|A||B|\geq\delta|Q|^{2}\)_, then there exist_ \(x,y\in\mathbb{F}[t]_{Q}\) _such that_ \(x\in A\) _and_ \(x+P(y)\in B\)_;_

(v) _there exist_ \(C_{1},C_{2},\gamma>0\) _such that for any_ \(Q(t)\in\mathbb{F}[t]^{+}\) _with_ \(\operatorname{lpf}(Q)\geq C_{1}\) _and any_ \(A,B\subseteq\mathbb{F}[t]_{Q}\)_, one has_ \[\bigg{|}\,\big{|}\big{\{}(x,y)\in\mathbb{F}[t]_{Q}^{2}:x\in A,x+P(y)\in B\big{\}}\,\big{|}-|A||B|\bigg{|}\leq C_{2}|A|^{1/2}|B|^{1/2}|Q|\cdot\operatorname{lpf}(Q)^{-\gamma}.\]

_Moreover, if \(P(y)\) is good for irrational equidistribution, then each of the properties (i)-(v) holds._

By restricting the coefficients of the polynomial \(P\), we can prove a stronger version of Theorem 1.17:

**Theorem 1.18**.: _Let \(P(y)\in\mathbb{F}_{p}[y]\). Let \(\eta_{1},\ldots,\eta_{n}\in\mathbb{F}_{p}[y]\) be additive polynomials and \(r_{1},\ldots,r_{n}\in\mathbb{N}\) distinct positive integers not divisible by \(p\) so that \(P(y)=\sum_{i=1}^{n}\eta_{i}(y^{r_{i}})\). The following are equivalent:_

(i) \(P(y)\) _is good for irrational equidistribution;_

(ii) _for any totally ergodic system_ \(\big{(}X,\mathcal{X},\mu,(T_{y})_{y\in\mathbb{F}[t]}\big{)}\)_, any Følner sequence_ \((\Phi_{N})_{N\in\mathbb{N}}\) _in_ \(\mathbb{F}[t]\)_, and any_ \(f\in L^{2}(\mu)\)_,_ \[\lim_{N\to\infty}\mathop{\mathbb{E}}_{y\in\Phi_{N}}T_{P(y)}f=\int_{X}f\ d\mu;\]

(iii) _there exist additive polynomials_ \(\zeta_{1},\ldots,\zeta_{n}\in\mathbb{F}_{p}[y]\) _and_ \(a\in\mathbb{F}_{p}^{\times}\) _such that_ \[\sum_{i=1}^{n}\left(\eta_{i}\circ\zeta_{i}\right)(y)=ay;\]

(iv) _for any_ \(Q(t)\in\mathbb{F}[t]^{+}\)_,_ \[\sup_{\|f\|_{L^{2}(\mathbb{F}[t]_{Q})}=1}\left\|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}f(x+P(y))-\mathop{\mathbb{E}}_{z\in\mathbb{F}[t]_{Q}}f(z)\right\|_{L^{2}(\mathbb{F}[t]_{Q})}=o_{\operatorname{lpf}(Q)\to\infty}(1);\]

(v) _there exist_ \(C_{1},C_{2},\gamma>0\) _such that for any_ \(Q(t)\in\mathbb{F}[t]^{+}\) _with_ \(\operatorname{lpf}(Q)\geq C_{1}\)_, one has_ \[\sup_{\|f\|_{L^{2}(\mathbb{F}[t]_{Q})}=1}\left\|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}f(x+P(y))-\mathop{\mathbb{E}}_{z\in\mathbb{F}[t]_{Q}}f(z)\right\|_{L^{2}(\mathbb{F}[t]_{Q})}\leq C_{2}\cdot\operatorname{lpf}(Q)^{-\gamma};\]

(vi) _there exists_ \(C>0\) _such that if_ \(Q(t)\in\mathbb{F}[t]^{+}\) _and_ \(\operatorname{lpf}(Q)\geq C\)_, then_ \(H_{Q}=\mathbb{F}[t]_{Q}\)_, where_ \(H_{Q}=\sum_{i=1}^{n}H_{i,Q}\) _and_ \(H_{i,Q}=\eta_{i}(\mathbb{F}[t]_{Q})\)_;_

(vii) _for any_ \(\delta>0\)_, there exists_ \(N>0\) _such that if_ \(Q(t)\in\mathbb{F}[t]^{+}\) _has_ \(\operatorname{lpf}(Q)\geq N\) _and_ \(A,B\subseteq\mathbb{F}[t]_{Q}\) _are subsets with_ \(|A||B|\geq\delta|Q|^{2}\)_, then there exist_ \(x,y\in\mathbb{F}[t]_{Q}\) _such that_ \(x\in A\) _and_ \(x+P(y)\in B\)_;_

(viii) _there exist_ \(C_{1},C_{2},\gamma>0\) _such that for any_ \(Q(t)\in\mathbb{F}[t]^{+}\) _with_ \(\operatorname{lpf}(Q)\geq C_{1}\) _and any_ \(A,B\subseteq\mathbb{F}[t]_{Q}\)_, one has_ \[\bigg{|}\left|\left\{(x,y)\in\mathbb{F}[t]_{Q}^{2}:x\in A,x+P(y)\in B\right\}\right|-|A||B|\bigg{|}\leq C_{2}|A|^{1/2}|B|^{1/2}|Q|\cdot\operatorname{lpf}(Q)^{-\gamma}.\]
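Before turning to the proofs, here is a small computational illustration of condition (vi). This sketch is our own, not from the paper: the prime \(p=3\), the modulus \(Q(t)=t^{3}+2t+1\) (irreducible over \(\mathbb{F}_{3}\), since it has no roots there), and all helper names are illustrative choices. It contrasts \(P(y)=y^{2p}-y^{2}\) from Example 1.16(5), for which the subgroup \(H_{Q}\) below is proper, with \(P(y)=y^{p^{2}}+y^{2p}-y\) from Example 1.16(3), for which \(H_{Q}=\mathbb{F}[t]_{Q}\).

```python
from itertools import product

p = 3
Q = (1, 2, 0, 1)          # Q(t) = 1 + 2t + t^3, coefficients listed low degree first
n = len(Q) - 1            # deg Q, so F_3[t]_Q has p**n = 27 elements

def add(a, b):            # addition in F_p[t]_Q, componentwise mod p
    return tuple((x + y) % p for x, y in zip(a, b))

def neg(a):
    return tuple((-x) % p for x in a)

def mul(a, b):            # multiply, then reduce using t^n = -(Q_0 + ... + Q_{n-1} t^{n-1})
    c = [0] * (2 * n - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] = (c[i + j] + x * y) % p
    for k in range(len(c) - 1, n - 1, -1):
        coef, c[k] = c[k], 0
        for j in range(n):
            c[k - n + j] = (c[k - n + j] - coef * Q[j]) % p
    return tuple(c[:n])

def power(a, e):          # e-th power by repeated multiplication (e >= 1)
    r = a
    for _ in range(e - 1):
        r = mul(r, a)
    return r

F = list(product(range(p), repeat=n))

# Example 1.16(5): P(y) = y^(2p) - y^2 = eta(y^2) with eta(y) = y^p - y, so H_Q = eta(F)
H_bad = {add(power(x, p), neg(x)) for x in F}
# Example 1.16(3): P(y) = y^(p^2) + y^(2p) - y with eta1(y) = y^(p^2) - y, eta2(y) = y^p
H1 = {add(power(x, p * p), neg(x)) for x in F}
H2 = {power(x, p) for x in F}     # Frobenius is onto the field F_3[t]_Q, so H2 = F
H_good = {add(a, b) for a in H1 for b in H2}
print(len(H_bad), len(H_good), len(F))   # expect 9 27 27
```

For this particular irreducible \(Q\) (so \(\operatorname{lpf}(Q)=|Q|=27\)), the first computation reflects the failure of condition (vi) that drives Example 1.16(5), while the second is consistent with condition (iii) holding as in Example 1.16(3).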
## 2. Asymptotic total ergodicity

In this section, we prove that the quantity \(\operatorname{lpf}(Q)\) captures the phenomenon of asymptotic total ergodicity. Recall Theorem 1.5:

**Theorem 1.5**.: _Let \((Q_{n})_{n\in\mathbb{N}}\) be a sequence in \(\mathbb{F}[t]^{+}\). The following are equivalent:_

(i) _The sequence of quotient rings_ \(\mathbb{F}[t]_{Q_{n}}\) _is asymptotically totally ergodic: for any_ \(m\in\mathbb{F}[t]\setminus\{0\}\)_,_ \[\sup_{f_{n}:\mathbb{F}[t]_{Q_{n}}\to\mathbb{D}}\bigg{\|}\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q_{n}}}f_{n}(x+my)-\mathop{\mathbb{E}}_{z\in\mathbb{F}[t]_{Q_{n}}}f_{n}(z)\bigg{\|}_{L^{2}(\mathbb{F}[t]_{Q_{n}})}\xrightarrow[n\to\infty]{}0.\]

(ii) \(\operatorname{lpf}(Q_{n})\to\infty\)_._

Proof.: (ii) \(\implies\) (i). Fix \(m\in\mathbb{F}[t]\setminus\{0\}\). If \(Q\in\mathbb{F}[t]^{+}\) has \(\operatorname{lpf}(Q)>|m|\), then \(m\) is an element of the multiplicative group \(\mathbb{F}[t]_{Q}^{\times}\). Hence, for any \(x\in\mathbb{F}[t]_{Q}\) and any \(f:\mathbb{F}[t]_{Q}\to\mathbb{C}\), \[\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}f(x+my)=\mathop{\mathbb{E}}_{z\in\mathbb{F}[t]_{Q}}f(z).\]

(i) \(\implies\) (ii). Let \(Q\in\mathbb{F}[t]^{+}\), and let \(m\) be an irreducible factor of \(Q\). Enumerate \(\mathbb{F}[t]_{m}=\{x_{0},\dots,x_{|m|-1}\}\), and let \(f_{0}:\mathbb{F}[t]_{m}\to\mathbb{D}\) be the function \(f_{0}(x_{k})=e^{2\pi ik/|m|}\). Define \(f:\mathbb{F}[t]_{Q}\to\mathbb{D}\) by \(f(x)=f_{0}(x\bmod m)\). Then \[\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}f(x+my)=f(x)\] for every \(x\in\mathbb{F}[t]_{Q}\). On the other hand, \[\mathop{\mathbb{E}}_{z\in\mathbb{F}[t]_{Q}}f(z)=\mathop{\mathbb{E}}_{0\leq k<|m|}\ \mathop{\mathbb{E}}_{\begin{subarray}{c}z\in\mathbb{F}[t]_{Q}\\ z\equiv x_{k}\ (\bmod\,m)\end{subarray}}f(z)=\mathop{\mathbb{E}}_{0\leq k<|m|}f_{0}(x_{k})=0.\] Therefore, \[\bigg{\|}\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}f(x+my)-\mathop{\mathbb{E}}_{z\in\mathbb{F}[t]_{Q}}f(z)\bigg{\|}_{L^{2}(\mathbb{F}[t]_{Q})}^{2}=\mathop{\mathbb{E}}_{x\in\mathbb{F}[t]_{Q}}|f(x)|^{2}=1.\]

Now suppose \(\operatorname{lpf}(Q_{n})\not\to\infty\). Taking a subsequence if necessary, we may assume \(\operatorname{lpf}(Q_{n})\) is bounded. By the pigeonhole principle, we may then take a further subsequence and assume that there is a common irreducible factor \(m\) of every \(Q_{n}\), \(n\in\mathbb{N}\). The above calculation shows that we may find \(f_{n}:\mathbb{F}[t]_{Q_{n}}\to\mathbb{D}\) with \[\bigg{\|}\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q_{n}}}f_{n}(x+my)-\mathop{\mathbb{E}}_{z\in\mathbb{F}[t]_{Q_{n}}}f_{n}(z)\bigg{\|}_{L^{2}(\mathbb{F}[t]_{Q_{n}})}=1\] for each \(n\in\mathbb{N}\), contradicting (i).

## 3. Asymptotic projection theorem

We now turn to proving Theorem 1.10 with Fourier analysis. Characters on \(\mathbb{F}[t]_{Q}\) take the form \(\chi(x)=e(sx/Q)\) for some \(s\in\mathbb{F}[t]_{Q}\). For a function \(f:\mathbb{F}[t]_{Q}\to\mathbb{C}\), we therefore define its Fourier transform \(\widehat{f}:\mathbb{F}[t]_{Q}\to\mathbb{C}\) by \[\widehat{f}(s)=\mathop{\mathbb{E}}_{x\in\mathbb{F}[t]_{Q}}f(x)e(-sx/Q).\]

We state some basic properties of the Fourier transform:

**Proposition 3.1**.: _For any \(f:\mathbb{F}[t]_{Q}\to\mathbb{C}\), one has_

* _Fourier inversion:_ \[f(x)=\sum_{s\in\mathbb{F}[t]_{Q}}\widehat{f}(s)e(sx/Q).\]
* _Parseval's identity:_ \[\mathop{\mathbb{E}}_{x\in\mathbb{F}[t]_{Q}}|f(x)|^{2}=\sum_{s\in\mathbb{F}[t]_{Q}}\Big{|}\widehat{f}(s)\Big{|}^{2}.\]

Define \(F(x):=\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}f(x+P(y))-\mathop{\mathbb{E}}_{z\in H_{Q}}f(x+a_{0}+z)\).
Then \[\widehat{F}(s)=\left(\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}e(sP(y)/Q)-e(sa_{0}/Q)\mathbbm{1}_{H_{Q}^{\perp}}(s)\right)\widehat{f}(s),\] where \(H_{Q}^{\perp}=\{s\in\mathbb{F}[t]_{Q}:e(sz/Q)=1\text{ for all }z\in H_{Q}\}\) is the annihilator of the subgroup \(H_{Q}\). Hence, by Parseval's identity, \[\mathop{\mathbb{E}}_{x\in\mathbb{F}[t]_{Q}}|F(x)|^{2}=\sum_{s\in\mathbb{F}[t]_{Q}}\left|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}e(sP(y)/Q)-e(sa_{0}/Q)\mathbbm{1}_{H_{Q}^{\perp}}(s)\right|^{2}\Big{|}\widehat{f}(s)\Big{|}^{2}\leq\left(\sup_{s\in\mathbb{F}[t]_{Q}}\left|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}e(sP(y)/Q)-e(sa_{0}/Q)\mathbbm{1}_{H_{Q}^{\perp}}(s)\right|\right)^{2}\mathop{\mathbb{E}}_{x\in\mathbb{F}[t]_{Q}}|f(x)|^{2}.\]

All that remains is to prove the following inequality:

**Proposition 3.2**.: _Let \(P(y)\in(\mathbb{F}[t])[y]\) be a nonconstant polynomial of degree \(d\) and derivational degree \(k\). Write \(P(y)=a_{0}+\sum_{i=1}^{n}\eta_{i}(y^{r_{i}})\). Then for any \(Q(t)\in\mathbb{F}[t]^{+}\) and any \(s\in\mathbb{F}[t]_{Q}\),_ \[\left|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}e(sP(y)/Q)-e(sa_{0}/Q)\mathbbm{1}_{H_{Q}^{\perp}}(s)\right|^{2^{k-1}}\leq p^{2\lfloor\log_{p}d\rfloor}\frac{k-1}{\operatorname{lpf}(Q)}.\]

A key ingredient in Proposition 3.2 is the following van der Corput-type inequality. We do not use any ring structure for this result, so we state and prove it in the setting of an arbitrary finite abelian group \(G\). For a function \(f:G\to\mathbb{C}\), define a multiplicative differencing operator by \(\Delta_{v}f(u)=f(u+v)\overline{f(u)}\), and let \(\Delta_{v_{1},\ldots,v_{k}}f=\Delta_{v_{k}}\left(\Delta_{v_{1},\ldots,v_{k-1}}f\right)\) for \(k\in\mathbb{N}\) and \(v_{1},\ldots,v_{k}\in G\).

**Lemma 3.3**.: _Let \(G\) be a finite abelian group, and let \(H\leq G\) be a subgroup. For any function \(f:G\to\mathbb{C}\) and any \(k\in\mathbb{N}\),_ \[\left|\mathop{\mathbb{E}}_{x\in G}f(x)\right|^{2^{k}}\leq\mathop{\mathbb{E}}_{v_{1},\ldots,v_{k}\in H}\mathop{\mathbb{E}}_{u\in G}\Delta_{v_{1},\ldots,v_{k}}f(u).\]

**Remark 3.4**.: It is worth commenting on two special cases of Lemma 3.3. When \(H=G\), the quantity on the right-hand side is equal to \(\left\|f\right\|_{U^{k}(G)}^{2^{k}}\), so the conclusion of Lemma 3.3 reduces to the inequality \(\left\|f\right\|_{U^{1}(G)}\leq\left\|f\right\|_{U^{k}(G)}\), which is a special case of monotonicity for the Gowers (semi)norms. On the other hand, when \(H=\{0\}\), the right-hand side is equal to \(\mathop{\mathbb{E}}_{u\in G}\left|f(u)\right|^{2^{k}}\), so the conclusion of Lemma 3.3 follows by Jensen's inequality. The general case can be seen as interpolating between these two extremes.

Proof.: Suppose \(k=1\). Note that \[\mathop{\mathbb{E}}_{x\in G}f(x)=\mathop{\mathbb{E}}_{x\in G}\mathop{\mathbb{E}}_{h\in H}f(x+h).\] Therefore, by Jensen's inequality, \[\left|\mathop{\mathbb{E}}_{x\in G}f(x)\right|^{2}=\left|\mathop{\mathbb{E}}_{x\in G}\mathop{\mathbb{E}}_{h\in H}f(x+h)\right|^{2}\leq\mathop{\mathbb{E}}_{x\in G}\left|\mathop{\mathbb{E}}_{h\in H}f(x+h)\right|^{2}=\mathop{\mathbb{E}}_{x\in G}\mathop{\mathbb{E}}_{h_{1},h_{2}\in H}f(x+h_{1})\overline{f(x+h_{2})}.\] Interchanging the order of averaging and making the substitutions \(v=h_{1}-h_{2}\), \(u=x+h_{2}\), we obtain the desired inequality \[\left|\mathop{\mathbb{E}}_{x\in G}f(x)\right|^{2}\leq\mathop{\mathbb{E}}_{v\in H}\mathop{\mathbb{E}}_{u\in G}f(u+v)\overline{f(u)}.\]

Suppose the inequality holds for \(k-1\).
Then \[\left|\mathop{\mathbb{E}}_{x\in G}f(x)\right|^{2^{k}}=\left(\left|\mathop{\mathbb{E}}_{x\in G}f(x)\right|^{2^{k-1}}\right)^{2}\leq\left|\mathop{\mathbb{E}}_{v_{1},\ldots,v_{k-1}\in H}\mathop{\mathbb{E}}_{u\in G}\Delta_{v_{1},\ldots,v_{k-1}}f(u)\right|^{2},\] which is in turn bounded above by \[\mathop{\mathbb{E}}_{v_{1},\ldots,v_{k-1}\in H}\left|\mathop{\mathbb{E}}_{u\in G}\Delta_{v_{1},\ldots,v_{k-1}}f(u)\right|^{2}.\] For fixed \(v_{1},\ldots,v_{k-1}\), applying the \(k=1\) case with the function \(\Delta_{v_{1},\ldots,v_{k-1}}f\) gives \[\left|\mathop{\mathbb{E}}_{u\in G}\Delta_{v_{1},\ldots,v_{k-1}}f(u)\right|^{2}\leq\mathop{\mathbb{E}}_{v_{k}\in H}\mathop{\mathbb{E}}_{u\in G}\Delta_{v_{k}}\Delta_{v_{1},\ldots,v_{k-1}}f(u)=\mathop{\mathbb{E}}_{v_{k}\in H}\mathop{\mathbb{E}}_{u\in G}\Delta_{v_{1},\ldots,v_{k-1},v_{k}}f(u).\] Putting everything together, \[\left|\mathop{\mathbb{E}}_{x\in G}f(x)\right|^{2^{k}}\leq\mathop{\mathbb{E}}_{v_{1},\ldots,v_{k}\in H}\mathop{\mathbb{E}}_{u\in G}\Delta_{v_{1},\ldots,v_{k}}f(u).\]

Proof of Proposition 3.2.: We first make a reduction to separable polynomials. If \(s\in H_{Q}^{\perp}\), then \(e(sP(y)/Q)=e(sa_{0}/Q)e(s(P(y)-a_{0})/Q)=e(sa_{0}/Q)\) for every \(y\in\mathbb{F}[t]_{Q}\), since \(P(y)-a_{0}\in H_{Q}\). Suppose now that \(s\notin H_{Q}^{\perp}\). We want to show \[\left|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}e\left(sP(y)/Q\right)\right|^{2^{k-1}}\leq p^{2\left\lfloor\log_{p}d\right\rfloor}\frac{k-1}{\operatorname{lpf}(Q)}.\] Noting that \(H_{Q}^{\perp}=\cap_{i=1}^{n}H_{i,Q}^{\perp}\), we have \[I=\left\{1\leq i\leq n:s\notin H_{i,Q}^{\perp}\right\}\neq\emptyset.\] Moreover, for any \(y\in\mathbb{F}[t]_{Q}\), \[e\left(sP(y)/Q\right)=e(sa_{0}/Q)e\left(s\sum_{i\in I}\eta_{i}(y^{r_{i}})/Q\right).\] Now, for each \(i\in I\), the function \(\chi_{i}(x)=e(s\eta_{i}(x)/Q)\) is a nontrivial character on \(\mathbb{F}[t]_{Q}\), so there exists \(s_{i}\neq 0\) such that \(\chi_{i}(x)=e(s_{i}x/Q)\). It therefore suffices to prove the following: for any nonconstant separable polynomial \(P(y)\in(\mathbb{F}[t]_{Q})[y]\) of degree \(d\) and derivational degree \(k\), \[\left|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}e\left(P(y)/Q\right)\right|^{2^{k-1}}\leq p^{2\lfloor\log_{p}d\rfloor}\frac{k-1}{\operatorname{lpf}(Q)}.\]

Suppose \(k=1\). Then \(P(y)=s_{1}y+s_{0}\in(\mathbb{F}[t]_{Q})[y]\) with \(s_{1}\neq 0\). Therefore, \[\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}e(P(y)/Q)=e(s_{0}/Q)\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}e(s_{1}y/Q)=0.\]

Now suppose \(k\geq 2\). Write \(P(y)=\sum_{i=1}^{n}s_{i}y^{r_{i}}+P^{\prime}(y)\), where \(\mathrm{d}\text{-}\mathrm{deg}\,y^{r_{i}}=k\) for each \(i\) and \(\mathrm{d}\text{-}\mathrm{deg}\,P^{\prime}(y)\leq k-1\). By Lemma 3.3, \[\left|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}e(P(y)/Q)\right|^{2^{k-1}}\leq\mathop{\mathbb{E}}_{v_{1},\ldots,v_{k-1}\in K}\mathop{\mathbb{E}}_{u\in\mathbb{F}[t]_{Q}}e\left(\partial_{v_{1},\ldots,v_{k-1}}P(u)/Q\right)\] for any subgroup \(K\leq(\mathbb{F}[t]_{Q},+)\). (We will take a convenient choice for \(K\) later.)
We now wish to obtain an expression for \(\partial_{v_{1},\ldots,v_{k-1}}P(u)\) that will allow us to bound the average \[\mathop{\mathbb{E}}_{u}e\left(\partial_{v_{1},\ldots,v_{k-1}}P(u)/Q\right).\] For \(v_{1},\ldots,v_{k-1}\in\mathbb{F}[t]_{Q}\), one has that \(\partial_{v_{1},\ldots,v_{k-1}}P^{\prime}(u)\) is constant (as a function of \(u\)), since \(\mathrm{d}\text{-}\mathrm{deg}\,P^{\prime}(y)\leq k-1\), so we can pull the constant \(e\left(\partial_{v_{1},\ldots,v_{k-1}}P^{\prime}(u)/Q\right)\) outside of the average.

Let \(m=\left\lfloor\log_{p}d\right\rfloor\) so that \(p^{m}\leq d\) and \(p^{m+1}>d\). For each \(i=1,\ldots,n\), we may write \(r_{i}=\sum_{j=0}^{m}c_{i,j}p^{j}\) with \(c_{i,j}\in\{0,\ldots,p-1\}\) and \(\sum_{j=0}^{m}c_{i,j}=k\). Since \(P\) is separable by assumption, we have \(c_{i,0}\neq 0\) for \(i\in\{1,\ldots,n\}\). Then \[\partial_{v_{1},\ldots,v_{k-1}}(u^{r_{i}})=b_{i}\sum_{j=0}^{m}S_{i,j}(v_{1},\ldots,v_{k-1})u^{p^{j}}+R_{i}(v_{1},\ldots,v_{k-1}),\] where \(b_{i}=\prod_{j=1}^{m}c_{i,j}!\), \(S_{i,j}(v_{1},\ldots,v_{k-1})\) is the sum of all monomials of the form \[\prod_{l=1}^{k-1}v_{l}^{p^{j_{l}}}\] with \[\left|\left\{1\leq l\leq k-1:j_{l}=j^{\prime}\right\}\right|=\begin{cases}c_{i,j^{\prime}},&\text{if }j^{\prime}\neq j;\\ c_{i,j}-1,&\text{if }j^{\prime}=j,\end{cases}\] and \(R_{i}\) is a symmetric polynomial in \(k-1\) variables. (If \(c_{i,j}=0\), then \(S_{i,j}=0\).) We can therefore write \[\partial_{v_{1},\ldots,v_{k-1}}P(u)=\sum_{i=1}^{n}s_{i}b_{i}\sum_{j=0}^{m}S_{i,j}(v_{1},\ldots,v_{k-1})u^{p^{j}}+R(v_{1},\ldots,v_{k-1})=\sum_{j=0}^{m}\left(\sum_{i=1}^{n}s_{i}b_{i}S_{i,j}(v_{1},\ldots,v_{k-1})\right)u^{p^{j}}+R(v_{1},\ldots,v_{k-1}),\] where \(R=\sum_{i=1}^{n}R_{i}\).

Let \[\eta_{v_{1},\ldots,v_{k-1}}(u)=\sum_{j=0}^{m}\left(\sum_{i=1}^{n}s_{i}b_{i}S_{i,j}(v_{1},\ldots,v_{k-1})\right)u^{p^{j}}.\] Note that \(\eta_{v_{1},\ldots,v_{k-1}}\) is a group homomorphism \((\mathbb{F}[t]_{Q},+)\to(\mathbb{F}[t]_{Q},+)\). It follows that \[\left|\mathop{\mathbb{E}}_{u}e\left(\partial_{v_{1},\ldots,v_{k-1}}P(u)/Q\right)\right|=\left|\mathop{\mathbb{E}}_{u}e\left(\eta_{v_{1},\ldots,v_{k-1}}(u)/Q\right)\right|=0\] whenever \(e\left(\eta_{v_{1},\ldots,v_{k-1}}(\cdot)/Q\right)\) is a nontrivial character. Since \(e\left(\eta_{v_{1},\ldots,v_{k-1}}(\cdot)/Q\right)\) is a character on \(\mathbb{F}[t]_{Q}\), it may be written in the form \(e\left(\varphi(v_{1},\ldots,v_{k-1})u/Q\right)\) for some \(\varphi(v_{1},\ldots,v_{k-1})\in\mathbb{F}[t]_{Q}\); in particular, the average above vanishes unless \(\varphi(v_{1},\ldots,v_{k-1})=0\). We have thus obtained the bound \[\left|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}e(P(y)/Q)\right|^{2^{k-1}}\leq\frac{\left|\left\{(v_{1},\ldots,v_{k-1})\in K^{k-1}:\varphi(v_{1},\ldots,v_{k-1})=0\right\}\right|}{|K|^{k-1}}.\]

The remainder of the proof consists of two main steps. First, we show that, for a convenient choice of \(K\), the function \(\varphi\) becomes (after a change of coordinates) a polynomial in \(k-1\) variables. Next, we establish a bound on the number of roots of multivariable polynomials mod \(Q\). Recall \(m=\left\lfloor\log_{p}d\right\rfloor\). Let \(K=\left\{x^{p^{m}}:x\in\mathbb{F}[t]_{Q}\right\}\). This is a subgroup, since the function \(x\mapsto x^{p^{m}}\) is a homomorphism.
For \(0\leq j\leq m\), let \(C_{j}=\sum_{i=1}^{n}s_{i}b_{i}S_{i,j}\) so that \[\eta_{v_{1},\ldots,v_{k-1}}(u)=\sum_{j=0}^{m}C_{j}(v_{1},\ldots,v_{k-1})u^{p^{j}},\] and each of the polynomials \(C_{j}(v_{1},\ldots,v_{k-1})\) is an additive polynomial of degree at most \(p^{m}\) in each coordinate. In particular, \(v_{i}\mid C_{j}(v_{1},\ldots,v_{k-1})\) for each \(i\in\{1,\ldots,k-1\}\). Making the substitution \(v_{i}=w_{i}^{p^{m}}\), we therefore have \[\eta_{w_{1}^{p^{m}},\ldots,w_{k-1}^{p^{m}}}(u)=\sum_{j=0}^{m}C_{j}\left(w_{1}^{p^{m}},\ldots,w_{k-1}^{p^{m}}\right)u^{p^{j}}=\sum_{j=0}^{m}\widetilde{C}_{j}\left(w_{1},\ldots,w_{k-1}\right)w_{1}^{p^{j}}\ldots w_{k-1}^{p^{j}}u^{p^{j}}\] for some polynomials \(\widetilde{C}_{j}\). For each \(j\geq 0\), the function \(\chi(u)=e(u^{p^{j}}/Q)\) is a character on \(\mathbb{F}[t]_{Q}\), so there exists \(z_{j}\in\mathbb{F}[t]_{Q}\) such that \(\chi(u)=e(z_{j}u/Q)\). Hence, defining \(\psi(w_{1},\ldots,w_{k-1})=\varphi\left(w_{1}^{p^{m}},\ldots,w_{k-1}^{p^{m}}\right)\), we have \[\psi(w_{1},\ldots,w_{k-1})=\sum_{j=0}^{m}z_{j}\widetilde{C}_{j}(w_{1},\ldots,w_{k-1})w_{1}\ldots w_{k-1}.\] That is, \(\psi\) is a polynomial of degree at most \(p^{2m}\) in each coordinate.

We claim that \(\psi\) is not the zero polynomial. By definition, \(z_{0}=1\). We also have \[C_{0}(v_{1},\ldots,v_{k-1})=\sum_{i=1}^{n}s_{i}b_{i}S_{i,0}(v_{1},\ldots,v_{k-1}).\] The coefficients \(b_{i}\) are integers coprime to \(p\), so \(b_{i}\in\mathbb{F}[t]_{Q}^{\times}\). Hence, \(s_{i}b_{i}\neq 0\). Now, \(S_{i,0}\) is a sum of terms of the form \[\prod_{l=1}^{k-1}v_{l}^{p^{j_{l}}}\] with the property \(\sum_{l=1}^{k-1}p^{j_{l}}=r_{i}-1\). Therefore, the monomials appearing in \(S_{i,0}\) are distinct from the monomials appearing in \(S_{i^{\prime},0}\) for \(i\neq i^{\prime}\). It follows that \(C_{0}\) is not the zero polynomial. Thus, \[z_{0}\widetilde{C}_{0}(w_{1},\ldots,w_{k-1})w_{1}\ldots w_{k-1}=C_{0}\left(w_{1}^{p^{m}},\ldots,w_{k-1}^{p^{m}}\right)\] is not the zero polynomial, and each monomial appearing in it has degree divisible by \(p^{m}\). Finally, for \(j\neq 0\), we have \[\widetilde{C}_{j}(w_{1},\ldots,w_{k-1})w_{1}\ldots w_{k-1}=\frac{C_{j}\left(w_{1}^{p^{m}},\ldots,w_{k-1}^{p^{m}}\right)}{\prod_{l=1}^{k-1}w_{l}^{p^{j}-1}},\] which consists of monomials in which each variable has degree congruent to \(1\bmod p\). This proves that \(\psi\) is not the zero polynomial.

The final step is to show that \(\psi\) has only a small number of zeros.

**Lemma 3.5**.: _Let \(l\in\mathbb{N}\), and let \(T(y_{1},\ldots,y_{l})\in(\mathbb{F}[t]_{Q})[y_{1},\ldots,y_{l}]\) be a nonzero polynomial of degree \(d_{i}\) in the variable \(y_{i}\) for \(i=1,\ldots,l\). Then_ \[\left|\left\{(y_{1},\ldots,y_{l})\in\mathbb{F}[t]_{Q}^{l}:T(y_{1},\ldots,y_{l})=0\right\}\right|\leq\left(\sum_{i=1}^{l}d_{i}\right)\frac{|Q|^{l}}{\operatorname{lpf}(Q)}.\]

Proof of Lemma.: Let us first consider the case \(l=1\). Write \(T(y)=\alpha_{d}y^{d}+\cdots+\alpha_{1}y+\alpha_{0}\). We view \(\alpha_{0},\ldots,\alpha_{d}\) as elements of \(\mathbb{F}[t]\) with \(|\alpha_{i}|<|Q|\). Let \(\alpha=\gcd(\alpha_{0},\ldots,\alpha_{d},Q)\), \(\widetilde{\alpha}_{i}=\frac{\alpha_{i}}{\alpha}\), and \(\widetilde{T}(y)=\widetilde{\alpha}_{d}y^{d}+\cdots+\widetilde{\alpha}_{1}y+\widetilde{\alpha}_{0}\). Fix \(Q^{\prime}\in\mathbb{F}[t]^{+}\) irreducible such that \(Q^{\prime}\mid\frac{Q}{\alpha}\). Then \(Q^{\prime}\nmid\widetilde{\alpha}_{i}\) for some \(i\in\{0,\ldots,d\}\).
Since \(\mathbb{F}[t]_{Q^{\prime}}\) is a field and \(\widetilde{T}\) reduces to a nonzero polynomial of degree at most \(d\) mod \(Q^{\prime}\), we have \[\left|\left\{y^{\prime}\in\mathbb{F}[t]_{Q^{\prime}}:\widetilde{T}(y^{\prime})\equiv 0\pmod{Q^{\prime}}\right\}\right|\leq d.\] Now suppose \(T(y)=0\). Then \(\alpha\widetilde{T}(y)=0\). That is, \(Q\mid\alpha\widetilde{T}(y)\), so \(\frac{Q}{\alpha}\mid\widetilde{T}(y)\). Hence, \(Q^{\prime}\mid\widetilde{T}(y)\). Equivalently, \(\widetilde{T}(y)\equiv 0\pmod{Q^{\prime}}\). Therefore, \[|\{y\in\mathbb{F}[t]_{Q}:T(y)=0\}|\leq d\frac{|Q|}{|Q^{\prime}|}\leq d\frac{|Q|}{\operatorname{lpf}(Q)}.\]

Suppose \(l\geq 2\). If \(\operatorname{lpf}(Q)\leq\sum_{i=1}^{l}d_{i}\), then there is nothing to prove, so assume \(\operatorname{lpf}(Q)>\sum_{i=1}^{l}d_{i}\). Fix \(y\in\mathbb{F}[t]_{Q}\), and let \(T_{y}(y_{1},\ldots,y_{l-1})=T(y_{1},\ldots,y_{l-1},y)\). If \(T_{y}(y_{1},\ldots,y_{l-1})\in(\mathbb{F}[t]_{Q})[y_{1},\ldots,y_{l-1}]\) is not the zero polynomial, then by the induction hypothesis, \[\left|\left\{(y_{1},\ldots,y_{l-1})\in\mathbb{F}[t]_{Q}^{l-1}:T_{y}(y_{1},\ldots,y_{l-1})=0\right\}\right|\leq\left(\sum_{i=1}^{l-1}d_{i}\right)\frac{|Q|^{l-1}}{\operatorname{lpf}(Q)}.\] Hence, \[\left|\left\{(y_{1},\ldots,y_{l})\in\mathbb{F}[t]_{Q}^{l}:T(y_{1},\ldots,y_{l})=0\right\}\right|=|Q|^{l-1}\left|\left\{y\in\mathbb{F}[t]_{Q}:T_{y}=0\right\}\right|+\left|\left\{(y_{1},\ldots,y_{l-1},y):T_{y}\neq 0,\ T_{y}(y_{1},\ldots,y_{l-1})=0\right\}\right|\leq|Q|^{l-1}\left|\left\{y\in\mathbb{F}[t]_{Q}:T_{y}=0\right\}\right|+\left(\sum_{i=1}^{l-1}d_{i}\right)\frac{|Q|^{l}}{\operatorname{lpf}(Q)}.\] It therefore suffices to prove \[|\{y\in\mathbb{F}[t]_{Q}:T_{y}=0\}|\leq d_{l}\frac{|Q|}{\operatorname{lpf}(Q)}.\]

Fix \(y_{1},\ldots,y_{l-1}\in\mathbb{F}[t]_{Q}\), and let \(T^{y_{1},\ldots,y_{l-1}}(y)=T_{y}(y_{1},\ldots,y_{l-1})=T(y_{1},\ldots,y_{l-1},y)\). If \(T_{y}=0\), then \(T^{y_{1},\ldots,y_{l-1}}(y)=0\). By the case \(l=1\), it follows that \[|\{y\in\mathbb{F}[t]_{Q}:T_{y}=0\}|\leq|\{y\in\mathbb{F}[t]_{Q}:T^{y_{1},\ldots,y_{l-1}}(y)=0\}|\leq d_{l}\frac{|Q|}{\operatorname{lpf}(Q)},\] unless \(T^{y_{1},\ldots,y_{l-1}}\) is the zero polynomial. So, it remains to find \(y_{1},\ldots,y_{l-1}\) such that \(T^{y_{1},\ldots,y_{l-1}}\) is a nonzero polynomial. Note that the coefficients of \(T^{y_{1},\ldots,y_{l-1}}\) are polynomial expressions in \(y_{1},\ldots,y_{l-1}\) of degree at most \(d_{i}\) in the variable \(y_{i}\). Since \(T\) is not the zero polynomial, there is at least one coefficient that is a nonzero polynomial \(C(y_{1},\ldots,y_{l-1})\in(\mathbb{F}[t]_{Q})[y_{1},\ldots,y_{l-1}]\). By the induction hypothesis, \[\Big{|}\Big{\{}(y_{1},\ldots,y_{l-1})\in\mathbb{F}[t]_{Q}^{l-1}:C(y_{1},\ldots,y_{l-1})=0\Big{\}}\Big{|}\leq\left(\sum_{i=1}^{l-1}d_{i}\right)\frac{|Q|^{l-1}}{\operatorname{lpf}(Q)}.\] Since \(\operatorname{lpf}(Q)>\sum_{i=1}^{l}d_{i}\geq\sum_{i=1}^{l-1}d_{i}\) by assumption, it follows that \(C(y_{1},\ldots,y_{l-1})\neq 0\) for some \((y_{1},\ldots,y_{l-1})\in\mathbb{F}[t]_{Q}^{l-1}\). For this choice of \(y_{1},\ldots,y_{l-1}\), the polynomial \(T^{y_{1},\ldots,y_{l-1}}(y)\in(\mathbb{F}[t]_{Q})[y]\) is not the zero polynomial, so we are done.
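Before returning to \(\psi\), Lemma 3.5 is easy to test by brute force in tiny cases. The following sketch is our own construction: the modulus \(Q(t)=t^{2}+t+1\) (irreducible over \(\mathbb{F}_{2}\), so \(\operatorname{lpf}(Q)=|Q|=4\)) and the bivariate test polynomial are arbitrary illustrative choices.

```python
from itertools import product

p, Q = 2, (1, 1, 1)        # Q(t) = 1 + t + t^2, coefficients listed low degree first
n = len(Q) - 1             # F_2[t]_Q is the field with p**n = 4 elements

def add(a, b):
    return tuple(x ^ y for x, y in zip(a, b))

def mul(a, b):             # multiply and reduce mod the monic Q (coefficients in F_2)
    c = [0] * (2 * n - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] ^= x & y
    for k in range(len(c) - 1, n - 1, -1):
        if c[k]:
            c[k] = 0
            for j in range(n):
                c[k - n + j] ^= Q[j]
    return tuple(c[:n])

F = list(product(range(p), repeat=n))
t = (0, 1)                 # the image of t in F_2[t]_Q

def T(y1, y2):             # test polynomial T(y1, y2) = y1*y2^2 + t*y1, so d1 = 1, d2 = 2
    return add(mul(y1, mul(y2, y2)), mul(t, y1))

zeros = sum(1 for y1 in F for y2 in F if T(y1, y2) == (0,) * n)
bound = (1 + 2) * (p ** n) ** 2 // (p ** n)   # (d1 + d2) |Q|^2 / lpf(Q) with lpf(Q) = |Q|
print(zeros, "<=", bound)                      # prints 7 <= 12
```

Here \(T(y_{1},y_{2})=y_{1}(y_{2}^{2}+t)\) vanishes when \(y_{1}=0\) or when \(y_{2}\) is the unique square root of \(t\) in the field, giving \(4+4-1=7\) zeros, within the lemma's bound of \(12\).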
Applying Lemma 3.5 to \(\psi\), we get the bound \[\Big{|}\Big{\{}(w_{1},\ldots,w_{k-1})\in\mathbb{F}[t]_{Q}^{k-1}:\psi(w_{1},\ldots,w_{k-1})=0\Big{\}}\Big{|}\leq p^{2m}(k-1)\frac{|Q|^{k-1}}{\operatorname{lpf}(Q)}.\] Thus, \[\left|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}e(P(y)/Q)\right|^{2^{k-1}}\leq\frac{\big{|}\big{\{}(v_{1},\ldots,v_{k-1})\in K^{k-1}:\varphi(v_{1},\ldots,v_{k-1})=0\big{\}}\big{|}}{|K|^{k-1}}=\frac{\Big{|}\Big{\{}(w_{1},\ldots,w_{k-1})\in\mathbb{F}[t]_{Q}^{k-1}:\psi(w_{1},\ldots,w_{k-1})=0\Big{\}}\Big{|}}{|Q|^{k-1}}\leq\frac{p^{2m}(k-1)}{\operatorname{lpf}(Q)}.\]

## 4. Power saving bounds for the Furstenberg-Sarkozy theorem in characteristic \(p\)

We now prove Corollary 1.12, restated below:

**Corollary 1.12**.: _Let \(P(y)\in(\mathbb{F}[t])[y]\) be an intersective polynomial of degree \(d\) and derivational degree \(k\). Let \(Q(t)\in\mathbb{F}[t]^{+}\). If \(A\subseteq\mathbb{F}[t]_{Q}\) does not contain distinct \(a,b\in A\) with \(b-a=P(y)\) for some \(y\in\mathbb{F}[t]_{Q}\), then_ \[|A|\ll_{P}|Q|\cdot\operatorname{lpf}(Q)^{-1/2^{k-1}}.\] _In particular, if \(Q\) is irreducible (so that \(\mathbb{F}[t]_{Q}\) is a field with \(|Q|\) elements), then_ \[|A|\ll_{P}|Q|^{1-1/2^{k-1}}.\]

Proof of Corollary 1.12.: Since \(P\) is intersective, there exists \(y\in\mathbb{F}[t]_{Q}\) with \(P(y)=0\). Hence, \(-a_{0}=P(y)-a_{0}\in H_{Q}\). Therefore, applying Theorem 1.10 with \(f=\mathbbm{1}_{A}\) and using the Cauchy-Schwarz inequality, we have \[\left|\mathop{\mathbb{E}}_{x,y\in\mathbb{F}[t]_{Q}}\mathbbm{1}_{A}(x)\mathbbm{1}_{A}(x+P(y))-\mathop{\mathbb{E}}_{x\in\mathbb{F}[t]_{Q},z\in H_{Q}}\mathbbm{1}_{A}(x)\mathbbm{1}_{A}(x+z)\right|\leq\frac{|A|}{|Q|}\left(p^{2\lfloor\log_{p}d\rfloor}\frac{k-1}{\operatorname{lpf}(Q)}\right)^{1/2^{k-1}}.\] On the one hand, if \(A\) contains no nontrivial patterns \(\{x,x+P(y)\}\), then by Lemma 3.5, \[\mathop{\mathbb{E}}_{x,y\in\mathbb{F}[t]_{Q}}\mathbbm{1}_{A}(x)\mathbbm{1}_{A}(x+P(y))=\frac{|A|}{|Q|}\cdot\frac{|\{y\in\mathbb{F}[t]_{Q}:P(y)=0\}|}{|Q|}\leq\frac{|A|}{|Q|}\cdot\frac{d}{\operatorname{lpf}(Q)},\] as long as \(\operatorname{lpf}(Q)\) is large enough so that \(P\) is not the zero polynomial mod \(Q\). On the other hand, \[\mathop{\mathbb{E}}_{x\in\mathbb{F}[t]_{Q},z\in H_{Q}}\mathbbm{1}_{A}(x)\mathbbm{1}_{A}(x+z)\geq\left(\frac{|A|}{|Q|}\right)^{2}\] by Lemma 3.3. Therefore, \[\left(\frac{|A|}{|Q|}\right)^{2}-\frac{|A|}{|Q|}\cdot\frac{d}{\operatorname{lpf}(Q)}\leq C\frac{|A|}{|Q|}\operatorname{lpf}(Q)^{-1/2^{k-1}},\] where \(C=\left(p^{2\lfloor\log_{p}d\rfloor}(k-1)\right)^{1/2^{k-1}}\). Multiplying both sides by \(\frac{|Q|^{2}}{|A|}\), we get the desired bound \[|A|\leq C|Q|\cdot\operatorname{lpf}(Q)^{-1/2^{k-1}}+d\frac{|Q|}{\operatorname{lpf}(Q)}\ll|Q|\cdot\operatorname{lpf}(Q)^{-1/2^{k-1}}.\]

## 5. Proof of equivalences

The goal of this section is to prove Theorem 1.17, restated here for convenience:

**Theorem 1.17**.: _Let \(P(y)\in(\mathbb{F}[t])[y]\) be a nonconstant polynomial. The following are equivalent:_

(i) _for any_ \(Q(t)\in\mathbb{F}[t]^{+}\)_,_ \[\sup_{\|f\|_{L^{2}(\mathbb{F}[t]_{Q})}=1}\left\|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}f(x+P(y))-\mathop{\mathbb{E}}_{z\in\mathbb{F}[t]_{Q}}f(z)\right\|_{L^{2}(\mathbb{F}[t]_{Q})}=o_{\operatorname{lpf}(Q)\to\infty}(1);\]
(ii) _there exist_ \(C_{1},C_{2},\gamma>0\) _such that for any_ \(Q(t)\in\mathbb{F}[t]^{+}\) _with_ \(\operatorname{lpf}(Q)\geq C_{1}\)_, one has_ \[\sup_{\|f\|_{L^{2}(\mathbb{F}[t]_{Q})}=1}\left\|\mathop{\mathbb{E}}_{y\in\mathbb{F}[t]_{Q}}f(x+P(y))-\mathop{\mathbb{E}}_{z\in\mathbb{F}[t]_{Q}}f(z)\right\|_{L^{2}(\mathbb{F}[t]_{Q})}\leq C_{2}\cdot\operatorname{lpf}(Q)^{-\gamma};\]

(iii) _there exists_ \(C>0\) _such that if_ \(Q(t)\in\mathbb{F}[t]^{+}\) _and_ \(\operatorname{lpf}(Q)\geq C\)_, then_ \(H+Q\mathbb{F}[t]=\mathbb{F}[t]\)_, where_ \(H\leq(\mathbb{F}[t],+)\) _is the group generated by_ \(\{P(y)-P(0):y\in\mathbb{F}[t]\}\)_;_

(iv) _for any_ \(\delta>0\)_, there exists_ \(N>0\) _such that if_ \(Q(t)\in\mathbb{F}[t]^{+}\) _has_ \(\operatorname{lpf}(Q)\geq N\) _and_ \(A,B\subseteq\mathbb{F}[t]_{Q}\) _are subsets with_ \(|A||B|\geq\delta|Q|^{2}\)_, then there exist_ \(x,y\in\mathbb{F}[t]_{Q}\) _such that_ \(x\in A\) _and_ \(x+P(y)\in B\)_;_

(v) _there exist_ \(C_{1},C_{2},\gamma>0\) _such that for any_ \(Q(t)\in\mathbb{F}[t]^{+}\) _with_ \(\operatorname{lpf}(Q)\geq C_{1}\) _and any_ \(A,B\subseteq\mathbb{F}[t]_{Q}\)_, one has_ \[\bigg{|}\left|\left\{(x,y)\in\mathbb{F}[t]_{Q}^{2}:x\in A,x+P(y)\in B\right\}\right|-|A||B|\bigg{|}\leq C_{2}|A|^{1/2}|B|^{1/2}|Q|\cdot\operatorname{lpf}(Q)^{-\gamma}.\]

_Moreover, if \(P(y)\) is good for irrational equidistribution, then each of the properties (i)-(v) holds._

First we prove that irrational equidistribution implies condition (iii).

**Proposition 5.1**.: _Suppose \(P(y)\) is good for irrational equidistribution, and let \(H\leq(\mathbb{F}[t],+)\) be the group generated by \(\{P(y)-P(0):y\in\mathbb{F}[t]\}\). Then there exists \(C>0\) such that if \(Q(t)\in\mathbb{F}[t]^{+}\) satisfies \(\operatorname{lpf}(Q)\geq C\), then \(H+Q\mathbb{F}[t]=\mathbb{F}[t]\). That is, (iii) holds._

Proof.: We prove the contrapositive. Suppose (iii) fails. Then there is a sequence \((Q_{n})_{n\in\mathbb{N}}\) in \(\mathbb{F}[t]^{+}\) such that \(\operatorname{lpf}(Q_{n})\to\infty\) and \(H+Q_{n}\mathbb{F}[t]\neq\mathbb{F}[t]\) for \(n\in\mathbb{N}\). Equivalently, \[H^{\perp}\cap(Q_{n}\mathbb{F}[t])^{\perp}=(H+Q_{n}\mathbb{F}[t])^{\perp}\neq\{0\}.\] Since \((Q_{n}\mathbb{F}[t])^{\perp}\cong\widehat{\mathbb{F}[t]_{Q_{n}}}\), it follows that \(\frac{s_{n}}{Q_{n}}\in H^{\perp}\) for some \(s_{n}\not\equiv 0\pmod{Q_{n}}\). If \(\frac{s_{n}}{Q_{n}}=\frac{s_{n}^{\prime}}{Q_{n}^{\prime}}\) in reduced terms (i.e., \(s_{n}^{\prime}\) and \(Q_{n}^{\prime}\) are coprime), then \(\operatorname{lpf}(Q_{n}^{\prime})\geq\operatorname{lpf}(Q_{n})\), so we may assume without loss of generality that \(s_{n}\) and \(Q_{n}\) are coprime. Since \(\operatorname{lpf}(Q_{n})\to\infty\), the sequence \(\left(\frac{s_{n}}{Q_{n}}\right)_{n\in\mathbb{N}}\) contains infinitely many distinct elements, so \(H^{\perp}\) is infinite. Every infinite compact group is uncountable, and there are only countably many rational points, so \(H^{\perp}\) must contain an irrational element. That is, for some irrational \(\alpha\), \(e(P(y)\alpha)=e(P(0)\alpha)\) for every \(y\in\mathbb{F}[t]\). Hence, \(\left(P(n)\alpha\right)_{n\in\mathbb{F}[t]}\) is not well-distributed, so \(P(y)\) is not good for irrational equidistribution.

We will now prove the equivalences in Theorem 1.17 by establishing the chain of implications (ii) \(\Longrightarrow\) (i) \(\Longrightarrow\) (iii) \(\Longrightarrow\) (ii), together with (ii) \(\Longrightarrow\) (v) \(\Longrightarrow\) (iv) \(\Longrightarrow\) (iii). Condition (ii) is a quantitative refinement of condition (i), so we immediately have the implication (ii) \(\Longrightarrow\) (i). By Theorem 1.10, we have the additional implications (i) \(\Longrightarrow\) (iii) \(\Longrightarrow\) (ii).
Condition (v) follows from (ii) by a straightforward application of the Cauchy-Schwarz inequality.

**Proposition 5.2**.: _(v) \(\Longrightarrow\) (iv)._

Proof.: Let \(\delta>0\). Let \(C_{1}\), \(C_{2}\), and \(\gamma\) be as in (v). Let \(Q(t)\in\mathbb{F}[t]^{+}\) with \(\operatorname{lpf}(Q)\geq C_{1}\), and let \(A,B\subseteq\mathbb{F}[t]_{Q}\) with \(|A||B|\geq\delta|Q|^{2}\). Then by (v), \[\left|\left\{(x,y)\in\mathbb{F}[t]_{Q}^{2}:x\in A,x+P(y)\in B\right\}\right|\geq|A||B|-C_{2}|A|^{1/2}|B|^{1/2}|Q|\cdot\operatorname{lpf}(Q)^{-\gamma}=|A|^{1/2}|B|^{1/2}\left(|A|^{1/2}|B|^{1/2}-C_{2}|Q|\cdot\operatorname{lpf}(Q)^{-\gamma}\right)\geq\delta^{1/2}\left(\delta^{1/2}-C_{2}\cdot\operatorname{lpf}(Q)^{-\gamma}\right)|Q|^{2}.\] Thus, if \[\operatorname{lpf}(Q)>N(\delta)=\max\Bigg{\{}C_{1},\left(\frac{C_{2}}{\delta^{1/2}}\right)^{1/\gamma}\Bigg{\}},\] then we can find \(x,y\in\mathbb{F}[t]_{Q}\) with \(x\in A\) and \(x+P(y)\in B\).

We now prove the final implication to complete the proof of Theorem 1.17:

**Proposition 5.3**.: _(iv) \(\Longrightarrow\) (iii)._

Proof.: We prove the contrapositive. Suppose (iii) fails. Then there is a sequence \((Q_{n})_{n\in\mathbb{N}}\) in \(\mathbb{F}[t]^{+}\) with \(\operatorname{lpf}(Q_{n})\to\infty\) such that, for every \(n\in\mathbb{N}\), the set \(\{P(y)-P(0):y\in\mathbb{F}[t]_{Q_{n}}\}\) is contained in a proper subgroup \(H_{Q_{n}}\lneq\mathbb{F}[t]_{Q_{n}}\). The subgroup \(H_{Q_{n}}\) has index \(p^{k_{n}}\) for some \(k_{n}\in\mathbb{N}\). Let \(A_{n}\) be a union of \(\left\lfloor\frac{p^{k_{n}}}{2}\right\rfloor\) cosets of \(H_{Q_{n}}\), and let \(B_{n}=\mathbb{F}[t]_{Q_{n}}\setminus(A_{n}+P(0))\). For any \(x\in A_{n}\) and \(y\in\mathbb{F}[t]_{Q_{n}}\), we have \[x+P(y)=x+P(0)+(P(y)-P(0))\in A_{n}+P(0)+H_{Q_{n}}=A_{n}+P(0).\] That is, if \(x\in A_{n}\) and \(y\in\mathbb{F}[t]_{Q_{n}}\), then \(x+P(y)\notin B_{n}\). Moreover, \[\frac{|A_{n}|}{|Q_{n}|}=\frac{\left\lfloor\frac{p^{k_{n}}}{2}\right\rfloor}{p^{k_{n}}}\geq\frac{1}{3}\qquad\text{and}\qquad\frac{|B_{n}|}{|Q_{n}|}\geq\frac{1}{2}.\] Therefore, property (iv) fails for \(\delta=\frac{1}{6}\).

Now we proceed to prove the remaining equivalences in Theorem 1.18. As a first step, we have the following characterization of irrational equidistribution for additive polynomials:

**Proposition 5.4**.: _Let \(\eta(y)\in(\mathbb{F}[t])[y]\) be an additive polynomial. The following are equivalent:_

(i) \(\eta(y)\) _is good for irrational equidistribution;_

(ii) _there exists_ \(C>0\) _such that if_ \(Q(t)\in\mathbb{F}[t]^{+}\) _and_ \(\operatorname{lpf}(Q)\geq C\)_, then_ \(\eta(\mathbb{F}[t]_{Q})=\mathbb{F}[t]_{Q}\)_;_

(iii) \(\eta(y)=ay\) _for some_ \(a\in\mathbb{F}[t]\setminus\{0\}\)_._

Proof.: (i)\(\implies\)(ii). See Proposition 5.1.

(ii)\(\implies\)(iii). We prove the contrapositive. Suppose \(\eta(y)=\sum_{i=0}^{k}a_{i}y^{p^{i}}\) with \(a_{k}\neq 0\), \(k\geq 1\). We consider two cases separately.

Case 1: \(a_{0}=0\). In this case, we may write \(\eta(y)=\eta^{\prime}(y)^{p}\), where \(\eta^{\prime}(y)=\sum_{i=1}^{k}a_{i}y^{p^{i-1}}\). Hence, for any \(Q(t)\in\mathbb{F}[t]^{+}\), \(\eta(\mathbb{F}[t]_{Q})\subseteq\{y^{p}:y\in\mathbb{F}[t]_{Q}\}\). If \(Q=Q_{0}^{2}\), then \(Q_{0}\not\equiv 0\pmod{Q}\), while \(Q_{0}^{p}\equiv 0\pmod{Q}\). Therefore, the homomorphism \(y\mapsto y^{p}\) has a nontrivial kernel mod \(Q\), so \(\{y^{p}:y\in\mathbb{F}[t]_{Q}\}\) is a proper subgroup of \(\mathbb{F}[t]_{Q}\). Taking \(Q_{0}\) to be an arbitrarily large irreducible element of \(\mathbb{F}[t]^{+}\), this shows that (ii) does not hold.

Case 2: \(a_{0}\neq 0\).
Write \(\eta(y)=yg(y)\) with \(g(y)=a_{0}+\sum_{i=1}^{k}a_{i}y^{p^{i}-1}\). Since \(g(y)\) is a nonconstant polynomial, the set \[R=\left\{Q(t)\in\mathbb{F}[t]^{+}:Q\text{ is irreducible and }g\text{ has a root mod }Q\right\}\] is infinite. Indeed, for any finite collection of irreducibles \(Q_{1},\dots,Q_{r}\in\mathbb{F}[t]^{+}\), consider \[g\left(a_{0}Q_{1}\dots Q_{r}y\right)=a_{0}\left(1+Q_{1}\dots Q_{r}y\sum_{i=1}^{k}a_{i}\left(a_{0}Q_{1}\dots Q_{r}y\right)^{p^{i}-2}\right). \tag{5.1}\] Since \(g\) is a nonzero polynomial, there exists \(y_{0}\in\mathbb{F}[t]\) such that \(g\left(a_{0}Q_{1}\dots Q_{r}y_{0}\right)\neq 0\). From the expression on the right-hand side of (5.1), we have \(a_{0}\mid g\left(a_{0}Q_{1}\dots Q_{r}y_{0}\right)\), and \(\frac{g\left(a_{0}Q_{1}\dots Q_{r}y_{0}\right)}{a_{0}}\equiv 1\pmod{Q_{i}}\) for each \(i\in\{1,\dots,r\}\). Therefore, there is some irreducible \(Q\in\mathbb{F}[t]^{+}\setminus\{Q_{1},\dots,Q_{r}\}\) such that \(Q\mid g\left(a_{0}Q_{1}\dots Q_{r}y_{0}\right)\). Hence, \(R\) is infinite as claimed.

Suppose \(Q\in R\) and \(Q\nmid a_{0}\). Let \(y\in\mathbb{F}[t]_{Q}\) be such that \(g(y)=0\). Then \(\eta(y)=yg(y)=0\), but \(y\neq 0\), since \(g(0)=a_{0}\not\equiv 0\pmod{Q}\). Hence, \(\eta\) has a nontrivial kernel mod \(Q\) for infinitely many irreducibles \(Q\), which contradicts condition (ii).

(iii)\(\implies\)(i). See [1, Theorem 0.1].

The following lemma is the key tool to reduce equidistributional properties of polynomials to the additive case with which we have just dealt.

**Lemma 5.5**.: _Let \(\eta_{1},\eta_{2}\in\mathbb{F}_{p}[y]\) be additive polynomials, and let \(H_{i}=\eta_{i}(\mathbb{F}[t])\) for \(i=1,2\). There exists an additive polynomial \(\eta\in\mathbb{F}_{p}[y]\) such that \(\eta(\mathbb{F}[t])=H_{1}+H_{2}\). Moreover, \(\eta=\eta_{1}\circ\zeta_{1}+\eta_{2}\circ\zeta_{2}\), where \(\zeta_{1},\zeta_{2}\in\mathbb{F}_{p}[y]\) are additive polynomials._

Proof.: If \(\eta_{i}=0\) for some \(i\), then take \(\eta=\eta_{j}\) with \(j\neq i\). Suppose now that \(\eta_{1}\) and \(\eta_{2}\) are both nonzero. Let \(H=H_{1}+H_{2}\). Write \(\eta_{1}(y)=\sum_{i=0}^{k}a_{i}y^{p^{i}}\) and \(\eta_{2}(y)=\sum_{j=0}^{l}b_{j}y^{p^{j}}\). Without loss of generality, \(k\geq l\). Define \[\eta_{1}^{\prime}(y)=b_{l}\eta_{1}(y)-a_{k}\eta_{2}\left(y^{p^{k-l}}\right)=\eta_{1}(b_{l}y)+\eta_{2}\left(-a_{k}y^{p^{k-l}}\right), \tag{5.2}\] and let \(H_{1}^{\prime}=\eta_{1}^{\prime}(\mathbb{F}[t])\). Then \(\deg\eta_{1}^{\prime}<\deg\eta_{1}\).

Claim: \(H_{1}^{\prime}+H_{2}=H_{1}+H_{2}\). For any \(y\in\mathbb{F}[t]\), (5.2) expresses \(\eta_{1}^{\prime}(y)\) as a sum of an element of \(H_{1}\) and an element of \(H_{2}\). Hence, \(H_{1}^{\prime}\subseteq H_{1}+H_{2}\). Rearranging (5.2), we have \[\eta_{1}(y)=b_{l}^{-1}\eta_{1}^{\prime}(y)+b_{l}^{-1}a_{k}\eta_{2}\left(y^{p^{k-l}}\right).\] Thus, \(H_{1}\subseteq H_{1}^{\prime}+H_{2}\). This proves the claim.

We have shown that, given any nonzero additive polynomials \(\eta_{1},\eta_{2}\in\mathbb{F}_{p}[y]\), we may find \(\eta_{1}^{\prime},\eta_{2}^{\prime}\in\mathbb{F}_{p}[y]\) with \(\eta_{1}^{\prime}(\mathbb{F}[t])+\eta_{2}^{\prime}(\mathbb{F}[t])=\eta_{1}(\mathbb{F}[t])+\eta_{2}(\mathbb{F}[t])\) such that \(\deg\eta_{1}^{\prime}+\deg\eta_{2}^{\prime}<\deg\eta_{1}+\deg\eta_{2}\), and \(\eta_{1}^{\prime}\) and \(\eta_{2}^{\prime}\) are of the appropriate form. Repeating this process finitely many times, we eventually reduce to the situation in which one of the additive polynomials is zero.
We then take \(\eta\) to be the remaining nonzero polynomial.

The argument in the proof of Lemma 5.5 provides an algorithm for obtaining \(\eta\) that bears a strong resemblance to the Euclidean algorithm. We work through a few simple examples to see more concretely how the algorithm works; a short computational sketch of the same procedure appears at the end of the article.

**Example 5.6**.: (1) \(\eta_{1}(y)=y^{p^{2}}-y\), \(\eta_{2}(y)=y^{p^{3}}+y^{p}\). The polynomial \(\eta_{2}\) has larger degree, so we shift the exponents of \(\eta_{1}\) to match the degree of \(\eta_{2}\) and subtract: \[\eta_{2}^{\prime}(y)=\eta_{2}(y)-\eta_{1}(y^{p})=2y^{p}.\] If \(p=2\), then \(\eta_{2}^{\prime}(y)=0\), so we stop, and the resulting polynomial \(\eta\) is simply \(\eta_{1}\). (Note that when \(p=2\), \(\eta_{1}\) may be rewritten as \(\eta_{1}(y)=y^{p^{2}}+y\), and then it is clear that \(\eta_{2}(y)=\eta_{1}(y^{p})\), so the range of \(\eta_{2}\) is manifestly a subset of the range of \(\eta_{1}\).) Suppose \(p>2\). Then \(\deg\eta_{1}>\deg\eta_{2}^{\prime}\), so we shift the exponents of \(\eta_{2}^{\prime}\) and subtract: \[\eta_{1}^{\prime}(y)=2\eta_{1}(y)-\eta_{2}^{\prime}(y^{p})=-2y.\] Since \(p>2\), the element \(-2\in\mathbb{F}_{p}\) is invertible, so the image of \(\eta_{1}^{\prime}\) is all of \(\mathbb{F}[t]\), and we are done: \(\eta(y)=\eta_{1}^{\prime}(y)=-2y\). (One can check that applying one more step of the algorithm would result in \(\eta_{2}^{\prime\prime}=0\), indicating that the process has terminated.)

(2) \(\eta_{1}(y)=y^{p^{3}}+y^{p^{2}}+y^{p}\), \(\eta_{2}(y)=y^{p^{2}}\). First, shifting \(\eta_{2}\) and subtracting, we have \[\eta_{1}^{\prime}(y)=\eta_{1}(y)-\eta_{2}(y^{p})=y^{p^{2}}+y^{p}.\] Next, subtracting \(\eta_{2}\) without any shifting gives \[\eta_{1}^{\prime\prime}(y)=\eta_{1}^{\prime}(y)-\eta_{2}(y)=y^{p}.\] Shifting \(\eta_{1}^{\prime\prime}\) and subtracting from \(\eta_{2}\) produces \(\eta_{2}^{\prime}=0\), so we are done and \(\eta(y)=\eta_{1}^{\prime\prime}(y)=y^{p}\).

The following proposition completes the proof of Theorem 1.18:

**Proposition 5.7**.: _Let \(P(y)\in\mathbb{F}_{p}[y]\) and write \(P(y)=a_{0}+\sum_{i=1}^{n}\eta_{i}(y^{r_{i}})\) with \(\eta_{1},\ldots,\eta_{n}\in\mathbb{F}_{p}[y]\) additive polynomials and \(r_{1},\ldots,r_{n}\in\mathbb{N}\) distinct positive integers not divisible by \(p\). The following are equivalent:_

(i) \(P(y)\) _is good for irrational equidistribution;_

(ii) _there exists_ \(C>0\) _such that if_ \(Q(t)\in\mathbb{F}[t]^{+}\) _and_ \(\operatorname{lpf}(Q)\geq C\)_, then_ \(H+Q\mathbb{F}[t]=\mathbb{F}[t]\)_, where_ \(H\leq(\mathbb{F}[t],+)\) _is the group generated by_ \(\{P(y)-P(0):y\in\mathbb{F}[t]\}\)_;_

(iii) _there exist additive polynomials_ \(\zeta_{1},\ldots,\zeta_{n}\in\mathbb{F}_{p}[y]\) _and_ \(a\in\mathbb{F}_{p}^{\times}\) _such that_ \[\sum_{i=1}^{n}\left(\eta_{i}\circ\zeta_{i}\right)(y)=ay.\]

Proof.: (i) \(\Longrightarrow\) (ii). See Theorem 1.17.

(ii) \(\Longrightarrow\) (iii). For \(Q(t)\in\mathbb{F}[t]^{+}\) and \(i\in\{1,\ldots,n\}\), let \(H_{i,Q}=\eta_{i}(\mathbb{F}[t]_{Q})\) and \(H_{Q}=\sum_{i=1}^{n}H_{i,Q}\). By (ii), we have that \(H_{Q}=\mathbb{F}[t]_{Q}\) whenever \(\operatorname{lpf}(Q)\geq C\). By Lemma 5.5 (applied repeatedly), there is an additive polynomial of the form \[\eta=\sum_{i=1}^{n}\eta_{i}\circ\zeta_{i}\] such that \(H_{Q}=\eta(\mathbb{F}[t]_{Q})\) for every \(Q(t)\in\mathbb{F}[t]^{+}\). In particular, \(\eta(\mathbb{F}[t]_{Q})=\mathbb{F}[t]_{Q}\) for all \(Q\) with \(\operatorname{lpf}(Q)\geq C\). Hence, by Proposition 5.4, \(\eta(y)=ay\), and \(a\in\mathbb{F}_{p}^{\times}\) since \(\eta\in\mathbb{F}_{p}[y]\). That is, (iii) holds.

(iii) \(\Longrightarrow\) (i).
Let \(\alpha\in\mathbb{F}((t^{-1}))\setminus\mathbb{F}(t)\), and let \[\mathcal{F}_{i}=\overline{\eta_{i}(\mathbb{F}[t])\alpha}.\] By Theorem 1.9, \((P(y)\alpha)_{y\in\mathbb{F}[t]}\) is well-distributed if and only if \(\mathcal{F}=\sum_{i=1}^{n}\mathcal{F}_{i}\) is the full "torus" \(\mathbb{F}((t^{-1}))/\mathbb{F}[t]\). By (iii), \[\mathcal{F}\ni\sum_{i=1}^{n}\eta_{i}\left(\zeta_{i}(y)\right)\alpha=ay\alpha\] for every \(y\in\mathbb{F}[t]\). But \((ay\alpha)_{y\in\mathbb{F}[t]}\) is well-distributed (in particular, it is dense) mod \(\mathbb{F}[t]\) (see [1, Theorem 0.1]), so \(\mathcal{F}=\mathbb{F}((t^{-1}))/\mathbb{F}[t]\). Thus, \(P(y)\) is good for irrational equidistribution.

## Acknowledgements

The first author is supported by the National Science Foundation under Grant No. DMS-1926686.
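As promised after Lemma 5.5, here is a short computational sketch of the Euclidean-style reduction from its proof, complementing Example 5.6. This is our own illustration: the coefficient-list representation (an additive polynomial \(\sum_{i}a_{i}y^{p^{i}}\) is stored as \([a_{0},a_{1},\ldots]\)), the helper names, and the choice \(p=5\) are all ours.

```python
p = 5                        # any prime; 5 is our arbitrary demo choice

def top(f):                  # index of the leading coefficient of sum_i f[i]*y^(p^i)
    return max(i for i, c in enumerate(f) if c)

def step(f, g):              # one reduction as in (5.2); assumes top(f) >= top(g)
    k, l = top(f), top(g)
    shift = [0] * (k - l) + list(g)        # coefficient list of g(y^(p^(k-l)))
    m = max(len(f), len(shift))
    f = list(f) + [0] * (m - len(f))
    shift = shift + [0] * (m - len(shift))
    a, b = f[k], shift[k]                  # the two leading coefficients
    return [(b * x - a * y) % p for x, y in zip(f, shift)]

def euclid(f, g):
    """Additive eta with eta(F[t]) = f(F[t]) + g(F[t]), via the Lemma 5.5 reduction."""
    f, g = [c % p for c in f], [c % p for c in g]
    while any(f) and any(g):
        if top(f) < top(g):
            f, g = g, f                    # keep the larger-degree polynomial in f
        f = step(f, g)                     # strictly lowers the degree of f
    res = f if any(f) else g
    while len(res) > 1 and res[-1] == 0:   # trim trailing zero padding
        res = res[:-1]
    return res

# Example 5.6(1): eta1 = y^(p^2) - y -> [p-1, 0, 1]; eta2 = y^(p^3) + y^p -> [0, 1, 0, 1]
print(euclid([p - 1, 0, 1], [0, 1, 0, 1]))
```

For \(p=5\) this prints `[3]`, i.e. \(\eta(y)=3y=-2y\), matching the hand computation in Example 5.6(1); rerunning with the inputs of Example 5.6(2) returns the coefficient list of \(y^{p}\).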
2309.13641
Transformations on hypergraph families
We present a new general theory of function-based hypergraph transformations on finite families of finite hypergraphs. A function-based hypergraph transformation formalises the action of structurally modifying hypergraphs from a family in a consistent manner. The mathematical form of the transformations facilitates their analysis and incorporation into larger mathematical structures, and concurs with the function-based nature of modelling in the physical world. Since quotients of hypergraphs afford their simplification and comparison, we also discuss the notion of a quotient hypergraph transformation induced by an equivalence relation on the vertex set of a hypergraph family. Finally, we demonstrate function-based hypergraph transformations with two fundamental classes of examples involving the addition or deletion of hyperedges or hypergraphs.
Sean Trinity Vittadello
2023-09-24T14:02:07Z
http://arxiv.org/abs/2309.13641v1
# Transformations on hypergraph families

###### Abstract

We present a new general theory of function-based hypergraph transformations on finite families of finite hypergraphs. A function-based hypergraph transformation formalises the action of structurally modifying hypergraphs from a family in a consistent manner. The mathematical form of the transformations facilitates their analysis and incorporation into larger mathematical structures, and concurs with the function-based nature of modelling in the physical world. Since quotients of hypergraphs afford their simplification and comparison, we also discuss the notion of a quotient hypergraph transformation induced by an equivalence relation on the vertex set of a hypergraph family. Finally, we demonstrate function-based hypergraph transformations with two fundamental classes of examples involving the addition or deletion of hyperedges or hypergraphs.

**Address**: School of Mathematics and Statistics & School of BioSciences, The University of Melbourne, Parkville, Victoria 3010, Australia

**E-mail address**: [email protected]

**Key words and phrases**: Hypergraph transformation, hypergraph family, function-based transformation

## 1 Introduction

A hypergraph transformation acts on a given hypergraph to yield a new hypergraph with specific modifications, such as the addition or deletion of a hyperedge or the replacement of a subhypergraph by another subhypergraph. Hypergraph transformations therefore provide a formal description of the structural relationship between two hypergraphs, more general than the transformations induced by hypergraph homomorphisms, and find broad application in both pure and applied mathematics including spectral graph theory [1], graph similarity [2], the maximum stable set problem [3], network analysis [4, 5, 6], engineering design [7], and theoretical computer science [8, 9, 10]. The form in which a hypergraph transformation is expressed depends on the application, and may be descriptive, rule based, or function based. Rule-based hypergraph transformations, which modify hypergraphs algorithmically, have received the most attention because of their utility in theoretical computer science [8].

Hypergraphs are increasingly finding application in many areas of natural science as models of complex systems with higher-order interactions [11, 12], where hypergraph transformations can represent dynamic system behaviour [13, 14, 15]. Function-based representations of physical phenomena are fundamental in natural science, so expressing hypergraph transformations as functions aligns with the general mathematical formalism employed in mathematical modelling. Further, function-based hypergraph transformations can be readily incorporated into larger mathematical objects with additional structure, allowing for the development of more detailed mathematical models and the application of new techniques for mathematical analysis: for example, a partially ordered set of hypergraph transformations may represent a hierarchical system of dynamic processes. While some instances of function-based hypergraph transformations exist [2], no general theory for function-based hypergraph transformations that is suitable within the context of natural science has been described in the literature.

In this article we introduce and develop a new approach to function-based hypergraph transformations, where each transformation is defined on a finite family of finite hypergraphs and acts consistently on all hypergraphs in its domain.
Given the importance of the concept of a quotient hypergraph, which can assist with the simplification and comparison of hypergraphs, we also consider the notion of a quotient hypergraph transformation. We illustrate the general theory with two fundamental classes of examples involving the addition or deletion of hyperedges and the addition or deletion of hypergraphs.

## 2 Preliminaries

In this section we discuss preliminary notation, definitions, and results. We begin with a review of hypergraphs, their substructures, and connectivity.

_Notation 2.1_ (**Sets**).: Denote the set of positive integers by \(\mathbb{N}\), the set of nonnegative integers by \(\mathbb{N}_{0}\), and \([n]:=\{\,m\leq n\mid m\in\mathbb{N}\,\}\) for \(n\in\mathbb{N}_{0}\). Given a set \(S\) we denote by \(\mathcal{O}(S)\) the power set of \(S\).

_Definition 2.2_ (**Hypergraphs**).: A _hypergraph_ is a 2-tuple \(X:=\big{(}V(X),E(X)\big{)}\) of finite sets, where \(V(X)\) is the _vertex set_ and \(E(X)\subseteq\mathcal{O}\big{(}V(X)\big{)}\) is the _hyperedge set_. We exclude the empty hyperedge, hence \(|e|\geq 1\) for all \(e\in E(X)\), and allow multiple hyperedges. A _loop_ is a hyperedge that is a singleton set, so _allowing loops_ (resp. _disallowing loops_) assumes \(|e|\geq 1\) (resp. \(|e|\geq 2\)) for all \(e\in E(X)\). If \(V(X)=\emptyset\), so that \(E(X)=\emptyset\), then \(X\) is called the _null hypergraph_ and is denoted by \(\mathcal{N}\).

Given a hypergraph \(X\), a _vertex labelling_ (resp. _hyperedge labelling_) of \(X\) is an injective function \(\ell_{V}\colon V(X)\to L_{V}\) (resp. \(\ell_{E}\colon E(X)\to L_{E}\)), where \(L_{V}\) and \(L_{E}\) are nonempty disjoint sets of labels. A vertex-labelled hypergraph therefore has all vertices uniquely labelled, and a hyperedge-labelled hypergraph has all hyperedges, including multiple hyperedges, uniquely labelled. We henceforth fix nonempty disjoint sets of labels \(L_{V}\) and \(L_{E}\), and denote by \(\mathscr{X}\) the universe of all vertex- and hyperedge-labelled hypergraphs over \(L_{V}\) and \(L_{E}\). We refer to \(\mathscr{X}\) as a _hypergraph family_. Since we only consider vertex- and hyperedge-labelled hypergraphs in this work we refer to them simply as hypergraphs without mentioning the labelling functions explicitly. For any subset \(\mathcal{X}\subseteq\mathscr{X}\), denote \(V(\mathcal{X}):=\{\,v\in V(X)\mid X\in\mathcal{X}\,\}\), \(E(\mathcal{X}):=\{\,e\in E(X)\mid X\in\mathcal{X}\,\}\), and \(\mathcal{X}^{*}:=\mathcal{X}\backslash\{\mathcal{N}\}\). If \(\mathcal{X}\) is a singleton set, say \(\mathcal{X}=\{X\}\), then we may denote \(\mathcal{X}\) simply by \(X\). While we consider hypergraphs without directed hyperedges, they could readily be included.

We now consider substructures and connectivity in hypergraphs [16].

_Definition 2.3_ (**Strong subhypergraphs**).: Let \(X\), \(Y\in\mathscr{X}\). Then \(X\) is a _strong subhypergraph_ of \(Y\) if \(V(X)\subseteq V(Y)\) and \(E(X)\subseteq E(Y)\). In this case we say that \(Y\) contains \(X\), and if \(X\neq Y\) then the containment is _proper_. The strong subhypergraph \(X\) of \(Y\) is _induced_ by \(V(X)\) if \(E(X)=\big{\{}\,e\in E(Y)\mid e\subseteq V(X)\,\big{\}}\). Note that the null hypergraph \(\mathcal{N}\) is a strong subhypergraph of every hypergraph. If \(\mathcal{F}\) is a family of strong subhypergraphs of \(Y\) then \(X\in\mathcal{F}\) is _maximal_ in \(\mathcal{F}\) if no hypergraph in \(\mathcal{F}\) properly contains \(X\).

_Definition 2.4_ (**Connectivity**).: Let \(X\in\mathscr{X}\).
A _walk_ in \(X\) is a nonempty alternating sequence \((v_{0},e_{1},v_{1},\ldots,e_{m},v_{m})\) of vertices and hyperedges in \(X\) such that: (1) \(v_{i}\in V(X)\) for all \(i\in[m]\cup\{0\}\); (2) \(e_{i}\in E(X)\) for all \(i\in[m]\); (3) \(v_{i}\), \(v_{i+1}\in e_{i+1}\) for all \(0\leq i\leq m-1\). If the \(m+1\) vertices are pairwise distinct and the \(m\) hyperedges are pairwise distinct then the walk is a _path_. Note that a trivial walk \((v_{0})\) is a path. Two vertices \(v\), \(w\in V(X)\) are _connected_ in \(X\) if there exists a path in \(X\) from \(v\) to \(w\). The hypergraph \(X\) is _connected_ if it is nonnull and if every pair of vertices in \(X\) is connected, otherwise \(X\) is _disconnected_. A _connected component_, or simply _component_, of \(X\) is a connected strong subhypergraph of \(X\) that is maximal in the family of connected strong subhypergraphs of \(X\). Note that the null hypergraph is disconnected and has no components. We denote by \(\mathcal{C}(X)\) the set of connected components of \(X\). Further, for \(\mathcal{X}\subseteq\mathscr{X}\) we denote \(\mathcal{C}(\mathcal{X}):=\bigcup_{X\in\mathcal{X}}\mathcal{C}(X)\). _Definition 2.5_ (**Disjointness, union**).: Let \(X\), \(Y\in\mathscr{X}\). Then \(X\) and \(Y\) are _vertex disjoint_, or simply _disjoint_, if \(V(X)\cap V(Y)=\emptyset\) (which implies \(E(X)\cap E(Y)=\emptyset\)), and are _component disjoint_ if \(\mathcal{C}(X)\cap\mathcal{C}(Y)=\emptyset\). Note that vertex disjointness implies component disjointness. Further, the _hypergraph union_ of \(X\) and \(Y\) is the hypergraph \(X\cup Y\) with \(V(X\cup Y)=V(X)\cup V(Y)\) and \(E(X\cup Y)=E(X)\cup E(Y)\). We define a direct sum of hypergraphs, which corresponds to the similar notion for graphs [17], and provides a convenient decomposition of hypergraphs. _Definition 2.6_ (**Direct sum of hypergraphs**).: Let \(\{X_{i}\}_{i\in[m]}\subseteq\mathscr{X}\) be a subset of pairwise disjoint hypergraphs with \(m\in\mathbb{N}_{0}\). The _direct sum_ of the hypergraphs \(\{X_{i}\}_{i\in[m]}\), denoted by \(X_{1}\oplus\cdots\oplus X_{m}\), \(\bigoplus_{i\in[m]}X_{i}\), or \(\bigoplus\{X_{i}\}_{i\in[m]}\), is the hypergraph with \(V(X_{1}\oplus\cdots\oplus X_{m}):=\bigcup_{i\in[m]}V(X_{i})\) and \(E(X_{1}\oplus\cdots\oplus X_{m}):=\bigcup_{i\in[m]}E(X_{i})\). Note the following: the direct sum of hypergraphs is independent of the order of the hypergraph summands; \(X\in\mathscr{X}\) has the direct sum decomposition \(X=\bigoplus_{C\in\mathcal{C}(X)}C=\bigoplus\mathcal{C}(X)\); if \(m=0\) then the set of pairwise disjoint hypergraphs is empty, so the direct sum has no summands, hence the direct sum is the null hypergraph \(\mathcal{N}\). We also define a notion of the _direct difference_ of hypergraphs. _Definition 2.7_ (**Direct difference of hypergraphs**).: Let \(X\), \(Y\), \(Z\in\mathscr{X}\) where \(Y\) and \(Z\) are disjoint and \(X=Y\oplus Z\). The _direct difference_ of the hypergraphs \(X\) and \(Z\), denoted by \(X\ominus Z\), is \(Y=X\ominus Z\). Since a direct sum of hypergraphs is independent of the order of the hypergraph summands we also have \(Z=X\ominus Y\). **Proposition 2.8**.: _Let \(X\), \(Y\), \(Z\in\mathscr{X}\)._ 1. _If_ \(Y\) _and_ \(Z\) _are disjoint and_ \(X=Y\oplus Z\) _then_ \(X=(X\ominus Z)\oplus Z\) _and_ \(Y=(Y\oplus Z)\ominus Z\)_._ 2. 
**Proposition 2.8**.: _Let \(X\), \(Y\), \(Z\in\mathscr{X}\)._

1. _If \(Y\) and \(Z\) are disjoint and \(X=Y\oplus Z\) then \(X=(X\ominus Z)\oplus Z\) and \(Y=(Y\oplus Z)\ominus Z\)._
2. _If \(Y\) and \(Z\) are disjoint, \(\mathcal{C}(Y)\subseteq\mathcal{C}(X)\), and \(\mathcal{C}(Z)\subseteq\mathcal{C}(X)\) then \((X\ominus Y)\ominus Z=(X\ominus Z)\ominus Y\)._
3. _If \(Y\) is disjoint with both \(X\) and \(Z\), and \(\mathcal{C}(Z)\subseteq\mathcal{C}(X)\) then \((X\oplus Y)\ominus Z=(X\ominus Z)\oplus Y\)._

Proof.: (1) Since \(Y=X\ominus Z\) it follows that \(X=Y\oplus Z=(X\ominus Z)\oplus Z\) and \(Y=X\ominus Z=(Y\oplus Z)\ominus Z\).

(2) Let \(W:=\bigcup\mathcal{C}(X)\backslash\mathcal{C}(Y\oplus Z)\) so that \(X=W\oplus(Y\oplus Z)=W\oplus(Z\oplus Y)\), then \((X\ominus Z)\ominus Y=W=(X\ominus Y)\ominus Z\).

(3) Let \(W:=\bigcup\mathcal{C}(X\oplus Y)\backslash\mathcal{C}(Z)\), so that \(X\oplus Y=W\oplus Z\), and then \(W=(X\oplus Y)\ominus Z\). Now, \(X=(W\oplus Z)\ominus Y\), hence \(X\ominus Z=\big{(}(W\oplus Z)\ominus Y\big{)}\ominus Z=\big{(}(W\oplus Z)\ominus Z\big{)}\ominus Y=W\ominus Y\), where the second equality follows from Part (2) of this proposition and the third equality follows from Part (1) of this proposition, therefore \(W=(X\ominus Z)\oplus Y\). We conclude that \((X\oplus Y)\ominus Z=(X\ominus Z)\oplus Y\).

Our definition of a quotient hypergraph is standard.

_Definition 2.9_ (**Quotient hypergraph**).: Suppose \(X\in\mathscr{X}\) and \(R\) is an equivalence relation on \(V(X)\). The _quotient hypergraph_ of \(X\) under \(R\), denoted \(X/R\), is the hypergraph where:

1. \(V(X/R)=\left\{\,[v]_{R}\mid v\in V(X)\,\right\}\), where \([\cdot]_{R}\) denotes the equivalence class under \(R\).
2. \(E(X/R)\subseteq\mathcal{O}\big{(}V(X/R)\big{)}\), where \(\left\{[v_{i}]_{R}\right\}_{i=1}^{n}\in E(X/R)\) for \(n\in\mathbb{N}\) (resp. \(n\geq 2\) if we disallow loops) if and only if there exists \(e\in E(X)\) such that (1) \(e\cap[v_{i}]_{R}\neq\emptyset\) for all \(1\leq i\leq n\), and (2) \(e\subseteq\bigcup_{i=1}^{n}[v_{i}]_{R}\).

The map \(\theta\colon V(X)\to V(X/R)\) such that \(\theta(v)=[v]_{R}\) for \(v\in V(X)\) is the _projection_.

We will have occasion to construct hypergraphs that are related to quotient hypergraphs, however with vertices consisting of equivalence classes in more general underlying vertex sets. For this we introduce the notions of a _vertex-augmented hypergraph_ and a _vertex-augmented quotient hypergraph_.

_Definition 2.10_ (**Vertex-augmented hypergraph and quotient hypergraph**).: Suppose \(F\subseteq V(\mathscr{X})\) and \(X\in\mathscr{X}\) with \(V(X)\subseteq F\). The hypergraph \(Z:=\big{(}F,E(X)\big{)}\) is the _vertex-augmented hypergraph_ of \(X\) with respect to \(F\). If \(R_{F}\) is an equivalence relation on \(F\), with corresponding equivalence classes denoted \([\cdot]_{R_{F}}\), then the _vertex-augmented quotient hypergraph_ of \(X\) under \(R_{F}\), denoted \(X//R_{F}\), is the hypergraph where \(V(X//R_{F}):=\left\{\,[v]_{R_{F}}\mid v\in V(X)\,\right\}\) and \(E(X//R_{F}):=E(Z/R_{F})\). The map \(\theta\colon V(X)\to V(X//R_{F})\) such that \(\theta(v)=[v]_{R_{F}}\) for \(v\in V(X)\) is the _projection_.
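As a sanity check on Definition 2.9, a set of classes \(\{[v_{i}]_{R}\}_{i=1}^{n}\) satisfies conditions (1) and (2) for a witness \(e\in E(X)\) precisely when it equals the image \(\{\,[v]_{R}\mid v\in e\,\}\): condition (2) forces every class of a vertex of \(e\) to appear, and condition (1) forces every listed class to be such a class. The hyperedges of \(X/R\) are therefore exactly the images of the hyperedges of \(X\), which is what the sketch below (ours, with the relation supplied as a class map and the hyperedge labels of \(X\) retained) computes; dropping singleton images would give the loop-free variant.

```python
# A sketch (ours) of the quotient hypergraph X/R of Definition 2.9. The
# equivalence relation is supplied as cls: v -> a canonical representative,
# and each hyperedge of X/R is the image of a hyperedge of X under cls.

def quotient(V: set, E: dict, cls):
    VQ = {cls(v) for v in V}                      # image of the projection
    EQ = {lab: frozenset(cls(v) for v in e) for lab, e in E.items()}
    return VQ, EQ

V = {0, 1, 2, 3}
E = {"e1": frozenset({0, 1}), "e2": frozenset({1, 2, 3})}
VQ, EQ = quotient(V, E, cls=lambda v: v % 2)      # identify vertices of equal parity
assert VQ == {0, 1}
assert EQ == {"e1": frozenset({0, 1}), "e2": frozenset({0, 1})}
```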
The following proposition establishes a canonical isomorphism between a given vertex-augmented quotient hypergraph and the corresponding quotient hypergraph under the restricted equivalence relation. We formalise this result to show the explicit relationship between the two forms of quotient hypergraphs.

**Proposition 2.11**.: _Suppose \(F\subseteq V(\mathscr{X})\), \(X\in\mathscr{X}\) with \(V(X)\subseteq F\), \(R_{F}\) is an equivalence relation on \(F\), and \(R\) is the restriction of \(R_{F}\) to \(V(X)\). Then the vertex map \(\phi\colon V(X/R)\to V(X//R_{F})\) given by \(\phi\big{(}[v]_{R}\big{)}=[v]_{R_{F}}\) for \([v]_{R}\in V(X/R)\) is an isomorphism from the quotient hypergraph \(X/R\) onto the vertex-augmented quotient hypergraph \(X//R_{F}\), where the inverse map \(\phi^{-1}\) satisfies \(\phi^{-1}\big{(}[w]_{R_{F}}\big{)}=[v]_{R}\) for \([w]_{R_{F}}\in V(X//R_{F})\) and any representative \(v\in[w]_{R_{F}}\cap V(X)\)._

Proof.: \(\phi\) is well defined and injective since \([v]_{R}=[w]_{R}\) if and only if \([v]_{R_{F}}=[w]_{R_{F}}\), for \(v\), \(w\in V(X)\), and it follows from the definition of \(V(X//R_{F})\) that \(\phi\) is surjective. If \(f\subseteq V(X/R)\) then \(f\in E\big{(}X/R\big{)}\) if and only if \(\phi(f)\in E(X//R_{F})\), so it follows that \(\phi\) is an isomorphism. The inverse map \(\phi^{-1}\) is well defined since for any two representative elements \(u\), \(v\in[w]_{R_{F}}\cap V(X)\) we have \([u]_{R}=[v]_{R}\).

_Notation 2.12_.: Let \(R_{V(\mathscr{X})}\) be an equivalence relation on \(V(\mathscr{X})\). For \(\mathcal{X}\subseteq\mathscr{X}\) we denote by \(\mathcal{X}//R_{V(\mathscr{X})}:=\{\,X//R_{V(\mathscr{X})}\mid X\in\mathcal{X}\,\}\) the corresponding collection of vertex-augmented quotient hypergraphs under \(R_{V(\mathscr{X})}\).
## 3 Hypergraph transformations

### Definition and basic properties

Our definition of a hypergraph transformation requires the following notion of maximality.

_Definition 3.1_ (**Component-maximal set of hypergraphs**).: Suppose \(\mathcal{S}\subseteq\mathcal{X}\subseteq\mathscr{X}\). We say that \(\mathcal{S}\) is _component maximal_ in \(\mathcal{X}\) if for each \(X\in\mathcal{X}\) there exists a subset \(\mathcal{D}_{X}\subseteq\mathcal{S}\), called an _\(\mathcal{S}\)-maximal subset_, such that:

1. \(\mathcal{D}_{X}\) is pairwise component disjoint.
2. If \(\mathcal{N}\in\mathcal{S}\) then \(\mathcal{N}\in\mathcal{D}_{X}\).
3. \(\mathcal{C}(T)\subseteq\mathcal{C}(X)\) for all \(T\in\mathcal{D}_{X}\).
4. If \(S\in\mathcal{S}\) and \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\) then there exists \(T\in\mathcal{D}_{X}\) such that \(\mathcal{C}(S)\subseteq\mathcal{C}(T)\).

**Proposition 3.2**.: _Suppose \(\mathcal{S}\subseteq\mathcal{X}\subseteq\mathscr{X}\) where \(\mathcal{S}\) is component maximal in \(\mathcal{X}\) with \(\mathcal{S}\)-maximal subsets \(\{\mathcal{D}_{X}\}_{X\in\mathcal{X}}\). Then:_

1. _\(\mathcal{D}_{X}\) is unique for all \(X\in\mathcal{X}\)._
2. _If \(\mathcal{N}\notin\mathcal{S}\) then \(\mathcal{D}_{S}=\{S\}\) for all \(S\in\mathcal{S}\)._
3. _If \(\mathcal{N}\in\mathcal{S}\) then \(\mathcal{D}_{S}=\{\mathcal{N},S\}\) for all \(S\in\mathcal{S}\), in particular \(\mathcal{D}_{\mathcal{N}}=\{\mathcal{N}\}\)._
4. _If \(\mathcal{N}\in\mathcal{S}\) and \(X\in\mathcal{X}\) then \(\mathcal{D}_{X}=\{\mathcal{N}\}\) if and only if \(\mathcal{C}(S)\nsubseteq\mathcal{C}(X)\) for all \(S\in\mathcal{S}\backslash\{\mathcal{N}\}\)._
5. _If \(X\), \(Y\in\mathcal{X}\), and \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\) if and only if \(\mathcal{C}(S)\subseteq\mathcal{C}(Y)\) for all \(S\in\mathcal{S}\backslash\{\mathcal{N}\}\), then \(\mathcal{D}_{X}=\mathcal{D}_{Y}\)._
6. _If \(\mathcal{N}\notin\mathcal{S}\) then \(\mathcal{D}_{X}=\emptyset\) implies \(\mathcal{C}(S)\nsubseteq\mathcal{C}(X)\) for all \(S\in\mathcal{S}\)._

Proof.: (1) Fix \(X\in\mathcal{X}\) and suppose that \(\mathcal{D}^{\prime}_{X}\subseteq\mathcal{S}\) satisfies Properties (1) to (4) in Definition 3.1. Since \(\mathcal{N}\in\mathcal{D}^{\prime}_{X}\) if and only if \(\mathcal{N}\in\mathcal{D}_{X}\), it suffices to show that \(\mathcal{D}^{\prime}_{X}\backslash\{\mathcal{N}\}=\mathcal{D}_{X}\backslash\{\mathcal{N}\}\). If \(S\in\mathcal{D}^{\prime}_{X}\backslash\{\mathcal{N}\}\) then \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\), hence there exists \(T\in\mathcal{D}_{X}\backslash\{\mathcal{N}\}\) such that \(\mathcal{C}(S)\subseteq\mathcal{C}(T)\). Since \(\mathcal{C}(T)\subseteq\mathcal{C}(X)\) there exists \(S^{\prime}\in\mathcal{D}^{\prime}_{X}\backslash\{\mathcal{N}\}\) such that \(\mathcal{C}(T)\subseteq\mathcal{C}(S^{\prime})\). Then \(\mathcal{C}(S)\subseteq\mathcal{C}(S^{\prime})\) and the pairwise component disjointness of \(\mathcal{D}^{\prime}_{X}\) imply \(S=S^{\prime}\), hence \(S=T\in\mathcal{D}_{X}\backslash\{\mathcal{N}\}\). We conclude that \(\mathcal{D}^{\prime}_{X}\backslash\{\mathcal{N}\}\subseteq\mathcal{D}_{X}\backslash\{\mathcal{N}\}\), where we may have \(\mathcal{D}^{\prime}_{X}\backslash\{\mathcal{N}\}=\emptyset\). An analogous argument shows that \(\mathcal{D}_{X}\backslash\{\mathcal{N}\}\subseteq\mathcal{D}^{\prime}_{X}\backslash\{\mathcal{N}\}\).

(2) If \(\mathcal{N}\notin\mathcal{S}\) and \(S\in\mathcal{S}\) then \(\mathcal{D}:=\{S\}\) satisfies Properties (1) to (4) in Definition 3.1 with respect to \(S\). So, by Part (1) of this proposition, we have \(\mathcal{D}_{S}=\{S\}\).

(3) If \(\mathcal{N}\in\mathcal{S}\) and \(S\in\mathcal{S}\backslash\{\mathcal{N}\}\) then \(\mathcal{D}:=\{\mathcal{N},S\}\) satisfies Properties (1) to (4) in Definition 3.1 with respect to \(S\). So, by Part (1) of this proposition, we have \(\mathcal{D}_{S}=\{\mathcal{N},S\}\). Additionally, if \(T\in\mathcal{D}_{\mathcal{N}}\) then \(\mathcal{C}(T)\subseteq\mathcal{C}(\mathcal{N})\), hence \(T=\mathcal{N}\), therefore \(\mathcal{D}_{\mathcal{N}}=\{\mathcal{N}\}\).

(4) Let \(\mathcal{N}\in\mathcal{S}\) and \(X\in\mathcal{X}\). For the forward direction, suppose \(\mathcal{D}_{X}=\{\mathcal{N}\}\). If \(S\in\mathcal{S}\) and \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\) then Property (4) in Definition 3.1 implies \(\mathcal{C}(S)\subseteq\mathcal{C}(\mathcal{N})\), hence \(S=\mathcal{N}\). Therefore \(\mathcal{C}(S)\nsubseteq\mathcal{C}(X)\) for all \(S\in\mathcal{S}\backslash\{\mathcal{N}\}\). For the reverse direction, suppose \(\mathcal{C}(S)\nsubseteq\mathcal{C}(X)\) for all \(S\in\mathcal{S}\backslash\{\mathcal{N}\}\). Then \(T\in\mathcal{D}_{X}\) implies \(\mathcal{C}(T)\subseteq\mathcal{C}(X)\), by Property (3) in Definition 3.1, so \(T=\mathcal{N}\) and hence \(\mathcal{D}_{X}=\{\mathcal{N}\}\).

(5) Since \(\mathcal{N}\in\mathcal{D}_{X}\) if and only if \(\mathcal{N}\in\mathcal{D}_{Y}\), it suffices to show that \(\mathcal{D}_{X}\backslash\{\mathcal{N}\}=\mathcal{D}_{Y}\backslash\{\mathcal{N}\}\). If \(S\in\mathcal{D}_{X}\backslash\{\mathcal{N}\}\) then \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\), hence \(\mathcal{C}(S)\subseteq\mathcal{C}(Y)\). So by Property (4) in Definition 3.1 there exists \(T\in\mathcal{D}_{Y}\backslash\{\mathcal{N}\}\) such that \(\mathcal{C}(S)\subseteq\mathcal{C}(T)\subseteq\mathcal{C}(Y)\).
Then \(\mathcal{C}(T)\subseteq\mathcal{C}(X)\) so by Property (4) in Definition 3.1 there exists \(S^{\prime}\in\mathcal{D}_{X}\backslash\{\mathcal{N}\}\) such that \(\mathcal{C}(T)\subseteq\mathcal{C}(S^{\prime})\subseteq\mathcal{C}(X)\). Since \(\mathcal{C}(S)\subseteq\mathcal{C}(S^{\prime})\), the pairwise component-disjointness of \(\mathcal{D}_{X}\) implies \(S=S^{\prime}\) and hence \(S=T\in\mathcal{D}_{Y}\backslash\{\mathcal{N}\}\). Thus \(\mathcal{D}_{X}\backslash\{\mathcal{N}\}\subseteq\mathcal{D}_{Y}\backslash\{ \mathcal{N}\}\). An analogous argument gives \(\mathcal{D}_{Y}\backslash\{\mathcal{N}\}\subseteq\mathcal{D}_{X}\backslash\{ \mathcal{N}\}\). (6) We prove the contrapositive, so suppose there exists \(S\in\mathcal{S}\) such that \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\). Then by Property (4) in Definition 3.1 there exists \(T\in\mathcal{D}_{X}\) such that \(\mathcal{C}(S)\subseteq\mathcal{C}(T)\), hence \(\mathcal{D}_{X}\neq\emptyset\). Our definition of a _hypergraph transformation_ on \(\mathscr{X}\) ensures that the transformation acts consistently with respect to specified, or _distinguished_, hypergraphs. We regard connected components as the fundamental units of hypergraphs, since a hypergraph can be decomposed into a direct sum of connected components, and modifying a particular connected component of a hypergraph has no effect on the relations described by any other connected component. A _partial transformation_ on \(\mathscr{X}\) is a map \(\pi\colon\mathcal{X}\to\mathscr{X}\) where \(\mathcal{X}\subseteq\mathscr{X}\). The domain of \(\pi\) is \(\mathrm{Dom}(\pi)=\mathcal{X}\) and the image of \(\pi\) is \(\mathrm{Im}(\pi)\). A partial transformation therefore corresponds to a map between subsets of \(\mathscr{X}\). _Definition 3.3_ (**Hypergraph transformation**).: A _hypergraph transformation_ on \(\mathscr{X}\) is a 3-tuple \(\mathcal{T}:=(\mathcal{X},\pi,\mathcal{S})\) where \(\pi\colon\mathcal{X}\to\mathscr{X}\) is a partial transformation on \(\mathscr{X}\) and \(\mathcal{S}\subseteq\mathcal{X}\) is a collection of distinguished hypergraphs, satisfying each of the following conditions: 1. (Nonredundancy) \(\mathcal{C}(S)\cap\mathcal{C}\big{(}\pi(S)\big{)}=\emptyset\) for all \(S\in\mathcal{S}\), and if \(\mathcal{N}\in\mathcal{S}\) then \(\pi(\mathcal{N})\neq\mathcal{N}\). 2. (Maximality) \(\mathcal{S}\) is component maximal in \(\mathcal{X}\), with \(\mathcal{S}\)-maximal subsets \(\{\mathcal{D}_{X}\}_{X\in\mathcal{X}}\). 3. (Direct sum decomposition is preserved) For each \(X\in\mathcal{X}\): 1. Defining the set \(\mathcal{S}_{X}:=\big{\{}\,S\in\mathcal{D}_{X}\mid V\big{(}\pi(S)\big{)}\cap V(X \ominus S)=\emptyset\,\big{\}}\), the set \(\pi(\mathcal{S}_{X})\) consists of pairwise vertex-disjoint hypergraphs and \(|\pi(\mathcal{S}_{X})|=|\mathcal{S}_{X}|\). 2. Denoting by \(\bar{X}\in\mathscr{X}\) the induced strong subhypergraph of \(X\) where \(\bar{X}:=X\ominus(\bigoplus_{S\in\mathcal{S}_{X}}S)\), the decomposition \(X=\bar{X}\oplus(\bigoplus_{S\in\mathcal{S}_{X}}S)\) is preserved by \(\pi\) to give \(\pi(X)=\bar{X}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi(S)\big{)}\). _Remarks 3.4_.: With regard to Definition 3.3: 1. Condition (1) says that no connected component of a distinguished hypergraph \(S\in\mathcal{S}\) is fixed under \(\pi\), ensuring that all components of \(S\) are modified by \(\pi\) and none are redundant. 
Further, if \(\mathcal{N}\in\mathcal{S}\) then \(\mathcal{N}\) is not fixed under \(\pi\), otherwise \(\mathcal{N}\) is redundant as a distinguished hypergraph. 2. Condition (3) specifies the direct sum decomposition of each hypergraph \(X\in\mathcal{X}\) with respect to the distinguished hypergraphs, and the preservation of this direct sum decomposition under the action of \(\pi\). In particular, \(\pi\) is uniquely determined by \(\mathcal{S}\) and \(\pi(\mathcal{S})\). Note that, since the hypergraphs in \(\mathcal{S}_{X}\) are pairwise component disjoint and are all subhypergraphs of the same hypergraph \(X\), the set \(\mathcal{S}_{X}\) is pairwise vertex disjoint. 3. Employing partial transformations \(\pi\colon\mathcal{X}\to\mathscr{X}\) on \(\mathscr{X}\) provides flexibility for constructing hypergraph transformations, since we can choose a subset \(\mathcal{X}\) on which \(\pi\) has the appropriate action. 4. Hypergraph transformations are not in general closed under composition, and while we could weaken the defining conditions of hypergraph transformations to obtain closure under composition this would reduce the desired specificity of the transformations. Further, while our hypergraph transformations are unary functions they could readily be extended to \(n\)-ary functions for any \(n\in\mathbb{N}\). 5. The formal definition of a hypergraph transformation \(\mathcal{T}:=(\mathcal{X},\pi,\mathcal{S})\) on \(\mathscr{X}\) is based on a collection of distinguished hypergraphs \(\mathcal{S}\), which ensures that the partial transformation \(\pi\colon\mathcal{X}\to\mathscr{X}\) acts consistently on \(\mathcal{X}\). In practice, however, once we have established that \(\mathcal{T}\) is a hypergraph transformation we can regard \(\mathcal{T}\) as the partial transformation \(\pi\colon\mathcal{X}\to\mathscr{X}\) without further reference to \(\mathcal{S}\). We now discuss some properties of hypergraph transformations. **Proposition 3.5**.: _Let \(\mathcal{T}:=(\mathcal{X},\pi,\mathcal{S})\) be a hypergraph transformation on \(\mathscr{X}\)._ 1. \(\mathcal{S}_{S}=\{S\}\)_, for all_ \(S\in\mathcal{S}\)_._ 2. \(\pi(X)=X\) _if and only if_ \(\mathcal{S}_{X}=\emptyset\)_, for_ \(X\in\mathcal{X}\)_._ 3. \(\mathcal{S}=\emptyset\) _if and only if_ \(\pi(X)=X\) _for all_ \(X\in\mathcal{X}\)_, that is_ \(\pi\) _is an inclusion transformation._ Proof.: (1) Let \(S\in\mathcal{S}\). If \(\mathcal{N}\notin\mathcal{S}\) then \(\mathcal{D}_{S}=\{S\}\), by Part (2) of Proposition 3.2, so \(\mathcal{S}_{S}=\{S\}\). Suppose now that \(\mathcal{N}\in\mathcal{S}\). Then \(\mathcal{D}_{S}=\{\mathcal{N},S\}\), by Part (3) of Proposition 3.2, so we must have \(\mathcal{S}_{S}=\{S\}\): we cannot have \(\mathcal{S}_{S}=\{\mathcal{N},S\}\) when \(S\neq\mathcal{N}\) since the direct sum decomposition gives \(\pi(S)=\big{(}S\ominus(\bigoplus_{T\in\mathcal{S}_{S}}T)\big{)}\oplus\big{(} \bigoplus_{T\in\mathcal{S}_{S}}\pi(T)\big{)}=\big{(}S\ominus(\mathcal{N}\oplus S )\big{)}\oplus\big{(}\pi(\mathcal{N})\oplus\pi(S)\big{)}=\pi(\mathcal{N}) \oplus\pi(S)\), hence \(\pi(\mathcal{N})=\mathcal{N}\), and therefore nonredundancy does not hold. (2) For the forward direction we use a contrapositive argument, so suppose that \(\mathcal{S}_{X}\neq\emptyset\). First, suppose that there exists \(T\in\mathcal{S}_{X}\) such that \(\pi(T)\neq\mathcal{N}\). 
Since \(\mathcal{C}(T)\cap\mathcal{C}\big{(}\pi(T)\big{)}=\emptyset\) and \(\mathcal{C}(T)\subseteq\mathcal{C}(X)\) it follows that \(\mathcal{C}\big{(}\pi(T)\big{)}\cap\mathcal{C}(X)=\mathcal{C}\big{(}\pi(T) \big{)}\cap\mathcal{C}(X\ominus T)\), and since \(V\big{(}\pi(T)\big{)}\cap V(X\ominus T)=\emptyset\) it follows that \(\mathcal{C}\big{(}\pi(T)\big{)}\cap\mathcal{C}(X\ominus T)=\emptyset\), so \(\mathcal{C}\big{(}\pi(T)\big{)}\cap\mathcal{C}(X)=\emptyset\). Further, since \(\pi(X)=\bar{X}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi(S)\big{)}\) by Condition (3) of Definition 3.3, it follows that \(\mathcal{C}\big{(}\pi(T)\big{)}\subseteq\mathcal{C}\big{(}\pi(X)\big{)}\). We conclude that \(\mathcal{C}\big{(}\pi(X)\big{)}\neq\mathcal{C}(X)\), and hence \(\pi(X)\neq X\). Second, suppose that \(\pi(S)=\mathcal{N}\) for all \(S\in\mathcal{S}_{X}\), which implies \(|\mathcal{S}_{X}|=|\pi(\mathcal{S}_{X})|=|\{\mathcal{N}\}|=1\). Then Condition (3) of Definition 3.3 gives \(\pi(X)=\bar{X}=X\ominus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}S\big{)}\neq X\). For the backward direction, if \(\mathcal{S}_{X}=\emptyset\) then Condition (3) of Definition 3.3 implies that \(\pi(X)=\bar{X}=X\). (3) First note that if \(S\in\mathcal{S}\) then \(\mathcal{S}_{S}=\{S\}\) by Part (1) of this proposition, so \(\mathcal{S}=\emptyset\) if and only if \(\mathcal{S}_{X}=\emptyset\) for all \(X\in\mathcal{X}\). The result then follows from Part (2) of this proposition. We define a notion of disjointness for hypergraph transformations, a property that ensures independence of action of the hypergraph transformations (see Proposition 3.9, Corollary 3.10, and Proposition 3.11). _Definition 3.6_ (**Disjoint hypergraph transformations**).: Two hypergraph transformations \(\mathcal{T}:=(\mathcal{X},\pi,\mathcal{S})\) and \(\mathcal{T}^{\prime}:=(\mathcal{X}^{\prime},\pi^{\prime},\mathcal{S}^{\prime})\) on \(\mathscr{X}\) are _disjoint_ if for all \(X\in\mathcal{S}\cup\pi(\mathcal{S})\) and for all \(Y\in\mathcal{S}^{\prime}\cup\pi^{\prime}(\mathcal{S}^{\prime})\) the hypergraphs \(X\) and \(Y\) are vertex disjoint. _Notation 3.7_.: For two partial transformations \(\pi_{1}\colon\mathcal{X}_{1}\to\mathscr{X}\) and \(\pi_{2}\colon\mathcal{X}_{2}\to\mathscr{X}\) on \(\mathscr{X}\) their composition \(\pi_{2}\circ\pi_{1}\colon\operatorname{Dom}(\pi_{2}\circ\pi_{1})\to\mathscr{X}\) is the partial transformation with domain \(\operatorname{Dom}(\pi_{2}\circ\pi_{1}):=\pi_{1}^{-1}\big{(}\operatorname{Im}( \pi_{1})\cap\operatorname{Dom}(\pi_{2})\big{)}\). Note that \(\operatorname{Dom}(\pi_{2}\circ\pi_{1})\) is the largest possible domain for \(\pi_{2}\circ\pi_{1}\), and if \(\operatorname{Im}(\pi_{1})\cap\operatorname{Dom}(\pi_{2})=\emptyset\) then \(\operatorname{Dom}(\pi_{2}\circ\pi_{1})=\emptyset\) and \(\pi_{2}\circ\pi_{1}\) is the empty transformation. More generally, if \((\pi_{i}\colon\mathcal{X}_{i}\to\mathscr{X})_{i=1}^{n}\) is a finite sequence of partial transformations on \(\mathscr{X}\), for some \(n\in\mathbb{N}\), then their composition \(\bigcirc_{i=1}^{n}\pi_{n+1-i}:=\pi_{n}\circ\cdots\circ\pi_{2}\circ\pi_{1}\) is the partial transformation \(\bigcirc_{i=1}^{n}\pi_{n+1-i}\colon\operatorname{Dom}(\bigcirc_{i=1}^{n}\pi_{n+ 1-i})\to\mathscr{X}\), noting \(\operatorname{Dom}(\bigcirc_{i=1}^{n}\pi_{n+1-i})\) is the largest possible domain for \(\bigcirc_{i=1}^{n}\pi_{n+1-i}\). Denote by \(S_{n}\) the set of all permutations of \([n]\). 
The _coincidence set_\(\operatorname{Coin}\big{(}(\bigcirc_{i\in\sigma}\pi_{n+1-i})_{\sigma\in S_{n}}\big{)}\) of the sequence of all compositions \((\bigcirc_{i\in\sigma}\pi_{n+1-i})_{\sigma\in S_{n}}\) of \((\pi_{i}\colon\mathcal{X}_{i}\to\mathscr{X})_{i=1}^{n}\) is the maximum subset of the common domain \(\bigcap_{\sigma\in S_{n}}\operatorname{Dom}(\bigcirc_{i\in\sigma}\pi_{n+1-i})\) such that for each hypergraph in \(\operatorname{Coin}\big{(}(\bigcirc_{i\in\sigma}\pi_{n+1-i})_{\sigma\in S_{n}} \big{)}\) the compositions \((\bigcirc_{i\in\sigma}\pi_{n+1-i})_{\sigma\in S_{n}}\) of \((\pi_{i}\colon\mathcal{X}_{i}\to\mathscr{X})_{i=1}^{n}\) have the same image. **Lemma 3.8**.: _Let \(\mathcal{T}:=(\mathcal{X},\pi,\mathcal{S})\) be a hypergraph transformation on \(\mathscr{X}\), and let \(X\in\mathcal{X}\). Suppose \(X^{\prime}\), \(X^{\prime\prime}\in\mathscr{X}\) are such that \(\mathcal{C}(X^{\prime})\subseteq\mathcal{C}(X)\), \(V(X^{\prime\prime})\cap V(X\ominus X^{\prime})=\emptyset\), and \(Y:=(X\ominus X^{\prime})\oplus X^{\prime\prime}\in\mathcal{X}\). Suppose further that, for all \(S\in\mathcal{S}\), \(S\) is component disjoint with both \(X^{\prime}\) and \(X^{\prime\prime}\), and \(\pi(S)\) is vertex disjoint with both \(X^{\prime}\) and \(X^{\prime\prime}\). Then \(\mathcal{D}_{Y}=\mathcal{D}_{X}\) and \(\mathcal{S}_{Y}=\mathcal{S}_{X}\)._ Proof.: We first show that \(\mathcal{D}_{Y}=\mathcal{D}_{X}\), which will follow from Part (5) of Proposition 3.2 after establishing that \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\) if and only if \(\mathcal{C}(S)\subseteq\mathcal{C}(Y)\) for all \(S\in\mathcal{S}\backslash\{\mathcal{N}\}\). Fix \(S\in\mathcal{S}\backslash\{\mathcal{N}\}\). If \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\) then, since \(S\in\mathcal{S}\) implies \(\mathcal{C}(S)\cap\mathcal{C}(X^{\prime})=\emptyset\), and since \(\mathcal{C}(X^{\prime})\subseteq\mathcal{C}(X)\), we have \(\mathcal{C}(S)\subseteq\mathcal{C}(X\ominus X^{\prime})\) and hence \(\mathcal{C}(S)\subseteq\mathcal{C}(Y)\). Conversely, if \(\mathcal{C}(S)\subseteq\mathcal{C}(Y)\) then, since \(S\in\mathcal{S}\) implies \(\mathcal{C}(S)\cap\mathcal{C}(X^{\prime\prime})=\emptyset\), we have \(\mathcal{C}(S)\subseteq\mathcal{C}(X\ominus X^{\prime})\subseteq\mathcal{C}(X)\). Since \(S\in\mathcal{S}\backslash\{\mathcal{N}\}\) is arbitrary, we conclude that \(\mathcal{D}_{Y}=\mathcal{D}_{X}\). To show that \(\mathcal{S}_{Y}=\mathcal{S}_{X}\) we begin by establishing that if \(S\in\mathcal{S}\) satisfies \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\) and \(\mathcal{C}(S)\subseteq\mathcal{C}(Y)\) then \(V\big{(}\pi(S)\big{)}\cap V(X\ominus S)=V\big{(}\pi(S)\big{)}\cap V(Y\ominus S)\). Note that \(Y\ominus S=\big{(}(X\ominus X^{\prime})\oplus X^{\prime\prime}\big{)}\ominus S =\big{(}(X\ominus X^{\prime})\ominus S\big{)}\oplus X^{\prime\prime}=\big{(}( X\ominus S)\ominus X^{\prime}\big{)}\oplus X^{\prime\prime}\): the second equality follows from Part (3) of Proposition 2.8 since \(X^{\prime\prime}\) is disjoint with \(X\ominus X^{\prime}\), and \(\mathcal{C}(S)\subseteq\mathcal{C}(X\ominus X^{\prime})\) which also implies \(X^{\prime\prime}\) is disjoint with \(S\); and the third equality follows from Part (2) of Proposition 2.8 since \(X^{\prime}\) is disjoint with \(S\), \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\), and \(\mathcal{C}(X^{\prime})\subseteq\mathcal{C}(X)\). 
Now, since \(S\in\mathcal{S}\) implies \(V\big{(}\pi(S)\big{)}\cap V(X^{\prime})=\emptyset\) we have \(V\big{(}\pi(S)\big{)}\cap V(X\ominus S)=V\big{(}\pi(S)\big{)}\cap V\big{(}( X\ominus S)\ominus X^{\prime}\big{)}\subseteq V\big{(}\pi(S)\big{)}\cap V(Y \ominus S)\), and since \(S\in\mathcal{S}\) implies \(V\big{(}\pi(S)\big{)}\cap V(X^{\prime\prime})=\emptyset\) we have \(V\big{(}\pi(S)\big{)}\cap V(Y\ominus S)=V\big{(}\pi(S)\big{)}\cap V\big{(}( X\ominus S)\ominus X^{\prime}\big{)}\subseteq V\big{(}\pi(S)\big{)}\cap V(X \ominus S)\). Therefore \(V\big{(}\pi(S)\big{)}\cap V(X\ominus S)=V\big{(}\pi(S)\big{)}\cap V(Y\ominus S)\). We now show that \(\mathcal{S}_{Y}=\mathcal{S}_{X}\). First, we have \(\mathcal{D}_{Y}=\mathcal{D}_{X}\). Second, \(S\in\mathcal{D}_{Y}=\mathcal{D}_{X}\) implies \(\mathcal{C}(S)\subseteq\mathcal{C}(Y)\) and \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\), so \(V\big{(}\pi(S)\big{)}\cap V(Y\ominus S)=V\big{(}\pi(S)\big{)}\cap V(X\ominus S)\). We conclude that \(\mathcal{S}_{Y}=\mathcal{S}_{X}\). **Proposition 3.9**.: _If two hypergraph transformations \(\mathcal{T}:=(\mathcal{X},\pi,\mathcal{S})\) and \(\mathcal{T}^{\prime}:=(\mathcal{X}^{\prime},\pi^{\prime},\mathcal{S}^{\prime})\) on \(\mathscr{X}\) are disjoint then \(\operatorname{Coin}\big{(}(\pi^{\prime}\circ\pi,\pi\circ\pi^{\prime})\big{)}= \operatorname{Dom}(\pi^{\prime}\circ\pi)\cap\operatorname{Dom}(\pi\circ\pi^{ \prime})\)._ Proof.: We have \(\operatorname{Coin}\big{(}(\pi^{\prime}\circ\pi,\pi\circ\pi^{\prime})\big{)} \subseteq\operatorname{Dom}(\pi^{\prime}\circ\pi)\cap\operatorname{Dom}(\pi\circ \pi^{\prime})\) by the definition of the coincidence set, so to establish the reverse inclusion let \(X\in\operatorname{Dom}(\pi^{\prime}\circ\pi)\cap\operatorname{Dom}(\pi\circ \pi^{\prime})\) and we show \(\pi\big{(}\pi^{\prime}(X)\big{)}=\pi^{\prime}\big{(}\pi(X)\big{)}\). 
Denoting \(Y:=\pi^{\prime}(X)=\big{(}X\ominus(\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}S^{\prime})\big{)}\oplus\big{(}\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}\pi^{\prime}(S^{\prime})\big{)}\), and noting that \(Y\in\operatorname{Dom}(\pi)\) and Lemma 3.8 implies \(\mathcal{S}_{Y}=\mathcal{S}_{X}\), we have

\[\pi\big{(}\pi^{\prime}(X)\big{)}=\pi(Y)=\big{(}Y\ominus(\bigoplus_{S\in\mathcal{S}_{Y}}S)\big{)}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{Y}}\pi(S)\big{)}\]
\[=\Big{[}\Big{(}\big{(}X\ominus(\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}S^{\prime})\big{)}\oplus\big{(}\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}\pi^{\prime}(S^{\prime})\big{)}\Big{)}\ominus(\bigoplus_{S\in\mathcal{S}_{X}}S)\Big{]}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{Y}}\pi(S)\big{)}\]
\[=\Big{[}\Big{(}\big{(}X\ominus(\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}S^{\prime})\big{)}\oplus\big{(}\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}\pi^{\prime}(S^{\prime})\big{)}\Big{)}\ominus(\bigoplus_{S\in\mathcal{S}_{X}}S)\Big{]}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi(S)\big{)}.\]

Now, denoting \(Z:=\pi(X)=\big{(}X\ominus(\bigoplus_{S\in\mathcal{S}_{X}}S)\big{)}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi(S)\big{)}\), and noting that \(Z\in\operatorname{Dom}(\pi^{\prime})\) and Lemma 3.8 implies \(\mathcal{S}^{\prime}_{Z}=\mathcal{S}^{\prime}_{X}\), we have

\[\pi^{\prime}\big{(}\pi(X)\big{)}=\pi^{\prime}(Z)=\big{(}Z\ominus(\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{Z}}S^{\prime})\big{)}\oplus\big{(}\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{Z}}\pi^{\prime}(S^{\prime})\big{)}\]
\[=\Big{[}\Big{(}\big{(}X\ominus(\bigoplus_{S\in\mathcal{S}_{X}}S)\big{)}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi(S)\big{)}\Big{)}\ominus(\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}S^{\prime})\Big{]}\oplus\big{(}\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{Z}}\pi^{\prime}(S^{\prime})\big{)}\]
\[=\Big{[}\Big{(}\big{(}X\ominus(\bigoplus_{S\in\mathcal{S}_{X}}S)\big{)}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi(S)\big{)}\Big{)}\ominus(\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}S^{\prime})\Big{]}\oplus\big{(}\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}\pi^{\prime}(S^{\prime})\big{)}\]
\[=\Big{[}\Big{(}\big{(}X\ominus(\bigoplus_{S\in\mathcal{S}_{X}}S)\big{)}\ominus(\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}S^{\prime})\Big{)}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi(S)\big{)}\Big{]}\oplus\big{(}\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}\pi^{\prime}(S^{\prime})\big{)}\]
\[=\Big{[}\Big{(}\big{(}X\ominus(\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}S^{\prime})\big{)}\ominus(\bigoplus_{S\in\mathcal{S}_{X}}S)\Big{)}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi(S)\big{)}\Big{]}\oplus\big{(}\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}\pi^{\prime}(S^{\prime})\big{)}\]
\[=\Big{[}\Big{(}\big{(}X\ominus(\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}S^{\prime})\big{)}\ominus(\bigoplus_{S\in\mathcal{S}_{X}}S)\Big{)}\oplus\big{(}\bigoplus_{S^{\prime}\in\mathcal{S}^{\prime}_{X}}\pi^{\prime}(S^{\prime})\big{)}\Big{]}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi(S)\big{)}\]
\[=\pi\big{(}\pi^{\prime}(X)\big{)},\]

where the fifth and eighth equalities hold by the disjointness of \(\mathcal{T}\) and \(\mathcal{T}^{\prime}\) and by Part (3) of Proposition 2.8, the sixth equality holds by the disjointness of \(\mathcal{T}\) and \(\mathcal{T}^{\prime}\) and by Part (2) of Proposition 2.8, and the seventh equality holds since a direct sum of hypergraphs is independent of the order of the hypergraph summands.
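The commutativity in Proposition 3.9 is easy to observe on small instances. The toy check below (ours; the helper `add_edge` and the labels are illustrative assumptions, not the paper's notation) builds two single-hyperedge additions whose distinguished data touch disjoint vertex sets, and confirms that the two composition orders agree.

```python
# A toy check (ours) of Proposition 3.9: transformations touching disjoint
# vertex sets commute on a common domain element.

def add_edge(label: str, verts):
    verts = frozenset(verts)
    def pi(X):
        V, E = X
        if verts <= V and label not in E:     # add the hyperedge when it fits
            return (V, {**E, label: verts})
        return X                              # otherwise fix X
    return pi

pi1 = add_edge("h1", {"a", "b"})              # acts only on vertices a, b
pi2 = add_edge("h2", {"c", "d"})              # acts only on vertices c, d

X = ({"a", "b", "c", "d"}, {})
assert pi1(pi2(X)) == pi2(pi1(X))             # the two orders coincide
```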
**Corollary 3.10**.: _If the finite sequence of hypergraph transformations \(\big{(}\mathcal{T}_{i}:=(\mathcal{X}_{i},\pi_{i},\mathcal{S}^{i})\big{)}_{i=1}^{n}\) on \(\mathscr{X}\), for some \(n\in\mathbb{N}\) with \(n\geq 2\), is pairwise disjoint then \(\operatorname{Coin}\big{(}(\bigcirc_{i\in\sigma}\pi_{n+1-i})_{\sigma\in S_{n}}\big{)}=\bigcap_{\sigma\in S_{n}}\operatorname{Dom}(\bigcirc_{i\in\sigma}\pi_{n+1-i})\)._

Proof.: Note that \(\bigcap_{\sigma\in S_{n}}\operatorname{Dom}(\bigcirc_{i\in\sigma}\pi_{n+1-i})\subseteq\operatorname{Dom}(\pi_{j}\circ\pi_{k})\cap\operatorname{Dom}(\pi_{k}\circ\pi_{j})=\operatorname{Coin}\big{(}(\pi_{j}\circ\pi_{k},\pi_{k}\circ\pi_{j})\big{)}\) for all \(j\), \(k\in[n]\) with \(j\neq k\), where the equality follows from Proposition 3.9. Since any two permutations in \(S_{n}\) can be transformed into each other by permuting adjacent elements, it therefore follows that \(\bigcap_{\sigma\in S_{n}}\operatorname{Dom}(\bigcirc_{i\in\sigma}\pi_{n+1-i})\subseteq\operatorname{Coin}\big{(}(\bigcirc_{i\in\sigma}\pi_{n+1-i})_{\sigma\in S_{n}}\big{)}\).

**Proposition 3.11**.: _Suppose that \(\big{(}\mathcal{T}_{i}:=(\mathcal{X}_{i},\pi_{i},\mathcal{S}^{i})\big{)}_{i=1}^{m}\) is a finite sequence of pairwise disjoint hypergraph transformations on \(\mathscr{X}\), for some \(m\in\mathbb{N}\), and let \(X\in\operatorname{Coin}\big{(}(\bigcirc_{i\in\sigma}\pi_{m+1-i})_{\sigma\in S_{m}}\big{)}\). Then we have the direct sum decompositions_

\[X=\bar{X}\oplus(\bigoplus_{i\in[m]}\bigoplus_{S\in\mathcal{S}_{X}^{i}}S) \tag{1}\]

_and_

\[\pi_{m}\circ\cdots\circ\pi_{1}(X)=\bar{X}\oplus\big{(}\bigoplus_{i\in[m]}\bigoplus_{S\in\mathcal{S}_{X}^{i}}\pi_{i}(S)\big{)}, \tag{2}\]

_where \(\bar{X}\) is the strong subhypergraph of \(X\) determined by \(\bar{X}:=X\ominus(\bigoplus_{i\in[m]}\bigoplus_{S\in\mathcal{S}_{X}^{i}}S)\), and (2) is independent of the order of the \(\pi_{i}\)._

Proof.: We establish the result by induction on \(m\). The result holds for \(m=1\) by Condition (3) of Definition 3.3. Suppose now that the result holds for \(m=n\), where \(n\in\mathbb{N}\), and we show the result for \(m=n+1\). Define \(\bar{X}:=X\ominus(\bigoplus_{i\in[n+1]}\bigoplus_{S\in\mathcal{S}^{i}_{X}}S)\), from which we have \(X=\bar{X}\oplus(\bigoplus_{i\in[n+1]}\bigoplus_{S\in\mathcal{S}^{i}_{X}}S)\), so Equation (1) holds for \(m=n+1\). Now, \(X=\left(\bar{X}\oplus(\bigoplus_{S\in\mathcal{S}^{n+1}_{X}}S)\right)\oplus(\bigoplus_{i\in[n]}\bigoplus_{S\in\mathcal{S}^{i}_{X}}S)\), since a direct sum of hypergraphs is independent of the order of the hypergraph summands, so \(\bar{X}\oplus(\bigoplus_{S\in\mathcal{S}^{n+1}_{X}}S)=X\ominus(\bigoplus_{i\in[n]}\bigoplus_{S\in\mathcal{S}^{i}_{X}}S)\). Since Equations (1) and (2) hold for \(m=n\), by assumption, we have \(X=\left(\bar{X}\oplus(\bigoplus_{S\in\mathcal{S}^{n+1}_{X}}S)\right)\oplus(\bigoplus_{i\in[n]}\bigoplus_{S\in\mathcal{S}^{i}_{X}}S)\) and \(\pi_{n}\circ\cdots\circ\pi_{1}(X)=\left(\bar{X}\oplus(\bigoplus_{S\in\mathcal{S}^{n+1}_{X}}S)\right)\oplus\big{(}\bigoplus_{i\in[n]}\bigoplus_{S\in\mathcal{S}^{i}_{X}}\pi_{i}(S)\big{)}=:Y\). We need to determine \(\pi_{n+1}(Y)\). We can write \(Y=(X\ominus X^{\prime})\oplus X^{\prime\prime}\) for \(X^{\prime}:=\bigoplus_{i\in[n]}\bigoplus_{S\in\mathcal{S}^{i}_{X}}S\) and \(X^{\prime\prime}:=\bigoplus_{i\in[n]}\bigoplus_{S\in\mathcal{S}^{i}_{X}}\pi_{i}(S)\). Note that \(\mathcal{C}(X^{\prime})\subseteq\mathcal{C}(X)\).
Further, since \(\mathcal{T}_{i}\) for \(i\in[n]\) are hypergraph transformations, and since \(V(X\ominus X^{\prime})\subseteq V(X\ominus S)\) for all \(S\in\mathcal{S}^{i}_{X}\) with \(i\in[n]\), it follows that \(V(X^{\prime\prime})\cap V(X\ominus X^{\prime})=\emptyset\). Since the hypergraph transformations \((\mathcal{T}_{i})_{i=1}^{n+1}\) are pairwise disjoint it follows that, for all \(S\in\mathcal{S}^{n+1}\), \(S\) is component disjoint with both \(X^{\prime}\) and \(X^{\prime\prime}\), and \(\pi_{n+1}(S)\) is vertex disjoint with both \(X^{\prime}\) and \(X^{\prime\prime}\). Therefore \(\mathcal{S}^{n+1}_{Y}=\mathcal{S}^{n+1}_{X}\) by Lemma 3.8. Further, letting \(\bar{Y}:=Y\ominus(\bigoplus_{S\in\mathcal{S}^{n+1}_{Y}}S)\), we have \(Y=\bar{Y}\oplus(\bigoplus_{S\in\mathcal{S}^{n+1}_{Y}}S)\) and \(\pi_{n+1}(Y)=\bar{Y}\oplus\big{(}\bigoplus_{S\in\mathcal{S}^{n+1}_{Y}}\pi_{n+1 }(S)\big{)}\). So, \[\pi_{n+1}\circ\cdots\circ\pi_{1}(X) =\pi_{n+1}(Y)=\bar{Y}\oplus\big{(}\bigoplus_{S\in\mathcal{S}^{n+1 }_{Y}}\pi_{n+1}(S)\big{)}\] \[=\big{(}Y\ominus(\bigoplus_{S\in\mathcal{S}^{n+1}_{Y}}S)\big{)} \oplus\big{(}\bigoplus_{S\in\mathcal{S}^{n+1}_{Y}}\pi_{n+1}(S)\big{)}=\big{(}Y \ominus(\bigoplus_{S\in\mathcal{S}^{n+1}_{X}}S)\big{)}\oplus\big{(}\bigoplus _{S\in\mathcal{S}^{n+1}_{X}}\pi_{n+1}(S)\big{)}\] \[=\Big{[}\Big{(}\big{(}\bar{X}\oplus(\bigoplus_{S\in\mathcal{S}^{ n+1}_{X}}S)\big{)}\oplus(\bigoplus_{i\in[n]}\bigoplus_{S\in\mathcal{S}^{i}_{X}}\pi_{i }(S)\big{)}\Big{)}\ominus(\bigoplus_{S\in\mathcal{S}^{n+1}_{X}}S)\Big{]}\oplus \big{(}\bigoplus_{S\in\mathcal{S}^{n+1}_{X}}\pi_{n+1}(S)\big{)}\] \[=\Big{(}\bar{X}\oplus\big{(}\bigoplus_{i\in[n]}\bigoplus_{S\in \mathcal{S}^{i}_{X}}\pi_{i}(S)\big{)}\Big{)}\oplus\big{(}\bigoplus_{S\in \mathcal{S}^{n+1}_{X}}\pi_{n+1}(S)\big{)}=\bar{X}\oplus\big{(}\bigoplus_{i\in[ n+1]}\bigoplus_{S\in\mathcal{S}^{i}_{X}}\pi_{i}(S)\big{)},\] where the sixth equality follows from the pairwise disjointness of the hypergraph transformations and by Part (3) of Proposition 2.8, the seventh equality follows from the pairwise disjointness of the hypergraph transformations and by Part (1) of Proposition 2.8, and the eighth equality holds since a direct sum of hypergraphs is independent of the order of the hypergraph summands. Therefore, Equation (2) holds for \(m=n+1\). Moreover, (2) is independent of the order of the \(\pi_{i}\) since \(X\in\operatorname{Coin}\big{(}(\bigcirc_{i\in\sigma}\pi_{m+1-i})_{\sigma\in S_{m }}\big{)}\). Under appropriate circumstances we can modify the collection of distinguished hypergraphs of a given hypergraph transformation to obtain a new hypergraph transformation, and we now consider a particular class of such modifications. _Definition 3.12_ (**Upward closed subset of hypergraphs**).: Suppose \(\mathcal{S}\subseteq\mathcal{X}\subseteq\mathscr{X}\). A subset \(\mathcal{S}^{\prime}\subseteq\mathcal{S}\) is _upward closed_ with respect to \(\mathcal{S}\) if \(S\in\mathcal{S}^{\prime}\), \(T\in\mathcal{S}\), and \(\mathcal{C}(S)\subseteq\mathcal{C}(T)\) imply \(T\in\mathcal{S}^{\prime}\). **Proposition 3.13**.: _Suppose \(\mathcal{S}^{\prime}\subseteq\mathcal{S}\subseteq\mathcal{X}\subseteq\mathscr{X}\)._ 1. \(\mathcal{S}^{\prime}\) _is upward closed with respect to_ \(\mathcal{S}\) _if and only if_ \(T\in\mathcal{S}\backslash\mathcal{S}^{\prime}\)_,_ \(S\in\mathcal{S}\)_, and_ \(\mathcal{C}(S)\subseteq\mathcal{C}(T)\) _imply_ \(S\in\mathcal{S}\backslash\mathcal{S}^{\prime}\) _._ 2. 
_If_ \(\mathcal{S}^{\prime}\) _is upward closed with respect to_ \(\mathcal{S}\) _and_ \(\mathcal{N}\in\mathcal{S}^{\prime}\) _then_ \(\mathcal{S}^{\prime}=\mathcal{S}\)_._ 3. _If_ \(\mathcal{S}^{\prime}\) _is upward closed with respect to_ \(\mathcal{S}\)_, and_ \(\mathcal{S}\) _is component maximal in_ \(\mathcal{X}\) _with_ \(\mathcal{S}\)_-maximal subsets_ \(\{\mathcal{D}_{X}\}_{X\in\mathcal{X}}\)_, then_ \(\mathcal{S}^{\prime}\) _is component maximal in_ \(\mathcal{X}\) _with_ \(\mathcal{S}^{\prime}\)_-maximal subsets_ \(\mathcal{D}^{\prime}_{X}:=\mathcal{D}_{X}\cap\mathcal{S}^{\prime}\) _for_ \(X\in\mathcal{X}\)_._ Proof.: (1) For the forward direction, suppose \(\mathcal{S}^{\prime}\) is upward closed with respect to \(\mathcal{S}\), \(T\in\mathcal{S}\backslash\mathcal{S}^{\prime}\), \(S\in\mathcal{S}\), and \(\mathcal{C}(S)\subseteq\mathcal{C}(T)\). Since \(\mathcal{S}^{\prime}\) is upward closed and \(T\notin\mathcal{S}^{\prime}\) we must have \(S\notin\mathcal{S}^{\prime}\). For the reverse direction, if \(S\in\mathcal{S}^{\prime}\), \(T\in\mathcal{S}\), and \(\mathcal{C}(S)\subseteq\mathcal{C}(T)\) then we must have \(T\in\mathcal{S}^{\prime}\). Hence \(\mathcal{S}^{\prime}\) is upward closed with respect to \(\mathcal{S}\). (2) For all \(T\in\mathcal{S}\) we have \(\mathcal{C}(\mathcal{N})\subseteq\mathcal{C}(T)\) and hence \(T\in\mathcal{S}^{\prime}\). It follows that \(\mathcal{S}^{\prime}=\mathcal{S}\). (3) We need to show that the subsets \(\mathcal{D}^{\prime}_{X}\), for \(X\in\mathcal{X}\), satisfy Properties (1) to (4) of Definition 3.1. If \(\mathcal{N}\in\mathcal{S}^{\prime}\) then \(\mathcal{S}^{\prime}=\mathcal{S}\) by Part (2) of this proposition and hence \(\mathcal{D}^{\prime}_{X}=\mathcal{D}_{X}\) for \(X\in\mathcal{X}\), so \(\mathcal{S}^{\prime}\) is component maximal. Suppose now that \(\mathcal{N}\notin\mathcal{S}^{\prime}\), and let \(X\in\mathcal{X}\). Properties (1) and (3) follow immediately from the definition of \(\mathcal{D}^{\prime}_{X}\) in terms of \(\mathcal{D}_{X}\), and Property (2) holds trivially since \(\mathcal{N}\notin\mathcal{S}^{\prime}\). For Property (4), suppose \(S\in\mathcal{S}^{\prime}\) with \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\). Then, since \(S\in\mathcal{S}\), there exists \(T\in\mathcal{D}_{X}\) such that \(\mathcal{C}(S)\subseteq\mathcal{C}(T)\) by Property (4). Now, since \(\mathcal{S}^{\prime}\) is upward closed, it follows that \(S\in\mathcal{S}^{\prime}\), \(T\in\mathcal{S}\), and \(\mathcal{C}(S)\subseteq\mathcal{C}(T)\) imply \(T\in\mathcal{S}^{\prime}\) and therefore \(T\in\mathcal{D}^{\prime}_{X}\). So \(\mathcal{S}^{\prime}\) is component maximal in \(\mathcal{X}\) with \(\mathcal{S}^{\prime}\)-maximal subsets \(\mathcal{D}^{\prime}_{X}\) for \(X\in\mathcal{X}\). _Definition 3.14_ (**Support, support reduction/augmentation**).: Let \(\mathcal{T}:=(\mathcal{X},\pi,\mathcal{S})\) be a hypergraph transformation on \(\mathscr{X}\). The _support_ of \(\mathcal{T}\), denoted \(\operatorname{Supp}(\mathcal{T})\), is defined by \(\operatorname{Supp}(\mathcal{T}):=\{\,X\in\mathcal{X}\mid\pi(X)\neq X\,\}\). Let \(\mathcal{T}^{\prime}:=(\mathcal{X},\pi^{\prime},\mathcal{S}^{\prime})\) be another hypergraph transformation on \(\mathscr{X}\). 
Then \(\mathcal{T}^{\prime}\) is a _support reduction_ of \(\mathcal{T}\) corresponding to \(\mathcal{S}^{\prime}\) if \(\mathcal{S}^{\prime}\subseteq\mathcal{S}\), \(\mathcal{S}^{\prime}\) is upward closed with respect to \(\mathcal{S}\), and \(\pi^{\prime}|_{\mathcal{S}^{\prime}}=\pi|_{\mathcal{S}^{\prime}}\). In this case we also say that \(\mathcal{T}\) is a _support augmentation_ of \(\mathcal{T}^{\prime}\) corresponding to \(\mathcal{S}\). _Remark 3.15_.: For a hypergraph transformation \(\mathcal{T}:=(\mathcal{X},\pi,\mathcal{S})\), the subset \(\mathcal{X}\backslash\operatorname{Supp}(\mathcal{T})\) of \(\mathcal{X}\) is the set of fixed points of \(\mathcal{T}\), that is \(\mathcal{X}\backslash\operatorname{Supp}(\mathcal{T})=\{\,X\in\mathcal{X} \mid\pi(X)=X\,\}\). **Lemma 3.16**.: _If \(\mathcal{T}:=(\mathcal{X},\pi,\mathcal{S})\) and \(\mathcal{T}^{\prime}:=(\mathcal{X},\pi^{\prime},\mathcal{S})\) are two hypergraph transformations on \(\mathscr{X}\) and if \(\pi^{\prime}|_{\mathcal{S}}=\pi|_{\mathcal{S}}\) then \(\pi^{\prime}=\pi\), hence \(\mathcal{T}=\mathcal{T}^{\prime}\)._ Proof.: For notational clarity we denote \(\mathcal{T}^{\prime}:=(\mathcal{X},\pi^{\prime},\mathcal{S}^{\prime})\), so that \(\mathcal{S}^{\prime}=\mathcal{S}\), and \(\mathcal{D}^{\prime}_{X}=\mathcal{D}_{X}\) for all \(X\in\mathcal{X}\). Let \(X\in\mathcal{X}\). Then \(\mathcal{S}^{\prime}_{X}=\big{\{}\,S\in\mathcal{D}^{\prime}_{X}\mid V\big{(}\pi^ {\prime}(S)\big{)}\cap V(X\ominus S)=\emptyset\,\big{\}}=\big{\{}\,S\in \mathcal{D}_{X}\mid V\big{(}\pi(S)\big{)}\cap V(X\ominus S)=\emptyset\,\big{\}}= \mathcal{S}_{X}\). So defining \(\bar{X}^{\prime}:=X\ominus(\bigoplus_{S\in\mathcal{S}^{\prime}_{X}}S)\) and \(\bar{X}:=X\ominus(\bigoplus_{S\in\mathcal{S}_{X}}S)\) we have \(\bar{X}^{\prime}=\bar{X}\), therefore \(\pi^{\prime}(X)=\bar{X}^{\prime}\oplus\big{(}\bigoplus_{S\in\mathcal{S}^{\prime} _{X}}\pi^{\prime}(S)\big{)}=\bar{X}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}} \pi(S)\big{)}=\pi(X)\). Therefore \(\pi^{\prime}=\pi\). **Proposition 3.17**.: _Suppose \(\mathcal{T}:=(\mathcal{X},\pi,\mathcal{S})\) is a hypergraph transformation on \(\mathscr{X}\), and \(\mathcal{S}^{\prime}\subseteq\mathcal{S}\) is an upward closed subset with respect to \(\mathcal{S}\). Then there exists a hypergraph transformation \(\mathcal{T}^{\prime}:=(\mathcal{X},\pi^{\prime},\mathcal{S}^{\prime})\) such that \(\mathcal{T}^{\prime}\) is the unique support reduction of \(\mathcal{T}\) corresponding to \(\mathcal{S}^{\prime}\). Further, \(\operatorname{Supp}(\mathcal{T}^{\prime})\subseteq\operatorname{Supp}(\mathcal{T})\), and for all \(X\in\mathcal{X}\) we have \(\mathcal{D}^{\prime}_{X}=\mathcal{D}_{X}\cap\mathcal{S}^{\prime}\), \(\mathcal{S}^{\prime}_{X}=\mathcal{S}_{X}\cap\mathcal{S}^{\prime}\), and \(\pi^{\prime}(X)=\big{(}X\ominus(\bigoplus_{S\in\mathcal{S}^{\prime}_{X}}S) \big{)}\oplus\big{(}\bigoplus_{S\in\mathcal{S}^{\prime}_{X}}\pi(S)\big{)}\)._ Proof.: Since \(\mathcal{T}\) is a hypergraph transformation, \(\mathcal{S}\) is component maximal with \(\mathcal{S}\)-maximal subsets \(\{\mathcal{D}_{X}\}_{X\in\mathcal{X}}\), and since \(\mathcal{S}^{\prime}\) is upward closed with respect to \(\mathcal{S}\) it follows from Part (3) of Proposition 3.13 that \(\mathcal{S}^{\prime}\) is component maximal in \(\mathcal{X}\) with \(\mathcal{S}^{\prime}\)-maximal subsets \(\mathcal{D}^{\prime}_{X}:=\mathcal{D}_{X}\cap\mathcal{S}^{\prime}\) for \(X\in\mathcal{X}\). 
If \(\mathcal{N}\in\mathcal{S}^{\prime}\) then, since \(\mathcal{S}^{\prime}\) is upward closed with respect to \(\mathcal{S}\), Part (2) of Proposition 3.13 implies \(\mathcal{S}^{\prime}=\mathcal{S}\). It follows from Lemma 3.16 that the only support reduction of \(\mathcal{T}\) corresponding to \(\mathcal{S}^{\prime}\) is \(\mathcal{T}\) itself. Suppose now that \(\mathcal{N}\notin\mathcal{S}^{\prime}\). We construct the partial transformation \(\pi^{\prime}\colon\mathcal{X}\to\mathscr{X}\) by defining \(\pi^{\prime}(X)\) for all \(X\in\mathcal{X}\). First define \(\pi^{*}\colon\mathcal{S}^{\prime}\to\mathscr{X}\) by \(\pi^{*}(S):=\pi(S)\) for all \(S\in\mathcal{S}^{\prime}\). Let \(X\in\mathcal{X}\) and define the set \(\mathcal{S}^{*}_{X}:=\big{\{}\,S\in\mathcal{D}^{\prime}_{X}\mid V\big{(}\pi^{* }(S)\big{)}\cap V(X\ominus S)=\emptyset\,\big{\}}\). Then \(\mathcal{S}^{*}_{X}=\mathcal{S}_{X}\cap\mathcal{S}^{\prime}\): \(S\in\mathcal{S}^{*}_{X}\) if and only if \(S\in\mathcal{D}^{\prime}_{X}\) and \(V\big{(}\pi^{*}(S)\big{)}\cap V(X\ominus S)=\emptyset\) if and only if \(S\in\mathcal{S}_{X}\cap\mathcal{S}^{\prime}\). Further, since \(\pi(\mathcal{S}_{X})\) consists of pairwise vertex-disjoint hypergraphs the subset \(\pi^{*}(\mathcal{S}^{*}_{X})=\pi(\mathcal{S}^{*}_{X})\subseteq\pi(\mathcal{S} _{X})\) is also pairwise vertex disjoint, and since \(|\pi(\mathcal{S}_{X})|=|\mathcal{S}_{X}|\) we have \(|\pi^{*}(\mathcal{S}^{*}_{X})|=|\pi(\mathcal{S}^{*}_{X})|=|\mathcal{S}^{*}_{X}|\). Define \(\bar{X}^{*}:=X\ominus(\bigoplus_{S\in\mathcal{S}^{*}_{X}}S)\) and \(\pi^{\prime}(X):=\bar{X}^{*}\oplus\big{(}\bigoplus_{S\in\mathcal{S}^{*}_{X}} \pi^{*}(S)\big{)}\). We show that \(\mathcal{T}^{\prime}:=(\mathcal{X},\pi^{\prime},\mathcal{S}^{\prime})\) is a hypergraph transformation. Note that \(\pi^{\prime}|_{\mathcal{S}^{\prime}}=\pi^{*}=\pi|_{\mathcal{S}^{\prime}}\): the first equality holds since if \(T\in\mathcal{S}^{\prime}\) then \(\mathcal{D}_{T}=\{T\}\) or \(\mathcal{D}_{T}=\{\mathcal{N},T\}\) by Parts (2) and (3) of Proposition 3.2, so \(\mathcal{D}^{\prime}_{T}=\{T\}\), hence \(\mathcal{S}^{*}_{T}=\{T\}\), therefore the definition of \(\pi^{\prime}\) gives \(\pi^{\prime}(T)=\pi^{*}(T)\); the second equality follows from the definition of \(\pi^{*}\). For nonredundancy, if \(S\in\mathcal{S}^{\prime}\) then \(\mathcal{C}(S)\cap\mathcal{C}\big{(}\pi^{\prime}(S)\big{)}=\mathcal{C}(S)\cap \mathcal{C}\big{(}\pi(S)\big{)}=\emptyset\), where the last equality holds since \(\mathcal{T}\) is a hypergraph transformation. We have shown that \(\mathcal{S}^{\prime}\) is component maximal in \(\mathcal{X}\) with \(\mathcal{S}^{\prime}\)-maximal subsets \(\mathcal{D}^{\prime}_{X}:=\mathcal{D}_{X}\cap\mathcal{S}^{\prime}\) for \(X\in\mathcal{X}\). To show that the direct sum decomposition is preserved, let \(X\in\mathcal{X}\). Define \(\mathcal{S}^{\prime}_{X}:=\big{\{}\,S\in\mathcal{D}^{\prime}_{X}\mid V\big{(} \pi^{\prime}(S)\big{)}\cap V(X\ominus S)=\emptyset\,\big{\}}\), and note that \(\mathcal{S}^{\prime}_{X}=\mathcal{S}^{*}_{X}\) since \(\pi^{\prime}|_{\mathcal{S}^{\prime}}=\pi^{*}\). Then \(\pi^{\prime}(\mathcal{S}^{\prime}_{X})=\pi^{*}(\mathcal{S}^{*}_{X})\) consists of pairwise vertex-disjoint hypergraphs, and \(|\pi^{\prime}(\mathcal{S}^{\prime}_{X})|=|\pi^{*}(\mathcal{S}^{*}_{X})|=| \mathcal{S}^{*}_{X}|=|\mathcal{S}^{\prime}_{X}|\). 
Denoting \(\bar{X}^{\prime}:=X\ominus(\bigoplus_{S\in\mathcal{S}^{\prime}_{X}}S)\), and noting that \(\bar{X}^{\prime}=\bar{X}^{*}\) since \(\mathcal{S}^{\prime}_{X}=\mathcal{S}^{*}_{X}\), we have \(\pi^{\prime}(X):=\bar{X}^{*}\oplus\big{(}\bigoplus_{S\in\mathcal{S}^{*}_{X}}\pi^{*}(S)\big{)}=\bar{X}^{\prime}\oplus\big{(}\bigoplus_{S\in\mathcal{S}^{\prime}_{X}}\pi^{\prime}(S)\big{)}\), so the decomposition \(X=\bar{X}^{\prime}\oplus(\bigoplus_{S\in\mathcal{S}^{\prime}_{X}}S)\) is preserved by \(\pi^{\prime}\). We conclude that \(\mathcal{T}^{\prime}\) is a hypergraph transformation. Moreover, \(\mathcal{T}^{\prime}\) is a support reduction of \(\mathcal{T}\) corresponding to \(\mathcal{S}^{\prime}\), and uniqueness of \(\mathcal{T}^{\prime}\) follows from Lemma 3.16. Finally, for \(X\in\mathcal{X}\), \(\mathcal{S}^{*}_{X}=\mathcal{S}_{X}\cap\mathcal{S}^{\prime}\) and \(\mathcal{S}^{\prime}_{X}=\mathcal{S}^{*}_{X}\) imply \(\mathcal{S}^{\prime}_{X}=\mathcal{S}_{X}\cap\mathcal{S}^{\prime}\). Moreover, \(\operatorname{Supp}(\mathcal{T}^{\prime})\subseteq\operatorname{Supp}(\mathcal{T})\): \(X\in\operatorname{Supp}(\mathcal{T}^{\prime})\) implies \(\pi^{\prime}(X)\neq X\) implies \(\mathcal{S}^{\prime}_{X}\neq\emptyset\), by Part (2) of Proposition 3.5, implies \(\mathcal{S}_{X}\neq\emptyset\), since \(\mathcal{S}^{\prime}_{X}=\mathcal{S}_{X}\cap\mathcal{S}^{\prime}\), implies \(\pi(X)\neq X\), by Part (2) of Proposition 3.5, implies \(X\in\operatorname{Supp}(\mathcal{T})\).

### Examples of hypergraph transformations

Here we discuss two main examples of hypergraph transformations, namely hyperedge addition/deletion and hypergraph addition/deletion, as well as a combined hypergraph-hyperedge addition transformation.

#### 3.2.1 Hyperedge addition/deletion hypergraph transformations

The _hyperedge space_ of \(\mathscr{X}\) generalises the notion of the edge space of a graph [18, Chapter 1.9, Page 23].

_Definition 3.18_ (**Hyperedge space**).: The _hyperedge space_ \((\mathcal{E},\boxplus,\boxminus,\mathbb{Z}/2\mathbb{Z})\) of \(\mathscr{X}\) is the vector space over the field \(\mathbb{Z}/2\mathbb{Z}\) with underlying set \(\mathcal{E}:=\mathcal{O}\big{(}E(\mathscr{X})\big{)}\), where addition \(\boxplus\) is the operation of symmetric difference, and scalar multiplication \(\boxminus\) is given by \(0\boxminus F:=\emptyset\) and \(1\boxminus F:=F\) for all \(F\in\mathcal{E}\). The hyperedge space has a basis given by the collection of all singleton sets in \(\mathcal{E}\).

_Notation 3.19_.: Suppose \(H\subseteq E(\mathscr{X})\) is nonempty and \(X\in\mathscr{X}\) with \(\bigcup H\subseteq V(X)\). We denote by \(X\boxplus H\) the hypergraph in \(\mathscr{X}\) with vertex set \(V(X)\) and hyperedge set \(E(X)\boxplus H\).

_Definition 3.20_ (**Hyperedge addition/deletion partial transformation**).: Suppose \(H\subseteq E(\mathscr{X})\) is nonempty and \(\mathcal{X}\subseteq\mathscr{X}\). We define the _hyperedge addition/deletion partial transformation_ \(\pi_{H}\colon\mathcal{X}\to\mathscr{X}\) such that, for \(X\in\mathcal{X}\),

\[\pi_{H}(X)=\begin{cases}X\boxplus H&\text{if }\bigcup H\subseteq V(X),\\ X&\text{if }\bigcup H\nsubseteq V(X).\end{cases} \tag{3}\]

Therefore, if \(\bigcup H\subseteq V(X)\) then \(\pi_{H}\) adds all of the hyperedges in \(H\backslash E(X)\) to \(X\) and deletes all of the hyperedges in \(H\cap E(X)\) from \(X\), otherwise \(\pi_{H}\) fixes \(X\).
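Definition 3.20 is easy to prototype, since \(\boxplus\) is just symmetric difference on labelled hyperedges. The sketch below (ours, reusing the toy label-to-frozenset encoding from Section 2, with \(H\) nonempty) applies \(\pi_{H}\) and illustrates that it is an involution on the hypergraphs it does not fix.

```python
# A sketch (ours) of pi_H from Definition 3.20, with H a nonempty dict of
# labelled hyperedges and [+] realised as symmetric difference on (label,
# hyperedge) pairs; the encoding is an assumption of this illustration.

def pi_H(H: dict):
    def pi(X):
        V, E = X
        if not set().union(*H.values()) <= V:      # the union of H must lie in V(X)
            return X
        return (V, dict(set(E.items()) ^ set(H.items())))
    return pi

H = {"h": frozenset({"u", "v"})}
X = ({"u", "v", "w"}, {"h": frozenset({"u", "v"})})
assert pi_H(H)(X) == ({"u", "v", "w"}, {})         # h was present, so it is deleted
assert pi_H(H)(pi_H(H)(X)) == X                    # adding it back restores X
```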
_Notation 3.21_.: Suppose \(H\subseteq E(\mathscr{X})\) and \(X\in\mathscr{X}\). We denote by \(X\wedge H\) the hypergraph in \(\mathscr{X}\) given by \(X\wedge H:=\bigcup\{\,C\in\mathcal{C}(X)\mid e\cap V(C)\neq\emptyset\text{ for some }e\in H\,\}\). Note that \((X\wedge H)\wedge H=X\wedge H\), and if \(H=\emptyset\) then \(X\wedge H=\mathcal{N}\).

_Definition 3.22_ (**\(H\)-closed set of hypergraphs**).: Suppose \(H\subseteq E(\mathscr{X})\) is nonempty and \(\mathcal{X}\subseteq\mathscr{X}\). We say that \(\mathcal{X}\) is _\(H\)-closed for addition_ (resp. _\(H\)-closed for deletion_) if \(X\in\mathcal{X}\), \(\bigcup H\subseteq V(X)\), and \(H\cap E(X)=\emptyset\) (resp. \(H\subseteq E(X)\)) imply \(X\wedge H\in\mathcal{X}\). We say that \(\mathcal{X}\) is _\(H\)-closed_ if \(X\in\mathcal{X}\) and \(\bigcup H\subseteq V(X)\) imply \(X\wedge H\in\mathcal{X}\).

**Proposition 3.23**.: _Suppose \(H\subseteq E(\mathscr{X})\) is nonempty, \(\mathcal{X}\subseteq\mathscr{X}\) is \(H\)-closed, and \(\pi_{H}\colon\mathcal{X}\to\mathscr{X}\) is the hyperedge addition/deletion partial transformation. Define_

\[\mathcal{S}:=\{\,S\in\mathcal{X}\mid\bigcup H\subseteq V(S)\text{ and }S=S\wedge H\,\}, \tag{4}\]

_and for \(X\in\mathcal{X}\) define_

\[\mathcal{D}_{X}:=\begin{cases}\{X\wedge H\}&\text{if }\bigcup H\subseteq V(X),\\ \emptyset&\text{if }\bigcup H\nsubseteq V(X).\end{cases} \tag{5}\]

_Then \(\mathcal{T}_{H}:=(\mathcal{X},\pi_{H},\mathcal{S})\) is a hypergraph transformation on \(\mathscr{X}\) with \(\mathcal{S}\)-maximal subsets \(\{\mathcal{D}_{X}\}_{X\in\mathcal{X}}\), and the collection of distinguished hypergraphs \(\mathcal{S}\) is greatest for \(\pi_{H}\) with respect to inclusion._

Proof.: For nonredundancy, if \(S\in\mathcal{S}\) then \(\pi_{H}(S)=S\boxplus H\) modifies all components of \(S\), since \(S=S\wedge H\), so \(\mathcal{C}(S)\cap\mathcal{C}\big{(}\pi_{H}(S)\big{)}=\emptyset\); additionally, \(\mathcal{N}\notin\mathcal{S}\).

To see that \(\mathcal{S}\) is component maximal, let \(X\in\mathcal{X}\). Conditions (1) to (3) of Definition 3.1 follow immediately from the definition of \(\mathcal{D}_{X}\). For Condition (4) of Definition 3.1, if \(S\in\mathcal{S}\) and \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\) then \(S=X\wedge H\), so \(\mathcal{D}_{X}=\{S\}\); note that if \(\mathcal{C}(S)\nsubseteq\mathcal{C}(X)\) for all \(S\in\mathcal{S}\) then, since \(\mathcal{X}\) is \(H\)-closed, \(\mathcal{D}_{X}=\emptyset\). It follows that \(\mathcal{D}_{X}\) is \(\mathcal{S}\)-maximal.

To show the direct sum decomposition is preserved, let \(X\in\mathcal{X}\). Note that \(\mathcal{S}_{X}=\mathcal{D}_{X}\), so \(|\mathcal{S}_{X}|=|\pi_{H}(\mathcal{S}_{X})|\leq 1\), and it also holds trivially that \(\pi_{H}(\mathcal{S}_{X})\) is pairwise vertex disjoint. Now, note that \(\mathcal{S}_{X}\neq\emptyset\) if and only if \(\mathcal{D}_{X}\neq\emptyset\) if and only if \(\bigcup H\subseteq V(X)\). So, if \(\mathcal{S}_{X}=\{S\}\) then \(\pi_{H}(X)=X\boxplus H=(X\ominus S)\oplus(S\boxplus H)=\bar{X}\oplus\pi_{H}(S)\) where \(\bar{X}=X\ominus S\); and if \(\mathcal{S}_{X}=\emptyset\) then \(\pi_{H}(X)=X=\bar{X}\). In any case we have \(\pi_{H}(X)=\bar{X}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi_{H}(S)\big{)}\), where \(\bar{X}:=X\ominus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}S\big{)}\). We conclude that \(\mathcal{T}_{H}\) is a hypergraph transformation on \(\mathscr{X}\) with \(\mathcal{S}\)-maximal subsets \(\{\mathcal{D}_{X}\}_{X\in\mathcal{X}}\).
Finally we show that \(\mathcal{S}\) is greatest for \(\pi_{H}\), so let \(T\in\mathcal{X}\) be such that \(\big{(}\mathcal{X},\pi_{H},\mathcal{S}\cup\{T\}\big{)}\) is a hypergraph transformation. Nonredundancy implies \(T\neq\mathcal{N}\) since \(\pi_{H}(\mathcal{N})=\mathcal{N}\), and also \(\pi_{H}(T)\neq T\) since \(\mathcal{C}(T)\cap\mathcal{C}\big{(}\pi_{H}(T)\big{)}=\emptyset\), hence we must have \(\bigcup H\subseteq V(T)\) by the definition of \(\pi_{H}\). Suppose there exists \(C\in\mathcal{C}(T)\) such that \(e\cap V(C)=\emptyset\) for all \(e\in H\). Then, \(\mathcal{C}\big{(}\pi_{H}(T)\big{)}=\mathcal{C}(T\boxplus H)=\{C\}\cup\mathcal{C}\big{(}(T\backslash C)\boxplus H\big{)}\), so \(C\in\mathcal{C}(T)\cap\mathcal{C}\big{(}\pi_{H}(T)\big{)}\), contradicting nonredundancy. So we must have \(T=T\wedge H\). Therefore \(T\in\mathcal{S}\).

_Definition 3.24_ (**Hyperedge addition partial transformation**).: Suppose \(H\subseteq E(\mathscr{X})\) is nonempty and \(\mathcal{X}\subseteq\mathscr{X}\). We define the _hyperedge addition partial transformation_ \(\pi_{H}^{+}\colon\mathcal{X}\to\mathscr{X}\) such that, for \(X\in\mathcal{X}\),

\[\pi_{H}^{+}(X)=\begin{cases}X\boxplus H&\text{if }\bigcup H\subseteq V(X)\text{ and }H\cap E(X)=\emptyset,\\ X&\text{if }\bigcup H\nsubseteq V(X)\text{ or }H\cap E(X)\neq\emptyset.\end{cases} \tag{6}\]

Therefore, if \(\bigcup H\subseteq V(X)\) and \(H\cap E(X)=\emptyset\) then \(\pi_{H}^{+}\) adds all of the hyperedges in \(H\) to \(X\), otherwise \(\pi_{H}^{+}\) fixes \(X\).

**Proposition 3.25**.: _Suppose \(H\subseteq E(\mathscr{X})\) is nonempty, \(\mathcal{X}\subseteq\mathscr{X}\) is \(H\)-closed for addition, and \(\pi_{H}^{+}\colon\mathcal{X}\to\mathscr{X}\) is the hyperedge addition partial transformation. Define_

\[\mathcal{S}^{+}:=\{\,S\in\mathcal{X}\mid\bigcup H\subseteq V(S)\text{, }H\cap E(S)=\emptyset\text{, and }S=S\wedge H\,\}, \tag{7}\]

_and for \(X\in\mathcal{X}\) define_

\[\mathcal{D}^{+}_{X}:=\begin{cases}\{X\wedge H\}&\text{if }\bigcup H\subseteq V(X)\text{ and }H\cap E(X)=\emptyset,\\ \emptyset&\text{if }\bigcup H\nsubseteq V(X)\text{ or }H\cap E(X)\neq\emptyset.\end{cases} \tag{8}\]

_Then:_

1. _\(\mathcal{T}^{+}_{H}:=(\mathcal{X},\pi^{+}_{H},\mathcal{S}^{+})\) is a hypergraph transformation on \(\mathscr{X}\) with \(\mathcal{S}^{+}\)-maximal subsets \(\{\mathcal{D}^{+}_{X}\}_{X\in\mathcal{X}}\)._
2. _If \(\mathcal{X}\) is \(H\)-closed then \(\mathcal{T}^{+}_{H}\) is the support reduction of the hyperedge addition/deletion transformation \(\mathcal{T}_{H}:=(\mathcal{X},\pi_{H},\mathcal{S})\) corresponding to \(\mathcal{S}^{+}\), such that \(\mathcal{S}^{+}=\{\,S\in\mathcal{S}\mid H\cap E(S)=\emptyset\,\}\) and \(\mathcal{D}^{+}_{X}=\mathcal{D}_{X}\cap\mathcal{S}^{+}\) for \(X\in\mathcal{X}\)._

Proof.: (1) We omit the proof as it is similar to the proof of Proposition 3.23.

(2) The subset \(\mathcal{S}^{+}=\{\,S\in\mathcal{S}\mid H\cap E(S)=\emptyset\,\}\) is upward closed with respect to \(\mathcal{S}\) since \(S\in\mathcal{S}^{+}\), \(T\in\mathcal{S}\), and \(\mathcal{C}(S)\subseteq\mathcal{C}(T)\) imply \(T=S\) and hence \(T\in\mathcal{S}^{+}\). Further, \(\pi^{+}_{H}|_{\mathcal{S}^{+}}=\pi_{H}|_{\mathcal{S}^{+}}\), and Proposition 3.23 implies \(\mathcal{T}_{H}\) is a hypergraph transformation since \(\mathcal{X}\) is \(H\)-closed. It follows from Proposition 3.17 that \(\mathcal{T}^{+}_{H}\) is the support reduction of \(\mathcal{T}_{H}\) corresponding to \(\mathcal{S}^{+}\), and \(\mathcal{D}^{+}_{X}=\mathcal{D}_{X}\cap\mathcal{S}^{+}\) for \(X\in\mathcal{X}\).
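The support reduction of Proposition 3.25 can likewise be illustrated. In the sketch below (ours, same toy encoding), \(\pi^{+}_{H}\) behaves like \(\pi_{H}\) on hypergraphs that contain no hyperedge of \(H\), and fixes everything else, so the deletions are removed from the support.

```python
# A sketch (ours) of pi_H^+ from Definition 3.24: add H when all its vertices
# are present and none of its hyperedges already occur; otherwise fix X.

def pi_H_plus(H: dict):
    def pi(X):
        V, E = X
        covered = set().union(*H.values()) <= V
        fresh = not (set(H.items()) & set(E.items()))
        return (V, {**E, **H}) if covered and fresh else X
    return pi

H = {"h": frozenset({"u", "v"})}
assert pi_H_plus(H)(({"u", "v"}, {})) == ({"u", "v"}, H)   # pure addition
assert pi_H_plus(H)(({"u", "v"}, H)) == ({"u", "v"}, H)    # h present: X is fixed
```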
#### 3.2.2 Hypergraph addition/deletion hypergraph transformations

The _component space_ of \(\mathscr{X}\) is a vector space of connected components.

_Definition 3.26_ (**Component space**).: The _component space_ \((\mathcal{Z},\boxplus,\boxminus,\mathbb{Z}/2\mathbb{Z})\) of \(\mathscr{X}\) is the vector space over the field \(\mathbb{Z}/2\mathbb{Z}\) with underlying set \(\mathcal{Z}:=\mathcal{O}\big{(}\mathcal{C}(\mathscr{X})\big{)}\), where addition \(\boxplus\) is the operation of symmetric difference, and scalar multiplication \(\boxminus\) is given by \(0\boxminus G:=\emptyset\) and \(1\boxminus G:=G\) for all \(G\in\mathcal{Z}\). The component space has a basis given by the collection of all singleton sets in \(\mathcal{Z}\).

_Remark 3.27_.: Note that while the notation for the addition and scalar multiplication operations is the same for both the hyperedge space and the component space, no ambiguity is possible.

_Definition 3.28_ (**Hypergraph addition/deletion partial transformation**).: Let \(W\in\mathscr{X}^{*}\), and \(\mathcal{X}\subseteq\mathscr{X}\) with \(\mathcal{N}\in\mathcal{X}\). We define the _hypergraph addition/deletion partial transformation_ \(\pi_{W}\colon\mathcal{X}\to\mathscr{X}\) such that, for \(X\in\mathcal{X}\),

\[\pi_{W}(X)=\begin{cases}\bigoplus\mathcal{C}(X)\boxplus\mathcal{C}(W)&\text{if }\mathcal{C}(W)\subseteq\mathcal{C}(X)\text{ or }V(W)\cap V(X)=\emptyset,\\ X&\text{if }\mathcal{C}(W)\nsubseteq\mathcal{C}(X)\text{ and }V(W)\cap V(X)\neq\emptyset.\end{cases} \tag{9}\]

Therefore, if \(V(W)\cap V(X)=\emptyset\) then \(\pi_{W}\) adds the components of the hypergraph \(W\) to \(X\), if \(\mathcal{C}(W)\subseteq\mathcal{C}(X)\) then \(\pi_{W}\) deletes the components of the hypergraph \(W\) from \(X\), otherwise \(\pi_{W}\) fixes \(X\).

**Proposition 3.29**.: _Suppose \(W\in\mathscr{X}^{*}\), \(\mathcal{X}\subseteq\mathscr{X}\) with \(\mathcal{N}\in\mathcal{X}\), and \(\pi_{W}\colon\mathcal{X}\to\mathscr{X}\) is the hypergraph addition/deletion partial transformation. Define_

\[\mathcal{S}:=\{\mathcal{N},W\}, \tag{10}\]

_and for \(X\in\mathcal{X}\) define_

\[\mathcal{D}_{X}:=\begin{cases}\{\mathcal{N},W\}&\text{if }\mathcal{C}(W)\subseteq\mathcal{C}(X),\\ \{\mathcal{N}\}&\text{if }\mathcal{C}(W)\nsubseteq\mathcal{C}(X).\end{cases} \tag{11}\]

_Then \(\mathcal{T}_{W}:=(\mathcal{X},\pi_{W},\mathcal{S})\) is a hypergraph transformation on \(\mathscr{X}\) with \(\mathcal{S}\)-maximal subsets \(\{\mathcal{D}_{X}\}_{X\in\mathcal{X}}\), and the collection of distinguished hypergraphs \(\mathcal{S}\) is greatest for \(\pi_{W}\) with respect to inclusion._

Proof.: For nonredundancy, let \(S\in\mathcal{S}\). If \(S=\mathcal{N}\) then \(\mathcal{C}(S)=\emptyset\), hence \(\mathcal{C}(S)\cap\mathcal{C}\big{(}\pi_{W}(S)\big{)}=\emptyset\). If \(S=W\) then \(\pi_{W}(S)=\bigoplus\mathcal{C}(S)\boxplus\mathcal{C}(W)=\bigoplus\mathcal{C}(W)\boxplus\mathcal{C}(W)=\bigoplus\emptyset=\mathcal{N}\), so \(\mathcal{C}(S)\cap\mathcal{C}\big{(}\pi_{W}(S)\big{)}=\emptyset\). Further, \(\mathcal{N}\in\mathcal{S}\) and \(\pi_{W}(\mathcal{N})=\bigoplus\mathcal{C}(\mathcal{N})\boxplus\mathcal{C}(W)=\bigoplus\mathcal{C}(W)=W\neq\mathcal{N}\).
The set \(\mathcal{S}\) is component maximal since Conditions (1) to (4) of Definition 3.1 follow immediately from the definition of the subsets \(\{\mathcal{D}_{X}\}_{X\in\mathcal{X}}\). To show the direct sum decomposition is preserved, let \(X\in\mathcal{X}\). Then \(\mathcal{D}_{X}\) and \(\mathcal{S}_{X}\) take one of the following forms: if \(\mathcal{C}(W)\nsubseteq\mathcal{C}(X)\) and \(V(W)\cap V(X)\neq\emptyset\) then \(\mathcal{D}_{X}=\{\mathcal{N}\}\) and \(\mathcal{S}_{X}=\emptyset\); if \(\mathcal{C}(W)\nsubseteq\mathcal{C}(X)\) and \(V(W)\cap V(X)=\emptyset\) then \(\mathcal{D}_{X}=\{\mathcal{N}\}\) and \(\mathcal{S}_{X}=\{\mathcal{N}\}\); and, if \(\mathcal{C}(W)\subseteq\mathcal{C}(X)\) then \(\mathcal{D}_{X}=\{\mathcal{N},W\}\), and also \(V(W)\cap V(X)\neq\emptyset\), so \(\mathcal{S}_{X}=\{W\}\). Then \(|\mathcal{S}_{X}|=|\pi_{W}(\mathcal{S}_{X})|\leq 1\), and it also holds trivially that \(\pi_{W}(\mathcal{S}_{X})\) is pairwise vertex disjoint. We consider three cases. First, if \(\mathcal{C}(W)\nsubseteq\mathcal{C}(X)\) and \(V(W)\cap V(X)\neq\emptyset\) then \(\mathcal{S}_{X}=\emptyset\) and \(\pi_{W}(X)=X=\bar{X}=\bar{X}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi_{W}(S)\big{)}\) where \(\bar{X}:=X\ominus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}S\big{)}\). Second, if \(\mathcal{C}(W)\nsubseteq\mathcal{C}(X)\) and \(V(W)\cap V(X)=\emptyset\) then \(\mathcal{S}_{X}=\{\mathcal{N}\}\) and \(\pi_{W}(X)=\bigoplus\mathcal{C}(X)\boxplus\mathcal{C}(W)=\bigoplus\mathcal{C}(X)\oplus\bigoplus\mathcal{C}(W)=X\oplus W=(X\ominus\mathcal{N})\oplus\pi_{W}(\mathcal{N})=\big{(}X\ominus(\bigoplus_{S\in\mathcal{S}_{X}}S)\big{)}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi_{W}(S)\big{)}=\bar{X}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi_{W}(S)\big{)}\), where \(\bar{X}:=X\ominus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}S\big{)}\). Third, if \(\mathcal{C}(W)\subseteq\mathcal{C}(X)\) then \(\mathcal{S}_{X}=\{W\}\) and \(\pi_{W}(X)=\bigoplus\mathcal{C}(X)\boxplus\mathcal{C}(W)=\bigoplus\mathcal{C}(X)\ominus\bigoplus\mathcal{C}(W)=X\ominus W=\big{(}X\ominus W\big{)}\oplus\mathcal{N}=\big{(}X\ominus W\big{)}\oplus\pi_{W}(W)=\big{(}X\ominus(\bigoplus_{S\in\mathcal{S}_{X}}S)\big{)}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi_{W}(S)\big{)}=\bar{X}\oplus\big{(}\bigoplus_{S\in\mathcal{S}_{X}}\pi_{W}(S)\big{)}\), where \(\bar{X}:=\big{(}X\ominus(\bigoplus_{S\in\mathcal{S}_{X}}S)\big{)}\). Finally we show that \(\mathcal{S}\) is greatest for \(\pi_{W}\), so let \(T\in\mathcal{X}\) be such that \(\big{(}\mathcal{X},\pi_{W},\mathcal{S}\cup\{T\}\big{)}\) is a hypergraph transformation. Note that either \(\mathcal{C}(W)\subseteq\mathcal{C}(T)\) or \(V(W)\cap V(T)=\emptyset\), otherwise \(\pi_{W}(T)=T\) which contradicts nonredundancy. First, suppose \(\mathcal{C}(W)\subseteq\mathcal{C}(T)\). If \(T\neq W\) then there exists \(C\in\mathcal{C}(T)\backslash\mathcal{C}(W)\), and since \(\pi_{W}(T)=\bigoplus\mathcal{C}(T)\boxplus\mathcal{C}(W)=\bigoplus\mathcal{C}(T)\backslash\mathcal{C}(W)\) we have \(C\in\mathcal{C}(T)\cap\mathcal{C}(\pi_{W}(T))\), contradicting nonredundancy, hence \(T=W\in\mathcal{S}\). Second, suppose \(V(W)\cap V(T)=\emptyset\). Then \(\mathcal{C}\big{(}\pi_{W}(T)\big{)}=\mathcal{C}\big{(}\bigoplus\mathcal{C}(T)\boxplus\mathcal{C}(W)\big{)}=\mathcal{C}(T)\cup\mathcal{C}(W)\), and by nonredundancy we have \(\emptyset=\mathcal{C}(T)\cap\mathcal{C}\big{(}\pi_{W}(T)\big{)}=\mathcal{C}(T)\), therefore \(T=\mathcal{N}\in\mathcal{S}\).
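To make the case analysis of Definition 3.28 concrete, the following Python sketch implements \(\pi_{W}\) on a naive encoding of a hypergraph as a pair (vertices, hyperedges); the encoding, the function names, and the component representation are illustrative assumptions, not part of the formal development.

```python
# A minimal sketch of the hypergraph addition/deletion partial
# transformation pi_W of Definition 3.28.  A hypergraph is assumed to
# be encoded as (set_of_vertices, set_of_frozenset_hyperedges).
def components(X):
    """Connected components of X, each as a pair (vertices, hyperedges)."""
    V, E = X
    parent = {v: v for v in V}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for e in E:
        vs = list(e)
        for u, w in zip(vs, vs[1:]):
            parent[find(u)] = find(w)
    blocks = {}
    for v in V:
        blocks.setdefault(find(v), set()).add(v)
    return {(frozenset(c), frozenset(e for e in E if set(e) <= c))
            for c in blocks.values()}

def assemble(comps):
    """Direct sum of pairwise vertex-disjoint components."""
    return (set().union(*(c[0] for c in comps)) if comps else set(),
            set().union(*(c[1] for c in comps)) if comps else set())

def pi_W(X, W):
    """Delete C(W) from X when C(W) is contained in C(X), add it when
    V(W) and V(X) are disjoint, and fix X otherwise."""
    CX, CW = components(X), components(W)
    if CW <= CX or not (set(X[0]) & set(W[0])):
        return assemble(CX ^ CW)   # direct sum over C(X) boxplus C(W)
    return X

# Example: X is two disjoint triangles and W is one of them, so the
# deletion branch applies and that triangle is removed from X.
t1, t2 = frozenset({1, 2, 3}), frozenset({4, 5, 6})
X = ({1, 2, 3, 4, 5, 6}, {t1, t2})
W = ({4, 5, 6}, {t2})
print(pi_W(X, W))   # ({1, 2, 3}, {frozenset({1, 2, 3})})
```

Note that both nontrivial branches of Equation (9) collapse to the single component-space expression \(\bigoplus\mathcal{C}(X)\boxplus\mathcal{C}(W)\), which is exactly what `assemble(CX ^ CW)` computes.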
_Definition 3.30_ (**Hypergraph addition partial transformation**).: Let \(W\in\mathscr{X}^{*}\), and \(\mathcal{X}\subseteq\mathscr{X}\) with \(\mathcal{N}\in\mathcal{X}\). We define the _hypergraph addition partial transformation_\(\pi_{W}^{+}\colon\mathcal{X}\to\mathscr{X}\) such that, for \(X\in\mathcal{X}\), \[\pi_{W}^{+}(X)=\begin{cases}X\oplus W&\text{if }V(W)\cap V(X)=\emptyset,\\ X&\text{if }V(W)\cap V(X)\neq\emptyset.\end{cases} \tag{12}\] Therefore, if \(V(W)\cap V(X)=\emptyset\) then \(\pi_{W}^{+}\) adds the components of the hypergraph \(W\) to \(X\), otherwise \(\pi_{W}^{+}\) fixes \(X\). **Proposition 3.31**.: _Suppose \(W\in\mathscr{X}^{*}\), \(\mathcal{X}\subseteq\mathscr{X}\) with \(\mathcal{N}\in\mathcal{X}\), and \(\pi_{W}^{+}\colon\mathcal{X}\to\mathscr{X}\) is the hypergraph addition partial transformation. Define_ \[\mathcal{S}^{+}:=\{\mathcal{N}\}, \tag{13}\] _and for \(X\in\mathcal{X}\) define_ \[\mathcal{D}_{X}^{+}:=\{\mathcal{N}\}. \tag{14}\] _Then \(\mathcal{T}_{W}^{+}:=(\mathcal{X},\pi_{W}^{+},\mathcal{S}^{+})\) is a hypergraph transformation on \(\mathscr{X}\) with \(\mathcal{S}^{+}\)-maximal subsets \(\{\mathcal{D}_{X}^{+}\}_{X\in\mathcal{X}}\)._ Proof.: We omit the proof as it is similar to the proof of Proposition 3.29. _Remark 3.32_.: Suppose \(W\in\mathscr{X}^{*}\), and \(\mathcal{X}\subseteq\mathscr{X}\) with \(\mathcal{N}\in\mathcal{X}\). Then \(\mathcal{S}^{+}\) in Equation (13) of Proposition 3.31 is not upward closed with respect to \(\mathcal{S}\) in Equation (10) of Proposition 3.29, since \(\mathcal{N}\in\mathcal{S}^{+}\), \(W\in\mathcal{S}\), and \(\mathcal{C}(\mathcal{N})\subseteq\mathcal{C}(W)\); however, \(W\notin\mathcal{S}^{+}\). In fact, if \(\mathcal{S}^{+}\) were upward closed with respect to \(\mathcal{S}\) then Part (2) of Proposition 3.13 would imply that \(\mathcal{S}^{+}=\mathcal{S}\). In particular, \(\mathcal{T}_{W}^{+}\) is not a support reduction of \(\mathcal{T}_{W}\). #### 3.2.3 Combined hypergraph-hyperedge addition hypergraph transformations Here we provide an example of a hypergraph transformation that performs a hypergraph addition followed by the addition of hyperedges. _Definition 3.33_ (**Hypergraph-hyperedge addition partial transformation**).: Suppose \(H\subseteq E(\mathscr{X})\) is nonempty, \(W\in\mathscr{X}^{*}\), and \(\mathcal{X}\subseteq\mathscr{X}\) with \(V(\mathcal{X})\cap V(W)=\emptyset\). We define the _hypergraph-hyperedge addition partial transformation \(\pi^{++}_{W,H}\colon\mathcal{X}\to\mathscr{X}\)_ such that, for \(X\in\mathcal{X}\), \[\pi^{++}_{W,H}(X)=\begin{cases}(X\oplus W)\boxplus H&\text{if $\bigcup H\subseteq V(X\oplus W)$ and $H\cap E(X\oplus W)=\emptyset$},\\ X&\text{if $\bigcup H\nsubseteq V(X\oplus W)$ or $H\cap E(X\oplus W)\neq\emptyset$}.\end{cases} \tag{15}\] Therefore, if \(\bigcup H\subseteq V(X\oplus W)\) and \(H\cap E(X\oplus W)=\emptyset\) then \(\pi^{++}_{W,H}\) adds the components of the hypergraph \(W\) to \(X\) and then adds all of the hyperedges in \(H\) to \(X\oplus W\), otherwise \(\pi^{++}_{W,H}\) fixes \(X\). _Definition 3.34_ (\(W\)-\(H\)-closed for addition set of hypergraphs).: Suppose \(H\subseteq E(\mathscr{X})\) is nonempty, \(W\in\mathscr{X}^{*}\), and \(\mathcal{X}\subseteq\mathscr{X}\) with \(V(\mathcal{X})\cap V(W)=\emptyset\). We say that \(\mathcal{X}\) is \(W\)-\(H\)_-closed for addition_ if \(X\in\mathcal{X}\), \(\bigcup H\subseteq V(X\oplus W)\), \(H\cap E(X\oplus W)=\emptyset\), and \(X\wedge H\neq\mathcal{N}\) imply \(X\wedge H\in\mathcal{X}\).
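Before turning to the corresponding proposition, the following Python sketch illustrates Definition 3.33 on the same naive (vertices, hyperedges) encoding as above; `oplus` and `boxplus` are stand-ins for the operations \(\oplus\) and \(\boxplus\) of the paper, and all names are illustrative assumptions.

```python
# An illustrative sketch of the hypergraph-hyperedge addition partial
# transformation pi_{W,H}^{++} of Definition 3.33, assuming hypergraphs
# are encoded as (set_of_vertices, set_of_frozenset_hyperedges).
def oplus(X, W):
    """Direct sum of vertex-disjoint hypergraphs."""
    (VX, EX), (VW, EW) = X, W
    assert not (set(VX) & set(VW)), "oplus requires disjoint vertex sets"
    return (set(VX) | set(VW), set(EX) | set(EW))

def boxplus(X, H):
    """Symmetric difference of the hyperedge set of X with H."""
    VX, EX = X
    return (set(VX), set(EX) ^ set(H))

def pi_pp(X, W, H):
    """(X oplus W) boxplus H when every hyperedge of H fits inside
    V(X oplus W) and none is already present; otherwise fix X."""
    Y = oplus(X, W)
    VY, EY = Y
    if all(set(e) <= VY for e in H) and not (set(H) & EY):
        return boxplus(Y, H)
    return X

# Example: W contributes a fresh component and H then bridges it to X.
X = ({1, 2}, {frozenset({1, 2})})
W = ({3, 4}, {frozenset({3, 4})})
H = {frozenset({2, 3})}
print(pi_pp(X, W, H))
# ({1, 2, 3, 4}, {frozenset({1,2}), frozenset({3,4}), frozenset({2,3})})
```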
**Proposition 3.35**.: _Suppose \(H\subseteq E(\mathscr{X})\) is nonempty, \(W\in\mathscr{X}^{*}\), \(\mathcal{X}\subseteq\mathscr{X}\) with \(V(\mathcal{X})\cap V(W)=\emptyset\) is \(W\)-\(H\)-closed for addition, and \(\pi^{++}_{W,H}\colon\mathcal{X}\to\mathscr{X}\) is the hypergraph-hyperedge addition partial transformation. Define_ \[\mathcal{S}^{++}:=\{\,X\wedge H\mid X\in\mathcal{X}\text{, }\bigcup H\subseteq V (X\oplus W)\text{, }H\cap E(X\oplus W)=\emptyset\text{, and }X\wedge H\neq\mathcal{N}\,\}, \tag{16}\] _and for \(X\in\mathcal{X}\) define_ \[\mathcal{D}^{++}_{X}:=\begin{cases}\{X\wedge H\}&\text{if $\bigcup H\subseteq V (X\oplus W)$, $H\cap E(X\oplus W)=\emptyset$, and $X\wedge H\neq\mathcal{N}$},\\ \emptyset&\text{if $\bigcup H\nsubseteq V(X\oplus W)$ or $H\cap E(X\oplus W)\neq \emptyset$ or $X\wedge H=\mathcal{N}$}.\end{cases} \tag{17}\] _Then \(\mathcal{T}^{++}_{W,H}:=(\mathcal{X},\pi^{++}_{W,H},\mathcal{S}^{++})\) is a hypergraph transformation on \(\mathscr{X}\) with \(\mathcal{S}^{++}\)-maximal subsets \(\{\mathcal{D}^{++}_{X}\}_{X\in\mathcal{X}}\)._ Proof.: For nonredundancy, suppose \(S:=X\wedge H\in\mathcal{S}^{++}\) for some \(X\in\mathcal{X}\). Then \(\bigcup H\subseteq V(X\oplus W)\) implies \(\bigcup H\subseteq V(S\oplus W)\), and \(H\cap E(X\oplus W)=\emptyset\) implies \(H\cap E(S\oplus W)=\emptyset\). So \(\pi^{++}_{W,H}(S)=(S\oplus W)\boxplus H\) modifies all components of \(S\), since \(S=S\wedge H\) and \(S\neq\mathcal{N}\), and it follows that \(\mathcal{C}(S)\cap\mathcal{C}\big{(}\pi^{++}_{W,H}(S)\big{)}=\emptyset\), noting \(V(W)\cap V(S)=\emptyset\). Additionally, \(\mathcal{N}\notin\mathcal{S}^{++}\). To see that \(\mathcal{S}^{++}\) is component maximal, let \(X\in\mathcal{X}\). Conditions (1) to (3) of Definition 3.1 follow immediately from the definition of \(\mathcal{D}^{++}_{X}\). For Condition (4) of Definition 3.1, suppose \(S:=X\wedge H\in\mathcal{S}^{++}\) for some \(X\in\mathcal{X}\) and \(\mathcal{C}(S)\subseteq\mathcal{C}(Y)\) for some \(Y\in\mathcal{X}\). Then: \(\mathcal{C}(S)\subseteq\mathcal{C}(Y\wedge H)\); \(S\neq\mathcal{N}\) implies \(Y\wedge H\neq\mathcal{N}\); \(\bigcup H\subseteq V(X\oplus W)\) implies \(\bigcup H\subseteq V(S\oplus W)\), hence \(\bigcup H\subseteq V(Y\oplus W)\); \(\mathcal{C}(S\oplus W)\subseteq\mathcal{C}(Y\oplus W)\) and \(\bigcup H\subseteq V(S\oplus W)\subseteq V(Y\oplus W)\) imply \((S\oplus W)\wedge H=(Y\oplus W)\wedge H\), and since \(H\cap E(S\oplus W)=H\cap E(X\oplus W)=\emptyset\) we have \(H\cap E(Y\oplus W)=\emptyset\). So \(\mathcal{D}^{++}_{Y}=\{Y\wedge H\}\) is the required \(\mathcal{S}^{++}\)-maximal subset. Note that if \(\mathcal{C}(S)\nsubseteq\mathcal{C}(Y)\) for all \(S\in\mathcal{S}^{++}\) then, since \(\mathcal{X}\) is \(W\)-\(H\)-closed for addition, \(\mathcal{D}^{++}_{Y}=\emptyset\). To show the direct sum decomposition is preserved, let \(X\in\mathcal{X}\). Note that \(\mathcal{S}^{++}_{X}=\mathcal{D}^{++}_{X}\), so \(|\pi^{++}_{W,H}(\mathcal{S}^{++}_{X})|\leqslant 1\), and it also holds trivially that \(\pi^{++}_{W,H}(\mathcal{S}^{++}_{X})\) is pairwise vertex disjoint. Now, note that \(\mathcal{S}^{++}_{X}\neq\emptyset\) if and only if \(\mathcal{D}^{++}_{X}\neq\emptyset\) if and only if \(\bigcup H\subseteq V(X\oplus W)\), \(H\cap E(X\oplus W)=\emptyset\), and \(X\wedge H\neq\mathcal{N}\). 
So, if \(\mathcal{S}^{++}_{X}=\{S\}\) then \(\pi^{++}_{W,H}(X)=(X\oplus W)\boxplus H=(X\ominus S)\oplus\big{(}(S\oplus W)\boxplus H\big{)}=\bar{X}\oplus\pi^{++}_{W,H}(S)\) where \(\bar{X}=X\ominus S\); and if \(\mathcal{S}^{++}_{X}=\emptyset\) then \(\pi^{++}_{W,H}(X)=X=\bar{X}\). In any case we have \(\pi^{++}_{W,H}(X)=\bar{X}\oplus\big{(}\bigoplus_{S\in\mathcal{S}^{++}_{X}}\pi^{++}_{W,H}(S)\big{)}\), where \(\bar{X}:=X\ominus(\bigoplus_{S\in\mathcal{S}^{++}_{X}}S)\). We conclude that \(\mathcal{T}^{++}_{W,H}\) is a hypergraph transformation on \(\mathscr{X}\) with \(\mathcal{S}^{++}\)-maximal subsets \(\{\mathcal{D}^{++}_{X}\}_{X\in\mathcal{X}}\). ## 4 Quotient hypergraph transformations ### 4.1 Definition and basic properties Given a hypergraph transformation \(\mathcal{T}\) on \(\mathscr{X}\) and an equivalence relation \(R_{V(\mathscr{X})}\) on the set of vertices \(V(\mathscr{X})\), we consider the notion of a corresponding _quotient hypergraph transformation_ of \(\mathcal{T}\) on the hypergraph family \(\mathscr{X}/R_{V(\mathscr{X})}\). The existence of the quotient of a hypergraph transformation depends on the particular equivalence relation \(R_{V(\mathscr{X})}\), since hypergraphs in \(\mathscr{X}\) associated with \(\mathcal{T}\) must project into \(\mathscr{X}/R_{V(\mathscr{X})}\) appropriately. _Definition 4.1_ (**Amenable hypergraph transformation, quotient hypergraph transformation**).: Let \(\mathcal{T}:=(\mathcal{X},\pi,\mathcal{S})\) be a hypergraph transformation on \(\mathscr{X}\) with \(\mathcal{S}\)-maximal subsets \(\{\mathcal{D}_{X}\}_{X\in\mathcal{X}}\), and let \(R_{V(\mathscr{X})}\) be an equivalence relation on \(V(\mathscr{X})\). Define: 1. \(\mathcal{S}/R_{V(\mathscr{X})}:=\{\,S//R_{V(\mathscr{X})}\mid S\in\mathcal{S}\,\}\). 2. \(\mathcal{D}_{X}/R_{V(\mathscr{X})}:=\{\,S//R_{V(\mathscr{X})}\mid S\in\mathcal{D}_{X}\,\}\) for \(X//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\). 3. \(\pi/R_{V(\mathscr{X})}\colon\mathcal{X}/R_{V(\mathscr{X})}\to\mathscr{X}/R_{V(\mathscr{X})}\) such that \(\pi/R_{V(\mathscr{X})}\big{(}X//R_{V(\mathscr{X})}\big{)}:=\pi(X)//R_{V(\mathscr{X})}\) for all \(X//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\). If \(\mathcal{T}/R_{V(\mathscr{X})}:=(\mathcal{X}/R_{V(\mathscr{X})},\pi/R_{V(\mathscr{X})},\mathcal{S}/R_{V(\mathscr{X})})\) is a hypergraph transformation on the hypergraph family \(\mathscr{X}/R_{V(\mathscr{X})}\) then \(\mathcal{T}\) is _amenable_ with respect to \(R_{V(\mathscr{X})}\), and in this case \(\mathcal{T}/R_{V(\mathscr{X})}\) is the _quotient hypergraph transformation_ with respect to \(R_{V(\mathscr{X})}\). _Remark 4.2_.: For \(\mathcal{T}/R_{V(\mathscr{X})}\) to be a hypergraph transformation it is necessary that the following hold: the map \(\pi/R_{V(\mathscr{X})}\) is well defined, that is \(X//R_{V(\mathscr{X})}=Y//R_{V(\mathscr{X})}\) implies \(\pi(X)//R_{V(\mathscr{X})}=\pi(Y)//R_{V(\mathscr{X})}\) for all \(X\), \(Y\in\mathcal{X}\); the subsets \(\mathcal{D}_{X}/R_{V(\mathscr{X})}\) are well defined, that is \(X//R_{V(\mathscr{X})}=Y//R_{V(\mathscr{X})}\) implies \(\mathcal{D}_{X}/R_{V(\mathscr{X})}=\mathcal{D}_{Y}/R_{V(\mathscr{X})}\) for all \(X\), \(Y\in\mathcal{X}\); and, \(\mathcal{T}/R_{V(\mathscr{X})}\) satisfies Conditions (1)-(3) in Definition 3.3.
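Since every construction in this section passes through the vertex-augmented quotient \(X//R_{V(\mathscr{X})}\), the following Python sketch records one way to compute it, assuming (consistently with Definition 4.6 below) that the quotient replaces each vertex by its \(R_{V(\mathscr{X})}\)-class and each hyperedge by the set of classes of its vertices; encoding the relation as a list of blocks is an illustrative choice.

```python
# A sketch of the vertex-augmented quotient X // R, with the
# equivalence relation R given as a partition (`blocks`) of the vertex
# universe; hypergraphs are (set_of_vertices, set_of_frozenset_edges).
def quotient(X, blocks):
    cls = {v: frozenset(b) for b in blocks for v in b}
    VX, EX = X
    Vq = {cls[v] for v in VX}
    Eq = {frozenset(cls[v] for v in e) for e in EX}
    # drop hyperedges that collapse to loops, if loops are disallowed
    Eq = {e for e in Eq if len(e) >= 2}
    return (Vq, Eq)

# Example: identifying 2 with 3 makes the two hyperedges share the
# class {2, 3}, merging two components into one.
X = ({1, 2, 3, 4}, {frozenset({1, 2}), frozenset({3, 4})})
print(quotient(X, [{1}, {2, 3}, {4}]))
```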
The following proposition demonstrates that commutativity of the partial transformations underlying a sequence of amenable hypergraph transformations induces commutativity of the partial transformations underlying the corresponding sequence of quotient hypergraph transformations. **Proposition 4.3**.: _Suppose \(\big{(}\mathcal{T}_{i}:=(\mathcal{X}_{i},\pi_{i},\mathcal{S}^{i})\big{)}_{i=1}^{n}\) is a sequence of hypergraph transformations on \(\mathscr{X}\), where \(n\in\mathbb{N}\) with \(n\geqslant 2\), and \(R_{V(\mathscr{X})}\) is an equivalence relation on \(V(\mathscr{X})\) such that each hypergraph transformation in \((\mathcal{T}_{i})_{i=1}^{n}\) is amenable with respect to \(R_{V(\mathscr{X})}\). Then \(\operatorname{Coin}\big{(}(\bigcirc_{i\in\sigma}\pi_{n+1-i})_{\sigma\in S_{n} }\big{)}/R_{V(\mathscr{X})}\subseteq\operatorname{Coin}\big{(}(\bigcirc_{i \in\sigma}\pi_{n+1-i}/R_{V(\mathscr{X})})_{\sigma\in S_{n}}\big{)}\)._ Proof.: Let \(X//R_{V(\mathscr{X})}\in\operatorname{Coin}\big{(}(\bigcirc_{i\in\sigma}\pi_{ n+1-i})_{\sigma\in S_{n}}\big{)}/R_{V(\mathscr{X})}\) with \(X\in\operatorname{Coin}\big{(}(\bigcirc_{i\in\sigma}\pi_{n+1-i})_{\sigma\in S _{n}}\big{)}\), and let \(\sigma\), \(\sigma^{\prime}\in S_{n}\). Then \[\bigcirc_{i\in\sigma}\pi_{n+1-i}/R_{V(\mathscr{X})}(X//R_{V( \mathscr{X})}) =\bigcirc_{i\in\sigma}\pi_{n+1-i}(X)//R_{V(\mathscr{X})}=\bigcirc_ {i\in\sigma^{\prime}}\pi_{n+1-i}(X)//R_{V(\mathscr{X})}\] \[=\bigcirc_{i\in\sigma^{\prime}}\pi_{n+1-i}/R_{V(\mathscr{X})}(X //R_{V(\mathscr{X})}),\] where the first and third equalities follow from the definition of a quotient hypergraph transformation and from \(\operatorname{Dom}(\pi_{j}/R_{V(\mathscr{X})})=\operatorname{Dom}(\pi_{j})/R_ {V(\mathscr{X})}\) for all \(j\in[n]\), and the second equality follows from the commutativity of the partial transformations \((\pi_{j})_{j=1}^{n}\) on \(\operatorname{Coin}\big{(}(\bigcirc_{i\in\sigma}\pi_{n+1-i})_{\sigma\in S_{n} }\big{)}\). Therefore, \(X//R_{V(\mathscr{X})}\in\operatorname{Coin}\big{(}(\bigcirc_{i\in\sigma}\pi_{ n+1-i}/R_{V(\mathscr{X})})_{\sigma\in S_{n}}\big{)}\). In Subsection 4.2 we discuss two examples of amenable hypergraph transformations. Amenability of a given hypergraph transformation may be realised if the equivalence relation on \(V(\mathscr{X})\) satisfies certain properties. Two examples are \(\mathcal{S}\)-preserving equivalence relations and \(W\)-disjointness preserving equivalence relations: _Definition 4.4_ (\(\mathcal{S}\)**-preserving, \(W\)-disjointness preserving**).: Suppose \(R_{V(\mathscr{X})}\) is an equivalence relation on \(V(\mathscr{X})\), and \(\mathcal{X}\subseteq\mathscr{X}\). 1. If \(\mathcal{S}\subseteq\mathcal{X}\) then \(R_{V(\mathscr{X})}\) is \(\mathcal{S}\)_-preserving with respect to_\(\mathcal{X}\) when \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\) implies \(\mathcal{C}(S//R_{V(\mathscr{X})})\subseteq\mathcal{C}(X//R_{V(\mathscr{X})})\) for all \(S\in\mathcal{S}\) and \(X\in\mathcal{X}\). 2. If \(W\in\mathscr{X}\) then \(R_{V(\mathscr{X})}\) is \(W\)_-disjointness preserving with respect to_\(\mathcal{X}\) when \(V(W)\cap V(X)=\emptyset\) implies \(V(W//R_{V(\mathscr{X})})\cap V(X//R_{V(\mathscr{X})})=\emptyset\) for all \(X\in\mathcal{X}\). _Remark 4.5_.: Suppose \(\mathcal{T}:=(\mathcal{X},\pi,\mathcal{S})\) is a hypergraph transformation on \(\mathscr{X}\) with \(\mathcal{S}\)-maximal subsets \(\{\mathcal{D}_{X}\}_{X\in\mathcal{X}}\). 
If \(\mathcal{T}\) is amenable with respect to the equivalence relation \(R_{V(\mathscr{X})}\) on \(V(\mathscr{X})\) then by Property (2) in Definition 4.1 it follows that \(S\in\mathcal{D}_{X}\) (so that \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\)) implies \(\mathcal{C}(S//R_{V(\mathscr{X})})\subseteq\mathcal{C}(X//R_{V(\mathscr{X})})\). Part (1) of Definition 4.4 is therefore a more general property than this observation. ### 4.2 Examples of quotient hypergraph transformations Here we give two examples of amenable hypergraph transformations, involving hyperedge addition and hypergraph addition. #### 4.2.1 Quotients of hyperedge addition hypergraph transformations Our example of an amenable hypergraph transformation involving hyperedge addition is based on a notion of hyperedge equivalence with respect to an equivalence relation on \(V(\mathscr{X})\). _Definition 4.6_ (**Vertex-augmented quotient of a hyperedge**).: Suppose \(R_{V(\mathscr{X})}\) is an equivalence relation on \(V(\mathscr{X})\), and \(e\in E(\mathscr{X})\). The _vertex-augmented quotient of the hyperedge \(e\)_ with respect to \(R_{V(\mathscr{X})}\) is defined by \(e/\!/R_{V(\mathscr{X})}:=\big{\{}\,[v]_{R_{V(\mathscr{X})}}\mid v\in e\,\big{\}}\in E(\mathscr{X}/R_{V(\mathscr{X})})\), where we assume \(|\{\,[v]_{R_{V(\mathscr{X})}}\mid v\in e\,\}|\geqslant 2\) if we disallow loops. _Remark 4.7_.: 1. To see that \(e/\!/R_{V(\mathscr{X})}\in E(\mathscr{X}/R_{V(\mathscr{X})})\), note that \(e\in E(\mathscr{X})\) implies \(e\in E(X)\) for some \(X\in\mathscr{X}\), so \(e/\!/R_{V(\mathscr{X})}\in E(X/\!/R_{V(\mathscr{X})})\), noting that \(|e|\geqslant 2\) and \(|e/\!/R_{V(\mathscr{X})}|\geqslant 2\) if we disallow loops, hence \(e/\!/R_{V(\mathscr{X})}\in E(\mathscr{X}/R_{V(\mathscr{X})})\). 2. The notation \(e/\!/R_{V(\mathscr{X})}\) is consistent with the definition of a vertex-augmented quotient hypergraph: if \(X\) is a hypergraph with \(f\in E(X)\) then, assuming \(|f/\!/R_{V(\mathscr{X})}|\geqslant 2\) if we disallow loops, the quotient map sends \(f\in E(X)\) to \(f/\!/R_{V(\mathscr{X})}\in E(X/\!/R_{V(\mathscr{X})})\). _Definition 4.8_ (\(e\)**-equivalent hyperedges**).: Suppose \(R_{V(\mathscr{X})}\) is an equivalence relation on \(V(\mathscr{X})\), and \(e\in E(\mathscr{X})\) with \(e/\!/R_{V(\mathscr{X})}\in E(\mathscr{X}/R_{V(\mathscr{X})})\). Then \(\widetilde{e}:=\{\,f\in E(\mathscr{X})\mid f/\!/R_{V(\mathscr{X})}=e/\!/R_{V(\mathscr{X})}\,\}\) is the set of \(e\)_-equivalent hyperedges_ with respect to \(R_{V(\mathscr{X})}\). Note that for \(f\in\widetilde{e}\) we have \(\widetilde{f}=\widetilde{e}\). _Notation 4.9_.: Suppose \(R_{V(\mathscr{X})}\) is an equivalence relation on \(V(\mathscr{X})\), \(e\in E(\mathscr{X})\) with \(e/\!/R_{V(\mathscr{X})}\in E(\mathscr{X}/R_{V(\mathscr{X})})\), and \(X\in\mathscr{X}\). We denote \(\widetilde{e}_{X}:=\{\,f\in\widetilde{e}\mid f\subseteq V(X)\,\}\). In the following proposition we characterise sets of \(e\)-equivalent hyperedges. **Proposition 4.10**.: _Suppose \(R_{V(\mathscr{X})}\) is an equivalence relation on \(V(\mathscr{X})\), and \(e/\!/R_{V(\mathscr{X})}\in E(\mathscr{X}/R_{V(\mathscr{X})})\) for \(e\in E(\mathscr{X})\). Then the following hold for all \(X\), \(Y\in\mathscr{X}\):_ 1. \(e/\!/R_{V(\mathscr{X})}\subseteq V(X/\!/R_{V(\mathscr{X})})\) _if and only if_ \(\widetilde{e}_{X}\neq\emptyset\)_._ 2. \(e/\!/R_{V(\mathscr{X})}\in E(X/\!/R_{V(\mathscr{X})})\) _if and only if_ \(\widetilde{e}_{X}\cap E(X)\neq\emptyset\)_._ 3.
_If_ \(\mathcal{C}(X)\subseteq\mathcal{C}(Y)\) _then_ \(\widetilde{e}_{X}\subseteq\widetilde{e}_{Y}\)_._ 4. _If_ \(X=Y\wedge\widetilde{e}_{Y}\) _then_ \(\widetilde{e}_{X}=\widetilde{e}_{Y}\) _and_ \(\widetilde{e}_{X}\cap E(X)=\widetilde{e}_{Y}\cap E(Y)\)_._ 5. _If_ \(V(X/\!/R_{V(\mathscr{X})})=V(Y/\!/R_{V(\mathscr{X})})\) _then_ \(\widetilde{e}_{X}\neq\emptyset\) _if and only if_ \(\widetilde{e}_{Y}\neq\emptyset\)_._ 6. _If_ \(E(X/\!/R_{V(\mathscr{X})})=E(Y/\!/R_{V(\mathscr{X})})\) _then_ \(\widetilde{e}_{X}\cap E(X)\neq\emptyset\) _if and only if_ \(\widetilde{e}_{Y}\cap E(Y)\neq\emptyset\)_._ 7. _If_ \(\mathcal{S}\subseteq\mathcal{X}\subseteq\mathscr{X}\)_,_ \(X\in\mathcal{X}\)_,_ \(\widetilde{e}_{X}\neq\emptyset\)_,_ \(X\wedge\widetilde{e}_{X}\in\mathcal{S}\)_, and_ \(R_{V(\mathscr{X})}\) _is_ \(\mathcal{S}\)_-preserving with respect to_ \(\mathcal{X}\)_, then_ \((X\wedge\widetilde{e}_{X})//R_{V(\mathscr{X})}=X//R_{V(\mathscr{X})}\wedge e//R_{V(\mathscr{X})}\)_._ Proof.: (1) \(e//R_{V(\mathscr{X})}\subseteq V(X//R_{V(\mathscr{X})})\) if and only if there exists \(f\subseteq V(X)\) such that \(f//R_{V(\mathscr{X})}=e//R_{V(\mathscr{X})}\), that is, if and only if \(\widetilde{e}_{X}\neq\emptyset\). (2) If \(e//R_{V(\mathscr{X})}\in E(X//R_{V(\mathscr{X})})\) then there exists \(f\in E(X)\) such that \(f//R_{V(\mathscr{X})}=e//R_{V(\mathscr{X})}\), so \(f\in\widetilde{e}_{X}\cap E(X)\). Conversely, if \(f\in\widetilde{e}_{X}\cap E(X)\) then \(f//R_{V(\mathscr{X})}\in E(X//R_{V(\mathscr{X})})\), since \(f//R_{V(\mathscr{X})}=e//R_{V(\mathscr{X})}\in E(\mathscr{X}/R_{V(\mathscr{X})})\), and hence \(e//R_{V(\mathscr{X})}=f//R_{V(\mathscr{X})}\in E(X//R_{V(\mathscr{X})})\). (3) If \(f\in\widetilde{e}_{X}\) then \(f\in\widetilde{e}\) and \(f\subseteq V(X)\), so \(\mathcal{C}(X)\subseteq\mathcal{C}(Y)\) implies \(f\subseteq V(X)\subseteq V(Y)\), therefore \(f\in\widetilde{e}_{Y}\). (4) Since \(\mathcal{C}(X)\subseteq\mathcal{C}(Y)\) it follows from Part (3) of this proposition that \(\widetilde{e}_{X}\subseteq\widetilde{e}_{Y}\). For the reverse inclusion, let \(f\in\widetilde{e}_{Y}\) so that \(f\in\widetilde{e}\) and \(f\subseteq V(Y)\). Noting that \(\mathcal{C}(Y\wedge f)\subseteq\mathcal{C}(Y\wedge\widetilde{e}_{Y})\), we have \(f\subseteq V(Y)\) implies \(f\subseteq V(Y\wedge f)\) implies \(f\subseteq V(Y\wedge\widetilde{e}_{Y})=V(X)\), hence \(f\in\widetilde{e}_{X}\). Therefore \(\widetilde{e}_{X}=\widetilde{e}_{Y}\). Now, \(\widetilde{e}_{Y}\cap E(Y)=\widetilde{e}_{X}\cap E(Y)\subseteq\widetilde{e}_{X}\cap E(X)\), since \(X\) consists of whole components of \(Y\) and so any hyperedge of \(Y\) contained in \(V(X)\) is a hyperedge of \(X\). For the reverse inclusion, if \(f\in\widetilde{e}_{X}\cap E(X)\) then \(f\in E(Y)\), so \(\widetilde{e}_{X}\cap E(X)\subseteq\widetilde{e}_{Y}\cap E(Y)\). Therefore \(\widetilde{e}_{X}\cap E(X)=\widetilde{e}_{Y}\cap E(Y)\). (5) Using Part (1) of this proposition, \(\widetilde{e}_{X}\neq\emptyset\) if and only if \(e//R_{V(\mathscr{X})}\subseteq V(X//R_{V(\mathscr{X})})\) if and only if \(e//R_{V(\mathscr{X})}\subseteq V(Y//R_{V(\mathscr{X})})\) if and only if \(\widetilde{e}_{Y}\neq\emptyset\). (6) Using Part (2) of this proposition, \(\widetilde{e}_{X}\cap E(X)\neq\emptyset\) if and only if \(e//R_{V(\mathscr{X})}\in E(X//R_{V(\mathscr{X})})\) if and only if \(e//R_{V(\mathscr{X})}\in E(Y//R_{V(\mathscr{X})})\) if and only if \(\widetilde{e}_{Y}\cap E(Y)\neq\emptyset\). (7) We show \(\mathcal{C}\big{(}(X\wedge\widetilde{e}_{X})//R_{V(\mathscr{X})}\big{)}=\mathcal{C}(X//R_{V(\mathscr{X})}\wedge e//R_{V(\mathscr{X})})\).
Since \(\mathcal{C}(X\wedge\widetilde{e}_{X})\subseteq\mathcal{C}(X)\) and \(R_{V(\mathscr{X})}\) is \(\mathcal{S}\)-preserving with respect to \(\mathcal{X}\) we have \(\mathcal{C}\big{(}(X\wedge\widetilde{e}_{X})//R_{V(\mathscr{X})}\big{)}\subseteq\mathcal{C}(X//R_{V(\mathscr{X})})\). If \(C^{\prime}\in\mathcal{C}\big{(}(X\wedge\widetilde{e}_{X})//R_{V(\mathscr{X})}\big{)}\) then \(C^{\prime}=\bigcup_{C\in A}C//R_{V(\mathscr{X})}\) for some nonempty subset \(A\subseteq\mathcal{C}(X\wedge\widetilde{e}_{X})\), and since for all \(C\in\mathcal{C}(X\wedge\widetilde{e}_{X})\) there exists \(f\in\widetilde{e}_{X}\) such that \(f\cap V(C)\neq\emptyset\) it follows that \(e//R_{V(\mathscr{X})}\cap V(C^{\prime})\neq\emptyset\), so \(C^{\prime}\in\mathcal{C}(X//R_{V(\mathscr{X})}\wedge e//R_{V(\mathscr{X})})\). Hence \(\mathcal{C}\big{(}(X\wedge\widetilde{e}_{X})//R_{V(\mathscr{X})}\big{)}\subseteq\mathcal{C}(X//R_{V(\mathscr{X})}\wedge e//R_{V(\mathscr{X})})\). For the reverse inclusion we prove the contrapositive, so let \(C^{\prime\prime}\in\mathcal{C}(X//R_{V(\mathscr{X})})\backslash\mathcal{C}\big{(}(X\wedge\widetilde{e}_{X})//R_{V(\mathscr{X})}\big{)}\) and we show that \(C^{\prime\prime}\in\mathcal{C}(X//R_{V(\mathscr{X})})\backslash\mathcal{C}(X//R_{V(\mathscr{X})}\wedge e//R_{V(\mathscr{X})})\). Now, \(C^{\prime\prime}\in\mathcal{C}(X//R_{V(\mathscr{X})})\) implies \(C^{\prime\prime}=\bigcup_{C\in B}C//R_{V(\mathscr{X})}\) for some nonempty subset \(B\subseteq\mathcal{C}(X)\). Since \(C^{\prime\prime}\) and \(\mathcal{C}\big{(}(X\wedge\widetilde{e}_{X})//R_{V(\mathscr{X})}\big{)}\) are vertex disjoint, we must have \(f\cap V(C)=\emptyset\) for all \(C\in B\) and for all \(f\in\widetilde{e}_{X}\). Note that if \(e//R_{V(\mathscr{X})}\cap V(C^{\prime\prime})\neq\emptyset\) then \(f//R_{V(\mathscr{X})}\cap V(C^{\prime\prime})\neq\emptyset\) for all \(f\in\widetilde{e}_{X}\), so there would exist \(g\in\widetilde{e}_{X}\) and \(C\in B\) such that \(g\cap V(C)\neq\emptyset\), contradicting the previous observation. It follows that \(e//R_{V(\mathscr{X})}\cap V(C^{\prime\prime})=\emptyset\), hence \(C^{\prime\prime}\notin\mathcal{C}(X//R_{V(\mathscr{X})}\wedge e//R_{V(\mathscr{X})})\). _Definition 4.11_ (**Equivalent-hyperedges addition partial transformation**).: Suppose \(R_{V(\mathscr{X})}\) is an equivalence relation on \(V(\mathscr{X})\), \(e\in E(\mathscr{X})\) with \(e//R_{V(\mathscr{X})}\in E(\mathscr{X}/R_{V(\mathscr{X})})\), and \(\mathcal{X}\subseteq\mathscr{X}\). We define the _equivalent-hyperedges addition partial transformation_ \(\pi_{\widetilde{e}}\colon\mathcal{X}\to\mathscr{X}\) such that, for \(X\in\mathcal{X}\), \[\pi_{\widetilde{e}}(X)=\begin{cases}X\boxplus\widetilde{e}_{X}&\text{if }\widetilde{e}_{X}\neq\emptyset\text{ and }\widetilde{e}_{X}\cap E(X)=\emptyset,\\ X&\text{if }\widetilde{e}_{X}=\emptyset\text{ or }\widetilde{e}_{X}\cap E(X)\neq\emptyset.\end{cases} \tag{18}\] Therefore, if \(\widetilde{e}_{X}\neq\emptyset\) and \(\widetilde{e}_{X}\cap E(X)=\emptyset\) then \(\pi_{\widetilde{e}}\) adds all of the hyperedges in \(\widetilde{e}_{X}\) to \(X\), otherwise \(\pi_{\widetilde{e}}\) fixes \(X\). _Definition 4.12_ (\(\widetilde{e}\)**-closed for addition, \(\widetilde{e}\)-amenable for addition**).: Suppose \(R_{V(\mathscr{X})}\) is an equivalence relation on \(V(\mathscr{X})\), \(e\in E(\mathscr{X})\) with \(e//R_{V(\mathscr{X})}\in E(\mathscr{X}/R_{V(\mathscr{X})})\), and \(\mathcal{X}\subseteq\mathscr{X}\).
We say that \(\mathcal{X}\) is _\(\widetilde{e}\)-closed for addition_ if \(X\in\mathcal{X}\), \(\widetilde{e}_{X}\neq\emptyset\), and \(\widetilde{e}_{X}\cap E(X)=\emptyset\) imply \(X\wedge\widetilde{e}_{X}\in\mathcal{X}\). Further, we say that \(\mathcal{X}\) is _\(\widetilde{e}\)-amenable for addition_ if \(S\in\{\,S\in\mathcal{X}\mid\widetilde{e}_{S}\neq\emptyset,\,\widetilde{e}_{S} \cap E(S)=\emptyset,\,\text{and }S=S\wedge\widetilde{e}_{S}\,\,\}\) implies \(\mathcal{C}(S)\nsubseteq\mathcal{C}(X)\) for all \(X\in\mathcal{X}\) with \(\widetilde{e}_{X}\cap E(X)\neq\emptyset\). **Proposition 4.13**.: _Suppose \(R_{V(\mathscr{X})}\) is an equivalence relation on \(V(\mathscr{X})\), \(e\in E(\mathscr{X})\) with \(e//R_{V(\mathscr{X})}\in E(\mathscr{X}/R_{V(\mathscr{X})})\), and \(\mathcal{X}\subseteq\mathscr{X}\) is both \(\widetilde{e}\)-closed for addition and \(\widetilde{e}\)-amenable for addition. Let \(\pi_{\widetilde{e}}\colon\mathcal{X}\to\mathscr{X}\) be the equivalent-hyperedges addition partial transformation, define_ \[\widetilde{\mathcal{S}}:=\{\,S\in\mathcal{X}\mid\widetilde{e}_{S}\neq\emptyset \text{, }\widetilde{e}_{S}\cap E(S)=\emptyset\text{, and }S=S\wedge\widetilde{e}_{S}\,\,\,\}, \tag{19}\] _and for \(X\in\mathcal{X}\) define_ \[\widetilde{\mathcal{D}}_{X}:=\begin{cases}\{X\wedge\widetilde{e}_{X}\},& \text{ when }\widetilde{e}_{X}\neq\emptyset\text{ and }\widetilde{e}_{X}\cap E(X)=\emptyset,\\ \emptyset,&\text{ when }\widetilde{e}_{X}=\emptyset\text{ or }\widetilde{e}_{X}\cap E(X)\neq \emptyset.\end{cases} \tag{20}\] _Then:_ 1. \(\mathcal{T}_{\widetilde{e}}:=(\mathcal{X},\pi_{\widetilde{e}},\widetilde{ \mathcal{S}})\) _is a hypergraph transformation on_ \(\mathscr{X}\) _with_ \(\widetilde{\mathcal{S}}\)_-maximal subsets_ \(\{\widetilde{\mathcal{D}}_{X}\}_{X\in\mathcal{X}}\)_._ 2. _If_ \(R_{V(\mathscr{X})}\) _is_ \(\widetilde{\mathcal{S}}\)_-preserving with respect to_ \(\mathcal{X}\) _then_ \(\mathcal{T}^{+}_{e//R_{V(\mathscr{X})}}:=(\mathcal{X}/R_{V(\mathscr{X})},\pi^ {+}_{e//R_{V(\mathscr{X})}},\mathcal{S}^{+})\) _is a hyperedge addition hypergraph transformation on_ \(\mathscr{X}/R_{V(\mathscr{X})}\) _for the hyperedge_ \(e//R_{V(\mathscr{X})}\)_, where_ \(\mathcal{S}^{+}=\{\,T\in\mathcal{X}/R_{V(\mathscr{X})}\mid e//R_{V(\mathscr{X })}\subseteq V(T)\)_,_ \(e//R_{V(\mathscr{X})}\notin E(T)\)_, and_ \(T=T\wedge e//R_{V(\mathscr{X})}\,\}\)_. For_ \(X//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\) _the_ \(\mathcal{S}^{+}\)_-maximal subsets are_ \(\mathcal{D}^{+}_{X//R_{V(\mathscr{X})}}=\{X//R_{V(\mathscr{X})}\wedge e//R_{V( \mathscr{X})}\}\) _if_ \(e//R_{V(\mathscr{X})}\subseteq V(X//R_{V(\mathscr{X})})\) _and_ \(e//R_{V(\mathscr{X})}\notin E(X//R_{V(\mathscr{X})})\)_, and_ \(\mathcal{D}^{+}_{X//R_{V(\mathscr{X})}}=\emptyset\) _if_ \(e//R_{V(\mathscr{X})}\nsubseteq V(X//R_{V(\mathscr{X})})\) _or_ \(e//R_{V(\mathscr{X})}\in E(X//R_{V(\mathscr{X})})\)_._ 3. \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\) _is upward closed with respect to_ \(\mathcal{S}^{+}\)_._ 4. _If_ \(R_{V(\mathscr{X})}\) _is_ \(\widetilde{\mathcal{S}}\)_-preserving with respect to_ \(\mathcal{X}\) _then_ \(\mathcal{T}_{\widetilde{e}}\) _is amenable with respect to_ \(R_{V(\mathscr{X})}\)_. 
In particular, the hypergraph transformation_ \(\mathcal{T}_{\widetilde{e}}/R_{V(\mathscr{X})}:=(\mathcal{X}/R_{V(\mathscr{X})},\pi_{\widetilde{e}}/R_{V(\mathscr{X})},\widetilde{\mathcal{S}}/R_{V(\mathscr{X})})\) _is the support reduction of_ \(\mathcal{T}^{+}_{e//R_{V(\mathscr{X})}}\) corresponding to \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\)._ Proof.: (1) For nonredundancy, if \(S\in\widetilde{\mathcal{S}}\) then \(\pi_{\widetilde{e}}(S)=S\boxplus\widetilde{e}_{S}\) modifies all components of \(S\) by addition of the hyperedges in \(\widetilde{e}_{S}\), since \(S=S\wedge\widetilde{e}_{S}\), hence \(\mathcal{C}(S)\cap\mathcal{C}\big{(}\pi_{\widetilde{e}}(S)\big{)}=\emptyset\); additionally, \(\mathcal{N}\notin\widetilde{\mathcal{S}}\). To see that \(\widetilde{\mathcal{S}}\) is component maximal, let \(X\in\mathcal{X}\). Conditions (1) to (3) of Definition 3.1 follow immediately from the definition of \(\widetilde{\mathcal{D}}_{X}\), and for Condition (4) suppose \(S\in\widetilde{\mathcal{S}}\) and \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\). Part (3) of Proposition 4.10 implies \(\widetilde{e}_{S}\subseteq\widetilde{e}_{X}\), and hence \(\widetilde{e}_{X}\neq\emptyset\) since \(\widetilde{e}_{S}\neq\emptyset\). Moreover, since \(\mathcal{X}\) is \(\widetilde{e}\)-amenable for addition it follows that \(\widetilde{e}_{X}\cap E(X)=\emptyset\). So \(\widetilde{\mathcal{D}}_{X}=\{X\wedge\widetilde{e}_{X}\}\). Now, \(\mathcal{C}(S\wedge\widetilde{e}_{S})\subseteq\mathcal{C}(X\wedge\widetilde{e}_{X})\), since \(\widetilde{e}_{S}\subseteq\widetilde{e}_{X}\) and \(\mathcal{C}(S)\subseteq\mathcal{C}(X)\), therefore \(\mathcal{C}(S)\subseteq\mathcal{C}(X\wedge\widetilde{e}_{X})\), since \(S=S\wedge\widetilde{e}_{S}\). To show the direct sum decomposition is preserved, let \(X\in\mathcal{X}\). Note that \(\widetilde{\mathcal{S}}_{X}=\widetilde{\mathcal{D}}_{X}\), so \(|\widetilde{\mathcal{S}}_{X}|=|\pi_{\widetilde{e}}(\widetilde{\mathcal{S}}_{X})|\leqslant 1\), and it also holds trivially that \(\pi_{\widetilde{e}}(\widetilde{\mathcal{S}}_{X})\) consists of pairwise vertex-disjoint hypergraphs. If \(\widetilde{e}_{X}\neq\emptyset\) and \(\widetilde{e}_{X}\cap E(X)=\emptyset\) then \(\widetilde{\mathcal{S}}_{X}=\widetilde{\mathcal{D}}_{X}=\{S\}\) where \(S=X\wedge\widetilde{e}_{X}\), and \(\widetilde{e}_{S}=\widetilde{e}_{X}\) by Part (4) of Proposition 4.10, so \(\pi_{\widetilde{e}}(X)=X\boxplus\widetilde{e}_{X}=(X\ominus S)\oplus(S\boxplus\widetilde{e}_{X})=(X\ominus S)\oplus(S\boxplus\widetilde{e}_{S})=\bar{X}\oplus\pi_{\widetilde{e}}(S)\), where \(\bar{X}=X\ominus S\). If \(\widetilde{e}_{X}=\emptyset\) or \(\widetilde{e}_{X}\cap E(X)\neq\emptyset\) then \(\widetilde{\mathcal{S}}_{X}=\widetilde{\mathcal{D}}_{X}=\emptyset\), so \(\pi_{\widetilde{e}}(X)=X=\bar{X}\). In any case we have \(\pi_{\widetilde{e}}(X)=\bar{X}\oplus\big{(}\bigoplus_{S\in\widetilde{\mathcal{S}}_{X}}\pi_{\widetilde{e}}(S)\big{)}\), where \(\bar{X}:=X\ominus\big{(}\bigoplus_{S\in\widetilde{\mathcal{S}}_{X}}S\big{)}\). We conclude that \((\mathcal{X},\pi_{\widetilde{e}},\widetilde{\mathcal{S}})\) is a hypergraph transformation on \(\mathscr{X}\) with \(\widetilde{\mathcal{S}}\)-maximal subsets \(\{\widetilde{\mathcal{D}}_{X}\}_{X\in\mathcal{X}}\). (2) Note that \(e//R_{V(\mathscr{X})}\in E(\mathscr{X}/R_{V(\mathscr{X})})\) and \(\mathcal{X}/R_{V(\mathscr{X})}\subseteq\mathscr{X}/R_{V(\mathscr{X})}\), so \(\pi^{+}_{e//R_{V(\mathscr{X})}}\colon\mathcal{X}/R_{V(\mathscr{X})}\to\mathscr{X}/R_{V(\mathscr{X})}\) is a hyperedge addition partial transformation.
We show that \(\mathcal{X}/R_{V(\mathscr{X})}\) is \(e//R_{V(\mathscr{X})}\)-closed for addition, and then the result follows from Part (1) of Proposition 3.25. Suppose \(X//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\) with \(X\in\mathcal{X}\), \(e//R_{V(\mathscr{X})}\subseteq V(X//R_{V(\mathscr{X})})\), and \(e//R_{V(\mathscr{X})}\notin E(X//R_{V(\mathscr{X})})\). Then Parts (1) and (2) of Proposition 4.10 imply \(\widetilde{e}_{X}\neq\emptyset\) and \(\widetilde{e}_{X}\cap E(X)=\emptyset\), respectively, so since \(\mathcal{X}\) is \(\widetilde{e}\)-closed for addition we have \(S:=X\wedge\widetilde{e}_{X}\in\mathcal{X}\). Since \(\widetilde{e}_{S}=\widetilde{e}_{X}\) by Part (4) of Proposition 4.10, since \(\widetilde{e}_{X}\cap E(X)=\emptyset\) implies \(\widetilde{e}_{S}\cap E(S)=\emptyset\), and since \(S\wedge\widetilde{e}_{S}=S\wedge\widetilde{e}_{X}=S\), it follows that \(S\in\widetilde{\mathcal{S}}\). So \(X//R_{V(\mathscr{X})}\wedge e//R_{V(\mathscr{X})}=S//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\), where the equality holds by Part (7) of Proposition 4.10 since \(R_{V(\mathscr{X})}\) is \(\widetilde{\mathcal{S}}\)-preserving with respect to \(\mathcal{X}\). (3) We show that \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\subseteq\mathcal{S}^{+}\), so let \(S//R_{V(\mathscr{X})}\in\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\) with \(S\in\widetilde{\mathcal{S}}\), noting \(S//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\). Then \(\widetilde{e}_{S}\neq\emptyset\) implies \(e//R_{V(\mathscr{X})}\subseteq V(S//R_{V(\mathscr{X})})\), \(\widetilde{e}_{S}\cap E(S)=\emptyset\) implies \(e//R_{V(\mathscr{X})}\notin E(S//R_{V(\mathscr{X})})\), and \(S=S\wedge\widetilde{e}_{S}\) implies \(S//R_{V(\mathscr{X})}=(S\wedge\widetilde{e}_{S})//R_{V(\mathscr{X})}=S//R_{V(\mathscr{X})}\wedge e//R_{V(\mathscr{X})}\) by Parts (1), (2), and (7) of Proposition 4.10, respectively. Hence \(S//R_{V(\mathscr{X})}\in\mathcal{S}^{+}\), and we conclude that \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\subseteq\mathcal{S}^{+}\). To see that \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\) is upward closed with respect to \(\mathcal{S}^{+}\), suppose \(S//R_{V(\mathscr{X})}\in\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\) with \(S\in\widetilde{\mathcal{S}}\), \(T//R_{V(\mathscr{X})}\in\mathcal{S}^{+}\), and \(\mathcal{C}(S//R_{V(\mathscr{X})})\subseteq\mathcal{C}(T//R_{V(\mathscr{X})})\). We show that \(T//R_{V(\mathscr{X})}\in\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\). Note that \(S\in\widetilde{\mathcal{S}}\) implies \(\widetilde{e}_{S}\neq\emptyset\) implies \(e//R_{V(\mathscr{X})}\subseteq V(S//R_{V(\mathscr{X})})\), by Part (1) of Proposition 4.10, and also \(e//R_{V(\mathscr{X})}\subseteq V(T//R_{V(\mathscr{X})})\). So if there exists \(C\in\mathcal{C}(T//R_{V(\mathscr{X})})\backslash\mathcal{C}(S//R_{V(\mathscr{X})})\) then \(e//R_{V(\mathscr{X})}\cap V(C)=\emptyset\), hence \(T//R_{V(\mathscr{X})}\neq T//R_{V(\mathscr{X})}\wedge e//R_{V(\mathscr{X})}\), contradicting \(T//R_{V(\mathscr{X})}\in\mathcal{S}^{+}\). It follows that \(S//R_{V(\mathscr{X})}=T//R_{V(\mathscr{X})}\), hence \(T//R_{V(\mathscr{X})}\in\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\). Note that it is not necessarily true that \(T\in\widetilde{\mathcal{S}}\). We conclude that \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\) is upward closed with respect to \(\mathcal{S}^{+}\).
(4) We show that \(\mathcal{T}_{\widetilde{e}}/R_{V(\mathscr{X})}\) is a well defined hypergraph transformation, first showing that the partial transformation \(\pi_{\widetilde{e}}/R_{V(\mathscr{X})}\) is well defined, and second showing that \(\mathcal{T}_{\widetilde{e}}/R_{V(\mathscr{X})}\) satisfies Conditions (1)-(3) in Definition 3.3. First, let \(X//R_{V(\mathscr{X})}\), \(Y//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\) with \(X//R_{V(\mathscr{X})}=Y//R_{V(\mathscr{X})}\). Then \(\widetilde{e}_{X}\neq\emptyset\) if and only if \(\widetilde{e}_{Y}\neq\emptyset\) by Part (5) of Proposition 4.10, and \(\widetilde{e}_{X}\cap E(X)\neq\emptyset\) if and only if \(\widetilde{e}_{Y}\cap E(Y)\neq\emptyset\) by Part (6) of Proposition 4.10. So \(\widetilde{e}_{X}=\emptyset\) or \(\widetilde{e}_{X}\cap E(X)\neq\emptyset\) implies \(\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(X//R_{V(\mathscr{X})})=\pi_{\widetilde{e}}(X)//R_{V(\mathscr{X})}=X//R_{V(\mathscr{X})}=Y//R_{V(\mathscr{X})}=\pi_{\widetilde{e}}(Y)//R_{V(\mathscr{X})}=\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(Y//R_{V(\mathscr{X})})\). Further, \(\widetilde{e}_{X}\neq\emptyset\) and \(\widetilde{e}_{X}\cap E(X)=\emptyset\) implies \(\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(X//R_{V(\mathscr{X})})=\pi_{\widetilde{e}}(X)//R_{V(\mathscr{X})}=(X\boxplus\widetilde{e}_{X})//R_{V(\mathscr{X})}=X//R_{V(\mathscr{X})}\boxplus e//R_{V(\mathscr{X})}=Y//R_{V(\mathscr{X})}\boxplus e//R_{V(\mathscr{X})}=(Y\boxplus\widetilde{e}_{Y})//R_{V(\mathscr{X})}=\pi_{\widetilde{e}}(Y)//R_{V(\mathscr{X})}=\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(Y//R_{V(\mathscr{X})})\). It follows that \(\pi_{\widetilde{e}}/R_{V(\mathscr{X})}\) is well defined. Second, nonredundancy holds since if \(S//R_{V(\mathscr{X})}\in\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\) with \(S\in\widetilde{\mathcal{S}}\) then \(\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(S//R_{V(\mathscr{X})})=\pi_{\widetilde{e}}(S)//R_{V(\mathscr{X})}=(S\boxplus\widetilde{e}_{S})//R_{V(\mathscr{X})}=S//R_{V(\mathscr{X})}\boxplus e//R_{V(\mathscr{X})}\), so \(\mathcal{C}(S//R_{V(\mathscr{X})})\cap\mathcal{C}\big{(}\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(S//R_{V(\mathscr{X})})\big{)}=\emptyset\) since addition of the hyperedge \(e//R_{V(\mathscr{X})}\) modifies all components of \(S//R_{V(\mathscr{X})}\); additionally, \(\mathcal{N}\notin\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\). Further, \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\) is upward closed with respect to \(\mathcal{S}^{+}\) by Part (3) of this proposition, so Part (3) of Proposition 3.13 implies that \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\) is component maximal with \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\)-maximal subsets defined by \(\mathcal{D}^{+}_{X//R_{V(\mathscr{X})}}\cap\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\) for \(X//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\). Let \(X//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\). We show that the subset \(\widetilde{\mathcal{D}}_{X}/R_{V(\mathscr{X})}\) is \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\)-maximal, and then uniqueness by Part (1) of Proposition 3.2 will give \(\widetilde{\mathcal{D}}_{X}/R_{V(\mathscr{X})}=\mathcal{D}^{+}_{X//R_{V(\mathscr{X})}}\cap\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\).
Note that each \(\widetilde{\mathcal{D}}_{X}/R_{V(\mathscr{X})}\) is well defined: if \(X//R_{V(\mathscr{X})}=Y//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\) then \(\widetilde{e}_{X}\neq\emptyset\) if and only if \(\widetilde{e}_{Y}\neq\emptyset\), and \(\widetilde{e}_{X}\cap E(X)\neq\emptyset\) if and only if \(\widetilde{e}_{Y}\cap E(Y)\neq\emptyset\), by Parts (5) and (6) of Proposition 4.10, respectively; so either \(\widetilde{\mathcal{D}}_{X}/R_{V(\mathscr{X})}=\emptyset=\widetilde{\mathcal{D}}_{Y}/R_{V(\mathscr{X})}\) or, since \(R_{V(\mathscr{X})}\) is \(\widetilde{\mathcal{S}}\)-preserving with respect to \(\mathcal{X}\), Part (7) of Proposition 4.10 implies \((X\wedge\widetilde{e}_{X})//R_{V(\mathscr{X})}=X//R_{V(\mathscr{X})}\wedge e//R_{V(\mathscr{X})}=Y//R_{V(\mathscr{X})}\wedge e//R_{V(\mathscr{X})}=(Y\wedge\widetilde{e}_{Y})//R_{V(\mathscr{X})}\) and therefore \(\widetilde{\mathcal{D}}_{X}/R_{V(\mathscr{X})}=\{(X\wedge\widetilde{e}_{X})//R_{V(\mathscr{X})}\}=\widetilde{\mathcal{D}}_{Y}/R_{V(\mathscr{X})}\). Now, Conditions (1) to (3) of Definition 3.1 follow from the definition of \(\widetilde{\mathcal{D}}_{X}/R_{V(\mathscr{X})}\), noting that \(\mathcal{C}\big{(}(X\wedge\widetilde{e}_{X})//R_{V(\mathscr{X})}\big{)}=\mathcal{C}(X//R_{V(\mathscr{X})}\wedge e//R_{V(\mathscr{X})})\subseteq\mathcal{C}(X//R_{V(\mathscr{X})})\) since \(R_{V(\mathscr{X})}\) is \(\widetilde{\mathcal{S}}\)-preserving with respect to \(\mathcal{X}\) and applying Part (7) of Proposition 4.10. For Condition (4) of Definition 3.1, suppose \(S//R_{V(\mathscr{X})}\in\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\) with \(S\in\widetilde{\mathcal{S}}\), and \(\mathcal{C}(S//R_{V(\mathscr{X})})\subseteq\mathcal{C}(X//R_{V(\mathscr{X})})\). Since \(S\in\widetilde{\mathcal{S}}\) we have \(\widetilde{e}_{S}\neq\emptyset\) implies \(e//R_{V(\mathscr{X})}\subseteq V(S//R_{V(\mathscr{X})})\), by Part (1) of Proposition 4.10, and \(\widetilde{e}_{S}\cap E(S)=\emptyset\) implies \(e//R_{V(\mathscr{X})}\notin E(S//R_{V(\mathscr{X})})\), by Part (2) of Proposition 4.10. Then \(e//R_{V(\mathscr{X})}\subseteq V(X//R_{V(\mathscr{X})})\) implies \(\widetilde{e}_{X}\neq\emptyset\), by Part (1) of Proposition 4.10, and since \(e//R_{V(\mathscr{X})}\notin E(S//R_{V(\mathscr{X})})\) implies \(e//R_{V(\mathscr{X})}\notin E(X//R_{V(\mathscr{X})})\) we have \(\widetilde{e}_{X}\cap E(X)=\emptyset\) by Part (2) of Proposition 4.10. So \(\widetilde{\mathcal{D}}_{X}/R_{V(\mathscr{X})}=\{(X\wedge\widetilde{e}_{X})//R_{V(\mathscr{X})}\}\). Now, \(C\in\mathcal{C}(X//R_{V(\mathscr{X})})\backslash\mathcal{C}(S//R_{V(\mathscr{X})})\) implies \(e//R_{V(\mathscr{X})}\cap V(C)=\emptyset\), hence \(S//R_{V(\mathscr{X})}=X//R_{V(\mathscr{X})}\wedge e//R_{V(\mathscr{X})}=(X\wedge\widetilde{e}_{X})//R_{V(\mathscr{X})}\in\widetilde{\mathcal{D}}_{X}/R_{V(\mathscr{X})}\), where the last equality follows from Part (7) of Proposition 4.10 noting that \(R_{V(\mathscr{X})}\) is \(\widetilde{\mathcal{S}}\)-preserving with respect to \(\mathcal{X}\). We conclude that \(\widetilde{\mathcal{D}}_{X}/R_{V(\mathscr{X})}\) is \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\)-maximal. We show the direct sum decomposition is preserved, so let \(X//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\).
For notational simplicity denote \(\mathcal{S}^{*}:=\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\) and \(Y:=X\wedge\widetilde{e}_{X}\). Note that Part (4) of Proposition 4.10 implies \(\widetilde{e}_{Y}=\widetilde{e}_{X}\) and \(\widetilde{e}_{Y}\cap E(Y)=\widetilde{e}_{X}\cap E(X)\). First, suppose that \(\widetilde{e}_{X}=\emptyset\) or \(\widetilde{e}_{X}\cap E(X)\neq\emptyset\). Then \(\widetilde{\mathcal{D}}_{X}/R_{V(\mathscr{X})}=\emptyset\), so \(\mathcal{S}^{*}_{X//R_{V(\mathscr{X})}}=\emptyset\), hence \(|\mathcal{S}^{*}_{X//R_{V(\mathscr{X})}}|=|\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(\mathcal{S}^{*}_{X//R_{V(\mathscr{X})}})|=0\), it holds trivially that \(\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(\mathcal{S}^{*}_{X//R_{V(\mathscr{X})}})\) consists of pairwise vertex-disjoint hypergraphs, and \(\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(X//R_{V(\mathscr{X})})=X//R_{V(\mathscr{X})}=\overline{X//R_{V(\mathscr{X})}}\). Second, suppose that \(\widetilde{e}_{X}\neq\emptyset\) and \(\widetilde{e}_{X}\cap E(X)=\emptyset\). Then \(\widetilde{\mathcal{D}}_{X}/R_{V(\mathscr{X})}=\{Y//R_{V(\mathscr{X})}\}\). Now, \(\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(Y//R_{V(\mathscr{X})})=\pi_{\widetilde{e}}(Y)//R_{V(\mathscr{X})}=(Y\boxplus\widetilde{e}_{Y})//R_{V(\mathscr{X})}=Y//R_{V(\mathscr{X})}\boxplus e//R_{V(\mathscr{X})}\), where the second equality follows since \(\widetilde{e}_{Y}\neq\emptyset\) and \(\widetilde{e}_{Y}\cap E(Y)=\emptyset\). Then \(V\big{(}\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(Y//R_{V(\mathscr{X})})\big{)}\cap V(X//R_{V(\mathscr{X})}\ominus Y//R_{V(\mathscr{X})})=\emptyset\), so \(\mathcal{S}^{*}_{X//R_{V(\mathscr{X})}}=\{Y//R_{V(\mathscr{X})}\}\), hence \(|\mathcal{S}^{*}_{X//R_{V(\mathscr{X})}}|=|\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(\mathcal{S}^{*}_{X//R_{V(\mathscr{X})}})|=1\), and it also holds trivially that \(\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(\mathcal{S}^{*}_{X//R_{V(\mathscr{X})}})\) consists of pairwise vertex-disjoint hypergraphs. Further, \(\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(X//R_{V(\mathscr{X})})=\pi_{\widetilde{e}}(X)//R_{V(\mathscr{X})}=(X\boxplus\widetilde{e}_{X})//R_{V(\mathscr{X})}=X//R_{V(\mathscr{X})}\boxplus e//R_{V(\mathscr{X})}=(X//R_{V(\mathscr{X})}\ominus Y//R_{V(\mathscr{X})})\oplus(Y//R_{V(\mathscr{X})}\boxplus e//R_{V(\mathscr{X})})=\overline{X//R_{V(\mathscr{X})}}\oplus\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(Y//R_{V(\mathscr{X})})\), where \(\overline{X//R_{V(\mathscr{X})}}:=X//R_{V(\mathscr{X})}\ominus Y//R_{V(\mathscr{X})}\). In any case we have the decomposition \(\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(X//R_{V(\mathscr{X})})=\overline{X//R_{V(\mathscr{X})}}\oplus\big{(}\bigoplus_{S//R_{V(\mathscr{X})}\in\mathcal{S}^{*}_{X//R_{V(\mathscr{X})}}}\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(S//R_{V(\mathscr{X})})\big{)}\), where \(\overline{X//R_{V(\mathscr{X})}}:=X//R_{V(\mathscr{X})}\ominus\big{(}\bigoplus_{S//R_{V(\mathscr{X})}\in\mathcal{S}^{*}_{X//R_{V(\mathscr{X})}}}S//R_{V(\mathscr{X})}\big{)}\). We conclude that \(\mathcal{T}_{\widetilde{e}}/R_{V(\mathscr{X})}\) is a hypergraph transformation on \(\mathscr{X}/R_{V(\mathscr{X})}\) with \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\)-maximal subsets \(\widetilde{\mathcal{D}}_{X}/R_{V(\mathscr{X})}\) for \(X//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\).
To show that \(\mathcal{T}_{\widetilde{e}}/R_{V(\mathscr{X})}\) is the support reduction of \(\mathcal{T}^{+}_{e//R_{V(\mathscr{X})}}\) corresponding to \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\), it remains to show that \(\pi_{\widetilde{e}}/R_{V(\mathscr{X})}\) and \(\pi^{+}_{e//R_{V(\mathscr{X})}}\) are equal on \(\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\), so let \(S//R_{V(\mathscr{X})}\in\widetilde{\mathcal{S}}/R_{V(\mathscr{X})}\) with \(S\in\widetilde{\mathcal{S}}\). Then \(\widetilde{e}_{S}\neq\emptyset\) implies \(e//R_{V(\mathscr{X})}\subseteq V(S//R_{V(\mathscr{X})})\), by Part (1) of Proposition 4.10, and \(\widetilde{e}_{S}\cap E(S)=\emptyset\) implies \(e//R_{V(\mathscr{X})}\notin E(S//R_{V(\mathscr{X})})\), by Part (2) of Proposition 4.10. So \(\pi_{\widetilde{e}}/R_{V(\mathscr{X})}(S//R_{V(\mathscr{X})})=\pi_{\widetilde{e}}(S)//R_{V(\mathscr{X})}=(S\boxplus\widetilde{e}_{S})//R_{V(\mathscr{X})}=S//R_{V(\mathscr{X})}\boxplus e//R_{V(\mathscr{X})}=\pi^{+}_{e//R_{V(\mathscr{X})}}(S//R_{V(\mathscr{X})})\), as required. #### 4.2.2 Quotients of hypergraph addition hypergraph transformations We now consider an example of an amenable hypergraph transformation involving hypergraph addition. **Lemma 4.14**.: _Suppose \(\mathcal{X}\subseteq\mathscr{X}\), \(W\in\mathscr{X}^{*}\), and \(R_{V(\mathscr{X})}\) is an equivalence relation on \(V(\mathscr{X})\) that is \(W\)-disjointness preserving with respect to \(\mathcal{X}\). Then the following hold for all \(X\), \(Y\in\mathcal{X}\):_ 1. \(V(W)\cap V(X)=\emptyset\) _if and only if_ \(V(W//R_{V(\mathscr{X})})\cap V(X//R_{V(\mathscr{X})})=\emptyset\)_._ 2. _If_ \(X//R_{V(\mathscr{X})}=Y//R_{V(\mathscr{X})}\) _then_ \(V(W)\cap V(X)=\emptyset\) _if and only if_ \(V(W)\cap V(Y)=\emptyset\)_._ Proof.: (1) Since \(R_{V(\mathscr{X})}\) is \(W\)-disjointness preserving with respect to \(\mathcal{X}\) the forward direction holds. For the reverse direction, a contrapositive argument gives \(V(W)\cap V(X)\neq\emptyset\) implies \(V(W//R_{V(\mathscr{X})})\cap V(X//R_{V(\mathscr{X})})\neq\emptyset\). (2) \(V(W)\cap V(X)=\emptyset\) if and only if \(V(W//R_{V(\mathscr{X})})\cap V(X//R_{V(\mathscr{X})})=\emptyset\) if and only if \(V(W//R_{V(\mathscr{X})})\cap V(Y//R_{V(\mathscr{X})})=\emptyset\) if and only if \(V(W)\cap V(Y)=\emptyset\), where the first and third equivalences hold by Part (1) of this lemma. **Proposition 4.15**.: _Suppose \(W\in\mathscr{X}^{*}\), \(\mathcal{X}\subseteq\mathscr{X}\) with \(\mathcal{N}\in\mathcal{X}\), and \(R_{V(\mathscr{X})}\) is an equivalence relation on \(V(\mathscr{X})\) that is \(W\)-disjointness preserving with respect to \(\mathcal{X}\). Then the hypergraph transformation \(\mathcal{T}^{+}_{W}=(\mathcal{X},\pi^{+}_{W},\mathcal{S}^{+})\) is amenable with respect to \(R_{V(\mathscr{X})}\).
In particular, the hypergraph transformation \(\mathcal{T}^{+}_{W}/R_{V(\mathscr{X})}:=(\mathcal{X}/R_{V(\mathscr{X})},\pi^{+}_{W}/R_{V(\mathscr{X})},\mathcal{S}^{+}/R_{V(\mathscr{X})})\) is equal to the hypergraph addition hypergraph transformation \(\mathcal{T}^{+}_{W//R_{V(\mathscr{X})}}:=(\mathcal{X}/R_{V(\mathscr{X})},\pi^{+}_{W//R_{V(\mathscr{X})}},\mathcal{S}^{+})\) where \(W//R_{V(\mathscr{X})}\in\mathscr{X}^{*}/R_{V(\mathscr{X})}\) and \(\mathcal{S}^{+}=\{\mathcal{N}\}\)._ Proof.: Noting that \(\mathcal{S}^{+}/R_{V(\mathscr{X})}=\{\mathcal{N}\}=\mathcal{S}^{+}\), it suffices to show that \(\pi^{+}_{W}/R_{V(\mathscr{X})}\) is a well defined partial transformation on \(\mathscr{X}/R_{V(\mathscr{X})}\) such that \(\pi^{+}_{W}/R_{V(\mathscr{X})}=\pi^{+}_{W//R_{V(\mathscr{X})}}\). To show \(\pi^{+}_{W}/R_{V(\mathscr{X})}\) is well defined, let \(X//R_{V(\mathscr{X})}\), \(Y//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\) with \(X//R_{V(\mathscr{X})}=Y//R_{V(\mathscr{X})}\). First, if \(V(W)\cap V(X)\neq\emptyset\), equivalently \(V(W)\cap V(Y)\neq\emptyset\) by Part (2) of Lemma 4.14, then \(\pi^{+}_{W}/R_{V(\mathscr{X})}(X//R_{V(\mathscr{X})})=\pi^{+}_{W}(X)//R_{V(\mathscr{X})}=X//R_{V(\mathscr{X})}=Y//R_{V(\mathscr{X})}=\pi^{+}_{W}(Y)//R_{V(\mathscr{X})}=\pi^{+}_{W}/R_{V(\mathscr{X})}(Y//R_{V(\mathscr{X})})\). Second, if \(V(W)\cap V(X)=\emptyset\), equivalently \(V(W)\cap V(Y)=\emptyset\) by Part (2) of Lemma 4.14, then Part (1) of Lemma 4.14 implies \(V(W//R_{V(\mathscr{X})})\cap V(X//R_{V(\mathscr{X})})=\emptyset\) and \(V(W//R_{V(\mathscr{X})})\cap V(Y//R_{V(\mathscr{X})})=\emptyset\). Then \(\pi^{+}_{W}/R_{V(\mathscr{X})}(X//R_{V(\mathscr{X})})=\pi^{+}_{W}(X)//R_{V(\mathscr{X})}=(X\oplus W)//R_{V(\mathscr{X})}=X//R_{V(\mathscr{X})}\oplus W//R_{V(\mathscr{X})}=Y//R_{V(\mathscr{X})}\oplus W//R_{V(\mathscr{X})}=(Y\oplus W)//R_{V(\mathscr{X})}=\pi^{+}_{W}(Y)//R_{V(\mathscr{X})}=\pi^{+}_{W}/R_{V(\mathscr{X})}(Y//R_{V(\mathscr{X})})\). Therefore \(\pi^{+}_{W}/R_{V(\mathscr{X})}\) is well defined. To see that \(\pi^{+}_{W}/R_{V(\mathscr{X})}=\pi^{+}_{W//R_{V(\mathscr{X})}}\), let \(X//R_{V(\mathscr{X})}\in\mathcal{X}/R_{V(\mathscr{X})}\). First, if \(V(W)\cap V(X)\neq\emptyset\), equivalently \(V(W//R_{V(\mathscr{X})})\cap V(X//R_{V(\mathscr{X})})\neq\emptyset\) by Part (1) of Lemma 4.14, then \(\pi^{+}_{W}/R_{V(\mathscr{X})}(X//R_{V(\mathscr{X})})=\pi^{+}_{W}(X)//R_{V(\mathscr{X})}=X//R_{V(\mathscr{X})}=\pi^{+}_{W//R_{V(\mathscr{X})}}(X//R_{V(\mathscr{X})})\). Second, if \(V(W)\cap V(X)=\emptyset\), equivalently \(V(W//R_{V(\mathscr{X})})\cap V(X//R_{V(\mathscr{X})})=\emptyset\) from Part (1) of Lemma 4.14, then \(\pi^{+}_{W}/R_{V(\mathscr{X})}(X//R_{V(\mathscr{X})})=\pi^{+}_{W}(X)//R_{V(\mathscr{X})}=X//R_{V(\mathscr{X})}\oplus W//R_{V(\mathscr{X})}=\pi^{+}_{W//R_{V(\mathscr{X})}}(X//R_{V(\mathscr{X})})\). Therefore \(\pi^{+}_{W}/R_{V(\mathscr{X})}=\pi^{+}_{W//R_{V(\mathscr{X})}}\). ## 5 Concluding remarks Hypergraphs are of interest within pure mathematics as well as in applications of mathematics, in the latter case because they provide a general framework for modelling higher-order interactions in networks. Hypergraph transformations allow for a formal description of structural modifications of hypergraphs, and in particular can model dynamic properties of networks.
Function-based forms of hypergraph transformations are important as they can be incorporated into larger mathematical structures and are readily applicable for modelling the physical world; however, no suitable theory of function-based hypergraph transformations exists in the literature. In this article we present a new general theory for function-based hypergraph transformations which are defined on finite families of finite hypergraphs. Our notion of a hypergraph transformation modifies a hypergraph by replacing certain connected components of the hypergraph, according to the collection of distinguished hypergraphs associated with the transformation. In this way, a given hypergraph transformation replaces the same subset of connected components in any hypergraph in its domain with the same new connected components, thereby ensuring consistency of action. We establish sufficient conditions for the commutativity of a given set of hypergraph transformations, based on a notion of pairwise disjointness for the transformations. We also demonstrate how a hypergraph transformation can be modified to obtain a new transformation by appropriately modifying the collection of distinguished hypergraphs. Further, since quotient hypergraphs can enable the simplification and comparison of hypergraphs, we consider a notion of a quotient hypergraph transformation. Finally, to illustrate the general theory we provide specific examples of hypergraph transformations that add or delete a set of hyperedges or a hypergraph, which comprise fundamental transformations of hypergraphs. ## Declaration of competing interest The author declares that there is no conflict of interest related to the research presented in this manuscript. ## Data availability No data was used for the research described in the manuscript.
2309.13127
Stability of time-dependent motions for fluid-rigid ball interaction
We aim at the stability of time-dependent motions, such as time-periodic ones, of a rigid body in a viscous fluid filling the exterior to it in 3D. The fluid motion obeys the incompressible Navier-Stokes system, whereas the motion of the body is governed by the balance for linear and angular momentum. Both motions are affected by each other at the boundary. Assuming that the rigid body is a ball, we adopt a monolithic approach to deduce $L^q$-$L^r$ decay estimates of solutions to a non-autonomous linearized system. We then apply those estimates to the full nonlinear initial value problem to find temporal decay properties of the disturbance. Although the shape of the body is not allowed to be arbitrary, the present contribution is the first attempt at analysis of the large time behavior of solutions around nontrivial basic states, that can be time-dependent, for the fluid-structure interaction problem and provides us with a stability theorem which is indeed new even for steady motions under the self-propelling condition or with wake structure.
Toshiaki Hishida
2023-09-22T18:24:58Z
http://arxiv.org/abs/2309.13127v2
# Stability of time-dependent motions for fluid-rigid ball interaction

###### Abstract

We aim at stability of time-dependent motions, such as time-periodic ones, of a rigid body in a viscous fluid filling the exterior to it in 3D. The fluid motion obeys the incompressible Navier-Stokes system, whereas the motion of the body is governed by the balance for linear and angular momentum. Both motions are affected by each other at the boundary. Assuming that the rigid body is a ball, we adopt a monolithic approach to deduce \(L^{q}\)-\(L^{r}\) decay estimates of solutions to a non-autonomous linearized system. We then apply those estimates to the full nonlinear initial-value problem to find temporal decay properties of the disturbance. Although the shape of the body is not allowed to be arbitrary, the present contribution is the first attempt at analysis of the large time behavior of solutions around nontrivial basic states, that can be time-dependent, for the fluid-structure interaction problem and provides us with a stability theorem which is indeed new even for steady motions under the self-propelling condition or with wake structure.

MSC: 35Q35, 76D05

Keywords: fluid-structure interaction, time-dependent motion, stability, decay estimate

## 1 Introduction

We study the stability of nontrivial basic motions for fluid-rigid ball interaction in 3D within the framework of \(L^{q}\)-theory. This work is inspired by Ervedoza, Maity and Tucsnak [10], who have successfully established the \(L^{q}\)-stability of the rest state. The novelty of our study is to develop analysis even for time-dependent basic motions such as time-periodic (and almost periodic) ones. We emphasize that what is done here is already new when the basic motion is a nontrivial steady state. On the other hand, the shape of a single rigid body is not allowed to be arbitrary; in fact, it is assumed to be a ball in the present paper. Later, we will explain the reason why and mention the existing literature, in particular [17, 36] by Galdi and by Maity and Tucsnak, about the related issue for the case of arbitrary shape. To fix the idea, in what follows we are going to discuss the self-propelled regime as a typical case; nevertheless, the approach itself can be adapted to other situations of fluid-rigid ball interaction, in which physically relevant nontrivial basic motions could be available, as long as the system (1.5) below for the disturbance is the same. When the rigid body is a ball, it is reasonable to take the frame attached to the center of mass of the ball just by translation to reduce the problem to the one in a time-independent domain. Let \(B\) be the open ball centered at the origin with radius \(1\), that is identified with a rigid body whose density \(\rho>0\) is assumed to be a constant. In the resultant problem after the change of variables, a viscous incompressible fluid occupies the exterior domain \(\Omega=\mathbb{R}^{3}\setminus\overline{B}\) and the motion obeys the Navier-Stokes system with an additional nonlinear term (due to the change of variables), where the fluid velocity and pressure are denoted by \(u(x,t)\in\mathbb{R}^{3}\) and \(p(x,t)\in\mathbb{R}\), whereas \(\eta(t)\in\mathbb{R}^{3}\) and \(\omega(t)\in\mathbb{R}^{3}\) respectively stand for the translational and angular velocities of the rigid ball and are governed by the balance for linear and angular momentum.
The fluid velocity \(u\) meets the rigid motion \(\eta+\omega\times x\) (with \(\times\) being the vector product) at the boundary \(\partial\Omega\), where an extra velocity \(u_{*}(x,t)\in\mathbb{R}^{3}\) generated by the body is allowed to be involved. Indeed, the velocity \(u_{*}\) plays an important role within the self-propelled regime, in which there are no external force and torque so that the body moves due to an internal mechanism described by \(u_{*}\) through the interaction of the fluid and the rigid body, see Galdi [14, 15] for the details. We assume that \(u_{*}\) is tangential to the boundary \(\partial\Omega\), that is, \(\nu\cdot u_{*}|_{\partial\Omega}=0\), see Remark 3.5 in subsection 3.7 for a more general case. Here, \(\nu\) stands for the unit normal to \(\partial\Omega\) directed toward the interior of the rigid body; indeed, \(\nu=-x\) at \(\partial\Omega\), the unit sphere. Consequently, the unknowns \(u\), \(p\), \(\eta\) and \(\omega\) obey ([10, 15]) \[\begin{split}&\partial_{t}u+(u-\eta)\cdot\nabla u=\Delta u-\nabla p,\qquad\text{div }u=0\quad\text{in }\Omega\times I,\\ & u|_{\partial\Omega}=\eta+\omega\times x+u_{*},\qquad u\to 0\quad\text{as }|x|\to\infty,\\ & m\frac{d\eta}{dt}+\int_{\partial\Omega}S(u,p)\nu\,d\sigma=0,\\ & J\frac{d\omega}{dt}+\int_{\partial\Omega}x\times S(u,p)\nu\,d\sigma=0,\end{split} \tag{1.1}\] where \(I\) is the whole or half time axis in \(\mathbb{R}\), \(S(u,p)\) denotes the Cauchy stress tensor, that is, \[S(u,p)=2Du-p\,\mathbb{I},\qquad Du=\frac{\nabla u+(\nabla u)^{\top}}{2} \tag{1.2}\] with \(\mathbb{I}\) being the \(3\times 3\) unit matrix. Here and hereafter, \((\cdot)^{\top}\) stands for the transpose. The mass and the tensor of inertia of the rigid ball are given by \[m=\frac{4\pi\rho}{3},\qquad J=\frac{2m}{5}\,\mathbb{I}. \tag{1.3}\] The model with several physical parameters can be reduced to the problem above in which both the kinematic viscosity of the fluid and the radius of the rigid ball are normalized. Then \(\rho\) is regarded as the ratio of the solid and fluid densities, see subsection 2.1. In [10] Ervedoza, Maity and Tucsnak considered the case when the extra velocity \(u_{*}\) is absent and proved that the initial value problem for (1.1) admits a unique solution globally in time with decay properties such as \[\|u(t)\|_{\infty,\Omega}+|\eta(t)|+|\omega(t)|=O(t^{-1/2})\] as \(t\to\infty\) under the smallness of the \(L^{3}\)-norm of the initial velocity as well as the initial rigid motion. This is a desired development of the local well-posedness in the space \(L^{q}\) due to Wang and Xin [51] and may also be regarded as stability of the rest state. The stability of that state in the 2D case was already studied by Ervedoza, Hillairet and Lacave [9] when the rigid body is a disk. The present paper aims at a stability criterion for nontrivial basic motions \(\{u_{b},p_{b},\eta_{b},\omega_{b}\}\) to (1.1), that is, we intend to deduce the large time behavior \[\|u(t)-u_{b}(t)\|_{\infty,\Omega}+\|\nabla u(t)-\nabla u_{b}(t)\|_{3,\Omega}+|\eta(t)-\eta_{b}(t)|+|\omega(t)-\omega_{b}(t)|=o(t^{-1/2}) \tag{1.4}\] of the solution to the initial value problem for (1.1) as \(t\to\infty\) provided that the initial disturbance is small enough in the same sense as in [10] and that the basic motion is also small in a sense uniformly in \(t\) as well as globally Hölder continuous in \(t\). We will discuss briefly in subsection 2.4 possible basic motions within the self-propelled regime (1.1).
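Concerning the constants in (1.3): with \(J=\int_{B}\big(|x|^{2}\,\mathbb{I}-x\otimes x\big)\rho\,dx\) the tensor of inertia (cf. (2.2) below), a quick verification by direct integration over the unit ball reads \[\int_{B}|x|^{2}\,dx=4\pi\int_{0}^{1}r^{4}\,dr=\frac{4\pi}{5},\qquad\int_{B}x_{i}x_{j}\,dx=\frac{\delta_{ij}}{3}\int_{B}|x|^{2}\,dx=\frac{4\pi}{15}\,\delta_{ij},\] so that \[J=\rho\Big(\frac{4\pi}{5}-\frac{4\pi}{15}\Big)\mathbb{I}=\frac{8\pi\rho}{15}\,\mathbb{I}=\frac{2}{5}\cdot\frac{4\pi\rho}{3}\,\mathbb{I}=\frac{2m}{5}\,\mathbb{I}.\]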
Let us also mention that the asymptotic rate of \(\nabla u(t)\) in (1.4) is an improvement of the corresponding result due to [10], in which a weaker rate is deduced when \(u_{b}=0\), \(\eta_{b}=\omega_{b}=0\). Let \(\{u_{b},\,p_{b},\,\eta_{b},\,\omega_{b}\}\) be a solution to (1.1) on the whole time axis \(I=\mathbb{R}\). We intend to find a solution to the initial value problem for (1.1) in the form \[u=u_{b}+\widetilde{u},\quad p=p_{b}+\widetilde{p},\quad\eta=\eta_{b}+\widetilde{\eta},\quad\omega=\omega_{b}+\widetilde{\omega}.\] Omitting the tildes \(\widetilde{(\cdot)}\), the disturbance should obey \[\begin{split}&\partial_{t}u+(u-\eta)\cdot\nabla u+(u_{b}-\eta_{b})\cdot\nabla u+(u-\eta)\cdot\nabla u_{b}=\Delta u-\nabla p,\qquad\text{div }u=0\quad\text{in }\Omega\times(s,\infty),\\ & u|_{\partial\Omega}=\eta+\omega\times x,\qquad u\to 0\quad\text{as }|x|\to\infty,\\ & m\frac{d\eta}{dt}+\int_{\partial\Omega}S(u,p)\nu\,d\sigma=0,\\ & J\frac{d\omega}{dt}+\int_{\partial\Omega}x\times S(u,p)\nu\,d\sigma=0,\end{split} \tag{1.5}\] endowed with the initial conditions at the initial time \(s\in\mathbb{R}\). Besides [10], we will mention the existing literature particularly related to this study, concerning the original problem in the inertial frame in 3D. Serre [41] constructed a global weak solution to the initial value problem under the action of gravity. This solution is of the Leray-Hopf class and the argument is based on the energy relation as in the standard Navier-Stokes theory. Silvestre [43] also studied a weak solution under the self-propelling condition and moreover, in the specific case that the body is axisymmetric, she constructed a global strong solution with some parity conditions and discussed the attainability of the translational self-propelled steady motion found by Galdi [13], answering the celebrated starting problem of Finn in her context. When the external force and torque act on the rigid body whose shape is arbitrary, a strong solution locally in time was first constructed by Galdi and Silvestre [20]. All of those works [41, 43, 20] developed the \(L^{2}\)-theory and the problem was studied in a frame attached to the body of arbitrary shape (except the latter half of [43]), where, unlike the derivation of (1.1), the change of variable must involve the rotation matrix as well as translation unless the body is a ball, see Galdi [15] for the details. Then the resultant equation reads \[\frac{\partial u}{\partial t}+(u-\eta-\omega\times x)\cdot\nabla u+\omega\times u=\Delta u-\nabla p \tag{1.6}\] in place of the first equation of (1.1). The equations of the rigid body in (1.1) are also replaced by \[\begin{array}{l} m\frac{d\eta}{dt}+\omega\times(m\eta)+\int_{\partial\Omega}S(u,p)\nu\,d\sigma=0,\\ J\frac{d\omega}{dt}+\omega\times(J\omega)+\int_{\partial\Omega}x\times S(u,p)\nu\,d\sigma=0.\end{array} \tag{1.7}\] If we considered the exterior problem with a prescribed rigid motion, the difficulty caused by the drift term \((\omega\times x)\cdot\nabla u\) with spatially unbounded (possibly even time-dependent) coefficient would be more or less overcome due to efforts by several authors, see [19, 25, 26, 27, 28] and the references therein. But this term is indeed a nonlinear term for the fluid-structure interaction under consideration and still prevents us from carrying out analysis in a successful way. This is why we are forced to restrict ourselves to the case of the ball in the present paper as well as in [10].
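For the reader's convenience, the elementary algebra behind (1.5): inserting \(u=u_{b}+\widetilde{u}\) and \(\eta=\eta_{b}+\widetilde{\eta}\) into the convective term of (1.1) yields \[(u-\eta)\cdot\nabla u=(u_{b}-\eta_{b})\cdot\nabla u_{b}+(u_{b}-\eta_{b})\cdot\nabla\widetilde{u}+(\widetilde{u}-\widetilde{\eta})\cdot\nabla u_{b}+(\widetilde{u}-\widetilde{\eta})\cdot\nabla\widetilde{u},\] and subtracting the equations satisfied by the basic motion \(\{u_{b},p_{b},\eta_{b},\omega_{b}\}\) removes the first term on the right-hand side, which leaves exactly the two linear drift terms and the quadratic term in the first equation of (1.5).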
The linear theory of large time behavior is well established for the case of arbitrary shape in [10], whereas in this paper we already face a difficulty at the level of linear analysis, see Remark 3.4 in subsection 3.7. Recently, Galdi [17] has succeeded in continuing the solution obtained in [20] globally in time by means of energy estimates and, moreover, he has proved the large time decay \[\lim_{t\to\infty}\Big{(}\|u(t)\|_{6,\Omega}+\|\nabla u(t)\|_{2,\Omega}+|\eta(t)|+|\omega(t)|\Big{)}=0 \tag{1.8}\] even if the shape of the body is arbitrary when the \(H^{1}\)-norm of the initial velocity, the initial rigid motion, and the \(L^{2}\)-norm in \(t\in(0,\infty)\) of the external force and torque acting on the body are small enough. Although there is no definite decay rate in general, one can derive the rate \(O(t^{-1/2})\) in (1.8) if in particular the body is a ball, see [17, Remark 4.1]. There is another way of transformation to reduce the original problem to the one in a time-independent domain. This is a local transformation that keeps the region far from the body unchanged in order to avoid the term \((\omega\times x)\cdot\nabla u\) in (1.6) even if the shape of the body is arbitrary. Cumsille and Takahashi [5] adopted this transformation to construct a solution locally in time and then successfully derived a priori estimates in the inertial frame, so that the global existence of a strong solution for small data was first established within the \(L^{2}\)-theory, albeit without any information about the large time behavior. By the same transformation, Geissert, Götze and Hieber [21] developed the \(L^{q}\)-theory for the local well-posedness in the maximal regularity class; further, in the recent work [36], Maity and Tucsnak have proved even large time decay with definite rate as well as global well-posedness for small data by adapting the time-shifted method proposed by Shibata [42] in the framework of maximal regularity with time-weighted norms. Since there are several complicated terms arising from this latter transformation, it seems difficult to take a reasonable nontrivial basic motion in the resultant system and to discuss its stability, which is indeed the issue here, unless one considers the problem around the rest state. Thus the latter transformation is not preferred in the present paper. Let us turn to our problem (1.5) around a nontrivial motion. Towards the large time behavior of the disturbance, the essential step is to develop the temporal decay estimates of solutions to the linearized system. In fact, this was successfully done by [10] when \(u_{b}=0\) and \(\eta_{b}=0\). They adopted the monolithic approach, which is traced back to Takahashi and Tucsnak [46], see also Silvestre [43], and then derived the \(L^{q}\)-\(L^{r}\) decay estimates of the semigroup, which they call the fluid-structure semigroup (while, in this paper, it is called the Stokes-structure semigroup). Looking at (1.5), the term which is never subordinate to the Stokes-structure semigroup is \(\eta_{b}\cdot\nabla u\). This term is well known as the Oseen term in studies of the exterior Navier-Stokes problem, see [16, 34]. For the fluid-structure interaction problem, however, there is no hope of fine temporal behavior of the (purely) Oseen operator \(-\Delta u-\eta_{b}\cdot\nabla u+\nabla p\) because the term \(\eta_{b}\cdot\nabla u\) is no longer skew-symmetric on account of the boundary condition.
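In fact, for a divergence-free field \(b\) and sufficiently smooth \(u\) decaying at infinity, a formal integration by parts gives \[2\int_{\Omega}(b\cdot\nabla u)\cdot u\,dx=\int_{\Omega}b\cdot\nabla|u|^{2}\,dx=\int_{\partial\Omega}(\nu\cdot b)\,|u|^{2}\,d\sigma,\] and for \(b=\eta_{b}\) the boundary integral does not vanish in general, whereas it does vanish for \(b=u_{b}-\eta_{b}\) on account of \(\nu\cdot(u_{b}-\eta_{b})|_{\partial\Omega}=0\), cf. (2.33) and Remark 2.1 below. This formal observation is behind the choice of the linearization adopted in this paper.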
The idea is to combine the Oseen term with \(u_{b}\cdot\nabla u\) since the skew-symmetry is recovered for \((u_{b}-\eta_{b})\cdot\nabla u\), see (3.89)-(3.90). The first step is therefore to derive some decay properties of solutions to the linearized system with the term \((U_{b}-\eta_{b})\cdot\nabla u\) in the whole space without the rigid body, see subsection 4.1, where \(U_{b}\) is the monolithic velocity (2.32). Here, the linear term \(U_{b}\cdot\nabla u\) can be treated as a perturbation of the purely Oseen evolution operator provided that \(u_{b}\) is of sub-critical class such as \(u_{b}\in L^{q}(\Omega)\) with some \(q<3\), even though \(u_{b}\) is time-dependent. In the scale-critical case such as \(u_{b}\in L^{3,\infty}(\Omega)\) (weak-\(L^{3}\) space), the decay estimate (4.14) with \(r<\infty\) is still available; however, it is difficult to deduce the gradient estimate (4.15) of the solution when \(u_{b}\) is time-dependent. Although there is a device to construct the Navier-Stokes flow even in this situation as proposed in [27, Section 5], it does not work for the other nonlinear term \(\eta\cdot\nabla u\). Thus the critical case is out of reach in this paper, and we concentrate on the sub-critical case, which covers the self-propelled motion and the motion with wake structure (due to translation). In the sub-critical case, the linear term \(u\cdot\nabla u_{b}\) can be discussed together with the nonlinear term as a perturbation of the principal part of the linearized system. Moreover, the other term \(\eta\cdot\nabla u_{b}\) is comparable to \(u\cdot\nabla u_{b}\) since \(\eta(t)\) behaves like \(\|u(t)\|_{\infty,\Omega}\) as \(t\to\infty\). As for an alternative way in which the term \((u-\eta)\cdot\nabla u_{b}\) is also involved in the linearization, see Remark 3.3 in subsection 3.7. The right choice of the principal part of the linearized system would be the following non-autonomous system, which we call the Oseen-structure system: \[\begin{split}&\partial_{t}u=\Delta u+(\eta_{b}-u_{b})\cdot\nabla u-\nabla p,\qquad\text{div }u=0\quad\text{in }\Omega\times(s,\infty),\\ & u|_{\partial\Omega}=\eta+\omega\times x,\qquad u\to 0\quad\text{as }|x|\to\infty,\\ & m\frac{d\eta}{dt}+\int_{\partial\Omega}S(u,p)\nu\,d\sigma=0,\\ & J\frac{d\omega}{dt}+\int_{\partial\Omega}x\times S(u,p)\nu\,d\sigma=0,\\ & u(s)=u_{0},\quad\eta(s)=\eta_{0},\quad\omega(s)=\omega_{0}\end{split} \tag{1.9}\] with \(s\in\mathbb{R}\) being the initial time. In fact, in this paper, we develop the \(L^{q}\)-\(L^{r}\) estimates of solutions to (1.9) and apply them to the nonlinear problem (1.5). We do need the specific shape already in linear analysis, otherwise the new term \((\omega\times x)\cdot\nabla u_{b}\) appears in (1.5), which arises from \((\omega\times x)\cdot\nabla u\) in (1.6), see Remark 3.4 for further discussion, although the other term \((\eta_{b}+\omega_{b}\times x-u_{b})\cdot\nabla u\) can be handled as in [27, 28] by the present author. Since the difficult term above is absent around the rest state even if the shape of the rigid body is arbitrary, our method developed in this paper works well to provide an alternative proof of the \(L^{q}\)-\(L^{r}\) estimates of the Stokes-structure semigroup established by [10]. The difficulty in analyzing (1.9) is the non-autonomous character.
For the linearized problem relating to the Navier-Stokes system in exterior domains, the only results that study the temporal decay for the non-autonomous system seem to be due to the present author [27, 28] on the \(L^{q}\)-\(L^{r}\) estimates of the Oseen evolution operator arising from (prescribed) time-dependent rigid motions. In this paper we adapt the method developed in those papers to the Oseen-structure system (1.9), where the crucial step is to deduce the uniform boundedness of the evolution operator for large time, see (4.32)-(4.33) in subsection 4.3, with the aid of (4.27)-(4.28) which follow from the energy relations (4.25)-(4.26). In this stage one needs to discuss the issue above simultaneously with that of the adjoint evolution operator, that is, the solution operator to the backward problem for the adjoint system, see subsection 3.7. We then proceed to the next stage in which gradient estimates of the evolution operator are derived near the rigid body (Proposition 4.4) and near spatial infinity (Proposition 4.5). In this latter stage, the asymptotic behavior (for \(t-s\to 0\) and \(t-s\to\infty\)) of the associated pressure together with the time derivative of the evolution operator plays an important role. We first deduce the behavior of the pressure near the initial time in subsection 3.6, where the analysis is developed with the aid of fractional powers of the Stokes-structure operator and is very different from the study of the same issue in [28] for the case of prescribed time-dependent rigid motions. In doing so, we make use of the representation of the Stokes-structure resolvent, which is given in subsection 3.3. To employ the monolithic approach, in this paper, unlike [10], we do not rely on the decomposition shown by Wang and Xin [51] because the associated projection is not symmetric with respect to the duality pairing which involves the constant density \(\rho\) of the rigid body, see (1.3), unless \(\rho=1\). This situation is not consistent with our argument, particularly when making use of the adjoint evolution operator. For this reason, our analysis is based on the similar but different decomposition (2.18) below; then the associated projection possesses the aforementioned desired symmetry. The decomposition (2.18) was established by Silvestre [43] when \(q=2\). Since the same result for the \(L^{q}\)-space does not seem to be found in the existing literature, the proof is given in subsection 3.1. The paper consists of five sections. In the next section we introduce the Oseen-structure operator as well as the Stokes-structure one and then give our main results: Theorem 2.1 on \(L^{q}\)-\(L^{r}\) estimates of the evolution operator and Theorem 2.2 on the nonlinear stability. Section 3 is concerned with some preparatory results: the decomposition of the \(L^{q}\)-space mentioned above, reformulation of the Stokes-structure operator, the smoothing estimate as well as generation of the Oseen-structure evolution operator, analysis of the pressure and, finally, the adjoint evolution operator. In section 4 we study in detail the large time decay of the evolution operator to complete the proof of Theorem 2.1. The final section is devoted to the proof of Theorem 2.2 for the nonlinear problem (1.5).
## 2 Main results ### Nondimensional variables If all physical parameters are taken into account, the system (1.1) should read \[\begin{split}&\rho_{{}_{L}}\left\{\partial_{t}u+(u-\eta)\cdot \nabla u\right\}=\mu\Delta u-\nabla p,\qquad\text{div}\;u=0\quad\text{in}\; \Omega^{R}\times I,\\ & u|_{\partial\Omega^{R}}=\eta+\omega\times x+u_{*},\qquad u \to 0\quad\text{as}\;|x|\to\infty,\\ & m_{{}_{R}}\frac{d\eta}{dt}+\int_{\partial\Omega^{R}}S_{\mu}(u, p)\nu\,d\sigma=0,\\ & J_{{}_{R}}\frac{d\omega}{dt}+\int_{\partial\Omega^{R}}x\times S _{\mu}(u,p)\nu\,d\sigma=0,\end{split} \tag{2.1}\] with \[\begin{split}&\Omega^{R}=\mathbb{R}^{3}\setminus\overline{B_{R}},\qquad S_{\mu}(u,p)=2\mu Du-p\,\mathbb{I},\\ & m_{{}_{R}}=\int_{B_{R}}\rho_{{}_{S}}\,dx=\frac{4\pi R^{3}\rho_{ {}_{S}}}{3},\qquad J_{{}_{R}}=\int_{B_{R}}\big{(}|x|^{2}\,\mathbb{I}-x\otimes x \big{)}\rho_{{}_{S}}\,dx=\frac{8\pi R^{5}\rho_{{}_{S}}}{15}\,\mathbb{I},\end{split}\] where the rigid body is a ball \(B_{R}\) centered at the origin with radius \(R\), \(\rho_{{}_{L}}\) and \(\rho_{{}_{S}}\) are respectively the densities of liquid and solid, and \(\mu\) stands for the viscosity coefficient. All of those parameters are assumed to be positive constants. Setting \[\begin{split}&\widetilde{x}=\frac{1}{R}\,x,\qquad \widetilde{t}=\frac{\mu}{R^{2}\rho_{{}_{L}}}\,t,\\ &\widetilde{u}(\widetilde{x},\widetilde{t})=\frac{R\rho_{{}_{L} }}{\mu}\,u(x,t),\qquad\widetilde{p}(\widetilde{x},\widetilde{t})=\frac{R^{2} \rho_{{}_{L}}}{\mu^{2}}\,p(x,t),\\ &\widetilde{\eta}(\widetilde{t})=\frac{R\rho_{{}_{L}}}{\mu}\, \eta(t),\qquad\widetilde{\omega}(\widetilde{t})=\frac{R^{2}\rho_{{}_{L}}}{\mu} \,\omega(t),\qquad\widetilde{u}_{*}(\widetilde{x},\widetilde{t})=\frac{R\rho_ {{}_{L}}}{\mu}\,u_{*}(x,t)\end{split}\] and omitting the tildes \(\widetilde{(\cdot)}\), we are led to (1.1) with \[\rho=\frac{\rho_{{}_{S}}}{\rho_{{}_{L}}},\quad m=\int_{B_{1}}\rho\,dx=\frac{4 \pi\rho}{3},\quad J=\int_{B_{1}}\big{(}|x|^{2}\mathbb{I}-x\otimes x\big{)} \rho\,dx=\frac{2m}{5}\,\mathbb{I}, \tag{2.2}\] where \(J\) is called the tensor of inertia. It is worth while mentioning \[x\times(a\times x)=\big{(}|x|^{2}\mathbb{I}-x\otimes x\big{)}a \tag{2.3}\] for all \(a\in\mathbb{R}^{3}\), that implies \[\int_{B_{1}}(a\times x)\cdot(b\times x)\rho\,dx=(Ja)\cdot b=a\cdot(Jb) \tag{2.4}\] for all \(a\), \(b\in\mathbb{R}^{3}\). Even if the shape of the rigid body is not a ball, the symmetric matrix \(J\) defined by the latter integral of (2.2) satisfies (2.4) and thus \(J\) is positive definite. ### Stokes-structure operator Let us begin with introducing fundamental notation. Throughout this paper, we set \[B:=B_{1},\qquad\Omega:=\mathbb{R}^{3}\setminus\overline{B},\qquad\Omega_{R}:= \Omega\cap B_{R}\quad(R>1),\] where \(B_{R}\) denotes the open ball centered at the origin with radius \(R\). Given a domain \(D\subset\mathbb{R}^{3}\), \(q\in[1,\infty]\) and integer \(k\geq 0\), the standard Lebesgue and Sobolev spaces are denoted by \(L^{q}(D)\) and \(W^{k,q}(D)\). We abbreviate the norm \(\|\cdot\|_{q,D}=\|\cdot\|_{L^{q}(D)}\). By \(\langle\cdot,\cdot\rangle_{D}\) (resp. \(\langle\cdot,\cdot\rangle_{\partial D}\)) we denote standard duality pairings over the domain \(D\) (resp. \(\partial D\) being the boundary of \(D\)) in each context. As the space for the pressure, one needs also the homogeneous Sobolev space \[\widehat{W}^{1,q}(D)=\{p\in L^{q}_{\rm loc}(\overline{D});\;\nabla p\in L^{q}( D)\}\] with seminorm \(\|\nabla(\cdot)\|_{q,D}\). 
When \(D=\Omega\), let us introduce \[\widehat{W}^{1,q}_{(0)}(\Omega)=\left\{p\in\widehat{W}^{1,q}(\Omega);\;\int_{ \Omega_{3}}p\,dx=0\right\},\] that is a Banach space with norm \(\|\nabla(\cdot)\|_{q,\Omega}\). The class \(C^{\infty}_{0}(D)\) consists of all \(C^{\infty}\)-functions with compact support in \(D\), then \(W^{k,q}_{0}(D)\) denotes the completion of \(C^{\infty}_{0}(D)\) in \(W^{k,q}(D)\), where \(k>0\) is an integer. Given \(q\in[1,\infty]\), let \(q^{\prime}\in[1,\infty]\) be the conjugate number defined by \(1/q^{\prime}+1/q=1\). For \(q\in(1,\infty)\), we define the Sobolev space of order \((-1)\) by \(W^{-1,q}(D)=W^{1,q^{\prime}}_{0}(D)^{*}\). In what follows we adopt the same symbols for denoting scalar and vector (even tensor) functions as long as no confusion occurs. By \(C^{\infty}_{0,\sigma}(D)\) we denote the class of all vector fields \(u\) which are in \(C^{\infty}_{0}(D)\) and satisfy \(\mbox{div}\;u=0\) in \(D\). Let \(X\) be a Banach space. Then \(\mathcal{L}(X)\) stands for the Banach space consisting of all bounded linear operators from \(X\) into itself. Finally, we denote several positive constants by \(C\), which may change from line to line. Before stating our main results, let us also introduce the underlying space and the Stokes-structure operator to formulate (1.5) and (1.9) within the monolithic framework as in [10, 46, 43]. By RM we denote the space of all rigid motions, that is, \[\mbox{RM}:=\{\eta+\omega\times x;\;\eta,\,\omega\in\mathbb{R}^{3}\}.\] For the resolvent problem, see subsection 3.3, we have to consider the complex rigid motions \(\eta+\omega\times x\) with \(\eta\), \(\omega\in\mathbb{C}^{3}\). For \(1<q<\infty\), we set \[L^{q}_{R}(\mathbb{R}^{3}):=\{U\in L^{q}(\mathbb{R}^{3})^{3};\;U|_{B}\in\mbox{ RM}\}. \tag{2.5}\] The underlying space we adopt is \[X_{q}(\mathbb{R}^{3}):=\{U\in L^{q}_{R}(\mathbb{R}^{3});\ \text{div}\ U=0\ \ \text{in}\ \mathbb{R}^{3}\}. \tag{2.6}\] It is convenient to define the map \[i:L^{q}_{R}(\mathbb{R}^{3})\to L^{q}(\Omega)^{3}\times\mathbb{R}^{3}\times \mathbb{R}^{3}\] or \[i:L^{q}_{R}(\mathbb{R}^{3})\to L^{q}(\Omega)^{3}\times\mathbb{C}^{3}\times \mathbb{C}^{3}\] for the associated resolvent problem, where the scalar field of the Lebesgue space is \(\mathbb{C}\) for the latter case, by \[\begin{split}& i:U\mapsto i(U):=(u,\eta,\omega)\,\text{ with}\\ & u=U|_{\Omega},\qquad\eta=\frac{1}{m}\int_{B}U(x)\rho\,dx,\qquad \omega=J^{-1}\int_{B}x\times U(x)\rho\,dx\end{split} \tag{2.7}\] with \(\rho\), \(m\) and \(J\) being given by (2.2), then we see from (2.3) that \[U|_{B}=\eta+\omega\times x.\] If in particular \(U\in X_{q}(\mathbb{R}^{3})\), we find \[\nu\cdot(u-\eta-\omega\times x)|_{\partial\Omega}=0 \tag{2.8}\] with \(\nu\) being the unit normal to \(\partial\Omega\) directed toward \(B\) (indeed, \(\nu=-x\) since \(\partial\Omega\) is the unit sphere) on account of \(\text{div}\ U=0\) in \(\mathbb{R}^{3}\). Conversely, if \(u\in L^{q}(\Omega)\) satisfies \(\text{div}\ u=0\) in \(\Omega\) (so that the normal trace \(\nu\cdot u|_{\partial\Omega}\) makes sense) and \(\nu\cdot(u-\eta-\omega\times x)|_{\partial\Omega}=0\) for some pair of \(\eta\), \(\omega\in\mathbb{R}^{3}\), then \[U:=u\chi_{\Omega}+(\eta+\omega\times x)\chi_{B}\in X_{q}(\mathbb{R}^{3}) \tag{2.9}\] with \(i(U)=(u,\eta,\omega)\), where \(\chi_{\Omega}\) and \(\chi_{B}\) denote the characteristic functions. In this way, elements of \(X_{q}(\mathbb{R}^{3})\) are understood through (2.7)-(2.9). 
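As a consistency check for (2.7): if \(U|_{B}=\eta+\omega\times x\), then the averages in (2.7) indeed recover \(\eta\) and \(\omega\). Since \(\int_{B}x\,\rho\,dx=0\) by symmetry of the ball, \[\frac{1}{m}\int_{B}(\eta+\omega\times x)\rho\,dx=\eta,\qquad J^{-1}\int_{B}x\times(\eta+\omega\times x)\rho\,dx=J^{-1}\int_{B}\big(|x|^{2}\,\mathbb{I}-x\otimes x\big)\omega\,\rho\,dx=\omega,\] where the second identity follows from (2.3) and the definition of \(J\) in (2.2).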
The space \(X_{q}(\mathbb{R}^{3})\), \(1<q<\infty\), is a Banach space endowed with norm \[\|U\|_{X_{q}(\mathbb{R}^{3})}:=\left(\|u\|^{q}_{q,\Omega}+\|(\eta_{u}+\omega_{u}\times x)\rho^{1/q}\|^{q}_{q,B}\right)^{1/q} \tag{2.10}\] for \(U\in X_{q}(\mathbb{R}^{3})\) and we have the duality relation \(X_{q}(\mathbb{R}^{3})^{*}=X_{q^{\prime}}(\mathbb{R}^{3})\), see (3.8) in Proposition 3.1 below, with the pairing \[\begin{split}\langle U,V\rangle_{\mathbb{R}^{3},\rho}&:=\int_{\Omega}u\cdot v\,dx+\int_{B}(\eta_{u}+\omega_{u}\times x)\cdot(\eta_{v}+\omega_{v}\times x)\rho\,dx\\ &=\langle u,v\rangle_{\Omega}+m\eta_{u}\cdot\eta_{v}+(J\omega_{u})\cdot\omega_{v}\end{split} \tag{2.11}\] (with obvious change if one should consider the complex rigid motions for the resolvent problem) for \(U\in X_{q}(\mathbb{R}^{3})\) and \(V\in X_{q^{\prime}}(\mathbb{R}^{3})\), where \(i(U)=(u,\eta_{u},\omega_{u})\), \(i(V)=(v,\eta_{v},\omega_{v})\), see (2.2), (2.4) and (2.7). It is clear that the pairing (2.11) is defined for \(U\in L^{q}_{R}(\mathbb{R}^{3})\) and \(V\in L^{q^{\prime}}_{R}(\mathbb{R}^{3})\) as well. Notice that the constant weight \(\rho\) is involved in the integral over the rigid body \(B\), see (2.10)-(2.11); nonetheless, it is obvious from (2.7) that the following three quantities are equivalent for \(U\in X_{q}(\mathbb{R}^{3}),\;i(U)=(u,\eta_{u},\omega_{u})\): \[\|U\|_{X_{q}(\mathbb{R}^{3})}\sim\|U\|_{q,\mathbb{R}^{3}}\sim\|u\|_{q,\Omega}+|\eta_{u}|+|\omega_{u}|, \tag{2.12}\] where the symbol \(\sim\) means that inequalities in both directions hold with some constants. Thus the norm (2.10) does not play any role in discussing the asymptotic behavior of solutions to (1.5) and (1.9). Nevertheless, the reason for introducing (2.10) is to describe the energy \[\|U\|_{X_{2}(\mathbb{R}^{3})}^{2}=\langle U,U\rangle_{\mathbb{R}^{3},\rho}=\|u\|_{2,\Omega}^{2}+m|\eta_{u}|^{2}+(J\omega_{u})\cdot\omega_{u} \tag{2.13}\] with \((J\omega_{u})\cdot\omega_{u}=\frac{2m}{5}|\omega_{u}|^{2}\) when \(B\) is a ball. In fact, the energy (2.13) fulfills the identity in the desired form along the time evolution of the linearized system (1.9), see (4.25)-(4.26) in subsection 4.2. The other reason is to ensure several duality relations, see (3.85), (3.87) and (3.97). Even for general \(U,\,V\in L^{q}(\mathbb{R}^{3}),\,1<q<\infty\), it is sometimes (see subsection 4.6) convenient to introduce the other norm and pairing \[\|U\|_{q,(\mathbb{R}^{3},\rho)}:=\left(\int_{\mathbb{R}^{3}}|U(x)|^{q}\left(\rho\chi_{B}+\chi_{\Omega}\right)dx\right)^{1/q},\quad\langle U,V\rangle_{\mathbb{R}^{3},\rho}:=\int_{\mathbb{R}^{3}}(U\cdot V)(\rho\chi_{B}+\chi_{\Omega})\,dx, \tag{2.14}\] which are consistent with (2.10)-(2.11), so that \(L^{q}_{R}(\mathbb{R}^{3})\) can be regarded as a subspace of \(L^{q}(\mathbb{R}^{3})^{3}\). By [10, Proposition 3.3] it is shown that the class \[\mathcal{E}(\mathbb{R}^{3}):=\{U\in C^{\infty}_{0,\sigma}(\mathbb{R}^{3});\;DU=0\;\text{in}\;B\}=C^{\infty}_{0}(\mathbb{R}^{3})^{3}\cap X_{q}(\mathbb{R}^{3}) \tag{2.15}\] is dense in \(X_{q}(\mathbb{R}^{3})\), \(1<q<\infty\). An alternative proof will be given in Proposition 3.1 as well. Let \(U\in X_{q}(\mathbb{R}^{3})\) satisfy, in particular, \(u\in W^{1,q}(\Omega)\), where \(i(U)=(u,\eta,\omega)\), see (2.7). Then, \(U\in W^{1,q}(\mathbb{R}^{3})\) if and only if \[u|_{\partial\Omega}=\eta+\omega\times x.
\tag{2.16}\] Moreover, under the condition (2.16) it holds that \[\nabla U=(\nabla u)\chi_{\Omega}+M_{\omega}\chi_{B},\qquad M_{\omega}:=\left( \begin{array}{ccc}0&-\omega_{3}&\omega_{2}\\ \omega_{3}&0&-\omega_{1}\\ -\omega_{2}&\omega_{1}&0\end{array}\right). \tag{2.17}\] Let \(1<q<\infty\). With the space \(X_{q}(\mathbb{R}^{3})\) at hand as the one in which we are going to look for the monolithic velocity (2.9), there are two ways to eliminate the pressure. They are similar but slightly different. One is the approach within the space \(L^{q}_{R}(\mathbb{R}^{3})\) by Silvestre [43, 44], who developed the case \(q=2\). In Proposition 3.1 below we will establish the decomposition \[L^{q}_{R}(\mathbb{R}^{3})=X_{q}(\mathbb{R}^{3})\oplus Z_{q}(\mathbb{R}^{3}) \tag{2.18}\] with \[\begin{split} Z_{q}(\mathbb{R}^{3})=\Big{\{}V\in L^{q}_{R}( \mathbb{R}^{3});\;V|_{\Omega}=\nabla p,\;p\in\widehat{W}^{1,q}(\Omega),\;V|_{ B}=\eta+\omega\times x,\\ \eta=\frac{-1}{m}\int_{\partial\Omega}p\nu\,d\sigma,\;\omega=-J^{- 1}\int_{\partial\Omega}x\times(p\nu)\,d\sigma\Big{\}}\end{split} \tag{2.19}\] as well as \(X_{q}(\mathbb{R}^{3})^{\perp}=Z_{q^{\prime}}(\mathbb{R}^{3})\). The latter relation means that \(Z_{q^{\prime}}(\mathbb{R}^{3})\) is the annihilator of \(X_{q}(\mathbb{R}^{3})\) with respect to \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{3},\rho}\), see (2.11). Note that, for the element of the space \(Z_{q}(\mathbb{R}^{3})\), \(p\) is determined uniquely up to constant which, however, does not change \(\eta\) and \(\omega\) since \(\int_{\partial\Omega}\nu\,d\sigma=\int_{\partial\Omega}x\times\nu\,d\sigma=0\). When \(q=2\), (2.18) was already proved by Silvestre [43], who studied her problem within, instead of \(L^{q}_{R}(\mathbb{R}^{3})\), the class \(L^{2}(\Omega)+\mathrm{RM}\) in which the flow behaves like a rigid motion at infinity in the exterior domain \(\Omega\). By \[\mathbb{P}=\mathbb{P}_{q}:L^{q}_{R}(\mathbb{R}^{3})\to X_{q}(\mathbb{R}^{3}) \tag{2.20}\] we denote the bounded projection associated with (2.18). Then we have the relation \(\mathbb{P}^{*}_{q}=\mathbb{P}_{q^{\prime}}\) with respect to \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{3},\rho}\) in which the constant weight \(\rho\) is involved, that is, in the sense of (3.10), see Proposition 3.1. The other way is to use the following decomposition developed by Wang and Xin [51, Theorem 2.2], see also Dashti and Robinson [6] for the case \(q=2\): \[L^{q}(\mathbb{R}^{3})=G^{(1)}_{q}(\mathbb{R}^{3})\oplus L^{q}_{\sigma}( \mathbb{R}^{3})=G^{(1)}_{q}(\mathbb{R}^{3})\oplus G^{(2)}_{q}(\mathbb{R}^{3} )\oplus X_{q}(\mathbb{R}^{3}) \tag{2.21}\] where \[L^{q}_{\sigma}(\mathbb{R}^{3})=\{V\in L^{q}(\mathbb{R}^{3});\ \mathrm{div}\ V=0\ \mathrm{in}\ \mathbb{R}^{3}\},\] \[G^{(1)}_{q}(\mathbb{R}^{3})=\{\nabla p_{1}\in L^{q}(\mathbb{R}^ {3});\ p_{1}\in\widehat{W}^{1,q}(\mathbb{R}^{3})\},\] \[G^{(2)}_{q}(\mathbb{R}^{3})=\Big{\{}V\in L^{q}(\mathbb{R}^{3}); \ \mathrm{div}\ V=0\ \mathrm{in}\ \mathbb{R}^{3},\ V|_{\Omega}=\nabla p_{2},\ p_{2}\in\widehat{W}^{1,q}( \Omega),\] \[\qquad\qquad\int_{B}V\,dy=-\int_{\partial\Omega}p_{2}\nu\,d\sigma,\ \int_{B}x\times V\,dy=-\int_{\partial\Omega}x\times(p_{2}\nu)\,d\sigma\Big{\}}.\] Here, the density \(\rho\) is not involved in the conditions for \(V\in G^{(2)}_{q}(\mathbb{R}^{3})\), so that \(\langle U,V\rangle_{\mathbb{R}^{3}}=0\) for all \(U\in X_{q}(\mathbb{R}^{3})\) and \(V\in G^{(2)}_{q^{\prime}}(\mathbb{R}^{3})\). 
One cannot avoid this situation because \(L^{q}_{\sigma}(\mathbb{R}^{3})^{\perp}=G^{(1)}_{q^{\prime}}(\mathbb{R}^{3})\) holds with respect to the standard pairing \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{3}}\). As a consequence, the bounded projection \[\mathbb{Q}=\mathbb{Q}_{q}:L^{q}(\mathbb{R}^{3})\to X_{q}(\mathbb{R}^{3})\] associated with (2.21) fulfills the relation \(\mathbb{Q}^{*}_{q}=\mathbb{Q}_{q^{\prime}}\) in the sense that \[\langle\mathbb{Q}_{q}U,V\rangle_{\mathbb{R}^{3}}=\langle U,\mathbb{Q}_{q^{ \prime}}V\rangle_{\mathbb{R}^{3}}\] for all \(U\in L^{q}(\mathbb{R}^{3})\) and \(V\in L^{q^{\prime}}(\mathbb{R}^{3})\) with respect to the standard pairing \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{3}}\), which should be compared with (3.10) in Proposition 3.1. The fact that \(\mathbb{Q}\) is not symmetric with respect to \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{3},\rho}\) unless \(\rho=1\) is not consistent with duality arguments especially in subsections 3.6, 3.7, 4.3 and 4.6. For this reason, the latter way is not convenient for us and thus the decomposition (2.18) is preferred in this paper. The classical Fujita-Kato projection \[\mathbb{P}_{0}=\mathbb{P}_{0,q}:L^{q}(\mathbb{R}^{3})\to L^{q}_{\sigma}( \mathbb{R}^{3}) \tag{2.22}\] is well-known and it is described in terms of the Riesz transform \(\mathcal{R}=(-\Delta)^{-1/2}\nabla\) as \(\mathbb{P}_{0}=\mathcal{I}+\mathcal{R}\otimes\mathcal{R}\) with \(\mathcal{I}\) being the identity operator. Notice the relation \(\mathbb{P}^{*}_{0,q}=\mathbb{P}_{0,q^{\prime}}\) with respect to \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{3}}\). The projection \(\mathbb{P}_{0}\) is used in subsections 4.1 and 4.5. Let us call (1.9) with \(\{u_{b},\eta_{b}\}=\{0,0\}\) the Stokes-structure system (although it is called the fluid-structure system in the existing literature); that is, it is written as \[\begin{split}&\partial_{t}u=\Delta u-\nabla p,\qquad\mathrm{div}\ u=0\quad\mathrm{in}\ \Omega\times(0,\infty),\\ & u|_{\partial\Omega}=\eta+\omega\times x,\qquad u\to 0\quad\mathrm{as}\ |x|\to\infty,\\ & m\frac{d\eta}{dt}+\int_{\partial\Omega}S(u,p)\nu\,d\sigma=0,\\ & J\frac{d\omega}{dt}+\int_{\partial\Omega}x\times S(u,p)\nu\,d \sigma=0,\end{split} \tag{2.23}\] subject to the initial conditions at \(s=0\) (since (2.23) is autonomous). This problem can be formulated as the evolution equation of the form \[\frac{dU}{dt}+AU=0 \tag{2.24}\] in the space \(X_{q}(\mathbb{R}^{3})\), where the velocities of the fluid and the rigid body are unified as a velocity \(U\) in the whole space \(\mathbb{R}^{3}\) through (2.9). Here, the operator \(A\), to which we refer as the Stokes-structure operator in this paper, is defined by \[D_{q}(A) =\big{\{}U\in X_{q}(\mathbb{R}^{3})\cap W^{1,q}(\mathbb{R}^{3}); \;u=U|_{\Omega}\in W^{2,q}(\Omega)\big{\}}, \tag{2.25}\] \[AU =\mathbb{P}\mathcal{A}U,\] \[\mathcal{A}U =\left\{\begin{aligned} &-\text{div}\ (2Du)=-\Delta u, \qquad x\in\Omega,\\ &\frac{1}{m}\int_{\partial\Omega}(2Du)\nu\,d\sigma+\left(J^{-1} \int_{\partial\Omega}y\times(2Du)\nu\,d\sigma\right)\times x,\qquad x\in B, \end{aligned}\right.\] where \(\mathbb{P}\) is the projection (2.20). Notice that the domain \(D_{q}(A)\) is dense since so is \(\mathcal{E}(\mathbb{R}^{3})\) and that the boundary condition (2.16) is hidden for \(U\in D_{q}(A)\), where \((u,\eta,\omega)=i(U)\), see (2.7). 
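For orientation, a formal integration by parts (assuming sufficient smoothness and decay, and (2.16) for both arguments) reveals the symmetry underlying \(A\): for \(i(U)=(u,\eta_{u},\omega_{u})\) and \(i(V)=(v,\eta_{v},\omega_{v})\), \[\langle\mathcal{A}U,V\rangle_{\mathbb{R}^{3},\rho}=\big\langle-\mathrm{div}\,(2Du),v\big\rangle_{\Omega}+\eta_{v}\cdot\int_{\partial\Omega}(2Du)\nu\,d\sigma+\omega_{v}\cdot\int_{\partial\Omega}x\times(2Du)\nu\,d\sigma=2\int_{\Omega}Du:Dv\,dx,\] since the boundary integral produced over \(\Omega\) is cancelled by the body terms on account of \(v|_{\partial\Omega}=\eta_{v}+\omega_{v}\times x\) and \((\omega_{v}\times x)\cdot\{(2Du)\nu\}=\omega_{v}\cdot\{x\times(2Du)\nu\}\). As \(\langle AU,V\rangle_{\mathbb{R}^{3},\rho}=\langle\mathcal{A}U,V\rangle_{\mathbb{R}^{3},\rho}\) for \(V\in X_{q^{\prime}}(\mathbb{R}^{3})\), see (3.10), this symmetric form is behind the self-adjointness of \(A\) on \(X_{2}(\mathbb{R}^{3})\) mentioned below.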
In [10] the other operator \(\widetilde{A}U=\mathbb{Q}\mathcal{A}U\) with the same domain \(D_{q}(\widetilde{A})=D_{q}(A)\) acting on the same space \(X_{q}(\mathbb{R}^{3})\) is defined by use of the other projection \(\mathbb{Q}\) associated with (2.21). Due to [10, Proposition 3.4], given \(F\in X_{q}(\mathbb{R}^{3})\), the resolvent problem \[(\lambda+\widetilde{A})U=F\qquad\text{in}\ X_{q}(\mathbb{R}^{3}) \tag{2.26}\] is equivalent to the Stokes-structure resolvent system \[\lambda u-\Delta u+\nabla p=f,\qquad\text{div}\ u=0\quad\text{in }\Omega, \tag{2.27}\] \[u|_{\partial\Omega}=\eta+\omega\times x,\qquad u\to 0\quad \text{as}\ |x|\to\infty,\] \[\lambda\eta+\frac{1}{m}\int_{\partial\Omega}S(u,p)\nu\,d\sigma=\kappa,\] \[\lambda\omega+J^{-1}\int_{\partial\Omega}x\times S(u,p)\nu\,d \sigma=\mu,\] where \((f,\kappa,\mu)=i(F)\), see (2.7), and the associated pressure \(p\) is appropriately determined. Likewise, as we will show in Proposition 3.2, our resolvent problem \[(\lambda+A)U=F\qquad\text{in}\ X_{q}(\mathbb{R}^{3}) \tag{2.28}\] is also equivalent to (2.27). Moreover, via uniqueness of solutions to the problem (2.27) with \(\lambda=1\), we find that \(A=\widetilde{A}\) in Proposition 3.3 and, thereby, that several results for \(\widetilde{A}\) established by [10] continue to hold for \(A\) as well. We note that, as observed first by Takahashi and Tucsnak [46], the evolution equation (2.24) is also shown to be equivalent to the system (2.23) in the similar fashion to Proposition 3.2. If we denote by \(A_{q}\) the operator \(A\) with \(D_{q}(A)\) acting on \(X_{q}(\mathbb{R}^{3})\), we have the duality relation \[A_{q}^{*}=A_{q^{\prime}} \tag{2.29}\] with respect to \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{3},\rho}\) for every \(q\in(1,\infty)\) and thus \(A\) is closed. The duality (2.29) is observed from (3.87) below combined with the fact that \(\lambda+A\) with \(\lambda>0\) is surjective. This surjectivity follows from analysis of the resolvent mentioned just below. In particular, it is a positive self-adjoint operator on \(X_{2}(\mathbb{R}^{3})\) as shown essentially by Silvestre [44, Theorem 4.1] and by Takahashi and Tucsnak [46, Proposition 4.2]. Given \(F\in X_{q}(\mathbb{R}^{3})\), consider the resolvent problem (2.28). Then, for every \(\varepsilon\in(0,\pi/2)\), there is a constant \(C_{\varepsilon}>0\) such that \(\mathbb{C}\setminus(-\infty,0]\subset\rho(-A)\) being the resolvent set of the operator \(-A\) subject to \[\|(\lambda+A)^{-1}F\|_{q,\mathbb{R}^{3}}\leq\frac{C_{\varepsilon}}{|\lambda|} \|F\|_{q,\mathbb{R}^{3}} \tag{2.30}\] for all \(\lambda\in\Sigma_{\varepsilon}\) and \(F\in X_{q}(\mathbb{R}^{3})\), which is due to Ervedoza, Maity and Tucsnak [10, Theorem 6.1] and implies that the operator \(-A\) generates a bounded analytic semigroup \(\{e^{-tA};\,t\geq 0\}\) on \(X_{q}(\mathbb{R}^{3})\) for every \(q\in(1,\infty)\), that is an improvement of Wang and Xin [51], where \[\Sigma_{\varepsilon}:=\{\lambda\in\mathbb{C}\setminus\{0\};\;|\arg\lambda| \leq\pi-\varepsilon\}. \tag{2.31}\] Hence, the fractional powers \(A^{\alpha}\) with \(\alpha>0\) are well-defined as closed operators on \(X_{q}(\mathbb{R}^{3})\). Estimate (2.30) especially for large \(|\lambda|\) will be revisited in subsection 3.3. 
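Let us also record a standard consequence of (2.30) from the general theory of analytic semigroups: representing \(e^{-tA}\) by the Dunford integral \(e^{-tA}=\frac{1}{2\pi i}\int_{\Gamma}e^{\lambda t}(\lambda+A)^{-1}\,d\lambda\) over a suitable contour \(\Gamma\subset\Sigma_{\varepsilon}\), one obtains the smoothing estimates \[\|e^{-tA}F\|_{q,\mathbb{R}^{3}}\leq C\|F\|_{q,\mathbb{R}^{3}},\qquad\|Ae^{-tA}F\|_{q,\mathbb{R}^{3}}\leq\frac{C}{t}\,\|F\|_{q,\mathbb{R}^{3}}\] for all \(t>0\) and \(F\in X_{q}(\mathbb{R}^{3})\); estimates of this kind are behind the smoothing properties discussed in section 3.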
### Main results

As the basic motion, we fix a solution \(\{u_{b},\eta_{b},\omega_{b}\}\) to the problem (1.1) on the whole time axis \(\mathbb{R}\) together with the associated pressure \(p_{b}\) and set \[U_{b}(x,t)=u_{b}(x,t)\chi_{\Omega}(x)+\big{(}\eta_{b}(t)+\omega_{b}(t)\times x\big{)}\chi_{B}(x). \tag{2.32}\] Let us assume that \[\left\{\begin{array}{l}u_{b}\in L^{\infty}(\mathbb{R};\,L^{q_{0}}(\Omega)\cap L^{\infty}(\Omega))\mbox{ with some }q_{0}\in(1,3),\quad\mbox{div }u_{b}=0\mbox{ in }\Omega,\\ \eta_{b},\,\omega_{b}\in L^{\infty}(\mathbb{R};\,\mathbb{R}^{3}),\\ \nu\cdot(u_{b}-\eta_{b}-\omega_{b}\times x)|_{\partial\Omega}=0,\end{array}\right. \tag{2.33}\] then we see that \(U_{b}\in L^{\infty}(\mathbb{R};\,X_{q}(\mathbb{R}^{3}))\) for every \(q\in[q_{0},\infty)\). Let us also mention that the assumption on \(\omega_{b}\) is used merely in the proof of Proposition 4.1 for the whole space problem without the body. We further make the assumption \[u_{b}\in C^{\theta}(\mathbb{R};\,L^{\infty}(\Omega)),\quad\eta_{b}\in C^{\theta}(\mathbb{R};\,\mathbb{R}^{3})\;\mbox{ with some }\theta\in(0,1). \tag{2.34}\] Set \[\begin{split}\|U_{b}\|&:=\sup_{t\in\mathbb{R}}\big{(}\|u_{b}(t)\|_{q_{0},\Omega}+\|u_{b}(t)\|_{\infty,\Omega}+|\eta_{b}(t)|+|\omega_{b}(t)|\big{)},\\ [U_{b}]_{\theta}&:=\sup_{t>s}\frac{\|u_{b}(t)-u_{b}(s)\|_{\infty,\Omega}+|\eta_{b}(t)-\eta_{b}(s)|}{(t-s)^{\theta}}.\end{split} \tag{2.35}\] **Remark 2.1**.: _The condition at the boundary \(\partial\Omega\) in (2.33) may be rewritten as \(\nu\cdot(u_{b}-\eta_{b})|_{\partial\Omega}=0\) since \(\nu\cdot(\omega_{b}\times x)=0\) at the sphere always holds. In view of the boundary condition in (1.1), one can deal with the case of tangential velocity \(u_{*}\), that is, \(\nu\cdot u_{*}|_{\partial\Omega}=0\) (under the condition that the normal trace of \(u_{*}\) is well-defined). For the case of nonzero but small \(\nu\cdot u_{*}\), we will later discuss a possible analysis in Remark 3.5. In this generalized case, we have more terms in the integrand of the boundary integral for the equations of the rigid body in (1.1) and, therefore, those terms are involved when introducing the operator \(B(t)\) below, see (2.37)._ Examples of the basic motion for which the assumptions (2.33)-(2.34) could be met will be discussed briefly in subsection 2.4. To study the Oseen-structure system (1.9), let us introduce the family \(\{L_{\pm}(t);\,t\in\mathbb{R}\}\) of the Oseen-structure operators on \(X_{q}(\mathbb{R}^{3})\), \(1<q<\infty\), by \[D_{q}(L_{\pm}(t))=D_{q}(A),\qquad L_{\pm}(t)U=AU\pm B(t)U, \tag{2.36}\] where \(A\) is the Stokes-structure operator (2.25) and \[B(t)U=\mathbb{P}\big{[}\{(u_{b}(t)-\eta_{b}(t))\cdot\nabla u\}\chi_{\Omega}\big{]} \tag{2.37}\] for \(u=U|_{\Omega}\) with \(\mathbb{P}\) being the projection given by (2.20). As mentioned in the preceding subsection and as we will show in Proposition 3.3, our Stokes-structure operator \(A\) coincides with the operator \(\widetilde{A}\) studied in [10] by Ervedoza, Maity and Tucsnak.
With the aid of the elliptic estimate due to [10, Proposition 7.3] \[\|u\|_{W^{2,q}(\Omega)}\leq C\big{(}\|AU\|_{q,\Omega}+\|U\|_{q,\mathbb{R}^{3}}\big{)},\qquad U\in D_{q}(A),\;u=U|_{\Omega}, \tag{2.38}\] we find \[\begin{split}\|\nabla u\|_{q,\Omega}&\leq C\|u\|_{W^{2,q}(\Omega)}^{1/2}\|u\|_{q,\Omega}^{1/2}\\ &\leq C\big{(}\|AU\|_{q,\Omega}+\|U\|_{q,\mathbb{R}^{3}}\big{)}^{1/2}\|u\|_{q,\Omega}^{1/2}\\ &\leq C\left(\|AU\|_{q,\mathbb{R}^{3}}^{1/2}\|U\|_{q,\mathbb{R}^{3}}^{1/2}+\|U\|_{q,\mathbb{R}^{3}}\right)\end{split} \tag{2.39}\] which together with (2.33)-(2.35) leads to \[\|B(t)U\|_{q,\mathbb{R}^{3}}\leq C\|(u_{b}(t)-\eta_{b}(t))\cdot\nabla u\|_{q,\Omega}\leq C\|U_{b}\|\left(\|AU\|_{q,\mathbb{R}^{3}}^{1/2}\|U\|_{q,\mathbb{R}^{3}}^{1/2}+\|U\|_{q,\mathbb{R}^{3}}\right) \tag{2.40}\] for all \(U\in D_{q}(A)\). Thus, one justifies the relation \(D_{q}(L_{\pm}(t))=D_{q}(A)\), see (2.36), on which we have \[\begin{split}\|L_{\pm}(t)U\|_{q,\mathbb{R}^{3}}&\leq(1+C\|U_{b}\|)\|AU\|_{q,\mathbb{R}^{3}}+C\|U_{b}\|\|U\|_{q,\mathbb{R}^{3}},\\ \|AU\|_{q,\mathbb{R}^{3}}&\leq 2\|L_{\pm}(t)U\|_{q,\mathbb{R}^{3}}+C(\|U_{b}\|^{2}+\|U_{b}\|)\|U\|_{q,\mathbb{R}^{3}}.\end{split} \tag{2.41}\] Moreover, it follows immediately from (2.25) that \[\|AU\|_{q,\mathbb{R}^{3}}\leq C\|u\|_{W^{2,q}(\Omega)},\qquad U\in D_{q}(A),\,u=U|_{\Omega},\] which combined with (2.38) tells us that \(\|\cdot\|_{W^{2,q}(\Omega)}+\|\cdot\|_{q,\mathbb{R}^{3}}\) is equivalent to the graph norm of \(A\) and, therefore, also to that of \(L_{\pm}(t)\) uniformly in \(t\in\mathbb{R}\) on account of (2.41); in particular, \[\|u\|_{W^{2,q}(\Omega)}\leq C\big{(}\|L_{\pm}(t)U\|_{q,\mathbb{R}^{3}}+\|U\|_{q,\mathbb{R}^{3}}\big{)},\qquad u=U|_{\Omega}, \tag{2.42}\] for all \(U\in D(L_{\pm}(t))\) with some \(C>0\) (involving \(\|U_{b}\|\)) independent of \(t\in\mathbb{R}\). The initial value problems for (1.9) and (1.5) are formulated, respectively, as \[\frac{dU}{dt}+L_{+}(t)U=0,\quad t\in(s,\infty);\qquad U(s)=U_{0} \tag{2.43}\] and \[\frac{dU}{dt}+L_{+}(t)U=H(U),\quad t\in(s,\infty);\qquad U(s)=U_{0} \tag{2.44}\] where \[U_{0}=u_{0}\chi_{\Omega}+(\eta_{0}+\omega_{0}\times x)\chi_{B} \tag{2.45}\] and \[H(U)=\mathbb{P}\left[\big{\{}(\eta-u)\cdot\nabla(u_{b}+u)\big{\}}\chi_{\Omega}\right] \tag{2.46}\] with \((u,\eta,\omega)=i(U)\), see (2.7). Recall that the assumption \(U_{0}\in X_{q}(\mathbb{R}^{3})\) involves the compatibility condition \(\nu\cdot(u_{0}-\eta_{0}-\omega_{0}\times x)|_{\partial\Omega}=0\). The first main result of this paper is the following theorem on \(L^{q}\)-\(L^{r}\) estimates of the evolution operator generated by \(L_{+}(t)\).
**Theorem 2.1**.: _Suppose (2.33) and (2.34); then the operator family \(\{L_{+}(t);\,t\in\mathbb{R}\}\) generates an evolution operator \(\{T(t,s);\,-\infty<s\leq t<\infty\}\) on \(X_{q}(\mathbb{R}^{3})\) for every \(q\in(1,\infty)\) with the following properties:_ \[T(t,\tau)T(\tau,s)=T(t,s)\quad(s\leq\tau\leq t),\quad T(t,t)=I\quad\text{in}\ \mathcal{L}(X_{q}(\mathbb{R}^{3})), \tag{2.47}\] \[(t,s)\mapsto T(t,s)F\in X_{q}(\mathbb{R}^{3})\ \text{is continuous for}\ F\in X_{q}(\mathbb{R}^{3}), \tag{2.48}\] \[\begin{cases}T(\cdot,s)F\in C^{1}((s,\infty);\,X_{q}(\mathbb{R}^{3}))\cap C((s,\infty);\,D_{q}(A)),\\ \partial_{t}T(t,s)F+L_{+}(t)T(t,s)F=0\quad\text{for}\ F\in X_{q}(\mathbb{R}^{3}),\,t\in(s,\infty),\end{cases} \tag{2.49}\] \[\begin{cases}T(t,\cdot)F\in C^{1}((-\infty,t);\,X_{q}(\mathbb{R}^{3})),\\ \partial_{s}T(t,s)F=T(t,s)L_{+}(s)F\quad\text{for}\ F\in D_{q}(A),\,s\in(-\infty,t).\end{cases} \tag{2.50}\] _Furthermore, if \(\|U_{b}\|\leq\alpha_{j}\) is small enough (to be precise, see below about how small it is for each item \(j=1,2,3,4\)), then the evolution operator \(T(t,s)\) enjoys the following estimates with some constant \(C=C(q,r,\alpha_{j},\beta_{0},\theta)>0\) whenever \([U_{b}]_{\theta}\leq\beta_{0}\), where \(\|U_{b}\|\) and \([U_{b}]_{\theta}\) are given by (2.35) and \(\beta_{0}>0\) is arbitrary._ 1. _Let_ \(q\in(1,\infty)\) _and_ \(r\in[q,\infty]\)_, then_ \[\|T(t,s)F\|_{r,\mathbb{R}^{3}}\leq C(t-s)^{-(3/q-3/r)/2}\|F\|_{q,\mathbb{R}^{3}} \tag{2.51}\] _for all_ \((t,s)\) _with_ \(t>s\) _and_ \(F\in X_{q}(\mathbb{R}^{3})\)_. To be precise, there is a constant_ \(\alpha_{1}=\alpha_{1}(q_{0})>0\) _such that if_ \(\|U_{b}\|\leq\alpha_{1}\)_, then the assertion above holds for every_ \(q\in(1,\infty)\) _and_ \(r\in[q,\infty]\)_. Estimate (2.51) holds true for the adjoint evolution operator_ \(T(t,s)^{*}\) _as well under the same smallness of_ \(\|U_{b}\|\) _as above._ 2. _Let_ \(1<q\leq r<\infty\)_, then_ \[\|\nabla T(t,s)F\|_{r,\mathbb{R}^{3}}\leq C(t-s)^{-1/2-(3/q-3/r)/2}(1+t-s)^{\max\{(1-3/r)/2,\,0\}}\|F\|_{q,\mathbb{R}^{3}} \tag{2.52}\] _for all_ \((t,s)\) _with_ \(t>s\) _and_ \(F\in X_{q}(\mathbb{R}^{3})\)_. To be precise, given_ \(r_{1}\in(1,4/3]\)_, there is a constant_ \(\alpha_{2}=\alpha_{2}(r_{1},q_{0})\in(0,\alpha_{1}]\) _such that if_ \(\|U_{b}\|\leq\alpha_{2}\)_, then the assertion above holds for every_ \(r\in[r_{1},\infty)\) _and_ \(q\in(1,r]\)_, where_ \(\alpha_{1}=\alpha_{1}(q_{0})\) _is the constant given in the previous item. Estimate (2.52) holds true for the adjoint evolution operator_ \(T(t,s)^{*}\) _as well under the same smallness of_ \(\|U_{b}\|\) _as above._ 3. _Let_ \(q\in(1,\infty)\) _and_ \(r\in[q,\infty]\)_, then_ \[\|T(t,s)\mathbb{P}\mathrm{div}\ F\|_{r,\mathbb{R}^{3}}\leq C(t-s)^{-1/2-(3/q-3/r)/2}(1+t-s)^{\max\{(3/q-2)/2,\,0\}}\|F\|_{q,\mathbb{R}^{3}} \tag{2.53}\] _for all_ \((t,s)\) _with_ \(t>s\) _and_ \(F\in L^{q}(\mathbb{R}^{3})^{3\times 3}\) _with_ \((F\nu)|_{\partial\Omega}=0\) _as well as_ \(\mathrm{div}\ F\in L^{p}_{R}(\mathbb{R}^{3})\) _with some_ \(p\in(1,\infty)\)_. To be precise, given_ \(r_{0}\in[4,\infty)\)_, if_ \(\|U_{b}\|\leq\alpha_{3}\) _with_ \(\alpha_{3}(r_{0},q_{0}):=\alpha_{2}(r_{0}^{\prime},q_{0})\)_, where_ \(\alpha_{2}\) _is the constant given in the previous item, then the assertion above holds for every_ \(q\in(1,r_{0}]\) _and_ \(r\in[q,\infty]\)_. 4.
_Let_ \(1<q\leq r<\infty\)_, then_ \[\begin{split}&\|\nabla T(t,s)\mathbb{P}\mathrm{div}\ F\|_{r,\mathbb{R}^{3}}\\ &\leq C(t-s)^{-1-(3/q-3/r)/2}(1+t-s)^{\max\{(1-3/r)/2,\,0\}+\max\{(3/q-2)/2,\,0\}}\|F\|_{q,\mathbb{R}^{3}}\end{split} \tag{2.54}\] _for all_ \((t,s)\) _with_ \(t>s\) _and_ \(F\in L^{q}(\mathbb{R}^{3})^{3\times 3}\) _with_ \((F\nu)|_{\partial\Omega}=0\) _as well as_ \(\mathrm{div}\ F\in L^{p}_{R}(\mathbb{R}^{3})\) _with some_ \(p\in(1,\infty)\)_. To be precise, given_ \(r_{0}\in[4,\infty)\) _and_ \(r_{1}\in(1,4/3]\)_, if_ \(\|U_{b}\|\leq\alpha_{4}\) _with_ \(\alpha_{4}(r_{0},r_{1},q_{0}):=\alpha_{2}\big{(}\min\{r_{0}^{\prime},r_{1}\},q_{0}\big{)}\)_, where_ \(\alpha_{2}\) _is the constant given in item_ 2_, then the assertion above holds for_ \(1<q\leq r<\infty\) _with_ \(q\in(1,r_{0}]\) _as well as_ \(r\in[r_{1},\infty)\)_._ **Remark 2.2**.: _According to (2.52), the rate of decay is given by_ \[\|\nabla T(t,s)F\|_{r,\mathbb{R}^{3}}\leq\left\{\begin{array}{ll}C(t-s)^{-1/2-(3/q-3/r)/2}\|F\|_{q,\mathbb{R}^{3}}&\quad(r\leq 3),\\ C(t-s)^{-3/2q}\|F\|_{q,\mathbb{R}^{3}}&\quad(r>3),\end{array}\right. \tag{2.55}\] _for all \((t,s)\) with \(t-s>2\) and \(F\in X_{q}(\mathbb{R}^{3})\). This rate is the same as the one for the Stokes and Oseen semigroups in exterior domains due to Iwashita [33], Kobayashi and Shibata [34], Maremonti and Solonnikov [37], see also [28] for the generalized Oseen evolution operator even with rotating effect in which the time-dependent motion of a rigid body is prescribed. It is known ([37, 24]) that the rate (2.55) is sharp for the Stokes semigroup in exterior domains, whereas the optimality is not clear for the problem under consideration. In fact, one of the points of their argument is that the steady Stokes flow in exterior domains does not possess fine summability such as \(L^{3}(\Omega)\) at infinity even for the external force \(f\in C_{0}^{\infty}(\Omega)\) unless \(\int_{\partial\Omega}S(u,p)\nu\,d\sigma=0\). Compared with that, if \(f\chi_{\Omega}+(\kappa+\mu\times x)\chi_{B}=\mathrm{div}\ F\) with \(F\) satisfying the conditions in item 3 of Theorem 2.1 for \(T(t,s)\mathbb{P}\mathrm{div}\) and if \(U\) is the steady Stokes-structure solution to \(AU=\mathbb{P}\mathrm{div}\ F\), then we have_ \[\int_{\partial\Omega}S(u,p)\nu\,d\sigma=m\kappa=\int_{B}(\kappa+\mu\times x)\rho\,dx=-\int_{\partial\Omega}(F\nu)\rho\,dx=0,\] _yielding better decay of \(u=U|_{\Omega}\), where \(p\) is the associated pressure, and thus the desired rate \((2.55)_{1}\) even with \(r>3\) for \(e^{-(t-s)A}\) (the case \(U_{b}=0\)) does not lead to any contradiction, unlike the Stokes semigroup in exterior domains._ **Remark 2.3**.: _Since we know from (2.47), (2.51) and (2.49) that \(T(t,s)F\in D_{r}(A)\subset W^{1,r}(\mathbb{R}^{3})\) for all \((t,s)\) with \(t>s\) and \(F\in X_{q}(\mathbb{R}^{3})\), the estimate of \(\|\nabla T(t,s)F\|_{r,\mathbb{R}^{3}}\) makes sense. Of course, \(\nabla T(t,s)F\) of the form (2.17) never belongs to \(X_{r}(\mathbb{R}^{3})\). By (2.7) with \(U=T(t,s)F\), we have_ \[\begin{split}\|\nabla T(t,s)F\|_{r,B}&\leq C|\omega(t)|\leq C\|T(t,s)F\|_{1,B}\\ &\leq\left\{\begin{array}{ll}C\|F\|_{q,\mathbb{R}^{3}}&(t-s\leq 2),\\ C(t-s)^{-3/2q}\|F\|_{q,\mathbb{R}^{3}}&(t-s>2),\end{array}\right.\end{split}\] _for all \(F\in X_{q}(\mathbb{R}^{3})\) on account of (2.51).
Hence, the estimate of \(\|\nabla T(t,s)F\|_{r,\Omega}\) over the fluid region is dominant in the sense that it determines (2.52)._ **Remark 2.4**.: _The adjoint evolution operator \(T(t,s)^{*}\) is studied in subsection 3.7. The duality argument shows (2.53) of the operator \(T(t,s)\mathbb{P}\mathrm{div}\), which plays a role in the proof of Theorem 2.2 below and corresponds to Lemma 8.1 of [10] in which the additional condition \(F|_{B}=0\) is imposed. The reason why this condition is removed in Theorem 2.1 is that the same estimate of \(\nabla T(t,s)^{*}\) as in (2.52) is deduced over the whole space \(\mathbb{R}^{3}\), as mentioned above, rather than over the exterior domain \(\Omega\) alone. Nevertheless, as pointed out by Ervedoza, Hillairet and Lacave [9, remark after Corollary 3.10] as well as [10], \(\nabla T(t,s)^{*}\) is not exactly the adjoint of the operator \(T(t,s)\mathbb{P}\mathrm{div}\) (unless \(\rho=1\)). This is because several duality relations hold true with respect to the pairing (2.11) involving the constant weight \(\rho\), and this is why one needs the vanishing normal trace \((F\nu)|_{\partial\Omega}=0\) for (2.53) in order that the boundary integral disappears by integration by parts even though \(\rho\neq 1\). Notice that the traces \((F\nu)|_{\partial\Omega}\) from both sides coincide since \(\mathrm{div}\ F\in L^{p}(\mathbb{R}^{3})\) for some \(p\in(1,\infty)\)._ To proceed to the nonlinear problem (1.5), we do need the latter estimates (2.53)-(2.54) in Theorem 2.1 to deal with the nonlinear term \(\eta\cdot\nabla u\) among the four terms in (2.46) as in [9, 10]. The other terms can be discussed by use of (2.51) and (2.52) under somewhat stronger assumptions on \(\nabla u_{b}\) than (2.56) below; however, if one applies (2.53)-(2.54) partly to the linear term \((\eta-u)\cdot\nabla u_{b}\) in (2.46), then the following weaker additional assumption suffices: \[\nabla u_{b}\in L^{\infty}(\mathbb{R};\,L^{3}(\Omega))\cap C^{\tilde{\theta}}_{\mathrm{loc}}(\mathbb{R};\,L^{3}(\Omega))\quad\text{with some }\tilde{\theta}\in(0,1). \tag{2.56}\] Accordingly, in addition to (2.35), let us introduce the quantity \[\|U_{b}\|^{\prime}:=\|U_{b}\|+\sup_{t\in\mathbb{R}}\|\nabla u_{b}(t)\|_{3,\Omega} \tag{2.57}\] The Hölder seminorm of \(\nabla u_{b}(t)\) is not needed since the Hölder condition in (2.56) is used merely to show that a mild solution (solution to (5.1) below) becomes a strong solution to the initial value problem (2.44). The second main result reads **Theorem 2.2**.: _Suppose (2.33)-(2.34) and (2.56).
There exists a constant \(\alpha=\alpha(q_{0},\beta_{0},\theta)>0\) such that if \(\|U_{b}\|^{\prime}\leq\alpha\) as well as \([U_{b}]_{\theta}\leq\beta_{0}\), then the following statement holds, where \(\|U_{b}\|^{\prime}\) is given by (2.57) and \(\beta_{0}>0\) is arbitrary: There is a constant \(\delta=\delta(\alpha,\beta_{0},\theta)>0\) such that if \(U_{0}\in X_{3}(\mathbb{R}^{3})\) satisfies \(\|U_{0}\|_{3,\mathbb{R}^{3}}<\delta\), where \(U_{0}\) is given by (2.45), then problem (2.44) admits a unique strong solution_ \[U\in C([s,\infty);\,X_{3}(\mathbb{R}^{3}))\cap C((s,\infty);\,D_{3}(A))\cap C^{1}((s,\infty);\,X_{3}(\mathbb{R}^{3}))\] _which enjoys_ \[\begin{split}&\|u(t)\|_{q,\Omega}=o\big{(}(t-s)^{-1/2+3/2q}\big{)},\\ &\|\nabla u(t)\|_{r,\Omega}+|\eta(t)|+|\omega(t)|=o\big{(}(t-s)^{-1/2}\big{)}\end{split} \tag{2.58}\] _as \((t-s)\to\infty\) for every \(q\in[3,\infty]\) and \(r\in[3,\infty)\), where \((u,\eta,\omega)=i(U)\), see (2.7)._ **Remark 2.5**.: _The decay rate \((t-s)^{-1/2}\) for \(\|\nabla u(t)\|_{3,\Omega}\) in (2.58) is new even when \(U_{b}=0\), see [10], in which a weaker rate is deduced. This improvement is due to (5.15)._ **Remark 2.6**.: _In view of the proof, we see that the large time decay (2.58) with \(r\in[3,\infty)\) replaced by \(r\in[3,\sigma_{0*})\) of the mild solution can be obtained even if \(\nabla u_{b}\in L^{\infty}(\mathbb{R};\,L^{\sigma_{0}}(\Omega))\) with some \(\sigma_{0}\in(3/2,3]\) instead of (2.56), where \(1/\sigma_{0*}=1/\sigma_{0}-1/3\). This solution becomes a strong one with values in \(D_{3}(A)+D_{\sigma_{0}}(A)\) under the additional condition \(\nabla u_{b}\in C^{\tilde{\theta}}_{\mathrm{loc}}(\mathbb{R};\,L^{\sigma_{0}}(\Omega))\) with some \(\tilde{\theta}\in(3/2\sigma_{0}-1/2,1)\) as well as \(\theta\in(3/2\sigma_{0}-1/2,1)\), where \(\theta\) is given in (2.34)._

### Basic motions

In this subsection we briefly discuss the basic motions, specifically the self-propelled motions, in the literature. Those motions are stable as long as they are small enough when applying Theorem 2.2, but the details should be discussed elsewhere. By following the compactness argument due to Galdi [14, Theorem 5.1], who studied the steady problem attached to the body of arbitrary shape, it is possible to show the existence of a solution of the Leray class \(\nabla u_{b}\in L^{2}(\Omega)\) to the steady problem associated with (1.1) when \(u_{*}\) is independent of \(t\), is small in \(H^{1/2}(\partial\Omega)\) and satisfies the vanishing flux condition. If we assume further \(u_{*}\in H^{3/2}(\partial\Omega)\), then we have \(\nabla u_{b}\in H^{1}(\Omega)\subset L^{3}(\Omega)\), see (2.56), as well as \(u_{b}\in W^{1,6}(\Omega)\subset L^{\infty}(\Omega)\cap C^{1/2}(\overline{\Omega})\). The uniqueness of the solution is also available when the body is a ball, while this issue still remains open for the case of arbitrary shape. If, for instance, the steady rigid motion \(\eta_{b}+\omega_{b}\times x\) obtained in this way fulfills \(\eta_{b}\cdot\omega_{b}\neq 0\), then the result due to Galdi and Kyed [18] shows that the solution \(u_{b}\) enjoys a wake structure, yielding (2.33) with \(q_{0}\in(2,3)\) provided \(u_{*}\) is tangential to \(\partial\Omega\).
Actually, we are able to deduce even more; indeed, according to Theorems 5.1 and 5.2 of the same paper [18], we have the pointwise estimates

\[|u_{b}(x)|\leq C|x|^{-1},\qquad|\nabla u_{b}(x)|\leq C|x|^{-3/2} \tag{2.59}\]

for large \(|x|\), with further wake behavior, so that \(u_{b}\) and \(\nabla u_{b}\) decay even faster outside the wake region.

## 3 Oseen-structure evolution operator

Before analyzing the Oseen-structure operator (2.36)-(2.37), we begin with some preparatory results in the first three subsections: the decomposition (2.18) to eliminate the pressure, the justification of the monolithic formulation (of the resolvent problem), and the other formulation of the Stokes-structure operator (2.25) to derive the resolvent estimate (2.30). In those subsections, the shape of the rigid body is allowed to be arbitrary. Let \(B\) be a bounded domain with connected boundary of class \(C^{1,1}\), which is assumed to satisfy

\[B\subset B_{1},\qquad\int_{B}x\,dx=0,\]

and set

\[\Omega=\mathbb{R}^{3}\setminus\overline{B},\qquad m=\int_{B}\rho\,dx,\qquad J=\int_{B}\big{(}|x|^{2}\mathbb{I}-x\otimes x\big{)}\rho\,dx.\]

In subsection 3.4 we show that the Oseen-structure operator generates an evolution operator on the space \(X_{q}(\mathbb{R}^{3})\) and, successively in subsection 3.5, we deduce smoothing rates of the evolution operator near the initial time. Such rates for the associated pressure are studied in subsection 3.6; the analysis of the pressure is a rather technical part of this paper. Subsection 3.7 is devoted to the investigation of the backward problem for the adjoint system.

### Decomposition of \(L^{q}_{R}(\mathbb{R}^{3})\)

In this subsection we establish the decomposition (2.18). To this end, let us start with the following preparatory lemma on the space \(L^{q}_{R}(\mathbb{R}^{3})\), see (2.5).

**Lemma 3.1**.: _Let \(1<q<\infty\). Then the Banach space \(L^{q}_{R}(\mathbb{R}^{3})\) is reflexive. Furthermore, the class_

\[\mathcal{E}_{R}(\mathbb{R}^{3}):=\{U\in C^{\infty}_{0}(\mathbb{R}^{3})^{3};\;DU=0\;\text{in}\;B\} \tag{3.1}\]

_is dense in \(L^{q}_{R}(\mathbb{R}^{3})\)._

Proof.: Set

\[J^{q}(\mathbb{R}^{3}):=\Big{\{}U\in L^{q}(\mathbb{R}^{3})^{3};\;U|_{\Omega}=0,\;\int_{B}U\rho\,dx=0,\;\int_{B}x\times U\rho\,dx=0\Big{\}}.\]

We first show that

\[L^{q}(\mathbb{R}^{3})^{3}=L^{q}_{R}(\mathbb{R}^{3})\oplus J^{q}(\mathbb{R}^{3}). \tag{3.2}\]

It is readily seen that \(L^{q}_{R}(\mathbb{R}^{3})\cap J^{q}(\mathbb{R}^{3})=\{0\}\).
Given \(U\in L^{q}(\mathbb{R}^{3})^{3}\), we set

\[V=U\chi_{\Omega}+(\eta+\omega\times x)\chi_{B}\]

with

\[\eta=\frac{1}{m}\int_{B}U\rho\,dx,\qquad\omega=J^{-1}\int_{B}x\times U\rho\,dx.\]

Then we observe

\[V\in L^{q}_{R}(\mathbb{R}^{3}),\qquad U-V\in J^{q}(\mathbb{R}^{3}),\]

yielding the decomposition (3.2). We next show that \(J^{q}(\mathbb{R}^{3})^{\perp}\subset L^{q^{\prime}}_{R}(\mathbb{R}^{3})\), where the annihilator is considered with respect to \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{3},\rho}\) (but the constant weight \(\rho\) does not play any role here). Suppose that \(U\in L^{q^{\prime}}(\mathbb{R}^{3})^{3}\) satisfies

\[\langle U,\Psi\rangle_{\mathbb{R}^{3},\rho}=0,\qquad\forall\Psi\in J^{q}(\mathbb{R}^{3}). \tag{3.3}\]

Let \(\Phi\in C_{0}^{\infty}(B)^{3\times 3}\); then we find

\[\int_{B}\operatorname{div}\,\left(\Phi+\Phi^{\top}\right)dx=0,\qquad\int_{B}x\times\operatorname{div}\,\left(\Phi+\Phi^{\top}\right)dx=0,\]

so that \(\operatorname{div}\,\left(\Phi+\Phi^{\top}\right)\in J^{q}(\mathbb{R}^{3})\) by setting it zero outside \(B\). By (3.3) we are led to

\[2\rho\langle DU,\Phi\rangle_{B}=-\int_{B}U\cdot\operatorname{div}\,\left(\Phi+\Phi^{\top}\right)\rho\,dx=-\langle U,\operatorname{div}\,\left(\Phi+\Phi^{\top}\right)\rangle_{\mathbb{R}^{3},\rho}=0\]

for all \(\Phi\in C_{0}^{\infty}(B)^{3\times 3}\), yielding \(U|_{B}\in\operatorname{RM}\). We thus obtain the desired inclusion relation. Since the opposite inclusion is obvious, we infer \(J^{q}(\mathbb{R}^{3})^{\perp}=L^{q^{\prime}}_{R}(\mathbb{R}^{3})\). This combined with (3.2) leads us to

\[L^{q}_{R}(\mathbb{R}^{3})^{*}=\left[L^{q}(\mathbb{R}^{3})^{3}/J^{q}(\mathbb{R}^{3})\right]^{*}=J^{q}(\mathbb{R}^{3})^{\perp}=L^{q^{\prime}}_{R}(\mathbb{R}^{3}),\]

which implies that \(L^{q}_{R}(\mathbb{R}^{3})\) is reflexive.

Finally, let us show that the class (3.1) is dense in \(L^{q}_{R}(\mathbb{R}^{3})\). Given \(U\in L^{q}_{R}(\mathbb{R}^{3})\), we set \(\eta+\omega\times x=U|_{B}\). Let us take the following lift of the rigid motion:

\[\begin{split}\ell(\eta,\omega)(x)&:=\frac{1}{2}\operatorname{rot}\,\left(\phi(x)\big{(}\eta\times x-|x|^{2}\omega\big{)}\right)\\ &=\phi(x)(\eta+\omega\times x)+\nabla\phi(x)\times(\eta\times x-|x|^{2}\omega),\end{split} \tag{3.4}\]

where \(\phi\) is a cut-off function satisfying

\[\phi\in C_{0}^{\infty}(B_{3}),\qquad 0\leq\phi\leq 1,\qquad\phi=1\,\text{ on }B_{2}. \tag{3.5}\]

Then we have

\[\ell(\eta,\omega)\in C_{0}^{\infty}(B_{3}),\qquad\operatorname{div}\,\ell(\eta,\omega)=0,\qquad\ell(\eta,\omega)|_{\overline{B}}=\eta+\omega\times x.\]

Since the lift (3.4) will also be used later, it is convenient to introduce the lifting operator

\[\ell:(\eta,\omega)\mapsto\ell(\eta,\omega). \tag{3.6}\]

Set \(U_{0}=\ell(\eta,\omega)\). One can take \(u_{j}\in C_{0}^{\infty}(\Omega)\) satisfying \(\|u_{j}-(U-U_{0})\|_{q,\Omega}\to 0\) as \(j\to\infty\); we denote again by \(u_{j}\) its extension by zero outside \(\Omega\). We then put \(V_{j}:=u_{j}+U_{0}\in C_{0}^{\infty}(\mathbb{R}^{3})\), which attains \(\eta+\omega\times x\) on \(B\) (thus \(V_{j}\in\mathcal{E}_{R}(\mathbb{R}^{3})\)) and satisfies \(\|V_{j}-U\|_{q,\mathbb{R}^{3}}\to 0\) as \(j\to\infty\), leading to the desired denseness. The proof is complete.

The decomposition (3.7) below was proved by Silvestre [43, Theorem 3.2] when \(q=2\). Proposition 3.1 may be regarded as a generalization of her result, obtained by following the idea due to Wang and Xin [51, Theorem 2.2], who established the other decomposition (2.21) for general \(q\in(1,\infty)\).
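Before proceeding, let us record for the reader's convenience the elementary computation behind the claim \(U-V\in J^{q}(\mathbb{R}^{3})\) in the proof of Lemma 3.1; it is a routine check based on \(\int_{B}x\rho\,dx=0\) and on the identity \(x\times(\omega\times x)=\big{(}|x|^{2}\mathbb{I}-x\otimes x\big{)}\omega\), which yields \(\int_{B}x\times(\omega\times x)\rho\,dx=J\omega\):

\[\int_{B}(U-V)\rho\,dx=\int_{B}U\rho\,dx-m\eta-\omega\times\int_{B}x\rho\,dx=m\eta-m\eta=0,\]

\[\int_{B}x\times(U-V)\rho\,dx=\int_{B}x\times U\rho\,dx-\left(\int_{B}x\rho\,dx\right)\times\eta-\int_{B}x\times(\omega\times x)\rho\,dx=J\omega-J\omega=0.\]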
It should be emphasized that the projection \(\mathbb{P}\) associated with the decomposition (3.7) is symmetric with respect to \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{3},\rho}\) with the constant weight \(\rho\), as in (3.10) below, unlike the projection \(\mathbb{Q}\) associated with the other decomposition (2.21). This is in fact the reason why we need the following proposition in the present study.

**Proposition 3.1**.: _Let \(1<q<\infty\). Let \(X_{q}(\mathbb{R}^{3})\) and \(Z_{q}(\mathbb{R}^{3})\) be the spaces given respectively by (2.6) and (2.19). Then the class \(\mathcal{E}(\mathbb{R}^{3})\), see (2.15), is dense in \(X_{q}(\mathbb{R}^{3})\) and_

\[L^{q}_{R}(\mathbb{R}^{3})=X_{q}(\mathbb{R}^{3})\oplus Z_{q}(\mathbb{R}^{3}), \tag{3.7}\]

\[X_{q}(\mathbb{R}^{3})^{*}=Z_{q}(\mathbb{R}^{3})^{\perp}=X_{q^{\prime}}(\mathbb{R}^{3}),\qquad Z_{q}(\mathbb{R}^{3})^{*}=X_{q}(\mathbb{R}^{3})^{\perp}=Z_{q^{\prime}}(\mathbb{R}^{3}), \tag{3.8}\]

_where \(Z_{q}(\mathbb{R}^{3})^{\perp}\) and \(X_{q}(\mathbb{R}^{3})^{\perp}\) stand for the annihilators with respect to \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{3},\rho}\), see (2.11)._

_Let \(\mathbb{P}=\mathbb{P}_{q}\) be the projection from \(L^{q}_{R}(\mathbb{R}^{3})\) onto \(X_{q}(\mathbb{R}^{3})\); then_

\[\|\mathbb{P}U\|_{q,\mathbb{R}^{3}}\leq C\|U\|_{q,\mathbb{R}^{3}} \tag{3.9}\]

_for all \(U\in L^{q}_{R}(\mathbb{R}^{3})\) with some constant \(C>0\), as well as the relation \(\mathbb{P}^{*}_{q}=\mathbb{P}_{q^{\prime}}\) in the sense that_

\[\langle\mathbb{P}_{q}U,V\rangle_{\mathbb{R}^{3},\rho}=\langle U,\mathbb{P}_{q^{\prime}}V\rangle_{\mathbb{R}^{3},\rho} \tag{3.10}\]

_for all \(U\in L^{q}_{R}(\mathbb{R}^{3})\) and \(V\in L^{q^{\prime}}_{R}(\mathbb{R}^{3})\). If in particular \(U\in L^{q}_{R}(\mathbb{R}^{3})\) satisfies \(u=U|_{\Omega}\in W^{1,q}(\Omega)\), then we have \((\mathbb{P}U)|_{\Omega}\in W^{1,q}(\Omega)\) and_

\[\|\nabla\mathbb{P}U\|_{q,\Omega}\leq C\big{(}\|U\|_{q,\mathbb{R}^{3}}+\|\nabla u\|_{q,\Omega}\big{)} \tag{3.11}\]

_with some constant \(C>0\) independent of \(U\)._

Proof.: Step 1. We first verify that \(X_{q}(\mathbb{R}^{3})\cap Z_{q}(\mathbb{R}^{3})=\{0\}\). Suppose \(U\in X_{q}(\mathbb{R}^{3})\cap Z_{q}(\mathbb{R}^{3})\) and set

\[U|_{\Omega}=\nabla p,\qquad U|_{B}=\eta+\omega\times x;\]

then we have

\[\Delta p=0\quad\text{in }\Omega,\qquad\partial_{\nu}p=\nu\cdot(\eta+\omega\times x)\quad\text{on }\partial\Omega \tag{3.12}\]

with

\[\eta=\frac{-1}{m}\int_{\partial\Omega}p\nu\,d\sigma,\qquad\omega=-J^{-1}\int_{\partial\Omega}x\times(p\nu)\,d\sigma. \tag{3.13}\]

It is observed that \(\nabla p\in L^{2}_{\text{loc}}(\overline{\Omega})\) even though \(q\) is close to \(1\), and that \(p-p_{\infty}\) with some constant \(p_{\infty}\in\mathbb{R}\) (resp. \(\nabla p\)) behaves like the fundamental solution \(\frac{-1}{4\pi|x|}\) (resp. its gradient) of the Laplacian at infinity since \(\nabla p\in L^{q}(\Omega)\); in fact, they go to zero even faster because \(\int_{\partial\Omega}\partial_{\nu}p\,d\sigma\,(=0)\) is the coefficient of the leading term of the asymptotic expansion at infinity. This together with (3.12)-(3.13) justifies the equality (3.15) below when multiplying the Laplace equation in (3.12) by \((p-p_{\infty})\phi_{R}\) with

\[\phi_{R}(x):=\phi(x/R), \tag{3.14}\]

where \(\phi\) is fixed as in (3.5), and then letting \(R\to\infty\).
Indeed, since

\[\lim_{R\to\infty}\int_{2R<|x|<3R}|\nabla p||p-p_{\infty}||\nabla\phi_{R}|\,dx=0,\]

we obtain

\[\int_{\Omega}|\nabla p|^{2}\,dx=\int_{\partial\Omega}(\partial_{\nu}p)(p-p_{\infty})\,d\sigma=\int_{\partial\Omega}\nu\cdot(\eta+\omega\times x)p\,d\sigma=-m|\eta|^{2}-(J\omega)\cdot\omega. \tag{3.15}\]

This implies that \(\nabla p=0\) as well as \(\eta=\omega=0\), leading to \(U=0\).

Step 2. Let us show that the class \(\mathcal{E}(\mathbb{R}^{3})\) is dense in \(X_{q}(\mathbb{R}^{3})\). Given \(U\in X_{q}(\mathbb{R}^{3})\), we set \(\eta+\omega\times x=U|_{B}\) and proceed as in the latter half of the proof of Lemma 3.1. Using the same lifting function \(U_{0}=\ell(\eta,\omega)\in C^{\infty}_{0,\sigma}(B_{3})\) given by (3.4), we see that \((U-U_{0})|_{\Omega}\) belongs to the space

\[L^{q}_{\sigma}(\Omega)=\{u\in L^{q}(\Omega);\ \mathrm{div}\ u=0\ \mathrm{in}\ \Omega,\ \nu\cdot u|_{\partial\Omega}=0\}. \tag{3.16}\]

Since \(C^{\infty}_{0,\sigma}(\Omega)\) is dense in \(L^{q}_{\sigma}(\Omega)\) ([16, 38, 45]), the proof of the desired denseness is completed in a manner similar to the last paragraph of the proof of Lemma 3.1. This was already shown in [10, Appendix A.2] within the context of the other decomposition (2.21).

We next prove that \(\mathcal{E}(\mathbb{R}^{3})^{\perp}=X_{q}(\mathbb{R}^{3})^{\perp}=Z_{q^{\prime}}(\mathbb{R}^{3})\) with respect to \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{3},\rho}\) by following the argument in [43] (in which the case \(q=2\) is discussed). Let \(U\in\mathcal{E}(\mathbb{R}^{3})\) and \(V\in Z_{q^{\prime}}(\mathbb{R}^{3})\), with \(i(U)=(u,\eta_{u},\omega_{u})\) and \(i(V)=(\nabla p,\eta,\omega)\), where \(\eta\) and \(\omega\) are specified as in (3.13). Then it is readily seen that

\[\langle U,V\rangle_{\mathbb{R}^{3},\rho}=\int_{\Omega}\mathrm{div}\ (up)\,dx-\eta_{u}\cdot\int_{\partial\Omega}p\nu\,d\sigma-\omega_{u}\cdot\int_{\partial\Omega}x\times(p\nu)\,d\sigma=0.\]

This relation holds even for all \(U\in X_{q}(\mathbb{R}^{3})\) and \(V\in Z_{q^{\prime}}(\mathbb{R}^{3})\) thanks to the denseness observed above. We are thus led to \(Z_{q^{\prime}}(\mathbb{R}^{3})\subset X_{q}(\mathbb{R}^{3})^{\perp}\). Conversely, let \(V\in L^{q^{\prime}}_{R}(\mathbb{R}^{3})\) satisfy \(\langle\Phi,V\rangle_{\mathbb{R}^{3},\rho}=0\) for all \(\Phi\in\mathcal{E}(\mathbb{R}^{3})\). Set \((v,\eta,\omega)=i(V)\) and \((\phi,\eta_{\phi},\omega_{\phi})=i(\Phi)\). Choosing, in particular, \(\Phi\) with

\[\eta_{\phi}=\omega_{\phi}=0,\qquad\phi\in C^{\infty}_{0,\sigma}(\Omega), \tag{3.17}\]

we find that \(v=\nabla p\) for some \(p\in L^{q^{\prime}}_{\mathrm{loc}}(\overline{\Omega})\). This implies that

\[m\eta_{\phi}\cdot\eta+\omega_{\phi}\cdot(J\omega)=-\int_{\Omega}\mathrm{div}\ (\phi p)\,dx=-\eta_{\phi}\cdot\int_{\partial\Omega}p\nu\,d\sigma-\omega_{\phi}\cdot\int_{\partial\Omega}x\times(p\nu)\,d\sigma\]

for all \(\Phi\in\mathcal{E}(\mathbb{R}^{3})\). Since \(\eta_{\phi}\) and \(\omega_{\phi}\) are arbitrary, we conclude

\[\eta=\frac{-1}{m}\int_{\partial\Omega}p\nu\,d\sigma,\qquad\omega=-J^{-1}\int_{\partial\Omega}x\times(p\nu)\,d\sigma,\]

that is, \(V\in Z_{q^{\prime}}(\mathbb{R}^{3})\). This proves \(\mathcal{E}(\mathbb{R}^{3})^{\perp}\subset Z_{q^{\prime}}(\mathbb{R}^{3})\) and, therefore,

\[\mathcal{E}(\mathbb{R}^{3})^{\perp}=X_{q}(\mathbb{R}^{3})^{\perp}=Z_{q^{\prime}}(\mathbb{R}^{3}),\qquad Z_{q^{\prime}}(\mathbb{R}^{3})^{\perp}=X_{q}(\mathbb{R}^{3}).
\tag{3.18}\]

In fact, the latter follows from the former with \(X_{q}(\mathbb{R}^{3})^{\perp\perp}=X_{q}(\mathbb{R}^{3})\), since \(X_{q}(\mathbb{R}^{3})\) is closed in the reflexive space \(L^{q}_{R}(\mathbb{R}^{3})\), see Lemma 3.1. Now, (3.18) immediately leads to (3.7) when \(q=2\) ([43, Theorem 3.2]).

Step 3. To complete the proof of (3.7) for the case \(q\neq 2\), we make use of the case \(q=2\). Given \(U\in\mathcal{E}_{R}(\mathbb{R}^{3})\), see (3.1), there is a unique pair of \(V\in X_{2}(\mathbb{R}^{3})\) and \(W\in Z_{2}(\mathbb{R}^{3})\) such that \(U=V+W\). By following the argument developed by Wang and Xin [51, Theorem 2.2], we will prove that \(V\in X_{q}(\mathbb{R}^{3})\) and \(W\in Z_{q}(\mathbb{R}^{3})\) along with the estimate (3.25) below, where \((\nabla p,\eta,\omega)=i(W)\), see (2.7). Set also \((u,\eta_{u},\omega_{u})=i(U)\) and \((v,\eta_{v},\omega_{v})=i(V)\). Since \(\nu\cdot v|_{\partial\Omega}=\nu\cdot(\eta_{v}+\omega_{v}\times x)\), see (2.8), we have

\[\Delta p=\mathrm{div}\ u\quad\mathrm{in}\ \Omega,\]

\[\nu\cdot(\nabla p-u)=-\nu\cdot(\eta_{v}+\omega_{v}\times x)\quad\mathrm{on}\ \partial\Omega.\]

Let \(\ell\) be the lifting operator given by (3.4) and (3.6). Then the problem above is rewritten as

\[\begin{array}{l}\Delta p=\mbox{div}\,\left(u-\ell(\eta_{v},\omega_{v})\right)\quad\mbox{in}\,\,\Omega,\\ \nu\cdot(\nabla p-u)=-\nu\cdot\ell(\eta_{v},\omega_{v})\quad\mbox{on}\,\,\partial\Omega.\end{array} \tag{3.19}\]

One is then able to apply the theory of the Neumann problem in exterior domains, see [38, 45], to (3.19) to infer

\[\|\nabla p\|_{q,\Omega}\leq C\big{(}\|u\|_{q,\Omega}+|\eta_{v}|+|\omega_{v}|\big{)}\leq C\big{(}\|U\|_{q,\mathbb{R}^{3}}+|\eta|+|\omega|\big{)}, \tag{3.20}\]

where (2.12) is also taken into account. Let us single out a solution \(p\) in such a way that \(\int_{\Omega_{3}}p\,dx=0\) to obtain

\[\|p\|_{q,\Omega_{3}}\leq C\|\nabla p\|_{q,\Omega}. \tag{3.21}\]

We show that

\[|\eta|+|\omega|\leq C\|U\|_{q,\mathbb{R}^{3}} \tag{3.22}\]

for all \(U\in\mathcal{E}_{R}(\mathbb{R}^{3})\). Suppose the contrary; then one can take a sequence \(\{U_{k}\}\subset\mathcal{E}_{R}(\mathbb{R}^{3})\) and the corresponding

\[\begin{array}{l}(\nabla p_{k})\chi_{\Omega}+(\eta_{k}+\omega_{k}\times x)\chi_{B}\in Z_{2}(\mathbb{R}^{3}),\qquad\int_{\Omega_{3}}p_{k}\,dx=0,\\ \eta_{k}=\frac{-1}{m}\int_{\partial\Omega}p_{k}\nu\,d\sigma,\qquad\omega_{k}=-J^{-1}\int_{\partial\Omega}x\times(p_{k}\nu)\,d\sigma,\\ v_{k}\chi_{\Omega}+(\eta_{v_{k}}+\omega_{v_{k}}\times x)\chi_{B}\in X_{2}(\mathbb{R}^{3}),\end{array} \tag{3.23}\]

such that

\[\lim_{k\to\infty}\|U_{k}\|_{q,\mathbb{R}^{3}}=0,\qquad|\eta_{k}|+|\omega_{k}|=1.
\tag{3.24}\]

By virtue of (3.20)-(3.21) together with (3.24), there are \(\eta,\,\omega\in\mathbb{R}^{3}\) and \(p\in\widehat{W}^{1,q}_{(0)}(\Omega)\) as well as a subsequence, still denoted by the same symbol, such that, along the subsequence,

\[\begin{array}{l}\mbox{w-}\lim_{k\to\infty}\nabla p_{k}=\nabla p\quad\mbox{in}\,\,L^{q}(\Omega),\qquad\lim_{k\to\infty}\|p_{k}-p\|_{q,\Omega_{3}}=0,\\ \lim_{k\to\infty}\eta_{k}=\eta,\qquad\lim_{k\to\infty}\omega_{k}=\omega.\end{array}\]

By (3.23) and by the trace estimate

\[\|p_{k}-p\|_{q,\partial\Omega}\leq C\|\nabla p_{k}-\nabla p\|_{q,\Omega_{3}}^{1/q}\|p_{k}-p\|_{q,\Omega_{3}}^{1-1/q}+\|p_{k}-p\|_{q,\Omega_{3}},\]

we obtain

\[\eta=\frac{-1}{m}\int_{\partial\Omega}p\nu\,d\sigma,\qquad\omega=-J^{-1}\int_{\partial\Omega}x\times(p\nu)\,d\sigma.\]

Since \(p_{k}\) obeys (3.19) with \(u_{k}=U_{k}|_{\Omega}\), we find \(\Delta p=0\) in \(\Omega\), so that \(\partial_{\nu}p|_{\partial\Omega}\) makes sense, and

\[\langle\nu\cdot(\nabla p_{k}-u_{k}-\nabla p),\psi\rangle_{\partial\Omega}=\langle\nabla p_{k}-u_{k}-\nabla p,\nabla\psi\rangle_{\Omega_{3}}\to 0,\]

\[-\nu\cdot(\eta_{v_{k}}+\omega_{v_{k}}\times x)=\nu\cdot(\eta_{k}+\omega_{k}\times x-\eta_{u_{k}}-\omega_{u_{k}}\times x)\to\nu\cdot(\eta+\omega\times x),\]

as \(k\to\infty\) for all \(\psi\in C^{\infty}_{0}(B_{3})\) on account of (3.24), where \(\eta_{u_{k}}+\omega_{u_{k}}\times x=U_{k}|_{B}\). As a consequence, \(p,\,\eta\) and \(\omega\) solve (3.12)-(3.13), leading to \(\eta=\omega=0\) as explained in Step 1. This contradicts \(|\eta|+|\omega|=1\) and thus concludes (3.22), which combined with (3.20) proves

\[\|V\|_{q,\mathbb{R}^{3}}+\|W\|_{q,\mathbb{R}^{3}}\sim\|V\|_{q,\mathbb{R}^{3}}+\|\nabla p\|_{q,\Omega}+|\eta|+|\omega|\leq C\|U\|_{q,\mathbb{R}^{3}} \tag{3.25}\]

for all \(U\in\mathcal{E}_{R}(\mathbb{R}^{3})\). With (3.25) at hand, given \(U\in L^{q}_{R}(\mathbb{R}^{3})\), one can construct \(V\in X_{q}(\mathbb{R}^{3})\) and \(W\in Z_{q}(\mathbb{R}^{3})\) such that \(U=V+W\) along with the same estimate (3.25), since \(\mathcal{E}_{R}(\mathbb{R}^{3})\) is dense in \(L^{q}_{R}(\mathbb{R}^{3})\) by Lemma 3.1. This completes the proof of (3.7) and (3.9). From (3.7) we find

\[X_{q}(\mathbb{R}^{3})^{*}=\big{[}L^{q}_{R}(\mathbb{R}^{3})/Z_{q}(\mathbb{R}^{3})\big{]}^{*}=Z_{q}(\mathbb{R}^{3})^{\perp},\qquad Z_{q}(\mathbb{R}^{3})^{*}=\big{[}L^{q}_{R}(\mathbb{R}^{3})/X_{q}(\mathbb{R}^{3})\big{]}^{*}=X_{q}(\mathbb{R}^{3})^{\perp},\]

which along with (3.18) implies (3.8) and (3.10) as well.

Step 4. Finally, let us show (3.11) for \(U\in L^{q}_{R}(\mathbb{R}^{3})\) with \((u,\eta_{u},\omega_{u})=i(U)\), see (2.7), when additionally assuming \(u\in W^{1,q}(\Omega)\). Since \(\mathbb{P}U=U-W\) with

\[W=(\nabla p)\chi_{\Omega}+(\eta+\omega\times x)\chi_{B}\in Z_{q}(\mathbb{R}^{3}),\]

it suffices to prove that

\[\|\nabla^{2}p\|_{q,\Omega}\leq C\big{(}\|U\|_{q,\mathbb{R}^{3}}+\|\nabla u\|_{q,\Omega}\big{)}, \tag{3.26}\]

where \(p\) should obey

\[\Delta p=\mathrm{div}\ u\quad\text{in }\Omega,\]

\[\partial_{\nu}p=\nu\cdot(u-\eta_{u}-\omega_{u}\times x+\eta+\omega\times x)\quad\text{on }\partial\Omega,\]

with \(\eta\) and \(\omega\) given by (3.13).
As in (3.19), we utilize the lift

\[U_{0}:=\ell(\eta_{u}-\eta,\omega_{u}-\omega)\]

to rewrite the problem above as

\[\Delta p=\mathrm{div}\ \big{(}u-U_{0}\big{)}\quad\text{in }\Omega,\]

\[\partial_{\nu}p=\nu\cdot\big{(}u-U_{0}\big{)}\quad\text{on }\partial\Omega.\]

By [23, Lemma 2.3] we have

\[\|\nabla^{2}p\|_{q,\Omega}\leq C\|u-U_{0}\|_{W^{1,q}(\Omega)}\leq C\big{(}\|u\|_{W^{1,q}(\Omega)}+|\eta_{u}|+|\omega_{u}|+|\eta|+|\omega|\big{)}\leq C\big{(}\|U\|_{q,\mathbb{R}^{3}}+\|\nabla u\|_{q,\Omega}+|\eta|+|\omega|\big{)},\]

which combined with (3.25) concludes (3.26). The proof is complete.

### (2.28) is equivalent to (2.27)

This subsection claims that the resolvent equation (2.28) in \(X_{q}(\mathbb{R}^{3})\) with the Stokes-structure operator \(A\) defined by (2.25) is equivalent to the boundary value problem (2.27). This can be proved in the same way as in [10, Proposition 3.4] on the same issue for (2.26) with the other operator \(\widetilde{A}\), see also [46]; however, one has to prove the equivalence independently of the existing literature on the operator \(\widetilde{A}\). In fact, with the following proposition at hand, we will verify \(A=\widetilde{A}\) afterwards by use of the uniqueness of solutions to the boundary value problem (2.27).

**Proposition 3.2**.: _Let \(1<q<\infty\) and \(\lambda\in\mathbb{C}\). Suppose that \(F\in X_{q}(\mathbb{R}^{3})\) and \((f,\kappa,\mu)=i(F)\) through (2.7). If_

\[u\in W^{2,q}(\Omega),\quad p\in\widehat{W}^{1,q}(\Omega),\quad(\eta,\omega)\in\mathbb{C}^{3}\times\mathbb{C}^{3} \tag{3.27}\]

_fulfill (2.27), then (2.28) holds with \(U=u\chi_{\Omega}+(\eta+\omega\times x)\chi_{B}\). Conversely, assume that \(U\in D_{q}(A)\) satisfies (2.28). Then there exists a pressure \(p\in\widehat{W}^{1,q}(\Omega)\) such that \((u,\eta,\omega)=i(U)\) together with \(p\) enjoys (2.27)._

Proof.: Concerning the first half, it is obvious that

\[U=u\chi_{\Omega}+(\eta+\omega\times x)\chi_{B}\in D_{q}(A),\]

\[(\nabla p)\chi_{\Omega}+\left[\frac{-1}{m}\int_{\partial\Omega}p\nu\,d\sigma+\left(-J^{-1}\int_{\partial\Omega}y\times(p\nu)\,d\sigma\right)\times x\right]\chi_{B}\in Z_{q}(\mathbb{R}^{3}),\]

and that, in view of (2.18)-(2.19), applying the projection \(\mathbb{P}\) to (2.27) yields (2.28). To show the second half, suppose that \(U\in D_{q}(A)\) fulfills (2.28); then \(u\) possesses the desired regularity together with the boundary condition \(u|_{\partial\Omega}=\eta+\omega\times x\). We also find

\[\langle(\lambda+A)U,\Phi\rangle_{\mathbb{R}^{3},\rho}=\langle F,\Phi\rangle_{\mathbb{R}^{3},\rho},\]

which is rewritten as

\[\begin{split}&\lambda\langle u,\phi\rangle_{\Omega}+\lambda m\eta\cdot\eta_{\phi}+\lambda(J\omega)\cdot\omega_{\phi}\\ &\quad-\langle\Delta u,\phi\rangle_{\Omega}+\int_{\partial\Omega}(2Du)\nu\,d\sigma\cdot\eta_{\phi}+\int_{\partial\Omega}y\times(2Du)\nu\,d\sigma\cdot\omega_{\phi}\\ &=\langle f,\phi\rangle_{\Omega}+m\kappa\cdot\eta_{\phi}+(J\mu)\cdot\omega_{\phi}\end{split} \tag{3.28}\]

for all \(\Phi\in\mathcal{E}(\mathbb{R}^{3})\) with \((\phi,\eta_{\phi},\omega_{\phi})=i(\Phi)\), on account of (2.11) and (3.10). Let us, in particular, choose (3.17); then we get

\[\langle\lambda u-\Delta u-f,\phi\rangle_{\Omega}=0\]

for all \(\phi\in C^{\infty}_{0,\sigma}(\Omega)\). Since \(\lambda u-\Delta u-f\in L^{q}(\Omega)\), there is a function \(p\in L^{q}_{\mathrm{loc}}(\overline{\Omega})\) with \(\nabla p\in L^{q}(\Omega)\) such that

\[\lambda u-\Delta u-f=-\nabla p\]

in \(\Omega\).
By taking this equation into account in (3.28), we are led to

\[\left(\lambda m\eta+\int_{\partial\Omega}(2Du)\nu\,d\sigma-\int_{\partial\Omega}p\nu\,d\sigma-m\kappa\right)\cdot\eta_{\phi}+\left(\lambda J\omega+\int_{\partial\Omega}y\times(2Du)\nu\,d\sigma-\int_{\partial\Omega}y\times(p\nu)\,d\sigma-J\mu\right)\cdot\omega_{\phi}=0\]

for all \(\eta_{\phi}\), \(\omega_{\phi}\in\mathbb{C}^{3}\), which yields the equations for the rigid body in (2.27). The proof is complete.

**Proposition 3.3**.: _The operator \(A\) defined by (2.25) coincides with \(\widetilde{A}\), where the latter operator is defined by (2.25) in which \(\mathbb{P}\) is replaced by the other projection \(\mathbb{Q}:L^{q}(\mathbb{R}^{3})\to X_{q}(\mathbb{R}^{3})\) associated with (2.21)._

Proof.: By Proposition 3.2 and by [10, Proposition 3.4], both equations (2.26) and (2.28) are equivalent to the boundary value problem (2.27). Hence, it suffices to show that the only solution to (2.27) with \(\lambda=1\) and \(F=0\) within the class (3.27) is the trivial one. In fact, given \(U\in D_{q}(A)\), one can see from [10, Theorem 6.1] that there is a unique \(\widetilde{U}\in D_{q}(\widetilde{A})=D_{q}(A)\) satisfying \((1+\widetilde{A})\widetilde{U}=(1+A)U\) in \(X_{q}(\mathbb{R}^{3})\). Then the uniqueness for (2.27) (which we show below) implies that \(\widetilde{U}=U\) and, thereby, \(\widetilde{A}U=\widetilde{A}\widetilde{U}=AU\).

Consider the problem (2.27) with \(\lambda=1,\,f=0\) and \(\kappa=\mu=0\). Then we have \(u\in W^{1,2}_{\mathrm{loc}}(\overline{\Omega})\) even if \(q\) is close to \(1\). Moreover, from (3.27) it follows that \(\{u,p-p_{\infty}\}\) with some constant \(p_{\infty}\in\mathbb{R}\), as well as \(\nabla u\), behaves like the fundamental solution of the Stokes resolvent system and its gradient, respectively, even though \(q\) is large. Let \(\phi_{R}\) be the same cut-off function as in (3.14). Computing

\[0=\int_{\Omega}\big{(}u-\Delta u+\nabla(p-p_{\infty})\big{)}\cdot u\phi_{R}\,dx\]

and then letting \(R\to\infty\), where

\[\lim_{R\to\infty}\int_{2R<|x|<3R}|S(u,p-p_{\infty})||u||\nabla\phi_{R}|\,dx=0,\]

we are led to

\[\|u\|^{2}_{2,\Omega}+m|\eta|^{2}+(J\omega)\cdot\omega+2\|Du\|^{2}_{2,\Omega}=0,\]

which concludes that \(u=0\) and \(\eta=\omega=0\). The proof is complete.

### Stokes-structure resolvent

Estimate (2.30) for large \(|\lambda|\) is established in [10, Proposition 5.1]; the authors of [10], however, omit the proof since it is a slight variation of the proof given by Maity and Tucsnak [35, Theorem 3.1] on the same issue for the Stokes-structure system in a bounded container. In fact, their idea, based on the reformulation below rather than on (2.28) or (2.26), works well, but the variation needs a couple of nontrivial modifications since \(\Omega\) is unbounded. For completeness, this subsection is devoted to the details of the reconstruction of the proof of (2.30) for large \(|\lambda|\). Once we have that, a contradiction argument performed in [10, Section 6] leads to (2.30) even for \(\lambda\in\Sigma_{\varepsilon}\) close to the origin \(\lambda=0\). The reason why the details are provided here is that we do need a representation of the resolvent, see (3.48) together with (3.44) below, to deduce a useful estimate of the associated pressure near the initial time in subsection 3.6.
The underlying space of the other formulation of the resolvent problem (2.27) is

\[Y_{q}:=L^{q}_{\sigma}(\Omega)\times\mathbb{C}^{3}\times\mathbb{C}^{3},\]

where \(L^{q}_{\sigma}(\Omega)\), given by (3.16), is the standard underlying space when considering the Stokes resolvent in the exterior domain \(\Omega\). By \(P_{\Omega}:L^{q}(\Omega)\to L^{q}_{\sigma}(\Omega)\) we denote the classical Fujita-Kato projection associated with the Helmholtz decomposition [38, 45]

\[L^{q}(\Omega)=L^{q}_{\sigma}(\Omega)\oplus\{\nabla p\in L^{q}(\Omega);\;p\in\widehat{W}^{1,q}(\Omega)\}.\]

Then the Stokes operator \(A_{\Omega}\) in exterior domains is defined by

\[D_{q}(A_{\Omega})=L^{q}_{\sigma}(\Omega)\cap W^{1,q}_{0}(\Omega)\cap W^{2,q}(\Omega),\qquad A_{\Omega}=-P_{\Omega}\Delta.\]

Let us reformulate the resolvent system (2.27) by following the procedure due to Maity and Tucsnak [35]. Let \(F\in X_{q}(\mathbb{R}^{3}),\,1<q<\infty\), and consider (2.27) with \((f,\kappa,\mu)=i(F)\), see (2.7). Given \((\eta,\omega)\in\mathbb{C}^{3}\times\mathbb{C}^{3}\), we take a lifting function

\[U_{0}=\ell(\eta,\omega)\]

of the rigid motion \(\eta+\omega\times x\), where \(\ell\) is given by (3.4) and (3.6). Obviously, we have

\[\|\ell(\eta,\omega)\|_{W^{2,q}(\Omega)}\leq C|(\eta,\omega)|. \tag{3.29}\]

The fluid part of (2.27) is reduced to finding \(\widetilde{u}:=u-U_{0}\in D(A_{\Omega})\) that obeys

\[\lambda(\widetilde{u}+P_{\Omega}U_{0})+A_{\Omega}\widetilde{u}-P_{\Omega}\Delta U_{0}=P_{\Omega}f \tag{3.30}\]

in \(L^{q}_{\sigma}(\Omega)\), while the associated pressure consists of

\[\begin{split}& p=p_{1}+p_{2}-\lambda p_{3}+p_{4}\quad\text{with}\\ & p_{1}=N(\Delta\widetilde{u}),\quad p_{2}=N(\Delta U_{0}),\quad p_{3}=N(U_{0}),\quad p_{4}=N(\ell(\kappa,\mu))\end{split} \tag{3.31}\]

in terms of the Neumann operator \(N:E_{q}(\Omega)\ni h\mapsto N(h):=\psi\in\widehat{W}^{1,q}_{(0)}(\Omega)\), which singles out a unique solution to

\[\Delta\psi=\text{div }h\quad\text{in }\Omega,\qquad\partial_{\nu}\psi=\nu\cdot h\quad\text{on }\partial\Omega,\qquad\int_{\Omega_{3}}\psi\,dx=0,\]

where \(E_{q}(\Omega):=\{h\in L^{q}(\Omega)^{3};\;\text{div }h\in L^{q}(\Omega)\}\). Then we have

\[\|N(h)\|_{q,\Omega_{3}}\leq C\|\nabla N(h)\|_{q,\Omega_{3}}\leq C\|\nabla N(h)\|_{q,\Omega}\leq C\|h\|_{q,\Omega}. \tag{3.32}\]

Note that

\[\nu\cdot f=\nu\cdot(\kappa+\mu\times x)=\nu\cdot\ell(\kappa,\mu)\qquad\text{on }\partial\Omega,\]

where the function \(\ell(\kappa,\mu)\) is introduced for the description of \(p_{4}\) in (3.31) since \(\kappa+\mu\times x\notin E_{q}(\Omega)\). In (3.30) one cannot write \(A_{\Omega}U_{0}\), because \(U_{0}\) never belongs to \(D(A_{\Omega})\) on account of \(U_{0}|_{\partial\Omega}=\eta+\omega\times x\). We rewrite the equation of balance for linear momentum

\[\lambda m\eta+\int_{\partial\Omega}S(\widetilde{u}+U_{0},p_{1}+p_{2}-\lambda p_{3}+p_{4})\nu\,d\sigma=m\kappa\]

in (2.27) as

\[\lambda\left(m\eta+\int_{\partial\Omega}p_{3}\nu\,d\sigma\right)+\int_{\partial\Omega}S(\widetilde{u},p_{1})\nu\,d\sigma+\int_{\partial\Omega}S(U_{0},p_{2})\nu\,d\sigma=m\kappa+\int_{\partial\Omega}p_{4}\nu\,d\sigma.
\tag{3.33}\]

Likewise, the equation of balance for angular momentum is described as

\[\begin{split}\lambda\left(J\omega+\int_{\partial\Omega}x\times(p_{3}\nu)\,d\sigma\right)+\int_{\partial\Omega}x\times S(\widetilde{u},p_{1})\nu\,d\sigma+&\int_{\partial\Omega}x\times S(U_{0},p_{2})\nu\,d\sigma\\ &=J\mu+\int_{\partial\Omega}x\times(p_{4}\nu)\,d\sigma.\end{split} \tag{3.34}\]

It is convenient to introduce

\[K\left(\begin{array}{c}\eta\\ \omega\end{array}\right)=\begin{pmatrix}m\eta+\int_{\partial\Omega}N(\ell(\eta,\omega))\nu\,d\sigma\\ J\omega+\int_{\partial\Omega}x\times N(\ell(\eta,\omega))\nu\,d\sigma\end{pmatrix}, \tag{3.35}\]

\[Q_{1}\widetilde{u}=\begin{pmatrix}\int_{\partial\Omega}S(\widetilde{u},N(\Delta\widetilde{u}))\nu\,d\sigma\\ \int_{\partial\Omega}x\times S(\widetilde{u},N(\Delta\widetilde{u}))\nu\,d\sigma\end{pmatrix}, \tag{3.36}\]

\[Q_{2}(\eta,\omega)=\left(\begin{array}{c}\int_{\partial\Omega}S(\ell(\eta,\omega),N(\Delta\ell(\eta,\omega)))\nu\,d\sigma\\ \int_{\partial\Omega}x\times S(\ell(\eta,\omega),N(\Delta\ell(\eta,\omega)))\nu\,d\sigma\end{array}\right). \tag{3.37}\]

Note that \(K\in\mathbb{C}^{6\times 6}\) is independent of the choice of the lift of the rigid motion \(\eta+\omega\times x\), since \(\psi=N(\ell(\eta,\omega))\) solves

\[\Delta\psi=0\quad\text{in }\Omega,\qquad\partial_{\nu}\psi=\nu\cdot(\eta+\omega\times x)\quad\text{on }\partial\Omega,\qquad\int_{\Omega_{3}}\psi\,dx=0,\]

and that \(K\) is invertible, see [35, Lemma 3.8]. Hence, the equations (3.33)-(3.34) read

\[\lambda\left(\begin{array}{c}\eta\\ \omega\end{array}\right)+K^{-1}Q_{1}\widetilde{u}+K^{-1}Q_{2}(\eta,\omega)=\left(\begin{array}{c}\kappa\\ \mu\end{array}\right). \tag{3.38}\]

By (3.38) one can describe the term \(\lambda P_{\Omega}U_{0}\) of (3.30) as

\[\lambda P_{\Omega}U_{0}=P_{\Omega}\ell(\lambda\eta,\lambda\omega)=P_{\Omega}\ell\left((\kappa,\mu)-K^{-1}Q_{1}\widetilde{u}-K^{-1}Q_{2}(\eta,\omega)\right),\]

from which (3.30) is reduced to

\[\lambda\widetilde{u}+A_{\Omega}\widetilde{u}-P_{\Omega}\Delta U_{0}-P_{\Omega}\ell\left(K^{-1}Q_{1}\widetilde{u}+K^{-1}Q_{2}(\eta,\omega)\right)=P_{\Omega}f-P_{\Omega}\ell(\kappa,\mu)=f-\ell(\kappa,\mu), \tag{3.39}\]

where the last equality above follows from \(f-\ell(\kappa,\mu)\in L^{q}_{\sigma}(\Omega)\). Let us introduce the other Stokes-structure operator \(\mathbb{A}\) acting on \(Y_{q}\) by

\[\begin{array}{l}D_{q}(\mathbb{A})=D_{q}(A_{\Omega})\times\mathbb{C}^{3}\times\mathbb{C}^{3},\\ \mathbb{A}\left(\begin{array}{c}\widetilde{u}\\ (\eta,\omega)\end{array}\right)=\left(\begin{array}{cc}1&-P_{\Omega}\ell\\ 0&1\end{array}\right)\left(\begin{array}{cc}A_{\Omega}&-P_{\Omega}\Delta\ell\\ K^{-1}Q_{1}&K^{-1}Q_{2}\end{array}\right)\left(\begin{array}{c}\widetilde{u}\\ (\eta,\omega)\end{array}\right);\end{array} \tag{3.40}\]

then, in view of (3.38)-(3.39), the resolvent system (2.27) is reformulated as

\[(\lambda+\mathbb{A})\left(\begin{array}{c}\widetilde{u}\\ (\eta,\omega)\end{array}\right)=\left(\begin{array}{c}f-\ell(\kappa,\mu)\\ (\kappa,\mu)\end{array}\right)\qquad\text{in }Y_{q}.
\tag{3.41}\]

The operator \(\mathbb{A}\) is split into

\[\mathbb{A}=\mathbb{A}_{0}+\mathbb{A}_{1}\]

with

\[\mathbb{A}_{0}\left(\begin{array}{c}\widetilde{u}\\ (\eta,\omega)\end{array}\right)=\left(\begin{array}{cc}1&-P_{\Omega}\ell\\ 0&1\end{array}\right)\left(\begin{array}{cc}A_{\Omega}&-P_{\Omega}\Delta\ell\\ 0&0\end{array}\right)\left(\begin{array}{c}\widetilde{u}\\ (\eta,\omega)\end{array}\right), \tag{3.42}\]

\[\mathbb{A}_{1}\left(\begin{array}{c}\widetilde{u}\\ (\eta,\omega)\end{array}\right)=\left(\begin{array}{cc}1&-P_{\Omega}\ell\\ 0&1\end{array}\right)\left(\begin{array}{cc}0&0\\ K^{-1}Q_{1}&K^{-1}Q_{2}\end{array}\right)\left(\begin{array}{c}\widetilde{u}\\ (\eta,\omega)\end{array}\right). \tag{3.43}\]

Let \(\lambda\in\mathbb{C}\setminus(-\infty,0]\); then \(\lambda+\mathbb{A}_{0}\) is invertible and

\[(\lambda+\mathbb{A}_{0})^{-1}=\left(\begin{array}{cc}(\lambda+A_{\Omega})^{-1}&\lambda^{-1}(\lambda+A_{\Omega})^{-1}P_{\Omega}\Delta\ell\\ 0&\lambda^{-1}\end{array}\right) \tag{3.44}\]

in \(\mathcal{L}(Y_{q})\). Moreover, we find

\[\|(\lambda+\mathbb{A}_{0})^{-1}\|_{\mathcal{L}(Y_{q})}\leq C_{\varepsilon}|\lambda|^{-1} \tag{3.45}\]

for all \(\lambda\in\Sigma_{\varepsilon}\) with \(|\lambda|\geq 1\) (say), see (2.31), where \(\varepsilon\in(0,\pi/2)\) is fixed arbitrarily, because of the parabolic resolvent estimate of the Stokes operator \(A_{\Omega}\). We will show that \(\mathbb{A}_{1}\) is subordinate to \(\mathbb{A}_{0}\). To this end, let us deduce estimates of \(Q_{1}\) and \(Q_{2}\) given by (3.36)-(3.37). By the trace estimate together with (3.29) and (3.32), we have

\[|Q_{1}\widetilde{u}|_{\mathbb{C}^{3}\times\mathbb{C}^{3}}\leq C\big{(}\|A_{\Omega}\widetilde{u}\|_{q,\Omega}+\|\widetilde{u}\|_{q,\Omega}\big{)} \tag{3.46}\]

as well as

\[|Q_{2}(\eta,\omega)|_{\mathbb{C}^{3}\times\mathbb{C}^{3}}\leq C|(\eta,\omega)|.\]

Those estimates together with (3.29) lead us to

\[\|\ell\left(K^{-1}Q_{1}\widetilde{u}+K^{-1}Q_{2}(\eta,\omega)\right)\|_{W^{1,q}(\Omega)}+\big{|}K^{-1}Q_{1}\widetilde{u}+K^{-1}Q_{2}(\eta,\omega)\big{|}_{\mathbb{C}^{3}\times\mathbb{C}^{3}}\leq C\big{(}\|A_{\Omega}\widetilde{u}\|_{q,\Omega}+\|\widetilde{u}\|_{q,\Omega}+|(\eta,\omega)|\big{)}.\]

Since the support of the lifting function (3.4) is bounded, the Rellich theorem implies that, for any sequence \(\{\widetilde{u}_{j},(\eta_{j},\omega_{j})\}\) bounded in \(D_{q}(A_{\Omega})\times\mathbb{C}^{3}\times\mathbb{C}^{3}\), one can extract a convergent subsequence in \(L^{q}(\Omega)\times\mathbb{C}^{3}\times\mathbb{C}^{3}\) from \(\big{\{}\ell\big{(}K^{-1}Q_{1}\widetilde{u}_{j}+K^{-1}Q_{2}(\eta_{j},\omega_{j})\big{)},\)\(K^{-1}Q_{1}\widetilde{u}_{j}+K^{-1}Q_{2}(\eta_{j},\omega_{j})\big{\}}\). Along the same subsequence, \(\mathbb{A}_{1}\left(\begin{array}{c}\widetilde{u}_{j}\\ (\eta_{j},\omega_{j})\end{array}\right)\) is also convergent in \(Y_{q}\), as \(P_{\Omega}\) is bounded on \(L^{q}(\Omega)\). Hence, \(\mathbb{A}_{1}\) is a compact operator from \(D_{q}(\mathbb{A}_{0})=D_{q}(A_{\Omega})\times\mathbb{C}^{3}\times\mathbb{C}^{3}\) (endowed with the graph norm) into \(Y_{q}\).
One can then employ a perturbation theorem [7, Chapter III, Lemma 2.16] to conclude that \(\mathbb{A}_{1}\) is \(\mathbb{A}_{0}\)-bounded and its \(\mathbb{A}_{0}\)-bound is zero; that is, for every small \(\delta>0\) there is a constant \(C_{\delta}>0\) satisfying \[\Big{\|}\mathbb{A}_{1}\left(\begin{array}{c}\widetilde{u}\\ (\eta,\omega)\end{array}\right)\Big{\|}_{Y_{q}}\leq\delta\left\|\mathbb{A}_{0 }\left(\begin{array}{c}\widetilde{u}\\ (\eta,\omega)\end{array}\right)\right\|_{Y_{q}}+C_{\delta}\left\|\left( \begin{array}{c}\widetilde{u}\\ (\eta,\omega)\end{array}\right)\right\|_{Y_{q}}\] where \(\|\cdot\|_{Y_{q}}\) is obviously given by the sum of \(\|\cdot\|_{q,\Omega}\) and \(|(\cdot,\cdot)|_{\mathbb{C}^{3}\times\mathbb{C}^{3}}\). This combined with (3.45) allows us to take \(\Lambda_{\varepsilon}>0\) for each \(\varepsilon\in(0,\pi/2)\) such that \[\|(\lambda+\mathbb{A})^{-1}\|_{\mathcal{L}(Y_{q})}\leq C_{\varepsilon}| \lambda|^{-1} \tag{3.47}\] for all \(\lambda\in\Sigma_{\varepsilon}\) with \(|\lambda|\geq\Lambda_{\varepsilon}\), where the resolvent is described as the Neumann series in \(\mathcal{L}(Y_{q})\) for such \(\lambda\): \[\begin{split}&(\lambda+\mathbb{A})^{-1}=(\lambda+\mathbb{A}_{0})^ {-1}\sum_{j=0}^{\infty}\big{[}-\mathbb{A}_{1}(\lambda+\mathbb{A}_{0})^{-1} \big{]}^{j}\\ &\text{with}\quad\sum_{j=0}^{\infty}\big{\|}\mathbb{A}_{1}(\lambda+ \mathbb{A}_{0})^{-1}\big{\|}_{\mathcal{L}(Y_{q})}^{j}\leq 2.\end{split} \tag{3.48}\] With (3.47) at hand, we immediately obtain \[\|\widetilde{u}\|_{q,\Omega}+|(\eta,\omega)|\leq C_{\varepsilon}|\lambda|^{-1} \big{(}\|f\|_{q,\Omega}+|(\kappa,\mu)|\big{)}\] for (3.41), from which together with (3.29) we are led to \[\|u\|_{q,\Omega}+|(\eta,\omega)|\leq C_{\varepsilon}|\lambda|^{-1}\big{(}\|f\|_ {q,\Omega}+|(\kappa,\mu)|\big{)}\] for the solution to (2.27). On account of Proposition 3.2, we conclude \[\|U\|_{q,\mathbb{R}^{3}}\leq C_{\varepsilon}|\lambda|^{-1}\|F\|_{q,\mathbb{R}^ {3}}\] for (2.28) with \(C_{\varepsilon}>0\) independent of \(\lambda\in\Sigma_{\varepsilon}\) satisfying \(|\lambda|\geq\Lambda_{\varepsilon}\), where \(\varepsilon\in(0,\pi/2)\) is arbitrary. ### Generation of the evolution operator In this subsection we show the generation of the evolution operator by the family \(\{L_{+}(t);\,t\in\mathbb{R}\}\) of the Oseen-structure operators (2.36)-(2.37). It is of parabolic type (in the sense of Tanabe and Sobolevskii [48, 52]) with the properties (2.47)-(2.50). Let \(1<q<\infty\). We first see that, for each fixed \(t\in\mathbb{R}\), the operator \(-L_{+}(t)\) generates an analytic semigroup on the space \(X_{q}(\mathbb{R}^{3})\). This is readily verified as follows by a simple perturbation argument. We fix \(\varepsilon\in(0,\pi/2)\). Given \(F\in X_{q}(\mathbb{R}^{3})\), let us take \(U=(\lambda+A)^{-1}F\) with \(\lambda\in\Sigma_{\varepsilon}\), see (2.31), in (2.40) and employ (2.30) to obtain \[\|B(t)(\lambda+A)^{-1}F\|_{q,\mathbb{R}^{3}}\leq C\big{(}|\lambda|^{-1/2}+| \lambda|^{-1}\big{)}\|U_{b}\|\|F\|_{q,\mathbb{R}^{3}}. 
\tag{3.49}\]

Hence, there is a constant \(c_{0}>0\), dependent on \(\|U_{b}\|\) but independent of \(t\), such that if \(\lambda\in\Sigma_{\varepsilon}\) fulfills \(|\lambda|\geq c_{0}\), then the right-hand side of (3.49) is bounded from above by \(\frac{1}{2}\|F\|_{q,\mathbb{R}^{3}}\), yielding \(\lambda\in\rho(-L_{+}(t))\) subject to

\[\|(\lambda+L_{+}(t))^{-1}\|_{\mathcal{L}(X_{q}(\mathbb{R}^{3}))}\leq 2\|(\lambda+A)^{-1}\|_{\mathcal{L}(X_{q}(\mathbb{R}^{3}))}\leq\frac{C}{|\lambda|}\]

for every \(t\in\mathbb{R}\) by the Neumann series argument. We next verify the regularity of \(L_{+}(t)\) in \(t\), which allows us to apply the theory of parabolic evolution operators, see [48, Chapter 5]. In fact, by the same computations as above with use of (2.39), it follows from (2.34)-(2.35) that

\[\begin{split}&\|(L_{+}(t)-L_{+}(s))(\lambda+L_{+}(\tau))^{-1}F\|_{q,\mathbb{R}^{3}}\\ &=\big{\|}(B(t)-B(s))(\lambda+A)^{-1}\big{[}1+B(\tau)(\lambda+A)^{-1}\big{]}^{-1}F\big{\|}_{q,\mathbb{R}^{3}}\\ &\leq C(c_{0}^{-1/2}+c_{0}^{-1})(t-s)^{\theta}[U_{b}]_{\theta}\|F\|_{q,\mathbb{R}^{3}}\end{split} \tag{3.50}\]

for all \(F\in X_{q}(\mathbb{R}^{3})\), \(\lambda\in\Sigma_{\varepsilon}\) with \(|\lambda|\geq c_{0}\), and \(t,\,s,\,\tau\in\mathbb{R}\) with \(t>s\). As a consequence, the family \(\{L_{+}(t);\,t\in\mathbb{R}\}\) generates an evolution operator \(\{T(t,s);\,s,t\in I,\,s\leq t\}\) on \(X_{q}(\mathbb{R}^{3})\) for every compact interval \(I\subset\mathbb{R}\), which provides the family \(\{T(t,s);\,-\infty<s\leq t<\infty\}\) with the properties (2.47)-(2.50) by uniqueness of evolution operators. Notice that the local Hölder continuity in \(t\) of \(u_{b}(t)\) (with values in \(L^{\infty}(\Omega)\)) as well as of \(\eta_{b}(t)\) suffices for the mere generation of the evolution operator; however, the global Hölder continuity (2.34) is needed for further studies. This point is indeed the issue of the next subsection.

### Smoothing estimates of the evolution operator

This subsection is devoted to the \(L^{q}\)-\(L^{r}\) smoothing estimates (2.51)-(2.52) for all \((t,s)\) with \(t-s\in(0,\tau_{*}]\), where \(\tau_{*}\in(0,\infty)\) is fixed arbitrarily. Those rates themselves are quite standard; the point, however, is to show that the constants in (2.51)-(2.52), which may depend on \(\tau_{*}\), can be taken uniformly with respect to such \(t\), \(s\) under the global Hölder condition (2.34) on \(u_{b}(t)\) and \(\eta_{b}(t)\). Similar studies are found in [27, Lemma 3.2].

**Proposition 3.4**.: _Suppose (2.33) and (2.34). Let \(1<q<\infty\), \(r\in[q,\infty]\) (except \(r=\infty\) for (2.52)) and \(\tau_{*},\,\alpha_{0},\,\beta_{0}\in(0,\infty)\). Then there is a constant \(C=C(q,r,\tau_{*},\alpha_{0},\beta_{0},\theta)>0\) such that both (2.51) and (2.52) hold for all \((t,s)\) with \(t-s\in(0,\tau_{*}]\) and \(F\in X_{q}(\mathbb{R}^{3})\) whenever \(\|U_{b}\|\leq\alpha_{0}\) and \([U_{b}]_{\theta}\leq\beta_{0}\), where \(\|U_{b}\|\) and \([U_{b}]_{\theta}\) are given by (2.35). Moreover, we have_

\[\begin{split}\lim_{t\to s}\,(t-s)^{(3/q-3/r)/2}\|T(t,s)F\|_{r,\mathbb{R}^{3}}&=0,\qquad r\in(q,\infty],\\ \lim_{t\to s}\,(t-s)^{1/2+(3/q-3/r)/2}\|\nabla T(t,s)F\|_{r,\mathbb{R}^{3}}&=0,\qquad r\in[q,\infty),\end{split} \tag{3.51}\]

_for every \(F\in X_{q}(\mathbb{R}^{3})\), where the convergence is uniform on each precompact set of \(X_{q}(\mathbb{R}^{3})\)._

Proof.: Let us fix \(\tau_{*}>0\).
In the proof of the construction of the evolution operator, an important step is to show that

\[\|L_{+}(t)T(t,s)\|_{\mathcal{L}(X_{q}(\mathbb{R}^{3}))}\leq C(t-s)^{-1} \tag{3.52}\]

as well as

\[\|T(t,s)\|_{\mathcal{L}(X_{q}(\mathbb{R}^{3}))}\leq C \tag{3.53}\]

for \(t-s\leq\tau_{*}\). If we look into the details of the deduction of (3.52)-(3.53), see Tanabe [48, Chapter 5], we find that both constants \(C=C(q,\tau_{*},\alpha_{0},\beta_{0},\theta)>0\) can be taken uniformly in \((t,s)\) with \(t-s\leq\tau_{*}\) on account of the global Hölder continuity (3.50). See also the discussions in [27, Lemma 3.2]. Based on (3.52)-(3.53), we take \(U(t)=T(t,s)F\) with \(F\in X_{q}(\mathbb{R}^{3})\) in the second equation of (2.41) and use (2.39) to infer

\[\|\nabla T(t,s)F\|_{q,\Omega}\leq C(t-s)^{-1/2}\|F\|_{q,\mathbb{R}^{3}} \tag{3.54}\]

for \(t-s\leq\tau_{*}\) with some constant \(C=C(q,\tau_{*},\alpha_{0},\beta_{0},\theta)>0\). It follows from (3.53)-(3.54) together with the Gagliardo-Nirenberg inequality that

\[\|T(t,s)F\|_{r,\Omega}\leq C(t-s)^{-(3/q-3/r)/2}\|F\|_{q,\mathbb{R}^{3}} \tag{3.55}\]

for \(t-s\leq\tau_{*}\) provided \(1/q-1/r<1/3\) (actually, \(1/q-1/r\leq 1/3\) if \(q\neq 3\)). From the relation (2.7) with \(U(t)=T(t,s)F\), it is readily seen that

\[\|U(t)\|_{r,B}\leq C(|\eta(t)|+|\omega(t)|)\leq C\int_{B}|U(y,t)|\,dy\leq C\|U(t)\|_{q,\mathbb{R}^{3}}\leq C\|F\|_{q,\mathbb{R}^{3}} \tag{3.56}\]

and, from (2.17), that

\[\|\nabla U(t)\|_{r,B}\leq C|\omega(t)|\leq C\|F\|_{q,\mathbb{R}^{3}}. \tag{3.57}\]

Estimates (3.55) and (3.56) imply (2.51) for \(t-s\leq\tau_{*}\) if \(1<q\leq r\leq\infty\) (\(q\neq\infty\)) and \(1/q-1/r<1/3\), but the latter restriction can eventually be removed by using the semigroup property (2.47). Then (3.54) together with (2.51) for \(t-s\leq\tau_{*}\) leads us to

\[\|\nabla T(t,s)F\|_{r,\Omega}\leq C(t-s)^{-1/2-(3/q-3/r)/2}\|F\|_{q,\mathbb{R}^{3}}\]

for \(t-s\leq\tau_{*}\), which along with (3.57) concludes (2.52) for such \(t\), \(s\). It remains to show (3.51). If \(F\in\mathcal{E}(\mathbb{R}^{3})\subset D_{q}(A)\), \(1<q<\infty\), then it follows from (2.39), (2.41) and the boundedness

\[\|L_{+}(t)T(t,s)(k+L_{+}(s))^{-1}\|_{\mathcal{L}(X_{q}(\mathbb{R}^{3}))}\leq C\]

near \(t=s\) ([48, Chapter 5, Theorem 2.1]), where \(k>0\) is fixed large enough, that

\[\|\nabla T(t,s)F\|_{q,\Omega}\leq C\big{(}\|AF\|_{q,\mathbb{R}^{3}}+\|F\|_{q,\mathbb{R}^{3}}\big{)}\]

near \(t=s\). This combined with (3.57) (with \(r=q\)) implies that

\[\lim_{t\to s}\,(t-s)^{1/2}\|\nabla T(t,s)F\|_{q,\mathbb{R}^{3}}=0\]

for every \(F\in\mathcal{E}(\mathbb{R}^{3})\) and, therefore, for every \(F\in X_{q}(\mathbb{R}^{3})\) since \(\mathcal{E}(\mathbb{R}^{3})\) is dense in \(X_{q}(\mathbb{R}^{3})\), see Proposition 3.1. The other behaviors in (3.51) are verified more easily. The proof is complete.

We next derive the Hölder estimate of the evolution operator. This plays a role in verifying that a mild solution becomes a strong one to the nonlinear initial value problem in section 5. The argument is similar to the one for the autonomous case based on the theory of analytic semigroups, but not completely the same. It can be found, for instance, in the paper [49, Lemma 2.8] by Teramoto, who used the fractional powers of generators; however, in order to justify the argument there, one has to deduce an estimate of \(\|(k+L_{+}(t))^{\alpha}(k+L_{+}(s))^{-\alpha}\|_{\mathcal{L}(X_{q}(\mathbb{R}^{3}))}\) independent of \((t,s)\), where \(k>0\) is fixed large enough, as pointed out by Farwig and Tsuda [11, Lemma 3.6].
In the latter literature, the authors discuss the desired estimate of the fractional powers by use of the bounded \(\mathcal{H}^{\infty}\)-calculus of generators uniformly in \(t\). Instead, in this paper, we take an easier way to deduce the following result, which is enough for later use in section 5.

**Proposition 3.5**.: _Suppose (2.33) and (2.34). Let \(j\in\{0,\,1\}\), \(1<q<\infty\), \(r\in[q,\infty]\) (except \(r=\infty\) for \(j=1\)) and \(\tau_{*},\alpha_{0},\beta_{0}\in(0,\infty)\). Assume that \(q\) and \(r\) satisfy_

\[\frac{1}{q}-\frac{1}{r}<\frac{2-j}{3}\qquad(j=0,\,1). \tag{3.58}\]

_Given \(\mu\) satisfying_

\[0<\mu<1-\frac{j}{2}-\frac{3}{2}\left(\frac{1}{q}-\frac{1}{r}\right), \tag{3.59}\]

_set_

\[\kappa=\max\left\{\frac{j}{2}+\frac{3}{2}\left(\frac{1}{q}-\frac{1}{r}\right)+\mu,\;\frac{1}{2}\right\}.\]

_Then there is a constant \(C=C(\mu,q,r,\tau_{*},\alpha_{0},\beta_{0},\theta)>0\) such that_

\[\|\nabla^{j}T(t+h,s)F-\nabla^{j}T(t,s)F\|_{r,\mathbb{R}^{3}}\leq C(t-s)^{-\kappa}h^{\mu}\|F\|_{q,\mathbb{R}^{3}} \tag{3.60}\]

_for all \((t,s)\) with \(t-s\in(0,\tau_{*}]\), \(h\in(0,1)\) and \(F\in X_{q}(\mathbb{R}^{3})\), whenever \(\|U_{b}\|\leq\alpha_{0}\) and \([U_{b}]_{\theta}\leq\beta_{0}\), where \(\|U_{b}\|\) and \([U_{b}]_{\theta}\) are given by (2.35)._

Proof.: Set \(U(t)=T(t,s)F\), which satisfies the equation

\[U(t)=e^{-(t-s)A}F-V(t),\qquad V(t)=\int_{s}^{t}e^{-(t-\tau)A}B(\tau)U(\tau)\,d\tau,\]

in terms of the Stokes-structure semigroup \(e^{-tA}\). By the \(L^{q}\)-\(L^{r}\) estimates of the semigroup \(e^{-tA}\) due to [10] we get

\[\|\nabla^{j}\big{(}e^{-(t+h-s)A}-e^{-(t-s)A}\big{)}F\|_{r,\mathbb{R}^{3}}\leq C(t-s)^{-j/2-(3/q-3/r)/2-\mu}h^{\mu}\|F\|_{q,\mathbb{R}^{3}} \tag{3.61}\]

for every \(\mu\in(0,1)\). Proposition 3.4 implies that

\[\|B(t)U(t)\|_{q,\mathbb{R}^{3}}\leq C\|U_{b}\|(t-s)^{-1/2}\|F\|_{q,\mathbb{R}^{3}},\]

which together with (3.61) leads to

\[\|\nabla^{j}V(t+h)-\nabla^{j}V(t)\|_{r,\mathbb{R}^{3}}\leq C\|U_{b}\|\|F\|_{q,\mathbb{R}^{3}}\Big{[}(t-s)^{1/2-j/2-(3/q-3/r)/2-\mu}\,h^{\mu}+(t-s)^{-1/2}\,h^{1-j/2-(3/q-3/r)/2}\Big{]}\]

with \(\mu\) satisfying (3.59). In view of this and (3.61), we conclude (3.60).

### Estimate of the pressure

In this subsection we study a smoothing rate near the initial time of the pressure associated with \(T(t,s)F\). As in [28], this is an important issue at the final stage of the proof of decay estimates of \(T(t,s)F\) because of the nonautonomous character. Indeed, this circumstance differs from that of the autonomous case [10], in which several advantages of analytic semigroups are used. For our purpose, we need to discuss the domains of fractional powers of the Stokes-structure operator \(A\) through the behavior of the resolvent of the other operator \(\mathbb{A}\). We start with the following lemma.

**Lemma 3.2**.: _Suppose that \(\mathbb{A}\) is the operator given by (3.40). Let \(1<q<\infty\) and \(\vartheta\in(0,1/2q)\). Then there is a constant \(C=C(q,\vartheta)>0\) such that_

\[\|\mathbb{A}(\lambda+\mathbb{A})^{-1}G\|_{Y_{q}}\leq C\lambda^{-\vartheta}\big{(}\|g\|_{W^{1,q}(\Omega)}+\|G\|_{Y_{q}}\big{)} \tag{3.62}\]

_for all \(\lambda\geq 1\) and \(G=(g,\kappa,\mu)\in Y_{q}=L^{q}_{\sigma}(\Omega)\times\mathbb{C}^{3}\times\mathbb{C}^{3}\) with \(g\in W^{1,q}(\Omega)\)._

Proof.: Since we are going to discuss the behavior of the resolvent for \(\lambda\to\infty\) only along the real half line, we may fix, for instance, \(\varepsilon=\pi/4\) and assume that \(\lambda\geq\Lambda_{\pi/4}\).
Then we know the representation (3.48) of \((\lambda+\mathbb{A})^{-1}\), which leads to

\[\mathbb{A}(\lambda+\mathbb{A})^{-1}G=\mathbb{A}(\lambda+\mathbb{A}_{0})^{-1}(G+H(\lambda))\]

with

\[\begin{split}H(\lambda)=\left(\begin{array}{c}h(\lambda)\\ \big{(}\alpha(\lambda),\beta(\lambda)\big{)}\end{array}\right)&:=\sum_{j=1}^{\infty}\big{[}-\mathbb{A}_{1}(\lambda+\mathbb{A}_{0})^{-1}\big{]}^{j}\,G\\ &=-\mathbb{A}_{1}(\lambda+\mathbb{A}_{0})^{-1}(G+H(\lambda))\end{split} \tag{3.63}\]

and

\[\|G+H(\lambda)\|_{Y_{q}}\leq\sum_{j=0}^{\infty}\|\mathbb{A}_{1}(\lambda+\mathbb{A}_{0})^{-1}\|_{\mathcal{L}(Y_{q})}^{j}\|G\|_{Y_{q}}\leq 2\|G\|_{Y_{q}}. \tag{3.64}\]

In view of (3.40) and (3.44) we find

\[\begin{split}\|\mathbb{A}(\lambda+\mathbb{A}_{0})^{-1}G\|_{Y_{q}}&\leq\|A_{\Omega}w(\lambda)\|_{q,\Omega}+\lambda^{-1}\|P_{\Omega}\Delta\ell(\kappa,\mu)\|_{q,\Omega}\\ &\quad+\|P_{\Omega}\ell\big{(}K^{-1}Q_{1}w(\lambda)+\lambda^{-1}K^{-1}Q_{2}(\kappa,\mu)\big{)}\|_{q,\Omega}\\ &\quad+|K^{-1}Q_{1}w(\lambda)|_{\mathbb{C}^{3}\times\mathbb{C}^{3}}+\lambda^{-1}|K^{-1}Q_{2}(\kappa,\mu)|_{\mathbb{C}^{3}\times\mathbb{C}^{3}},\end{split}\]

where

\[w(\lambda):=(\lambda+A_{\Omega})^{-1}g+\lambda^{-1}(\lambda+A_{\Omega})^{-1}P_{\Omega}\Delta\ell(\kappa,\mu).\]

From (3.29), (3.46) and the resolvent estimate of \(A_{\Omega}\) it follows that

\[\|\mathbb{A}(\lambda+\mathbb{A}_{0})^{-1}G\|_{Y_{q}}\leq C\|A_{\Omega}(\lambda+A_{\Omega})^{-1}g\|_{q,\Omega}+C\lambda^{-1}\big{(}\|g\|_{q,\Omega}+|(\kappa,\mu)|\big{)}. \tag{3.65}\]

We here recall the fact that, except for the vanishing normal trace, the space \(D_{q}(A_{\Omega}^{\vartheta})\) does not involve any boundary condition (Noll and Saal [40, Subsection 2.3], Giga and Sohr [23]) and, thereby,

\[g\in L^{q}_{\sigma}(\Omega)\cap W^{1,q}(\Omega)\subset L^{q}_{\sigma}(\Omega)\cap H^{2\vartheta}_{q}(\Omega)=[L^{q}_{\sigma}(\Omega),D_{q}(A_{\Omega})]_{\vartheta}=D_{q}(A_{\Omega}^{\vartheta})\]

provided \(\vartheta\in(0,1/2q)\), where \([\cdot,\cdot]_{\vartheta}\) stands for the complex interpolation functor and \(H^{2\vartheta}_{q}(\Omega):=[L^{q}(\Omega),W^{2,q}(\Omega)]_{\vartheta}\) is the Bessel potential space. For such \(\vartheta\), we infer

\[\begin{split}\|A_{\Omega}(\lambda+A_{\Omega})^{-1}g\|_{q,\Omega}&=\|A_{\Omega}^{1-\vartheta}(\lambda+A_{\Omega})^{-1}A_{\Omega}^{\vartheta}g\|_{q,\Omega}\\ &\leq C\|A_{\Omega}(\lambda+A_{\Omega})^{-1}A_{\Omega}^{\vartheta}g\|_{q,\Omega}^{1-\vartheta}\|(\lambda+A_{\Omega})^{-1}A_{\Omega}^{\vartheta}g\|_{q,\Omega}^{\vartheta}\\ &\leq C\lambda^{-\vartheta}\|A_{\Omega}^{\vartheta}g\|_{q,\Omega}\\ &\leq C\lambda^{-\vartheta}\|g\|_{W^{1,q}(\Omega)},\end{split}\]

which combined with (3.65) implies that

\[\|\mathbb{A}(\lambda+\mathbb{A}_{0})^{-1}G\|_{Y_{q}}\leq C\lambda^{-\vartheta}\big{(}\|g\|_{W^{1,q}(\Omega)}+\|G\|_{Y_{q}}\big{)} \tag{3.66}\]

for all \(\lambda\geq\Lambda_{\pi/4}\). It remains to show the other estimate

\[\|\mathbb{A}(\lambda+\mathbb{A}_{0})^{-1}H(\lambda)\|_{Y_{q}}\leq C\lambda^{-\vartheta}\|G\|_{Y_{q}}, \tag{3.67}\]

in which the additional condition \(g\in W^{1,q}(\Omega)\) is not needed.
In fact, by exactly the same argument as above for the deduction of (3.66), we obtain

\[\begin{split}\|\mathbb{A}(\lambda+\mathbb{A}_{0})^{-1}H(\lambda)\|_{Y_{q}}&\leq C\lambda^{-\vartheta}\big{(}\|h(\lambda)\|_{W^{1,q}(\Omega)}+\|H(\lambda)\|_{Y_{q}}\big{)}\\ &\leq C\lambda^{-\vartheta}\big{(}\|h(\lambda)\|_{W^{1,q}(\Omega)}+\|G\|_{Y_{q}}\big{)}.\end{split} \tag{3.68}\]

By the representation (3.63) of \(H(\lambda)\) along with (3.43)-(3.44) we find

\[\|h(\lambda)\|_{W^{1,q}(\Omega)}=\|P_{\Omega}\ell\big{(}K^{-1}Q_{1}v(\lambda)+\lambda^{-1}K^{-1}Q_{2}(\kappa+\alpha(\lambda),\mu+\beta(\lambda))\big{)}\|_{W^{1,q}(\Omega)},\]

where

\[v(\lambda):=(\lambda+A_{\Omega})^{-1}(g+h(\lambda))+\lambda^{-1}(\lambda+A_{\Omega})^{-1}P_{\Omega}\Delta\ell(\kappa+\alpha(\lambda),\mu+\beta(\lambda)).\]

Since \(P_{\Omega}\) is bounded on \(W^{1,q}(\Omega)\) by Proposition 3.1, we use (3.29), (3.46) and (3.64) to observe

\[\|h(\lambda)\|_{W^{1,q}(\Omega)}\leq C\|G+H(\lambda)\|_{Y_{q}}\leq C\|G\|_{Y_{q}}\]

with some \(C>0\) independent of \(\lambda\geq\Lambda_{\pi/4}\). In this way, we conclude (3.67). The proof is complete.

The behavior of the resolvent with respect to \(\lambda\) obtained in Lemma 3.2 is inherited by the Stokes-structure operator, leading to the following lemma.

**Lemma 3.3**.: _Suppose that \(A\) is the operator given by (2.25). Let \(1<q<\infty\) and \(\vartheta\in(0,1/2q)\). Then there is a constant \(C=C(q,\vartheta)>0\) such that_

\[\|A(\lambda+A)^{-1}F\|_{q,\mathbb{R}^{3}}\leq C\lambda^{-\vartheta}\big{(}\|f\|_{W^{1,q}(\Omega)}+\|F\|_{q,\mathbb{R}^{3}}\big{)} \tag{3.69}\]

_for all \(\lambda\geq 1\) and \(F\in X_{q}(\mathbb{R}^{3})\) with \(f=F|_{\Omega}\in W^{1,q}(\Omega)\)._

Proof.: Set \((f,\kappa,\mu)=i(F)\), see (2.7), and

\[U=(\lambda+A)^{-1}F,\qquad(u,\eta,\omega)=i(U).\]

Following subsection 3.3, we take the lifting function \(U_{0}=\ell(\eta,\omega)\) as in (3.4). Then we have

\[A(\lambda+A)^{-1}F=F-\lambda U=(f-\lambda u)\chi_{\Omega}+\big{\{}(\kappa-\lambda\eta)+(\mu-\lambda\omega)\times x\big{\}}\chi_{B},\]

so that

\[\begin{split}\|A(\lambda+A)^{-1}F\|_{q,\mathbb{R}^{3}}&\leq C\|f-\lambda u\|_{q,\Omega}+C|(\kappa,\mu)-\lambda(\eta,\omega)|\\ &\leq C\|f-\ell(\kappa,\mu)-\lambda\big{(}u-\ell(\eta,\omega)\big{)}\|_{q,\Omega}+C|(\kappa,\mu)-\lambda(\eta,\omega)|\end{split} \tag{3.70}\]

on account of (3.29). By virtue of (3.41) in the other formulation, we obtain

\[\left(\begin{array}{c}\widetilde{u}\\ (\eta,\omega)\end{array}\right)=(\lambda+\mathbb{A})^{-1}\left(\begin{array}{c}g\\ (\kappa,\mu)\end{array}\right)\]

with

\[\widetilde{u}=u-\ell(\eta,\omega),\qquad g=f-\ell(\kappa,\mu).\]

By the assumption \(f\in W^{1,q}(\Omega)\) we have \(g\in W^{1,q}(\Omega)\) with

\[\|\nabla g\|_{q,\Omega}\leq\|\nabla f\|_{q,\Omega}+C|(\kappa,\mu)|\]

and, therefore, we know (3.62) for

\[\mathbb{A}(\lambda+\mathbb{A})^{-1}G=G-\lambda\left(\begin{array}{c}\widetilde{u}\\ (\eta,\omega)\end{array}\right),\qquad G=\left(\begin{array}{c}g\\ (\kappa,\mu)\end{array}\right).\]

Thus, in view of (3.70), we conclude (3.69). The proof is complete.

In the following lemma, we note that \(\mathbb{P}\psi\) never fulfills the boundary condition at \(\partial\Omega\) involved in \(D_{r}(A)\), see (2.25), even if \(\psi\in C_{0}^{\infty}(\Omega)\subset L_{R}^{r}(\mathbb{R}^{3})\) (by setting it zero outside \(\Omega\)); thus, \(\delta>0\) must be small in order that (3.71) hold true.

**Lemma 3.4**.: _Let \(1<r<\infty\) and \(\delta\in(0,1/2r)\)._
Then there is a constant \(C=C(r,\delta)>0\) such that \(\mathbb{P}\psi\in D_{r}(A^{\delta})\) subject to_ \[\|A^{\delta}\mathbb{P}\psi\|_{r,\mathbb{R}^{3}}\leq C\big{(}\|\psi\|_{W^{1,r}(\Omega)}+\|\psi\|_{r,\mathbb{R}^{3}}\big{)} \tag{3.71}\] _for all \(\psi\in L_{R}^{r}(\mathbb{R}^{3})\) with \(\psi|_{\Omega}\in W^{1,r}(\Omega)\)._

Proof.: Given \(\delta\in(0,1/2r)\), we take \(\vartheta\in(\delta,1/2r)\). Let \(F\in X_{r}(\mathbb{R}^{3})\) with \(f=F|_{\Omega}\in W^{1,r}(\Omega)\), then we have (3.69) with such \(\vartheta\), which implies that \(F\in D_{r}(A^{\delta})\) subject to \[\|A^{\delta}F\|_{r,\mathbb{R}^{3}}\leq C\big{(}\|f\|_{W^{1,r}(\Omega)}+\|F\|_{r,\mathbb{R}^{3}}\big{)} \tag{3.72}\] by following the argument in [52, Theorem 2.24]. In that work, \(0\in\rho(A)\) is assumed; however, this is not needed here, as one may consider \(1+A\) instead of \(A\) as follows: In fact, by (3.69) with \(\vartheta\in(\delta,1/2r)\) we obtain \[\int_{1}^{\infty}\lambda^{-1+\delta}\|(1+A)(\lambda+1+A)^{-1}F\|_{r,\mathbb{R}^{3}}\,d\lambda\leq C\big{(}\|f\|_{W^{1,r}(\Omega)}+\|F\|_{r,\mathbb{R}^{3}}\big{)},\] while the integral over the interval \((0,1)\) of the same integrand is always convergent. As a consequence, \[(1+A)^{-1+\delta}F =\frac{\sin\pi(1-\delta)}{\pi}\int_{0}^{\infty}\lambda^{-1+\delta}(\lambda+1+A)^{-1}F\,d\lambda\] \[=\frac{\sin\pi(1-\delta)}{\pi}(1+A)^{-1}\int_{0}^{\infty}\lambda^{-1+\delta}(1+A)(\lambda+1+A)^{-1}F\,d\lambda\] belongs to \(D_{r}(A)\), yielding \(F\in D_{r}(A^{\delta})\) along with (3.72). By (3.11) together with (3.9) we know that the projection \(\mathbb{P}\) is bounded from \(L_{R}^{r}(\mathbb{R}^{3})\cap W^{1,r}(\Omega)\) to \(X_{r}(\mathbb{R}^{3})\cap W^{1,r}(\Omega)\). Thus, (3.72) implies (3.71).

We now provide an estimate of the pressure near the initial time, which would be of independent interest.

**Proposition 3.6**.: _Suppose (2.33) and (2.34). Let \(1<q<\infty\) and \(\tau_{*},\,\alpha_{0},\,\beta_{0}\in(0,\infty)\). Then, for every \(\gamma\in\big{(}(1+1/q)/2,\,1\big{)}\), there is a constant \(C=C(\gamma,q,\tau_{*},\alpha_{0},\beta_{0},\theta)>0\) such that_ \[\|p(t)\|_{q,\Omega_{3}}+\|\partial_{t}T(t,s)F\|_{W^{-1,q}(\Omega_{3})}\leq C(t-s)^{-\gamma}\|F\|_{q,\mathbb{R}^{3}} \tag{3.73}\] _for all \((t,s)\) with \(t-s\in(0,\tau_{*}]\) and \(F\in X_{q}(\mathbb{R}^{3})\) whenever \(\|U_{b}\|\leq\alpha_{0}\) and \([U_{b}]_{\theta}\leq\beta_{0}\), where \(p(t)\) denotes the pressure associated with \(T(t,s)F\) and it is singled out in such a way that \(\int_{\Omega_{3}}p(t)\,dx=0\) for each \(t\in(s,\infty)\), while \(\|U_{b}\|\) and \([U_{b}]_{\theta}\) are given by (2.35)._

Proof.: Set \(U(t)=T(t,s)F\) and \(u(t)=U(t)|_{\Omega}\). We follow the approach developed by Noll and Saal [40, Lemma 13]. Given \(\phi\in C^{\infty}_{0}(\Omega_{3})\), we take \(\psi:=\mathbb{B}[\phi-\overline{\phi}]\in W^{1,q^{\prime}}_{0}(\Omega_{3})\) that solves \[\text{div }\psi=\phi-\overline{\phi}\quad\text{in }\Omega_{3},\qquad\psi=0\quad\text{on }\partial\Omega_{3}, \tag{3.74}\] where \(\mathbb{B}\) denotes the Bogovskii operator for the domain \(\Omega_{3}\) and \(\overline{\phi}:=\frac{1}{|\Omega_{3}|}\int_{\Omega_{3}}\phi(x)\,dx\).
The boundary value problem (3.74) (in which \(\phi-\overline{\phi}\) is replaced by \(g\) with the compatibility condition) admits many solutions, among which the Bogovskii operator \[\mathbb{B}:W^{k,r}_{0}(\Omega_{3})\to W^{k+1,r}_{0}(\Omega_{3})^{3},\qquad r \in(1,\infty),\;k=0,1,2,\cdots \tag{3.75}\] specifies a particular solution discovered by Bogovskii [1] with fine regularity properties \[\|\nabla^{k+1}\mathbb{B}g\|_{r,\Omega_{3}}\leq C\|\nabla^{k}g\|_{r,\Omega_{3 }},\qquad\|\mathbb{B}g\|_{r,\Omega_{3}}\leq C\|g\|_{W^{1,r^{\prime}}(\Omega_{3 })^{*}} \tag{3.76}\] with some \(C>0\) (dependent on the same \(k,\,r\) as above) as well as \(\text{div }\mathbb{B}g=g\) provided \(\int_{\Omega_{3}}g\,dx=0\), see [2, 16, 22] for the details. Note that \(\mathbb{B}g\in C^{\infty}_{0}(\Omega_{3})^{3}\) for every \(g\in C^{\infty}_{0}(\Omega_{3})\) and that the right-hand side of the latter estimate of (3.76) cannot be replaced by \(\|g\|_{W^{-1,r}(\Omega_{3})}\). We may understand \(\psi\in W^{1,q^{\prime}}(\mathbb{R}^{3})\cap L^{q^{\prime}}_{R}(\mathbb{R}^{3})\) by setting zero outside \(\Omega_{3}\) and use (3.76) to get \[\|\psi\|_{W^{1,q^{\prime}}(\mathbb{R}^{3})}=\|\psi\|_{W^{1,q^{\prime}}_{0}( \Omega_{3})}\leq C\|\phi-\overline{\phi}\|_{q^{\prime},\Omega_{3}}\leq C\| \phi\|_{q^{\prime},\Omega_{3}}. \tag{3.77}\] By virtue of \(\int_{\Omega_{3}}p(t)\,dx=0\), we obtain \[\langle p(t),\phi\rangle_{\Omega_{3}} =\langle p(t),\phi-\overline{\phi}\rangle_{\Omega_{3}}=\langle p( t),\text{div }\psi\rangle_{\Omega_{3}}=-\langle\nabla p(t),\psi\rangle_{\Omega_{3}}\] \[=\langle\partial_{t}u-\Delta u+(u_{b}-\eta_{b})\cdot\nabla u, \psi\rangle_{\Omega_{3}}\] for all \(\phi\in C^{\infty}_{0}(\Omega_{3})\). Since \(\psi=0\) outside \(\Omega_{3}\), we have \[\langle\partial_{t}u,\psi\rangle_{\Omega_{3}}=\langle\partial_{t}U,\psi \rangle_{\mathbb{R}^{3},\rho}=-\langle AU+B(t)U,\psi\rangle_{\mathbb{R}^{3},\rho}\] and, thereby, \[\langle p(t),\phi\rangle_{\Omega_{3}} =-\langle AU,\psi\rangle_{\mathbb{R}^{3},\rho}-\langle\Delta u, \psi\rangle_{\Omega}+\langle(1-\mathbb{P})\big{[}\{(u_{b}-\eta_{b})\cdot\nabla u \}\chi_{\Omega}\big{]},\psi\rangle_{\mathbb{R}^{3},\rho} \tag{3.78}\] \[=:I+II+III\] for all \(\phi\in C^{\infty}_{0}(\Omega_{3})\). Note that the pairing over \(\mathbb{R}^{3}\) should involve the constant weight \(\rho\), otherwise one can use neither (2.29) nor (3.10). It follows from Proposition 3.4, (2.33) and (3.77) that \[|III|\leq C\|U_{b}\|\|\nabla u\|_{q,\Omega}\|\psi\|_{q^{\prime},\mathbb{R}^{3}} \leq C(t-s)^{-1/2}\|U_{b}\|\|F\|_{q,\mathbb{R}^{3}}\|\phi\|_{q^{\prime},\Omega_ {3}} \tag{3.79}\] and that \[|II|\leq\|\nabla u\|_{q,\Omega}\|\nabla\psi\|_{q^{\prime},\Omega}\leq C(t-s)^{-1/2} \|F\|_{q,\mathbb{R}^{3}}\|\phi\|_{q^{\prime},\Omega_{3}} \tag{3.80}\] for all \((t,s)\) with \(t-s\leq\tau_{*}\) and \(F\in X_{q}(\mathbb{R}^{3})\). By virtue of the momentum inequality for fractional powers with \(\gamma\in(0,1)\) together with (2.41) and (3.52)-(3.53) we find \[\|A^{\gamma}U\|_{q,\mathbb{R}^{3}}\leq C\|AU\|_{q,\mathbb{R}^{3}}^{\gamma}\|U \|_{q,\mathbb{R}^{3}}^{1-\gamma}\leq C(t-s)^{-\gamma}\|F\|_{q,\mathbb{R}^{3}} \tag{3.81}\] for all \((t,s)\) with \(t-s\leq\tau_{*}\) and \(F\in X_{q}(\mathbb{R}^{3})\). 
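As a side remark added for the reader's convenience, the admissible range of \(\gamma\) in what follows is dictated by Lemma 3.4 through the elementary equivalence \[\gamma>\frac{1}{2}\left(1+\frac{1}{q}\right)\iff 1-\gamma<\frac{1}{2}\left(1-\frac{1}{q}\right)=\frac{1}{2q^{\prime}},\] so that \(\delta:=1-\gamma\) lies in the range \((0,1/2q^{\prime})\) required by Lemma 3.4 with \(r=q^{\prime}\).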
If in particular \(\gamma\in\big{(}(1+1/q)/2,\,1\big{)}\), then Lemma 3.4 with \(r=q^{\prime}\), \(\delta=1-\gamma\in(0,1/2q^{\prime})\) and (3.77) for \(\psi\in W^{1,q^{\prime}}(\mathbb{R}^{3})\cap L_{R}^{q^{\prime}}(\mathbb{R}^{3})\) imply \[\|A^{1-\gamma}\mathbb{P}\psi\|_{q^{\prime},\mathbb{R}^{3}}\leq C\big{(}\|\psi\|_{W^{1,q^{\prime}}(\Omega)}+\|\psi\|_{q^{\prime},\mathbb{R}^{3}}\big{)}\leq C\|\phi\|_{q^{\prime},\Omega_{3}}\] for all \(\phi\in C_{0}^{\infty}(\Omega_{3})\), which combined with (3.81) yields \[|I|=|\langle A^{\gamma}U,A^{1-\gamma}\mathbb{P}\psi\rangle_{\mathbb{R}^{3},\rho}|\leq C(t-s)^{-\gamma}\|F\|_{q,\mathbb{R}^{3}}\|\phi\|_{q^{\prime},\Omega_{3}} \tag{3.82}\] for all \((t,s)\) with \(t-s\leq\tau_{*}\) and \(F\in X_{q}(\mathbb{R}^{3})\) by taking into account (2.29) and (3.10). We collect (3.78)-(3.80) and (3.82) to conclude (3.73) for the pressure. It remains to show (3.73) for \(\partial_{t}U(t)|_{\Omega_{3}}=\partial_{t}u(t)|_{\Omega_{3}}\), which readily follows from \[|\langle\partial_{t}u,\Phi\rangle_{\Omega_{3}}| =|\langle\Delta u+(\eta_{b}-u_{b})\cdot\nabla u-\nabla p,\Phi\rangle_{\Omega_{3}}|\] \[\leq C\big{(}\|\nabla u\|_{q,\Omega_{3}}+\|U_{b}\|\|u\|_{q,\Omega_{3}}+\|p\|_{q,\Omega_{3}}\big{)}\|\nabla\Phi\|_{q^{\prime},\Omega_{3}}\] for all \(\Phi\in C_{0}^{\infty}(\Omega_{3})^{3}\) together with the estimate for the pressure obtained above and Proposition 3.4. The proof is complete.

**Remark 3.1**.: _Tolksdorf [50, Proposition 3.4] has made it clear that the smoothing rate \((t-s)^{-3/4}\) of the pressure is sharp within the \(L^{2}\) theory of the Stokes semigroup in bounded domains through the behavior of the resolvent for \(|\lambda|\to\infty\). This suggests that (3.73) would be almost optimal. The same rate as in (3.73) was first discovered by Noll and Saal [40] and was slightly improved to \(\gamma=(1+1/q)/2\) by [30, 28] for the exterior problem with prescribed rigid motions._

### Adjoint evolution operator and backward problem

Let \(t\in\mathbb{R}\) be a parameter, regarded as the final time of the problem below. For better understanding of the initial value problem (1.9) for the linearized system, it is useful to study the backward problem for the adjoint system subject to the final conditions at \(t\): \[-\partial_{s}v=\Delta v-(\eta_{b}(s)-u_{b}(s))\cdot\nabla v+\nabla p_{v},\qquad\text{div }v=0\quad\text{in }\Omega\times(-\infty,t),\] \[v|_{\partial\Omega}=\eta+\omega\times y,\qquad v\to 0\quad\text{as }|y|\to\infty,\] \[-m\,\frac{d\eta}{ds}+\int_{\partial\Omega}S(v,-p_{v})\nu\,d\sigma=0, \tag{3.83}\] \[-J\,\frac{d\omega}{ds}+\int_{\partial\Omega}y\times S(v,-p_{v})\nu\,d\sigma=0,\] \[v(\cdot,t)=v_{0},\quad\eta(t)=\eta_{0},\quad\omega(t)=\omega_{0},\] where \(v(y,s)\), \(p_{v}(y,s)\), \(\eta(s)\) and \(\omega(s)\) are unknown functions. By using the Oseen-structure operator (2.36) together with the description (2.7)-(2.9), the backward problem (3.83) is formulated as \[-\frac{dV}{ds}+L_{-}(s)V=0,\quad s\in(-\infty,t);\qquad V(t)=V_{0} \tag{3.84}\] for the monolithic velocity \(V=v\chi_{\Omega}+(\eta+\omega\times y)\chi_{B}\), where \(V_{0}=v_{0}\chi_{\Omega}+(\eta_{0}+\omega_{0}\times y)\chi_{B}\). Conversely, once we have a solution to (3.84), there exists a pressure \(p_{v}\) which together with \((v,\eta,\omega)\) solves (3.83) as in subsection 3.2.
We begin with the justification of the duality relation between the operators \(L_{\pm}(t)\) within \(X_{q}(\mathbb{R}^{3})^{*}=X_{q^{\prime}}(\mathbb{R}^{3})\), see (3.8), and the dissipative structure of each of those operators, see Remark 3.3 below. From subsection 3.4 we know that \(k+L_{\pm}(t)\) is bijective for large \(k>0\), which combined with (3.85) below implies that \(L_{\pm}(t)^{*}=L_{\mp}(t)\). The latter property (3.86) also plays a crucial role in section 4.

**Lemma 3.5**.: _Suppose (2.33) and (2.34). Let \(1<q<\infty\), then_ \[\langle L_{\pm}(t)U,V\rangle_{\mathbb{R}^{3},\rho}=\langle U,L_{\mp}(t)V\rangle_{\mathbb{R}^{3},\rho} \tag{3.85}\] _for all \(U\in D_{q}(A)\), \(V\in D_{q^{\prime}}(A)\) and \(t\in\mathbb{R}\), where \(\langle\cdot,\cdot\rangle_{\mathbb{R}^{3},\rho}\) is given by (2.11). Moreover, we have_ \[\langle L_{\pm}(t)U,U\rangle_{\mathbb{R}^{3},\rho}=2\|Du\|_{2,\Omega}^{2} \tag{3.86}\] _for all \(U\in D_{2}(A)\) and \(t\in\mathbb{R}\), where \(u=U|_{\Omega}\)._

Proof.: The following fine properties of the Stokes-structure operator \(A\) are well-known, see Takahashi and Tucsnak [46, Section 4]: \[\langle AU,V\rangle_{\mathbb{R}^{3},\rho}=\langle U,AV\rangle_{\mathbb{R}^{3},\rho}=2\langle Du,Dv\rangle_{\Omega} \tag{3.87}\] for all \(U\in D_{q}(A)\) and \(V\in D_{q^{\prime}}(A)\), where \(u=U|_{\Omega}\), \(v=V|_{\Omega}\); in particular, \[\langle AU,U\rangle_{\mathbb{R}^{3},\rho}=2\|Du\|_{2,\Omega}^{2} \tag{3.88}\] for all \(U\in D_{2}(A)\). The computation (3.88) has already been carried out in the latter half of the proof of Proposition 3.3 of the present paper as well. We stress that the constant weight \(\rho>0\) is needed in order that (3.87) and (3.88) hold true. It thus suffices to show that \[\langle B(t)U,V\rangle_{\mathbb{R}^{3},\rho}+\langle U,B(t)V\rangle_{\mathbb{R}^{3},\rho}=0 \tag{3.89}\] for all \(U\) and \(V\) as above. We see from (3.10) that \[\langle B(t)U,V\rangle_{\mathbb{R}^{3},\rho}=\langle(u_{b}(t)-\eta_{b}(t))\cdot\nabla u,v\rangle_{\Omega} \tag{3.90}\] and the same relation for \(\langle U,B(t)V\rangle_{\mathbb{R}^{3},\rho}\). Let us use the same cut-off function \(\phi_{R}\) as in (3.14). Since \[\nu\cdot(u_{b}(t)-\eta_{b}(t))=x\cdot(\omega_{b}(t)\times x)=0,\qquad x\in\partial\Omega \tag{3.91}\] by (2.33), we find \[\int_{\Omega}\operatorname{div}\,\left[(u\cdot v)(u_{b}(t)-\eta_{b}(t))\phi_{R}\right]dx=\int_{\partial\Omega}(u\cdot v)\nu\cdot(u_{b}(t)-\eta_{b}(t))\,d\sigma=0\] for all \(u=U|_{\Omega}\) and \(v=V|_{\Omega}\). By (2.33) again it is readily seen that \[\lim_{R\to\infty}\int_{2R<|x|<3R}|u||v||(u_{b}(t)-\eta_{b}(t))\cdot\nabla\phi_{R}|\,dx=0,\] which implies \[\int_{\Omega}(u_{b}(t)-\eta_{b}(t))\cdot\nabla(u\cdot v)\,dx=0.\] This combined with (3.90) concludes (3.89), which together with (3.87) leads us to (3.85). The relation (3.86) follows from (3.88) and (3.89) with \(V=U\in D_{2}(A)\). The proof is complete.

Several remarks are in order.

**Remark 3.2**.: _It should be emphasized that Lemma 3.5 is not accomplished if the drift term in (1.9)/(3.83) is replaced by the purely Oseen term \(\eta_{b}\cdot\nabla u\), see (3.91). This is why the other drift term \(u_{b}\cdot\nabla u\) must additionally be involved in the right linearization.
If the shape of the body is arbitrary, we see from (1.6) that the corresponding drift term is given by \((u_{b}-\eta_{b}-\omega_{b}\times x)\cdot\nabla u\), so that the boundary integral from this term vanishes as in the proof of Lemma 3.5._

**Remark 3.3**.: _If we wish to involve the term \((u-\eta)\cdot\nabla u_{b}\) in the linearization as well, we employ_ \[|\langle(u-\eta)\cdot\nabla u_{b},u\rangle_{\Omega}|\leq C\left(\|u_{b}\|_{L^{3,\infty}(\Omega)}+\|u_{b}\|_{2,\Omega}\right)\|\nabla u\|_{2,\Omega}^{2}\] _with the aid of \(|\eta|\leq C\|Du\|_{2,\Omega}\), see for instance Galdi [15, Lemma 4.9], where \(L^{3,\infty}(\Omega)\) stands for the weak-\(L^{3}\) space. Hence the smallness of \(u_{b}\) in \(L^{2}(\Omega)\), which is more restrictive (from the viewpoint of summability at infinity) than (2.33), is needed to ensure the desired energy relation in subsection 4.2. Indeed this is possible under the self-propelling condition, see subsection 2.4, but it does not follow solely from the wake structure. Since we want to develop the theory under the weaker assumption (2.33) on the basic motion, it is better not to put the term \((u-\eta)\cdot\nabla u_{b}\) in the linearization but to treat this term together with the nonlinear term._

**Remark 3.4**.: _If the shape of the body is arbitrary, the corresponding term to the one mentioned in Remark 3.3 is given by \((u-\eta-\omega\times x)\cdot\nabla u_{b}\) in view of (1.6). In order that the desired energy relation is available even if this term is involved in the linearization, we have to require \(|x|u_{b}\in L^{2}(\Omega)\), which is too restrictive. In fact, \((\omega\times x)\cdot\nabla u_{b}\) is the worst term among all linear terms and thus we do need the specific shape already in linear analysis unlike [10]._

**Remark 3.5**.: _Even if \(\nu\cdot u_{*}|_{\partial\Omega}\neq 0\), it would be possible to analyze the stability for small \(\nu\cdot u_{*}\). In this case the equations for the rigid body in (1.1) are slightly different and, therefore, the operator \(B(t)\) given by (2.37) consists of more terms and it is no longer skew-symmetric; nevertheless, it can be controlled by the dissipation as long as \(\nu\cdot u_{*}\) is small enough in a sense. This issue will be discussed elsewhere._

Let us consider the auxiliary initial value problem \[\frac{dW}{d\tau}+L_{-}(t-\tau)W=0,\quad\tau\in(s,\infty);\qquad W(s)=V_{0}, \tag{3.92}\] where \(t\in\mathbb{R}\) is a parameter involved in the coefficient operator. By exactly the same argument as in subsections 3.4 and 3.5, we see that the operator family \(\{L_{-}(t-\tau);\,\tau\in\mathbb{R}\}\) generates an evolution operator \(\{\widetilde{T}(\tau,s;\,t);\,-\infty<s\leq\tau<\infty\}\) on \(X_{q}(\mathbb{R}^{3})\) for every \(q\in(1,\infty)\) and that it satisfies smoothing estimates similar to (2.51)-(2.52), in which the constants \(C=C(\tau_{*})\) are taken uniformly in \((\tau,s)\) with \(\tau-s\in(0,\tau_{*}]\) for given \(\tau_{*}>0\) and do not depend on \(t\in\mathbb{R}\). This implies (3.98) below on the evolution operator defined by \[S(t,s):=\widetilde{T}(t-s,0;\,t),\qquad-\infty<s\leq t<\infty, \tag{3.93}\] which coincides with the adjoint of \(T(t,s)\), see (3.95) in the next lemma. For every \(V_{0}\in X_{q}(\mathbb{R}^{3})\), the function \(V(s)=S(t,s)V_{0}\) is a solution to the backward problem (3.84), that is, \[-\partial_{s}S(t,s)+L_{-}(s)S(t,s)=0,\quad s\in(-\infty,t);\qquad S(t,t)=I \tag{3.94}\] in \(\mathcal{L}(X_{q}(\mathbb{R}^{3}))\).
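For the reader's convenience, we record the elementary chain-rule computation behind (3.94); it is nothing but a reformulation of (3.92)-(3.93): \[\partial_{s}S(t,s)V_{0}=\partial_{s}\big{[}\widetilde{T}(t-s,0;\,t)V_{0}\big{]}=-\big{(}\partial_{\tau}\widetilde{T}(\tau,0;\,t)V_{0}\big{)}\big{|}_{\tau=t-s}=L_{-}(t-\tau)\widetilde{T}(\tau,0;\,t)V_{0}\big{|}_{\tau=t-s}=L_{-}(s)S(t,s)V_{0},\] together with \(S(t,t)=\widetilde{T}(0,0;\,t)=I\).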
As we would expect, the following duality relation (3.95) holds true. Note that the latter assertion (3.96) does not follow directly from (3.93).

**Lemma 3.6**.: _Let \(1<q<\infty\), then_ \[T(t,s)^{*}=S(t,s),\qquad S(t,s)^{*}=T(t,s)\qquad\text{in }\mathcal{L}(X_{q}(\mathbb{R}^{3})) \tag{3.95}\] _for all \((t,s)\) with \(-\infty<s\leq t<\infty\) in the sense of (3.97) below. Moreover, we have the backward semigroup property_ \[S(\tau,s)S(t,\tau)=S(t,s)\qquad(-\infty<s\leq\tau\leq t<\infty) \tag{3.96}\] _in \(\mathcal{L}(X_{q}(\mathbb{R}^{3}))\)._

Proof.: We fix \((t,s)\) with \(-\infty<s<t<\infty\). For every \(\tau\in(s,t)\), it follows from (2.49), (3.85) and (3.94) that \[\partial_{\tau}\langle T(\tau,s)F,S(t,\tau)G\rangle_{\mathbb{R}^{3},\rho}\] \[=\langle-L_{+}(\tau)T(\tau,s)F,S(t,\tau)G\rangle_{\mathbb{R}^{3},\rho}+\langle T(\tau,s)F,L_{-}(\tau)S(t,\tau)G\rangle_{\mathbb{R}^{3},\rho}=0\] for all \(F\in X_{q}(\mathbb{R}^{3})\) and \(G\in X_{q^{\prime}}(\mathbb{R}^{3})\), which implies \[\langle T(t,s)F,G\rangle_{\mathbb{R}^{3},\rho}=\langle F,S(t,s)G\rangle_{\mathbb{R}^{3},\rho} \tag{3.97}\] yielding (3.95). Then the semigroup property of \(T(t,s)\) leads to (3.96), which completes the proof.

In the following proposition we provide the estimate of the associated pressure as well as \[\|\nabla^{j}T(t,s)^{*}G\|_{r,\mathbb{R}^{3}}\leq C(t-s)^{-j/2-(3/q-3/r)/2}\|G\|_{q,\mathbb{R}^{3}} \tag{3.98}\] near \(s=t\). Indeed (3.98) with \(j=0\) and \(q\), \(r\in(1,\infty)\) follows by duality, but the other cases are obtained via estimates of \(\widetilde{T}(\tau,s;t)\), see (3.93). The proofs of the estimates of the pressure and \(\partial_{s}T(t,s)^{*}\) are exactly the same as in Proposition 3.6 and may be omitted.

**Proposition 3.7**.: _Suppose (2.33) and (2.34). Let_ \[\begin{array}{ll}1<q<\infty,\,\,\,q\leq r\leq\infty&\text{for }j=0,\\ 1<q\leq r<\infty&\text{for }j=1.\end{array}\] _Given \(\tau_{*}\), \(\alpha_{0}\), \(\beta_{0}\in(0,\infty)\), estimate (3.98) holds with some constant \(C=C(j,q,r,\tau_{*},\alpha_{0},\beta_{0},\theta)>0\) for all \((t,s)\) with \(t-s\in(0,\tau_{*}]\) and \(G\in X_{q}(\mathbb{R}^{3})\) whenever \(\|U_{b}\|\leq\alpha_{0}\) and \([U_{b}]_{\theta}\leq\beta_{0}\), where \(\|U_{b}\|\) and \([U_{b}]_{\theta}\) are given by (2.35)._ _Moreover, for every \(\gamma\in\big{(}(1+1/q)/2,\,1\big{)}\), there is a constant \(C=C(\gamma,q,\tau_{*},\alpha_{0},\beta_{0},\theta)>0\) such that_ \[\|p_{v}(s)\|_{q,\Omega_{3}}+\|\partial_{s}T(t,s)^{*}G\|_{W^{-1,q}(\Omega_{3})}\leq C(t-s)^{-\gamma}\|G\|_{q,\mathbb{R}^{3}} \tag{3.99}\] _for all \((t,s)\) with \(t-s\in(0,\tau_{*}]\) and \(G\in X_{q}(\mathbb{R}^{3})\) whenever \(\|U_{b}\|\leq\alpha_{0}\) and \([U_{b}]_{\theta}\leq\beta_{0}\), where \(p_{v}(s)\) denotes the pressure associated with \(v(s)=T(t,s)^{*}G\) and it is singled out in such a way that \(\int_{\Omega_{3}}p_{v}(s)\,dy=0\) for each \(s\in(-\infty,t)\)._

## Decay estimates of the evolution operator

In this section we study the large time behavior of the Oseen-structure evolution operator. Our argument is based on the following two ingredients: one is the decay estimates of the related evolution operator without the rigid body; the other is a consequence of the energy relation. The former is discussed in subsection 4.1, whereas the latter is deduced in subsection 4.2. Then the proof of (2.51) (except for the \(L^{\infty}\)-estimate) is given in subsection 4.3.
Toward the gradient estimate (as well as the \(L^{\infty}\)-estimate), the local energy decay property is established in subsection 4.4, and subsequently the large time behavior near spatial infinity is studied in subsection 4.5. The final subsection is devoted to the completion of the linear theory.

### Evolution operator without rigid body

In this subsection we consider the initial value problem for the fluid equation relating to (1.9) in the whole space without the rigid body, that is, \[\partial_{t}u=\Delta u+(\eta_{b}(t)-U_{b}(t))\cdot\nabla u-\nabla p,\quad\mbox{div }u=0,\quad(x,t)\in\mathbb{R}^{3}\times(s,\infty),\] \[u\to 0\quad\mbox{as }|x|\to\infty, \tag{4.1}\] \[u(\cdot,s)=\psi,\] and the associated backward problem for the adjoint system \[-\partial_{s}v=\Delta v-(\eta_{b}(s)-U_{b}(s))\cdot\nabla v+\nabla p_{v},\quad\mbox{div }v=0,\quad(y,s)\in\mathbb{R}^{3}\times(-\infty,t),\] \[v\to 0\quad\mbox{as }|y|\to\infty, \tag{4.2}\] \[v(\cdot,t)=\phi,\] where the initial and final velocities are taken from the space \(L^{q}_{\sigma}(\mathbb{R}^{3})\) with some \(q\in(1,\infty)\), see (2.21), while \(U_{b}\) and \(\eta_{b}\) are as in (2.32)-(2.34). The family \(\{L_{0,\pm}(t);\,t\in\mathbb{R}\}\) of the modified Oseen operators on \(L^{q}_{\sigma}(\mathbb{R}^{3})\) is given by \[D_{q}(L_{0,\pm}(t))=L^{q}_{\sigma}(\mathbb{R}^{3})\cap W^{2,q}(\mathbb{R}^{3}), \tag{4.3}\] \[L_{0,\pm}(t)u=-\mathbb{P}_{0}\big{[}\Delta u\pm\big{(}\eta_{b}(t)-U_{b}(t)\big{)}\cdot\nabla u\big{]},\] where \(\mathbb{P}_{0}\) denotes the classical Fujita-Kato projection, see (2.22). Then the same arguments as in subsections 3.4 and 3.7 show that the operator families \(\{L_{0,+}(t);\,t\in\mathbb{R}\}\) and \(\{L_{0,-}(t-\tau);\,\tau\in\mathbb{R}\}\) generate parabolic evolution operators \(\{T_{0}(t,s);\,-\infty<s\leq t<\infty\}\) and \(\{\widetilde{T}_{0}(\tau,s;\,t);\,-\infty<s\leq\tau<\infty\}\), respectively, on \(L^{q}_{\sigma}(\mathbb{R}^{3})\) for every \(q\in(1,\infty)\) and that the duality relation between the operators \(L_{0,\pm}(t)\) as in Lemma 3.5 implies that \(T_{0}(t,s)^{*}=\widetilde{T}_{0}(t-s,0;\,t)\) is the solution operator to the backward problem (4.2). Let \(q\in(1,\infty)\) and \(\tau_{*},\,\alpha_{0},\,\beta_{0}\in(0,\infty)\). On account of the global Hölder condition (2.34), there is a constant \(C=C(q,\tau_{*},\alpha_{0},\beta_{0},\theta)>0\) such that \[\|\nabla T_{0}(t,s)\psi\|_{q,\mathbb{R}^{3}}\leq C(t-s)^{-1/2}\|\psi\|_{q,\mathbb{R}^{3}} \tag{4.4}\] \[\|\nabla T_{0}(t,s)^{*}\phi\|_{q,\mathbb{R}^{3}}\leq C(t-s)^{-1/2}\|\phi\|_{q,\mathbb{R}^{3}} \tag{4.5}\] \[\|T_{0}(t,s)^{*}\mathbb{P}_{0}\mbox{div }\Phi\|_{q,\mathbb{R}^{3}}\leq C(t-s)^{-1/2}\|\mathbb{P}_{0}\|_{\mathcal{L}(L^{q}(\mathbb{R}^{3}))}\|\Phi\|_{q,\mathbb{R}^{3}} \tag{4.6}\] for all \((t,s)\) with \(t-s\in(0,\tau_{*}]\), \(\psi\), \(\phi\in L^{q}_{\sigma}(\mathbb{R}^{3})\) and \(\Phi\in L^{q}(\mathbb{R}^{3})^{3\times 3}\) with \(\mbox{div }\Phi\in\cup_{1<\sigma<\infty}L^{\sigma}(\mathbb{R}^{3})\) as long as \(\|U_{b}\|\leq\alpha_{0}\) and \([U_{b}]_{\theta}\leq\beta_{0}\), see (2.35), by the same argument as in Proposition 3.4 and by duality as to (4.6). Our task here is to deduce the large time behavior of \(T_{0}(t,s)\) and \(T_{0}(t,s)^{*}\). This is the stage in which the smallness of \(\|U_{b}\|\) as well as the condition \(q_{0}<3\) in (2.33) is needed.
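Let us indicate, in a minimal form, where the condition \(q_{0}<3\) will enter: it guarantees integrability in time of the perturbation term, see the splitting below (4.20). Indeed, \[\int_{1}^{\infty}\tau^{-1/2-3/2q_{0}}\,d\tau<\infty\iff\frac{1}{2}+\frac{3}{2q_{0}}>1\iff q_{0}<3.\]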
The idea is to regard the problem (4.1) as a perturbation of the Oseen evolution operator \[\big{(}E(t,s)f\big{)}(x)=\int_{\mathbb{R}^{3}}G\left(x-y+\int_{s}^{t}\eta_{b}(\sigma)\,d\sigma,\;t-s\right)f(y)\,dy \tag{4.7}\] which solves the non-autonomous Oseen initial problem \[\begin{array}{l}\partial_{t}u=\Delta u+\eta_{b}(t)\cdot\nabla u-\nabla p,\quad\mbox{div }u=0,\quad(x,t)\in\mathbb{R}^{3}\times(s,\infty),\\ u\to 0\quad\mbox{as }|x|\to\infty,\\ u(\cdot,s)=f,\end{array} \tag{4.8}\] provided that \(f\) is a solenoidal vector field, where \[G(x,t)=(4\pi t)^{-3/2}e^{-|x|^{2}/4t}\] so that the heat semigroup is given by \[e^{t\Delta}f=G(t)*f\] with \(*\) being the convolution. Likewise, the problem (4.2) is viewed as a perturbation of \[\big{(}E(t,s)^{*}g\big{)}(y)=\int_{\mathbb{R}^{3}}G\left(x-y+\int_{s}^{t}\eta_{b}(\sigma)\,d\sigma,\;t-s\right)g(x)\,dx. \tag{4.9}\] The heat semigroup enjoys the \(L^{q}\)-\(L^{r}\) estimates (\(q\leq r\)) \[\|e^{t\Delta}f\|_{r,\mathbb{R}^{3}}\leq(4\pi t)^{-(3/q-3/r)/2}\|f\|_{q,\mathbb{R}^{3}} \tag{4.10}\] \[\|\nabla e^{t\Delta}f\|_{r,\mathbb{R}^{3}}\leq 4(2\pi t)^{-1/2-(3/q-3/r)/2}\|f\|_{q,\mathbb{R}^{3}} \tag{4.11}\] \[\|e^{t\Delta}\mathbb{P}_{0}\mbox{div }F\|_{r,\mathbb{R}^{3}}\leq 4(2\pi t)^{-1/2-(3/q-3/r)/2}\|\mathbb{P}_{0}\|_{\mathcal{L}(L^{r}(\mathbb{R}^{3}))}\|F\|_{q,\mathbb{R}^{3}} \tag{4.12}\] for all \(t>0\), \(f\in L^{q}(\mathbb{R}^{3})\) and \(F\in L^{q}(\mathbb{R}^{3})^{3\times 3}\) with \(\mbox{div }F\in\cup_{1<\sigma<\infty}L^{\sigma}(\mathbb{R}^{3})\). In fact, (4.11) follows from \(\|\nabla G(t)\|_{1,\mathbb{R}^{3}}=4(4\pi t)^{-1/2}\) and (4.10). As described in the right-hand sides of (4.10)-(4.11), the constants can be taken uniformly in \(q\) and \(r\) since those for the case \(q=r\) give upper bounds (although the upper bound \(\sqrt{8/\pi}\) for (4.11) is not sharp). This will be taken into account in the proof of the following proposition; for instance, the specific constant \(c_{0}\) in (4.20) below is independent of \(q\). Gradient estimate (4.11) implies (4.12) by duality and by using \(\mathbb{P}_{0}e^{t\Delta}=e^{t\Delta}\mathbb{P}_{0}\), where the end-point case (\(r=1\), \(\infty\)) is missing (although \(q=1\) is allowed); indeed, this case can actually be covered and the additional condition \(\mbox{div }F\in\cup_{1<\sigma<\infty}L^{\sigma}(\mathbb{R}^{3})\) is redundant if one makes use of the estimate of the kernel function of the composite operator \(e^{t\Delta}\mathbb{P}_{0}\,\mbox{div}\) as in [39]; however, this is not useful here because we wish to replace \(e^{t\Delta}\) by the Oseen evolution operators (4.7) and (4.9). We also use the following fact: for every \(r_{1},\,r_{2}\) with \(1<r_{1}<r_{2}<\infty\), there is a constant \(C(r_{1},r_{2})>0\) independent of \(q\in[r_{1},r_{2}]\) such that the Riesz transform satisfies \[\sup_{r_{1}\leq q\leq r_{2}}\|\mathcal{R}\|_{\mathcal{L}(L^{q}(\mathbb{R}^{3}))}\leq C(r_{1},r_{2}), \tag{4.13}\] which follows simply from the Riesz-Thorin theorem; thereby, the Fujita-Kato projection \(\mathbb{P}_{0}=\mathcal{I}+\mathcal{R}\otimes\mathcal{R}\) possesses the same property. It is obvious that both evolution operators \(E(t,s)\) and \(E(t,s)^{*}\) fulfill the same estimates as in (4.10)-(4.12) with the same constants, in the right-hand sides of which \(t\) is of course replaced by \(t-s\).
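For completeness, we sketch the standard computation behind (4.10), which also shows that the constant is uniform in \(q\) and \(r\). With \(1+1/r=1/p+1/q\), Young's inequality for convolutions and a direct evaluation of the Gaussian integral give \[\|e^{t\Delta}f\|_{r,\mathbb{R}^{3}}\leq\|G(t)\|_{p,\mathbb{R}^{3}}\|f\|_{q,\mathbb{R}^{3}},\qquad\|G(t)\|_{p,\mathbb{R}^{3}}=(4\pi t)^{-3/2}\left(\frac{4\pi t}{p}\right)^{3/2p}\leq(4\pi t)^{-(3/q-3/r)/2}\] on account of \(p^{-3/2p}\leq 1\) and \(1-1/p=1/q-1/r\).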
The following proposition provides us with (4.14) for \(1<q\leq r\leq\infty\) (\(q\neq\infty\)) and (4.15) for \(1<q\leq r<\infty\) when the basic motion \(U_{b}\) is small enough; however, the smallness is not uniform near the end-point \(q=1\). The smallness of the basic motion, together with the condition \(q_{0}<3\) in (2.33), is needed for the development of the linear analysis merely at the present stage. In fact, the small constant \(\alpha_{2}\) in Theorem 2.1 is determined by the following proposition.

**Proposition 4.1**.: _Suppose (2.33) and (2.34). Given \(\beta_{0}>0\), assume that \([U_{b}]_{\theta}\leq\beta_{0}\). Given \(r_{1}\in(1,4/3]\), there exist constants \(\alpha_{2}(r_{1},q_{0})>0\) and \(C=C(q,r,\alpha_{2},\beta_{0},\theta)>0\) such that if \(\|U_{b}\|\leq\alpha_{2}\), then the evolution operator \(T_{0}(t,s)\) and the pressure \(p(t)\) associated with \(u(t)=T_{0}(t,s)\psi\) enjoy_ \[\|T_{0}(t,s)\psi\|_{r,\mathbb{R}^{3}}\leq C(t-s)^{-(3/q-3/r)/2}\|\psi\|_{q,\mathbb{R}^{3}} \tag{4.14}\] \[\|\nabla T_{0}(t,s)\psi\|_{r,\mathbb{R}^{3}}\leq C(t-s)^{-1/2-(3/q-3/r)/2}\|\psi\|_{q,\mathbb{R}^{3}} \tag{4.15}\] \[\|\nabla p(t)\|_{r,\mathbb{R}^{3}}\leq C(t-s)^{-1/2-(3/q-3/r)/2}\|\psi\|_{q,\mathbb{R}^{3}} \tag{4.16}\] _for all \((t,s)\) with \(t>s\) and \(\psi\in L^{q}_{\sigma}(\mathbb{R}^{3})\) as long as_ \[\begin{array}{l}r_{1}\leq q<\infty,\quad q\leq r\leq\infty\qquad\quad\text{for (4.14)},\\ r_{1}\leq q\leq r<\infty\qquad\qquad\quad\text{for (4.15)-(4.16)},\end{array}\] _where \(\|U_{b}\|\) is given by (2.35)._ _The same assertions hold true for the adjoint \(T_{0}(t,s)^{*}\) and the associated pressure \(p_{v}(s)\) to (4.2) under the same smallness condition on \(\|U_{b}\|\) as above._

Proof.: The solution \(u(t)=T_{0}(t,s)\psi\) of (4.1) with \(\psi\in L^{q}_{\sigma}(\mathbb{R}^{3})\) obeys \[u(t)=E(t,s)\psi-\int_{s}^{t}E(t,\tau)\mathbb{P}_{0}(U_{b}\cdot\nabla u)(\tau)\,d\tau. \tag{4.17}\] Let \(-\infty<s<t<\infty\) and set \[M(t,s):=\sup_{\tau\in(s,t)}(\tau-s)^{1/2}\|\nabla u(\tau)\|_{q,\mathbb{R}^{3}}.\] From (4.4) we know \(M(t,s)<\infty\) and \[M(t,s)\leq C\|\psi\|_{q,\mathbb{R}^{3}}\qquad\text{for $t-s\leq 2$}.\] We are going to show that \[M(t,s)\leq C\|\psi\|_{q,\mathbb{R}^{3}}\qquad\text{for $t-s>2$} \tag{4.18}\] with some constant \(C>0\) independent of such \((t,s)\) when \(\|U_{b}\|\) is small enough. Let \(t-s>2\). Since \(u_{b}\in L^{\infty}(\Omega)\), one may assume that \(q_{0}\in[8/3,3)\) in (2.33). Then we have \(U_{b}(t)\in X_{q_{0}}(\mathbb{R}^{3})\cap X_{6}(\mathbb{R}^{3})\) with \[\|U_{b}(t)\|_{q_{0},\mathbb{R}^{3}}+\|U_{b}(t)\|_{6,\mathbb{R}^{3}}\leq C\|U_{b}\|\] by (2.35). The following argument is nowadays standard and can be traced back to Chen [4] in the nonlinear context (where the case \(q=3\) is important), but it works merely for \(q\in(3/2,\infty)\). We thus consider the case \(q\in[2,\infty)\) so that \[\frac{1}{q}+\frac{1}{q_{0}}\leq\frac{7}{8} \tag{4.19}\] on account of \(q_{0}\in[8/3,3)\) and, given \(r_{1}\in(1,4/3]\), the other case \(q\in[r_{1},2]\) will be discussed later by duality.
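The restriction (4.19) is pure arithmetic, recorded here for clarity: since \(q\geq 2\) and \(q_{0}\geq 8/3\), \[\frac{1}{q}+\frac{1}{q_{0}}\leq\frac{1}{2}+\frac{3}{8}=\frac{7}{8}.\]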
We use (4.17) and apply (4.11) in which \(e^{t\Delta}\) is replaced by \(E(t,s)\) to find \[\|\nabla u(t)\|_{q,\mathbb{R}^{3}} \leq C(t-s)^{-1/2}\|\psi\|_{q,\mathbb{R}^{3}} \tag{4.20}\] \[\quad+C\int_{s}^{t-1}(t-\tau)^{-1/2-3/2q_{0}}\|U_{b}(\tau)\|_{q_{0},\mathbb{R}^{3}}\|\nabla u(\tau)\|_{q,\mathbb{R}^{3}}\,d\tau\] \[\quad+C\int_{t-1}^{t}(t-\tau)^{-3/4}\|U_{b}(\tau)\|_{6,\mathbb{R}^{3}}\|\nabla u(\tau)\|_{q,\mathbb{R}^{3}}\,d\tau\] \[\leq C(t-s)^{-1/2}\|\psi\|_{q,\mathbb{R}^{3}}+c_{0}\|U_{b}\|(t-s)^{-1/2}M(t,s)\] with some constant \(c_{0}=c_{0}(q_{0})>0\), which involves \(\sup_{8/7\leq r\leq 6}\|\mathbb{P}_{0}\|_{\mathcal{L}(L^{r}(\mathbb{R}^{3}))}\), see (4.13) and (4.19), and is independent of \(q\in[2,\infty]\). This readily follows by splitting the former integral into \(\int_{s}^{(t+s)/2}+\int_{(t+s)/2}^{t-1}\) and by using \(q_{0}<3\), see (2.33); in fact, \[\int_{s}^{(t+s)/2} \leq\|U_{b}\|M(t,s)\left(\frac{t-s}{2}\right)^{-1/2-3/2q_{0}}\int_{s}^{(t+s)/2}(\tau-s)^{-1/2}\,d\tau,\] \[\int_{(t+s)/2}^{t-1} \leq\|U_{b}\|M(t,s)\left(\frac{t-s}{2}\right)^{-1/2}\int_{1}^{\infty}\tau^{-1/2-3/2q_{0}}\,d\tau.\] As a consequence, we obtain \[M(t,s)\leq C\|\psi\|_{q,\mathbb{R}^{3}}+c_{0}\|U_{b}\|M(t,s)\] for all \((t,s)\) with \(t-s>2\), which implies (4.18) and, therefore, \[\|\nabla T_{0}(t,s)\psi\|_{q,\mathbb{R}^{3}}\leq C(t-s)^{-1/2}\|\psi\|_{q,\mathbb{R}^{3}} \tag{4.21}\] for all \(t>s\), \(\psi\in L^{q}_{\sigma}(\mathbb{R}^{3})\) and \(q\in[2,\infty)\) when \(\|U_{b}\|\leq 1/2c_{0}\). Let \(1<r_{1}\leq 4/3\). We turn to the case \(q\in[r_{1},2]\), so that \(q^{\prime}\in[2,r_{1}^{\prime}]\). To this end, by use of \(\mathrm{div}\ U_{b}=0\), we consider the solution \(v(s)=T_{0}(t,s)^{*}\phi\) to (4.2) in the form \[v(s)=E(t,s)^{*}\phi+\int_{s}^{t}E(\tau,s)^{*}\mathbb{P}_{0}\,\mathrm{div}\ (v\otimes U_{b})(\tau)\,d\tau\] with the final velocity field of the specific form \(\phi=\mathbb{P}_{0}\mathrm{div}\ \Phi\), where \(\Phi\in C_{0}^{\infty}(\mathbb{R}^{3})^{3\times 3}\). Set \[\widetilde{M}(t,s):=\sup_{\tau\in(s,t)}(t-\tau)^{1/2}\|v(\tau)\|_{q^{\prime},\mathbb{R}^{3}}\] which is finite and \[\widetilde{M}(t,s)\leq C\|\Phi\|_{q^{\prime},\mathbb{R}^{3}}\qquad\text{for }t-s\leq 2\] on account of (4.6). Let \(t-s>2\). Applying (4.12) in which \(e^{t\Delta}\) is replaced by \(E(t,s)^{*}\), we find \[\|v(s)\|_{q^{\prime},\mathbb{R}^{3}}\leq C(t-s)^{-1/2}\|\Phi\|_{q^{\prime},\mathbb{R}^{3}}+c_{0}^{\prime}\|U_{b}\|(t-s)^{-1/2}\widetilde{M}(t,s)\] with some constant \(c_{0}^{\prime}=c_{0}^{\prime}(r_{1},q_{0})>0\), which involves \(\sup_{2\leq r\leq r_{1}^{\prime}}\|\mathbb{P}_{0}\|_{\mathcal{L}(L^{r}(\mathbb{R}^{3}))}\) and does not depend on \(q^{\prime}\in[2,r_{1}^{\prime}]\), by the same splitting of the integral as in (4.20). Note that \(c_{0}^{\prime}\) increases to \(\infty\) as \(r_{1}\to 1\). We thus obtain \[\|T_{0}(t,s)^{*}\mathbb{P}_{0}\mathrm{div}\ \Phi\|_{q^{\prime},\mathbb{R}^{3}}\leq C(t-s)^{-1/2}\|\Phi\|_{q^{\prime},\mathbb{R}^{3}} \tag{4.22}\] for all \(t>s\), \(\Phi\in C_{0}^{\infty}(\mathbb{R}^{3})^{3\times 3}\) and \(q^{\prime}\in[2,r_{1}^{\prime}]\) when \(\|U_{b}\|\leq 1/2c_{0}^{\prime}\). By continuity the composite operator \(T_{0}(t,s)^{*}\mathbb{P}_{0}\mathrm{div}\) extends to a bounded operator on \(L^{q^{\prime}}(\mathbb{R}^{3})^{3\times 3}\) with (4.22). By duality, we are led to (4.21) for \(q\in[r_{1},2]\) as well under the same smallness condition.
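The duality step just mentioned can be made explicit; the following is only a sketch. For \(\psi\in L^{q}_{\sigma}(\mathbb{R}^{3})\) and \(\Phi\in C_{0}^{\infty}(\mathbb{R}^{3})^{3\times 3}\), \[\langle\nabla T_{0}(t,s)\psi,\Phi\rangle_{\mathbb{R}^{3}}=-\langle T_{0}(t,s)\psi,\mathrm{div}\ \Phi\rangle_{\mathbb{R}^{3}}=-\langle T_{0}(t,s)\psi,\mathbb{P}_{0}\mathrm{div}\ \Phi\rangle_{\mathbb{R}^{3}}=-\langle\psi,T_{0}(t,s)^{*}\mathbb{P}_{0}\mathrm{div}\ \Phi\rangle_{\mathbb{R}^{3}},\] so that (4.22) together with the Hölder inequality yields \(\|\nabla T_{0}(t,s)\psi\|_{q,\mathbb{R}^{3}}\leq C(t-s)^{-1/2}\|\psi\|_{q,\mathbb{R}^{3}}\), that is, (4.21) for \(q\in[r_{1},2]\).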
Set \[\alpha_{2}(r_{1},q_{0}):=\min\left\{\frac{1}{2c_{0}(q_{0})},\ \frac{1}{2c_{0}^{\prime}(r_{1},q_{0})}\right\}, \tag{4.23}\] then we furnish (4.21) for all \(t>s\) and \(q\in[r_{1},\infty)\) when \(\|U_{b}\|\leq\alpha_{2}\). By the aforementioned dependence of \(c_{0}^{\prime}\) on \(r_{1}\), we see that \(\alpha_{2}\) decreases to zero as \(r_{1}\to 1\). We continue to assume this smallness of \(\|U_{b}\|\). We then use (4.17) and (4.21) to see immediately that \[\begin{split}\|u(t)\|_{q,\mathbb{R}^{3}}&\leq C\|\psi\|_{q,\mathbb{R}^{3}}+C\int_{s}^{t}(t-\tau)^{-1/2}\|U_{b}(\tau)\|_{3,\mathbb{R}^{3}}\|\nabla u(\tau)\|_{q,\mathbb{R}^{3}}\,d\tau\\ &\leq C(1+\|U_{b}\|)\|\psi\|_{q,\mathbb{R}^{3}}\end{split} \tag{4.24}\] yielding (4.14) with \(r=q\in[r_{1},\infty)\) for all \(t>s\). By the interpolation inequality together with (4.24) as well as (4.21) for \(q\in[r_{1},\infty)\), we conclude (4.14) for \(r\in[q,\infty]\). Then, (4.15) for \(r_{1}\leq q\leq r<\infty\) follows from (4.14) and (4.21) for \(q\in[r_{1},\infty)\) with the aid of the semigroup property (2.47). By (4.1) we have \[-\Delta p=\mathrm{div}\ (U_{b}\cdot\nabla u)\] in \(\mathbb{R}^{3}\), which leads us to \[\nabla p=(\mathcal{R}\otimes\mathcal{R})(U_{b}\cdot\nabla u),\] where \(\mathcal{R}\) denotes the Riesz transform. From (2.35) and (4.13) it follows that \[\|\nabla p(t)\|_{r,\mathbb{R}^{3}}\leq C\|U_{b}\|\|\nabla T_{0}(t,s)\psi\|_{r,\mathbb{R}^{3}}\] which combined with (4.15) concludes (4.16). Finally, one can deduce the same estimates of \(T_{0}(t,s)^{*}\) and \(T_{0}(t,s)\mathbb{P}_{0}\mathrm{div}\) as in (4.21)-(4.22) to conclude the same result for the adjoint \(T_{0}(t,s)^{*}\). The proof is complete.

### Useful estimates from energy relations

By the dissipative structure (3.86) along with (2.49) we find \[\frac{1}{2}\,\partial_{t}\|T(t,s)F\|^{2}_{X_{2}(\mathbb{R}^{3})}+2\|DT(t,s)F\|^{2}_{2,\Omega}=0 \tag{4.25}\] for every \(F\in X_{2}(\mathbb{R}^{3})\). Recall that the energy (2.13) is written as \[\|T(t,s)F\|^{2}_{X_{2}(\mathbb{R}^{3})}=\|u(t)\|^{2}_{2,\Omega}+m|\eta(t)|^{2}+\frac{2m}{5}|\omega(t)|^{2}\] where \((u,\eta,\omega)=i(U)\) with \(U=T(t,s)F\), see (2.7). The \(L^{2}\)-norm we should adopt is not the usual one but the norm above, so as to describe exactly the energy relation (4.25). Likewise, we use (3.94) to observe \[\frac{1}{2}\,\partial_{s}\|T(t,s)^{*}G\|^{2}_{X_{2}(\mathbb{R}^{3})}=2\|DT(t,s)^{*}G\|^{2}_{2,\Omega} \tag{4.26}\] for every \(G\in X_{2}(\mathbb{R}^{3})\), where \[\|T(t,s)^{*}G\|^{2}_{X_{2}(\mathbb{R}^{3})}=\|v(s)\|^{2}_{2,\Omega}+m|\eta_{v}(s)|^{2}+\frac{2m}{5}|\omega_{v}(s)|^{2}\] where \((v,\eta_{v},\omega_{v})=i(V)\) with \(V=T(t,s)^{*}G\). This subsection shows that the energy relations (4.25)-(4.26) imply key inequalities (4.27)-(4.28) below, which are very useful in the next subsection, see (4.48) and (4.50).

**Proposition 4.2**.: _Suppose (2.33) and (2.34). Then there is a constant \(C>0\) such that_ \[\int_{\sigma}^{t}\|T(\tau,s)F\|^{2}_{2,\Omega_{3}}\,d\tau\leq C\|T(\sigma,s)F\|^{2}_{2,\mathbb{R}^{3}} \tag{4.27}\] \[\int_{s}^{\sigma}\|T(t,\tau)^{*}G\|^{2}_{2,\Omega_{3}}\,d\tau\leq C\|T(t,\sigma)^{*}G\|^{2}_{2,\mathbb{R}^{3}} \tag{4.28}\] _for all \(F,\,G\in X_{2}(\mathbb{R}^{3})\) and \(s<\sigma<t\)._

Proof.: The proof is easy, but we briefly show (4.28). Fix \((t,s)\) with \(s<t\) and set \(V(\tau)=T(t,\tau)^{*}G\) for \(\tau\in(s,t)\); then we observe \[\|V(\tau)\|_{2,\Omega_{3}}\leq C\|V(\tau)\|_{6,\mathbb{R}^{3}}\leq C\|\nabla V(\tau)\|_{2,\mathbb{R}^{3}}. \tag{4.29}\]
Since \(V\) is solenoidal, we have \(\Delta V=\text{div }(2DV)\) in \(\mathbb{R}^{3}\), which together with \(V|_{B}\in\text{RM}\) yields \[\|\nabla V\|^{2}_{2,\mathbb{R}^{3}}=2\|Dv\|^{2}_{2,\Omega}, \tag{4.30}\] where \(v=V|_{\Omega}\). By (4.29) and (4.30) we are led to \[C\|V(\tau)\|^{2}_{2,\Omega_{3}}\leq 2\|Dv(\tau)\|^{2}_{2,\Omega}. \tag{4.31}\] We now employ the energy relation (4.26) and (2.12) to conclude (4.28) for every \(\sigma\in(s,t)\), where \(C>0\) is dependent only on \(\rho\). The other estimate (4.27) is proved similarly. The proof is complete.

### Proof of (2.51) except for the case \(r=\infty\)

This subsection is devoted to the proof of (2.51) for all \(t>s\) when \(1<q\leq r<\infty\). This can be proved simultaneously with (3.98) for all \(t>s\). The \(L^{\infty}\)-estimate will be studied in subsections 4.4 and 4.5. The essential step is to show the uniform boundedness of both \(T(t,s)\) and \(T(t,s)^{*}\) for every \(r\in(2,\infty)\) and all \((t,s)\) with \(t>s\): \[\|T(t,s)F\|_{r,\mathbb{R}^{3}}\leq C\|F\|_{r,\mathbb{R}^{3}}, \tag{4.32}\] \[\|T(t,s)^{*}G\|_{r,\mathbb{R}^{3}}\leq C\|G\|_{r,\mathbb{R}^{3}}. \tag{4.33}\] Since we know (4.32)-(4.33) for \(t-s\leq 3\) by Propositions 3.4 and 3.7, it suffices to derive them for \(t-s>3\). In fact, we have the following lemma.

**Lemma 4.1**.: _Assume (2.33) and (2.34)._

1. _Suppose that, for some \(r_{0}\in(2,\infty)\), estimate (4.32) with \(r=r_{0}\) holds for all \((t,s)\) with \(t>s\) and \(F\in\mathcal{E}(\mathbb{R}^{3})\), see (2.15)._
    (a) _Let \(2\leq q\leq r\leq r_{0}\). Then we have (2.51) for all \((t,s)\) with \(t>s\) and \(F\in X_{q}(\mathbb{R}^{3})\)._
    (b) _Let \(r_{0}^{\prime}\leq q\leq r\leq 2\). Then we have (3.98) with \(j=0\) for all \((t,s)\) with \(t>s\) and \(G\in X_{q}(\mathbb{R}^{3})\)._
2. _Suppose that, for some \(r_{0}\in(2,\infty)\), estimate (4.33) with \(r=r_{0}\) holds for all \((t,s)\) with \(t>s\) and \(G\in\mathcal{E}(\mathbb{R}^{3})\)._
    (a) _Let \(2\leq q\leq r\leq r_{0}\). Then we have (3.98) with \(j=0\) for all \((t,s)\) with \(t>s\) and \(G\in X_{q}(\mathbb{R}^{3})\)._
    (b) _Let \(r_{0}^{\prime}\leq q\leq r\leq 2\). Then we have (2.51) for all \((t,s)\) with \(t>s\) and \(F\in X_{q}(\mathbb{R}^{3})\)._

Proof.: We show (b) of the first item, from which (a) follows by duality and by (2.12). The other item is proved in the same way. The case \(q=r=2\) is obvious by taking into account (2.12)-(2.13) since we have the energy relation (4.26). Fix \(q\in[r_{0}^{\prime},2)\); then we obtain from the assumption together with (4.26) that \[\|T(t,s)^{*}G\|_{q,\mathbb{R}^{3}}\leq C\|G\|_{q,\mathbb{R}^{3}} \tag{4.34}\] for all \(t>s\) and \(G\in X_{q}(\mathbb{R}^{3})\), in which \(\mathcal{E}(\mathbb{R}^{3})\) is dense, see Proposition 3.1. From (4.34) and (4.30) with the aid of the interpolation inequality it follows that \[\|T(t,s)^{*}G\|_{X_{2}(\mathbb{R}^{3})}\leq C\|\nabla T(t,s)^{*}G\|_{2,\mathbb{R}^{3}}^{\mu}\|T(t,s)^{*}G\|_{q,\mathbb{R}^{3}}^{1-\mu}\leq C\|DT(t,s)^{*}G\|_{2,\Omega}^{\mu}\|G\|_{q,\mathbb{R}^{3}}^{1-\mu}\] for all \(G\in\mathcal{E}(\mathbb{R}^{3})\setminus\{0\}\), where \(1/2=\mu/6+(1-\mu)/q\).
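Two elementary observations, added here for convenience, make the remaining step transparent. First, solving \(1/2=\mu/6+(1-\mu)/q\) gives \[\mu=\frac{1/q-1/2}{1/q-1/6},\qquad\frac{\mu}{2(1-\mu)}=\frac{3}{2}\left(\frac{1}{q}-\frac{1}{2}\right).\] Second, if \(y\geq 0\) satisfies \(y^{\prime}(s)\geq c\,y(s)^{1/\mu}\) on \((-\infty,t)\) with \(\mu\in(0,1)\) and \(c>0\), then integrating \(\frac{d}{ds}\,y^{1-1/\mu}\leq(1-1/\mu)c\) from \(s\) to \(t\) and discarding \(y(t)^{1-1/\mu}\geq 0\) leads to \[y(s)\leq\big{\{}(1/\mu-1)c(t-s)\big{\}}^{-\mu/(1-\mu)}.\]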
We fix \(t\in\mathbb{R}\) and combine the interpolation inequality above with (4.26) to find that \(v(s)=T(t,s)^{*}G\) enjoys \[\frac{d}{ds}\|v(s)\|_{X_{2}(\mathbb{R}^{3})}^{2}\geq\frac{C\|v(s)\|_{X_{2}(\mathbb{R}^{3})}^{2/\mu}}{\|G\|_{q,\mathbb{R}^{3}}^{2(1/\mu-1)}}\] for \(s\in(-\infty,t)\), which implies that \[\|v(s)\|_{2,\mathbb{R}^{3}}\leq C\|v(s)\|_{X_{2}(\mathbb{R}^{3})}\leq C(t-s)^{-\frac{\mu}{2(1-\mu)}}\|G\|_{q,\mathbb{R}^{3}}\] with \[\frac{\mu}{2(1-\mu)}=\frac{3}{2}\left(\frac{1}{q}-\frac{1}{2}\right).\] This together with (4.34) leads us to (3.98) with \(j=0\) for \(r_{0}^{\prime}\leq q\leq r\leq 2\). The proof is complete.

**Proposition 4.3**.: _Suppose (2.33) and (2.34). Given \(\beta_{0}>0\), assume that \([U_{b}]_{\theta}\leq\beta_{0}\). There is a constant \(\alpha_{1}=\alpha_{1}(q_{0})\) such that if \(\|U_{b}\|\leq\alpha_{1}\), then (3.98) with \(j=0\) as well as (2.51) holds for all \((t,s)\) with \(t>s\) and \(F,\,G\in X_{q}(\mathbb{R}^{3})\) provided that \(1<q\leq r<\infty\), where the constants \(C\) in those estimates depend on \(q,r,\alpha_{1},\beta_{0},\theta\)._

Proof.: We follow the argument developed by [27, 29]. Set \[\alpha_{1}=\alpha_{1}(q_{0}):=\alpha_{2}(4/3,q_{0}), \tag{4.35}\] where \(\alpha_{2}\) is the constant given in Proposition 4.1, see also (4.23). In what follows we assume \(\|U_{b}\|\leq\alpha_{1}\). Let \(2<r<\infty\). Given \(F\in\mathcal{E}(\mathbb{R}^{3})\), see (2.15), we set \((f,\eta_{f},\omega_{f})=i(F)\), see (2.7). For the proof of (4.32), let us take \(u_{0}(t)=T_{0}(t,s)F\) as the approximation near spatial infinity of \(U(t)=T(t,s)F\). We fix a cut-off function \(\phi\in C_{0}^{\infty}(B_{3})\) as in (3.5). Let \(\mathbb{B}\) be the Bogovskii operator for the domain \(\Omega_{3}\), see (3.75). By (4.14) together with (3.76) we see that \[U_{0}(t):=(1-\phi)u_{0}(t)+\mathbb{B}\left[u_{0}(t)\cdot\nabla\phi\right]\quad\text{with }u_{0}(t)=T_{0}(t,s)F \tag{4.36}\] belongs to \(X_{r}(\mathbb{R}^{3})\) and satisfies \[\|U_{0}(t)\|_{r,\mathbb{R}^{3}}\leq C\|F\|_{r,\mathbb{R}^{3}} \tag{4.37}\] for all \((t,s)\) with \(t>s\) since \(\|U_{b}\|\leq\alpha_{1}\). We fix the pressure \(p(t)\) associated with \(U(t)=T(t,s)F\) and also choose the pressure \(p_{0}(t)\) associated with \(u_{0}(t)=T_{0}(t,s)F\) such that \(\int_{\Omega_{3}}p_{0}(t)\,dx=0\), yielding \[\|p_{0}(t)\|_{r,\Omega_{3}}\leq C\|\nabla p_{0}(t)\|_{r,\Omega_{3}}. \tag{4.38}\] Let us define \(V\) and \(p_{v}\) by \[U(t)=U_{0}(t)+V(t),\qquad p(t)=(1-\phi)p_{0}(t)+p_{v}(t) \tag{4.39}\] and set \((v,\eta,\omega)=i(V)\), see (2.7). Then the rigid motion \(\eta+\omega\times x\) associated with \(V(t)\) is exactly the same as the one determined by \(U(t)=T(t,s)F\) through (2.7) since \(U(t)|_{B}=V(t)|_{B}\).
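We also note that \(U_{0}(t)\) is indeed solenoidal; here we assume that \(\phi=1\) in a neighborhood of \(\overline{B}\), a property of the cut-off (3.5) that is not restated in this section. Since \(u_{0}(t)\) is solenoidal on the whole space, \[\int_{\Omega_{3}}u_{0}(t)\cdot\nabla\phi\,dx=\int_{B_{3}}\mathrm{div}\,\big{(}\phi\,u_{0}(t)\big{)}\,dx-\int_{B}u_{0}(t)\cdot\nabla\phi\,dx=0,\] since \(\phi\) vanishes on \(\partial B_{3}\) while \(\nabla\phi\) vanishes on \(B\). Hence the Bogovskii correction is admissible and \(\mathrm{div}\ U_{0}(t)=-\nabla\phi\cdot u_{0}(t)+u_{0}(t)\cdot\nabla\phi=0\).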
We see that \(v\), \(p_{v}\), \(\eta\) and \(\omega\) obey \[\begin{split}&\partial_{t}v=\Delta v+(\eta_{b}(t)-u_{b}(t))\cdot\nabla v-\nabla p_{v}+H(t),\qquad\text{div }v=0\quad\text{in }\Omega\times(s,\infty),\\ & v|_{\partial\Omega}=\eta+\omega\times x,\qquad v\to 0\quad\text{as }|x|\to\infty,\\ & m\frac{d\eta}{dt}+\int_{\partial\Omega}S(v,p_{v})\nu\,d\sigma=0,\\ & J\frac{d\omega}{dt}+\int_{\partial\Omega}x\times S(v,p_{v})\nu\,d\sigma=0,\\ & v(s)=\phi f-\mathbb{B}[f\cdot\nabla\phi],\quad\eta(s)=\eta_{f},\quad\omega(s)=\omega_{f},\end{split} \tag{4.40}\] where \[\begin{split} H(t)&=-2\nabla\phi\cdot\nabla u_{0}(t)-\left[\Delta\phi+(\eta_{b}(t)-u_{b}(t))\cdot\nabla\phi\right]u_{0}(t)\\ &\quad+(\nabla\phi)p_{0}(t)+\big{\{}-\partial_{t}+\Delta+(\eta_{b}(t)-u_{b}(t))\cdot\nabla\big{\}}\mathbb{B}\left[u_{0}(t)\cdot\nabla\phi\right].\end{split} \tag{4.41}\] By the same symbol \(H(t)\) we denote its extension on \(\mathbb{R}^{3}\) by setting zero outside \(\Omega_{3}\); then \(H(t)\in L^{r}_{R}(\mathbb{R}^{3})\) for every \(r\in(1,\infty)\). Furthermore, we find \[\|H(t)\|_{r,\Omega_{3}}\leq C(t-s)^{-1/2}(1+t-s)^{-3/2r+1/2}\|F\|_{r,\mathbb{R}^{3}} \tag{4.42}\] for every \(r\in[4/3,\infty)\) (we are considering the case \(r>2\)) and all \((t,s)\) with \(t>s\) owing to Proposition 4.1 together with (3.76) since \(\|U_{b}\|\leq\alpha_{1}\). In fact, it follows from (4.16) and (4.38) that \[\|p_{0}(t)\|_{r,\Omega_{3}}\leq\left\{\begin{array}{ll}\|\nabla p_{0}(t)\|_{r,\mathbb{R}^{3}}\leq C(t-s)^{-1/2}\|F\|_{r,\mathbb{R}^{3}}&(t-s\leq 1),\\ \|\nabla p_{0}(t)\|_{\max\{r,3\},\mathbb{R}^{3}}\leq C(t-s)^{-3/2r}\|F\|_{r,\mathbb{R}^{3}}&(t-s>1),\end{array}\right. \tag{4.43}\] and that \[\begin{split}&\|\partial_{t}\mathbb{B}\left[u_{0}(t)\cdot\nabla\phi\right]\|_{r,\Omega_{3}}\\ &=\|\mathbb{B}\left[\{\Delta u_{0}+(\eta_{b}-U_{b})\cdot\nabla u_{0}-\nabla p_{0}\}\cdot\nabla\phi\right]\|_{r,\Omega_{3}}\\ &\leq C\|\{\Delta u_{0}+(\eta_{b}-U_{b})\cdot\nabla u_{0}-\nabla p_{0}\}\cdot\nabla\phi\|_{W^{1,r^{\prime}}(\Omega_{3})^{*}}\\ &\leq C\|\nabla u_{0}(t)\|_{r,\Omega_{3}}+C\|p_{0}(t)\|_{r,\Omega_{3}}\end{split}\] to which one can apply (4.15) and (4.43). The other terms are harmless. In this way, we obtain (4.42). Note that the smallness \(\|U_{b}\|\leq\alpha_{1}\) is needed only for large \((t-s)\), see Proposition 4.1; indeed, we have used (4.14) with \(r=\infty\) and (4.15)-(4.16) with \(r\) replaced by \(\max\{r,3\}\). We formulate the problem (4.40) as \[\frac{dV}{dt}+L_{+}(t)V=\mathbb{P}H(t),\quad t\in(s,\infty);\qquad V(s)=\widetilde{F}\] where \[\widetilde{F}=\big{(}\phi f-\mathbb{B}[f\cdot\nabla\phi]\big{)}\chi_{\Omega}+(\eta_{f}+\omega_{f}\times x)\chi_{B} \tag{4.44}\] whose support is contained in \(B_{3}\) and which belongs to \(X_{q}(\mathbb{R}^{3})\) for every \(q\in(1,\infty)\). By use of the evolution operator \(T(t,s)\), we convert the problem above into \[V(t)=T(t,s)\widetilde{F}+\int_{s}^{t}T(t,\tau)\mathbb{P}H(\tau)\,d\tau. \tag{4.45}\]
To make full use of the advantage that \(H(\tau)\) is compactly supported, it is better to deal with the weak form \[\langle V(t),\psi\rangle_{\mathbb{R}^{3},\rho}=\langle\widetilde{F},T(t,s)^{*}\psi\rangle_{\mathbb{R}^{3},\rho}+\int_{s}^{t}\langle H(\tau),T(t,\tau)^{*}\psi\rangle_{\Omega_{3}}\,d\tau\] for \(\psi\in\mathcal{E}(\mathbb{R}^{3})\), where we have used \[\langle T(t,\tau)\mathbb{P}H(\tau),\psi\rangle_{\mathbb{R}^{3},\rho}=\langle H(\tau),T(t,\tau)^{*}\psi\rangle_{\mathbb{R}^{3},\rho}=\langle H(\tau),T(t,\tau)^{*}\psi\rangle_{\Omega_{3}}\] on account of (3.10) and (3.97), and employ the duality argument. Let \(t-s>3\) and recall that \(2<r<\infty\). It is readily seen from Proposition 3.7, (4.26) and (4.42) that \[\begin{split}&|\langle\widetilde{F},T(t,s)^{*}\psi\rangle_{\mathbb{R}^{3},\rho}|+\left|\left(\int_{s}^{s+1}+\int_{t-1}^{t}\right)\langle H(\tau),T(t,\tau)^{*}\psi\rangle_{\Omega_{3}}\,d\tau\right|\\ &\leq\|\widetilde{F}\|_{X_{2}(\mathbb{R}^{3})}\|T(t,s)^{*}\psi\|_{X_{2}(\mathbb{R}^{3})}+\int_{s}^{s+1}\|H(\tau)\|_{2,\Omega_{3}}\|T(t,\tau)^{*}\psi\|_{2,\Omega_{3}}\,d\tau\\ &\quad+\int_{t-1}^{t}\|H(\tau)\|_{r,\Omega_{3}}\|T(t,\tau)^{*}\psi\|_{r^{\prime},\Omega_{3}}\,d\tau\\ &\leq C\left(\|\widetilde{F}\|_{r,\Omega_{3}}+\int_{s}^{s+1}\|H(\tau)\|_{r,\Omega_{3}}\,d\tau\right)\|T(t,t-1)^{*}\psi\|_{X_{2}(\mathbb{R}^{3})}\\ &\quad+C\|F\|_{r,\mathbb{R}^{3}}\|\psi\|_{r^{\prime},\mathbb{R}^{3}}\int_{t-1}^{t}(\tau-s)^{-3/2r}\,d\tau\\ &\leq C\|F\|_{r,\mathbb{R}^{3}}\|\psi\|_{r^{\prime},\mathbb{R}^{3}}\\ &\leq C\|F\|_{r,\mathbb{R}^{3}}\|\psi\|_{X_{r^{\prime}}(\mathbb{R}^{3})}.\end{split} \tag{4.46}\] Our main task is thus to discuss the term \[J:=\int_{s+1}^{t-1}\langle H(\tau),T(t,\tau)^{*}\psi\rangle_{\Omega_{3}}\,d\tau,\] for which we have \[|J|\leq\int_{s+1}^{t-1}\|H(\tau)\|_{2,\Omega_{3}}\|T(t,\tau)^{*}\psi\|_{2,\Omega_{3}}\,d\tau. \tag{4.47}\] We make use of (4.28) with \(\sigma=t-1\) along with (4.42) for \(r>2\) to find \[|J|\leq C\left(\int_{s+1}^{t-1}(\tau-s)^{-3/r}\,d\tau\right)^{1/2}\|F\|_{r,\mathbb{R}^{3}}\|T(t,t-1)^{*}\psi\|_{2,\mathbb{R}^{3}} \tag{4.48}\] which combined with (3.98) with \(\tau_{*}=1\) yields \[|J|\leq C\|F\|_{r,\mathbb{R}^{3}}\|\psi\|_{r^{\prime},\mathbb{R}^{3}}\leq C\|F\|_{r,\mathbb{R}^{3}}\|\psi\|_{X_{r^{\prime}}(\mathbb{R}^{3})}\] for \(t-s>3\) and \(\psi\in\mathcal{E}(\mathbb{R}^{3})\) provided \(2<r<3\). This together with (4.46) and (4.37) implies (4.32) for \(t-s>3\) and, therefore, for all \((t,s)\) with \(t>s\) since we already know (4.32) for \(t-s\leq 3\) from Proposition 3.4. Hence, by virtue of Lemma 4.1 we obtain (3.98) with \(j=0\) for all \((t,s)\) with \(t>s\) and \(G\in X_{q}(\mathbb{R}^{3})\) provided \(3/2<q\leq r\leq 2\). With this at hand, we proceed to the next step in which the case \(r\in(3,6)\) is discussed. Given such \(r\), we set \[\mu=1-\frac{3}{r},\qquad\frac{1}{q}-\frac{1}{2}=\frac{\mu}{3}, \tag{4.49}\] where \(\mu\) comes from the growth rate \((t-s-1)^{\mu}\) of the integral of (4.48), which we intend to overcome by use of the decay property of the adjoint evolution operator. Then we have \(\mu\in(0,1/2)\) and \(q\in(3/2,2)\).
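The ranges just claimed follow by direct computation, recorded here for convenience: \[\mu=1-\frac{3}{r}\in\left(0,\frac{1}{2}\right)\quad\text{for }r\in(3,6),\qquad\frac{1}{q}=\frac{1}{2}+\frac{\mu}{3}=\frac{5}{6}-\frac{1}{r}\in\left(\frac{1}{2},\frac{2}{3}\right),\] which is exactly \(q\in(3/2,2)\).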
We use (4.28) with \(\sigma=(s+t)/2\), apply the result obtained in the previous step and recall (3.98) with \(\tau_{*}=1\) to furnish \[\begin{split}\int_{s+1}^{(s+t)/2}\|T(t,\tau)^{*}\psi\|_{2,\Omega_{3}}^{2}\,d\tau&\leq C\|T(t,(s+t)/2)^{*}\psi\|_{2,\mathbb{R}^{3}}^{2}\\ &\leq C(t-s-2)^{-\mu}\|T(t,t-1)^{*}\psi\|_{q,\mathbb{R}^{3}}^{2}\\ &\leq C(t-s-2)^{-\mu}\|\psi\|_{r^{\prime},\mathbb{R}^{3}}^{2}\end{split} \tag{4.50}\] for \(t-s>2\). Following (3.93), we set \(W(t-\tau):=T(t,\tau)^{*}\psi\) and \(\sigma:=(t-s-2)/2\). We then rewrite (4.50) as \[\int_{1+\sigma}^{1+2\sigma}\|W(\tau)\|_{2,\Omega_{3}}^{2}\,d\tau\leq c_{1}\,\sigma^{-\mu}\|\psi\|_{r^{\prime},\mathbb{R}^{3}}^{2}\] with some \(c_{1}>0\) for all \(\sigma>0\), from which one can deduce the following optimal growth estimate by the dyadic splitting method developed in [27, Lemma 3.4]: \[\begin{split}\int_{(s+t)/2}^{t-1}\|T(t,\tau)^{*}\psi\|_{2,\Omega_{3}}\,d\tau&=\int_{1}^{1+\sigma}\|W(\tau)\|_{2,\Omega_{3}}\,d\tau\\ &=\sum_{j=0}^{\infty}\int_{1+\sigma/2^{j+1}}^{1+\sigma/2^{j}}\|W(\tau)\|_{2,\Omega_{3}}\,d\tau\\ &\leq\sqrt{c_{1}}\,\sigma^{(1-\mu)/2}\|\psi\|_{r^{\prime},\mathbb{R}^{3}}\sum_{j=0}^{\infty}2^{-(1-\mu)(j+1)/2}\\ &=C(t-s-2)^{(1-\mu)/2}\|\psi\|_{r^{\prime},\mathbb{R}^{3}}\end{split} \tag{4.51}\] for \(t-s>2\). We now split (4.47) into the two parts below, which should be comparable with each other, and then use (4.50)-(4.51) together with (4.42) to find \[|J| \leq\int_{s+1}^{(s+t)/2}+\int_{(s+t)/2}^{t-1}\] \[\leq C(t-s)^{(1-3/r)/2}\|F\|_{r,\mathbb{R}^{3}}\left(\int_{s+1}^{(s+t)/2}\|T(t,\tau)^{*}\psi\|_{2,\Omega_{3}}^{2}\,d\tau\right)^{1/2}\] \[\quad+C(t-s)^{-3/2r}\|F\|_{r,\mathbb{R}^{3}}\int_{(s+t)/2}^{t-1}\|T(t,\tau)^{*}\psi\|_{2,\Omega_{3}}\,d\tau\] \[\leq C\|F\|_{r,\mathbb{R}^{3}}\|\psi\|_{r^{\prime},\mathbb{R}^{3}}\] for \(t-s>3\), yielding (4.32) for all \(t>s\) provided \(3<r<6\). Then Lemma 4.1 concludes (3.98) with \(j=0\) for all \((t,s)\) with \(t>s\) provided \(6/5<q\leq r\leq 2\). At the final stage, given \(r\in(6,\infty)\), we take the same \(\mu\) and \(q\) as in (4.49); then we observe \(\mu\in(1/2,1)\) and \(q\in(6/5,3/2)\). The same argument as above with the better decay property of \(T(t,s)^{*}\) obtained in the previous step implies (4.32) for all \((t,s)\) with \(t>s\) and, thereby, (2.51) for such \((t,s)\) provided \(2\leq q\leq r<\infty\) as well as (3.98) with \(j=0\) for the same \((t,s)\) provided \(1<q\leq r\leq 2\). The opposite case, that is, (2.51) for \(1<q\leq r\leq 2\) and (3.98) with \(j=0\) for \(2\leq q\leq r<\infty\), can be discussed with the aid of (4.27) under the same condition \(\|U_{b}\|\leq\alpha_{1}\) in a similar fashion, see also the last part of [27, section 4]. Finally, the remaining case \(1<q<2<r<\infty\) for both estimates is obvious because of the semigroup properties (2.47) and (3.96). The proof is complete.

**Remark 4.1**.: _It is remarkable that Proposition 4.3 is established under the smallness of \(\|U_{b}\|\) uniformly in \((q,r)\) regardless of the circumstances in Proposition 4.1. This is because one needs (4.14)-(4.16) only with the case \(q>2\). The other remark concerns the constants in (2.51) and (3.98) with \(j=0\). Let \(\|U_{b}\|\leq\alpha_{0}\); then, according to Proposition 3.4, the constant \(C\) in (2.51) near \(t=s\) depends on \(\alpha_{0}\). This is why the constant \(C\) in Proposition 4.3 is also dependent on \(\alpha_{1}\).
It is also the case in Propositions 4.4 and 4.5 below._

### Local energy decay estimates

For the proof of (2.52), it suffices to show that \[\|\nabla T(t,s)F\|_{q,\mathbb{R}^{3}}\leq C(t-s)^{-\min\{1/2,\,3/2q\}}\|F\|_{q,\mathbb{R}^{3}} \tag{4.52}\] for all \((t,s)\) with \(t-s>2\) and \(F\in X_{q}(\mathbb{R}^{3})\) on account of the semigroup property, (2.51) with \(r<\infty\) and Proposition 3.4. As in [33, 34, 8, 30, 28, 29, 10], let us split (4.52) into estimates of \(\|\nabla T(t,s)F\|_{q,B_{3}}\) and \(\|\nabla T(t,s)F\|_{q,\mathbb{R}^{3}\setminus B_{3}}\). The former is given by the following proposition and it is called the local energy decay property, whereas the latter is studied in the next subsection. To discuss the latter, the local energy decay of \(\partial_{t}T(t,s)F\) is also needed, see (4.54) below. The following proposition gives us (4.53)-(4.54) for \(1<q<\infty\) when \(\|U_{b}\|\) is small enough; however, the smallness is not uniform near \(q=1\). In fact, this circumstance arises from Proposition 4.1.

**Proposition 4.4**.: _Suppose (2.33) and (2.34). Given \(r_{1}\in(1,4/3]\) and \(\beta_{0}\in(0,\infty)\), assume that \(\|U_{b}\|\leq\alpha_{2}\in(0,\alpha_{1}]\) and \([U_{b}]_{\theta}\leq\beta_{0}\), where \(\alpha_{2}=\alpha_{2}(r_{1},q_{0})\) and \(\alpha_{1}=\alpha_{1}(q_{0})=\alpha_{2}(4/3,q_{0})\), see (4.35), are respectively the constants given in Propositions 4.1 and 4.3, while \(\|U_{b}\|\) and \([U_{b}]_{\theta}\) are given by (2.35). Then there is a constant \(C=C(q,\alpha_{2},\beta_{0},\theta)>0\) such that_ \[\|T(t,s)F\|_{W^{1,q}(B_{3})}\leq C(t-s)^{-3/2q}\|F\|_{q,\mathbb{R}^{3}} \tag{4.53}\] \[\|\partial_{t}T(t,s)F\|_{W^{-1,q}(\Omega_{3})}+\|p(t)\|_{q,\Omega_{3}}\leq C(t-s)^{-3/2q}\|F\|_{q,\mathbb{R}^{3}} \tag{4.54}\] _for all \((t,s)\) with \(t-s>2\) and \(F\in X_{q}(\mathbb{R}^{3})\) as long as \(r_{1}\leq q<\infty\), where \(p(t)\) denotes the pressure associated with \(T(t,s)F\) and it is singled out in such a way that \(\int_{\Omega_{3}}p(t)\,dx=0\) for each \(t\in(s,\infty)\)._ _Under a weaker smallness condition \(\|U_{b}\|\leq\alpha_{1}\) than described above (that is, the same one as in Proposition 4.3), there is a constant \(C=C(q,\alpha_{1},\beta_{0},\theta)>0\) such that_ \[\|T(t,s)F\|_{\infty,B_{3}}\leq C(t-s)^{-3/2q}\|F\|_{q,\mathbb{R}^{3}} \tag{4.55}\] _for all \((t,s)\) with \(t-s>2\) and \(F\in X_{q}(\mathbb{R}^{3})\) as long as \(1<q<\infty\)._ _The same assertions hold true for the adjoint \(T(t,s)^{*}\) and the associated pressure \(p_{v}(s)\) to (3.83) under the same smallness conditions on \(\|U_{b}\|\) as above._

Proof.: Given \(q\in(1,\infty)\), let us take \(r_{0}\) so large that \[\max\left\{\frac{2q}{q-1},\ q,\,6\right\}<r_{0}<\infty,\] which yields \[q>r_{0}^{\prime},\qquad\kappa:=\frac{3}{2}\left(1-\frac{2}{r_{0}}\right)>\max\left\{\frac{3}{2q},\ 1\right\}. \tag{4.56}\] Let \(\|U_{b}\|\leq\alpha_{1}(q_{0})\); then we have (2.51) except for the case \(r=\infty\) by Proposition 4.3. If \(H\in L^{q}_{R}(\mathbb{R}^{3})\) satisfies \(H(x)=0\) a.e. in \(\mathbb{R}^{3}\setminus B_{3}\), then it follows from Proposition 3.4 and (3.52) with \(\tau_{*}=1\) as well as Proposition 4.3 that \[\|T(t,s)\mathbb{P}H\|_{W^{1,q}(\mathbb{R}^{3})}+\|L_{+}(t)T(t,s)\mathbb{P}H\|_{r_{0},\mathbb{R}^{3}} \leq C\|T(t-1,s)\mathbb{P}H\|_{r_{0},\mathbb{R}^{3}}\] \[\leq C(t-s-1)^{-(3/r_{0}^{\prime}-3/r_{0})/2}\|\mathbb{P}H\|_{r_{0}^{\prime},\mathbb{R}^{3}}\] \[\leq C(t-s)^{-\kappa}\|H\|_{q,\mathbb{R}^{3}}\] for \(t-s>2\).
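We note, for clarity, that the exponent in the last display arises exactly as \(\kappa\) in (4.56): since \(1/r_{0}^{\prime}=1-1/r_{0}\), \[\frac{1}{2}\left(\frac{3}{r_{0}^{\prime}}-\frac{3}{r_{0}}\right)=\frac{3}{2}\left(1-\frac{2}{r_{0}}\right)=\kappa,\] while \((t-s-1)^{-\kappa}\leq C(t-s)^{-\kappa}\) for \(t-s>2\).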
The estimate above along with Propositions 3.4 and 3.6 implies that \[\|T(t,s)\mathbb{P}H\|_{W^{1,q}(B_{3})}\leq C(t-s)^{-1/2}(1+t-s)^{-\kappa+1/2}\|H\|_{q,\mathbb{R}^{3}} \tag{4.57}\] and that \[\|\partial_{t}T(t,s)\mathbb{P}H\|_{W^{-1,q}(\Omega_{3})}\leq C(t-s)^{-\gamma}(1+t-s)^{-\kappa+\gamma}\|H\|_{q,\mathbb{R}^{3}} \tag{4.58}\] for all \((t,s)\) with \(t>s\), where \(\gamma\in\big{(}(1+1/q)/2,\,1\big{)}\) is fixed arbitrarily. We fix \(r_{1}\in(1,4/3]\) arbitrarily, and let \(q\in[r_{1},\infty)\). Given \(F\in\mathcal{E}(\mathbb{R}^{3})\), which is dense in \(X_{q}(\mathbb{R}^{3})\) (see Proposition 3.1), the function (4.36) and its temporal derivative \(\partial_{t}U_{0}(t)\) enjoy the desired estimates as in (4.53)-(4.54) due to (4.14)-(4.16) and (3.76) (together with the equation (4.1) for \(\partial_{t}T_{0}(t,s)F\)) under the condition \[\|U_{b}\|\leq\alpha_{2}(r_{1},q_{0})\leq\alpha_{2}(4/3,q_{0})=\alpha_{1}(q_{0}), \tag{4.59}\] see (4.35) as well as (4.23). Our task is thus to estimate \(V(t)\) defined by (4.39). We use the integral equation (4.45) and its temporal derivative \[\partial_{t}V(t)=\partial_{t}T(t,s)\widetilde{F}+\mathbb{P}H(t)+\int_{s}^{t}\partial_{t}T(t,\tau)\mathbb{P}H(\tau)\,d\tau, \tag{4.60}\] in \(W^{-1,q}(\Omega_{3})\), where we can apply (4.57)-(4.58) to \(\widetilde{F}\) given by (4.44) and also to \(H(\tau)\) given by (4.41) since they vanish outside \(B_{3}\). In view of the relation (4.56), we see at once that \(T(t,s)\widetilde{F}\) and \(\partial_{t}T(t,s)\widetilde{F}\) fulfill the desired estimates; moreover, so does the second term \(\mathbb{P}H(t)\) of (4.60) already by (4.42). From (4.57)-(4.58) together with (4.42) it follows that \[\int_{s}^{t}\|T(t,\tau)\mathbb{P}H(\tau)\|_{W^{1,q}(B_{3})}\,d\tau\] \[\leq C\|F\|_{q,\mathbb{R}^{3}}\left(\int_{s}^{(s+t)/2}+\int_{(s+t)/2}^{t}\right)\] \[\qquad(t-\tau)^{-1/2}(1+t-\tau)^{-\kappa+1/2}(\tau-s)^{-1/2}(1+\tau-s)^{-3/2q+1/2}\,d\tau\] \[\leq C(t-s)^{-3/2q}\|F\|_{q,\mathbb{R}^{3}}\] and, similarly, that \[\int_{s}^{t}\|\partial_{t}T(t,\tau)\mathbb{P}H(\tau)\|_{W^{-1,q}(\Omega_{3})}\,d\tau\] \[\leq C\|F\|_{q,\mathbb{R}^{3}}\left(\int_{s}^{(s+t)/2}+\int_{(s+t)/2}^{t}\right)\] \[\qquad(t-\tau)^{-\gamma}(1+t-\tau)^{-\kappa+\gamma}(\tau-s)^{-1/2}(1+\tau-s)^{-3/2q+1/2}\,d\tau\] \[\leq C(t-s)^{-3/2q}\|F\|_{q,\mathbb{R}^{3}}\] for all \((t,s)\) with \(t-s>2\), which conclude (4.54) for \(\partial_{t}T(t,s)F\) as well as (4.53) by taking into account (4.56) provided (4.59) is satisfied. Since \(\int_{\Omega_{3}}p(t)\,dx=0\), we see that \[\|p(t)\|_{q,\Omega_{3}}\leq C\|\nabla p(t)\|_{W^{-1,q}(\Omega_{3})}\] from which, together with the first equation of (1.9), (4.53), (4.54) for \(\partial_{t}T(t,s)F\) and (2.33), we obtain (4.54) for the pressure under the same smallness (4.59). Let \(\|U_{b}\|\leq\alpha_{1}(q_{0})=\alpha_{2}(4/3,q_{0})\), see (4.35); then (4.53) with \(q>3\) implies (4.55) for the same \(q\), which combined with Proposition 4.3 gives (4.55) for the other case \(q\in(1,3]\) by the semigroup property (2.47). Finally, the argument for the adjoint \(T(t,s)^{*}\) works essentially in the same manner. The proof is complete.

Propositions 3.4, 3.6 and 4.4 immediately lead us to the following corollary, which plays an important role in the next subsection.

**Corollary 4.1**.: _Assume the same conditions as in the first half of Proposition 4.4.
Then, for every \(\gamma\in\left((1+1/q)/2,\,1\right)\), there are constants \(C_{1}=C_{1}(\gamma,q,\alpha_{2},\beta_{0},\theta)>0\) and \(C_{2}=C_{2}(q,\alpha_{2},\beta_{0},\theta)>0\) such that_ \[\|\partial_{t}T(t,s)F\|_{W^{-1,q}(\Omega_{3})}+\|p(t)\|_{q,\Omega_{3}}\leq C_ {1}(t-s)^{-\gamma}(1+t-s)^{-3/2q+\gamma}\|F\|_{q,\mathbb{R}^{3}} \tag{4.61}\] \[\|T(t,s)F\|_{W^{1,q}(B_{3})}\leq C_{2}(t-s)^{-1/2}(1+t-s)^{-3/2q+1/2}\|F\|_{q,\mathbb{R}^{3}} \tag{4.62}\] _for all \((t,s)\) with \(t>s\) and \(F\in X_{q}(\mathbb{R}^{3})\) as long as \(r_{1}\leq q<\infty\), where the associated pressure \(p(t)\) is chosen as in Proposition 4.4._ _The same assertions hold true for the adjoint \(T(t,s)^{*}\) and the associated pressure \(p_{v}(s)\) to (3.83) under the same condition \(\|U_{b}\|\leq\alpha_{2}(r_{1},q_{0})\)._ ### Large time behavior near spatial infinity In this subsection we deduce the decay property of \(\nabla T(t,s)\) near spatial infinity under the same conditions as in Proposition 4.4 to complete the proof of (4.52). **Proposition 4.5**.: _Suppose (2.33) and (2.34). Given \(r_{1}\in(1,4/3]\) and \(\beta_{0}\in(0,\infty)\), assume that \(\|U_{b}\|\leq\alpha_{2}\in(0,\alpha_{1}]\) and \([U_{b}]_{\theta}\leq\beta_{0}\), where \(\alpha_{2}=\alpha_{2}(r_{1},q_{0})\) and \(\alpha_{1}=\alpha_{1}(q_{0})\) are the constants given in Propositions 4.1 and 4.3, while \(\|U_{b}\|\) and \([U_{b}]_{\theta}\) are given by (2.35). Then there is a constant \(C=C(q,\alpha_{2},\beta_{0},\theta)>0\) such that_ \[\|\nabla T(t,s)F\|_{q,\mathbb{R}^{3}\setminus B_{3}}\leq C(t-s)^{-\min\{1/2, \,3/2q\}}\|F\|_{q,\mathbb{R}^{3}} \tag{4.63}\] _for all \((t,s)\) with \(t-s>2\) and \(F\in X_{q}(\mathbb{R}^{3})\) as long as \(r_{1}\leq q<\infty\)._ _Under the same condition \(\|U_{b}\|\leq\alpha_{1}\) as for (4.55), there is a constant \(C=C(q,\alpha_{1},\beta_{0},\theta)>0\) such that_ \[\|T(t,s)F\|_{\infty,\mathbb{R}^{3}\setminus B_{3}}\leq C(t-s)^{-3/2q}\|F\|_{q, \mathbb{R}^{3}} \tag{4.64}\] _for all \((t,s)\) with \(t-s>2\) and \(F\in X_{q}(\mathbb{R}^{3})\) as long as \(1<q<\infty\)._ _The same assertions hold true for the adjoint \(T(t,s)^{*}\) under the same smallness conditions on \(\|U_{b}\|\) as above._ Proof.: Given \(F\in\mathcal{E}(\mathbb{R}^{3})\), see (2.15), we set \(U(t)=T(t,s)F\), \(u(t)=U(t)|_{\Omega}\) and take the associated pressure \(p(t)\) such that \(\int_{\Omega_{3}}p(t)\,dx=0\). Using the same cut-off function \(\phi\) and the Bogovskii operator \(\mathbb{B}\), see (3.5) and (3.75), as in the proof of Propositions 4.3 and 4.4, we consider \[v(t)=(1-\phi)u(t)+\mathbb{B}[u(t)\cdot\nabla\phi],\qquad p_{v}(t)=(1-\phi)p(t).\] Then \(v(t)\) obeys \[v(t)=T_{0}(t,s)\widetilde{F}+\int_{s}^{t}T_{0}(t,\tau)\mathbb{P}_{0}K(\tau)\,d\tau \tag{4.65}\] in terms of the evolution operator \(T_{0}(t,s)\) without rigid body studied in subsection 4.1, where \[\widetilde{F}=(1-\phi)F+\mathbb{B}[F\cdot\nabla\phi]\in C^{\infty}_{0,\sigma }(\mathbb{R}^{3})\] and \[K(t) =2\nabla\phi\cdot\nabla u(t)+\left[\Delta\phi+\left(\eta_{b}(t)-u _{b}(t)\right)\cdot\nabla\phi\right]u(t)\] \[\quad-(\nabla\phi)p(t)+\left\{\partial_{t}-\Delta-\left(\eta_{b} (t)-u_{b}(t)\right)\cdot\nabla\right\}\mathbb{B}[u(t)\cdot\nabla\phi].\] By the same symbol \(K(t)\) we denote its extension on \(\mathbb{R}^{3}\) by setting zero outside \(\Omega_{3}\). 
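Let us also record, for the reader's convenience, how the force \(K(t)\) arises. Assuming, as a notational convention consistent with the terms collected in \(K(t)\), that the first equation of (1.9) reads \(\partial_{t}u-\Delta u-(\eta_{b}-u_{b})\cdot\nabla u+\nabla p=0\) in \(\Omega\), the product rule applied to \(v=(1-\phi)u+\mathbb{B}[u\cdot\nabla\phi]\) and \(p_{v}=(1-\phi)p\) yields
\[\begin{split}&\partial_{t}v-\Delta v-\left(\eta_{b}(t)-u_{b}(t)\right)\cdot\nabla v+\nabla p_{v}\\ &=(1-\phi)\left\{\partial_{t}u-\Delta u-\left(\eta_{b}-u_{b}\right)\cdot\nabla u+\nabla p\right\}+2\nabla\phi\cdot\nabla u+\left[\Delta\phi+\left(\eta_{b}-u_{b}\right)\cdot\nabla\phi\right]u\\ &\quad-(\nabla\phi)p+\left\{\partial_{t}-\Delta-\left(\eta_{b}-u_{b}\right)\cdot\nabla\right\}\mathbb{B}[u\cdot\nabla\phi]=K(t),\end{split}\]
and the first brace vanishes; this is precisely the equation encoded in the Duhamel formula (4.65).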
Given \(r_{1}\in(1,4/3]\), we assume the smallness (4.59) on \(\|U_{b}\|\) as in Proposition 4.4 and Corollary 4.1; then it follows from (4.61)-(4.62) and (2.33) together with (3.76) that \[\|K(t)\|_{r,\mathbb{R}^{3}}\leq C(t-s)^{-\gamma}(1+t-s)^{-3/2q+\gamma}\|F\|_{q,\mathbb{R}^{3}} \tag{4.66}\] for all \((t,s)\) with \(t>s\) and \(r\in(1,q]\) as long as \(q\in[r_{1},\infty)\), where \(\gamma\in\left((1+1/q)/2,\,1\right)\) is fixed arbitrarily. Let \(t-s>2\). For the proof of (4.63)-(4.64), our task is to deduce the desired estimates of \(\|\nabla v(t)\|_{q,\mathbb{R}^{3}}\) and \(\|v(t)\|_{\infty,\mathbb{R}^{3}}\) by using (4.65). It is obvious that the term \(T_{0}(t,s)\widetilde{F}\) fulfills these estimates thanks to (4.14)-(4.15) provided \(q\geq r_{1}\). By (4.66) along with (4.15) (for \(q\geq r_{1}\)) and by choosing, for instance, \[\left\{\begin{array}{ll}r=q&\quad\text{if $q\in[r_{1},3/2)$},\\ r=4/3&\quad\text{if $q\in[3/2,\infty)$},\end{array}\right.\] (note that \(r\in[r_{1},q]\) is required to apply Proposition 4.1), we find \[\begin{split}&\int_{s}^{t}\|\nabla T_{0}(t,\tau)\mathbb{P}_{0}K(\tau)\|_{q,\mathbb{R}^{3}}\,d\tau\\ &\leq C\|F\|_{q,\mathbb{R}^{3}}\left(\int_{s}^{(s+t)/2}+\int_{(s+t)/2}^{t}\right)(t-\tau)^{-1/2}(1+t-\tau)^{-(3/r-3/q)/2}(\tau-s)^{-\gamma}(1+\tau-s)^{-3/2q+\gamma}\,d\tau\\ &\leq C(t-s)^{-\min\{1/2,\,3/2q\}}\|F\|_{q,\mathbb{R}^{3}}\end{split} \tag{4.67}\] which concludes (4.63) under the condition \(\|U_{b}\|\leq\alpha_{2}(r_{1},q_{0})\). For the \(L^{\infty}\)-estimate, the integrand of (4.67) is replaced by \[(t-\tau)^{-3/2q}(1+t-\tau)^{-(3/r-3/q)/2}(\tau-s)^{-\gamma}(1+\tau-s)^{-3/2q+\gamma}.\] Let \(3/2<q<\infty\). We then take \(r=4/3\) and compute the integral in the same way as above to deduce (4.64) provided \[\|U_{b}\|\leq\alpha_{1}(q_{0})=\alpha_{2}(4/3,q_{0}),\] which ensures (4.66) with \(q\in(3/2,\infty)\) and allows us to apply (4.14) with \(r=\infty\) and \(q=4/3\). In order to derive this for the other case \(q\in(1,3/2]\) as well, we have only to combine (4.64) for \(q=2\) (say) obtained above with (2.51) for \(q\in(1,3/2]\) under the condition \(\|U_{b}\|\leq\alpha_{1}(q_{0})\) by taking into account the semigroup property (2.47). The adjoint \(T(t,s)^{*}\) is discussed similarly. The proof is complete.

### Proof of Theorem 2.1

We collect Proposition 3.4 (case \(r=\infty\)), Proposition 4.3, (4.55) and (4.64) to furnish (2.51) provided that \(\|U_{b}\|\leq\alpha_{1}(q_{0})\). Let \(r_{1}\in(1,4/3]\) and suppose \(\|U_{b}\|\leq\alpha_{2}(r_{1},q_{0})\leq\alpha_{1}(q_{0})\). Then it follows from (4.53) and (4.63) along with Proposition 3.4 that \[\|\nabla T(t,s)F\|_{q,\mathbb{R}^{3}}\leq C(t-s)^{-1/2}(1+t-s)^{\max\{(1-3/q)/2,\,0\}}\|F\|_{q,\mathbb{R}^{3}} \tag{4.68}\] for all \((t,s)\) with \(t>s\) and \(F\in X_{q}(\mathbb{R}^{3})\) as long as \(r_{1}\leq q<\infty\). One may combine (4.68) with (2.51) to conclude (2.52) for \(r\in[r_{1},\infty)\) and \(q\in(1,r]\). With the same estimates for the adjoint \(T(t,s)^{*}\) under the same smallness of \(\|U_{b}\|\) at hand, let us show (2.53). Let \(1<q<\infty\) and \(\phi\in\mathcal{E}(\mathbb{R}^{3})\).
Then, in view of (3.97) together with (3.10), see also (2.14), we have \[\begin{split}&\left|\langle T(t,s)\mathbb{P}\text{div }F,\;\phi\rangle_{\mathbb{R}^{3},\rho}\right|\\ &=\left|-\langle F,\;\nabla T(t,s)^{*}\phi\rangle_{\mathbb{R}^{3},\rho}+(1-\rho)\int_{\partial\Omega}(F\nu)\cdot\left(T(t,s)^{*}\phi\right)d\sigma\right|\\ &\leq\|F\|_{q,(\mathbb{R}^{3},\rho)}\|\nabla T(t,s)^{*}\phi\|_{q^{\prime},(\mathbb{R}^{3},\rho)}\\ &\leq C\|F\|_{q,\mathbb{R}^{3}}\|\nabla T(t,s)^{*}\phi\|_{q^{\prime},\mathbb{R}^{3}}\end{split}\] for all \(F\in L^{q}(\mathbb{R}^{3})^{3\times 3}\) with \(F\nu=0\) at \(\partial\Omega\) as well as \(\operatorname{div}F\) belonging to \(L^{p}(\mathbb{R}^{3})\) for some \(p\in(1,\infty)\) (so that \((F\nu)|_{\partial\Omega}\) from both directions coincide with each other) and fulfilling \((\operatorname{div}F)|_{B}\in\operatorname{RM}\). Given \(r_{0}\in[4,\infty)\), suppose that \[\|U_{b}\|\leq\alpha_{3}(r_{0},q_{0}):=\alpha_{2}(r_{0}^{\prime},q_{0})\leq\alpha_{1}(q_{0}). \tag{4.69}\] Then we apply (2.52) for \(T(t,s)^{*}\) to conclude (2.53) by duality provided that \(q\in(1,r_{0}]\) and \(r\in[q,\infty)\). This together with (2.51) with \(r=\infty\) leads us to (2.53) with \(r=\infty\) as well. Finally, we deduce (2.54). Given \(r_{0}\in[4,\infty)\) and \(r_{1}\in(1,4/3]\), assume that \[\|U_{b}\|\leq\alpha_{4}(r_{0},r_{1},q_{0}):=\alpha_{2}\big{(}\min\{r_{0}^{\prime},r_{1}\},q_{0}\big{)}. \tag{4.70}\] Then we use (2.52) and (2.53) to obtain (2.54) for \(1<q\leq r<\infty\) with \(q\in(1,r_{0}]\) as well as \(r\in[r_{1},\infty)\). The proof is complete.

## 5 Stability of the basic motion

The initial value problem (2.44) is transformed into \[U(t)=T(t,s)U_{0}+\int_{s}^{t}T(t,\tau)H(U(\tau))\,d\tau=:\overline{U}(t)+(\Lambda U)(t). \tag{5.1}\] Look at (2.46); since \((\eta-u)\cdot\nu=0\) at \(\partial\Omega\), we observe \[H(U)=\mathbb{P}\big{\{}\{\operatorname{div}\left((u_{b}+u)\otimes(\eta-u)\right)\}\chi_{\Omega}\big{\}}=\mathbb{P}\operatorname{div}\big{\{}(u_{b}+u)\otimes(\eta-u)\chi_{\Omega}\big{\}}. \tag{5.2}\] We do need this divergence form especially for the nonlinear term \(\eta\cdot\nabla u\) in the first integral of (5.15) below (as in [9, 10]), while the other terms can be handled in any case if we impose more assumptions on \(\nabla u_{b}\) than (2.56). For finding a solution to (5.1) we adopt the function space \[E:=\big{\{}U\in C\big{(}(s,\infty);\,W^{1,3}(\mathbb{R}^{3})\cap L^{\infty}(\mathbb{R}^{3})\big{)};\] \[U(t)\in X_{3}(\mathbb{R}^{3})\;\forall t\in(s,\infty),\,\lim_{t\to s}\|U\|_{E(t)}=0,\;\|U\|_{E}<\infty\big{\}}\] with \[\|U\|_{E(t)}:=\sup_{\tau\in(s,t)}(\tau-s)^{1/2}\big{(}\|\nabla U(\tau)\|_{3,\mathbb{R}^{3}}+\|U(\tau)\|_{\infty,\mathbb{R}^{3}}\big{)}\quad\text{for }t\in(s,\infty),\] \[\|U\|_{E}:=\sup_{t\in(s,\infty)}\left(\|U\|_{E(t)}+\|U(t)\|_{3,\mathbb{R}^{3}}\right).\] Then \(E\) is a Banach space endowed with the norm \(\|\cdot\|_{E}\). Let us remark that \(U\in E\) already involves the boundary condition (2.16) with \((u(t),\eta(t),\omega(t))=i(U(t))\) for each \(t>s\), see (2.7), since \(U(t)\in W^{1,3}(\mathbb{R}^{3})\).
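Note, in passing, that membership in \(E\) already yields intermediate decay rates: by the \(L^{p}\)-interpolation inequality, for every \(r\in(3,\infty)\) and \(U\in E\),
\[\|U(t)\|_{r,\mathbb{R}^{3}}\leq\|U(t)\|_{3,\mathbb{R}^{3}}^{3/r}\,\|U(t)\|_{\infty,\mathbb{R}^{3}}^{1-3/r}\leq(t-s)^{-(1-3/r)/2}\|U\|_{E}\]
for all \(t>s\); this elementary observation is recorded here for orientation only.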
In view of (2.7), \(U\in E\) implies that \[(t-s)^{1/2}\big{(}\|\nabla u(t)\|_{3,\Omega}+\|u(t)\|_{\infty,\Omega}+|\eta(t)|+|\omega(t)|\big{)}\leq C\|U\|_{E(t)}\to 0 \tag{5.3}\] as \(t\to s\) and that \[(t-s)^{1/2}\big{(}\|\nabla u(t)\|_{3,\Omega}+\|u(t)\|_{\infty,\Omega}\big{)}+(1+t-s)^{1/2}\big{(}|\eta(t)|+|\omega(t)|\big{)}+\|u(t)\|_{3,\Omega}\leq C\|U\|_{E} \tag{5.4}\] for all \((t,s)\) with \(t>s\). By \(\Lambda U\) we denote the Duhamel term in (5.1): \[(\Lambda U)(t):=\int_{s}^{t}T(t,\tau)H(U(\tau))\,d\tau.\] We then have the following lemma.

**Lemma 5.1**.: _Suppose (2.33)-(2.34) and (2.56). If \(\|U_{b}\|\leq\alpha_{1}\) with \(\alpha_{1}=\alpha_{1}(q_{0})\) being the constant given in Proposition 4.3, see (4.35), then we have \(\Lambda U\in E\) as well as_ \[\lim_{t\to s}\|(\Lambda U)(t)\|_{3,\mathbb{R}^{3}}=0 \tag{5.5}\] _for every \(U\in E\) and_ \[\|\Lambda U\|_{E}\leq c_{1}\|U_{b}\|^{\prime}\|U\|_{E}+c_{2}\|U\|_{E}^{2} \tag{5.6}\] \[\|\Lambda U-\Lambda V\|_{E}\leq\big{(}c_{1}\|U_{b}\|^{\prime}+c_{2}\|U\|_{E}+c_{2}\|V\|_{E}\big{)}\|U-V\|_{E} \tag{5.7}\] _for all \(U,\,V\in E\) with some constants \(c_{1}=c_{1}(q_{0},\alpha_{1},\beta_{0},\theta)>0\) and \(c_{2}=c_{2}(\alpha_{1},\beta_{0},\theta)\), where \(\|U_{b}\|\) and \(\|U_{b}\|^{\prime}\) are the quantities given by (2.35) and (2.57), respectively. Furthermore, under the condition above, the following additional properties hold for every \(U\in E\): (i) Let \(r\in(3,\infty)\), then_ \[\|\nabla(\Lambda U)(t)\|_{r,\mathbb{R}^{3}}=O\big{(}(t-s)^{-1/2}\big{)} \tag{5.8}\] _as \((t-s)\to\infty\). (ii) \(\Lambda U\) is locally Hölder continuous on \((s,\infty)\) with values in \(W^{1,3}(\mathbb{R}^{3})\cap L^{\infty}(\mathbb{R}^{3})\), to be precise,_ \[\begin{split}&\Lambda U\in C^{\theta_{0}}_{\rm loc}\big{(}(s,\infty);\,X_{3}(\mathbb{R}^{3})\big{)}\cap C^{\theta_{1}}_{\rm loc}\big{(}(s,\infty);\,L^{\infty}(\mathbb{R}^{3})\big{)},\\ &\nabla\Lambda U\in C^{\theta_{1}}_{\rm loc}\big{(}(s,\infty);\,L^{3}(\mathbb{R}^{3})\big{)},\end{split} \tag{5.9}\] _for every \(\theta_{0}\in(0,3/4)\) and \(\theta_{1}\in(0,1/4)\)._

Proof.: We are concerned only with (5.5)-(5.6) since the other estimate (5.7) is shown similarly. Set \[(\Lambda_{1}U)(t)=\int_{s}^{t}T(t,\tau)\mathbb{P}[\{(\eta-u)\cdot\nabla u_{b}\}\chi_{\Omega}](\tau)\,d\tau=\int_{s}^{t}T(t,\tau)\mathbb{P}\operatorname{div}\{u_{b}\otimes(\eta-u)\chi_{\Omega}\}(\tau)\,d\tau, \tag{5.10}\] \[(\Lambda_{2}U)(t)=\int_{s}^{t}T(t,\tau)\mathbb{P}[\{(\eta-u)\cdot\nabla u\}\chi_{\Omega}](\tau)\,d\tau=\int_{s}^{t}T(t,\tau)\mathbb{P}\operatorname{div}\{u\otimes(\eta-u)\chi_{\Omega}\}(\tau)\,d\tau, \tag{5.11}\] where (5.2) is taken into account. Let us note that both \(u_{b}\otimes(\eta-u)\chi_{\Omega}\) and \(u\otimes(\eta-u)\chi_{\Omega}\) satisfy the conditions imposed on \(F\) for (2.53)-(2.54) since \((\eta-u)\cdot\nu=0\) at \(\partial\Omega\). In what follows we assume that \[\|U_{b}\|\leq\alpha_{1}(q_{0})=\alpha_{2}(4/3,q_{0})=\alpha_{3}(4,q_{0})=\alpha_{4}(4,4/3,q_{0}),\] see (4.35) and (4.69)-(4.70). This condition allows us to employ all the estimates obtained in Theorem 2.1 with the exponents needed below. Let \(U\in E\) and let us begin with the estimate of (5.10). Since \(u_{b}\in L^{\infty}(\Omega)\), one may assume \(q_{0}\in(3/2,3)\) in (2.33).
We make use of (2.51)-(2.54) along with (5.3)-(5.4) to find that \[\begin{split}&\|\nabla(\Lambda_{1}U)(t)\|_{3,\mathbb{R}^{3}}+\|(\Lambda_{1}U)(t)\|_{\infty,\mathbb{R}^{3}}\\ &\leq C\int_{s}^{t-1}(t-\tau)^{-3/2q_{0}-1/2}\|u_{b}(\tau)\|_{q_{0},\Omega}\big{(}|\eta(\tau)|+\|u(\tau)\|_{\infty,\Omega}\big{)}\,d\tau\\ &\quad+C\int_{t-1}^{t}(t-\tau)^{-1/2}\|\nabla u_{b}(\tau)\|_{3,\Omega}\big{(}|\eta(\tau)|+\|u(\tau)\|_{\infty,\Omega}\big{)}\,d\tau\\ &\leq C(t-s)^{-1/2}\|U_{b}\|^{\prime}\|U\|_{E(t)}\end{split} \tag{5.12}\] for \(t-s>2\) by splitting the first integral further into \(\int_{s}^{(s+t)/2}+\int_{(s+t)/2}^{t-1}\), where \(q_{0}\in(3/2,3)\) allows us to obtain the sharp decay rate in (2.54), and that \[\begin{split}&\|\nabla(\Lambda_{1}U)(t)\|_{3,\mathbb{R}^{3}}+\|(\Lambda_{1}U)(t)\|_{\infty,\mathbb{R}^{3}}\\ &\leq C\int_{s}^{t}(t-\tau)^{-1/2}\|\nabla u_{b}(\tau)\|_{3,\Omega}\big{(}|\eta(\tau)|+\|u(\tau)\|_{\infty,\Omega}\big{)}\,d\tau\\ &\leq C\|U_{b}\|^{\prime}\|U\|_{E(t)}\end{split}\] for \(t-s\leq 2\). Also, it is readily seen that \[\begin{split}\|(\Lambda_{1}U)(t)\|_{3,\mathbb{R}^{3}}&\leq C\int_{s}^{t}(t-\tau)^{-1/2}\|u_{b}(\tau)\|_{3,\Omega}\big{(}|\eta(\tau)|+\|u(\tau)\|_{\infty,\Omega}\big{)}\,d\tau\\ &\leq C\|U_{b}\|\|U\|_{E(t)}\end{split} \tag{5.13}\] for all \(t>s\). Collecting the estimates above, we infer \[\begin{split}&\|\Lambda_{1}U\|_{E}\leq c_{1}\|U_{b}\|^{\prime}\|U\|_{E},\\ &\lim_{t\to s}\big{(}\|\Lambda_{1}U\|_{E(t)}+\|(\Lambda_{1}U)(t)\|_{3,\mathbb{R}^{3}}\big{)}=0,\end{split} \tag{5.14}\] for all \(U\in E\) with some constant \(c_{1}=c_{1}(q_{0},\alpha_{1},\beta_{0},\theta)\). We turn to the other integral \(\Lambda_{2}U\) given by (5.11). The computation with the splitting below was not carried out in [10], which is why the decay rate of \(\|\nabla u(t)\|_{3,\Omega}\) obtained there was not sharp. From (2.51)-(2.54) and (5.3)-(5.4) it follows that \[\begin{split}&\|\nabla(\Lambda_{2}U)(t)\|_{3,\mathbb{R}^{3}}+\|(\Lambda_{2}U)(t)\|_{\infty,\mathbb{R}^{3}}\\ &\leq C\int_{s}^{(s+t)/2}(t-\tau)^{-1}\|u(\tau)\|_{3,\Omega}\big{(}|\eta(\tau)|+\|u(\tau)\|_{\infty,\Omega}\big{)}\,d\tau\\ &\quad+C\int_{(s+t)/2}^{t}(t-\tau)^{-1/2}\|\nabla u(\tau)\|_{3,\Omega}\big{(}|\eta(\tau)|+\|u(\tau)\|_{\infty,\Omega}\big{)}\,d\tau\\ &\leq C(t-s)^{-1/2}\big{(}\|U\|_{E}\|U\|_{E(t)}+\|U\|_{E(t)}^{2}\big{)}\end{split} \tag{5.15}\] for all \(t>s\) and that \[\begin{split}\|(\Lambda_{2}U)(t)\|_{3,\mathbb{R}^{3}}&\leq C\int_{s}^{t}(t-\tau)^{-1/2}\|u(\tau)\|_{3,\Omega}\big{(}|\eta(\tau)|+\|u(\tau)\|_{\infty,\Omega}\big{)}\,d\tau\\ &\leq c_{0}B(1/2,1/2)\|U\|_{E}\|U\|_{E(t)}\end{split} \tag{5.16}\] for all \(t>s\) with some constant \(c_{0}>0\), where \(B(\cdot,\cdot)\) denotes the beta function and \(B(\frac{1}{2},\frac{1}{2})=\pi\). Those estimates lead us to \[\begin{split}&\|\Lambda_{2}U\|_{E}\leq c_{2}\|U\|_{E}^{2},\\ &\lim_{t\to s}\big{(}\|\Lambda_{2}U\|_{E(t)}+\|(\Lambda_{2}U)(t)\|_{3,\mathbb{R}^{3}}\big{)}=0,\end{split} \tag{5.17}\] for all \(U\in E\) with some constant \(c_{2}=c_{2}(\alpha_{1},\beta_{0},\theta)\), which together with (5.14) completes the proof of (5.5)-(5.6).
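For completeness, the beta-function identity behind (5.16) is elementary: with the change of variable \(\tau=s+(t-s)\sigma\),
\[\int_{s}^{t}(t-\tau)^{-1/2}(\tau-s)^{-1/2}\,d\tau=\int_{0}^{1}(1-\sigma)^{-1/2}\sigma^{-1/2}\,d\sigma=B(1/2,1/2)=\frac{\Gamma(1/2)^{2}}{\Gamma(1)}=\pi,\]
so that the right-hand side of (5.16) is indeed independent of \(t\).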
It remains to show the additional properties: (i) Look at (5.12) and (5.15) in which \(\|\nabla(\cdot)\|_{3,\mathbb{R}^{3}}\) is replaced by \(\|\nabla(\cdot)\|_{r,\mathbb{R}^{3}}\) with \(r\in(3,\infty)\); then the computations still work, where the further splitting \(\int_{(s+t)/2}^{t-1}+\int_{t-1}^{t}\) is needed in (5.15) and, in the latter integral, \((t-\tau)^{-1/2}\) must be replaced by \((t-\tau)^{-1+3/2r}\). (ii) Let us recall the Hölder estimate of the evolution operator obtained in Proposition 3.5. Since the issue is merely local in time, we need not take care of the large time behavior, and thus we have only to use the first form of each of (5.10)-(5.11), respectively. We employ (3.60) with \((j,q,r)=(0,3,3),(0,3,\infty)\) and \((1,3,3)\) for \(\Lambda_{1}U\). Concerning \(\Lambda_{2}U\), we use (3.60) with \((j,q,r)=(0,2,3),(0,2,\infty)\) and \((1,2,3)\) for \(u\cdot\nabla u\), while \((j,q,r)=(0,3,3),(0,3,\infty)\) and \((1,3,3)\) for \(\eta\cdot\nabla u\). Note that (3.58) is fulfilled for all of those \((j,q,r)\). Then the computations are the same as in the autonomous case with analytic semigroups. The proof is complete.

Set \(\overline{U}(t):=T(t,s)U_{0}\) with \(U_{0}\in X_{3}(\mathbb{R}^{3})\); then (3.51) implies that \(\|\overline{U}\|_{E(t)}\to 0\) as \(t\to s\). By taking into account (3.60) as well, it is easy to see that \(\overline{U}\in E\) together with \[\|\overline{U}\|_{E}\leq c_{*}\|U_{0}\|_{3,\mathbb{R}^{3}},\] with some constant \(c_{*}>0\), which follows from (2.51)-(2.52) provided \(\|U_{b}\|\leq\alpha_{1}\). With Lemma 5.1 at hand, we easily find that the map \[U\mapsto\overline{U}+\Lambda U\] is contractive from the closed ball \(E_{R}=\{U\in E;\;\|U\|_{E}\leq R\}\) with radius \[R=\frac{1}{2c_{2}}\left(\frac{1}{2}-\sqrt{\frac{1}{4}-4c_{2}c_{*}\|U_{0}\|_{3,\mathbb{R}^{3}}}\right)<4c_{*}\|U_{0}\|_{3,\mathbb{R}^{3}} \tag{5.18}\] into itself provided that \[\|U_{b}\|^{\prime}\leq\frac{1}{2c_{1}},\qquad\|U_{0}\|_{3,\mathbb{R}^{3}}<\delta:=\frac{1}{16c_{2}c_{*}}. \tag{5.19}\] The smallness of the basic motion thus reads \[\|U_{b}\|^{\prime}\leq\alpha=\alpha(q_{0},\beta_{0},\theta):=\min\left\{\alpha_{1},\,\frac{1}{2c_{1}}\right\},\] where \(\alpha_{1}=\alpha_{1}(q_{0})\) is the constant given in Proposition 4.3, see (4.35). Then the fixed point \(U\in E_{R}\) provides a solution to (5.1) and also the initial condition \[\lim_{t\to s}\|U(t)-U_{0}\|_{3,\mathbb{R}^{3}}=0\] holds on account of (5.5). Let us remark that uniqueness of solutions to (5.1) still holds within the class \(E\) rather than the ball \(E_{R}\) with small radius (5.18) by means of a standard argument as in Fujita and Kato [12], where the behavior \(\|U\|_{E(t)}\to 0\) for \(t\to s\) plays a role. Actually, even this behavior near the initial time is redundant for uniqueness of the solution constructed above, as pointed out by Brezis [3], where the last assertion on the uniform convergence (3.51) in Proposition 3.4 is employed. Note, however, that uniqueness within the class \(E\) is independent of the existence of solutions, while this is not the case for the latter. All the desired properties of the solution obtained above (except for the large time behavior of the little order) follow from Lemma 5.1 as well as several properties of the evolution operator.
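For the reader's convenience, let us record how the radius (5.18) and the threshold \(\delta\) in (5.19) arise: by (5.6) and \(\|\overline{U}\|_{E}\leq c_{*}\|U_{0}\|_{3,\mathbb{R}^{3}}\), the ball \(E_{R}\) is mapped into itself as soon as
\[c_{*}\|U_{0}\|_{3,\mathbb{R}^{3}}+c_{1}\|U_{b}\|^{\prime}R+c_{2}R^{2}\leq R,\]
and, under \(\|U_{b}\|^{\prime}\leq 1/(2c_{1})\), this reduces to the quadratic inequality \(c_{2}R^{2}-\frac{R}{2}+c_{*}\|U_{0}\|_{3,\mathbb{R}^{3}}\leq 0\), whose smaller root is exactly (5.18); its discriminant \(\frac{1}{4}-4c_{2}c_{*}\|U_{0}\|_{3,\mathbb{R}^{3}}\) is positive precisely when \(\|U_{0}\|_{3,\mathbb{R}^{3}}<\delta\). Moreover, since \(R<4c_{*}\|U_{0}\|_{3,\mathbb{R}^{3}}<1/(4c_{2})\), the Lipschitz constant in (5.7) satisfies \(c_{1}\|U_{b}\|^{\prime}+2c_{2}R\leq\frac{1}{2}+2c_{2}R<1\), so the map is indeed contractive on \(E_{R}\).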
By (ii) of Lemma 5.1 together with (2.56), the term \(H(U)\) given by (2.46) is locally Hölder continuous with values in \(X_{3}(\mathbb{R}^{3})\), so that the solution \(U(t)\) is a strong one ([48, Chapter 5, Theorem 2.3], [52, Theorem 3.9]) as described in Theorem 2.2. Finally, let us close the paper with the verification of the sharp large time behavior of the little order, although this is standard as long as we work with an underlying space in which the class of nice functions is dense. Suppose \(U_{0}\in X_{3}(\mathbb{R}^{3})\cap X_{p}(\mathbb{R}^{3})\) with some \(p\in(1,3)\); then both \(\overline{U}(t)\) and \((\Lambda_{1}U)(t)\) decay to zero in \(X_{3}(\mathbb{R}^{3})\) with a definite rate, say, \((t-s)^{-\gamma}\). In fact, we have only to replace \(\|u_{b}\|_{3,\Omega}\) by \(\|u_{b}\|_{q_{0},\Omega}\) with \(q_{0}\in(3/2,3)\) in (5.13) as for \(\Lambda_{1}U\). On the other hand, it is readily seen that \[\|(\Lambda_{2}U)(t)\|_{3,\mathbb{R}^{3}}\leq c_{0}B(1/2,1/2-\gamma)(t-s)^{-\gamma}\|U\|_{E}\sup_{\tau\in(s,t)}(\tau-s)^{\gamma}\|U(\tau)\|_{3,\mathbb{R}^{3}}\] for all \(t>s\), where \(c_{0}\) is the same constant as in (5.16). Note that the constant \(c_{2}\) in (5.17) should satisfy \(c_{2}\geq\pi c_{0}=c_{0}B(\frac{1}{2},\frac{1}{2})\). By the continuity of the beta function, one can take \(\gamma>0\) so small that \(c_{0}B(\frac{1}{2},\frac{1}{2}-\gamma)\leq 2c_{2}\), which together with (5.18) leads to \[c_{0}B(1/2,1/2-\gamma)\|U\|_{E}\leq 2c_{2}R<8c_{2}c_{*}\|U_{0}\|_{3,\mathbb{R}^{3}}<\frac{1}{2}\] under the condition (5.19). We thus find \[\|U(t)\|_{3,\mathbb{R}^{3}}\leq C(t-s)^{-\gamma}\big{(}\|U_{0}\|_{3,\mathbb{R}^{3}}+\|U_{0}\|_{p,\mathbb{R}^{3}}\big{)},\] from which, combined with the continuity of the solution map \(U(s)\mapsto U\) in the sense that \[\sup_{t\in[s,\infty)}\|U(t)-V(t)\|_{3,\mathbb{R}^{3}}\leq C\|U(s)-V(s)\|_{3,\mathbb{R}^{3}}\] as well as denseness of \(X_{3}(\mathbb{R}^{3})\cap X_{p}(\mathbb{R}^{3})\) in \(X_{3}(\mathbb{R}^{3})\), we conclude that \[\lim_{t-s\to\infty}\|U(t)\|_{3,\mathbb{R}^{3}}=0. \tag{5.20}\] Several papers (including mine) on the Navier-Stokes equations claim that (5.20) is accomplished provided the initial data are still smaller; however, if we look carefully into the estimates above, we see that no smallness beyond (5.19) is needed. This observation is due to Tomoki Takahashi, who is the author of [47]. Once we have (5.20), we can deduce the other decay properties by following the argument as in [8], see also [47]. Let \(\tau_{*}>s\). Using the equation \[U(t)=T(t,\tau_{*})U(\tau_{*})+\int_{\tau_{*}}^{t}T(t,\tau)H(U(\tau))\,d\tau \tag{5.21}\] and performing exactly the same computations as in the proof of Lemma 5.1, we infer \[\begin{split}&(t-\tau_{*})^{1/2}\big{(}\|U(t)\|_{\infty,\mathbb{R}^{3}}+\|\nabla U(t)\|_{3,\mathbb{R}^{3}}\big{)}\\ &\leq C\|U(\tau_{*})\|_{3,\mathbb{R}^{3}}+\big{(}c_{1}\|U_{b}\|^{\prime}+c_{2}\|U\|_{E}\big{)}\sup_{\tau\in(\tau_{*},t)}(\tau-\tau_{*})^{1/2}\|U(\tau)\|_{\infty,\mathbb{R}^{3}}\\ &\leq C\|U(\tau_{*})\|_{3,\mathbb{R}^{3}}+\frac{3}{4}\sup_{\tau\in(\tau_{*},t)}(\tau-\tau_{*})^{1/2}\|U(\tau)\|_{\infty,\mathbb{R}^{3}}\end{split}\] for all \(t>\tau_{*}\) in view of (5.18)-(5.19), where the constants \(c_{1}\) and \(c_{2}\) are the same as those in (5.6)-(5.7). Given \(\varepsilon>0\) arbitrarily, we take \(\tau_{*}-s\) so large that the first term of the right-hand side above is less than \(\varepsilon\), which is indeed possible because of (5.20).
We then have \((t-\tau_{*})^{1/2}\big{(}\|U(t)\|_{\infty,\mathbb{R}^{3}}+\|\nabla U(t)\|_{3,\mathbb{R}^{3}}\big{)}\leq 4\varepsilon\) for all \(t>\tau_{*}\). If \(t-s>2(\tau_{*}-s)\), then we get \[\left(\frac{t-s}{2}\right)^{1/2}\big{(}\|U(t)\|_{\infty,\mathbb{R}^{3}}+\|\nabla U(t)\|_{3,\mathbb{R}^{3}}\big{)}\leq 4\varepsilon,\] which concludes (2.58) except for \(\|\nabla u(t)\|_{r,\Omega}\) with \(r\in(3,\infty)\). For such \(r\), finally, we use (5.21) with \(\tau_{*}\) replaced by \(\frac{t+s}{2}\) and compute it as in the deduction of (5.8) to find \[\begin{split}\|\nabla U(t)\|_{r,\mathbb{R}^{3}}&\leq C(t-s)^{-1/2}\|U((t+s)/2)\|_{3,\mathbb{R}^{3}}\\ &\quad+C(t-s)^{-1/2}\big{(}\|U_{b}\|^{\prime}+\|U\|_{E}\big{)}\sup_{\tau>(t+s)/2}(\tau-s)^{1/2}\|U(\tau)\|_{\infty,\mathbb{R}^{3}}\end{split}\] for \(t-s>2\). Hence, (2.58) with \(q=\infty\) and (5.20) yield (2.58) with \(r\in(3,\infty)\). The proof of Theorem 2.2 is complete.

**Acknowledgments**. I am grateful to Professor Giovanni P. Galdi for stimulating discussions about the subject of the paper. This work is partially supported by the Grant-in-Aid for Scientific Research 22K03372 from JSPS.

**Declarations**

**Conflict of interest.** The author states that there is no conflict of interest.
2309.03334
The 4-Intersection Unprojection Format
Unprojection theory is a philosophy due to Miles Reid, which becomes a useful tool in algebraic geometry for the construction and the study of new interesting geometric objects such as algebraic surfaces and 3-folds. In the present work we introduce a new format of unprojection, which we call the 4-intersection format. It is specified by a codimension 2 complete intersection ideal which is contained in four codimension 3 complete intersection ideals and leads to the construction of codimension 6 Gorenstein rings. As an application, we construct three families of codimension 6 Fano 3-folds embedded in weighted projective space.
Vasiliki Petrotou
2023-09-06T19:22:46Z
http://arxiv.org/abs/2309.03334v1
# The 4-intersection unprojection format ###### Abstract. Unprojection theory is a philosophy due to Miles Reid, which becomes a useful tool in algebraic geometry for the construction and the study of new interesting geometric objects such as algebraic surfaces and 3-folds. In the present work we introduce a new format of unprojection, which we call the \(4\)-intersection format. It is specified by a codimension 2 complete intersection ideal \(I\) which is contained in four codimension 3 complete intersection ideals \(J_{1},J_{2},J_{3},J_{4}\) and leads to the construction of codimension 6 Gorenstein rings. As an application, we construct three families of codimension 6 Fano 3-folds embedded in weighted projective space which correspond to the entries with identifier numbers 29376, 9176 and 24198 respectively in the Graded Ring Database. Key words and phrases: Gorenstein rings, Fano 3-folds, Birational Geometry, Unprojection 2010 Mathematics Subject Classification: Primary 14M05, 14J45; Secondary 13H10, 14E99

## 1. Introduction

The theory of unprojection focuses on constructing and analyzing commutative rings in terms of simpler ones. It also describes, in an intrinsic way, the relation between the commutative rings associated to certain constructions which often appear in algebraic geometry, such as the Castelnuovo blow-down of a \((-1)\)-curve on a surface [32, 33, 36, 39], and also in algebraic combinatorics [4]. Kustin and Miller, motivated by the question of the structure of codimension 4 Gorenstein rings [24, 25, 26, 27, 28], introduced [23] a method that constructs Gorenstein rings with more complicated structure than the initial data, which is now known as Kustin-Miller (or Type I) unprojection. In this unprojection type the codimension increases by 1. Some years later, around 1995, Reid reinterpreted and generalised the construction of Kustin and Miller and formulated the main principles of unprojection theory [39]. His motivation was to provide an algebraic language useful for the study of birational geometry. Since then, many papers on foundational questions of unprojection theory [1, 15, 16, 29, 30, 31, 33, 34, 35, 36, 37, 40] have appeared. We now summarise Reid's formulation of unprojection. Assume that \(J\subset R\) is a codimension 1 ideal with \(R\), \(R/J\) being Gorenstein. Denote by \(i\colon J\to R\) the inclusion map. Then there exists an \(R\)-module homomorphism \(\phi\colon J\to R\) such that the \(R\)-module \(\operatorname{Hom}_{R}(J,R)\) is generated by the set \(\{i,\phi\}\). Using \(\phi\), Reid defined the new unprojection ring, see [37, Definition 3.1]. Papadakis and Reid proved that the ring of unprojection is Gorenstein ([36, Theorem 1.5]). We refer the reader to [36, Example 2.3] for the simplest example of Kustin-Miller unprojection. Unprojection theory has found many applications: in birational geometry, in the study of Fano 3-folds, algebraic surfaces of general type and Mori flips [7, 11, 17, 18, 30], and moreover in the construction of interesting geometric objects such as K3 surfaces, Fano 3-folds and Calabi-Yau 3-folds of high codimension [10, 11, 31, 37]. There are also interesting applications in algebraic combinatorics [3, 4, 5, 6]. Sometimes, especially for the construction of geometric objects of high codimension, it is necessary to perform not only one but a series of unprojections.
Neves and Papadakis [31] developed a theory which is based on the repeated use of Kustin-Miller unprojection and, as a result, produces Gorenstein rings of high codimension, since at each step of the process a new unprojection variable is added. This theory is called parallel Kustin-Miller unprojection. A brief summary of the main aspects of the theory is given in [37, Subsection 3.2]. In this paper we develop a new format of unprojection, which we call the \(4\)-intersection format. It is specified by a codimension \(2\) complete intersection ideal \(I\) with the property that it is contained in four codimension \(3\) complete intersection ideals \(J_{1},\ldots,J_{4}\). Using this data we construct, by parallel Kustin-Miller unprojection, a codimension \(6\) Gorenstein ring. As an application we construct three families of Fano \(3\)-folds of codimension \(6\) embedded in weighted projective space which correspond to the entries with ID: \(29376\), ID: \(9176\), and ID: \(24198\) in the Graded Ring Database [2, 9, 12, 13, 14]. In Section 2 we give some preliminary notions and results that we need in this paper. In Section 3 we introduce the \(4\)-intersection unprojection format. In Subsection 3.1, we give a specific example of the \(4\)-intersection format, and we construct, using parallel unprojection, a codimension \(6\) Gorenstein ring. In Section 4 we give some applications. Subsections 4.1, 4.2 and 4.3 contain three specific \(4\)-dimensional quotients of the ring studied in Section 3. We check, partly using the computer algebra systems Macaulay2 [21] and Singular [19], that the geometric objects they define correspond to the above-mentioned families of Fano \(3\)-folds.

## 2. Preliminaries

We start by recalling some notions and results that are required for the rest of the paper. Denote by \(k=\mathbb{C}\) the field of complex numbers.

**Definition 2.1**.: Let \(R\) be a Noetherian ring and \(I\subset R\) an ideal. We define the _codimension_ of \(I\) in \(R\), denoted by \(\operatorname{codim}I\), as follows: \[\operatorname{codim}I=\dim R-\dim R/I.\]

**Theorem 2.2**.: _(Krull's Principal Ideal Theorem) Assume that \(R\) is a local Noetherian ring and \(I\) is an ideal of \(R\) which is generated by \(n\) elements. Then, \(\operatorname{codim}I\leq n\)._

For a proof of Theorem 2.2, see for example [8, p. 414]. In the present work, we will call an ideal \(I\) of a polynomial ring \(k[x_{1},\ldots,x_{n}]\) over a field \(k\) a _complete intersection ideal_ if \(I\) can be generated by \(\operatorname{codim}I\) elements. We refer to [8, Section 2.3] for more details about this notion.

**Definition 2.3**.: A Noetherian local ring \(R\) is called _Gorenstein_ if it has finite injective dimension as an \(R\)-module. More generally, a Noetherian ring \(R\) is called Gorenstein if for every maximal ideal \(\mathfrak{m}\) of \(R\) the localization \(R_{\mathfrak{m}}\) is Gorenstein.

**Definition 2.4**.: An ideal \(I\) of a Gorenstein ring \(R\) is called _Gorenstein_ if the quotient ring \(R/I\) is Gorenstein.

**Theorem 2.5**.: _(Serre) Let \(R=k[x_{1},\ldots,x_{n}]/I\) be the quotient of the polynomial ring in \(n\) variables of positive degree by a homogeneous ideal \(I\). If \(\operatorname{codim}I=1\) or \(2\) then_ \[R\text{ is Gorenstein}\Leftrightarrow I\text{ is a complete intersection.}\]

For a proof of Theorem 2.5, see for example [20, Corollary 21.20].

**Definition 2.6**.: Let \(R\) be a Gorenstein local ring and \(J\subset R\) a codimension \(1\) ideal such that the quotient ring \(R/J\) is Gorenstein.
Under these assumptions, the \(R\)-module \(\operatorname{Hom}_{R}(J,R)\) is generated by the inclusion map \(i\colon J\to R\) and an extra homomorphism \(\phi\colon J\to R\) ([36, Lemma 1.1]). Denote by \(T\) a new unprojection variable. We call _Kustin-Miller unprojection ring_, \(\operatorname{Unpr}(J,R)\), of the pair \(J\subset R\) the quotient \[\operatorname{Unpr}(J,R)=\frac{R[T]}{(Tr-\phi(r):r\in J)}.\]

**Theorem 2.7**.: _([36, Theorem 1.5]) The Kustin-Miller unprojection ring \(\operatorname{Unpr}(J,R)\) is Gorenstein._

More discussion of the motivation and basic examples of Kustin-Miller unprojection can be found in [36, 37, 39]. For the main principles of parallel Kustin-Miller unprojection we refer the reader to [31, 37].

### Unprojection of a codimension 2 complete intersection inside a codimension 3 complete intersection

In this subsection we specify a codimension 2 complete intersection ideal \(I\) and a codimension 3 complete intersection ideal \(J\) such that \(I\subset J\). Following [33, Section 4], we give the explicit description of the unprojection ring \(\operatorname{Unpr}(J/I,R/I)\) of the pair \(J/I\subset R/I\). Let \(R=k[a_{i},b_{i},x_{j}]\), where \(1\leq i\leq 3\) and \(j\in\{1,3,5\}\), be the standard graded polynomial ring in 9 variables over a field \(k\). We set \[f_{1}=a_{1}x_{1}+a_{2}x_{3}+a_{3}x_{5},\qquad f_{2}=b_{1}x_{1}+b_{2}x_{3}+b_{3}x_{5},\] and consider the ideals \[I=(f_{1},f_{2}),\qquad J=(x_{1},x_{3},x_{5})\] of \(R\). We denote by \(A\) the \(2\times 3\) matrix \[A=\begin{pmatrix}a_{1}&a_{2}&a_{3}\\ b_{1}&b_{2}&b_{3}\end{pmatrix}\] and, for \(1\leq i\leq 3\), by \(A_{i}\) the \(2\times 2\) submatrix of \(A\) obtained by removing the \(i\)-th column of \(A\).

**Proposition 2.8**.: _The ideal \(I\) is a homogeneous codimension \(2\) Gorenstein ideal of \(R\) and the ideal \(J\) is a homogeneous codimension \(3\) Gorenstein ideal. Moreover, \(I\) is a subset of \(J\)._

**Proof.** We first prove that \(\operatorname{codim}I=2\). The ideal \(I\) is generated by two homogeneous polynomials of \(R\) of degree \(2\). Hence, by Theorem 2.2, \(\operatorname{codim}I\leq 2\). To prove the claim it is enough to show that \(\operatorname{codim}I\geq 2\). We set \(f_{3}=-b_{1}f_{1}+a_{1}f_{2}\). Let \(>\) be the lexicographic order on \(R\) with \[a_{1}>a_{2}>a_{3}>b_{1}>b_{2}>b_{3}>x_{1}>x_{3}>x_{5}.\] We denote by \(Q\) the initial ideal of \(I\) with respect to \(>\). It is well-known that \(\operatorname{codim}I=\operatorname{codim}Q\). We set \[L=(a_{1}x_{1},b_{1}x_{1},a_{1}b_{2}x_{3}).\] Since the initial term of \(f_{1}\) is \(a_{1}x_{1}\), the initial term of \(f_{2}\) is \(b_{1}x_{1}\) and the initial term of \(f_{3}\) is \(a_{1}b_{2}x_{3}\), we have \(L\subset Q\), hence \(\operatorname{codim}L\leq\operatorname{codim}Q\). We consider the affine variety \(X=V(L)\subset\mathbb{A}^{9}\). It holds that \[X=V(x_{1},x_{3})\cup V(b_{2},x_{1})\cup V(a_{1},b_{1})\cup V(a_{1},x_{1}),\] hence \(\dim X=9-2=7\). Using that \[\dim\ R/L=\dim\ X,\] it follows that \(\operatorname{codim}L=2\). Hence \(\operatorname{codim}I\geq 2\). Therefore, by Theorem 2.5 the ideal \(I\) is Gorenstein. We now prove that \(\operatorname{codim}J=3\). According to the Third Isomorphism Theorem of rings, \[R/J\cong k[a_{1},a_{2},a_{3},b_{1},b_{2},b_{3}].\] So, \(\dim\ R/J=6\). Hence, \[\operatorname{codim}\ J=\dim\ R-\dim\ R/J=3.\] By the last isomorphism, the ideal \(J\) is Gorenstein.
By the equality of matrices \[\begin{pmatrix}f_{1}\\ f_{2}\end{pmatrix}=A\begin{pmatrix}x_{1}\\ x_{3}\\ x_{5}\end{pmatrix}\] it follows that \(I\subset J\). \(\square\)

We set, for \(1\leq i\leq 3\), \(h_{i}\) to be the determinant of the matrix \(A_{i}\). Denote by \[\phi\colon J/I\to R/I\] the map such that \[\phi(x_{1}+I)=h_{1}+I,\ \ \phi(x_{3}+I)=-h_{2}+I,\ \ \phi(x_{5}+I)=h_{3}+I.\] By [33, Theorem 4.3], \(\operatorname{Hom}_{R/I}(J/I,R/I)\) is generated as \(R/I\)-module by the inclusion map \(i\) and \(\phi\). As a corollary, \[\operatorname{Unpr}(J/I,R/I)=\frac{R[T]}{I+(Tx_{1}-h_{1},Tx_{3}-(-h_{2}),Tx_{5}-h_{3})}.\]

## 3. The \(4\)-intersection unprojection format

In this section we introduce the notion of \(4\)-intersection unprojection format.

**Definition 3.1**.: Assume that \(J_{1},\ldots,J_{4}\) are four codimension \(3\) complete intersection ideals and \(I\) is a codimension \(2\) complete intersection ideal. We say that \(I\) is a \(4\)-intersection ideal in \(J_{1},\ldots,J_{4}\) if \(I\subset J_{t}\) for all \(1\leq t\leq 4\).

An important question is how to explicitly construct \(I\) and \(J_{t}\) such that \(I\) is a \(4\)-intersection ideal in \(J_{1},\ldots,J_{4}\). In Subsection 3.1 we present such a construction.

### A specific \(4\)-intersection unprojection format

In the present subsection we specify the following: a codimension \(2\) complete intersection ideal \(I\) and four codimension \(3\) complete intersection ideals \(J_{1},\ldots,J_{4}\) such that \(I\) is a \(4\)-intersection ideal in \(J_{1},\ldots,J_{4}\). Using this configuration as initial data, we construct, by parallel Kustin-Miller unprojection [31], a codimension \(6\) Gorenstein ring. Assume that \(k\) is a field. We consider the standard graded polynomial ring \(R=k[c_{i},x_{i}]\), where \(1\leq i\leq 6\). We set \[f=c_{1}x_{1}x_{2}+c_{2}x_{3}x_{4}+c_{3}x_{5}x_{6},\qquad g=c_{4}x_{1}x_{2}+c_{5}x_{3}x_{4}+c_{6}x_{5}x_{6},\] \(I=(f,g)\) and \[J_{1}=(x_{1},x_{3},x_{5}),\ J_{2}=(x_{1},x_{4},x_{6}),\ J_{3}=(x_{2},x_{3},x_{6}),\ J_{4}=(x_{2},x_{4},x_{5}).\] It is clear that \(f,g\) are homogeneous elements of degree \(3\) and \(I\) is a \(4\)-intersection ideal in the ideals \(J_{1},\ldots,J_{4}\). In the applications we need to specialize the variables \(c_{i}\) to elements of \(k\). We now give a precise way to do that. Consider the Zariski open subset \[\mathcal{U}=\{(u_{1},\ldots,u_{6})\in\mathbb{A}^{6}:u_{i}\neq 0\ \text{for all}\ 1\leq i\leq 6\}.\] We assume that \((d_{1},\ldots,d_{6})\in\mathcal{U}\). We denote by \(\hat{R}=k[x_{1},\ldots,x_{6}]\) the polynomial ring in the variables \(x_{i}\). Let \[\hat{\phi}\colon R\to\hat{R}\] be the unique \(k\)-algebra homomorphism such that \[\hat{\phi}(x_{i})=x_{i},\ \ \ \hat{\phi}(c_{i})=d_{i}\] for all \(1\leq i\leq 6\). We denote by \(\hat{I}\) the ideal of the ring \(\hat{R}\) generated by the subset \(\hat{\phi}(I)\).

**Proposition 3.2**.: _The ideals \(I\) and \(\hat{I}\) are homogeneous codimension \(2\) Gorenstein ideals._

**Proof.** Since \(I\) is generated by two elements, we have, by Theorem 2.2, that \(\operatorname{codim}I\leq 2\). Now we show that \(\operatorname{codim}I\geq 2\). We set \[r_{1}=-c_{4}f+c_{1}g,\qquad r_{2}=g,\qquad r_{3}=f.\] Let \(>\) be the lexicographic order on \(R\) with \(c_{1}>\cdots>c_{6}>x_{1}>\cdots>x_{6}\).
Consider the ideal \[L=(\operatorname{in}_{>}(r_{1}),\operatorname{in}_{>}(r_{2}),\operatorname{in}_{>}(r_{3})),\] where \(\operatorname{in}_{>}(r_{1})=x_{3}x_{4}c_{1}c_{5}\), \(\operatorname{in}_{>}(r_{2})=x_{1}x_{2}c_{4}\) and \(\operatorname{in}_{>}(r_{3})=x_{1}x_{2}c_{1}\). We now prove that \(\operatorname{codim}L=2\). It is enough to show that \(\dim\,R/L=10\). Consider the affine variety \(X=V(L)\subset\mathbb{A}^{12}\). It holds that \[X=V(c_{4},c_{1})\cup V(c_{5},x_{1})\cup V(x_{4},x_{1})\cup V(x_{3},x_{1})\cup V(c_{1},x_{1})\cup V(c_{5},x_{2})\cup V(x_{4},x_{2})\cup V(x_{3},x_{2})\cup V(c_{1},x_{2}).\] Using that \[\dim\,R/L=\dim\,X,\] the claim is proven. Hence, \(\operatorname{codim}I\geq 2\). In what follows we show that the ideal \(\hat{I}\) is also a codimension \(2\) Gorenstein ideal. We set \[\tilde{r_{1}}=\hat{\phi}(r_{1}),\qquad\tilde{r_{2}}=\hat{\phi}(r_{2}).\] Let \(>\) be the lexicographic order on \(\hat{R}\) with \(x_{1}>\cdots>x_{6}\). Consider the ideal \[Q=(\operatorname{in}_{>}(\tilde{r_{1}}),\operatorname{in}_{>}(\tilde{r_{2}})),\] where \(\operatorname{in}_{>}(\tilde{r_{1}})=x_{3}x_{4}d_{1}d_{5}\), \(\operatorname{in}_{>}(\tilde{r_{2}})=x_{1}x_{2}d_{4}\). It is immediate that \(Q=(x_{3}x_{4},x_{1}x_{2})\). It is enough to show that \(\dim\,\hat{R}/Q=4\). Consider the affine variety \(Y=V(Q)\subset\mathbb{A}^{6}\). It holds that \[Y=V(x_{2},x_{4})\cup V(x_{2},x_{3})\cup V(x_{1},x_{3})\cup V(x_{1},x_{4}).\] Using that \[\dim\,\hat{R}/Q=\dim\,Y,\] the claim is proven. Hence, \(\operatorname{codim}\hat{I}\geq 2\). By Theorem 2.5, the ideals \(I\) and \(\hat{I}\) are Gorenstein.

**Proposition 3.3**.: _(i) For all \(t\) with \(1\leq t\leq 4\), the ideal \(J_{t}/I\) is a codimension \(1\) homogeneous ideal of the quotient ring \(R/I\) such that the ring \(R/J_{t}\) is Gorenstein._

_(ii) For all \(t,s\) with \(1\leq t<s\leq 4\), it holds that \(\operatorname{codim}_{R/I}(J_{t}/I+J_{s}/I)=3\)._

**Proof.** We first prove \((i)\). According to the Third Isomorphism Theorem of rings, \[R/J_{1}\cong k[c_{1},\ldots,c_{6},x_{2},x_{4},x_{6}],\ \ R/J_{2}\cong k[c_{1},\ldots,c_{6},x_{2},x_{3},x_{5}], \tag{1}\] \[R/J_{3}\cong k[c_{1},\ldots,c_{6},x_{1},x_{4},x_{5}],\ \ R/J_{4}\cong k[c_{1},\ldots,c_{6},x_{1},x_{3},x_{6}].\] So, we conclude that for all \(t\) with \(1\leq t\leq 4\), \[\dim\,R/J_{t}=9.\] By Proposition 3.2, it follows that \[\dim\,R/I=\dim\,R-\operatorname{codim}\,I=10.\] Hence, using the last two equalities we have that for all \(t\) with \(1\leq t\leq 4\), \[\operatorname{codim}\,J_{t}/I=1.\] Due to the isomorphisms (1), for all \(t\) with \(1\leq t\leq 4\) the ring \(R/J_{t}\) is Gorenstein. Concerning claim \((ii)\), the Third Isomorphism Theorem of rings implies that \[R/(J_{1}+J_{2})\cong k[c_{1},\ldots,c_{6},x_{2}],\ \ R/(J_{1}+J_{3})\cong k[c_{1},\ldots,c_{6},x_{4}],\] \[R/(J_{1}+J_{4})\cong k[c_{1},\ldots,c_{6},x_{6}],\ \ R/(J_{2}+J_{3})\cong k[c_{1},\ldots,c_{6},x_{5}],\] \[R/(J_{2}+J_{4})\cong k[c_{1},\ldots,c_{6},x_{3}],\ \ R/(J_{3}+J_{4})\cong k[c_{1},\ldots,c_{6},x_{1}].\] From the latter isomorphisms it holds that for \(t,s\) with \(1\leq t<s\leq 4\), \[\dim\,R/(J_{t}+J_{s})=7.\] Recall that \(\dim\,R/I=10\). Taking into account the definition of codimension, we conclude that for all \(t,s\) with \(1\leq t<s\leq 4\), \[\operatorname{codim}\,(J_{t}/I+J_{s}/I)=3.\] For all \(t\), with \(1\leq t\leq 4\), we denote by \(i_{t}\colon J_{t}/I\to R/I\) the inclusion map. In what follows, we define \(\phi_{t}\colon J_{t}/I\to R/I\) for all \(t\), with \(1\leq t\leq 4\), and prove that these maps satisfy the assumptions of [31, Theorem 2.3].
Recall the polynomials \(h_{1},h_{2},h_{3}\) which were defined in Section 2.1. We denote by \(\widetilde{h_{1}},\widetilde{h_{2}},\widetilde{h_{3}}\) the polynomials which occur from \(h_{1},h_{2},h_{3}\) if we substitute \[a_{1}=c_{1}x_{2},\,a_{2}=c_{2}x_{4},\,a_{3}=c_{3}x_{6},\,b_{1}=c_{4}x_{2},\,b_{2}=c_{5}x_{4},\,b_{3}=c_{6}x_{6}.\]

**Proposition 3.4**.: _There exists a unique graded homomorphism of \(R/I\)-modules \(\phi_{1}\colon J_{1}/I\to R/I\) such that_ \[\phi_{1}(x_{1}+I)=\widetilde{h_{1}}+I,\ \ \phi_{1}(x_{3}+I)=-\widetilde{h_{2}}+I,\ \ \phi_{1}(x_{5}+I)=\widetilde{h_{3}}+I,\] _with the sign convention of Section 2.1._

Proof.: It follows from [31, Theorem 4.3].

For the definition of \(\phi_{2}\) we replace \(x_{3}\) by \(x_{4}\) and \(x_{5}\) by \(x_{6}\). In this case, \(\widetilde{h_{1}},\widetilde{h_{2}},\widetilde{h_{3}}\) are the polynomials which occur from \(h_{1},h_{2},h_{3}\) if we substitute \[a_{1}=c_{1}x_{2},\,a_{2}=c_{2}x_{3},\,a_{3}=c_{3}x_{5},\,b_{1}=c_{4}x_{2},\,b_{2}=c_{5}x_{3},\,b_{3}=c_{6}x_{5}.\] For the definitions of \(\phi_{3}\) and \(\phi_{4}\) we work similarly. For all \(t\), with \(1\leq t\leq 4\), the degree of \(\phi_{t}\) is equal to \(3\). By the discussion after [31, Proposition 2.1] the new unprojection variable has degree equal to the degree of the corresponding \(\phi_{t}\).

**Proposition 3.5**.: _For all \(t\), with \(1\leq t\leq 4\), the \(R/I\)-module \(\operatorname{Hom}_{R/I}(J_{t}/I,R/I)\) is generated by the two elements \(i_{t}\) and \(\phi_{t}\)._

Proof.: It follows from [33, Theorem 4.3].

For all \(t,s\), with \(1\leq t,s\leq 4\) and \(t\neq s\), we define \(r_{ts}=0\).

**Proposition 3.6**.: _For all \(t,s\), with \(1\leq t,s\leq 4\) and \(t\neq s\), it holds that_ \[\phi_{t}(J_{t}/I)\subset J_{s}/I.\]

Proof.: It is a direct computation using the definition of the maps \(\phi_{t}\).

**Proposition 3.7**.: _For all \(t,s\), with \(1\leq t,s\leq 4\) and \(t\neq s\), there exists a homogeneous element \(A_{st}\) such that_ \[\phi_{s}(\phi_{t}(p))=A_{st}p\] _for all \(p\in J_{t}/I\)._

Proof.: It follows from [31, Proposition 2.1].

**Remark 3.8**.: We note that the elements \(A_{st}\) are polynomial expressions in the variables \(c_{i}\) and \(x_{j}\). We computed them using the computer algebra program Macaulay2 [21]. We now write down \(A_{12}\): \[A_{12}=(x_{2}^{2})(c_{3}c_{4}-c_{1}c_{6})(-c_{2}c_{4}+c_{1}c_{5}).\] Applying symmetry, one can get formulas for all \(A_{st}\).

Following [31, Section 2], we write down explicitly the final ring as a quotient of a polynomial ring by a codimension \(6\) ideal.

**Definition 3.9**.: Let \(T_{1},T_{2},T_{3},T_{4}\) be four new variables of degree \(3\). We define as \(I_{un}\) the ideal \[(I)+(T_{1}x_{1}-\phi_{1}(x_{1}),\,T_{1}x_{3}-\phi_{1}(x_{3}),\,T_{1}x_{5}-\phi_{1}(x_{5}),\,T_{2}x_{1}-\phi_{2}(x_{1}),\,T_{2}x_{4}-\phi_{2}(x_{4}),\,T_{2}x_{6}-\phi_{2}(x_{6}),\] \[T_{3}x_{2}-\phi_{3}(x_{2}),\,T_{3}x_{3}-\phi_{3}(x_{3}),\,T_{3}x_{6}-\phi_{3}(x_{6}),\,T_{4}x_{2}-\phi_{4}(x_{2}),\,T_{4}x_{4}-\phi_{4}(x_{4}),\,T_{4}x_{5}-\phi_{4}(x_{5}),\] \[T_{2}T_{1}-A_{21},\,T_{3}T_{1}-A_{31},\,T_{4}T_{1}-A_{41},\,T_{3}T_{2}-A_{32},\,T_{4}T_{2}-A_{42},\,T_{4}T_{3}-A_{43})\] of the polynomial ring \(R[T_{1},T_{2},T_{3},T_{4}]\). We set \(R_{un}=R[T_{1},T_{2},T_{3},T_{4}]/I_{un}\).

**Remark 3.10**.: The reason we put, for all \(1\leq i\leq 4\), \(\deg T_{i}=3\) is that each homomorphism \(\phi_{i}\) is graded of degree \(3\).
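Before proceeding, let us note that the sign convention above (the one inherited from Section 2.1) can be checked by a direct symbolic computation. The following short script is an illustrative verification only, not part of the construction, and it assumes a standard Python installation with the SymPy library. In the notation of Section 2.1, it checks that the assignment \(x_{1}\mapsto h_{1}\), \(x_{3}\mapsto-h_{2}\), \(x_{5}\mapsto h_{3}\) maps the three Koszul syzygies of \((x_{1},x_{3},x_{5})\) into \(I=(f_{1},f_{2})\) and annihilates the two syzygies coming from \(f_{1},f_{2}\in I\), so that \(\phi\) is well defined; after the substitutions listed above, these identities specialize to the corresponding statements for \(\phi_{1}\) (and similarly for \(\phi_{2},\phi_{3},\phi_{4}\)).

```python
import sympy as sp

a1, a2, a3, b1, b2, b3, x1, x3, x5 = sp.symbols('a1 a2 a3 b1 b2 b3 x1 x3 x5')

# Generators of I and the 2x2 minors h_i of the matrix A (Section 2.1)
f1 = a1*x1 + a2*x3 + a3*x5
f2 = b1*x1 + b2*x3 + b3*x5
A = sp.Matrix([[a1, a2, a3], [b1, b2, b3]])
h1 = A.extract([0, 1], [1, 2]).det()  # delete the 1st column of A
h2 = A.extract([0, 1], [0, 2]).det()  # delete the 2nd column of A
h3 = A.extract([0, 1], [0, 1]).det()  # delete the 3rd column of A

# phi(x1) = h1, phi(x3) = -h2, phi(x5) = h3: each Koszul syzygy of
# (x1, x3, x5) is mapped into I = (f1, f2) ...
assert sp.expand(x3*h1 - x1*(-h2) - (b3*f1 - a3*f2)) == 0
assert sp.expand(x5*h1 - x1*h3 - (a2*f2 - b2*f1)) == 0
assert sp.expand(x5*(-h2) - x3*h3 - (b1*f1 - a1*f2)) == 0
# ... and the two syzygies induced by f1, f2 are mapped to zero
# (Laplace expansion of a 3x3 determinant with a repeated row):
assert sp.expand(a1*h1 + a2*(-h2) + a3*h3) == 0
assert sp.expand(b1*h1 + b2*(-h2) + b3*h3) == 0
print("phi is a well-defined homomorphism J/I -> R/I")
```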
We also note that according to [31, Proposition 2.1] the degree of each \(A_{st}\) is equal to \(6\).

**Theorem 3.11**.: _The ring \(R_{un}\) is Gorenstein._

**Proof.** By Propositions 3.3, 3.4 and 3.6, the assumptions of [31, Theorem 2.3] are satisfied. Hence, the ring \(R_{un}\) is Gorenstein.

**Proposition 3.12**.: _The homogeneous ideal \(I_{un}\) is a codimension \(6\) ideal with a minimal generating set of \(20\) elements._

**Proof.** According to the grading of the variables and the discussion before Proposition 3.5, it is not difficult to see that \(I_{un}\) is a homogeneous ideal. Recall that in Kustin-Miller unprojection the codimension increases by \(1\). Hence, the homogeneous ideal \(I_{un}\), as a result of a series of four unprojections of Kustin-Miller type starting from the codimension \(2\) ideal \(I\), is a codimension \(6\) ideal. In order to prove that \(I_{un}\) is minimally generated by \(20\) elements we use the idea of specialization. More precisely we set \[c_{1}=c_{3}=c_{5}=c_{6}=0\] and \[c_{2}=c_{4}=1\] in the ideal \(I_{un}\). We call \(\widetilde{I_{un}}\) the ideal which occurs after these substitutions. The ideal \(\widetilde{I_{un}}\) is a homogeneous ideal with \(16\) monomials and \(4\) binomials as generators. It is not difficult to see that \(\widetilde{I_{un}}\) is minimally generated by these elements. Hence, we conclude that \(I_{un}\) is generated by at least \(20\) elements. By Definition 3.9, \(I_{un}\) is generated by \(20\) homogeneous elements. The result follows.

## 4. Applications

In this section we prove, using Theorem 3.11, the existence of three families of Fano \(3\)-folds of codimension \(6\) in weighted projective space. For some basic definitions and facts related to singularities and Fano \(3\)-folds which appear throughout this section we refer to [37, Section 3]. We note that in what follows we make essential use of the computer algebra systems Macaulay2 [21] and Singular [19]. The first construction is summarised in the following theorem. It corresponds to the entry \(29376\) of the Graded Ring Database [2, 9, 12, 13, 14]. More details for the construction are given in Subsection 4.1.

**Theorem 4.1**.: _There exists a family of quasismooth, projectively normal and projectively Gorenstein Fano \(3\)-folds \(X\subset\mathbb{P}(1^{8},2,3)\), nonsingular away from one quotient singularity \(\frac{1}{3}(1,1,2)\), with Hilbert series_ \[P_{X}(t)=\frac{1-6t^{2}+15t^{4}-20t^{6}+15t^{8}-6t^{10}+t^{12}}{(1-t)^{8}(1-t^{2})(1-t^{3})}.\]

The second construction is summarised in the following theorem. It corresponds to the entry \(9176\) of the Graded Ring Database. More details for the construction are given in Subsection 4.2.

**Theorem 4.2**.: _There exists a family of quasismooth, projectively normal and projectively Gorenstein Fano \(3\)-folds \(X\subset\mathbb{P}(1^{2},2^{5},3^{3})\), nonsingular away from eight quotient singularities \(\frac{1}{2}(1,1,1)\), with Hilbert series_ \[P_{X}(t)=\frac{1-6t^{4}-8t^{5}+2t^{6}+24t^{7}+21t^{8}-16t^{9}-36t^{10}-16t^{11}+21t^{12}+24t^{13}+2t^{14}-8t^{15}-6t^{16}+t^{20}}{(1-t)^{2}(1-t^{2})^{5}(1-t^{3})^{3}}.\]

The third construction is summarised in the following theorem. It corresponds to the entry \(24198\) of the Graded Ring Database. More details for the construction are given in Subsection 4.3.
**Theorem 4.3**.: _There exists a family of quasismooth, projectively normal and projectively Gorenstein Fano \(3\)-folds \(X\subset\mathbb{P}(1^{6},2^{3},3)\), nonsingular away from two quotient singularities \(\frac{1}{2}(1,1,1)\) and one quotient singularity \(\frac{1}{3}(1,1,2)\), with Hilbert series_ \[P_{X}(t)=\frac{1-t^{2}-10t^{3}+5t^{4}+24t^{5}-5t^{6}-28t^{7}-5t^{8}+24t^{9}+5t^{10}-10t^{11}-t^{12}+t^{14}}{(1-t)^{6}(1-t^{2})^{3}(1-t^{3})}.\]

### Construction of Graded Ring Database entry with ID: 29376

In this subsection, we give the details of the construction for the family described in Theorem 4.1. Denote by \(k=\mathbb{C}\) the field of complex numbers. Consider the polynomial ring \(R=k[x_{i},c_{i}]\), where \(1\leq i\leq 6\). Let \(R_{un}\) be the ring in Definition 3.9 and \(\hat{R}=k[x_{1},\ldots,x_{6}]\) be the polynomial ring in the variables \(x_{i}\). We substitute the variables \((c_{1},\ldots,c_{6})\) which appear in the definitions of the rings \(R\) and \(R_{un}\) with a general element of \(k^{6}\) (in the sense of being outside a proper Zariski closed subset of \(k^{6}\)). Let \(\hat{I}\) be the ideal of \(\hat{R}\) obtained from the ideal \(I\) and \(\hat{I}_{un}\) the ideal of \(\hat{R}[T_{1},T_{2},T_{3},T_{4}]\) obtained from the ideal \(I_{un}\) after this substitution. We set \(\hat{R}_{un}=\hat{R}[T_{1},T_{2},T_{3},T_{4}]/\hat{I}_{un}\). In what follows \(x_{1},x_{3},x_{5}\) are variables of degree \(1\) and \(x_{2},x_{4},x_{6}\) are variables of degree \(2\). Hence, from the discussion before Proposition 3.5 it follows that the degrees of \(T_{2},T_{3},T_{4}\) are equal to \(1\) and the degree of \(T_{1}\) is equal to \(3\). According to this grading the ideals \(\hat{I}\) and \(\hat{I}_{un}\) are homogeneous. Due to Theorem 3.11, Proj \(\hat{R}_{un}\subset\mathbb{P}(1^{6},2^{3},3)\) is a projectively Gorenstein \(3\)-fold. Let \(A=k[w_{1},w_{2},T_{2},T_{3},T_{4},x_{1},x_{3},x_{5},x_{6},T_{1}]\) be the polynomial ring over \(k\) with \(w_{1},w_{2}\) variables of degree \(1\) and the remaining variables with the degrees noted above.
Consider the unique \(k\)-algebra homomorphism \[\psi\colon\hat{R}[T_{1},T_{2},T_{3},T_{4}]\to A\] such that \[\psi(x_{1})=x_{1},\ \ \psi(x_{2})=f_{1},\ \ \psi(x_{3})=x_{3},\ \ \psi(x_{4})=f_{2},\] \[\psi(x_{5})=x_{5},\ \ \psi(x_{6})=x_{6},\ \ \psi(T_{1})=T_{1},\ \ \psi(T_{2})=T_{2},\] \[\psi(T_{3})=T_{3},\ \ \psi(T_{4})=T_{4},\] where \(f_{1}=l_{1}x_{1}^{2}+l_{2}x_{1}x_{3}+l_{3}x_{3}^{2}+l_{4}x_{1}x_{5}+l_{5}x_{3}x_{5}+l_{6}x_{5}^{2}+l_{7}x_{1}T_{2}+l_{8}x_{3}T_{2}+l_{9}x_{5}T_{2}+l_{10}T_{2}^{2}+l_{11}x_{1}T_{3}+l_{12}x_{3}T_{3}+l_{13}x_{5}T_{3}+l_{14}T_{2}T_{3}+l_{15}T_{3}^{2}+l_{16}x_{1}T_{4}+l_{17}x_{3}T_{4}+l_{18}x_{5}T_{4}+l_{19}T_{2}T_{4}+l_{20}T_{3}T_{4}+l_{21}T_{4}^{2}+l_{22}x_{1}w_{1}+l_{23}x_{3}w_{1}+l_{24}x_{5}w_{1}+l_{25}T_{2}w_{1}+l_{26}T_{3}w_{1}+l_{27}T_{4}w_{1}+l_{28}w_{1}^{2}+l_{29}x_{1}w_{2}+l_{30}x_{3}w_{2}+l_{31}x_{5}w_{2}+l_{32}T_{2}w_{2}+l_{33}T_{3}w_{2}+l_{34}T_{4}w_{2}+l_{35}w_{1}w_{2}+l_{36}w_{2}^{2}+l_{37}x_{6},\) \(f_{2}=l_{38}x_{1}^{2}+l_{39}x_{1}x_{3}+l_{40}x_{3}^{2}+l_{41}x_{1}x_{5}+l_{42}x_{3}x_{5}+l_{43}x_{5}^{2}+l_{44}x_{1}T_{2}+l_{45}x_{3}T_{2}+l_{46}x_{5}T_{2}+l_{47}T_{2}^{2}+l_{48}x_{1}T_{3}+l_{49}x_{3}T_{3}+l_{50}x_{5}T_{3}+l_{51}T_{2}T_{3}+l_{52}T_{3}^{2}+l_{53}x_{1}T_{4}+l_{54}x_{3}T_{4}+l_{55}x_{5}T_{4}+l_{56}T_{2}T_{4}+l_{57}T_{3}T_{4}+l_{58}T_{4}^{2}+l_{59}x_{1}w_{1}+l_{60}x_{3}w_{1}+l_{61}x_{5}w_{1}+l_{62}T_{2}w_{1}+l_{63}T_{3}w_{1}+l_{64}T_{4}w_{1}+l_{65}w_{1}^{2}+l_{66}x_{1}w_{2}+l_{67}x_{3}w_{2}+l_{68}x_{5}w_{2}+l_{69}T_{2}w_{2}+l_{70}T_{3}w_{2}+l_{71}T_{4}w_{2}+l_{72}w_{1}w_{2}+l_{73}w_{2}^{2}+l_{74}x_{6},\) and \((l_{1},\ldots,l_{74})\in k^{74}\) are general. In other words, \(f_{1},f_{2}\) are two general degree \(2\) homogeneous elements of \(A\). Denote by \(Q\) the ideal of the ring \(A\) generated by the subset \(\psi(\hat{I}_{un})\). Let \(X=V(Q)\subset\mathbb{P}(1^{8},2,3)\). It is immediate that \(X\subset\mathbb{P}(1^{8},2,3)\) is a codimension \(6\) projectively Gorenstein \(3\)-fold.

**Proposition 4.4**.: _The ring \(A/Q\) is an integral domain._

**Proof.** It is enough to show that the ideal \(Q\) is prime. For a specific choice of rational values for the parameters \(c_{i},l_{j}\), for \(1\leq i\leq 6\) and \(1\leq j\leq 74\), we checked, using the computer algebra program Macaulay2, that the ideal which was obtained by specialization from \(Q\) is a homogeneous, codimension \(6\), prime ideal with the right Betti table. \(\square\)

In what follows, we show that the only singularity of \(X\subset\mathbb{P}(1^{8},2,3)\) is a quotient singularity of type \(\frac{1}{3}(1,1,2)\). According to the discussion after [37, Definition 2.7], \(X\) belongs to the Mori category. The proof of the following proposition is based on a computation with the computer algebra system Singular [19] using the strategy described in [37, Proposition 6.4] and is omitted.

**Proposition 4.5**.: _Consider \(X=V(Q)\subset\mathbb{P}(1^{8},2,3)\). Denote by \(X_{cone}\subset\mathbb{A}^{10}\) the affine cone over \(X\). The scheme \(X_{cone}\) is smooth outside the vertex of the cone._

**Remark 4.6**.: For the computation of the singular locus of weighted projective space in Proposition 4.7, we follow [22, Section 5].

**Proposition 4.7**.: _Consider the singular locus_ \[\text{Sing}(\mathbb{P}(1^{8},2,3))=\{[0:0:0:0:0:0:0:0:1:0]\}\cup\{[0:0:0:0:0:0:0:0:0:1]\}\] _of the weighted projective space \(\mathbb{P}(1^{8},2,3)\).
The intersection of \(X\) with \(\text{Sing}(\mathbb{P}(1^{8},2,3))\) consists of a unique reduced point which is a quotient singularity of type \(\frac{1}{3}(1,1,2)\) for \(X\)._

**Proof.** We checked with the computer algebra program Macaulay2 that the intersection of \(X\) with \(\text{Sing}(\mathbb{P}(1^{8},2,3))\) consists of one reduced point. We denote this point by \(P\). The point \(P\) corresponds to the ideal \((x_{i},T_{j},w_{k})\) for \(i\in\{1,3,5,6\}\), \(2\leq j\leq 4\), \(1\leq k\leq 2\). By Proposition 4.5, \(X\) is smooth outside \(P\). Around \(P\) we have \(T_{1}\neq 0\), so we may set \(T_{1}=1\). Looking at the equations of \(Q\) we can eliminate the variables \(x_{1},x_{3},x_{5},T_{2},T_{3},T_{4}\) since these variables appear in the set of equations multiplied by \(T_{1}\). This means that \(P\) is a quotient singularity of type \(\frac{1}{3}(1,1,2)\). \(\square\)

**Lemma 4.8**.: _Let \(\omega_{\hat{R}/\hat{I}}\) be the canonical module of \(\hat{R}/\hat{I}\). It holds that the canonical module \(\omega_{\hat{R}/\hat{I}}\) is isomorphic to \(\hat{R}/\hat{I}(-3)\)._

**Proof.** From the minimal graded free resolution of \(\hat{R}/\hat{I}\) as \(\hat{R}\)-module \[0\rightarrow\hat{R}(-6)\rightarrow\hat{R}(-3)^{2}\rightarrow\hat{R}\] and the fact that the sum of the degrees of the variables is equal to \(9\), we conclude that \[\omega_{\hat{R}/\hat{I}}=\hat{R}/\hat{I}(6-9)=\hat{R}/\hat{I}(-3).\] \(\square\)

**Proposition 4.9**.: _The minimal graded resolution of \(A/Q\) as \(A\)-module is equal to_ \[0\to C_{6}\to C_{5}\to C_{4}\to C_{3}\to C_{2}\to C_{1}\to C_{0}\to 0 \tag{2}\] _where_ \[\begin{split}C_{6}&=A(-12),\qquad C_{5}=A(-8)^{6}\oplus A(-9)^{8}\oplus A(-10)^{6},\\ C_{4}&=A(-6)^{8}\oplus A(-7)^{24}\oplus A(-8)^{24}\oplus A(-9)^{8},\\ C_{3}&=A(-4)^{3}\oplus A(-5)^{24}\oplus A(-6)^{36}\oplus A(-7)^{24}\oplus A(-8)^{3},\\ C_{2}&=A(-3)^{8}\oplus A(-4)^{24}\oplus A(-5)^{24}\oplus A(-6)^{8},\\ C_{1}&=A(-2)^{6}\oplus A(-3)^{8}\oplus A(-4)^{6},\qquad C_{0}=A.\end{split}\] _Moreover, the canonical module of \(A/Q\) is isomorphic to \((A/Q)(-1)\) and the Hilbert series of \(A/Q\) as graded \(A\)-module is equal to_ \[\frac{1-6t^{2}+15t^{4}-20t^{6}+15t^{8}-6t^{10}+t^{12}}{(1-t)^{8}(1-t^{2})(1-t^{3})}.\]

**Proof.** The computation of the minimal graded free resolution of \(A/Q\) is based on the method which is described in the proof of [30, Proposition 3.4]. Using the minimal graded free resolution (2) of \(A/Q\) and that the sum of the degrees of the variables is equal to \(13\), we conclude that \[\omega_{A/Q}=A/Q(12-13)=A/Q(-1).\] The last conclusion of Proposition 4.9 follows easily from the resolution (2). \(\square\)

By Propositions 4.5, 4.7 and 4.9, it follows that \(X\) is a Fano \(3\)-fold.

### Construction of Graded Ring Database entry with ID: 9176

In this subsection we sketch the construction of the family of Fano 3-folds described in Theorem 4.2. Denote by \(k=\mathbb{C}\) the field of complex numbers. Consider the polynomial ring \(R=k[x_{i},c_{i}]\), where \(1\leq i\leq 6\). Let \(R_{un}\) be the ring in Definition 3.9 and \(\hat{R}=k[x_{1},\ldots,x_{6}]\) be the polynomial ring in the variables \(x_{i}\). We substitute the variables \((c_{1},\ldots,c_{6})\) which appear in the definitions of the rings \(R\) and \(R_{un}\) with a general element of \(k^{6}\) (in the sense of being outside a proper Zariski closed subset of \(k^{6}\)).
Let \(\hat{I}\) be the ideal of \(\hat{R}\) which is obtained from the ideal \(I\), and \(\hat{I}_{un}\) the ideal of \(\hat{R}[T_{1},T_{2},T_{3},T_{4}]\) which is obtained from the ideal \(I_{un}\) after this substitution. We set \(\hat{R}_{un}=\hat{R}[T_{1},T_{2},T_{3},T_{4}]/\hat{I}_{un}\). In what follows, \(x_{1},x_{3},x_{5}\) are variables of degree 2 and \(x_{2},x_{4},x_{6}\) are variables of degree 3. Hence, from the discussion before Proposition 3.5 it follows that the degrees of \(T_{2},T_{3},T_{4}\) are equal to 2 and the degree of \(T_{1}\) is equal to 4. According to this grading, the ideals \(\hat{I}\) and \(\hat{I}_{un}\) are homogeneous. Due to Theorem 3.11, Proj \(\hat{R}_{un}\subset\mathbb{P}(2^{6},3^{3},4)\) is a projectively Gorenstein 3-fold. Let \(A=k[w_{1},w_{2},x_{1},x_{5},T_{2},T_{3},T_{4},x_{2},x_{4},x_{6}]\) be the polynomial ring over \(k\) with \(w_{1},w_{2}\) variables of degree 1 and the other variables with degrees as noted above. Consider the unique \(k\)-algebra homomorphism \[\psi\colon\hat{R}[T_{1},T_{2},T_{3},T_{4}]\to A\] such that \[\begin{array}{llll}\psi(x_{1})=x_{1},&\psi(x_{2})=x_{2},&\psi(x_{3})=f_{1},&\psi(x_{4})=x_{4},\\ \psi(x_{5})=x_{5},&\psi(x_{6})=x_{6},&\psi(T_{1})=f_{2},&\psi(T_{2})=T_{2},\\ &\psi(T_{3})=T_{3},&\psi(T_{4})=T_{4}\end{array}\] where \(f_{1}=l_{1}w_{1}^{2}+l_{2}w_{1}w_{2}+l_{3}w_{2}^{2}+l_{4}x_{1}+l_{5}x_{5}+l_{6}T_{2}+l_{7}T_{3}+l_{8}T_{4},\) \(f_{2}=l_{9}w_{1}^{4}+l_{10}w_{1}^{3}w_{2}+l_{11}w_{1}^{2}w_{2}^{2}+l_{12}w_{1}w_{2}^{3}+l_{13}w_{2}^{4}+l_{14}w_{1}^{2}x_{1}+l_{15}w_{1}w_{2}x_{1}+l_{16}w_{2}^{2}x_{1}+l_{17}x_{1}^{2}+l_{18}w_{1}^{2}x_{5}+l_{19}w_{1}w_{2}x_{5}+l_{20}w_{2}^{2}x_{5}+l_{21}x_{1}x_{5}+l_{22}x_{5}^{2}+l_{23}w_{1}^{2}T_{2}+l_{24}w_{1}w_{2}T_{2}+l_{25}w_{2}^{2}T_{2}+l_{26}x_{1}T_{2}+l_{27}x_{5}T_{2}+l_{28}T_{2}^{2}+l_{29}w_{1}^{2}T_{3}+l_{30}w_{1}w_{2}T_{3}+l_{31}w_{2}^{2}T_{3}+l_{32}x_{1}T_{3}+l_{33}x_{5}T_{3}+l_{34}T_{2}T_{3}+l_{35}T_{3}^{2}+l_{36}w_{1}^{2}T_{4}+l_{37}w_{1}w_{2}T_{4}+l_{38}w_{2}^{2}T_{4}+l_{39}x_{1}T_{4}+l_{40}x_{5}T_{4}+l_{41}T_{2}T_{4}+l_{42}T_{3}T_{4}+l_{43}T_{4}^{2}+l_{44}w_{1}x_{2}+l_{45}w_{2}x_{2}+l_{46}w_{1}x_{4}+l_{47}w_{2}x_{4}+l_{48}w_{1}x_{6}+l_{49}w_{2}x_{6},\) and \((l_{1},\ldots,l_{49})\in k^{49}\) are general. In other words, \(f_{1}\) is a general degree 2 homogeneous element of \(A\) and \(f_{2}\) is a general degree 4 homogeneous element of \(A\). Denote by \(Q\) the ideal of the ring \(A\) generated by the subset \(\psi(\hat{I}_{un})\). Let \(X=V(Q)\subset\mathbb{P}(1^{2},2^{5},3^{3})\). It is immediate that \(X\subset\mathbb{P}(1^{2},2^{5},3^{3})\) is a codimension 6 projectively Gorenstein 3-fold. **Proposition 4.10**.: _The ring \(A/Q\) is an integral domain._ **Proof.** It is enough to show that the ideal \(Q\) is prime. For a specific choice of rational values for the parameters \(c_{i},l_{j}\), for \(1\leq i\leq 6\) and \(1\leq j\leq 49\), we checked, using the computer algebra program Macaulay2, that the ideal which was obtained by specialization from \(Q\) is a homogeneous, codimension 6, prime ideal with the right Betti table. \(\square\) In what follows, we show that the only singularities of \(X\subset\mathbb{P}(1^{2},2^{5},3^{3})\) are eight quotient singularities of type \(\frac{1}{2}(1,1,1)\). According to the discussion after [37, Definition 2.7], \(X\) belongs to the Mori category. The proof of the following proposition is based on a computation with the computer algebra system Singular [19] using the strategy described in [37, Proposition 6.4] and is omitted.
**Proposition 4.11**.: _Consider \(X=V(Q)\subset\mathbb{P}(1^{2},2^{5},3^{3})\). Denote by \(X_{cone}\subset\mathbb{A}^{10}\) the affine cone over \(X\). The scheme \(X_{cone}\) is smooth outside the vertex of the cone._ **Remark 4.12**.: For the computation of singular locus of weighted projective space in Proposition 4.13, we follow [22, Section 5]. **Proposition 4.13**.: _Consider the singular locus_ \[\text{Sing}(\mathbb{P}(1^{2},2^{5},3^{3}))=F_{1}\cup F_{2}\] _where,_ \[F_{1}=\{[0:0:a:b:c:d:e:0:0:0]:[a:b:c:d:e]\in\mathbb{P}^{4}\}\] _and_ \[F_{2}=\{[0:0:0:0:0:0:0:a:b:c]:[a:b:c]\in\mathbb{P}^{2}\}\] _of the weighted projective space \(\mathbb{P}(1^{2},2^{5},3^{3})\). The intersection of \(X\) with \(\text{Sing}(\mathbb{P}(1^{2},2^{5},3^{3}))\) consists of eight reduced points which are quotient singularities of type \(\frac{1}{2}(1,1,1)\) for \(X\)._ **Proof.** We proved with the computer algebra program Macaulay2 that the intersection of \(X\) with \(Z\) consists of eight reduced points. Following the strategy of the proof of Proposition 4.7, we checked that each of these points is a quotient singularity of type \(\frac{1}{2}(1,1,1)\). **Lemma 4.14**.: _Let \(\omega_{\hat{R}/\hat{I}}\) be the canonical module of \(\hat{R}/\hat{I}\). It holds that the canonical module \(\omega_{\hat{R}/\hat{I}}\) is isomorphic to \(\hat{R}/\hat{I}(-5)\)._ **Proof.** From the minimal graded free resolution of \(\hat{R}/\hat{I}\) as \(\hat{R}\)-module \[0\rightarrow\hat{R}(-10)\rightarrow\hat{R}(-5)^{2}\rightarrow\hat{R}\] and the fact that the sum of the degrees of the variables is equal to \(15\) we conclude that \[\omega_{\hat{R}/\hat{I}}=\hat{R}/\hat{I}(10-15)=\hat{R}/\hat{I}(-5).\] \(\square\) **Proposition 4.15**.: _The minimal graded resolution of \(A/Q\) as \(A\)-module is equal to_ \[0\to C_{6}\to C_{5}\to C_{4}\to C_{3}\to C_{2}\to C_{1}\to C_{0}\to 0 \tag{3}\] _where_ \[C_{6} =A(-20),\qquad C_{5}=A(-14)^{6}\oplus A(-15)^{8}\oplus A(-16)^{6},\] \[C_{4} =A(-11)^{8}\oplus A(-12)^{24}\oplus A(-13)^{24}\oplus A(-14)^{8},\] \[C_{3} =A(-8)^{3}\oplus A(-9)^{24}\oplus A(-10)^{36}\oplus A(-11)^{24} \oplus A(-12)^{3},\] \[C_{2} =A(-6)^{8}\oplus A(-7)^{24}\oplus A(-8)^{24}\oplus A(-9)^{8},\] \[C_{1} =A(-4)^{6}\oplus A(-5)^{8}\oplus A(-6)^{6},\qquad C_{0}=A.\] _Moreover, the canonical module of \(A/Q\) is isomorphic to \((A/Q)(-1)\) and the Hilbert series of \(A/Q\) as graded \(A\)-module is equal to_ \[\frac{1-6t^{4}-8t^{5}+2t^{6}+24t^{7}+21t^{8}-16t^{9}-36t^{10}-16t^{11}+21t^{12} +24t^{13}+2t^{14}-8t^{15}-6t^{16}+t^{20}}{(1-t)^{2}(1-t^{2})^{5}(1-t^{3})^{3}}.\] **Proof.** The computation of the minimal graded free resolution of \(A/Q\) is based on the method which is described in the proof of [30, Proposition 3.4]. Using the minimal graded free resolution (3) of \(A/Q\) and that the sum of the degrees of the variables is equal to \(21\) we conclude that \[\omega_{A/Q}=A/Q(20-21)=A/Q(-1).\] The last conclusion of Proposition 4.15 follows easily from the resolution (3). By Propositions 4.11, 4.13 and 4.15, it follows that \(X\) is a Fano 3-fold. ### Construction of Graded Ring Database entry with ID: 24198 In this final subsection, we sketch the construction for the family of Fano 3-folds which is described in Theorem 4.3. Denote by \(k=\mathbb{C}\) the field of complex numbers. Consider the polynomial ring \(R=k[x_{i},c_{i}]\), where \(1\leq i\leq 6\). 
Let \(R_{un}\) be the ring in Definition 3.9 and \(\hat{R}=k[x_{1},\dots,x_{6},c_{3},c_{6}]\) be the polynomial ring in the variables \(x_{i}\) and \(c_{3},c_{6}\). We substitute the variables \((c_{1},c_{2},c_{4},c_{5})\) which appear in the definitions of the rings \(R\) and \(R_{un}\) with a general element of \(k^{4}\) (in the sense of being outside a proper Zariski closed subset of \(k^{4}\)). Let \(\hat{I}\) be the ideal of \(\hat{R}\) which is obtained by the ideal \(I\) and \(\hat{I}_{un}\) the ideal of \(\hat{R}[T_{1},T_{2},T_{3},T_{4}]\) which is obtained by the ideal \(I_{un}\) after this substitution. We set \(\hat{R}_{un}=\hat{R}[T_{1},T_{2},T_{3},T_{4}]/\hat{I}_{un}\). In what follows \(x_{1},x_{3},x_{5},x_{6},c_{3},c_{6}\) are variables of degree \(1\) and \(x_{2},x_{4}\) are variables of degree \(2\). Hence, from the discussion before the Proposition 3.5 it follows that the degree of \(T_{1}\) is equal to \(3\), the degrees of \(T_{2},T_{3}\) are equal to \(2\) and the degree of \(T_{4}\) is equal to \(1\). According to this grading the ideals \(\hat{I}\) and \(\hat{I}_{un}\) are homogeneous. Due to Theorem 3.11, Proj \(\hat{R}_{un}\subset\mathbb{P}(1^{7},2^{4},3)\) is a projectively Gorenstein \(5\)-fold. Let \(A=k[x_{1},x_{3},x_{5},x_{6},c_{3},c_{6},x_{2},x_{4},T_{3},T_{1}]\) be the polynomial ring with variables of degree noted as above. Consider the unique \(k\)-algebra homomorphism \[\psi\colon\hat{R}[T_{1},T_{2},T_{3},T_{4}]\to A\] such that \[\psi(x_{1})=x_{1},\ \ \psi(x_{2})=x_{2},\ \ \psi(x_{3})=x_{3},\ \ \psi(x_{4})=x_{4},\] \[\psi(x_{5})=x_{5},\ \ \psi(x_{6})=x_{6},\ \ \psi(c_{3})=c_{3},\ \ \psi(c_{6})=c_{6},\] \[\psi(T_{1})=T_{1},\ \ \psi(T_{2})=f_{1},\ \ \psi(T_{3})=T_{3},\ \ \psi(T_{4})=f_{2}\] where, \(f_{1}=l_{1}x_{1}^{2}+l_{2}x_{1}x_{3}+l_{3}x_{3}^{2}+l_{4}x_{1}x_{5}+l_{5}x_{3}x _{5}+l_{6}x_{5}^{2}+l_{7}x_{1}x_{6}+l_{8}x_{3}x_{6}+l_{9}x_{5}x_{6}+l_{10}x_{6} ^{2}+l_{11}x_{1}c_{3}+l_{12}x_{3}c_{3}+l_{13}x_{5}c_{3}+l_{14}x_{6}c_{3}+l_{15} c_{3}^{2}+l_{16}x_{1}c_{6}+l_{17}x_{3}c_{6}+l_{18}x_{5}c_{6}+l_{19}x_{6}c_{6}+ l_{20}c_{3}c_{6}+l_{21}c_{6}^{2}+l_{22}x_{2}+l_{23}x_{4}+l_{24}T_{3},\) \(f_{2}=l_{25}x_{1}+l_{26}x_{3}+l_{27}x_{5}+l_{28}x_{6}+l_{29}c_{3}+l_{30}c_{6},\) and \((l_{1},\dots,l_{30})\in k^{30}\) are general. In other words, \(f_{1}\) is a general degree \(2\) homogeneous element of \(A\) and \(f_{2}\) is a general degree \(1\) homogeneous element of \(A\). Denote by \(Q\) the ideal of the ring A generated by the subset \(\psi(\hat{I}_{un})\). Let \(X=V(Q)\subset\mathbb{P}(1^{6},2^{3},3)\). It is immediate that \(X\subset\mathbb{P}(1^{6},2^{3},3)\) is a codimension \(6\) projectively Gorenstein \(3\)-fold. **Proposition 4.16**.: _The ring \(A/Q\) is an integral domain._ **Proof.** It is enough to show that the ideal \(Q\) is prime. For a specific choice of rational values for the parameters \(c_{i},l_{j}\), for \(i\in\{1,2,4,5\}\) and \(1\leq j\leq 30\) we checked using the computer algebra program Macaulay2 that the ideal which was obtained by \(Q\) is a homogeneous, codimension \(6\), prime ideal with the right Betti table. \(\square\) In what follows, we show that the only singularities of \(X\subset\mathbb{P}(1^{6},2^{3},3)\) are two quotient singularities of type \(\frac{1}{2}(1,1,1)\) and one quotient singularity of type \(\frac{1}{3}(1,1,2)\). According to the discussion after [37, Definition 2.7], \(X\) belongs to the Mori category. 
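As an elementary consistency check, the graded Betti numbers appearing in the resolutions above determine the numerator of the Hilbert series as the alternating sum of the twists. The following is a minimal sketch in Python with SymPy verifying this for Propositions 4.9 and 4.15; it is an independent cross-check, not part of the Macaulay2 and Singular computations used in the proofs.

```python
from sympy import symbols, expand

t = symbols('t')

def numerator_from_betti(betti):
    """Alternating sum over a graded free resolution: a summand A(-d)^m in
    homological degree i contributes (-1)**i * m * t**d to the numerator."""
    return expand(sum((-1)**i * m * t**d
                      for i, layer in enumerate(betti)
                      for d, m in layer.items()))

# Graded Betti numbers read off from Proposition 4.9 (X in P(1^8, 2, 3)).
betti_49 = [{0: 1},
            {2: 6, 3: 8, 4: 6},
            {3: 8, 4: 24, 5: 24, 6: 8},
            {4: 3, 5: 24, 6: 36, 7: 24, 8: 3},
            {6: 8, 7: 24, 8: 24, 9: 8},
            {8: 6, 9: 8, 10: 6},
            {12: 1}]
assert numerator_from_betti(betti_49) == expand(
    1 - 6*t**2 + 15*t**4 - 20*t**6 + 15*t**8 - 6*t**10 + t**12)

# Same check for Proposition 4.15 (X in P(1^2, 2^5, 3^3)).
betti_415 = [{0: 1},
             {4: 6, 5: 8, 6: 6},
             {6: 8, 7: 24, 8: 24, 9: 8},
             {8: 3, 9: 24, 10: 36, 11: 24, 12: 3},
             {11: 8, 12: 24, 13: 24, 14: 8},
             {14: 6, 15: 8, 16: 6},
             {20: 1}]
assert numerator_from_betti(betti_415) == expand(
    1 - 6*t**4 - 8*t**5 + 2*t**6 + 24*t**7 + 21*t**8 - 16*t**9 - 36*t**10
    - 16*t**11 + 21*t**12 + 24*t**13 + 2*t**14 - 8*t**15 - 6*t**16 + t**20)
print("Hilbert series numerators agree with the stated resolutions.")
```

The same check applies verbatim to the resolution in Proposition 4.21 below.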
The proof of Proposition 4.17 below is based on a computation with the computer algebra system Singular [19] using the strategy described in [37, Proposition 6.4] and is omitted. **Proposition 4.17**.: _Consider \(X=V(Q)\subset\mathbb{P}(1^{6},2^{3},3)\). Denote by \(X_{cone}\subset\mathbb{A}^{10}\) the affine cone over \(X\). The scheme \(X_{cone}\) is smooth outside the vertex of the cone._ **Remark 4.18**.: For the computation of the singular locus of the weighted projective space in Proposition 4.19, we follow [22, Section 5]. **Proposition 4.19**.: _Consider the singular locus_ \[\text{Sing}(\mathbb{P}(1^{6},2^{3},3))=F_{1}\cup\{[0:0:0:0:0:0:0:0:0:1]\}\] _where,_ \[F_{1}=\{[0:0:0:0:0:0:a:b:c:0]:[a:b:c]\in\mathbb{P}^{2}\}\] _of the weighted projective space \(\mathbb{P}(1^{6},2^{3},3)\). The intersection of \(X\) with \(\text{Sing}(\mathbb{P}(1^{6},2^{3},3))\) consists of two reduced points which are quotient singularities of type \(\frac{1}{2}(1,1,1)\) and one reduced point which is a quotient singularity of type \(\frac{1}{3}(1,1,2)\) for \(X\)._ **Proof**.: We proved with the computer algebra program Macaulay2 that the intersection of \(X\) with \(Z\) consists of three reduced points. Following the strategy of the proof of Proposition 4.7, we checked that two of these points are quotient singularities of type \(\frac{1}{2}(1,1,1)\) and the third point is a quotient singularity of type \(\frac{1}{3}(1,1,2)\) for \(X\). **Lemma 4.20**.: _Let \(\omega_{\hat{R}/\hat{I}}\) be the canonical module of \(\hat{R}/\hat{I}\). It holds that the canonical module \(\omega_{\hat{R}/\hat{I}}\) is isomorphic to \(\hat{R}/\hat{I}(-4)\)._ **Proof**.: From the minimal graded free resolution of \(\hat{R}/\hat{I}\) as \(\hat{R}\)-module \[0\to\hat{R}(-6)\to\hat{R}(-3)^{2}\to\hat{R}\] and the fact that the sum of the degrees of the variables is equal to \(10\), we conclude that \[\omega_{\hat{R}/\hat{I}}=\hat{R}/\hat{I}(6-10)=\hat{R}/\hat{I}(-4).\] **Proposition 4.21**.: _The minimal graded resolution of \(A/Q\) as \(A\)-module is equal to_ \[0\to C_{6}\to C_{5}\to C_{4}\to C_{3}\to C_{2}\to C_{1}\to C_{0}\to 0 \tag{4}\] _where_ \[C_{6} =A(-14),\qquad C_{5}=A(-9)^{2}\oplus A(-10)^{7}\oplus A(-11)^{10}\oplus A(-12)^{1},\] \[C_{4} =A(-7)^{4}\oplus A(-8)^{20}\oplus A(-9)^{28}\oplus A(-10)^{12},\] \[C_{3} =A(-5)^{2}\oplus A(-6)^{25}\oplus A(-7)^{36}\oplus A(-8)^{25}\oplus A(-9)^{2},\] \[C_{2} =A(-4)^{12}\oplus A(-5)^{28}\oplus A(-6)^{20}\oplus A(-7)^{4},\] \[C_{1} =A(-2)^{1}\oplus A(-3)^{10}\oplus A(-4)^{7}\oplus A(-5)^{2},\qquad C_{0}=A.\] _Moreover, the canonical module of \(A/Q\) is isomorphic to \((A/Q)(-1)\) and the Hilbert series of \(A/Q\) as graded \(A\)-module is equal to_ \[\frac{1-t^{2}-10t^{3}+5t^{4}+24t^{5}-5t^{6}-28t^{7}-5t^{8}+24t^{9}+5t^{10}-10t^{11}-t^{12}+t^{14}}{(1-t)^{6}(1-t^{2})^{3}(1-t^{3})}.\] **Proof**.: The computation of the minimal graded free resolution of \(A/Q\) is based on the method which is described in the proof of [30, Proposition 3.4]. Using the minimal graded free resolution (4) of \(A/Q\) and that the sum of the degrees of the variables is equal to \(15\), we conclude that \[\omega_{A/Q}=A/Q(14-15)=A/Q(-1).\] The last conclusion of Proposition 4.21 follows easily from the resolution (4). By Propositions 4.17, 4.19 and 4.21, it follows that \(X\) is a Fano 3-fold. ## Acknowledgements I would like to thank Stavros Papadakis for important discussions and suggestions which have improved the present paper.
I benefited from experiments with the computer algebra programs Macaulay2 [21] and Singular [19]. Part of this work is contained in my PhD Thesis [38], carried out at the University of Ioannina, Greece. This work was financially supported by Horizon Europe ERC Grant number: 101045750 / Project acronym: Hodge-GeoComb, with principal investigator Karim Adiprasito, whom I warmly thank.
2309.14890
New Revival Phenomena for Bidirectional Dispersive Hyperbolic Equations
In this paper, the dispersive revival and fractalization phenomena for bidirectional dispersive equations on a bounded interval subject to periodic boundary conditions and discontinuous initial profiles are investigated. Firstly, we study the periodic initial-boundary value problem of the linear beam equation with step function initial data, and analyze the manifestation of the revival phenomenon for the corresponding solution at rational times. Next, we extend the investigation to periodic initial-boundary value problems of more general bidirectional dispersive equations. We prove that, if the initial functions are of bounded variation, the dynamical evolution of such periodic problems depends essentially upon the large wave number asymptotics of the associated dispersion relations. Integral polynomial or asymptotically integral polynomial dispersion relations produce dispersive revival/fractalization rational/irrational dichotomies, whereas those with non-polynomial growth result in fractal profiles at all times. Finally, numerical experiments, in the concrete case of the nonlinear beam equation, are used to demonstrate how such effects persist into the nonlinear regime.
George Farmakis, Jing Kang, Peter J. Olver, Changzheng Qu, Zihan Yin
2023-09-26T12:48:31Z
http://arxiv.org/abs/2309.14890v1
# New revival phenomena for bidirectional dispersive hyperbolic equations ###### Abstract. In this paper, the dispersive revival and fractalization phenomena for bidirectional dispersive equations on a bounded interval subject to periodic boundary conditions and discontinuous initial profiles are investigated. Firstly, we study the periodic initial-boundary value problem of the linear beam equation with step function initial data, and analyze the manifestation of the revival phenomenon for the corresponding solution at rational times. Next, we extend the investigation to periodic initial-boundary value problems of more general bidirectional dispersive equations. We prove that, if the initial functions are of bounded variation, the dynamical evolution of such periodic problems depends essentially upon the large wave number asymptotics of the associated dispersion relations. Integral polynomial or asymptotically integral polynomial dispersion relations produce dispersive revival/fractalization rational/irrational dichotomies, whereas those with non-polynomial growth result in fractal profiles at all times. Finally, numerical experiments, in the concrete case of the nonlinear beam equation, are used to demonstrate how such effects persist into the nonlinear regime. _Key words and phrases:_ beam equation; revival; fractalization; Talbot effect; dispersive equation. Mathematics Subject Classification (2020): 37K55, 35Q51 ## 1. Introduction This paper is devoted to the study of the periodic initial-boundary value problem for bidirectional dispersive partial differential equations. We prove that, for linear equations, if the initial condition at time zero is a step function or, more generally, a function of bounded variation, the time evolution of the bidirectional dispersive equations subject to periodic boundary conditions will exhibit new revival phenomena at rational times, of a different form from that previously observed in unidirectional dispersive evolution equations, whereas at irrational times the solution exhibits a continuous, but non-differentiable fractal profile. The term "revival" is based on the experimentally observed phenomenon of quantum revival [3, 40], in which an electron that is initially concentrated near a single location of its orbital shell is re-concentrated near a finite number of orbital locations at certain times. A precursor of the revival phenomenon was observed as far back as 1836 in a striking optical experiment, [37], conducted by William Henry Fox Talbot. This motivated the pioneering work of Berry and his collaborators, [1, 2, 3], on what they called the Talbot effect in the context of the linear free space Schrodinger equation. Rigorous analytical results and estimates justifying the Talbot effect can be found in the work of Kapitanski and Rodnianski, [26, 32], Oskolkov, [30, 31], and Taylor, [38]. The Talbot effect governs, in the quantum mechanical setting, the behavior of rough solutions subject to periodic boundary conditions. The evolution of the rough initial profile, for instance, a step function, also known as the Riemann problem [41], "quantizes" into a dispersive revival profile at rational times, but "fractalizes" into a continuous but nowhere differentiable profile having a specific fractal dimension at irrational times.
In [7, 27], the same Talbot effect, which the authors called dispersive quantization and fractalization, was shown to appear in general periodic linear dispersive equations possessing an "integral polynomial" (a polynomial with integer coefficients) dispersion relation, which included the prototypical linearized Schrodinger and Korteweg-de Vries (KdV) equations. Based on these investigations, one learns that a linear dispersive equation admitting a polynomial dispersion relation and subject to periodic boundary conditions will exhibit the revival phenomenon at each rational time, which means that the fundamental solution, i.e., that induced by a delta function initial condition, localizes into a finite linear combination of delta functions. This has the remarkable consequence that the solution, to any initial value problem, at rational times is a finite linear combination of translates of the initial data and hence its value at any point on the periodic domain depends only upon finitely many of the initial values. In [28], the revival phenomenon for the linear free space Schrodinger equation subject to pseudo-periodic boundary conditions was investigated, see also [4] for the same model and for the quasi-periodic linear KdV equation. In [6], a more general revival phenomenon, that produces dispersively quantized cusped solutions of the periodic Riemann problem for three linear integro-differential equations, including the Benjamin-Ono equation, the Intermediate Long Wave equation and the Smith equation were studied. More recently, these phenomena were shown to extend to multi-component dispersive equations, see [42]. For a class of two-component linear systems of dispersive evolution equations, the dispersive quantization conditions, which may yield quantized structures for step-function initial value at rational times, are provided. Inspired by these linear results, the phenomena of dispersive quantization and fractalization for the periodic Riemann problem for nonlinear dispersive evolution equations on periodic domains, including the integrable nonlinear Schrodinger (NLS), KdV and modified KdV (mKdV) equations as well as non-integrable versions with higher-order nonlinearities were studied numerically in [8]. Erdogan, Tzirakis and their collaborators established rigorous results on the fractalization for the nonlinear equations at a dense set of times. Quantifying the irrational time fractalization in terms of the estimate on the fractal dimension, their results, on the one hand extend the results of Oskolkov and Rodnianski to a class of nonlinear integer polynomial dispersive equations subject to initial data of bounded variation, and, on the other hand, confirm the numerical observations of fractalization in [8]. Erdogan and Tzirakis studied the cubic NLS and KdV equations on a periodic domain with initial data of bounded variation in [13] and [14], respectively. Subsequently, together with Chousionis, they obtained some results on the Minkowski dimension of the fractalization profiles for dispersive linear partial differential equations with monomial dispersion relation [10]. We refer the reader to the survey texts [12, 15] for irrational time fractalization results. See also the recent survey [35]. To date, investigations have almost all concentrated on unidirectional dispersive systems. 
In the present paper, we will show that the dispersive revival/fractalization rational/irrational dichotomy extends to bidirectional dispersive equations of the form \[u_{tt}=L[u], \tag{1.1}\] where \(L\) is a scalar differential operator with constant coefficients. Obviously, equation (1.1) is equivalent to the following two-component evolutionary system \[u_{t}=v,\qquad v_{t}=L[u], \tag{1.2}\] which, however, does not satisfy the dispersive quantization conditions given in [42]. As we describe below, in the bidirectional setting, if we set the initial conditions equal to the same step function, then the solution of the corresponding periodic Riemann problem will exhibit qualitative dispersive quantization behaviour, of a different form than the standard piecewise constant solutions admitted by the unidirectional systems, such as the linear KdV and Schrodinger equations and their associated multi-component generalizations. Interestingly, in the concrete case of the linear beam equation \[u_{tt}+u_{xxxx}=0, \tag{1.3}\] these solutions at rational times \(t^{*}=\pi p/q\) with \(q>2\) appear to be piecewise parabolic, non-constant between jump discontinuities, whereas at the rational times \(t^{0}_{k}=\pi(2k-1)/2\), the solution becomes a continuously differentiable curve, with analytical expression (2.19). Similar studies were initiated in the recent Ph.D. thesis of one of the authors [16], which studies the revival property in bidirectional dispersive equations (1.1) where the operator \(L\) is an even-order poly-Laplacian, which includes the linear wave equation and the beam equation (1.3), subject to periodic and quasi-periodic boundary conditions. We should further mention that from a general perspective, the form of the revival effect in the periodic bidirectional problems considered here resembles that of the revival effect in the free linear Schrodinger equation with Robin boundary conditions \(bu(t,0)=(1-b)u_{x}(t,\pi)\), where \(b\in(0,1)\) is a parameter, see [4]. Indeed, in both cases the solution at rational times is given as the sum of the revival of the initial condition and a more regular function, which can be considered as a weak type of revival. Other models in the literature that exhibit such weak revivals include the periodic cubic NLS and KdV equations [13, 14], the periodic linear Schrodinger equation with periodic potential [33, 9] and the linear Schrodinger equation subject to Dirichlet boundary conditions [5]. The linear beam equation, which is studied in Section 2, is a typical example of a model with the dispersion relation of the form \(\omega(k)=\pm k^{N},\ 2\leq N\in\mathbb{Z}^{+}\), namely when \(N=2\). Although it is a special case, it motivates the study of the more general case and illustrates the idea and the method in the proof. More importantly, through the analysis and derivation of its periodic initial-boundary value problem, we arrive at some classical results for the Riemann zeta function. This implies that the periodic initial-boundary value problems for such systems can provide an alternative mechanism for establishing such classical identities. The three concrete goals of the present paper are as follows. The first is to investigate the new phenomenon of dispersive revival in greater detail, by examining the periodic initial-boundary value problems for bidirectional dispersive equations.
We will provide an explicit characterization of the solution profiles of the periodic Riemann problem for the linear beam equation, leading to the general form of dispersive revival for bidirectional periodic initial-boundary value problems with various dispersion relations, including integral polynomial and non-polynomial. Our main results contain the analytic description of the new phenomena of the dispersive revival, which can be found in Section 2 for the linear beam equation, and Section 3 for general bidirectional equations, respectively. In the particular case of monomial dispersion relations, we present an alternative approach. Secondly, with the aim of showing that such effects can persist into the nonlinear regime, we present numerical simulations, based on the Fourier spectral method, of the periodic Riemann problem for the nonlinear beam equation in Section 4. Numerical approximation supplies strong evidence that the dispersive revival/fractalization rational/irrational dichotomy persists into the nonlinear regime, whereas, when compared with the unidirectional systems, the nonlinear terms induce greater variations of the curve profiles, including their convexities. Finally, in the course of our analysis, we find that the solutions at rational times of the periodic Riemann problem for bidirectional dispersive equations with integral polynomial dispersion relations are closely related to identities for the Riemann zeta function, which is of great importance in analytic number theory. In summary, these new revival phenomena warrant further investigation, both mathematically and in terms of their potential applications. ## 2. Revival for the linear beam equation The starting point is the periodic initial-boundary value problem for the linear beam equation on the interval \(0\leq x\leq 2\pi\): \[\left\{\begin{array}{l}u_{tt}+u_{xxxx}=0,\\ u(0,x)=f(x),\qquad\qquad u_{t}(0,x)=g(x),\\ \partial_{x}^{j}u(t,0)=\partial_{x}^{j}u(t,2\pi),\\ \partial_{x}^{j}\partial_{t}u(t,0)=\partial_{x}^{j}\partial_{t}u(t,2\pi),\qquad j=0,\ 1,\ 2,\ 3.\end{array}\right. \tag{2.1}\] The linear beam equation is a bidirectional dispersive equation modeling small vibrations of a thin elastic beam, with quadratic dispersion relation \(\omega(k)=\pm k^{2}\). We focus our attention on the initial data given by a step function: \[f(x)=g(x)=\sigma(x)=\left\{\begin{array}{ll}-1,&0\leq x<\pi,\\ 1,&\pi\leq x<2\pi,\end{array}\right. \tag{2.2}\] known as the _Riemann problem_. Without further mention, here and elsewhere below, we assume that functions and distributions defined on \([\,0,\,2\pi\,]\) are extended \(2\pi\)-periodically to \(\mathbb{R}\) in the usual way, when required. For the solution of (2.1) with initial data (2.2), we have the following result. **Lemma 2.1**.: _The periodic initial-boundary value problem (2.1)-(2.2) has the following solution_ \[u(t,x)=-\frac{4}{\pi}\left(\sum_{n=0}^{+\infty}\frac{\cos((2n+1)^{2}t)\sin((2n+1)x)}{2n+1}+\sum_{n=0}^{+\infty}\frac{\sin((2n+1)^{2}t)\sin((2n+1)x)}{(2n+1)^{3}}\right). \tag{2.3}\] Throughout the paper, a time \(t>0\) will be designated as _rational_ if \(t/\pi\in\mathbb{Q}\), i.e., \(t=t^{*}=\pi p/q\), with \(p\) and \(0\neq q\in\mathbb{Z}^{+}\) having no common factors. Otherwise, if \(t/\pi\notin\mathbb{Q}\), the time is called _irrational_. To analyze the qualitative behavior of solution (2.3) at the rational times, we invoke the following Lemma.
**Lemma 2.2**.: _Given \(j,q\in\mathbb{Z}^{+}\) with \(q\neq 0\), let \(\sigma^{j,q}(x)\) be the box function defined as_ \[\sigma^{j,q}(x)=\left\{\begin{array}{ll}1,&\frac{\pi j}{q}\leq x<\frac{\pi(j+1)}{q},\quad 0\leq j\leq 2q-1,\\ 0,&\text{otherwise}.\end{array}\right. \tag{2.4}\] _Let \(N,\,p,\,q\in\mathbb{Z}^{+},\,N\geq 2,\,q\neq 0\). Then, the following formulae hold._ \((i)\) \[-\frac{4}{\pi}\sum_{n=0}^{+\infty}\frac{\cos\left((2n+1)^{N}\frac{\pi p}{q}\right)}{2n+1}\sin((2n+1)x)=\sum_{j=0}^{2q-1}a_{j}\left(\frac{p}{q}\right)\sigma^{j,q}(x), \tag{2.5}\] \((ii)\) _For each even \(N\),_ \[-\frac{4}{\pi}\sum_{n=0}^{+\infty}\frac{\sin\left((2n+1)^{N}\frac{\pi p}{q}\right)}{2n+1}\sin((2n+1)x)=\sum_{j=0}^{2q-1}b_{j}\left(\frac{p}{q}\right)\sigma^{j,q}(x), \tag{2.6}\] \((iii)\) _For each odd \(N\),_ \[-\frac{4}{\pi}\sum_{n=0}^{+\infty}\frac{\sin\left((2n+1)^{N}\frac{\pi p}{q}\right)}{2n+1}\cos((2n+1)x)=\sum_{j=0}^{2q-1}\tilde{b}_{j}\left(\frac{p}{q}\right)\sigma^{j,q}(x), \tag{2.7}\] _where \(a_{j},b_{j},\tilde{b}_{j}\in\mathbb{R}\), \(j=0,\ldots,2q-1\), are certain constants which depend on \(p\) and \(q\)._ To prove Lemma 2.2, we need the following theorem, which is based on Theorem 3.2 and Corollary 3.4 in [7], and underlies the dispersive quantization effect for equations with "integral polynomial" dispersion relation. **Definition 2.3**.: _A polynomial \(P(k)=c_{0}+c_{1}k+\cdots+c_{N}k^{N}\) is called an integral polynomial if its coefficients are integers: \(c_{i}\in\mathbb{Z},i=0,\ldots,N\)._ **Theorem 2.4**.: _Suppose that the dispersion relation of the evolution equation \(u_{t}=L[u]\) is an integral polynomial\(:\)_ \[\omega(k)=P(k).\] _Then at every rational time \(t^{*}=\pi p/q\), with \(p\) and \(0\neq q\in\mathbb{Z}\), the fundamental solution_ \[F(t,x)\sim\frac{1}{2\pi}\sum_{k=-\infty}^{\infty}e^{\operatorname{i}\left(kx+\omega(k)t\right)}\] _is a linear combination of \(q\) periodically extended delta functions concentrated at the rational nodes \(x_{j}=\pi j/q\) for \(j\in\mathbb{Z}\). Moreover, at \(t^{*}=\pi p/q\), the solution profile to the periodic initial-boundary value problem for \(u_{t}=L[u]\) is a linear combination of \(\leq 2q\) translates of its initial data \(u(0,x)=f(x)\), i.e.,_ \[u\left(\frac{\pi p}{q},x\right)=\sum_{j=0}^{2q-1}a_{j}\left(\frac{p}{q}\right)f\left(x-\frac{\pi j}{q}\right).\] **Remark 2.5**.: _Slightly more generally, the dispersion relation can be a nonzero multiple of an integral polynomial. By suitably rescaling time, the stated result holds, the only difference being which times are designated as rational or irrational.
A similar remark holds if one rescales space to consider the equation on a different spatial interval._ **Proof of Lemma 2.2**.: According to Theorem 2.4, if the underlying equation admits a dispersion relation \(\omega(k)=\pm k^{N}\), and the initial data is the unit step function \[u(0,x)=\left\{\begin{array}{ll}0,&0\leq x<\pi,\\ 1,&\pi\leq x<2\pi,\end{array}\right.\] with Fourier coefficients \[c_{k}=\left\{\begin{array}{ll}\frac{1}{2},&k=0,\\ 0,&k\neq 0\;\text{even},\\ \frac{\operatorname{i}}{\pi k},&k\;\text{odd}.\end{array}\right.\] Then, at a rational time \(t^{*}=\pi p/q\), the corresponding solution has the Fourier series form \[u^{\pm}(t,x)=\sum_{k=-\infty}^{+\infty}c_{k}e^{\operatorname{i}\left(kx\pm k ^{N}t\right)}\] and hence is constant on every subinterval \(\pi j/q<x<\pi(j+1)/q\), for \(j=0,\ldots,2q-1\), namely \[u^{\pm}(t^{*},x)=\sum_{j=0}^{2q-1}\gamma_{j}^{\pm}\left(\frac{p}{q}\right) \sigma^{j,q}(x), \tag{2.8}\] for certain \(\gamma_{j}^{\pm}\in\mathbb{C}\), \(j=0,\ldots,2q-1\), dependent on \(p\) and \(q\). We thus need to distinguish two cases: **Case 1.** \(N\) is even. It is easy to see that \[u^{+}(t^{*},x)+u^{-}(t^{*},x) =\sum_{k=-\infty}^{+\infty}c_{k}\left(e^{\,\mathrm{i}\,(kx+k^{N}t^{ *})}+e^{\,\mathrm{i}\,(kx-k^{N}t^{*})}\right)\] \[=1-\frac{4}{\pi}\sum_{n=0}^{+\infty}\frac{\cos\left((2n+1)^{N}t^{ *}\right)}{2n+1}\sin((2n+1)x),\] \[u^{+}(t^{*},x)-u^{-}(t^{*},x) =\sum_{k=-\infty}^{+\infty}c_{k}\left(e^{\,\mathrm{i}\,(kx+k^{N}t ^{*})}-e^{\,\mathrm{i}\,(kx-k^{N}t^{*})}\right)\] \[=-\frac{4\,\mathrm{i}}{\pi}\sum_{n=0}^{+\infty}\frac{\sin\left((2 n+1)^{N}t^{*}\right)}{2n+1}\sin((2n+1)x).\] **Case 2.** \(N\) is odd. Similar to the above, we have \[u^{+}(t^{*},x)+u^{-}(t^{*},x) =\sum_{k=-\infty}^{+\infty}c_{k}\left(e^{\,\mathrm{i}\,(kx+k^{N} t^{*})}+e^{\,\mathrm{i}\,(kx-k^{N}t^{*})}\right)\] \[=1-\frac{4}{\pi}\sum_{n=0}^{+\infty}\frac{\cos\left((2n+1)^{N}t^{ *}\right)}{2n+1}\sin((2n+1)x),\] \[u^{+}(t^{*},x)-u^{-}(t^{*},x) =\sum_{k=-\infty}^{+\infty}c_{k}\left(e^{\,\mathrm{i}\,(kx+k^{N} t^{*})}-e^{\,\mathrm{i}\,(kx-k^{N}t^{*})}\right)\] \[=-\frac{4}{\pi}\sum_{n=0}^{+\infty}\frac{\sin\left((2n+1)^{N}t^{ *}\right)}{2n+1}\cos((2n+1)x).\] Finally, these formulae, together with (2.8) yield (2.5)-(2.7), respectively, proving the lemma. Denote the solution (2.3) as \[u(t,x):=\mathrm{I}(t,x)+\Pi(t,x). \tag{2.9}\] Note that (2.5) implies that the first summation in (2.9) evaluated at rational times \(t^{*}\) has the representation \[\mathrm{I}(t^{*},x)=\sum_{j=0}^{2q-1}a_{j}\left(\frac{p}{q}\right)\sigma^{j,q} (x), \tag{2.10}\] for certain constants \(a_{0},\dots,a_{2q-1}\) determined by (2.5). In particular, if \(q=2\), at the corresponding specific rational time \(t_{k}^{0}=\pi(2k-1)/2\), \(k\in\mathbb{Z}^{+}\), it vanishes identically. 
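The coefficients in (2.5) and (2.6) can be computed exactly along the lines of the proof above: since \(e^{\pm\,\mathrm{i}\,k^{N}t^{*}}\) is \(2q\)-periodic in \(k\), an inverse discrete Fourier transform expresses each of \(u^{\pm}(t^{*},x)\) as a combination of translates of the unit step, and then \(\sum_{j}a_{j}\sigma^{j,q}=u^{+}+u^{-}-1\) and \(\sum_{j}b_{j}\sigma^{j,q}=-\,\mathrm{i}\,(u^{+}-u^{-})\). The following is a minimal numerical sketch in Python/NumPy; the parameter choice \(N=2\), \(t^{*}=\pi/3\) anticipates the example following Theorem 2.6 below.

```python
import numpy as np

p, q, N = 1, 3, 2                          # rational time t* = pi*p/q, omega = ±k^N
M = 2*q
m = np.arange(M)
# e^{±i k^N t*} is 2q-periodic in k; expand it over the 2q-th roots of unity:
idft = np.exp(1j*np.pi*np.outer(m, m)/q) / M
g_plus = idft @ np.exp(1j*np.pi*p*m**N/q)
g_minus = idft @ np.exp(-1j*np.pi*p*m**N/q)

def step(x):                               # unit step used in the proof of Lemma 2.2
    return np.where((x % (2*np.pi)) < np.pi, 0.0, 1.0)

mids = np.pi*(m + 0.5)/q                   # midpoints of the 2q subintervals
shifts = np.pi*m/q
u_plus = np.array([g_plus @ step(x0 - shifts) for x0 in mids])
u_minus = np.array([g_minus @ step(x0 - shifts) for x0 in mids])

a = np.real(u_plus + u_minus - 1)          # box-function coefficients in (2.5)
b = np.real(-1j*(u_plus - u_minus))        # box-function coefficients in (2.6)
print(np.round(a, 10))                     # expect [0, -1, 0, 0, 1, 0]
print(np.round(b, 10))                     # expect -sqrt(3)/3 * [1, 2, 1, -1, -2, -1]
```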
When it comes to \(\mathrm{II}(t^{*},x)\), firstly, a direct computation shows that for any \(t>0\) we have that \[\frac{\sin((2n+1)^{2}t)\sin((2n+1)x)}{(2n+1)^{3}} \tag{2.11}\] \[\qquad\qquad=-\int_{0}^{x}\int_{0}^{y}\frac{\sin((2n+1)^{2}t)\sin((2n+1)z)}{2n+1}\;\mathrm{d}z\mathrm{d}y+\frac{\sin((2n+1)^{2}t)}{(2n+1)^{2}}x.\] Next, again thanks to Lemma 2.2, we find that the first of the two components in \(\mathrm{II}(t,x)\) satisfies \[\begin{split}\mathrm{II}^{(1)}(t^{*},x)&=\frac{4}{\pi}\int_{0}^{x}\int_{0}^{y}\sum_{n=0}^{+\infty}\frac{\sin((2n+1)^{2}t^{*})\sin((2n+1)z)}{2n+1}\;\mathrm{d}z\mathrm{d}y\\ &=-\int_{0}^{x}\int_{0}^{y}\sum_{j=0}^{2q-1}b_{j}\left(\frac{p}{q}\right)\sigma^{j,q}(z)\;\mathrm{d}z\mathrm{d}y,\end{split} \tag{2.12}\] for certain constants \(b_{j}\), \(j=0,\dots,2q-1\) determined by (2.6). We denote \[F(y)=\int_{0}^{y}\sum_{j=0}^{2q-1}b_{j}\sigma^{j,q}(z)\;\mathrm{d}z,\quad\text{for}\quad 0\leq y\leq x,\] and set \[H(x)=\int_{0}^{x}F(y)\mathrm{d}y.\] It is easy to verify that \[F(y)=\left\{\begin{aligned} & b_{0}y,& 0\leq y\leq\frac{\pi}{q},\\ & b_{j}y+\frac{\pi}{q}\sum_{m=0}^{j-1}b_{m}-\frac{\pi}{q}jb_{j},&\frac{\pi}{q}j\leq y\leq\frac{\pi}{q}(j+1),\quad j=1,\dots,2q-1,\end{aligned}\right.\] and hence \[H(x)=\left\{\begin{aligned} &\frac{1}{2}b_{0}x^{2},& 0\leq x\leq\frac{\pi}{q},\\ &\frac{1}{2}b_{1}x^{2}+\frac{\pi}{q}(b_{0}-b_{1})x+\frac{\pi^{2}}{2q^{2}}(b_{1}-b_{0}),&\frac{\pi}{q}\leq x\leq\frac{2\pi}{q},\\ &\frac{1}{2}b_{j}x^{2}+h_{1}x+h_{0},&\frac{\pi}{q}j\leq x\leq\frac{\pi}{q}(j+1),\quad j=1,\dots,2q-1,\end{aligned}\right. \tag{2.13}\] where \[h_{1}=\frac{\pi}{q}\left(\,\sum_{m=0}^{j-1}b_{m}-jb_{j}\right)\quad\text{and}\quad h_{0}=\frac{\pi^{2}}{q^{2}}\left[\,\sum_{m=1}^{j-1}\left(\sum_{i=0}^{m-1}b_{i}+\frac{b_{m}}{2}\right)+\frac{j^{2}}{2}b_{j}-j\sum_{m=0}^{j-1}b_{m}+\frac{b_{0}}{2}\,\right].\] This, when combined with (2.10) and (2.11), allows us to arrive at the exact result for the solution of (2.1) at the rational times, which is summarized in the following theorem. **Theorem 2.6**.: _At a rational time \(t^{*}=\pi p/q\), the solution to the periodic initial-boundary value problem (2.1) for the linear beam equation with the step function initial datum (2.2) takes the form_ \[u(t^{*},x)=\sum_{j=0}^{2q-1}a_{j}\left(\frac{p}{q}\right)\sigma^{j,q}(x)-H(x)+C(t^{*})x, \tag{2.14}\] _where_ \[C(t^{*})=-\frac{4}{\pi}\sum_{n=0}^{+\infty}\frac{\sin((2n+1)^{2}t^{*})}{(2n+1)^{2}}, \tag{2.15}\] \(\sigma^{j,q}(x)\) _is the box function defined in (2.4), \(H(x)\) is the piecewise quadratic function defined in (2.13), and \(a_{j}\), \(j=0,\dots,2q-1\), are certain constants determined by equation (2.5)._ With the explicit expression (2.3) of the solution in hand, we now analyze its qualitative behaviour. First of all, as a direct corollary of the estimate on the solution of linear dispersive equations given by Oskolkov [31] and Rodnianski [32] -- see also [15] -- one has, for almost all irrational \(t/(2\pi)\), the first summation \(\mathrm{I}\in\bigcap_{\epsilon>0}C^{\frac{1}{2}-\epsilon}\), which in turn indicates that the second one \(\mathrm{II}\in\bigcap_{\epsilon>0}C^{\frac{5}{2}-\epsilon}\). We thus conclude that, at the irrational times, the profile of the solution to (2.1) is a continuous fractal, with fractal dimension \(D=3/2\).
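Both regimes are readily visualized by truncating the Fourier series (2.3) directly, as is done for the figures discussed below (which sum \(1001\) terms). The following is a minimal plotting sketch in Python/Matplotlib; the grid resolution and the sample irrational time \(t=1\) are illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

def u(t, x, terms=1001):
    """Partial sum of the series (2.3) for the periodic Riemann problem."""
    m = 2*np.arange(terms) + 1                     # odd wave numbers 2n+1
    S = np.sin(np.outer(x, m))                     # sin((2n+1)x) on the grid
    return -(4/np.pi)*(S @ (np.cos(m**2*t)/m) + S @ (np.sin(m**2*t)/m**3))

x = np.linspace(0, 2*np.pi, 4000)
fig, axes = plt.subplots(1, 2, figsize=(9, 3), sharey=True)
for ax, t, title in zip(axes, (np.pi/3, 1.0),
                        (r"rational time $t=\pi/3$", r"irrational time $t=1$")):
    ax.plot(x, u(t, x), linewidth=0.7)
    ax.set_title(title)
plt.tight_layout()
plt.show()
```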
When it comes to the rational times, note that in the expression of the solution (2.14), on the one hand, \(H(x)\in C^{1}\), while on the other hand, \(\mathrm{I}(t^{*},x)\) is piecewise constant with \(\leq 2q\) discontinuities, apart from the specific times \(t_{k}^{0}=\pi(2k-1)/2,\,k\in\mathbb{Z}^{+}\), which will result in \(\mathrm{I}(t_{k}^{0},x)\equiv 0\). Therefore, we need to distinguish two cases: **Case 1.**\(q=2\). At times \(t_{k}^{0}=\pi(2k-1)/2\), \(k\in\mathbb{Z}^{+}\), \[u(t_{k}^{0},x)=-H(x)+(-1)^{k}\frac{4}{\pi}\sum_{n=0}^{+\infty}\frac{1}{(2n+1)^{2}}x\in C^{1}.\] Referring back to the relation (2.6) corresponding to \(q=2,p=2k-1\), and \(N=2\), and solving for \(b_{j}\) by making use of the inverse discrete Fourier transform (IDFT) gives rise to \[b_{0}=b_{1}=(-1)^{k},\quad b_{2}=b_{3}=(-1)^{k-1}. \tag{2.16}\] Then, (2.13) readily leads to \[H_{k}(x)=(-1)^{k-1}\left\{\begin{array}{ll}-\frac{1}{2}x^{2},&0\leq x\leq\pi,\\ \frac{1}{2}x^{2}-2\pi x+\pi^{2},&\pi\leq x\leq 2\pi,\end{array}\right. \tag{2.17}\] which, together with the fact that the solution (2.3) is \(2\pi\)-periodic, i.e., \(u(t_{k}^{0},0)=u(t_{k}^{0},2\pi)\), yields \[\sum_{n=0}^{+\infty}\frac{1}{(2n+1)^{2}}=\frac{\pi^{2}}{8}. \tag{2.18}\] It follows that, evaluated at each \(t_{k}^{0}=(2k-1)\pi/2\), \(k\in\mathbb{Z}^{+}\), the solution can be written as \[u(t_{k}^{0},x)=(-1)^{k-1}\left\{\begin{array}{ll}\frac{1}{2}(x^{2}-\pi x),&0\leq x\leq\pi,\\ -\frac{1}{2}(x^{2}-3\pi x+2\pi^{2}),&\pi\leq x\leq 2\pi.\end{array}\right. \tag{2.19}\] More interestingly, we find that the conclusion (2.18) agrees with the classical result \[\zeta(2)=\sum_{n=1}^{+\infty}\frac{1}{n^{2}}=\frac{\pi^{2}}{6}, \tag{2.20}\] where \[\zeta(s)=\sum_{n=1}^{+\infty}\frac{1}{n^{s}},\quad s>1 \tag{2.21}\] is the Riemann zeta function. The above procedure provides an alternative mechanism for establishing such classical identities, and the behavior of these solutions at rational times has intriguing connections with such number-theoretic exponential sums. **Case 2.**\(q\neq 2\). According to (2.14), at each rational time \(t^{*}=\pi p/q,\,q\neq 2\), the solution consists of a piecewise constant function, which is constant on the intervals \(\pi j/q<x<\pi(j+1)/q\) for \(j=0,\dots,2q-1\), combined with a continuously differentiable function \(-H(x)+C(t^{*})x\), being composed of different parabolas defined on the intervals \(\pi j/q\leq x<\pi(j+1)/q,\ j=0,\dots,2q-1\). Therefore, in the present case, the solution profile is a discontinuous, piecewise parabolic curve. For instance, let us take \(t^{*}=\pi/3\) as an example. On the one hand, \[\mathrm{I}\left(\frac{\pi}{3},x\right)=-\frac{4}{\pi}\sum_{n=0}^{+\infty}\frac{\cos((2n+1)^{2}\frac{\pi}{3})}{2n+1}\sin((2n+1)x)=\sum_{j=0}^{5}a_{j}\left(\frac{1}{3}\right)\sigma^{j,q}(x),\] where a direct computation through IDFT shows that \[a_{0}=a_{2}=a_{3}=a_{5}=0,\quad a_{1}=-1,\quad a_{4}=1.\] On the other hand, \(b_{j}\,(0\leq j\leq 5)\) in \(H(x)\) are obtained from the relation \[-\frac{4}{\pi}\sum_{n=0}^{+\infty}\frac{\sin((2n+1)^{2}\frac{\pi}{3})}{2n+1}\sin((2n+1)x)=\sum_{j=0}^{5}b_{j}\left(\frac{1}{3}\right)\sigma^{j,q}(x).\] We have \[b_{0}=b_{2}=-b_{3}=-b_{5}=-\frac{\sqrt{3}}{3},\quad b_{1}=-b_{4}=-\frac{2\sqrt{3}}{3}.\] Again, owing to the periodicity of the solution, one has \[\sum_{n=0}^{+\infty}\frac{\sin((2n+1)^{2}\frac{\pi}{3})}{(2n+1)^{2}}=-\frac{\pi}{4}C\left(\frac{\pi}{3}\right)=\frac{\sqrt{3}\,\pi^{2}}{18}. \tag{2.22}\]
Finally, inserting them into (2.13) and then (2.14) gives rise to the explicit solution at time \(t^{*}=\pi/3\), namely \[u\left(\frac{\pi}{3},x\right)=\begin{cases}\frac{\sqrt{3}}{6}\left(x^{2}-\frac{4\pi}{3}x\right),&0\leq x\leq\frac{\pi}{3},\\ \frac{\sqrt{3}}{3}\left(x^{2}-\pi x+\frac{\pi^{2}}{18}\right)-1,&\frac{\pi}{3}\leq x\leq\frac{2\pi}{3},\\ \frac{\sqrt{3}}{6}\left(x^{2}-\frac{2\pi}{3}x-\frac{\pi^{2}}{3}\right),&\frac{2\pi}{3}\leq x\leq\pi,\\ -\frac{\sqrt{3}}{6}\left(x^{2}-\frac{10\pi}{3}x+\frac{7\pi^{2}}{3}\right),&\pi\leq x\leq\frac{4\pi}{3},\\ -\frac{\sqrt{3}}{3}\left(x^{2}-3\pi x+\frac{37\pi^{2}}{18}\right)+1,&\frac{4\pi}{3}\leq x\leq\frac{5\pi}{3},\\ -\frac{\sqrt{3}}{6}\left(x^{2}-\frac{8\pi}{3}x+\frac{4\pi^{2}}{3}\right),&\frac{5\pi}{3}\leq x\leq 2\pi.\end{cases} \tag{2.23}\] **Remark 2.7**.: _Note that, at the rational points, the series (2.22) contains the odd terms of Riemann's non-differentiable function, which was introduced by Riemann in 1872; see [11] for details. The connection between Riemann's non-differentiable function and solutions of the vortex filament equation with polygonal initial data was recently established in [22, 23]._ All in all, in the context of the linear beam equation, the evolution of the periodic step function initial datum will take on three different qualitative behaviors. At irrational times, it evolves into a continuous but non-differentiable fractal-like profile. At rational times \(t^{*}=\pi p/q\,(q\neq 2)\), the solution takes on a discontinuous, piecewise parabolic behavior. On the other hand, at each specific rational time \(t_{k}^{0}=\pi(2k-1)/2,\,k\in\mathbb{Z}^{+}\), the quantization effect disappears entirely, and the solution instantly becomes a continuously differentiable function, emerging at regular \(\pi\)-periodic intervals. We conclude that the revival phenomenon exhibited by the periodic evolution of the linear beam equation differs from that arising in the linear KdV, the linear Schrodinger, and other unidirectional linear dispersive evolution equations studied previously. In Figures 1 and 2, we display the graphs of the solution at some representative rational and irrational times, respectively. These figures are plotted by straightforwardly applying the Fourier series representation of \(u(t,x)\) given by (2.3). We sum over \(1001\) terms\({}^{1}\) to obtain the numerical approximation of the solution. As illustrated in Figure 1(a), the solution is continuous at \(\pi/2\). Meanwhile, referring to Figure 1(b) and Figure 1(c), it appears that, at \(\pi/3\) and \(\pi/5\), there exist a finite number of jump discontinuities, between which parabolic curves of different forms arise. Obviously, the plots in Figure 1, which are obtained by simply truncating the Fourier series (2.3), are entirely consistent with the explicit expressions given by (2.19) and (2.23). On the other hand, Figure 2 shows that, at irrational times, the solution displays continuous, but nowhere differentiable fractal-like profiles, as claimed above. Footnote 1: Summing over a larger number of terms produces no appreciable difference in the solution profiles. **Remark 2.8**.: _It is worth mentioning that, in view of the series (2.10) and that in \(\mathrm{II}^{(1)}(t^{*},x)\), the distribution of the discontinuity points in \(H(x)\) depends on the value of \(q\), especially on its parity. As studied in [29], generally speaking, the piecewise subintervals for these series are \(\pi j/q\leq x<\pi(j+1)/q,j=0,\ldots,2q-1\).
However, if \(q\) is even (for instance \(q=2\), whose corresponding solution is a representative example manifesting this characteristic), the solutions sometimes assume identical values on adjacent subintervals, and so exhibit larger regions of constancy. See also [29] for a number-theoretic characterization of these occurrences._

Figure 1: The solutions to the periodic initial-boundary value problem for the linear beam equation at rational times.

Figure 2: The solutions to the periodic initial-boundary value problem for the linear beam equation at irrational times.

Figure 4: The solutions to the periodic initial-boundary value problem (2.1) with initial data \(f(x)=0\), \(g(x)=\sigma(x)\).

Figure 3: The solutions to the periodic initial-boundary value problem (2.1) with initial data \(f(x)=\tilde{\sigma}(x)\), \(g(x)=\sigma(x)\).

Formula (2.3) provides the solution of (2.1) with the same step function \(f=g=\sigma(x)\) (2.2) as the initial data. Indeed, in view of Theorem 3.1 below, we find that the first and second terms in solution (2.3) are induced by the initial data \(u|_{t=0}=f(x)\) and \(u_{t}|_{t=0}=g(x)\), respectively. We note that such dichotomous behavior is not restricted to the case \(f=g\). For instance, take the initial datum \(g(x)=\sigma(x)\), while \[f(x)=\tilde{\sigma}(x)=\left\{\begin{array}{ll}0,&0\leq x<\pi,\\ 1,&\pi\leq x<2\pi.\end{array}\right.\] Figure 3 suggests that different step functions also evolve into the three different qualitative behaviors. Furthermore, in Figure 4 and Figure 5, we display the graphs of solutions corresponding to \(f(x)=0,\,g(x)=\sigma(x)\), and \(f(x)=\sigma(x),\,g(x)=0\), respectively. As demonstrated in Figure 4, if \(f(x)=0\), the dispersive quantization induced by \(f(x)\) disappears entirely, and the solution retains a \(C^{1}\) profile for all time. On the other hand, if \(f(x)=\sigma(x),\,g(x)=0\), referring to Figure 5(b) and Figure 5(c), the solutions take on dispersive quantization at the rational times. Meanwhile, as shown in Figure 5(a), the solution vanishes at \(\pi/2\), since \(\mathrm{I}(\pi/2,x)\equiv 0\), as claimed above.

## 3. Revival for general bidirectional dispersive equations

In this section, we consider the periodic initial-boundary value problem for the general linear bidirectional dispersive equation \[u_{tt}=L[u], \tag{3.1}\] where \(L\) is a scalar, constant coefficient differential operator which has real dispersion relation \(\omega(k)=\pm\sqrt{\varphi(k)}\). We subject (3.1) to periodic boundary conditions posed on the interval \(0\leq x\leq 2\pi\), and initial conditions \[u(0,x)=f(x),\quad u_{t}(0,x)=g(x), \tag{3.2}\] where \(f(x)\) and \(g(x)\) are of bounded variation, and \(g(x)\) is required to satisfy \(\int_{0}^{2\pi}g(x)\,\mathrm{d}x=0\). As usual, the first step is to construct the (formal) solution as a Fourier series \[u(t,x)\sim\sum_{k=-\infty}^{+\infty}a_{k}(t)e^{\,\mathrm{i}\,kx}.\] To this end, we first expand the initial data \(f(x)\) and \(g(x)\) in Fourier series \[f(x)\sim\sum_{k=-\infty}^{+\infty}c_{k}e^{\,\mathrm{i}\,kx},\quad\text{where}\quad c_{k}=\widehat{f}(k)=\frac{1}{2\pi}\int_{0}^{2\pi}\,f(x)e^{-\,\mathrm{i}\,kx}\,\mathrm{d}x\] and \[g(x)\sim\sum_{k=-\infty}^{+\infty}d_{k}e^{\,\mathrm{i}\,kx},\quad\text{where}\quad d_{k}=\widehat{g}(k)=\frac{1}{2\pi}\int_{0}^{2\pi}\,g(x)e^{-\,\mathrm{i}\,kx}\,\mathrm{d}x.\] Next, an analysis analogous to that used for the linear beam equation with step function initial conditions implies that the corresponding coefficients \(a_{k}(t)\) satisfy the following linear ODE \[a_{k}^{\prime\prime}(t)+\varphi(k)a_{k}(t)=0.
\tag{3.3}\] Solving it yields \[a_{k}(t)=A_{k}e^{\,\mathrm{i}\,\sqrt{\varphi(k)}t}+B_{k}e^{-\,\mathrm{i}\, \sqrt{\varphi(k)}t}.\] Finally, using the initial data again, we find that the solution to the periodic initial-boundary value problem (3.1)-(3.2) is given by \[u(t,x)=\sum_{k}\widehat{f}(k)\cos\left(\sqrt{\varphi(k)}\;t\right)e^{\, \mathrm{i}\,kx}+\sum_{k\neq 0}\frac{\widehat{g}(k)}{\sqrt{\varphi(k)}}\sin \left(\sqrt{\varphi(k)}\;t\right)e^{\,\mathrm{i}\,kx}. \tag{3.4}\] With the Fourier series representation (3.4) in hand, we are now able to analyze the qualitative behavior of the solution at rational times. We will show that the dynamical evolution of equation (3.1) on periodic domains with initial profiles (3.2) depends dramatically upon the asymptotics of the dispersion relation at large wave number. In all cases considered here, the large wave number asymptotics of the dispersion relation is given by a positive power of the wave number: \[\sqrt{\varphi(k)}\sim|k|^{\alpha},\quad 2\leq\alpha\in\mathbb{R},\quad\text{as} \quad|k|\to\infty. \tag{3.5}\] ### Monomial dispersion relation: As the first step, we will study the special case of monomial dispersion relation given by \[\omega(k)=\pm k^{N},\quad 2\leq N\in\mathbb{Z}^{+}. \tag{3.6}\] The main results for the corresponding solutions are summarized in Theorem 3.1 below. Hereafter, we define the operator \(\partial_{x}^{-1}\) by the formula \[\partial_{x}^{-1}\,P(x)=\int_{2(k-1)\pi}^{x}\,P(y)\,\mathrm{d}y,\qquad x\in \left[\,2(k-1)\pi,\,2k\pi\,\right]. \tag{3.7}\] We further define its \(M\)-th order power \(\partial_{x}^{-M}\) via the recursive relation \(\partial_{x}^{-M}=\partial_{x}^{-1}\partial_{x}^{-(M-1)}\), for \(M\geq 1\). **Theorem 3.1**.: _Suppose that equation (3.1) has the monomial dispersion relation (3.6), the initial data \(f(x)\) and \(g(x)\) in (3.2) are of bounded variation, and \(g(x)\) satisfies \(\int_{0}^{2\pi}g(x)\,\,\mathrm{d}x=0\). Let \(G(x)=\partial_{x}^{-N}g(x)\). Then at each rational time \(t^{*}=\pi p/q\), the solution to the periodic initial-boundary value problem (3.1)-(3.2) takes the form_ \[u(t^{*},x)=\sum_{j=0}^{2q-1}a_{j}\left(\frac{p}{q}\right)\,f\left(x-\frac{\pi j }{q}\right)+\,\mathrm{i}\,^{N}\sum_{j=0}^{2q-1}b_{j}\left(\frac{p}{q}\right)\, G\left(x-\frac{\pi j}{q}\right)+\sum_{j=0}^{N-1}C_{j}x^{j}, \tag{3.8}\] \[C_{j}=\frac{\,\mathrm{i}\,^{j}}{j\,!}\sum_{k\neq 0}\frac{\widehat{g}(k)\sin(k ^{N}t^{*})}{k^{N-j}}, \tag{3.9}\] _where the coefficients \(a_{j},\,b_{j}\in\mathbb{C}\), \(j=0,\dots,2q-1\), are constants depending on \(p\) and \(q\)._ The proof of the theorem relies on the following lemma, which is a direct corollary of Theorem 3.2 established in [7] and is a special case of Lemmas 7.5 and 7.6 in [16]. Thus, we omit the proof. Moreover, we remark that the expression (3.8) for the exact solution is equivalent to (3.25) in the next subsection, although not exactly the same in form. **Lemma 3.2**.: _Let \(P(k)\) be an integral polynomial. 
Assume that \(f(x)\) is of bounded variation, and let \(\widehat{f}(k)\) be the Fourier coefficient of \(f(x)\), i.e.,_ \[\widehat{f}(k)=\frac{1}{2\pi}\int_{0}^{2\pi}\,f(x)e^{-\,\mathrm{i}\,kx}\,\mathrm{d}x.\] _Given \(t^{*}=\pi p/q\), with \(p\) and \(0\neq q\in\mathbb{Z}^{+}\), there exist constants \(a_{j}^{1},a_{j}^{2}\in\mathbb{C}\), \(j=0,\dots,2q-1\), depending on \(p\) and \(q\), such that the following two formulae hold:_ \[\sum_{k=-\infty}^{\infty}\widehat{f}(k)\cos\left(P(k)t^{*}\right)e^{\,\mathrm{i}\,kx}=\sum_{j=0}^{2q-1}a_{j}^{1}\left(\frac{p}{q}\right)f\left(x-\frac{\pi j}{q}\right), \tag{3.10}\] \[\sum_{k=-\infty}^{\infty}\widehat{f}(k)\sin\left(P(k)t^{*}\right)e^{\,\mathrm{i}\,kx}=\sum_{j=0}^{2q-1}a_{j}^{2}\left(\frac{p}{q}\right)f\left(x-\frac{\pi j}{q}\right). \tag{3.11}\] **Proof of Theorem 3.1.** First of all, according to (3.4), under the assumption of the theorem, the solution to the corresponding periodic initial-boundary problem has the form \[u(t,x)=\sum_{k}\widehat{f}(k)\cos(k^{N}t)e^{\,\mathrm{i}\,kx}+\sum_{k\neq 0}\frac{\widehat{g}(k)\sin(k^{N}t)}{k^{N}}e^{\,\mathrm{i}\,kx}:=\mathrm{I}(t,x)+\mathrm{II}(t,x). \tag{3.12}\] Furthermore, since \(f(x)\) and \(g(x)\) are of bounded variation and \(N\geq 2\), the first summation in expression (3.12) is conditionally convergent, and the second one is absolutely convergent. At the rational times \(t^{*}=\pi p/q\), by Lemma 3.2, the first summation is a linear combination of translates of \(f(x)\), i.e., \[\mathrm{I}(t^{*},x)=\sum_{j=0}^{2q-1}a_{j}^{1}\left(\frac{p}{q}\right)\,f\left(x-\frac{\pi j}{q}\right),\] for certain \(a_{0}^{1},\dots,a_{2q-1}^{1}\in\mathbb{C}\) determined by (3.10) with \(P(k)=k^{N}\). Note that \[\partial_{x}^{-M}e^{\,\mathrm{i}\,kx}=\frac{1}{(\,\mathrm{i}\,k)^{M}}e^{\,\mathrm{i}\,kx}-\sum_{j=0}^{M-1}\frac{1}{j!\,(\,\mathrm{i}\,k)^{M-j}}x^{j},\quad\text{for}\quad 0\leq x\leq 2\pi. \tag{3.13}\] It follows that, at the rational times \(t^{*}=\pi p/q\), the second summation satisfies \[\begin{split}\operatorname{II}(t^{*},x)&=\sum_{k}\,\operatorname{i}^{N}\widehat{g}(k)\sin(k^{N}t^{*})\partial_{x}^{-N}e^{\,\operatorname{i}kx}+\sum_{k\neq 0}\widehat{g}(k)\sin(k^{N}t^{*})\sum_{j=0}^{N-1}\frac{\operatorname{i}^{\,j}}{j!\,k^{N-j}}x^{j}\\ &:=\operatorname{II}^{(1)}(x)+\operatorname{II}^{(2)}(x).\end{split}\] Since \(g(x)\) is of bounded variation, the series \(C_{j}\) given in (3.9) converges for each \(j=0,\ldots,N-1\); hence the second component \(\operatorname{II}^{(2)}(x)\) readily leads to the last term in (3.8).
On the other hand, in the case of \(P(k)=k^{N}\), applying equation (3.11) to the delta function \(\delta(x)\) yields \[\frac{1}{2\pi}\sum_{k=-\infty}^{\infty}\sin\left(k^{N}t^{*}\right)e^{\,\operatorname{i}kx}=\sum_{j=0}^{2q-1}b_{j}\left(\frac{p}{q}\right)\delta\left(x-\frac{\pi j}{q}\right),\] for some constants \(b_{j}\in\mathbb{C},\;j=0,\ldots,2q-1.\) We thus deduce \(\operatorname{II}^{(1)}(x)\) as follows: \[\begin{split}\operatorname{II}^{(1)}(x)&=\frac{\operatorname{i}^{\,N}}{2\pi}\sum_{k}\sin(k^{N}t^{*})\int_{0}^{2\pi}g(y)e^{-\operatorname{i}ky}\partial_{x}^{-N}e^{\,\operatorname{i}kx}\,\mathrm{d}y\\ &=\frac{\operatorname{i}^{\,N}}{2\pi}\sum_{k}\sin(k^{N}t^{*})\int_{0}^{2\pi}e^{\,\operatorname{i}ky}\partial_{x}^{-N}g(x-y)\,\mathrm{d}y\\ &=\operatorname{i}^{\,N}\int_{0}^{2\pi}G(x-y)\sum_{j=0}^{2q-1}b_{j}\delta\left(y-\frac{\pi j}{q}\right)\,\mathrm{d}y=\operatorname{i}^{\,N}\sum_{j=0}^{2q-1}b_{j}G\left(x-\frac{\pi j}{q}\right).\end{split}\] Summing \(\operatorname{II}^{(1)}(x)\), \(\operatorname{II}^{(2)}(x)\) and \(\operatorname{I}(t^{*},x)\) gives (3.8), which justifies the statement of the theorem. In particular, if the initial data \(f(x)\) and \(g(x)\) are the step function \(\sigma(x)\) given in (2.2), the following corollary holds. **Corollary 3.3**.: _Let \(\sigma^{j,q}(x)\) be the box function defined in (2.4). At a rational time \(t^{*}=\pi p/q\), the solution to the periodic initial-boundary value problem (3.1)-(3.2) on the interval \(0\leq x\leq 2\pi\), with initial data \(f(x)=g(x)=\sigma(x)\) given in (2.2) takes the form_ \[u(t^{*},x)=\sum_{j=0}^{2q-1}a_{j}\left(\frac{p}{q}\right)\sigma^{j,q}(x)+(-1)^{\left[\frac{N}{2}\right]}\partial_{x}^{-N}\sum_{j=0}^{2q-1}b_{j}\left(\frac{p}{q}\right)\sigma^{j,q}(x)+\sum_{j=0}^{\left[\frac{N}{2}\right]-1}D_{j}x^{2j+1}, \tag{3.14}\] _where_ \[D_{j}=\frac{(-1)^{j+1}4}{\pi(2j+1)!}\sum_{n=0}^{+\infty}\frac{\sin((2n+1)^{N}t^{*})}{(2n+1)^{N-2j}},\qquad j=0,\ldots,\left[\frac{N}{2}\right]-1, \tag{3.15}\] _and the coefficients \(a_{j},j=0,\ldots,2q-1\) are determined by formula (2.5) in Lemma 2.2, \(b_{j},j=0,\ldots,2q-1\) satisfy (2.6) for even \(N\), and (2.7) for odd \(N\), respectively._ Proof.: If \(f(x)=g(x)=\sigma(x)\), the corresponding solution (3.4) reduces to \[u(t^{*},x)=-\frac{4}{\pi}\left[\,\sum_{n=0}^{+\infty}\frac{\cos((2n+1)^{N}t^{*})\sin((2n+1)x)}{2n+1}+\sum_{n=0}^{+\infty}\frac{\sin((2n+1)^{N}t^{*})\sin((2n+1)x)}{(2n+1)^{N+1}}\,\right].\] Obviously, the first summation is exactly the first term in (3.14). As for the second summation, a direct induction procedure shows that, if \(N\) is even, \[\frac{\sin((2n+1)x)}{(2n+1)^{N+1}}=(-1)^{\frac{N}{2}}\partial_{x}^{-N}\frac{\sin((2n+1)x)}{2n+1}+\sum_{j=0}^{\frac{N}{2}-1}\frac{(-1)^{j}}{(2j+1)!(2n+1)^{N-2j}}x^{2j+1},\] whereas, if \(N\) is odd, \[\frac{\sin((2n+1)x)}{(2n+1)^{N+1}}=(-1)^{\frac{N-1}{2}}\partial_{x}^{-N}\frac{\cos((2n+1)x)}{2n+1}+\sum_{j=0}^{\frac{N-1}{2}-1}\frac{(-1)^{j}}{(2j+1)!(2n+1)^{N-2j}}x^{2j+1}.\] Substituting into the second summation, and making use of formulae (2.6) and (2.7) for even and odd \(N\), respectively, verifies (3.14), proving the corollary. More specifically, suppose the underlying equation is exactly the linear beam equation in (2.1), with dispersion relation \(\omega(k)=\pm k^{2}\). It follows that the second term in (3.14) reduces to (2.12), which is nothing but \(-H(x)\) in (2.14). Meanwhile, the third term is identical to \(C(t^{*})x\) in (2.14).
This indicates that in this particular case, Corollary 3.3 is in accordance with Theorem 2.6. As in Section 2, let us now illustrate how, by Corollary 3.3, we can calculate the value of the Riemann zeta function at \(s=4\). We define \[H_{p,q}^{N}(x)=\partial_{x}^{-N}\sum_{j=0}^{2q-1}b_{j}\left(\frac{p}{q}\right) \sigma^{j,q}(x),\] where \(b_{j}\) are determined by the formulae (2.6) for even \(N\), or (2.7) for odd \(N\), respectively. Denote \[S_{l}^{N}(t)=\sum_{n=0}^{+\infty}\frac{\sin((2n+1)^{N}t)}{(2n+1)^{l}},\qquad \text{for}\quad l\in\mathbb{Z}^{+},\quad\text{with}\quad l\geq 2, \tag{3.16}\] and let \[\Gamma_{N}=\left\{\begin{aligned} &\sum_{k=1}^{\frac{N}{2}}\frac{(-1)^{k}(2 \pi)^{N-2k+1}}{(N-2k+1)!}S_{2k}^{N}(t^{*}),&\text{if $N$ even},\\ &\sum_{k=1}^{\frac{N-1}{2}}\frac{(-1)^{k}(2\pi)^{N-2k}}{(N-2k)!}S_ {2k+1}^{N}(t^{*}),&\text{if $N\geq 3$ odd}.\end{aligned}\right. \tag{3.17}\] According to (3.14), we find a formula involving the sum \(\Gamma_{N}\), which along with the periodicity produces \[\Gamma_{N}=\frac{\pi}{4}H_{p,q}^{N}(2\pi). \tag{3.18}\] Note that, if \(N\) is even, at the special rational times \(t_{l}^{*}=(2l-1)\pi/2\), \(l\in\mathbb{Z}^{+}\), \[S_{N}^{N}\left(t_{l}^{*}\right)=(-1)^{l-1}\sum_{n=0}^{+\infty}\frac{1}{(2n+1) ^{N}},\] while, if \(N\) is odd, \(S_{N}^{N}\left(t_{l}^{*}\right)\) is a alternating series, namely, \[S_{N}^{N}\left(t_{l}^{*}\right)=(-1)^{l-1}\sum_{n=0}^{+\infty}\frac{(-1)^{n}}{ (2n+1)^{N}}.\] Hereafter, we denote \[\sigma(N)=\sum_{n=0}^{+\infty}\frac{1}{(2n+1)^{N}},\quad\text{for even $N$},\qquad\tau(N)=\sum_{n=0}^{+\infty}\frac{(-1)^{n}}{(2n+1)^{N}},\quad \text{for odd $N$},\] respectively. Therefore, in the special rational times \(t_{l}^{*}\) setting, (3.18) establishes the recursion formulae for \(\sigma(2k)\) and \(\tau(2k+1)\) for each \(k\in\mathbb{Z}^{+}\). More precisely, for even \(N\) \[\sigma(N)=\frac{(-1)^{\frac{N}{2}}}{8}\left(H_{1,2}^{N}(2\pi)-\frac{4}{\pi} \sum_{k=0}^{\frac{N}{2}-1}\frac{(-1)^{k}(2\pi)^{N-2k+1}}{(N-2k+1)!}\sigma(2k) \right),\] or, for odd \(N\), \[\tau(N)=\frac{(-1)^{\frac{N-1}{2}}}{8}\left(H^{N}_{1,2}(2\pi)-\frac{4}{\pi}\sum_ {k=0}^{\frac{N-1}{2}-1}\frac{(-1)^{k}(2\pi)^{N-2k}}{(N-2k)!}\tau(2k+1)\right),\] which are initiated by the series \(\sigma(2)\) (2.18) for even \(N\), or \(\tau(3)\) for odd \(N\), respectively. 
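These recursions are easy to sanity-check numerically. The following is a minimal sketch in plain Python (the truncation level is illustrative), comparing partial sums against the seed value \(\sigma(2)=\pi^{2}/8\) and against the closed forms for \(\tau(3)\) and \(\sigma(4)\) derived below:

```
from math import pi

def sigma(N, terms=10**6):
    # partial sum of sum_{n>=0} 1/(2n+1)^N
    return sum(1.0 / (2 * n + 1) ** N for n in range(terms))

def tau(N, terms=10**6):
    # partial sum of the alternating series sum_{n>=0} (-1)^n/(2n+1)^N
    return sum((-1) ** n / (2 * n + 1) ** N for n in range(terms))

print(sigma(2), pi ** 2 / 8)   # seed series for even N
print(tau(3), pi ** 3 / 32)    # closed form obtained below
print(sigma(4), pi ** 4 / 96)  # closed form obtained below
```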
As far as \(\tau(3)\) is concerned, one can verify from (2.7) for \(N=3\) that \[\tilde{b}_{0}=-1,\quad\tilde{b}_{1}=\tilde{b}_{2}=1,\quad\tilde{b}_{3}=-1,\] which immediately yields \[H^{3}_{1,2}(x)=\begin{cases}-\frac{1}{6}x^{3},&0\leq x\leq\frac{\pi}{2},\\ \frac{1}{6}\left(x^{3}-3\pi x^{2}+\frac{3\pi^{2}}{2}x-\frac{\pi^{3}}{4}\right),&\frac{\pi}{2}\leq x\leq\frac{3\pi}{2},\\ -\frac{1}{6}\left(x^{3}-6\pi x^{2}+12\pi^{2}x-\frac{13\pi^{3}}{2}\right),& \frac{3\pi}{2}\leq x\leq 2\pi.\end{cases}\] We thus arrive at \[\tau(3)=\sum_{n=0}^{+\infty}\frac{(-1)^{n}}{(2n+1)^{3}}=\frac{\pi^{3}}{32}.\] When it comes to \(\zeta(4)\), we calculate from (2.16) that \[H^{4}_{1,2}(x)=\begin{cases}-\frac{1}{24}x^{4},&0\leq x\leq\pi,\\ \frac{1}{24}\left(x^{4}-8\pi x^{3}+12\pi^{2}x^{2}-8\pi^{3}x+2\pi^{4}\right),& \pi\leq x\leq 2\pi.\end{cases}\] Consequently, \[\sigma(4)=\sum_{n=0}^{+\infty}\frac{1}{(2n+1)^{4}}=\frac{1}{8}\left(H^{4}_{1,2 }(2\pi)+\frac{4\pi^{2}}{3!}\bar{\zeta}(2)\right)=\frac{\pi^{4}}{96},\] which further yields the following classical result for the Riemann zeta function at \(s=4\): \[\zeta(4)=\sum_{n=0}^{+\infty}\frac{1}{n^{4}}=\frac{\pi^{4}}{90}.\] ### Monomial dispersion relation -- second approach We now briefly consider a different approach in the monomial case, which is based on [16, Chapter 7] and derive an alternative representation of the solution at rational times. Hence, the dispersion relation assumes the form (3.6). Moreover, only for this subsection, we relax the condition on \(g\) and allow it to have non-zero mean over \([0,2\pi]\). The solution to the periodic initial-boundary value problem (3.1)-(3.2) is given by \[\begin{split} u(t,x)&=\sum_{k}\widehat{f}(k)\cos(k^ {N}t)e^{\operatorname{i}kx}+\frac{1}{2\pi}\int_{0}^{2\pi}g(y)\mathrm{d}y\ t+ \sum_{k\neq 0}\frac{\widehat{g}(k)\sin(k^{N}t)}{k^{N}}e^{\operatorname{i}kx}\\ &:=\operatorname{I}(t,x)+\langle g\rangle\ t+\operatorname{II}(t, x),\end{split} \tag{3.19}\] where \(\langle g\rangle\) is the mean of \(g\) and \(\operatorname{I}(t,x)\), \(\operatorname{II}(t,x)\) correspond to the two Fourier series representations respectively. In the following, we derive an alternative representation of the term \(\operatorname{II}(t,x)\). In particular, we will show that \(\operatorname{II}(t,x)\) can be expressed as the time-evolution of the periodic convolution of the function \(g-\langle g\rangle\) with a polynomial of degree \(N\geq 2\). As it is known, see for example [24, Proposition 2.76], the convolution gains the regularity of the most regular function between the two involved. Consequently, we may deduce that at any time \(t>0\), either rational or irrational, the Fourier series representation of \(\operatorname{II}(t,x)\) defines a \(2\pi\)-periodic function of class \(C^{N}(\mathbb{R})\). Let us make all the above precise and define first the following family of polynomials on \([0,2\pi]\). **Definition 3.4**.: _Let \(N\geq 1\) be an integer. We denote by \(Q_{N}:[0,2\pi]\to\mathbb{C}\) the polynomial of degree \(N\), defined inductively by the formula_ \[Q_{N}(x)=\frac{(-\,\mathrm{i}\,)^{N}x^{N}}{(-1)^{N-1}N!}-\sum_{\ell=1}^{N-1} \frac{(-1)^{\ell-N}}{(-\,\mathrm{i}\,)^{\ell-N}}\frac{(2\pi)^{N-\ell}}{(N-\ell+ 1)!}Q_{\ell}(x). \tag{3.20}\] The crucial feature of the polynomial \(Q_{N}\) is the form of its Fourier coefficients, which are equal to \(k^{-N}\). As we shall shortly see, this will allow us to invoke the operation of the periodic convolution in the representation of \(\mathrm{II}(t,x)\). 
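Both the recursion (3.20) and the claim about the Fourier coefficients are straightforward to verify numerically; here is a minimal sketch (assuming NumPy, with an illustrative grid resolution and trapezoidal quadrature for the coefficient integral):

```
import numpy as np
from math import factorial

def Q(N, x):
    # evaluate Q_N of (3.20) on a grid x in [0, 2*pi] by direct recursion
    val = (-1j) ** N * x ** N / ((-1) ** (N - 1) * factorial(N))
    for l in range(1, N):
        c = (-1) ** (l - N) / (-1j) ** (l - N) \
            * (2 * np.pi) ** (N - l) / factorial(N - l + 1)
        val = val - c * Q(l, x)
    return val

x = np.linspace(0.0, 2 * np.pi, 200001)
for N in (1, 2, 3, 4):
    for k in (1, -1, 2, 5):
        vals = Q(N, x) * np.exp(-1j * k * x)
        # trapezoidal rule for (1/2pi) * integral over [0, 2pi]
        coeff = np.sum(vals[:-1] + vals[1:]) * (x[1] - x[0]) / 2 / (2 * np.pi)
        assert abs(coeff - float(k) ** (-N)) < 1e-6
```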
**Lemma 3.5**.: _Fix an integer \(N\geq 1\) and consider the polynomial \(Q_{N}:[0,2\pi]\to\mathbb{C}\). Then, for \(k\neq 0\), \(\widehat{Q_{N}}(k)=k^{-N}\)._ Proof.: The proof follows by induction on \(N\). It is easy to show that the statement holds for \(N=1\) and \(N=2\). We assume that \(\widehat{Q_{\ell}}(k)=k^{-\ell}\) for \(\ell=1,2,\ldots,N\), with \(N\geq 3\), and calculate the Fourier coefficients of \(Q_{N+1}\). Let \(k\neq 0\). Then, we have that \[\widehat{Q_{N+1}}(k) =\frac{1}{2\pi}\int_{0}^{2\pi}Q_{N+1}(y)e^{-\,\mathrm{i}\,ky}dy\] \[=\frac{(-i)^{N+1}}{2\pi(-1)^{N}(N+1)!}\int_{0}^{2\pi}y^{N+1}e^{- \,\mathrm{i}\,ky}dy-\sum_{\ell=1}^{N}\frac{(-1)^{\ell-N-1}}{(-\,\mathrm{i}\,)^ {\ell-N-1}}\frac{(2\pi)^{N+1-\ell}}{(N-\ell)!}\frac{1}{k^{\ell}}.\] However, a direct calculation shows that \[\int_{0}^{2\pi}y^{N+1}e^{-\,\mathrm{i}\,ky}dy=\frac{2\pi(-1)^{N}(N+1)!}{(-\, \mathrm{i}\,)^{N+1}k^{N+1}}+(N+1)!\sum_{\ell=1}^{N}\frac{(-1)^{\ell-1}}{(-\, \mathrm{i}\,)^{\ell}}\frac{(2\pi)^{N+2-\ell}}{(N-\ell)!}\frac{1}{k^{\ell}}. \tag{3.21}\] Substituting back for \(\widehat{Q_{N+1}}(k)\) we find that \[\widehat{Q_{N+1}}(k)=\frac{1}{k^{N+1}},\] which concludes the proof. We now turn our attention to the second ingredient needed for the alternative representation of \(\mathrm{II}(t,x)\). Thus, we recall the definition of the periodic convolution, see [36]. **Definition 3.6**.: _Let \(f\) and \(g\) be \(2\pi\)-periodic on \(\mathbb{R}\) and such that \(f\), \(g\in L^{1}(0,2\pi)\). Then the \(2\pi\)-periodic convolution of \(f\) and \(g\) is defined by_ \[f*g(x)=\frac{1}{2\pi}\int_{0}^{2\pi}f(x-y)g(y)dy,\quad x\in[0,2\pi]. \tag{3.22}\] From [36, Proposition 3.1], we know that \(f*g\) defines a \(2\pi\)-periodic continuous function whose Fourier coefficients are given by \(\widehat{f*g}(k)=\widehat{f}(k)\widehat{g}(k)\). Summarizing the above, we arrive at the following lemma which identifies \(\mathrm{II}(t,x)\) based on the convolution of \(g-\langle g\rangle\) with \(Q_{N}\). **Lemma 3.7**.: _Assume that \(g\) is of bounded variation over \([0,2\pi]\). Fix integer \(N\geq 2\) and consider the function_ \[v(x)=(g-\langle g\rangle)*Q_{N}(x),\quad x\in[0,2\pi]. \tag{3.23}\] _Then, at any fixed time \(t\geq 0\), we have that_ \[\mathrm{II}(t,x)=\sum_{k=-\infty}^{\infty}\widehat{v}(k)\sin(k^{N}t)e^{\, \mathrm{i}\,kx}. \tag{3.24}\] Proof.: Let \(\tilde{g}=g-\langle g\rangle\). Then, the Fourier coefficients of \(\tilde{g}\) are given by \[\widehat{\tilde{g}}(k)=\widehat{g}(k),\quad k\neq 0,\quad\widehat{\tilde{g}}(0)=0.\] Moreover, from Lemma 3.5, we know that for \(k\neq 0\), \(\widehat{Q_{N}}(k)=k^{-N}\). Hence, \[\widehat{v}(k)=\widehat{\tilde{g}}(k)\,\widehat{Q_{N}}(k)=\left\{\begin{array} []{ll}0,\quad k=0,&\\ \frac{\widehat{g}(k)}{k^{N}},\quad k\neq 0,&\end{array}\right.\] which implies that for any \(t>0\), \[\sum_{k=-\infty}^{\infty}\widehat{v}(k)\sin(k^{N}t)e^{\,{\rm i}\,kx}=\sum_{k \neq 0}\frac{\widehat{g}(k)}{k^{N}}\sin(k^{N}t)e^{\,{\rm i}\,kx}=\Pi(t,x).\] The validity of the revival effect at rational times \(t^{*}=\pi p/q\) follows again by Lemma 3.2 applied directly on \({\rm I}(t^{*},x)\) and \(\Pi(t^{*},x)\) in conjunction with Lemma 3.7. This is the context of the next theorem. **Theorem 3.8**.: _Suppose that equation (3.1) admits the monomial dispersion relation (3.6), the initial data \(f(x)\) and \(g(x)\) in (3.2) are of bounded variation. 
Then, at each rational time \(t^{*}=\pi p/q\), the solutions to the periodic initial-boundary value problem (3.1)-(3.2) take the form_ \[u(t^{*},x)=\sum_{j=0}^{2q-1}a_{j}\left(\frac{p}{q}\right)\,f\left(x-\frac{\pi j }{q}\right)+\langle g\rangle\,t^{*}+\sum_{j=0}^{2q-1}d_{j}\left(\frac{p}{q} \right)\,v\left(x-\frac{\pi j}{q}\right), \tag{3.25}\] _where \(v(x)=(g-\langle g\rangle)*Q_{N}(x)\) and the coefficients \(a_{j},\,d_{j}\in\mathbb{C}\), \(j=0,\ldots,2q-1\), are certain constants depending on \(p,q\)._ As a consequence of Theorem 3.8, equivalently of Theorem 3.1, the solution at rational times is, at least, piecewise continuous, given that \(f(x)\) has finitely many jump discontinuities. More specifically, the first term in (3.25), \({\rm I}(t^{*},x)\), corresponds to the revival of the initial function \(f(x)\), whereas the third term, \(\Pi(t^{*})\), is the revival of the \(2\pi\)-periodic, \(C^{N}(\mathbb{R})\) function \(v(x)=(g-\langle g\rangle)*Q_{N}(x)\) and thus, together with the constant term, \(\langle g\rangle t^{*}\), a \(2\pi\)-periodic, \(C^{N}(\mathbb{R})\) function. Therefore, the solution is given as the sum of the revival of the initial condition \(f(x)\) and a more regular function, which ensures the revival of the initial jump discontinuities of \(f(x)\). ### Integral polynomial dispersion relation This subsection is concerned with the case that \(\sqrt{\phi(k)}\) is an integral polynomial \(P(k)\). The corresponding solution takes the form \[u(t,x)=\sum_{k}\widehat{f}(k)\cos(P(k)t)e^{\,{\rm i}\,kx}+\sum_{k\neq 0}\frac{ \widehat{g}(k)\sin(P(k)t)}{P(k)}e^{\,{\rm i}\,kx}:={\rm I}(t,x)+\Pi(t,x). \tag{3.26}\] Firstly, using (3.10) again, we obtain that at each rational time \(t^{*}=\pi p/q\), the first term in (3.26) satisfies \[{\rm I}(t^{*},x)=\sum_{j=0}^{2q-1}a_{j}\left(\frac{p}{q}\right)\,f\left(x- \frac{\pi j}{q}\right).\] Next, notice that \[\left|\,\Pi(t,x)-\sum_{k\neq 0}\frac{\widehat{g}(k)\sin(P(k)t)}{c_{N}k ^{N}}e^{\,{\rm i}\,kx}\,\right| \leq\sum_{k\neq 0}\frac{|c_{N-1}k^{N-1}+\cdots+c_{1}k+c_{0}|}{|c_{N}P(k)k ^{N}|}|\widehat{g}(k)| \tag{3.27}\] \[\lesssim\sum_{k\neq 0}\frac{1}{k^{N+2}},\] where the fact that \(g(x)\) is of bounded variation has been used in the last inequality and \(c_{N}\) denotes the coefficient of the highest power of \(P(k)\). Since \(N\geq 2\), the final series is absolutely convergent, whose sum is a constant. Thus, the above estimate implies that the qualitative behavior of the second term relies crucially on that of the series \[\sum_{k\neq 0}\frac{\widehat{g}(k)\sin(P(k)t)}{c_{N}k^{N}}e^{\,{\rm i}\,kx}. \tag{3.28}\] While, as for (3.28), a direct generalization of the proof of Theorem 3.1 shows that, at each rational time \(t^{*}=\pi p/q\), \[\sum_{k\neq 0}\frac{\widehat{g}(k)\sin(P(k)t^{*})}{c_{N}k^{N}}e^{\,{\rm i} \,kx}=\,{\rm i}\,^{N}\sum_{j=0}^{2q-1}\bar{b}_{j}\left(\frac{p}{q}\right)\,G \left(x-\frac{\pi j}{q}\right)+\sum_{j=0}^{N-1}\bar{C}_{j}x^{j},\] where the coefficients \(\bar{b}_{0},\ldots,\bar{b}_{2q-1}\) are determined by \[\sum_{k}\sin\left(P(k)t^{*}\right)e^{\,{\rm i}\,kx}=\sum_{j=0}^{2q-1}\bar{b}_ {j}\left(\frac{p}{q}\right)\delta\left(x-\frac{\pi j}{q}\right),\quad\text{ and}\quad\bar{C}_{j}=\sum_{k\neq 0}\frac{\widehat{g}(k)\sin(P(k)t^{*})}{c_{N}j!k^{N-j}}.\] We thus conclude that, at each rational time, the series (3.28) admits the same discontinuities and revival structure as the second summation in solution (3.12). 
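For concreteness, these profiles can be visualized by direct truncated-series evaluation of (3.26); the following is a small sketch (assuming NumPy; the polynomial \(P(k)=k^{2}+k+1\) and the truncation level are illustrative), using \(\widehat{\sigma}(k)=2\,\mathrm{i}/(\pi k)\) for odd \(k\) as both data coefficients, consistent with the sine series of the step function used in Section 2:

```
import numpy as np

def u_series(t, x, K=2000):
    # odd wavenumbers only: the even Fourier coefficients of sigma vanish
    ks = np.concatenate([np.arange(-K + 1, 0, 2), np.arange(1, K, 2)])
    P = ks.astype(float) ** 2 + ks + 1          # illustrative integral P(k)
    ghat = 2j / (np.pi * ks)                    # sigma-hat(k) for odd k
    amp = ghat * (np.cos(P * t) + np.sin(P * t) / P)
    return (amp[:, None] * np.exp(1j * np.outer(ks, x))).real.sum(axis=0)

x = np.linspace(0.0, 2 * np.pi, 1024)
u_rational = u_series(np.pi / 3, x)    # piecewise, revival-type profile
u_irrational = u_series(1.0, x)        # continuous, fractal-like profile
```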
All in all, we may safely draw the conclusion that, in the present case, the discontinuities of the solution will be determined by the initial data. For instance, if the initial data are the step function \(\sigma(x)\), as in (2.2), by Corollary 3.3, \(u(t,x)\) will be a \(C^{N-1}\) curve at each \(t_{k}^{0}=(2k-1)\pi/2,\ k\in\mathbb{Z}^{+}\), and exhibit jump discontinuities and revival profile at other rational times. ### Non-polynomial dispersion relation If \(\sqrt{\varphi(k)}\) is not a polynomial, we distinguish two cases. The first one assumes that, for large wave numbers, the dispersion relation is asymptotically close to an integral polynomial \(P(k)\). Hence, suppose \[\sqrt{\varphi(k)}\sim P(k)+O(k^{-1}),\quad\text{as}\quad|k|\ \to\infty.\] Firstly, under the assumption that \(f(x)\) is of bounded variation, the first summation in (3.4) satisfies \[\left|\,\sum_{k}\widehat{f}(k)\cos\left(\sqrt{\varphi(k)}\,t \right) e^{\,{\rm i}\,kx}-\sum_{k}\widehat{f}(k)\cos(P(k)t)e^{\,{\rm i}\,kx}\, \right| \tag{3.29}\] \[\leq\sum_{k}|\widehat{f}(k)|\,|\cos\left(\sqrt{\varphi(k)}\,t \right)-\cos(P(k)t)|\lesssim\sum_{k\neq 0}\frac{1}{k^{2}},\] where the mean-value theorem has been used in the last inequality. Next, for the second summation, one has \[\left|\,\sum_{k\neq 0}\frac{\widehat{g}(k)}{\sqrt{\varphi(k)}}\sin \left(\sqrt{\varphi(k)}\,t\right)e^{\,\mathrm{i}\,kx}-\sum_{k\neq 0}\frac{ \widehat{g}(k)}{k^{N}}\sin(P(k)t)e^{\,\mathrm{i}\,kx}\,\right|\] \[\leq\left|\,\sum_{k\neq 0}\frac{\widehat{g}(k)}{\sqrt{\varphi(k)}} \sin\left(\sqrt{\varphi(k)}\,t\right)e^{\,\mathrm{i}\,kx}-\sum_{k\neq 0}\frac{ \widehat{g}(k)}{P(k)}\sin\left(\sqrt{\varphi(k)}\,t\right)e^{\,\mathrm{i}\,kx}\,\right|\] \[\qquad\qquad\qquad+\left|\,\sum_{k\neq 0}\frac{\widehat{g}(k)}{P(k)} \sin\left(\sqrt{\varphi(k)}\,t\right)e^{\,\mathrm{i}\,kx}-\sum_{k\neq 0}\frac{ \widehat{g}(k)}{P(k)}\sin(P(k)t)e^{\,\mathrm{i}\,kx}\,\right|\] \[\qquad\qquad+\left|\,\sum_{k\neq 0}\frac{\widehat{g}(k)}{P(k)} \sin(P(k)t)e^{\,\mathrm{i}\,kx}-\sum_{k\neq 0}\frac{\widehat{g}(k)}{k^{N}} \sin(P(k)t)e^{\,\mathrm{i}\,kx}\right|\] \[:=\Pi^{(1)}+\Pi^{(2)}+\Pi^{(3)}.\] We directly estimate the above three terms as follows: \[\Pi^{(1)} \leq\sum_{k\neq 0}\frac{|O(k^{-1})\widehat{g}(k)|}{|P(k)\varphi(k )|}\lesssim\sum_{k\neq 0}\frac{1}{k^{2N+2}},\] \[\Pi^{(2)} \leq\sum_{k\neq 0}\frac{|\sin(\sqrt{\varphi(k)}t)-\sin(P(k)t)|| \widehat{g}(k)|}{|P(k)|}\lesssim\sum_{k\neq 0}\frac{1}{k^{N+2}},\] \[\Pi^{(3)} \leq\sum_{k\neq 0}\frac{|c_{N-1}k^{N-1}+\ldots+c_{0}||\widehat{g} (k)|}{|P(k)k^{N}|}\lesssim\sum_{k\neq 0}\frac{1}{k^{N+2}}.\] We thus conclude that \[\left|\,\sum_{k\neq 0}\frac{\widehat{g}(k)}{\sqrt{\varphi(k)}}\sin(\sqrt{ \varphi(k)}t)e^{\,\mathrm{i}\,kx}-\sum_{k\neq 0}\frac{\widehat{g}(k)}{k^{N}} \sin(P(k)t)e^{\,\mathrm{i}\,kx}\,\right|\lesssim\sum_{k\neq 0}\frac{1}{k^{N+2}},\] which, together with the estimate (3.29) imply that, in the present case, the solution \(u(t,x)\) will exhibit the same asymptotic behavior as the polynomial case. The times at which the solution (approximately) exhibits revivals are densely embedded in the times at which it has a continuous, fractal profile. For example, the Boussinesq equation \[u_{tt}+\frac{1}{3}u_{xxxx}-u_{xx}+\frac{3}{2}\alpha(u^{2})_{xx}=0, \tag{3.30}\] has the linear dispersion relation \(\omega(k)=\pm k\sqrt{\frac{1}{3}k^{2}+1}\), and its leading order asymptotics is \(\pm\frac{1}{\sqrt{3}}k^{2}\). 
The solutions of the periodic initial-boundary value problem for the linearization of equation (3.30) subject to the step function initial data (2.2) at several representative rational and irrational times are plotted in Figure 6. As illustrated in these figures, the solutions exhibit the (approximately) revival profile at rational times, and the overall jump-discontinuity and revival structure is very similar to that of the linear beam equation. On the other hand, if the equation admits a non-polynomial dispersion relation with a non-integral asymptotic exponent, i.e., \[\sqrt{\varphi(k)}\sim|k|^{\alpha},\quad 2\leq\alpha\notin\mathbb{Z},\quad\text{as}\quad|k|\to\infty,\] we still estimate the two summations in (3.4) separately. As far as the first one is concerned, as studied in [7], its overall qualitative behavior is entirely determined by the asymptotic exponent \(\alpha\). In particular, when \(2\leq\alpha\) is not an integer, only fractal solution profiles will be observed at every time. As for the second term, observe that, in the present situation, \[\left|\,\sum_{k\neq 0}\frac{\widehat{g}(k)}{\sqrt{\varphi(k)}} \sin(\sqrt{\varphi(k)}t)e^{\,\mathrm{i}\,kx}-\sum_{k\neq 0}\frac{\widehat{g}(k) }{|k|^{[\alpha]+1}}\sin(|k|^{\alpha}t)e^{\,\mathrm{i}\,kx}\,\right|\] \[\leq\sum_{k\neq 0}\frac{|k|^{[\alpha]+1}-|k|^{\alpha}}{|k|^{ \alpha}|k|^{[\alpha]+1}}|\widehat{g}(k)|\lesssim\sum_{k\neq 0}\frac{|k|^{ \alpha^{\prime}}\ln|k|}{|k|^{\alpha}|k|^{[\alpha]+1}}|\widehat{g}(k)|,\] for some \(\alpha<\alpha^{\prime}<[\alpha]+1\). Note that if \(g(x)\) is of bounded variation, the estimate can only be obtained by using \(\sum_{k\neq 0}|k|^{-[\alpha]-1}\). In view of this situation, we need to further require that \(g(x)\) satisfies \[\int\,e^{\,\mathrm{i}\,kx}\;\mathrm{d}g\sim O(k^{(\alpha-\alpha^{\prime})- \delta})\quad\text{for all}\quad\delta>0.\] Under this hypothesis, the above estimate is bounded by \(\sum_{k\neq 0}|k|^{-[\alpha]-2}\), and hence the second term is completely determined by the series \[\sum_{k\neq 0}\frac{\widehat{g}(k)}{|k|^{[\alpha]+1}}\sin(|k|^{\alpha}t)e^{ \,\mathrm{i}\,kx},\] which, compared with the first term, will admit better regularity. Figure 6: The solutions to the periodic initial-boundary value problem for the linear Boussinesq equation. We conclude that, in the present case, the solutions will retain a fractal profile at all times. Results confirming this are displayed in Figure 7, which are the plots of the solutions to the periodic Riemann problem for the case of the three-halves dispersion relation \(\omega(k)=\pm|k|^{\frac{3}{2}}\) corresponding to the equation \[u_{tt}=\mathcal{H}[u_{xxx}], \tag{3.31}\] where \(\mathcal{H}\) denotes the periodic Hilbert transform, \[\mathcal{H}[f](x)=\frac{1}{\pi}\sum_{k=-\infty}^{+\infty}\int_{-\pi}^{\pi}\, \frac{f(y)}{x-y+2\pi k}\,\mathrm{d}y=\frac{1}{2\pi}\int_{-\pi}^{\pi}\,\cot\frac {x-y}{2}f(y)\,\mathrm{d}y.\] ## 4. Numerical simulation of dispersive revival for nonlinear equations In this section, we will explore the effect of periodicity on rough initial data for nonlinear equations in the context of the nonlinear defocusing cubic beam equation of the form \[u_{tt}+u_{xxxx}+\mu\,u+\varepsilon\,|u|^{2}u=0, \tag{4.1}\] which is motivated by the nonlinear Boussinesq equation, see [34] for details. 
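Before turning to the nonlinear experiments, we remark that models such as (3.31) are convenient to realize numerically through the Fourier multiplier of \(\mathcal{H}\). The following is a minimal sketch (assuming the standard symbol \(-\,\mathrm{i}\,\mathrm{sign}(k)\) for the cotangent kernel above, with an illustrative mode count):

```
import numpy as np

M = 512
x = 2 * np.pi * np.arange(M) / M
k = np.fft.fftfreq(M, d=1.0 / M)      # integer wavenumbers

def hilbert_periodic(u):
    # periodic Hilbert transform via its Fourier multiplier -i*sign(k)
    return np.fft.ifft(-1j * np.sign(k) * np.fft.fft(u)).real

# symbol check for (3.31): on e^{ikx}, H[u_xxx] acts as -|k|^3,
# consistent with omega(k) = +/- |k|^{3/2}
u = np.cos(5 * x)
u_xxx = np.fft.ifft((1j * k) ** 3 * np.fft.fft(u)).real
print(np.allclose(hilbert_periodic(u_xxx), -(5 ** 3) * u))  # True
```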
We will numerically approximate the solutions to the periodic initial-boundary value problem for the beam equation (4.1) subject to periodic boundary conditions on \([-\pi,\,\pi]\), with the same step function (2.2) as initial data. The goal of this section is to investigate to what extent revival and fractalization phenomena persist into the nonlinear regime. A basic numerical technique, the Fourier spectral method, will be employed to approximate the solution to this initial-boundary value problem. As we will see, our numerical studies strongly indicate that the dispersive revival phenomenon admitted by the associated linearized equation will persist into the nonlinear regime. However, some of the qualitative details -- for instance, the convexity of the curves between the jump discontinuities -- will be affected by the nonlinearity, in contrast to what was observed in the unidirectional case. Figure 7: The solutions to the periodic Riemann problem for the linear equation (3.31). ### The Fourier spectral method Let us first summarize the basic ideas behind the Fourier spectral method for approximating the solutions to nonlinear equations. One can refer to [19, 39] for details of the method. Formally, consider the initial value problem for a nonlinear evolution equation \[u_{t}=K[u],\qquad u(0,x)=u_{0}(x), \tag{4.2}\] where \(K\) is a differential operator in the spatial variable with no explicit time dependence. Suppose \(K\) can be written as \(K=L+N\), in which \(L\) is a linear operator characterized by its Fourier transform \(\widehat{Lu}(k)=\omega(k)\widehat{u}(k)\), while \(N\) is a nonlinear operator. We use \(\mathcal{F}[\cdot]\) and \(\mathcal{F}^{-1}[\cdot]\) to denote the Fourier transform and the inverse Fourier transform of the indicated function, respectively, so that the Fourier transform of equation (4.2) takes the form \[\widehat{u}_{t}=\omega(k)\widehat{u}+\mathcal{F}\left[\,N(\mathcal{F}^{-1}[ \widehat{u}])\,\right].\] Firstly, periodicity and discretization of the spatial variable enable us to apply the fast Fourier transform (FFT) based on, for instance, \(512\) space nodes, and arrive at a system of ordinary differential equations (ODEs), which we solve numerically. For simplicity, we adopt a uniform time step \(0<\Delta t\ll 1\), and seek to approximate the solution \(\widehat{u}(t_{n})\) at the successive times \(t_{n}=n\Delta t\) for \(n=0,1,\ldots\). The classic fourth-order Runge-Kutta method, which has a local truncation error of \(O((\Delta t)^{5})\), is adopted, and its iterative scheme is given by \[\widehat{u}(t_{n+1})=\widehat{u}(t_{n})+\frac{\Delta t}{6}(f_{k_{1}}+2f_{k_{2}}+2f_{k _{3}}+f_{k_{4}}),\quad n=0,1,\ldots,\quad\widehat{u}(t_{0})=\widehat{u}_{0}(k),\] where \[\begin{aligned} f_{k_{1}}&=f(t_{n},\widehat{u}(t_{n})), & f_{k_{2}}&=f(t_{n}+\Delta t/2,\,\widehat{u}(t_{n})+\Delta t\,f_{k_{1}}/2),\\ f_{k_{3}}&=f(t_{n}+\Delta t/2,\,\widehat{u}(t_{n})+\Delta t\,f_{k_{2}}/2), & f_{k_{4}}&=f(t_{n}+\Delta t,\,\widehat{u}(t_{n})+\Delta t\,f_{k_{3}}),\end{aligned}\] with \[f(t,\widehat{u})=\omega(k)\widehat{u}+\mathcal{F}\left[\,N(\mathcal{F}^{-1}[ \widehat{u}])\,\right].\] Accordingly, the approximate solution \(u(t,x)\) can be obtained through the inverse discrete Fourier transform. 
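A compact realization of this scheme might look as follows (a sketch assuming NumPy; the names and node count are illustrative):

```
import numpy as np

M = 512                                    # number of spatial nodes
x = 2 * np.pi * np.arange(M) / M
k = np.fft.fftfreq(M, d=1.0 / M)           # integer wavenumbers

def rk4_step(f, t, uhat, dt):
    # one classic fourth-order Runge-Kutta step for uhat_t = f(t, uhat)
    f1 = f(t, uhat)
    f2 = f(t + dt / 2, uhat + dt * f1 / 2)
    f3 = f(t + dt / 2, uhat + dt * f2 / 2)
    f4 = f(t + dt, uhat + dt * f3)
    return uhat + dt * (f1 + 2 * f2 + 2 * f3 + f4) / 6
```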
Since the Runge-Kutta method is designed for first order systems of ordinary differential equations, we convert our bidirectional second order in time system (4.1) into a first order system by setting \[v=u_{t},\] and hence the beam equation (4.1) is mapped to the following evolutionary system \[u_{t}=v,\qquad v_{t}=-u_{xxxx}-\mu\,u-\varepsilon\,|u|^{2}u. \tag{4.3}\] The Fourier transform for (4.3) takes the form \[\widehat{u}_{t}=\widehat{v},\qquad\widehat{v}_{t}=-k^{4}\widehat{u}-\mu\, \widehat{u}-\varepsilon\,\mathcal{F}\left[\,|\mathcal{F}^{-1}[\widehat{u}]|^{2 }\mathcal{F}^{-1}[\widehat{u}]\,\right]. \tag{4.4}\] Using the classic fourth-order Runge-Kutta method to solve the resulting system (4.4), and then taking the inverse discrete Fourier transform, one can obtain the numerical solution to the periodic initial-boundary value problem for the nonlinear beam equation (4.1). ### Numerical Results Figure 8 and Figure 9 display some results from our numerical approximations of the solutions to the nonlinear beam equation (4.1) with periodic boundary conditions and initial conditions (2.2) at some representative rational and irrational times. Comparing the graphs in these two figures with the graphs corresponding to the same times in Figure 1 and Figure 2, we find that, at each irrational time, all sets of plots are fairly similar to those from the associated linear beam equation, and the solution still takes a continuous, non-differentiable profile. When it comes to the rational times, the same consistency of the jump discontinuities between the nonlinear and linear equations emerges as well. Meanwhile, closer inspection reveals some differences. The most noticeable is that the shape of the curves between jump discontinuities will change with time evolution. More precisely, the graphs corresponding to \(t=\pi/5\) show that the differences of the solution profile between the linear and nonlinear equations are slight, except that, in the nonlinear case, the curves between the jumps become closer to constants. Further, as the power \(p\) decreases with increasing time, the variation in the shape of the curves from linear to nonlinear becomes greater and greater. As illustrated in the graphs corresponding to \(t=\pi/3\) and \(t=\pi/2\), the convexity of the curves has completely changed. These differences in the qualitative behavior of the solutions exhibit the effect of the nonlinearity. Furthermore, in order to better understand the effect of the nonlinearity, we perform further numerical experiments for smaller values of the coefficients \(\varepsilon\) and \(\mu\) in equation (4.1). Referring to Figure 10, it appears that the solution at \(t=\pi/3\) tends to the linear profile as \(\varepsilon\) tends to zero. Meanwhile, the shape of the curves between jump discontinuities will change as \(\varepsilon\) increases, the most noticeable variation being the changes in convexity. More unexpected phenomena appear when \(t=\pi/2\). We find that the variation of the profile of the solution will be affected not only by the nonlinear term but also by the linear term involving \(u\). The plots displayed in Figure 11, corresponding to some representative coefficients \(\varepsilon\) and \(\mu\), suggest that the solution profile, including its convexity and the values of its peak and trough, will be affected by the combination of both coefficients \(\varepsilon\) and \(\mu\). Figure 8: The solutions to the periodic initial-boundary value problem for the beam equation with \(\mu=\varepsilon=1\) at rational times. 
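Continuing the sketch above, the transformed system (4.4) can be marched in time as follows. This is only an illustration: the step-function convention, grid, time step, and final time are assumptions, and the explicit RK4 step must respect the \(O(k_{\max}^{-2})\) stability restriction imposed by the oscillation frequencies \(k^{2}\) of the linearized system.

```
mu, eps = 1.0, 1.0

def beam_rhs(t, state):
    # state stacks (uhat, vhat) for the first-order system (4.4)
    uhat, vhat = state
    u = np.fft.ifft(uhat)
    cubic = np.fft.fft(np.abs(u) ** 2 * u)   # F[|u|^2 u], pseudospectral
    return np.array([vhat, -k ** 4 * uhat - mu * uhat - eps * cubic])

sigma0 = np.where(x < np.pi, -1.0, 1.0)      # an illustrative step datum
state = np.array([np.fft.fft(sigma0), np.fft.fft(sigma0)])  # f = g = sigma
dt, t_final = 1e-5, np.pi / 3
for n in range(int(t_final / dt)):
    state = rk4_step(beam_rhs, n * dt, state, dt)
u_num = np.fft.ifft(state[0]).real
```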
Figure 9: The solutions to the periodic initial-boundary value problem for the beam equation with \(\mu=\varepsilon=1\) at irrational times. Recall that the numerical experiments on the periodic initial-boundary value problem for the KdV equation, the NLS equation and the multi-component KdV system have been previously analyzed in [8, 42], which show that, in the unidirectional regime, the effect of the nonlinear flow can be regarded as a perturbation of the linearized flow. When it comes to the bidirectional dispersive equations, our numerical simulation strongly indicates that the dichotomy of revival/fractalization at rational/irrational times seen in the linearization will persist into the nonlinear regime, and that the finite "revival" nature of the solutions at rational times is not affected by the nonlinearity; however, the influence of the nonlinearity on the qualitative behavior of the solutions is much greater than in the unidirectional setting. Motivated by this observation, the formulation of theorems and rigorous proofs concerning this novel revival phenomenon in the nonlinear bidirectional regime, especially for the nonlinear beam and Boussinesq equations, is eminently worth further study. **Acknowledgments.** Part of Farmakis' research was conducted during his Ph.D. studies, which were supported by the Maxwell Institute Graduate School in Analysis and its Applications, a Centre for Doctoral Training funded by EPSRC (grant EP/L016508/01), the Scottish Funding Council, Heriot-Watt University and the University of Edinburgh. Kang's research was supported by NSFC (Grants 11631007 and 11871395) and the Basic Science Program of Shaanxi Province (Grant 2019JC-28). Qu's research was supported by NSFC (Grants 11971251, 11631007 and 12111530003). Yin's research was supported by the NSFC (Grant 11631007). Figure 11: The solutions to the periodic initial-boundary value problem for the beam equation at \(t=\pi/2\). Figure 10: The solutions to the periodic initial-boundary value problem for the beam equation at \(t=\pi/3\).
2309.17341
MixQuant: Mixed Precision Quantization with a Bit-width Optimization Search
Quantization is a technique for creating efficient Deep Neural Networks (DNNs), which involves performing computations and storing tensors at lower bit-widths than f32 floating point precision. Quantization reduces model size and inference latency, and therefore allows for DNNs to be deployed on platforms with constrained computational resources and real-time systems. However, quantization can lead to numerical instability caused by roundoff error which leads to inaccurate computations and therefore, a decrease in quantized model accuracy. Similarly to prior works, which have shown that both biases and activations are more sensitive to quantization and are best kept in full precision or quantized with higher bit-widths, we show that some weights are more sensitive than others which should be reflected on their quantization bit-width. To that end we propose MixQuant, a search algorithm that finds the optimal custom quantization bit-width for each layer weight based on roundoff error and can be combined with any quantization method as a form of pre-processing optimization. We show that combining MixQuant with BRECQ, a state-of-the-art quantization method, yields better quantized model accuracy than BRECQ alone. Additionally, we combine MixQuant with vanilla asymmetric quantization to show that MixQuant has the potential to optimize the performance of any quantization technique.
Eliska Kloberdanz, Wei Le
2023-09-29T15:49:54Z
http://arxiv.org/abs/2309.17341v1
# MixQuant: Mixed Precision Quantization with a Bit-width Optimization Search ###### Abstract Quantization is a technique for creating efficient Deep Neural Networks (DNNs), which involves performing computations and storing tensors at lower bit-widths than f32 floating point precision. Quantization reduces model size and inference latency, and therefore allows for DNNs to be deployed on platforms with constrained computational resources and real-time systems. However, quantization can lead to numerical instability caused by roundoff error which leads to inaccurate computations and therefore, a decrease in quantized model accuracy. Similarly to prior works, which have shown that both biases and activations are more sensitive to quantization and are best kept in full precision or quantized with higher bit-widths, we show that some weights are more sensitive than others which should be reflected on their quantization bit-width. To that end we propose _MixQuant_, a search algorithm that finds the optimal custom quantization bit-width for each layer weight based on roundoff error and can be combined with any quantization method as a form of pre-processing optimization. We show that combining _MixQuant_ with BRECQ, a state-of-the-art quantization method, yields better quantized model accuracy than BRECQ alone. Additionally, we combine _MixQuant_ with vanilla asymmetric quantization to show that _MixQuant_ has the potential to optimize the performance of any quantization technique. Keywords:quantization efficient inference mixed precision. ## 1 Introduction Quantization is a method for mapping continuous values to a set of discrete values. The goal of neural network quantization is to perform computations and store tensors at lower bit-widths than floating point precision to reduce model size and inference latency while maintaining model accuracy, which allows for deploying DNNs on platforms with constrained computational resources, e.g.: real time inference on mobile devices. Quantization can be performed during training or inference. In this paper we focus on quantized inference, specifically post-training quantization, which quantizes a full precision trained model without the need for re-training or fine-tuning. Quantized inference can be either simulated or integer-only, and in this paper we focus on simulated quantization, where the quantized model parameters are stored in low-precision, but the mathematical operations on them (e.g. matrix multiplications and additions) are performed with floating point arithmetic [1]. In Tensorflow, PyTorch, and HuggingFace (QDQBERT model), simulated quantization is referred to as fake quantization. This means that the DNN parameters are first quantized from f32 to, for example, int4, and then dequantized back to f32 to perform the forward pass executed during inference. We show that the roundtrip process of quantizing and dequantizing the model parameters leads to roundoff error, which may lead to numerical instability. Similarly to prior works, which have shown that both biases and activations are more sensitive to quantization and are best kept in full precision or quantized with higher bit-widths [2], we show that some weights are more sensitive than others which should be reflected on their quantization bit-width. 
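For instance, PyTorch exposes this quantize–dequantize round trip directly; a minimal sketch (the scale and zero-point here are illustrative values for an int8 grid over the range \([-1, 2]\)):

```
import torch

w = torch.tensor([-1.0, 0.01, 1.0, 2.0])
# snap to the int8 grid and immediately dequantize; storage stays in f32
w_fq = torch.fake_quantize_per_tensor_affine(w, 0.0118, -43, -128, 127)
print(w_fq)  # values are rounded to multiples of the scale
```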
To that end we propose _MixQuant_, a search algorithm that finds the optimal quantization bit-width from int2, int3, int4, int5, int6, int7, and int8 for each layer weight based on roundoff error and can be combined with any quantization method as a form of pre-processing optimization. We show that combining _MixQuant_ with BRECQ [3], a state-of-the-art quantization method, yields better quantized model accuracy than BRECQ alone. Additionally, we combine _MixQuant_ with vanilla asymmetric quantization to show that _MixQuant_ has the potential to optimize the performance of any quantization technique. _MixQuant_ has three main benefits. First, _MixQuant_ is a component of the quantization process, which can be leveraged to find optimal quantization mixed precision bit-widths that can be plugged into any quantization method to optimize its performance. Second, _MixQuant_ is linear and runs in a matter of seconds, which makes it practical. Third, combining _MixQuant_ with BRECQ, a state-of-the-art quantization method yields better quantized model accuracy than BRECQ alone, OMSE [4], AdaRound [5], AdaQuant [6], and Bit-Split [7]. ## 2 Related Work ### Neural Network Quantization Neural network quantization can be applied to training [8, 9, 10, 2, 11] or inference. There are two paradigms in quantized DNN inference: post-training quantization (PTQ) and quantization-aware training (QAT) [12, 13]. In contrast to PTQ, QAT requires that the f32 model is retrained while simulating quantized inference in the forward pass. While _MixQuant_ can be integrated with either, we focus on PTQ which does not require any re-training. [14] and [3] are amongst the recent state-of-the-art post training quantization works. [14] introduce AdaQuant, which finds optimal quantization for both weights and activations and is based on minimizing the error between quantized layer outputs and f32 layer outputs. This approach is similar to _MixQuant_; however, _MixQuant_ finds the optimal quantization bit-widths based on quantization error (QE) minimization, while AdaQuant treats the bit-width as a constant and quantizes all weights and activations using the same bit-width (either _int8_ or int4)_. [3] propose BRECQ, a quantization method based on DNN block reconstruction. [5] propose AdaRound, adaptive rounding for weights, which achieves better accuracy than rounding to the nearest. They formulate the rounding procedure as an optimization problem that minimizes the expected difference between model loss with and without weights quantization perturbation. [15] develop a method based on constraining all quantization levels as the sum of Powers-of-Two terms, [7] propose a Bit-Split and Stitching framework (Bit-split), [16] study the effect of quantization on the structure of the loss landscape, [17] develop ACIQ-Mix, a 4 bit convolutional neural network quantization, and [18] perform zero-shot quantization ZeroQ based on distilling a dataset that matches the input data distribution. Quantization originated with convolutional neural networks, but it has been extended to natural language processing neural networks as well. [19] propose differentiable product quantization, a learnable compression for embedding layers in DNNs. [20] study an integer-only quantization scheme for transformers, where the entire inference is performed with pure integer arithmetic. Other works studied hardware optimization for quantization or the relationship between quantization and adversarial robustness. 
[21] focus on performance optimization for Low-bit Convolution on ARM CPU and NVIDIA GPU. [22] investigate quantized models' adversarial robustness. They find that when an adversarially trained model is quantized to different precisions in a post-training manner, the associated adversarial attacks transfer poorly between different precisions. ### Mixed Precision Quantization In this paper we focus on mixed precision quantization. There are only a few prior works that focus on mixed precision quantization since most focus on single precision quantization, where the quantization bit-width of all weights are uniform and therefore; treated as a constant. [23] propose a framework for determining the quantization policy with mixed precision and reinforcement learning, but compared to _MixQuant_ it requires significantly more overhead (hardware simulators and reinforcement learning). [24] focuses on mixed precision quantization of activations and distinguishes between key and non-key activations to assign 8-bit and 4-bit precision respectively. In contrast to MixQuant, which searches for weights mixed precision from 8 to 2 bits, [24] is limited to a choice between 4 and 8 bits and applies only to activations while all weights are quantized with 8-bit precision. The primary focus of [25] is neural architecture search, which can also be used for mixed precision quantization. However, their search on ResNet 18 for ImageNet takes 5 hours, while _MixQuant_ runs in order of a few seconds. [26] use single precision for weights, where the mixed precision is represented only by selecting a different bit-width for weights than activations. [26] is the most most recent, and we show that _MixQuant_ yields better accuracy. Another mixed precision quantization work that we build on is [27], who identify optimal bit-width allocation across DNN layers. However, there are two primary differences between [27] and our work: (1) [27] focus on fixed-point precision, not integer precision, (2) [27] a different method for finding layer bit-widths based on predicted signal-quantization-to-noise -ratio. Moreover, while they find that on CIFAR-10 convolutional DNN is able to achieve 20 % model size reduction; their AlexNet experiments on ImageNet-1000 achieve less than 1% model reduction. In this work we are able to successfully leverage mixed precision optimal bit-width allocation on ImageNet-1000 models. ## 3 Quantization and Numerical Instability Quantization involves lowering the bit-width of a numeric tensor representation, which can cause numerical instability that leads to inaccurate outputs [28]. In general, numerical instability arises due to two types of numerical errors: (1) roundoff errors and (2) truncation errors. Roundoff errors are caused by approximating real numbers with finite precision, while truncation errors are caused by approximating an a mathematical process such as the Taylor series. We argue that quantization can significantly amplify the roundoff error, which leads to a degradation in quantized DNN accuracy. DNN training and inference is typically performed in f32 precison, which already introduces roundoff errors, because it has only 32 bits to represent real numbers. Specifically, f32 can represent a zero and numbers from -3.40282347E+38 to -1.17549435E-38 and from 1.17549435E-38 to 3.40282347E+38, but numbers outside of this range are not representable in f32. 
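The finite range and precision of f32 are easy to observe directly; a small illustration (assuming NumPy):

```
import numpy as np

print(np.finfo(np.float32).max)            # 3.4028235e+38
print(np.float32(3.4e38) * np.float32(2))  # inf: magnitude not representable
print(np.float32(1e-46))                   # 0.0: underflows past the f32 range
print(float(np.float32(0.01)))             # 0.009999999776482582: stored inexactly
```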
In simulated quantization the process of quantizing DNN parameters from f32 to int (e.g., int4) and dequantizing them back to f32 to perform the matrix multiply and add (e.g., inputs * weights + biases) can lead to a loss of precision. Listing 1 shows an example of a simple simulated quantized inference, where the weights tensor is quantized to int2 and its subsequent dequantization back to f32 has a roundoff error. The roundoff error occurs in the second element of the weight tensor, which becomes 0.0 after dequantization while its true original value is 0.01. This error caused by quantization then propagates further: the computation \(inputs*weights+biases\) returns 1.0000e-05 instead of 1.2000e-05 in the second element of the result tensor, as the printed output below the listing shows.

```
import torch

def scale(r, bits):
    min_r = r.min()
    max_r = r.max()
    qmin = -1 * (2 ** (bits - 1))
    qmax = 2 ** (bits - 1) - 1
    scale_r = (max_r - min_r) / (qmax - qmin)
    return scale_r

def zero_point(r, bits):
    scale_r = scale(r, bits)
    min_r = r.min()
    qmin = -1 * (2 ** (bits - 1))
    zpt_r = qmin - int(min_r / scale_r)
    return zpt_r

def quant(r, bits):
    z = zero_point(r, bits)
    s = scale(r, bits)
    q = (torch.round(r / s) + z).int()
    return q

def dequant(q, z, s):
    r = s * (q - z)
    return r.float()

inputs = torch.tensor([0.005, 0.0002, 0.01, 0.003])
bias = torch.tensor([0.00001])
weight = torch.tensor([-1.0, 0.01, 1.0, 2.0])  # original weight

S = scale(weight, 2)                  # quantization scale
Z = zero_point(weight, 2)             # quantization zero-point
q_weight = quant(weight, 2)           # quantized weight
dq_weight = dequant(q_weight, Z, S)   # dequantized weight

result = inputs * weight + bias
q_result = inputs * dq_weight + bias

print("f32 weight:", weight)
print("quantized weight:", q_weight)
print("dequantized weight:", dq_weight)
print("f32 result:", result)
print("simulated quantization result:", q_result)
```

The printed output is:

```
f32 weight: tensor([-1.0000, 0.0100, 1.0000, 2.0000])
quantized weight: tensor([-2, -1, 0, 1], dtype=torch.int32)
dequantized weight: tensor([-1., 0., 1., 2.])
f32 result: tensor([-4.9900e-03, 1.2000e-05, 1.0010e-02, 6.0100e-03])
simulated quantization result: tensor([-4.9900e-03, 1.0000e-05, 1.0010e-02, 6.0100e-03])
```

## 4 MixQuant _MixQuant_ is a quantization scheme that relies on mixed precision to find the bit-widths of individual layer weights that minimize roundoff error and therefore, minimize model accuracy degradation due to quantization. Specifically, _MixQuant_ is a search algorithm that finds optimal bit-widths that minimize model accuracy degradation caused by quantization. Prior works have shown that biases and activations are more sensitive to quantization than weights, and are therefore typically kept in higher precision. In this paper we argue that some weights are more sensitive to quantization than others, which we show in our ablation studies. This warrants a careful bit-width allocation to individual weights and serves as motivation for MixQuant. In essence, _MixQuant_ can be viewed as an additional pre-processing optimization component of the quantization process, which can be combined with any quantization method to optimize its performance. _MixQuant_ is described in Algorithm 1. The optimal weight layer bit-widths search has two primary components: layer-wise QE minimization and a QE multiplier (QEM). The layer-wise QE is calculated as the mean squared error (MSE) between the f32 model weights and the weights that have been dequantized following an int quantization (any quantization method can be used at line 8 in Algorithm 1) to capture the information loss due to roundoff error caused by quantization. 
This error is calculated for each layer for each bit-width from the following list: 8, 7, 6, 5, 4, 3, and 2 (lines 4-11 in Algorithm 1). Following that, _MixQuant_ searches for the optimal bit-width for each layer by comparing the QE of each bit-width from this list with an int8 error, which serves as a baseline (lines 12-13 in Algorithm 1). To push _MixQuant_ to select bit-widths lower than int8, _MixQuant_ leverages the QEM. If the QE at a bit-width b is less than or equal to the int8 QE multiplied by the QEM, b becomes the optimal bit-width for that layer. This can be expressed as an optimization problem: \[\begin{split}&optBit=\arg\min_{optBit}\,quantErrors\\ &\text{subject to}\quad quantErrors\leq 8bit_{q}Error*QEM,\quad optBit\in B\end{split} \tag{1}\] Because the QEM is an input parameter into MixQuant, it allows the user to specify a custom trade-off between quantization bit-width and model accuracy; and therefore, it allows the user to find _their_ optimal layer bit-width.

```
1:  Input: full precision weights W, bit-widths B, QE multiplier QEM
2:  Initialize optimalBitWidths
    /* Iterate over all layers */
3:  for l in layers do
4:    8bit_W = Quantize(W, bitWidth = 8)
      /* Compute int8 quantization error in layer l */
5:    8bit_qError = W - Dequantize(8bit_W)
      /* For every bit-width in B compute quantization error in layer l */
6:    Initialize quantErrors
7:    for bitWidth in B do
8:      quantizedW = Quantize(W, bitWidth)
9:      qError = W - Dequantize(quantizedW)
10:     Append qError to quantErrors
11:   end for
      /* Select optimal bit-width at layer l */
12:   optBit = argmin quantErrors s.t. quantErrors <= 8bit_qError * QEM, optBit in B
13:   Append optBit to optimalBitWidths
14: end for
15: return optimalBitWidths
```

**Algorithm 1** MixQuant #### 3.2.2 Weights Mixed Precision Quantization We focus on weights quantization for three reasons. First, weights account for the majority of parameters in a DNN and therefore have the greatest impact on model size and inference time. Second, model accuracy is more sensitive to quantized activations than weights [2]. Third, we guided our algorithm design with the state-of-the-art results in [3], who introduced BRECQ, which performs weight-only quantization. **Approximating Roundoff Error.** We use the quantization error (QE), measured as the MSE between f32 and dequantized weights, to approximate the impact of quantization on model accuracy for three reasons. First, prior works have leveraged QE as a proxy for quantized model accuracy: [17] used quantization MSE to approximate the optimal clipping value (ACIQ) and optimal bit-width for each channel. Second, we provide empirical evidence that there is a negative relationship between model accuracy and QE (see Figure 1). Third, computing layer-wise QE instead of determining the model accuracy with respect to each layer and each possible layer bit-width has the advantage of linear time complexity. An exhaustive combinatorial search runs in exponential time [25]. #### 3.2.3 Time Complexity Analysis We analyze the algorithm's time complexity by considering its two logical components: the error calculations and the bit-width search based on them. Let L be the total number of layers, B the total number of bit-widths, and M the total number of QEMs. We calculate the QE of each layer for each bit-width. Thus, the time complexity of _MixQuant's_ error calculations (lines 4-11 in Algorithm 1) is \(\mathcal{O}(L*B)\). 
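A minimal Python sketch of this search, reusing the quant/dequant helpers from Listing 1 (following the text, we read the constraint as picking the smallest bit-width whose error stays within the int8 budget, assuming a QEM of at least 1; function and variable names are our own):

```
import torch

def mse(a, b):
    return torch.mean((a - b) ** 2).item()

def quant_error(w, bits):
    s, z = scale(w, bits), zero_point(w, bits)
    return mse(w, dequant(quant(w, bits), z, s))

def mixquant_search(layer_weights, qem, bit_widths=(8, 7, 6, 5, 4, 3, 2)):
    optimal_bit_widths = []
    for w in layer_weights:                       # one f32 tensor per layer
        budget = qem * quant_error(w, 8)          # int8 QE times the QEM
        feasible = [b for b in bit_widths if quant_error(w, b) <= budget]
        optimal_bit_widths.append(min(feasible))  # lowest feasible bit-width
    return optimal_bit_widths
```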
The bit-width search (lines 12-13 in Algorithm 1) compares the QE of each bit-width to the baseline int8 QE for each layer and can be performed for M number of QEMs, which takes \(\mathcal{O}(L*(B*M))\). Therefore, the overall time complexity is \(\mathcal{O}(L*B)+\mathcal{O}(L(B*M))=\mathcal{O}(L(B+B*M))\). This is linear with respect to the number of layers. If we used model loss instead of layer QE to search for optimal bits, we would need to consider all the models generated via the combinations of B bit-widths over L layers. The time complexity would be \(\mathcal{O}(B^{L})\), which is exponential. ## 5 Results We implement _MixQuant_ using Python and combine it with two types of quantization techniques: (1) BRECQ [3], a state-of-the-art quantization method, and (2) vanilla asymmetric quantization [12], and evaluate it on the validation set of the ImageNet ILSVRC2012 dataset. Our results demonstrate that _MixQuant_ can optimize the performance of existing quantization techniques. **MixQuant with BRECQ.** BRECQ is a state-of-the-art quantization method that has been shown to outperform OMSE [4], AdaRound [5], AdaQuant [6], and Bit-Split [7], and in Table 1 we demonstrate that when _MixQuant_ is combined with BRECQ, we achieve better quantized accuracy than BRECQ alone. Additionally, Table 1 also compares our results with [26], a state-of-the-art mixed precision quantization technique, and shows that the accuracy degradation is significantly greater in [26]. \begin{table} \begin{tabular}{c|c c|c c|c c} & **Bits W/A** & **MixQuant + BRECQ** & Bits W/A & BRECQ & Bits W/A & Liu et al. (2021) \\ \hline \multirow{4}{*}{ResNet-18} & **32/32** & **69.76** & 32/32 & 71.08 & 32/32 & 74.24 \\ & **5, 6**/32 & **70.69** & 4/32 & 70.7 & 4/8 & 61.68 \\ & **4, 5, 6**/32 & **70.69** & 3/32 & 69.81 & 4/8 & 61.68 \\ & **2, 5, 6**/32 & **69.03** & 2/32 & 66.3 & 4/8 & 61.68 \\ \hline \multirow{4}{*}{MobileNetV2} & **32/32** & **71.88** & 32/32 & 72.49 & 32/32 & 71.78 \\ & **4, 5, 6, 7**/32 & **71.92** & 4/32 & 71.66 & 8/8 & 70.7 \\ & **4, 5, 6, 7**/32 & **71.92** & 3/32 & 69.5 & 8/8 & 70.7 \\ & **2, 5, 6**/32 & **59.82** & 3/32 & 59.67 & 8/8 & 70.7 \\ \end{tabular} \end{table} Table 1: Comparison of _MixQuant_ combined with BRECQ, BRECQ alone, and [26] Figure 1: Relationship between quantization error and model accuracy & loss **MixQuant with Asymmetric Quantization.** In addition to BRECQ, we combine _MixQuant_ with asymmetric quantization and compare its quantized model accuracy with f32 and int8 baselines. Table 2 shows the set of bit-widths found via _MixQuant_ for various QEMs and various ResNet architectures, along with model top-1 and top-5 accuracy. A user can flexibly select the quantization solution based on _their_ requirements with the QEM. For higher QEMs the bit-widths are lower and the model accuracy decreases, while for lower QEMs the bit-widths and quantized model accuracy are higher. Therefore, _MixQuant_ allows its user to flexibly select the trade-off between model accuracy and lowering the quantization bit-width. For example, the highlighted lines in Table 2 satisfy the requirement of selecting the minimum quantization bit-widths such that the model top-1 accuracy degradation is \(\leq\) 3%. **Runtime Analysis.** Table 3 reports the runtime in seconds of _MixQuant_ for various ResNet architectures, where _MixQuant_ considers the bit-widths of 8, 7, 6, 5, 4, 3, and 2, and one or ten different QEMs. 
It can be observed that the runtime grows with the number of layers, since a higher number of layers implies a larger search space. For one QEM, the _MixQuant_ search takes between 0.1 and 0.5 seconds. If it is combined with asymmetric per-layer quantization using the optimal bit-widths returned by the search, it takes between 1.0 and 3.2 seconds. If the number of QEMs is increased from one to ten, the _MixQuant_ search takes between 0.9 and 5.5 seconds, which represents a linear increase in runtime. \begin{table} \begin{tabular}{l c c c c c c c} **Architecture** & **Experiment** & **QEM** & **layers\_bit\_widths** & **Top-1 Acc** & **Top-5 Acc** & **Avg Loss** & **QMSE** \\ \hline resnet18 & baseline: f32 & N/A & all layers are float32 & 69.76 & 89.08 & 1.25 & N/A \\ resnet18 & baseline: int8 & N/A & all layers are int8 & 69.63 & 89.07 & 1.25 & N/A \\ \hline resnet18 & & 2 & 6, 7 & 68.20 & 88.30 & 1.31 & 0.23 \\ resnet18 & & 3 & 5, 6, 7 & 63.96 & 85.58 & 1.51 & 0.36 \\ resnet18 & MixQuant & 3.25 & 5, 6 & 64.00 & 85.54 & 1.51 & 0.37 \\ resnet18 & & 3.3 & 4, 5, 6 & 61.29 & 83.81 & 1.64 & 0.37 \\ resnet18 & & 3.5 & 4, 6 & 53.67 & 77.78 & 2.04 & 0.38 \\ \hline resnet34 & baseline: f32 & N/A & all layers are float32 & 73.31 & 91.42 & 1.08 & N/A \\ resnet34 & baseline: int8 & N/A & all layers are int8 & 73.24 & 91.39 & 1.08 & N/A \\ \hline resnet34 & & 2 & 6, 7 & 72.35 & 90.91 & 1.12 & 0.24 \\ resnet34 & MixQuant & 3 & 4, 5, 6, 7 & 61.21 & 82.93 & 1.70 & 0.39 \\ resnet34 & & 3.25 & 4, 6 & 61.36 & 83.05 & 1.68 & 0.40 \\ \hline resnet50 & baseline: f32 & N/A & all layers are float32 & 76.13 & 92.86 & 0.96 & N/A \\ resnet50 & baseline: int8 & N/A & all layers are int8 & 75.99 & 92.81 & 0.97 & N/A \\ \hline resnet50 & & 2 & 6, 7 & 75.18 & 92.52 & 1.00 & 0.28 \\ resnet50 & MixQuant & 3 & 4, 5, 6 & 70.58 & 90.04 & 1.19 & 0.43 \\ resnet50 & & 3.25 & 4, 5, 6 & 50.13 & 74.29 & 2.30 & 0.45 \\ \hline resnet101 & baseline: f32 & N/A & all layers are float32 & 77.37 & 93.55 & 0.91 & N/A \\ resnet101 & baseline: int8 & N/A & all layers are int8 & 77.21 & 93.51 & 0.92 & N/A \\ resnet101 & & 1.3 & 5, 6, 7, 8 & 76.96 & 93.42 & 0.92 & 0.22 \\ resnet101 & & 1.5 & 2, 5, 6, 7, 8 & 56.23 & 81.74 & 1.83 & 0.24 \\ resnet101 & MixQuant & 1.7 & 2, 3, 4, 5, 6, 7, 8 & 58.86 & 10.50 & 1.86 & 0.30 \\ resnet101 & & 1.8 & 2, 3, 5, 6, 7 & 52.32 & 75.61 & 2.25 & 0.33 \\ resnet101 & & 1.9 & 2, 4, 5, 6, 7 & 49.36 & 72.63 & 2.44 & 0.34 \\ \hline resnet152 & baseline: f32 & N/A & all layers are float32 & 78.31 & 94.05 & 0.88 & N/A \\ resnet152 & baseline: int8 & N/A & all layers are int8 & 78.31 & 94.02 & 0.88 & N/A \\ resnet152 & & 1.1 & 7, 8 & 78.20 & 94.01 & 0.88 & 0.20 \\ resnet152 & & 1.3 & 6, 7, 8 & 78.15 & 94.01 & 0.89 & 0.20 \\ resnet152 & MixQuant & 1.5 & 5, 6, 7, 8 & 77.58 & 93.76 & 0.91 & 0.23 \\ resnet152 & & 1.7 & 3, 5, 6, 7, 8 & 70.68 & 90.11 & 1.22 & 0.28 \\ resnet152 & & 1.8 & 2, 5, 6, 7 & 71.48 & 90.16 & 1.19 & 0.31 \\ resnet152 & & 1.9 & 2, 4, 5, 6, 7 & 62.99 & 85.01 & 1.66 & 0.32 \\ \hline resnext50\_32x4d & baseline: f32 & N/A & all layers are float32 & 77.62 & 93.70 & 0.94 & N/A \\ resnext50\_32x4d & baseline: int8 & N/A & all layers are int8 & 77.40 & 93.63 & 0.95 & N/A \\ resnext50\_32x4d & & 1.3 & 7, 8 & 77.43 & 93.52 & 0.95 & 0.19 \\ resnext50\_32x4d & & 1.5 & 6, 7, 8 & 77.21 & 93.51 & 0.95 & 0.20 \\ resnext50\_32x4d & MixQuant & 1.7 & 5, 6, 7, 8 & 76.93 & 93.29 & 0.98 & 0.27 \\ resnext50\_32x4d & & 1.8 & 5, 6, 7 & 75.43 & 92.60 & 1.05 & 0.30 \\ resnext50\_32x4d & & 2.4 & 5, 6, 7 & 72.60 & 90.79 & 1.18 & 0.30 \\ \hline resnext101\_32x8d & baseline: f32 & N/A & all layers are float32 & 79.31 & 94.53 & 0.93 & N/A \\ resnext101\_32x8d & baseline: int8 & N/A & all layers are int8 & 79.11 & 94.51 & 0.93 & N/A \\ resnext101\_32x8d & & 1.1 & 7, 8 & 79.12 & 94.51 & 0.93 & 0.31 \\ resnext101\_32x8d & & 1.3 & 4, 6, 7, 8 & 76.61 & 93.26 & 1.04 & 0.33 \\ resnext101\_32x8d & MixQuant & 1.5 & 2, 4, 5, 6, 7, 8 & 59.91 & 81.46 & 2.05 & 0.39 \\ resnext101\_32x8d & & 1.7 & 2, 4, 5, 6, 7, 8 & 37.65 & 50.52 & 3.84 & 0.46 \\ resnext101\_32x8d & & 1.8 & 2, 3, 4, 5, 6, 7 & 26.14 & 45.57 & 5.02 & 0.49 \\ \hline \end{tabular} \end{table} Table 2: _MixQuant_ with vanilla asymmetric quantization ## 6 Quantization Sensitivity of Weights Ablation Studies To demonstrate that quantizing DNN weights warrants a search for optimal bit-widths as opposed to uniform precision quantization, we perform two ablation studies to show that different weight layers have different sensitivity to quantization based on their type and position. **Weights Quantization Sensitivity by Layer Type.** First, we investigate if different layer types have different sensitivity to quantization. We consider four layer types in the ResNet architecture: (1) the first conv layer, (2) conv layers with a 3x3 kernel, (3) conv layers with a 1x1 kernel, and (4) the final fully connected layer. For each type of layer, we perform asymmetric quantization and vary its bit-width while keeping the bit-width of all other layer types constant at int8. We calculate the model accuracy, loss, and quantization error for the following quantization bit-widths: 8, 7, 6, 5, 4, 3, and 2. In Figure 2, we show the impact of varying the bit-width of one layer type at a time on the model top-1 accuracy. Lowering the quantization bit-width of conv layers with a 3x3 kernel has the most adverse impact on top-1 accuracy in shallower ResNet architectures, while in deeper ones it is the conv layers with a 1x1 kernel followed by conv layers with a 3x3 kernel that impact model accuracy the most. The first conv layer and conv layers with a 1x1 kernel have approximately the same sensitivity to varying bit-width in the shallower architectures. Finally, the quantization bit-width of the final fully connected layer has the smallest impact on model accuracy for all ResNet architectures. In general, starting at 5 bits the model accuracy begins to degrade; however, the deeper architectures are less sensitive to decreasing bit-width. While the reason that the conv layers with a 3x3 kernel and 1x1 kernel are the most sensitive is the fact that those layer types account for the highest number of layers in ResNet, we can still conclude that different layer types have different sensitivity to quantization bit-width measured as the impact on the overall model quality. Therefore, different layer types will benefit from different quantization bit-widths, which motivates _MixQuant_. Similar results can also be found by measuring layer type sensitivity using the model average loss and quantization mean squared error. \begin{table} \end{table} Table 3: Runtime of _MixQuant_ for (a) 1 QEM and (b) 10 QEMs **Weights Quantization Sensitivity by Layer Position.** In addition to the layer type, we investigate if the position of a layer has an impact on quantization sensitivity of weights. 
**Weights Quantization Sensitivity by Layer Position.** In addition to the layer type, we investigate whether the position of a layer has an impact on the quantization sensitivity of its weights. We measure the _relative quantization error_ (RQE) of individual layers for the following bit-widths: 8, 7, 6, 5, 4, 3, 2, and define the RQE as \(RQE=\mathrm{avg}\left((\vec{w}_{f32}-\vec{w}_{\mathrm{dequantized}})/\vec{w}_{f32}\right)\), where \(\vec{w}\) is the weights vector and the \(avg\) operation returns a scalar that represents the mean of all elements in a vector. Table 4 identifies the most sensitive layers across various bit-widths and architectures, where layers are indexed from 0 through n, and n equals the total number of layers minus one. For example, for int8, it is the 1st layer in resnet18 that has the highest relative QE compared to all other resnet18 layers, while for resnet50 it is the 46th layer. We can see that the quantization bit-width has a significant impact on the position of the most sensitive layer, with the exception of ResNet50. While ResNet50's most sensitive layer is located towards the end of the network for all bit-widths, the other architectures' most sensitive layer position varies based on the bit-width. For higher bit-widths 8, 7, and 6 it is located at the beginning, while for lower bit-widths 2, 3, and 4 it is at the end. The most sensitive layers of ResNet34 and ResNeXt101_32x8d at bit-widths 4, 5, and 6 are in the middle of the network. Based on these experiments, we can conclude that different layer positions have different sensitivity to varying bit-width.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c} **Architecture** & **int8** & **int7** & **int6** & **int5** & **int4** & **int3** & **int2** \\ \hline resnet18 & 1 & 1 & 1 & 17 & 17 & 17 & 16 \\ resnet34 & 1 & 1 & 20 & 20 & 33 & 35 \\ resnet50 & 46 & 46 & 46 & 46 & 47 & 44 \\ resnet101 & 6 & 6 & 6 & 97 & 97 & 99 \\ resnet152 & 1 & 45 & 45 & 148 & 148 & 152 \\ resnext50\_32x4d & 1 & 1 & 1 & 52 & 45 & 45 & 45 \\ resnext101\_32x8d & 1 & 1 & 49 & 49 & 49 & 96 & 96 \\ \end{tabular} \end{table} Table 4: The most sensitive layer positions in a DNN measured as a relative quantization error with respect to varying quantization bit-width

Figure 2: Sensitivity of different layer types to quantization

## 7 Conclusion

In this paper we propose _MixQuant_, a search algorithm that finds the optimal quantization bit-width for each layer weight and can be combined with any quantization method as a form of pre-processing optimization. We show that combining _MixQuant_ with BRECQ [3], a state-of-the-art quantization method, yields better quantized model accuracy than BRECQ alone. Additionally, we combine _MixQuant_ with asymmetric quantization [12] to show that _MixQuant_ has the potential to optimize the performance of any quantization technique. Our code is open-sourced and available at: [https://anonymous.4open.science/r/gantizedImagenet-43C5](https://anonymous.4open.science/r/gantizedImagenet-43C5).
2309.08261
Linear and nonlinear eccentric mode evolution in unstratified MHD discs
In this paper we develop a framework for studying unstratified, magnetised eccentric discs and compute uniformly precessing eccentric modes in a cylindrical annulus which provide convenient initial conditions for numerical simulations. The presence of a magnetic field in an eccentric disc can be described by an effective gas with a modified equation of state. At magnetic field strengths relevant to the magneto-rotational instability the magnetic field has negligible influence on the evolution of the eccentric disc; however, the eccentric disc can significantly enhance the magnetic field strength over that in a circular disc. We verify the suitability of these eccentric disc solutions by carrying out 2D simulations in RAMSES. Our simulated modes (in 2D) follow a similar evolution to the purely hydrodynamical modes, matching theoretical expectations, provided they are adequately resolved. Such solutions will provide equilibrium states for studies of the eccentric magneto-rotational instability and magnetised parametric instability in unstratified discs and are useful for exploring the response of disc turbulence on top of a fluid flow varying on the orbital timescale.
Elliot M. Lynch, Janosz W. Dewberry
2023-09-15T09:14:56Z
http://arxiv.org/abs/2309.08261v1
# Linear and nonlinear eccentric mode evolution in unstratified MHD discs ###### Abstract In this paper we develop a framework for studying unstratified, magnetised eccentric discs and compute uniformly precessing eccentric modes in a cylindrical annulus which provide convenient initial conditions for numerical simulations. The presence of a magnetic field in an eccentric disc can be described by an effective gas with a modified equation of state. At magnetic field strengths relevant to the magneto-rotational instability the magnetic field has negligible influence on the evolution of the eccentric disc; however, the eccentric disc can significantly enhance the magnetic field strength over that in a circular disc. We verify the suitability of these eccentric disc solutions by carrying out 2D simulations in RAMSES. Our simulated modes (in 2D) follow a similar evolution to the purely hydrodynamical modes, matching theoretical expectations, provided they are adequately resolved. Such solutions will provide equilibrium states for studies of the eccentric magneto-rotational instability and magnetised parametric instability in unstratified discs and are useful for exploring the response of disc turbulence on top of a fluid flow varying on the orbital timescale. keywords: accretion, accretion discs - MHD - magnetic fields - celestial mechanics ## 1 Introduction Eccentric gaseous discs, where the gas orbits on Keplerian ellipses, are found in a variety of astrophysical contexts. To date there have been many theoretical and numerical studies considering unmagnetised eccentric discs (Ogilvie, 2001; Ogilvie & Barker, 2014; Barker & Ogilvie, 2016; Wienkers & Ogilvie, 2018; Ogilvie & Lynch, 2019; Pierens et al., 2020; Dewberry et al., 2020). Recently, several studies have considered the behaviour of magnetic fields in eccentric discs. The effect of magnetic stresses on eccentric discs was considered by Ogilvie (2001), who developed a turbulent stress model based on the ideal induction equation in an orbital coordinate system. Ogilvie & Barker (2014) include a magnetohydrodynamic (MHD) form of their eccentric shearing box model. This was used to study the linear phase of the magneto-rotational instability (MRI) by Chan et al. (2018), while Lynch & Ogilvie (2021) used the formalism to study the effect of a coherent magnetic field on the disc vertical structure. Global simulations of the MRI in eccentric discs were performed by Dewberry et al. (2020), who found that sufficiently nonlinear eccentric waves can shut off the MRI. Oyang et al. (2021) compared the excitation of eccentricity in MRI turbulent discs to viscous, hydrodynamical discs and found that the latter were excited to larger eccentricities. Finally, Chan et al. (2022) performed a global simulation of the MRI in an elliptical annulus of large (\(e=0.5\)) constant eccentricity, motivated by the highly eccentric discs found in tidal disruption events. One challenge for (hydro or MHD) simulations of eccentric discs is the strong differential precession due to pressure forces which arises for arbitrary eccentricity profiles. Notably this occurs for a uniformly eccentric ring which, naively, might be considered the simplest eccentricity profile to simulate. This strong differential precession was a problem encountered by Chan et al. (2022), who utilised an elliptical coordinate system to model a disc of uniform eccentricity, which became significantly misaligned from the simulation grid after only 15 outer disc orbits.
Strong differential precession quickly generates large pressure gradients as a result of orbital compression, which can be very difficult to resolve numerically, leading to artificial damping of the disc eccentricity. The strength of this differential precession will depend on the magnetic field strength and configuration. The rapid evolution of the disc orbits can potentially lead to transient phenomena that are primarily a consequence of the choice of initial conditions. This makes it difficult to disentangle the effect of the MRI and parametric instability from the evolution of the non-steady initial conditions. It would thus be beneficial to study how the MRI develops on top of a steady, or slowly evolving, eccentric background. A solution to this problem can be found in the existence of eccentric modes. These are untwisted eccentric discs with a time-independent eccentricity profile which undergo uniform (i.e. rigid body) precession as a result of pressure gradients and other non-Keplerian forces. Eccentric modes are thus a particularly suitable setting for numerical simulations. They are also well motivated physically, as they often provide a good approximation to the relaxed state of many eccentric discs when excitation and damping processes are considered (Kley et al., 2008; Miranda et al., 2017; Teyssandier & Ogilvie, 2016, 2017; Ragusa et al., 2017). In this paper we extend the Hamiltonian eccentric disc theory of Ogilvie & Lynch (2019) to allow for the inclusion of a large scale, structured, magnetic field in an unstratified disc. We calculate modal (uniformly precessing) solutions for these eccentric MHD discs. Such solutions are not intended as a realistic model of a magnetised eccentric disc, owing to the neglect of important 3D effects (Ogilvie, 2001, 2008; Ogilvie and Barker, 2014; Teyssandier and Ogilvie, 2016; Ogilvie and Lynch, 2019) and the unrealistic global field structure (see Ogilvie, 1997, for the 3D field structure in a circular disc); however, these solutions are intended to provide a convenient setting for numerical simulations of the eccentric MRI. To this end we run 2D MHD simulations in the code RAMSES (Teyssier, 2002; Fromang et al., 2006; Faure et al., 2014) using our calculated eccentric modes as initial conditions to test their suitability for numerical calculations. By using an eccentric mode as our initial condition we aim to avoid the strong differential precession, due to pressure, destroying the eccentric disc, as seen in Chan et al. (2022). This paper is structured as follows. In Section 2 we give an overview of eccentric disc geometry and orbital coordinate systems. In Section 3 we extend the Hamiltonian formalism of Ogilvie and Lynch (2019) to unstratified ideal MHD discs, which we use to derive linear theory in Section 4 and compute nonlinear eccentric modes in Section 5. We compare the eccentric disc theory against 2D MHD simulations in Section 6. Finally, we present our conclusions in Section 8, and a derivation of the magnetic vector potential is given in the appendix to aid with future numerical work. ## 2 Eccentric disc geometry The geometry of an eccentric disc consists of a set of non-intersecting, confocal Keplerian ellipses, where the dominant fluid motion consists of the Keplerian motion. These Keplerian orbits slowly evolve due to the effects of pressure gradients and, in MHD discs, magnetic fields. To describe both the geometry and dynamics of an eccentric disc it is often convenient to make use of an orbital coordinate system.
This is a coordinate system, based on the orbital elements of celestial mechanics, that describes a point in the mid-plane of the disc by an orbit labelling coordinate, specifying the orbit the point lies on, and a coordinate denoting where along that orbit the point lies. Typically, such an orbital coordinate system will define a time-dependent map from some circular reference disc onto the physical eccentric disc, with the dynamics of the eccentric disc being described by the slow evolution of this orbital coordinate system. We formulate this orbital coordinate system in terms of a Lagrangian map between the reference and physical variables \(\mathbf{a}\mapsto\mathbf{x}\), where \(\mathbf{a}\) are the orbital coordinates associated with a fluid element and \(\mathbf{x}\) is the fluid element position vector in Cartesian coordinates. This Lagrangian map can be thought of as mapping some reference circular state into the physical eccentric disc, similar to Ogilvie (2018) and Ogilvie and Lynch (2019). We denote the Jacobian associated with this Lagrangian map by \(J_{ij}=\partial x_{i}/\partial a_{j}\) and introduce the notation \[J_{3}=\det(J_{ij})=\frac{J}{J^{\circ}}\frac{H}{H^{\circ}}, \tag{1}\] where \(J\) is the Jacobian determinant of the 2D transform and \(H\) is the disc scale height, and we adopt the convention that the superscript \(\circ\) denotes a quantity in the reference circular disc. Note that this Jacobian is for an orbital coordinate system using the stretched vertical coordinate \(\tilde{z}\). In Ogilvie (2018) this is approximated by its value at the midplane, which is valid when the disc is sufficiently thin. For the unstratified discs considered here the horizontal and vertical parts of the transform are separable, so this approximation is unnecessary. We can define an orbital coordinate system, following Ogilvie and Lynch (2019), where the orbits are labelled by the semimajor axis, \(a\), and the position around the orbit is labelled by the eccentric anomaly \(E\). The shape of each orbit is controlled by the orbit's eccentricity, \(e\), and longitude of pericentre \(\varpi\). The \((a,E)\) orbital coordinates are related to the cylindrical radius through \[r=a(1-e\cos E), \tag{2}\] and to the azimuthal angle, \(\phi\), through the true anomaly \(f=\phi-\varpi\), which satisfies \[\cos f=\frac{\cos E-e}{1-e\cos E},\quad\sin f=\frac{\sqrt{1-e^{2}}\sin E}{1-e\cos E}. \tag{3}\] We can extend this coordinate system to 3D by taking the disc midplane as a reference plane and labelling points by their height above/below the midplane, \(z\). One can also introduce a stretched vertical coordinate \(\tilde{z}=z/H\), where \(H\) is some characteristic vertical lengthscale such as the disc thickness or scale height. In unstratified models this is typically taken to be the sonic length \(H=c_{s}\Omega^{-1}\), with \(c_{s}\) the sound speed. It will also often be useful to make use of the mean anomaly \(M\), which is related to the eccentric anomaly through \[M=E-e\sin E=n(t-\tau), \tag{4}\] where \(n=(GM_{1}/a^{3})^{1/2}\) is the mean motion, \(M_{1}\) is the mass of the central object and \(\tau\) is the time of pericentre passage. This allows us to define an \((a,M)\) orbital coordinate system where the position around the orbit is now denoted by the mean anomaly. This can be extended to 3D in the same way as the \((a,E)\) coordinates.
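As a concrete illustration of Equations (2)-(4), the following helper (an illustrative sketch, not part of the original paper) inverts Kepler's equation by Newton iteration and maps the \((a,E)\) orbital coordinates to cylindrical \((r,\phi)\):

```python
import numpy as np

def kepler_E(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton iteration."""
    E = np.asarray(M, dtype=float).copy()
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.all(np.abs(dE) < tol):
            break
    return E

def orbital_to_polar(a, e, varpi, E):
    """Map orbital coordinates (a, E) to cylindrical (r, phi), Eqs. (2)-(3)."""
    r = a * (1.0 - e * np.cos(E))
    cosf = (np.cos(E) - e) / (1.0 - e * np.cos(E))
    sinf = np.sqrt(1.0 - e**2) * np.sin(E) / (1.0 - e * np.cos(E))
    phi = varpi + np.arctan2(sinf, cosf)
    return r, phi
```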
The Jacobian determinant of the \((a,M)\) orbital coordinate system can be expressed as \(J=J^{\circ}j\), where \(J^{\circ}=a\) and we have introduced the dimensionless Jacobian determinant from Ogilvie and Lynch (2019), \[\begin{split} j=(J/J^{\circ})&=\frac{1-e(e+ae_{a})}{\sqrt{1-e^{2}}}-\frac{ae_{a}\cos E}{\sqrt{1-e^{2}}}-ae\varpi_{a}\sin E\\ &=\frac{1-e(e+ae_{a})}{\sqrt{1-e^{2}}}\left[1-q\cos(E-E_{0})\right]\end{split} \tag{5}\] which is related to the elliptical geometry of the disc, and we have introduced the notation that a subscript \(a\) denotes a partial derivative with respect to the semimajor axis. As in Ogilvie and Lynch (2019) we have introduced the orbital intersection parameter, \(q\), and \(E_{0}\), the eccentric anomaly at which the maximum orbital compression occurs. These are related to the orbital elements and their derivatives through \[q\cos E_{0}=\frac{ae_{a}}{1-e(e+ae_{a})},\quad q\sin E_{0}=\frac{\sqrt{1-e^{2}}ae\varpi_{a}}{1-e(e+ae_{a})}. \tag{6}\] ## 3 Derivation of the eccentric disc Hamiltonian To derive the equation governing the evolution of the eccentric orbits we shall start from the Lagrangian formulation of ideal MHD. After performing a vertical integration we exploit a scale separation which occurs in "thin" discs, where the Lagrangian can be separated into an \(O(1)\) contribution from the Keplerian terms and \(O(\epsilon^{2})\) contributions from the internal and magnetic energies. In an unstratified disc \(\epsilon\) should be thought of as a characteristic measure of the reciprocal Mach number (or reciprocal Alfven number for strongly magnetised discs), rather than the aspect ratio used in thin disc theory. The Lagrangian for ideal MHD is (e.g. Ogilvie, 2016) \[L=\iiint\rho\left(\frac{1}{2}|\dot{\mathbf{x}}|^{2}-V(\mathbf{x})-\varepsilon(\rho,s)-\frac{|\mathbf{B}|^{2}}{2\mu_{0}\rho}\right)\mathrm{d}^{3}x, \tag{7}\] where \(\varepsilon\) is the specific internal energy. Vertically integrating, averaging over the orbital motion and performing a Legendre transform, following the same sequence of manipulations as in Ogilvie and Lynch (2019), yields the Hamiltonian \[\begin{split}\mathcal{H}&=\int m_{a}na^{2}\sqrt{1-e^{2}}\,\dot{\varpi}\,da-L\\ &=\int m_{a}\Bigg{[}\langle V(\bar{\mathbf{x}})\rangle+\langle\bar{\varepsilon}(\bar{\mathbf{a}},J)\rangle\\ &+a^{2}\frac{B_{0}^{2}(\bar{\mathbf{a}})H^{\circ}}{2\mu_{0}\Sigma^{\circ}}\left\langle\frac{1+e\cos E}{1-e\cos E}j^{-1}\right\rangle+\frac{B_{z0}^{2}(\bar{\mathbf{a}})H^{\circ}}{2\mu_{0}\Sigma^{\circ}}\langle j^{-1}\rangle\Bigg{]}\,da. \end{split} \tag{19}\] Henceforth we shall only consider Keplerian potentials so that \(V(\bar{\mathbf{x}})=0\).
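The geometric quantities above are straightforward to evaluate numerically. The sketch below (illustrative only; the argument names `aea` \(=ae_{a}\) and `aevpa` \(=ae\varpi_{a}\) are our own shorthand) computes \(q\), \(E_{0}\) and the dimensionless Jacobian \(j\) from Equations (5)-(6):

```python
import numpy as np

def intersection_params(e, aea, aevpa):
    """Orbital intersection parameter q and phase E0 from Eq. (6).
    aea = a*de/da, aevpa = a*e*dvarpi/da."""
    denom = 1.0 - e * (e + aea)
    qc = aea / denom
    qs = np.sqrt(1.0 - e**2) * aevpa / denom
    return np.hypot(qc, qs), np.arctan2(qs, qc)

def jacobian_j(e, aea, aevpa, E):
    """Dimensionless Jacobian determinant j(E) from Eq. (5)."""
    q, E0 = intersection_params(e, aea, aevpa)
    return (1.0 - e * (e + aea)) / np.sqrt(1.0 - e**2) * (1.0 - q * np.cos(E - E0))
```

The orbital-intersection condition \(|q|<1\) used later in the paper corresponds to `jacobian_j` remaining positive for all \(E\).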
For a perfect gas we can write (Ogilvie and Lynch, 2019) \[m_{a}\langle\bar{\varepsilon}(\bar{\mathbf{a}},J)\rangle=H_{a}^{o}F^{(\gamma)}, \tag{20}\] where we have introduced the geometric part of the Hamiltonian, \[F^{(p)}=\frac{1}{p-1}\langle j^{-(p-1)}\rangle, \tag{21}\] along with the circular Hamiltonian density, \[H_{a}^{o}=2\pi aP_{g}^{o}, \tag{22}\] where \(P_{g}^{o}\) is the vertically integrated gas pressure in the reference disc. Introducing the Alfven velocities in the reference disc, \(v_{a0}^{i}=B_{0}^{i}/\sqrt{\mu_{0}\rho^{o}}\), we parameterise the magnetic field strength in terms of dimensionless toroidal (\(V_{t}\)) and vertical (\(V_{z}\)) Alfven velocities, where \(V_{t}=av_{a0}^{E}/c_{s}\) and \(V_{z}=v_{a0}^{z}/c_{s}\). From Ogilvie and Lynch (2019) we have the following expression for \(F^{(2)}\): \[F^{(2)}\left(e,q,E_{0}\right)=\langle j^{-1}\rangle=\frac{\sqrt{1-e^{2}}}{1-e\left(e+ae_{a}\right)}\frac{q-e(1-\sqrt{1-q^{2}})\cos E_{0}}{q\sqrt{1-q^{2}}}. \tag{23}\] Similarly, we can obtain an expression for \(\left\langle\frac{1+e\cos E}{1-e\cos E}j^{-1}\right\rangle\): \[\begin{split}\left\langle\frac{1+e\cos E}{1-e\cos E}j^{-1}\right\rangle&=\frac{\sqrt{1-e^{2}}}{1-e\left(e+ae_{a}\right)}\frac{q+e(1-\sqrt{1-q^{2}})\cos E_{0}}{q\sqrt{1-q^{2}}}\\ &=F^{(2)}\left(e,q,E_{0}+\pi\right).\end{split} \tag{24}\] Thus the vertical magnetic field acts like a \(\gamma=2\) perfect gas, while the quasi-toroidal magnetic field acts like a \(\gamma=2\) gas with an anti-phased orbital compression. We can then write the Hamiltonian as \[\mathcal{H}=\int H_{a}^{o}\Bigg{(}F^{(\gamma)}\left(e,q,E_{0}\right)+\frac{\gamma}{2}V_{t}^{2}F^{(2)}\left(e,q,E_{0}+\pi\right)+\frac{\gamma}{2}V_{z}^{2}F^{(2)}\left(e,q,E_{0}\right)\Bigg{)}\,da, \tag{25}\] where the factors of \(\gamma\) in the magnetic terms appear as a result of factoring out \(H_{a}^{o}\). This Hamiltonian can be split into contributions from the gas internal energy and the energies in the quasi-toroidal and vertical magnetic fields, with \(\mathcal{H}=\mathcal{H}_{\text{gas}}+\mathcal{H}_{\text{tor}}+\mathcal{H}_{\text{vert}}\), corresponding to the first, second and third terms in the brackets of Equation 25. As in gas discs, each of these terms is a product of the (geometry-independent) Hamiltonian density in the reference disc and a geometric part which encapsulates the dependence on the orbital geometry. It is convenient to reformulate this Hamiltonian as a Hamiltonian for a single effective gas. We can do this by introducing a new geometric part of the Hamiltonian, \[\begin{split} F_{V_{t},V_{z}}(e,q,E_{0})&=\frac{1}{1+\frac{\gamma}{2}(V_{t}^{2}+V_{z}^{2})}F^{(\gamma)}\left(e,q,E_{0}\right)\\ &+\frac{\frac{\gamma}{2}V_{t}^{2}}{1+\frac{\gamma}{2}(V_{t}^{2}+V_{z}^{2})}F^{(2)}\left(e,q,E_{0}+\pi\right)+\frac{\frac{\gamma}{2}V_{z}^{2}}{1+\frac{\gamma}{2}(V_{t}^{2}+V_{z}^{2})}F^{(2)}\left(e,q,E_{0}\right),\end{split} \tag{26}\] which is a weighted sum of two adiabatic gas \(F^{(p)}\) terms with different ratios of specific heats and a third, non-adiabatic, gas term for the toroidal field. In this case the Hamiltonian can be written as \[\mathcal{H}=\int\tilde{H}_{a}^{o}F_{V_{t},V_{z}}(e,q,E_{0})da, \tag{27}\] where we have introduced \[\tilde{H}_{a}^{o}=2\pi a(P_{g}^{o}+P_{m}^{o})=\left(1+\frac{\gamma}{2}V_{t}^{2}+\frac{\gamma}{2}V_{z}^{2}\right)H_{a}^{o}=\frac{1+\beta^{o}}{\beta^{o}}H_{a}^{o}, \tag{28}\] which is the Hamiltonian density in the reference circular disc, and \(\beta^{o}\), the plasma-\(\beta\) in the reference circular disc.
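Under the assumption that the angle brackets denote averages over the mean anomaly, the geometric parts of the Hamiltonian can also be evaluated by direct quadrature, which is convenient for checking the closed forms above. The following self-contained sketch (illustrative, with \(\gamma>1\) assumed for the gas term; for \(\gamma=1\) the gas term becomes \(-\langle\ln j\rangle\)) implements the effective-gas \(F_{V_{t},V_{z}}\) of Equation (26):

```python
import numpy as np

def F_eff(e, aea, aevpa, gamma, Vt, Vz, n=4096):
    """Effective-gas geometric Hamiltonian F_{Vt,Vz}(e, q, E0) of Eq. (26),
    evaluated by orbit-averaging over the mean anomaly: <X> = (1/2pi) * closed
    integral of X dM, with dM = (1 - e cos E) dE.
    Arguments: aea = a*de/da, aevpa = a*e*dvarpi/da; requires gamma > 1."""
    E = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    denom = 1.0 - e * (e + aea)
    q = np.hypot(aea, np.sqrt(1.0 - e**2) * aevpa) / denom      # Eq. (6)
    E0 = np.arctan2(np.sqrt(1.0 - e**2) * aevpa, aea)
    j = denom / np.sqrt(1.0 - e**2) * (1.0 - q * np.cos(E - E0))  # Eq. (5)
    w = 1.0 - e * np.cos(E)                                       # dM/dE weight
    F_gas = np.mean(j ** (1.0 - gamma) * w) / (gamma - 1.0)       # Eq. (21)
    F_vert = np.mean(w / j)                                       # F^(2), Eq. (23)
    F_tor = np.mean((1.0 + e * np.cos(E)) / (1.0 - e * np.cos(E)) * w / j)  # Eq. (24)
    norm = 1.0 + 0.5 * gamma * (Vt**2 + Vz**2)
    return (F_gas + 0.5 * gamma * Vt**2 * F_tor + 0.5 * gamma * Vz**2 * F_vert) / norm
```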
Unlike in the simpler perfect gas case, \(F_{V_{t},V_{z}}\) is no longer only a function of the geometry for a given ratio of specific heats; it also depends on the "partial pressures" of the constituent effective gases on a given orbit. Hamilton's equations in the non-canonical variables \(e(a,t)\), \(\varpi(a,t)\) are (Ogilvie and Lynch, 2019) \[m_{a}\frac{\partial e}{\partial t} =\frac{\sqrt{1-e^{2}}}{na^{2}e}\frac{\delta\mathcal{H}}{\delta\varpi}, \tag{29}\] \[m_{a}\frac{\partial\varpi}{\partial t} =-\frac{\sqrt{1-e^{2}}}{na^{2}e}\frac{\delta\mathcal{H}}{\delta e}. \tag{30}\] The ideal MHD eccentric disc Hamiltonian preserves the symmetries of the unmagnetised eccentric disc Hamiltonian of Ogilvie and Lynch (2019), i.e. time translation and global rotation, with the Hamiltonian only depending on \(\varpi\) through its derivative \(\varpi_{a}\). As such, following Ogilvie and Lynch (2019), one can show that the total Hamiltonian, \(\mathcal{H}\), which in an unstratified disc corresponds to the sum of the magnetic and internal energies, and the angular momentum deficit (AMD), a positive definite measure of the total eccentricity commonly used in celestial mechanics, \[\mathcal{C}=\int m_{a}na^{2}\left(1-\sqrt{1-e^{2}}\right)da, \tag{31}\] are conserved. The simplest solutions to the eccentric disc equations are the eccentric modes, which are solutions where \(e\) is independent of time and the disc is untwisted and uniformly precessing at an angular frequency \(\omega\). These solutions are a particularly convenient setting for numerical simulations as they avoid the strong differential precession seen for generic eccentricity profiles. The eccentric mode equation is obtained from Equation 30, \[m_{a}\omega=-\frac{\sqrt{1-e^{2}}}{na^{2}e}\frac{\delta\mathcal{H}}{\delta e}. \tag{32}\] Equation 29 is automatically satisfied as the disc is untwisted (so \(\frac{\delta\mathcal{H}}{\delta\varpi}=0\)) and \(e\) is independent of time. As shown in Ogilvie and Lynch (2019), Equation 32 can be written as \[\omega\frac{\delta L}{\delta e}=\frac{\delta\mathcal{H}}{\delta e}, \tag{33}\] where \[L=\int m_{a}na^{2}\sqrt{1-e^{2}}\,da \tag{34}\] is the angular momentum. Equation 33 can be interpreted as a variational problem which makes \(\mathcal{H}\) (here corresponding to the total disc internal + magnetic energy) stationary at fixed angular momentum \(L=\mathrm{const}\). For constant \(V_{t},V_{z}\) the eccentric mode equation is explicitly (Ogilvie and Lynch, 2019): \[\begin{split}\frac{\omega m_{a}}{\tilde{H}_{a}^{o}}\frac{na^{2}e}{\sqrt{1-e^{2}}}&=\frac{\partial F_{V_{t},V_{z}}}{\partial e}-ae_{a}\frac{\partial^{2}F_{V_{t},V_{z}}}{\partial e\partial f}\\ &-a(2e_{a}+ae_{aa})\frac{\partial^{2}F_{V_{t},V_{z}}}{\partial f^{2}}-\frac{d\ln(\tilde{H}_{a}^{o})}{d\ln a}\frac{\partial F_{V_{t},V_{z}}}{\partial f},\end{split} \tag{35}\] where we have introduced \(f=e+ae_{a}\).
If \(V_{t}\) or \(V_{z}\) depends on the semimajor axis then the equation for an eccentric mode becomes \[\begin{split}\frac{\omega m_{a}}{\tilde{H}_{a}^{o}}\frac{na^{2}e}{\sqrt{1-e^{2}}}&=\frac{\partial F_{V_{t},V_{z}}}{\partial e}-ae_{a}\frac{\partial^{2}F_{V_{t},V_{z}}}{\partial e\partial f}\\ &-a(2e_{a}+ae_{aa})\frac{\partial^{2}F_{V_{t},V_{z}}}{\partial f^{2}}-\frac{d\ln(H_{a}^{o})}{d\ln a}\frac{\partial F_{V_{t},V_{z}}}{\partial f}\\ &-\frac{\gamma}{2}\frac{\beta^{o}}{1+\beta^{o}}\left(\frac{dV_{t}^{2}}{d\ln a}\left.\frac{\partial F^{(2)}}{\partial f}\right|_{E_{0}=\pi}+\frac{dV_{z}^{2}}{d\ln a}\left.\frac{\partial F^{(2)}}{\partial f}\right|_{E_{0}=0}\right).\end{split} \tag{36}\] Note that we have the perfect gas circular Hamiltonian density (\(H_{a}^{o}\)) in the fourth term on the right-hand side. We also have \(F^{(2)}|_{E_{0}=0}=F^{(2)}(e,q(e,f),0)\) and \(F^{(2)}|_{E_{0}=\pi}=F^{(2)}(e,q(e,f),\pi)\). For untwisted discs \(F_{V_{t},V_{z}}(e,f)\) has an apparent singularity when \(e=f\), where the eccentricity gradients vanish. Following Ogilvie and Lynch (2019), this apparent singularity can be removed using the trigonometric parametrisation \(e=\sin 2\alpha\), \(f=\sin 2\beta\). Expressions for \(F^{(1)}(e,q,0)\) and \(F^{(2)}(e,q,0)\) in terms of this parametrisation are given in Appendix C of Ogilvie and Lynch (2019). For including quasi-toroidal fields we will also need \[F^{(2)}(e,q,\pi)=\cos(2\alpha)\cos(\alpha-\beta)\sec(2\beta)\sec(\alpha+\beta). \tag{37}\] ## 4 Linear Theory When \(e\), \(ae_{a}\) and \(ae\varpi_{a}\) are much less than unity, the geometric part of the Hamiltonian density in a 2D disc can be approximated as (Ogilvie and Lynch, 2019) \[F^{(2\mathrm{D})}\approx\frac{1}{2}e(e+ae_{a})+\frac{1}{4}\gamma\left[(ae_{a})^{2}+(ae\varpi_{a})^{2}\right], \tag{38}\] where we have dropped an unimportant constant term, which has no influence on the dynamics, so that we can use this expression for isothermal discs. In addition to Equation 38 we require the linear limit of \(F^{(2)}(e,q,E_{0}+\pi)\) to include the quasi-toroidal field; this can be obtained in a similar way and is \[F^{(2)}(e,q,E_{0}+\pi)\approx\frac{1}{2}e(e+ae_{a})+\frac{1}{2}\left[(ae_{a})^{2}+(ae\varpi_{a})^{2}\right]+aee_{a}, \tag{39}\] where the first two terms arise from the adiabatic variation of the magnetic pressure, similar to the vertical field, while the last term arises from the magnetic tension and the non-adiabatic variation of the magnetic pressure. Combining these we arrive at an expression for \(F_{V_{t},V_{z}}\), \[\begin{split} F_{V_{t},V_{z}}(e,q,E_{0})&\approx\frac{1}{2}e(e+ae_{a})+\frac{\tilde{\gamma}}{4}\left[(ae_{a})^{2}+(ae\varpi_{a})^{2}\right]\\ &+\frac{\frac{\gamma}{2}V_{t}^{2}}{1+\frac{\gamma}{2}(V_{t}^{2}+V_{z}^{2})}aee_{a},\end{split} \tag{40}\] where we have introduced a modified ratio of specific heats \[\tilde{\gamma}=\gamma\frac{1+V_{t}^{2}+V_{z}^{2}}{1+\frac{\gamma}{2}(V_{t}^{2}+V_{z}^{2})}. \tag{41}\]
To connect with the existing work on linear eccentric disc theory it is useful to rewrite Equation 40 in terms of the complex eccentricity \(\mathcal{E}=e\exp(i\varpi)\); the geometric part of the Hamiltonian in the linear limit is then \[\begin{split} F_{V_{t},V_{z}}(e,q,E_{0})&\approx\frac{1}{2}[|\mathcal{E}|^{2}+\mathrm{Re}(a\mathcal{E}\mathcal{E}_{a}^{*})]+\frac{\tilde{\gamma}}{4}|a\mathcal{E}_{a}|^{2}\\ &+\frac{\frac{\gamma}{2}V_{t}^{2}}{1+\frac{\gamma}{2}(V_{t}^{2}+V_{z}^{2})}\mathrm{Re}(a\mathcal{E}\mathcal{E}_{a}^{*}).\end{split} \tag{42}\] The non-canonical Hamilton's equation for the complex eccentricity is1 Footnote 1: This form of Hamilton's equations for the eccentric disc theory was originally suggested by Prof. Gordon Ogilvie in an earlier draft of Ogilvie and Lynch (2019) as a way of connecting Hamiltonian eccentric disc theory with the Schrödinger equation. \[m_{a}\dot{\mathcal{E}}=-\frac{2i\sqrt{1-e^{2}}}{na^{2}}\frac{\delta\mathcal{H}}{\delta\mathcal{E}^{*}}, \tag{43}\] with the functional derivative with respect to \(\mathcal{E}^{*}\) being related to the functional derivatives with respect to \(e\) and \(\varpi\) through \[\frac{\delta}{\delta\mathcal{E}^{*}}=\frac{1}{2}e^{i\varpi}\left(\frac{\delta}{\delta e}+\frac{i}{e}\frac{\delta}{\delta\varpi}\right). \tag{44}\] Substituting the linear form of the Hamiltonian into Equation 43 and performing the functional derivative, we obtain a linear equation for the evolution of the complex eccentricity in an unstratified ideal MHD disc, \[2\Sigma^{\circ}na^{3}\frac{\partial\mathcal{E}}{\partial t}=\frac{\partial}{\partial a}\left(i\tilde{\gamma}P^{\circ}a^{3}\frac{\partial\mathcal{E}}{\partial a}\right)+ia^{2}\frac{dP^{\circ}}{da}\mathcal{E}+i\partial_{a}\left[a^{2}\frac{(aB_{0}^{\phi})^{2}}{\mu_{0}}\right]\mathcal{E}, \tag{45}\] where we have used \(m_{a}=2\pi a\Sigma^{\circ}\) and \(\tilde{H}_{a}^{\circ}=2\pi aP^{\circ}\). The first two terms on the right-hand side correspond to an adiabatic gas with an effective ratio of specific heats, \(\tilde{\gamma}\), set by the plasma-\(\beta\). The final term arises from the non-adiabatic change to the magnetic pressure from the stretching of the magnetic field lines2. Terms arising due to the magnetic tension cancel with additional non-adiabatic terms, along with the modification to the background rotation profile as a result of the magnetic tension in the unperturbed disc. Footnote 2: This can be shown by deriving the linear, magnetised, eccentric disc equations following a similar procedure to Goodchild & Ogilvie (2006), a task that is significantly more involved than taking the linear limit of the Hamiltonian theory. Specialising to an eccentric mode in a disc with a purely vertical field (\(V_{t}=0\)), the equation simplifies to \[2\Sigma^{\circ}na^{3}\omega\mathcal{E}=\frac{\partial}{\partial a}\left(\tilde{\gamma}P^{\circ}a^{3}\frac{\partial\mathcal{E}}{\partial a}\right)+\frac{dP^{\circ}}{da}a^{2}\mathcal{E}. \tag{46}\] When the gas pressure, \(P_{g}\), and dimensionless Alfven velocity, \(V_{z}\), are constant, one can rewrite the above equation as \[2\Sigma^{\circ}na^{3}\tilde{\omega}\mathcal{E}=\frac{\partial}{\partial a}\left(\gamma P^{\circ}_{g}a^{3}\frac{\partial\mathcal{E}}{\partial a}\right), \tag{47}\] where we have introduced a rescaled precession frequency \(\tilde{\omega}=\omega/(1+V_{z}^{2})\). Therefore, under these restrictions on the pressure and magnetic field, the eccentric modes in the magnetised and unmagnetised discs are identical and differ only in their precession frequency. A minimal numerical illustration of this linear eigenproblem is sketched below.
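A finite-difference discretisation of Equation (47) reduces it to a generalized matrix eigenproblem for \(\tilde{\omega}\). The sketch below is illustrative only, not the authors' solver; units with \(\Sigma^{\circ}=GM_{1}=1\) and rigid boundaries \(\mathcal{E}(a_{\rm min})=\mathcal{E}(a_{\rm max})=0\) are assumptions:

```python
import numpy as np
from scipy.linalg import eig

def linear_modes(a_min=1.0, a_max=5.0, cs=0.05, gamma=1.0, GM=1.0, N=400):
    """Solve Eq. (47): 2 Sigma n a^3 w~ E = d/da(gamma P a^3 dE/da),
    with constant Sigma = 1 and P = cs^2, on a uniform grid with
    E = 0 at both walls. Returns eigenvalues (least negative first)."""
    a = np.linspace(a_min, a_max, N + 2)
    h = a[1] - a[0]
    ai = a[1:-1]
    # Conservative stencil for d/da(c(a) dE/da) with c = gamma cs^2 a^3,
    # evaluated at staggered half-points.
    cp = gamma * cs**2 * (ai + 0.5 * h) ** 3
    cm = gamma * cs**2 * (ai - 0.5 * h) ** 3
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = -(cp[i] + cm[i]) / h**2
        if i > 0:
            A[i, i - 1] = cm[i] / h**2
        if i < N - 1:
            A[i, i + 1] = cp[i] / h**2
    B = np.diag(2.0 * np.sqrt(GM) * ai**1.5)   # 2 Sigma n a^3, n = (GM/a^3)^(1/2)
    w, v = eig(A, B)
    idx = np.argsort(-w.real)
    return w.real[idx], v[:, idx], ai
```

All eigenvalues come out negative, consistent with the retrograde precession of the modes discussed below; the fundamental (zero-node) mode is the least negative.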
## 5 Nonlinear modes in an isothermal disc

Our primary motivation for extending the unstratified eccentric disc theory to include magnetic fields is to provide initial conditions for simulations of the eccentric MRI. As such, we focus on calculating the nonlinear eccentric modes found in a disc contained between two circular, rigid walls, as done to set up the hydrodynamical simulations of Barker & Ogilvie (2016), rather than the more realistic free boundaries considered in Ogilvie & Lynch (2019). As a model of a realistic MHD disc, however, the unstratified model derived in the previous section has major limitations; namely, it fails to account for the dynamical vertical structure of the disc, which is known to be important to correctly describe the dynamics of eccentric discs (Ogilvie 2001, 2008; Ogilvie & Barker 2014; Ogilvie & Lynch 2019) and can significantly increase the field strength of the quasi-toroidal field, particularly for more nonlinear eccentric discs (Lynch & Ogilvie, 2021). There is also the issue of how the disc interacts with the external magnetic fields. We thus consider a simple MHD generalisation of the mode computed in Barker & Ogilvie (2016). This consists of a globally isothermal disc with a constant reference surface density, \(\Sigma^{\circ}\), with the Hamiltonian density in the reference circular disc being \(H_{a}^{o}=2\pi ac_{s}^{2}\Sigma^{\circ}\), where \(c_{s}\) is a constant sound speed. For the modes that will be simulated in Section 6 we impose a purely vertical field with \[V_{t}=0,\quad V_{z}=\frac{nl_{\rm MRI}}{2\pi c_{s}\sqrt{16/15}}W(a). \tag{48}\] For comparison, in this section we also compute a quasi-toroidal case with \[V_{t}=\frac{nl_{\rm MRI}}{2\pi c_{s}\sqrt{16/15}}W(a),\quad V_{z}=0. \tag{49}\] In both cases \(l_{\rm MRI}\) is a constant lengthscale, which in the vertical field case corresponds to the lengthscale of the fastest growing MRI mode in the reference circular disc. \(W(a)\) describes the taper on the inner and outer disc boundaries, for which we use \[W(a)=\frac{1}{2}\left[1+\tanh\left(\frac{a-a_{\rm in}}{w_{\rm transition}}\right)\right]\left[1-\tanh\left(\frac{a-a_{\rm out}}{w_{\rm transition}}\right)\right] \tag{50}\] when we wish to include a taper. The disc is contained within two rigid circular walls located at \(a_{\rm min}\) and \(a_{\rm max}\), such that \(a_{\rm min}\leq a_{\rm in}\leq a_{\rm out}\leq a_{\rm max}\). We therefore have boundary conditions \(e(a_{\rm min})=e(a_{\rm max})=0\). The precession frequency of the mode, \(\omega\), is an eigenvalue of the problem. One can solve Equation 36 for the eccentric mode by specifying \(e_{a}\) on the inner boundary and employing a shooting method to obtain \(\omega\) (a schematic of such a shooting procedure is sketched below). To provide initial conditions for our simulations we solve for eccentric modes with \(l_{\rm MRI}=c_{s}/n(a_{\rm min})\), \(a_{\rm min}=1\), \(a_{\rm in}=1.5\), \(a_{\rm out}=4.5\), \(a_{\rm max}=5\) and \(c_{s}=0.05\), with purely vertical fields.
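The following is a schematic of such a shooting procedure, not the authors' solver: the right-hand side of Equation 36 is abstracted into a hypothetical user-supplied callback `mode_rhs` returning \(e_{aa}\), and the precession frequency is refined until the outer boundary condition is met. For `brentq` the bracket `[w_lo, w_hi]` (values here are placeholders) must straddle a sign change of the boundary residual.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def shoot(omega, ea0, mode_rhs, a_min=1.0, a_max=5.0):
    """Integrate e(a) outward from e(a_min)=0 with inner slope ea0 for a
    trial omega. `mode_rhs(a, e, ea, omega)` (hypothetical) must return
    e_aa from the eccentric mode equation (Eq. 36)."""
    def rhs(a, y):
        e, ea = y
        return [ea, mode_rhs(a, e, ea, omega)]
    sol = solve_ivp(rhs, (a_min, a_max), [0.0, ea0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]          # e at the outer wall; zero for an eigenmode

def find_mode(ea0, mode_rhs, w_lo=-0.02, w_hi=-1e-4):
    """Bisect on omega so that e(a_max) = 0 for the given inner slope."""
    return brentq(lambda w: shoot(w, ea0, mode_rhs), w_lo, w_hi)
```

Sweeping the inner slope `ea0` then parametrises the family of modes by their maximum eccentricity.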
These modes are shown in Figure 1. The simulated modes with different choices of parameters (as discussed in Section 6) have functionally indistinguishable eccentricity profiles. The MHD eccentric modes depicted in Figure 1 are nearly indistinguishable from the unmagnetised case. The same is true of eccentric modes computed using a similar strength quasi-toroidal field. This is perhaps not surprising, given the magnetic field strength in these discs is set by the requirement that the circular reference disc is MRI unstable. This means the plasma-\(\beta\) in the reference disc never drops below \(\beta=84\). In the eccentric disc the range of plasma-\(\beta\) attained is greater as a result of the lateral orbital compression which occurs in the presence of eccentricity gradients. Figures 2 and 3 show the minimum and maximum plasma-\(\beta\) on an orbit, in the absence of a taper, for the vertical and quasi-toroidal field models respectively. In the absence of a taper the orbital compression results in regions of high magnetic field strengths in the inner disc, particularly for the mode with \(\max[e]=0.5\). This effect is lessened when the taper is included, as the magnetic field strength drops to zero close to the boundary where the effects of orbital compression are greatest.

Figure 1: Eccentricity profiles of the eccentric modes used in simulations with \(\max(e)=0.2\), \(\max(e)=0.35\) and \(\max(e)=0.5\) respectively. For all modes \(l_{\rm MRI}=c_{s}/n(a_{\rm min})\), \(a_{\rm min}=1\), \(a_{\rm in}=1.5\), \(a_{\rm out}=4.5\), \(a_{\rm max}=5\) and \(w_{\rm transition}=0.05\). The eccentric modes in the unmagnetised and MHD discs with a purely vertical field are almost indistinguishable. The dotted line is the "limiting slope" solution.

Despite attaining plasma-\(\beta\) as low as \(\beta\sim 4\), the \(\max[e]=0.5\) mode without taper differs only slightly from the unmagnetised case (having a slightly lower eccentricity gradient on the inner boundary). This is, in part, a geometric effect, where the shape of a highly nonlinear eccentric mode is dictated by the requirement that \(|q|<1\) to avoid an orbital intersection. As discussed in Barker & Ogilvie (2016) and Ogilvie & Lynch (2019), this results in a limiting slope solution given by \[ae=\begin{cases}a-a_{\min},&a_{\min}<a<\tilde{a}\\ -a+a_{\max},&\tilde{a}<a<a_{\max}\end{cases} \tag{51}\] where \(\tilde{a}=(a_{\min}+a_{\max})/2\) (a small helper evaluating this profile is sketched below). In principle higher order limiting slope solutions might depend on the magnetic field strength, as it is not obvious how the mode selects nodes for the higher order modes. The large orbital compression responsible for the regions of greatly enhanced magnetic fields in the modes calculated above is primarily a consequence of the adoption of rigid circular boundaries. As shown in Ogilvie & Lynch (2019), adoption of more realistic free boundary conditions results in more moderate eccentricity gradients for a given \(\max[e]\). In Appendix A we solve for the MHD eccentric modes with free boundaries and a taper in both the magnetic field and the surface density. The variation of the magnetic field strength around the orbit is much reduced compared with the equivalent rigid boundary eccentric mode due to the smaller eccentricity gradients. This confirms that the strong enhancement of the magnetic fields seen in the modes computed for Figures 2 and 3 is primarily a consequence of the rigid wall boundaries. While the strong enhancement of the magnetic field in the fundamental (zero node) modes considered thus far is primarily a consequence of the choice of boundary conditions, higher order modes (i.e. with multiple nodes) will attain larger eccentricity gradients for a given \(\max[e]\).
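For reference, the limiting-slope profile of Equation (51), which appears as the dotted line in Figure 1, can be evaluated with a trivial helper (illustrative; boundary radii as in the modes above):

```python
import numpy as np

def limiting_slope_e(a, a_min=1.0, a_max=5.0):
    """Limiting-slope eccentricity of Eq. (51): a*e rises and falls at unit
    slope, peaking at the annulus midpoint."""
    a_mid = 0.5 * (a_min + a_max)
    ae = np.where(a < a_mid, a - a_min, a_max - a)
    return ae / a
```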
Figure 4 shows the maximum magnetic field enhancement for eccentric modes with \(\max[e]=0.1\). These modes are discrete due to the boundary conditions. Increasing mode number results in an increased \(\max[q]\), resulting in an increasing magnetic field enhancement due to the greater lateral compression. This magnetic field enhancement approximately follows \(1/(1-q_{\max})\) for small \(\max[e]\). This magnetic field enhancement may have important consequences for the eccentric MRI if the higher field strengths are able to stabilise the MRI.

## 6 Nonlinear simulations

We have run nonlinear MHD simulations to demonstrate the integrity of the eccentric disc solutions calculated in Section 5, specifically the vertical field case including a taper. We limit our focus to purely 2D simulations in this paper in order to isolate the eccentric modes' role as MHD equilibria, which are of course unstable in three dimensions.

Figure 4: Mode number dependence of the magnetic field enhancement for modes with \(\max[e]=0.1\). Dashed line shows the function \(1/(1-q_{\max})\), which is the approximate magnetic field enhancement for a disc with low eccentricity.

Figure 3: Same as Figure 2 but for a quasi-toroidal field. There is a greater variation of the magnetic field around the orbit in the inner disc. Unlike the vertical field case, the stretching of the field lines around an eccentric orbit means the magnetic field always varies around the orbit, even at the eccentricity maxima where the surface density is constant.

Figure 2: Minimum (dashed lines) and maximum (solid lines) plasma beta on orbits for the eccentric modes with a purely vertical field and no taper. The dotted line is the plasma beta in the circular disc. Orbital compression can lead to significant local enhancement of the magnetic fields, while at the eccentricity maxima the plasma-\(\beta\) is constant around the orbit. The simulations do not reach as extreme plasma-\(\beta\)s as a result of the taper.

In a companion paper, we explore the growth and turbulent saturation of the instabilities of these equilibria in fully 3D simulations.

### Setup

We use a uniform grid version of the code RAMSES (Teyssier, 2002; Fromang et al., 2006; Faure et al., 2014)3, which employs a high-order Godunov method to solve the magnetohydrodynamic equations under the cylindrical approximation (i.e., without vertical gravity). Footnote 3: Available at [https://sourcesup.renater.fr/projects/dumses/](https://sourcesup.renater.fr/projects/dumses/) Taken with a purely isothermal equation of state, these are \[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{u})=0, \tag{52}\] \[\frac{\partial(\rho\mathbf{u})}{\partial t}+\nabla\cdot(\rho\mathbf{u}\mathbf{u}-\mathbf{B}\mathbf{B})+\nabla\left(p+\frac{\mathbf{B}\cdot\mathbf{B}}{2}\right)=-\rho\nabla\Phi, \tag{53}\] \[\frac{\partial\mathbf{B}}{\partial t}+\nabla\cdot(\mathbf{u}\mathbf{B}-\mathbf{B}\mathbf{u})=0, \tag{54}\] where \(\rho\) is the gas density, \(P=c_{s}^{2}\rho\) is the pressure, \(c_{s}\) is the sound speed, \(\mathbf{u}\) is the fluid velocity field, \(\Phi=-GM_{1}/R\) is the (Newtonian and cylindrical) gravitational potential of a central mass \(M_{1}\), and \(\mathbf{B}\) is the magnetic field. We initialise purely two-dimensional simulations with the surface densities, radial and azimuthal velocities, and purely vertical magnetic fields (from Equations B1-B4) corresponding to eccentricity profiles like those shown in Fig. 1 (the Keplerian kinematics underlying the velocity initialisation are sketched below). Table 1 lists relevant properties.
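The paper's actual initial conditions come from Equations B1-B4 of its appendix; the standard Keplerian kinematics underlying the velocity fields can nonetheless be illustrated as follows (an illustrative helper, with `f` the true anomaly):

```python
import numpy as np

def keplerian_velocity(a, e, f, GM=1.0):
    """Radial and azimuthal velocity components on a Keplerian ellipse at
    true anomaly f, as used to seed u_R and u_phi on the polar grid."""
    p = a * (1.0 - e**2)       # semi-latus rectum
    h = np.sqrt(GM * p)        # specific angular momentum
    u_R = (GM / h) * e * np.sin(f)
    u_phi = (GM / h) * (1.0 + e * np.cos(f))
    return u_R, u_phi
```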
We do not simulate the quasi-toroidal field case in this paper. This case is complicated by the difficulty of ensuring the solenoidal condition is satisfied when switching from the orbital to the polar grid; this is best done by use of a vector potential (derived in Appendix B2), which is not implemented in the version of RAMSES we are using. We impose quasi-rigid wall boundary conditions at both the inner and outer radial boundaries \(r_{0}=a_{\rm min}\) and \(r_{1}=a_{\rm max}\), fixing \(u_{R}=0\) and setting \(u_{\phi}\) by the Keplerian angular velocity of the circular reference disc. Nonzero eccentricity gradients at the boundaries imply nonzero surface density variations with \(\phi\). We therefore use a zero-gradient boundary condition for the density, setting its value in the ghost cells to the value of the last cell in the active domain. We lastly set \(B_{z}\) to zero in the ghost cells (\(B_{R}\) and \(B_{\phi}\) remain identically zero throughout, and so the magnetic field remains trivially solenoidal in these 2D simulations). To track the evolution of eccentricity in our simulations, we compute the semimajor axis of each grid cell from (e.g., Miranda et al., 2017) \[a(R,\phi)=\left(\frac{2}{R}-\frac{u^{2}}{GM_{1}}\right)^{-1}, \tag{55}\] and the eccentricity vector from \[\mathbf{e}=\left[e\cos\varpi,e\sin\varpi\right]=(GM_{1})^{-1}\left[u^{2}\mathbf{R}-(\mathbf{u}\cdot\mathbf{R})\mathbf{u}\right]-\hat{\mathbf{R}}. \tag{56}\] Binning the eccentricity values in every cell by semimajor axis, we average within each bin to produce one-dimensional eccentricity profiles \(\tilde{\mathbf{e}}=\tilde{\mathbf{e}}(a)\) at each timestep. These we use in turn to compute the integrated angular momentum deficit \(\mathcal{C}\). We additionally consider the time-evolution of the total Hamiltonian \[\mathcal{H}=\int_{a_{\rm min}}^{a_{\rm max}}\int_{0}^{2\pi}(c_{s}^{2}\Sigma\ln\Sigma+P_{m})\;R\;d\phi\;dR, \tag{57}\] which is conserved in the ideal secular theory. A sketch of these diagnostics is given below.
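These diagnostics are simple to reproduce; the following sketch (illustrative, using Cartesian position and velocity components on a 2D grid) evaluates Equations (55)-(56), bins by semimajor axis, and computes the AMD of Equation (31) from per-bin quantities:

```python
import numpy as np

def orbital_elements(X, Y, UX, UY, GM=1.0):
    """Per-cell semimajor axis and eccentricity vector, Eqs. (55)-(56)."""
    R = np.hypot(X, Y)
    u2 = UX**2 + UY**2
    a = 1.0 / (2.0 / R - u2 / GM)
    rdotu = X * UX + Y * UY
    ex = (u2 * X - rdotu * UX) / GM - X / R
    ey = (u2 * Y - rdotu * UY) / GM - Y / R
    return a, ex, ey

def binned_eccentricity(a, ex, ey, bins):
    """One-dimensional eccentricity profile e~(a) by semimajor-axis binning."""
    idx = np.digitize(a.ravel(), bins)
    e = np.hypot(ex, ey).ravel()
    return np.array([e[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(1, len(bins))])

def amd(ma, n, a, e):
    """Angular momentum deficit, Eq. (31): C = sum m_a n a^2 (1 - sqrt(1-e^2))."""
    return np.sum(ma * n * a**2 * (1.0 - np.sqrt(1.0 - e**2)))
```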
### Results

For a given eccentricity profile and resolution, our hydrodynamic and MHD simulations show remarkably similar evolution. Table 1 compares the precession frequencies we observe in simulations against the eigenvalues computed in generating our initial conditions. We measure precession rates in the simulations by fitting lines to binned arguments of pericentre \(\bar{\varpi}\) that have been averaged over the interior of the disc. The measured precession frequencies agree reasonably well with the predicted eigenvalues, except in simulations that we identify as under-resolved (namely m35l, h5, m5, m5h). For the simulations, we also estimate eccentricity decay rates (listed as imaginary parts of the frequencies) by fitting slopes to the natural logarithm of the AMD as a function of time, and assuming that the AMD decays as \(e^{2}\). The spacetime diagrams in Fig. 5 show radial profiles of radial velocity as a function of time for most of the hydrodynamic and MHD simulations listed in Table 1. Sliced at a fixed \(\phi=0\), these spacetime diagrams illustrate the coherent precession of untwisted eccentric distortions for maximum eccentricities of 0.2 and 0.35. The modes initialised with \(\max[e]=0.5\) involve very strong eccentricity gradients (and hence density variations) near the inner boundary, coming closer to the limiting eccentric mode shape for our radial extent \(r_{1}/r_{0}=5\). Their interaction with the inner boundary leads to shock formation that is visible in the bottom three panels of Fig. 5. Although the eccentric distortions in these simulations continue to precess, they clearly take on a different character from the initial conditions. Papaloizou (2005) observed such shocks in simulations initialised with linear eccentric modes prescribed a finite amplitude, and Barker & Ogilvie (2016) excluded them by considering only smaller values of \(\max[e]\). For a given value of the maximum eccentricity in the simulation domain, these spacetime diagrams show very little difference between the hydrodynamic and MHD simulations; the vertical magnetic field simply causes slightly more rapid precession. The spacetime diagrams in Fig. 6 illustrate the corresponding evolution of the vertical field with time. The simulations m35 and m35t have similar values of \(\max[e]\), but two different widths of "envelope" (see Equation 50) for the vertical magnetic field (\(w_{\rm transition}=0.1\) and 0.05, respectively). The different distributions of vertical magnetic flux do little to alter the characteristics or behaviour of the eccentric mode. Fig. 7 provides a quantitative measure of eccentricity decay, plotting the per cent change in integrated AMD versus time. For lower eccentricities (\(\max[e]=0.2\), 0.35) we attribute eccentricity decay both to numerical diffusion, and to weak damping by the initial growth of the Papaloizou-Pringle instability (see Barker & Ogilvie, 2016). The level of decay over the simulation runtimes (of \(400T_{0}\), where \(T_{0}=2\pi/\Omega_{0}\) and \(\Omega_{0}=\sqrt{GM_{1}/a_{\rm in}^{3}}\) are the orbital period and angular velocity at the inner boundary) is consistent with the hydrodynamic results reported by Barker & Ogilvie (2016). The shock formation in the simulations with larger \(\max[e]=0.5\) leads to much stronger eccentricity damping initially (until \(t\Omega_{0}\simeq 400\)), and shallower, "bursty" decay at later times. We attribute this stochastic evolution to periodic interaction between the strongly modified distortion and the inner boundary. Fig. 7 quantitatively demonstrates the similarity of the hydrodynamic and MHD results for a given eccentricity profile and resolution, regardless of vertical flux distribution (compare m35 and m35t). The curves in Fig. 8 show the evolution of the total Hamiltonian (Equation 57), which is clearly not conserved in our simulations. Adiabatic damping, i.e. damping which is slow relative to the precession timescale, should lead to a slow evolution of the eccentricity along the family of ideal eccentric modes, towards modes of lower amplitude. As in the unmagnetised case (Ogilvie and Lynch, 2019), Equation 33 implies that an infinitesimal change in the total Hamiltonian is related to an infinitesimal change in the AMD by \(d\mathcal{H}=-\omega dC\). The modes in our simulations have retrograde precession (\(\omega<0\)), meaning a decrease in AMD should lead to a decreasing \(\mathcal{H}\) when damping is slow enough. Thus slowly damped modes should follow the blue curve in Fig. 9, which shows the \(\mathcal{H}\)-AMD phase space. However, Figs. 8-9 demonstrate secular growth for the simulations with \(\max[e]=0.2,0.35\) (except for the low-resolution simulation m35l, which shows similar decay to the simulations with \(\max[e]=0.5\)). One potential explanation for this growth is that non-adiabatic damping in our simulations shifts the initialised eccentric profiles away from the family of ideal eccentric modes that minimise \(\mathcal{H}\) for a given AMD.
In particular, when damping is strong enough the eccentric disc will develop a twist as a result of the disc transporting AMD to compensate for spatial variations of the damping rate (Ferreira and Ogilvie, 2009). Fig. 9 shows that the resolved simulations evolve from the untwisted "modal" \(\mathcal{H}\)-AMD relation (blue curve) to the "maximally twisted" \(\mathcal{H}\)-AMD relation (orange curve). The latter is obtained by taking a given eccentric mode and twisting it until it reaches an orbital intersection everywhere. This is consistent with the disc gradually twisting, over the course of the simulation, causing a growth in the total Hamiltonian. This is supported by the simulations' orbital elements, computed using Equation 56, and by looking at the residual in the radial velocity when the radial velocity of the untwisted eccentricity profile is subtracted, both of which show the disc becoming increasingly twisted with time. This twisting of the disc occurs over 1000s of orbits and is thus much milder than that seen for non-modal initial profiles (e.g. the const-\(e\) profiles studied by Chan et al., 2022), which become highly twisted over 10s of orbital periods. Fig. 10 plots profiles of binned eccentricity \(\tilde{e}(a)\) at the beginning (solid lines) and end (dashed lines) of our simulations. For \(\max[e]=0.2\) and 0.35, the plot shows the decay of eccentric modes that retain roughly the same profile in eccentricity, except in m35l (which has half the radial and azimuthal resolution). Although the profiles for the simulations with \(\max[e]=0.5\) deviate qualitatively, h5, m5 and m5h remain strongly distorted and relatively untwisted by the end of the simulations. The panels in Fig. 11 show snapshots of radial velocity (top) and vertical magnetic field (bottom) at the end of the simulations m2 (left), m35 (middle), and m5 (right). The differences with increasing resolution illustrated by Figs. 7-10 (compare m35, m35l, m35h, and m5, m5h) indicate that care should be taken in resolving disc distortions with strong eccentricity gradients. We do not claim to have completely resolved the eccentric modes' precession in any of our simulations; m35l, m35, and m35h demonstrate a clear reduction in AMD decay with increasing resolution. However, this decay is slow compared with the dynamical timescales of interest for magnetorotational and parametric instabilities. Further, m35 and m35h exhibit qualitatively similar if not quantitatively identical evolution.

## 7 Discussion

At the magnetic field strengths relevant to the MRI, the magnetic field has negligible influence on the eccentric modes, which are almost indistinguishable from their unmagnetised counterparts. In 2D (i.e. specifically suppressing the MRI and parametric instability) the evolution of the magnetised and unmagnetised eccentric modes in RAMSES is qualitatively the same. There are some differences seen between the magnetised and unmagnetised simulations with \(\max[e]=0.5\); however, these simulations are not adequately resolved. Although the magnetic field has little effect on the eccentric mode, the presence of an eccentric mode can have a strong influence on the magnetic field: with lateral compression by the orbital motion, the presence of eccentricity gradients can enhance the magnetic field strength in regions of the disc. Similarly, the magnetic field strength is reduced in regions of the disc where the orbital velocity diverges.
Despite the strong magnetic field enhancements, the magnetic field configurations we set up are stable in our 2D simulations and their slow evolution is consistent with that expected due to the evolution of the eccentricity profile. In the simulated modes the enhancement of the magnetic field is primarily a result of the imposition of circular rigid wall boundaries. However, this effect is potentially very important in short wavelength/tightly-wound eccentric discs such as those expected in the inner regions of black hole discs, as simulated by Dewberry et al. (2020). One issue that we have encountered is the difficulty of both resolving and converging the eccentric modes in numerical simulations. This is important for studies of the eccentric MRI, as having a high enough resolution to resolve the MRI (e.g. as measured by MRI quality factors) may not be sufficient to ensure the simulation is well resolved. One also needs adequate horizontal resolution to resolve the eccentric mode as well. This is particularly important if one is interested in analysing the effects of the MRI on the eccentric disc, as the strong damping of the eccentricity by the grid may overwhelm the effects of the MHD turbulence.

\begin{table} \begin{tabular}{c c c c c c c} \hline Simulation & \(\max[e]\) & \(B_{z}\)? & \(N_{R}\times N_{\phi}\) & Predicted \(\omega\) & Simulation \(Re[\omega]\) & Simulation \(Im[\omega]\) \\ \hline h2 & 0.20 & No & \(480\times 800\) & -0.004235 & -0.0044 & \(-4.8\times 10^{-5}\) \\ h35 & 0.35 & No & \(480\times 800\) & -0.005012 & -0.0054 & \(-1.0\times 10^{-4}\) \\ h5 & 0.50 & No & \(480\times 800\) & -0.007459 & -0.0045 & \(-5.5\times 10^{-3}\) \\ m2 & 0.20 & Yes & \(480\times 800\) & -0.004244 & -0.0044 & \(-4.8\times 10^{-5}\) \\ m35 & 0.35 & Yes & \(480\times 800\) & -0.005042 & -0.0054 & \(-1.0\times 10^{-4}\) \\ m35t & 0.35 & Yes & \(480\times 800\) & -0.005047 & -0.0055 & \(-9.9\times 10^{-5}\) \\ m35l & 0.35 & Yes & \(240\times 400\) & -0.005042 & -0.0033 & \(-3.8\times 10^{-4}\) \\ m35h & 0.35 & Yes & \(960\times 800\) & -0.005042 & -0.0053 & \(-6.2\times 10^{-5}\) \\ m5 & 0.50 & Yes & \(480\times 800\) & -0.007582 & -0.0045 & \(-5.5\times 10^{-3}\) \\ m5h & 0.50 & Yes & \(960\times 800\) & -0.007582 & -0.0046 & \(-5.6\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 1: Details of the nonlinear simulations considered in this paper. All simulations were run on a uniform cylindrical mesh with \(r_{1}/r_{0}=5\), and \(c_{s}=0.05a_{\min}n(a_{\min})\). The magnetized runs include purely vertical fields as described by (48), with \(l_{\rm MRI}=c_{s}/n(a_{\min})\), \(w_{\rm transition}=0.1a_{\min}\), \(a_{\rm in}=2a_{\min}\), and \(a_{\rm out}=4a_{\min}\) (except for m35t, which uses \(w_{\rm transition}=0.05a_{\min}\), \(a_{\rm in}=1.5a_{\min}\), and \(a_{\rm out}=4.5a_{\min}\)). The final three columns compare precession frequencies predicted by the eccentric mode calculations with frequencies and decay rates measured in the simulations.

Given the relatively strong damping seen in our 2D simulations, assessing the influence of the MRI on the eccentric disc will prove challenging unless the MRI is very efficient at damping (or in principle exciting) eccentricity. In this paper we have limited our focus to the 2.5D cylindrical disc setup. This setup has a number of advantages numerically (it is easier to implement the vertical boundary and to achieve adequate vertical resolution); however, it does not give a good approximation to a physical 3D disc.
As in hydrodynamic eccentric discs, the variation of vertical gravity and pressure around an eccentric orbit leads to a dynamically varying scale height around an orbit. This causes prograde precession of the eccentric disc (Ogilvie, 2001, 2008; Ogilvie and Barker, 2014; Ogilvie and Lynch, 2019). More importantly, the vertical compression induced by the scale height oscillation can greatly enhance the quasi-toroidal magnetic fields in nonlinearly eccentric discs (Lynch and Ogilvie, 2021). Additionally, the periodic solution to the induction equation within the disc needs to match onto the current-free external field. This can be constructed in a circular disc via matched asymptotics (Ogilvie, 1997). However, for non-axisymmetric discs the set of external field solutions (which can be described using cylindrical harmonics) is generically incompatible with the field configuration within an eccentric disc (excepting the purely quasi-toroidal case where no magnetic flux leaves the disc). The internal and external fields could be connected by a force-free transition layer in the upper disc atmosphere. However, such a field configuration is likely unstable even in the absence of the MRI. The full 3D problem is important, however, and deserves further attention. A simpler initial approach might be to simulate a 3D MHD disc while exciting eccentricity at the outer boundary (similar to Dewberry et al., 2020) and observe the magnetic field response.

Figure 5: Spacetime diagrams showing radial profiles of radial velocity at a fixed \(\phi\) vs. time for both hydrodynamic (names starting with \(h\)) and MHD (\(m\)) simulations. The periodic changes in the sign of \(v_{r}\) over \(t\Omega_{0}\sim 1000\) illustrate the retrograde precession of the eccentric distortions.

Figure 6: Same as Fig. 5, but for vertical magnetic field in the MHD simulations.

Figure 7: Time-evolution of angular momentum deficit (Equation 31) for all of the simulations listed in Table 1. The AMD of the magnetised and unmagnetised discs are nearly indistinguishable (_e.g._ see the curves for h2 and m2; h35 and m35; and h5 and m5).

Figure 8: Time-evolution of the total Hamiltonian (Equation 57) for all of the simulations listed in Table 1. As with Figure 7, the evolution of the total Hamiltonian for the magnetised and unmagnetised discs is nearly indistinguishable.

## 8 Conclusion

In this paper we have extended the Hamiltonian eccentric disc theory of Ogilvie and Lynch (2019) to include a magnetic field in an unstratified, cylindrical geometry. We have solved for the uniformly precessing eccentric mode solutions of our model and shown that, for magnetic field strengths relevant to the onset of MRI, the resulting eccentricity profile, and precession rate, is nearly identical to the unmagnetised case. While such eccentric modes are of limited utility in describing realistic 3D eccentric discs due to several important physical effects not being present in the unstratified geometry, they provide a useful setting for the study of the eccentric MRI and magnetised parametric instability to further our understanding of how disc turbulence operates in eccentric discs. More broadly, this will help inform our understanding of how disc turbulence operates in flows that vary on the orbital timescale. To this end we confirm the suitability of our eccentric mode solutions for numerical applications by using them as initial conditions for 2D MHD simulations in RAMSES.
In 2D simulations we obtain long-lived, uniformly precessing eccentric flows that agree closely with the analytical predictions. These flows will provide the background state for 3D, unstratified simulations studying the stability of these eccentric discs to both the MRI and the parametric instability, which will be presented in a future publication.

## Acknowledgements

The authors would like to thank Guillaume Laibe and Enrico Ragusa for many helpful comments on the draft of this manuscript and the anonymous reviewer for comments and suggestions, which improved the clarity of the paper. E. Lynch would like to thank the European Research Council (ERC). This research was supported by the ERC through the CoG project PODCAST No 864965. This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 823823. J. Dewberry gratefully acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference #CITA 490888-16].

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2301.13705
Clever Design, Unexpected Obstacles: Insights on Implementing a Quantum Boltzmann Machine
We have implemented a gate-based quantum version of a restricted Boltzmann machine for approximating the ground state of a Pauli-decomposed qubit Hamiltonian. During the implementation and evaluation, we have encountered a variety of unexpected issues. These range from limitations due to the structure of the algorithm itself to constraints induced by specific quantum software development kits, which did not (yet) support features necessary for an efficient implementation. In this paper we systematically summarize our findings and categorize them according to their relevance for the implementation of similar quantum algorithms. We also discuss the feasibility of executing such implementations on current NISQ devices.
Felix Paul, Michael Falkenthal, Sebastian Feld
2023-01-31T15:29:16Z
http://arxiv.org/abs/2301.13705v1
# Clever Design, Unexpected Obstacles: Insights on Implementing a Quantum Boltzmann Machine

###### Abstract

We have implemented a gate-based quantum version of a restricted Boltzmann machine for approximating the ground state of a Pauli-decomposed qubit Hamiltonian. During the implementation and evaluation, we have encountered a variety of unexpected issues. These range from limitations due to the structure of the algorithm itself to constraints induced by specific quantum software development kits, which did not (yet) support features necessary for an efficient implementation. In this paper we systematically summarize our findings and categorize them according to their relevance for the implementation of similar quantum algorithms. We also discuss the feasibility of executing such implementations on current NISQ devices.

quantum software engineering, NISQ devices, quantum machine learning, limitations and constraints

## I Introduction

The advent of quantum computers accessible to the public has given an immense boost to the development of new quantum algorithms in the last decade. Many algorithms promise advantages over previously known classical ones, e.g., with regard to a speedup [1] or enhanced solution quality [2]. However, to realize these advantages, the conceptual algorithms first have to be applied to actual use cases. Second, they have to be implemented to run on specific quantum computers, which are still NISQ devices with limitations in decoherence time as well as gate and measurement failure rates [3]. In practice, this leads to major difficulties when transferring conceptual algorithms into actual implementations. In order for them to be executable on NISQ devices, the limitations of the specific quantum hardware must be taken into account accordingly. For example, the available number of qubits and the fidelity of their states must be considered such that an implemented algorithm can (i) be transferred to the available qubits at all and (ii) be executed by the quantum computer in an amount of time that still guarantees tolerable error rates and, thus, allows meaningful results to be read out [4]. However, these limitations have a strong influence on whether the theoretically described advantages of an algorithm can be realized at all [5]. Therefore, it is valuable to examine algorithms for their practicability on NISQ computers and to recognize limitations early on. On this basis, it is possible to investigate and elaborate on appropriate mitigations. In this work, we present details and findings regarding the implementation of a quantum version of a restricted Boltzmann machine (QBM) based on the algorithm introduced by Xia and Kais [6]. We provide insights into what kind of obstacles we encountered when translating a theoretically proposed quantum algorithm into an implementation that can be executed on one of the quantum backends currently available. Some of these issues are caused by implementing the algorithm within a given software stack or quantum software development kit (quantum SDK). Others only arise when trying to execute the code on quantum backends, since their properties play a crucial role in obtaining meaningful results. We also discuss aspects regarding the possible solution quality of the chosen model: there are limitations induced by transferring quantum data, stored within the hardware, into classical data, and also some limitations that are induced by the variational model itself and the conditions it is built on.
For some of the mentioned issues we suggest possible approaches for reducing their impact. Those recommendations provide guidance for implementing other algorithms which make use of similar concepts. With this in mind, we classify the encountered problems in terms of how the same or similar ones may arise during the implementation of other algorithms. Since numerous proposed algorithms make use of similar subroutines or concepts (e.g., using parametrized rotational gates for variational models), we believe that by systematically classifying common issues, some of these can be managed better within future implementations. Thus, we aim to give an initial list of indicators which need to be considered in order to assess the practical feasibility of proposed algorithms. Since our findings will be explained in the context of a QBM algorithm, we also summarize its key aspects, which we supplement with exemplary calculations to illustrate relevant procedures in more detail. The remainder of this paper is structured as follows: We introduce the essential concepts of the quantum Boltzmann machine algorithm by Xia and Kais [6] in Sec. II. We identify and discuss findings and restrictions of the QBM algorithm with respect to currently available quantum computers and software development stacks and kits in Sec. III. Finally, we conclude the paper with an outlook on future work in Sec. IV.

## II Algorithm Details

The goal of the paper is to highlight practical problems that arise during the implementation of quantum algorithms, using the QBM as an example. For this purpose, we will present the algorithmic details in the following, which will be referred to in Section III. The goal of the approach described by Xia and Kais [6] is to approximate the ground state of a given Hamiltonian using a hybrid quantum-classical algorithm. The utilized QBM is made up of three layers: a visible layer with \(n\) qubits, a hidden layer with \(m\) qubits, and a classical sign layer realizing relative signs between different states of the visible layer (see Fig. 1). For explicit values of \(n\) and \(m\), we will refer to this model as an \((n,m)\)-QBM. The wave function to be generated by the QBM, which should approximate the ground state, is \[\ket{\psi}=\sum\nolimits_{\{v\}}s(v)\phi(v)\ket{v}\enspace. \tag{1}\] Here, \(v\) denotes a single binary state consisting of \(n\) bits and \(\{v\}\) represents the set of all possible binary states of the visible layer (e.g., \(n=2\) leads to \(\{v\}=\{00,01,10,11\}\)). Furthermore, \(\phi(v)\) is the real-valued amplitude associated with state \(\ket{v}\), while \(s(v)\) is the corresponding value of the sign node. The probability distribution \(p(v)\) of the qubit states of the visible layer is generated by using the QBM, and it determines the amplitudes of the target wave function: \[\begin{split} p(v)=\phi^{2}(v)=\frac{1}{Z}\sum\nolimits_{\{h\}} \exp\left(E(v,h)\right)\enspace,\\ Z=\sum\nolimits_{\{v,h\}}\exp\left(E(v,h)\right)\enspace,\end{split} \tag{2}\] where \(\sum_{\{h\}}\) indicates the summation over all configurations of the hidden layer for a fixed configuration of the visible layer. The "energy" \(E(v,h)\) of a given state of the visible and hidden layer depends on real biases \(\{a_{i}\},\{b_{j}\}\) and weights \(\{w_{ij}\}\), which are all subject to the optimization procedure. The energy is given by \[E(v,h)=\sum\nolimits_{i}a_{i}v_{i}+\sum\nolimits_{j}b_{j}h_{j}+\sum\nolimits_{ij}w_{ij}v_{i}h_{j}\enspace. \tag{3}\]
In (3), \(v_{i},h_{j}\in\{\pm 1\}\) are the \(\sigma_{z}\)-eigenvalues of the computational basis states (\(\sigma_{z}\ket{0}=\ket{0}\), \(\sigma_{z}\ket{1}=-\ket{1}\)) for the \(i\)-th and \(j\)-th qubit in the visible and hidden layer, respectively. The sign node is a smooth function that depends on the state of the qubits from the visible layer as well as on parameters \(\{c_{i}\}\) and \(d\), which are to be optimized, and it is given by \[s(v)=\tanh\left(\sum\nolimits_{i}c_{i}v_{i}+d\right) \tag{4}\] In fact, the algorithm proposed in [6] does not actually generate the probability distribution described in (2), but a modified one with an additional regulator \(k=\mathcal{O}(\sum_{ij}|w_{ij}|)\). This regulator normalizes all parameters \(p\in\{a_{i},b_{j},w_{ij}\}\) according to \(p\to p/k\) in order to increase the probability of successfully generating the distribution initially wanted (see Section II-B). For simplicity, the regulator will be omitted in the upcoming sections, but it needs to be considered when implementing the algorithm. In the following we describe the generation of the probability distribution given in (2) in two steps - namely generating the _linear_ and _quadratic_ terms of (3). After that, the basic optimization procedure and the analytical gradients used in it are introduced.

### _Linear terms_

The linear terms in (3) are generated by performing \(R_{y}\)-rotations on all qubit states from the visible and hidden layer with angle \[\theta_{\ell}=2\arcsin\left(\sqrt{\frac{e^{-p_{\ell}}}{e^{p_{\ell}}+e^{-p_{\ell}}}}\right)\enspace, \tag{5}\] where \(p_{\ell}\in\{a_{i},b_{j}\}\) depends on whether the gate acts on a qubit state from the visible or hidden layer, and \(\ell\) is taken from the corresponding index set. To give an example, (6) shows the action on a single \(\ket{0}\)-state, which generates the correct sign in the exponent of each amplitude: \[R_{y}(\theta_{\ell})\ket{0}=\frac{1}{\sqrt{e^{p_{\ell}}+e^{-p_{\ell}}}}\left(e^{p_{\ell}/2}\ket{0}+e^{-p_{\ell}/2}\ket{1}\right) \tag{6}\] Note that by rotating the qubit states in the described manner, the probability distribution of (2) is generated for \(w_{ij}=0,\forall\ i,j\).

Fig. 1: Exemplary network architecture of a \((2,3)\)-QBM. Vertices \(v_{1}\) and \(v_{2}\) represent qubits from the visible layer, while \(h_{1}\), \(h_{2}\), and \(h_{3}\) represent qubits from the hidden layer. \(s\) corresponds to a sign node realizing relative signs between states of the visible layer. As part of the algorithm, qubits from the visible and hidden layer are entangled with each other via 3-qubit gates.
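As a quick numerical sanity check of the mapping between (5) and (6), the action of \(R_{y}(\theta_{\ell})\) on \(\ket{0}\) can be reproduced with a few lines of numpy (a toy verification, not part of the implementation published in [7]):

```python
import numpy as np

def ry(theta):
    """Matrix of the single-qubit R_y(theta) gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def theta_linear(p):
    """Rotation angle of Eq. (5) for a bias p in {a_i, b_j}."""
    return 2 * np.arcsin(np.sqrt(np.exp(-p) / (np.exp(p) + np.exp(-p))))

p = 0.7
amps = ry(theta_linear(p)) @ np.array([1.0, 0.0])  # R_y acting on |0>
Z = np.exp(p) + np.exp(-p)
expected = np.array([np.exp(p / 2), np.exp(-p / 2)]) / np.sqrt(Z)  # Eq. (6)
assert np.allclose(amps, expected)
```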
### _Quadratic terms_

In order to include the interaction term of the \(i\)-th visible and \(j\)-th hidden qubit in (3), a series of four doubly-controlled \(R_{y}\)-rotation gates has to act on an ancillary qubit. The schema of one such entangling layer is depicted in Fig. 2 and shows four 3-qubit gates which all get controlled by one of the four 2-qubit basis states \(\{\ket{00},\ket{01},\ket{10},\ket{11}\}\). For each entangling layer two angles are necessary, which both depend on the weight \(w_{ij}\) between qubit \(i\) and \(j\) in the following way: \[\theta_{ij}^{\pm}=2\arcsin\left(\sqrt{\frac{e^{\pm w_{ij}}}{e^{|w_{ij}|}}}\right)\, \tag{7}\] where \(\theta_{ij}^{+}\) is used when the \(R_{y}\)-gate is controlled by states with even parity (\(|00\rangle\) and \(|11\rangle\)), while \(\theta_{ij}^{-}\) is accordingly used for control states with odd parity (\(|01\rangle\) and \(|10\rangle\)). This can better be understood when looking at the energy in (3): If visible qubit \(i\) and hidden qubit \(j\) are in the same state (either both \(|0\rangle\) or both \(|1\rangle\)), the product of their \(\sigma_{z}\)-eigenvalues is \(+1\) (since \((+1)^{2}=(-1)^{2}=+1\)), whereas their being in different states results in an eigenvalue product of \(-1\) (since \((-1)(+1)=(+1)(-1)=-1\)). Now, in order to understand the action of the doubly-controlled \(R_{y}\)-gates in more detail, we will look at the action of a single-qubit \(R_{y}\)-gate on an ancillary qubit with the angle defined in (7) (omitting the \(i,j\) indices for simplicity): \[R_{y}(\theta^{\pm})\ket{0}=\frac{1}{e^{|w|/2}}\left(\sqrt{e^{|w|}-e^{\pm w}}\ket{0}+e^{\pm w/2}\ket{1}\right). \tag{8}\] Eq. (8) shows that with probability \(\sim e^{\pm w}\) the ancillary qubit will be in state \(\ket{1}\), thus giving the correct contribution to the energy defined in (3) when measured to be in that particular state. In order to successfully generate the desired distribution, one ancillary qubit for every combination of qubits from the visible and hidden layer has to be prepared according to the doubly-controlled rotation layer depicted in Fig. 2, and it has to be measured to be in state \(\ket{1}\) directly after the action of the layer. This results in an additional \(nm\) qubits being required besides the \(n+m\) qubits making up the visible and hidden layer, respectively. However, the number of required ancillaries could be reduced to \(1\) if it is possible to reliably reinitialize the ancilla back to the computational ground state \(\ket{0}\) after each measurement, resulting in \(n+m+1\) required qubits in total. Furthermore, the modulus in the normalization factor in (8) might seem out of place compared to the partition-function-like normalization from (2). But after measuring the ancillary qubit to be in state \(\ket{1}\), the normalization of the wave function adapts accordingly. After following the steps described above, the probability distribution of the visible qubit states follows the one given in (2), which allows for the sampling of the target wave function defined in (1). In order to optimize the involved parameters, the sampled wave function then has to be used for calculating expectation values. These are necessary for calculating analytic gradients w.r.t. the parameters, which will be described in the following.

### _Optimization & Analytic gradients_

It is common to present a problem Hamiltonian in its Pauli-decomposed form according to \[H=\sum_{k=1}^{N}c_{k}\bigotimes_{i=1}^{n}P_{i}^{(k)}\,\ P_{i}^{(k)}\in\{I,\sigma_{x},\sigma_{y},\sigma_{z}\}. \tag{9}\] The Hamiltonian in (9) consists of \(N\) terms contributing to the sum, each being a so-called Pauli-word or Pauli-string, a tensor product of operators taken from the set of Pauli matrices including the identity, multiplied with a real coefficient \(c_{k}\).
As an example, a Pauli-decomposed Hamiltonian for \(n=3\) and \(N=2\) might read as \[H=2\ \sigma_{x}\otimes I\otimes\sigma_{z}-3\ I\otimes\sigma_{y}\otimes\sigma_{y}. \tag{10}\] The optimization procedure is straightforward: For a given set of parameters \(p\in\{a_{i},b_{j},w_{ij},c_{i},d\}\), the expectation value \(\langle H\rangle=\langle\psi|H|\psi\rangle\) of the Hamiltonian w.r.t. the sampled wave function is the objective function used for the gradient-based optimization of the parameters. For plain gradient descent with a constant learning rate \(\eta\), the parameters in the \(k\)-th iteration step are adjusted according to \[p_{k+1}=p_{k}-\eta\partial_{p}\left\langle H\right\rangle. \tag{11}\] Given the explicitly known dependence of the amplitude \(a(v)\coloneqq s(v)\phi(v)\) on the parameters \(p\) and exploiting the fact that the amplitudes are real, the following covariance-like structure can be derived for the derivative of the expectation value (we refer to the supplementary notes of [6] for a detailed derivation): \[\partial_{p}\left\langle H\right\rangle=2\left\langle E_{loc}D_{p}\right\rangle-2\left\langle E_{loc}\right\rangle\left\langle D_{p}\right\rangle\, \tag{12}\] with the local energy \(E_{loc}(v)\) and the logarithmic amplitude derivative \(D_{p}(v)\) defined as follows \[E_{loc}(v)=\frac{\langle v|H|\psi\rangle}{a(v)}\, \tag{13}\] \[D_{p}(v)=\partial_{p}\log\left(a(v)\right)=\frac{\partial_{p}a(v)}{a(v)}. \tag{14}\] For each parameter, the explicit expression of \(D_{p}\) can be worked out analytically to give \[D_{a_{i}}(v)=\frac{1}{2}v_{i}-\frac{1}{2}\left\langle v_{i}\right\rangle_{QBM}\, \tag{15}\] \[D_{b_{j}}(v)=\frac{1}{2}\tanh\left(g_{j}\right)-\frac{1}{2}\left\langle h_{j}\right\rangle_{QBM}\, \tag{16}\] \[D_{w_{ij}}(v)=\frac{1}{2}\tanh\left(g_{j}\right)v_{i}-\frac{1}{2}\left\langle v_{i}h_{j}\right\rangle_{QBM}\, \tag{17}\] \[D_{c_{i}}(v)=v_{i}\left(\frac{1}{s(v)}-s(v)\right)\, \tag{18}\] \[D_{d}(v)=\frac{1}{s(v)}-s(v)\, \tag{19}\] with \(g_{j}=\partial E/\partial h_{j}=b_{j}+\sum_{i}w_{ij}v_{i}\), \(v_{i}\) and \(h_{j}\) again being the corresponding \(\sigma_{z}\)-eigenvalues for the \(i\)-th and \(j\)-th qubit state in the visible and hidden layer, respectively, and \(s(v)\) being the sign node from (4). Due to the covariant structure of (12), the constant shifts \(\left\langle...\right\rangle_{QBM}\) in (15)-(17) cancel out and do not have to be calculated. With the QBM approach just explained, the underlying exponentially growing, complex distribution of \(2^{n}\) states can be generated using quadratically growing resources, namely the circuit width (assuming \(m\sim\mathcal{O}(n)\)), circuit depth (owing to the \(nm\) entangling layers necessary to generate the quadratic terms), and required parameters. However, during the investigation and implementation of the approach, several (unexpected) observations have been made that diminish the practicability of the otherwise cleverly designed theoretical algorithm. These points will be discussed in the following section.

Fig. 2: Circuit diagram representation of an entangling layer generating the interaction term between the \(i\)-th qubit from the visible and the \(j\)-th qubit from the hidden layer. The gates are \(R_{y}\)-gates with the arguments denoted in the box and defined in (7).
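Before turning to the implementation hurdles, the classical ingredients of this optimization loop can be made concrete. The sketch below assembles the example Hamiltonian (10) from its Pauli decomposition (9) and evaluates the objective \(\langle H\rangle\) and the local energy (13) for a given amplitude vector; the random vector is only a stand-in for the sampled amplitudes \(a(v)\), and all names are illustrative:

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

def pauli_string(ops):
    """Tensor product of single-qubit operators, cf. Eq. (9)."""
    return reduce(np.kron, ops)

# Example Hamiltonian of Eq. (10): H = 2 X(x)I(x)Z - 3 I(x)Y(x)Y.
H = 2 * pauli_string([X, I, Z]) - 3 * pauli_string([I, Y, Y])
H = H.real  # both Pauli strings above happen to be real-valued matrices

rng = np.random.default_rng(0)
psi = rng.normal(size=8)
psi /= np.linalg.norm(psi)  # stand-in for the sampled amplitudes a(v)

expval = psi @ H @ psi      # objective <psi|H|psi> used in Eq. (11)
e_loc = (H @ psi) / psi     # local energy of Eq. (13), componentwise
# (in practice, division by very small amplitudes needs special care)
```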
## III Implementation Insights and Hurdles in the NISQ-era

Besides problems that can occur when executing circuits on today's NISQ devices, we have used the presented QBM algorithm as an example to work out points that can lead to issues when transferring a quantum algorithm into an executable implementation. In addition to that, in the rapidly growing domain of quantum software stacks, not all available SDKs support necessary features for an efficient implementation. A combination of these points will be addressed in this section, together with a perspective on the types of algorithms for which these points might be an obstacle and, where possible, how to mitigate some of these issues. To ensure the verifiability of our results, we provide the source code of our implementation here [7]. The code was written within the open-source quantum computing framework Qiskit [8] at version 0.31.0. The implementation was developed in the context of a quantum chemistry use case with the goal of approximating electronic ground states of molecules.

### _Scaling of sampling quality_

In order to sample the wave function from the distribution, the circuit must be executed multiple times. However, the sample size (i.e., the number of shots) should be of the same order of magnitude as the number of states to be represented. Following this argument and assuming that for a given \(n\)-qubit Hamiltonian a large portion of the possible basis states contribute to the ground state, the number of required shots scales exponentially as \(\mathcal{O}(2^{n})\). Thus, in the regime where a complex \(2^{n}\)-dimensional distribution might be difficult to access classically, an exponentially growing number of shots has to be performed. Even for usual single-shot, small-depth circuit execution times of \(\sim\mathcal{O}(\mu s)\), an exponentially growing number of repetitions might be a limiting factor. As a reference, currently available NISQ devices support a maximum of around 20,000 - 100,000 shots. Even if it were technically possible to allow for a much larger number of shots, stability of the calibration of the underlying hardware must be ensured in order to get meaningful results. If this is the case, an efficient generation of the probability distribution on the quantum register is possible at the expense of many circuit executions in order to sample it. This, however, is not just a limiting aspect of the discussed QBM algorithm but is rather an issue for any quantum algorithm that relies on sampling for representing a distribution of states encoded in a quantum register.

### _Classical Post Processing_

Conventional variational approaches like, e.g., the Variational Quantum Eigensolver (VQE) [9, 10], encode the target wave function and the problem Hamiltonian onto the quantum hardware in the form of quantum logic gates and allow, e.g., for the efficient evaluation of expectation values. In contrast, the QBM approach generates a probability distribution as a basis for classically calculating the target wave function by summing over hidden layer configurations for a given visible layer configuration. In order to evaluate expectation values necessary for the parameter optimization, the action of the problem Hamiltonian on the wave function must be calculated classically as well. This essentially results in calculating the action of exponentially large (yet sparse) matrices on state vectors, a task which, especially in the context of computing expectation values, quantum computers could in principle perform efficiently. However, this could be a common problem for algorithms which need to further process information about quantum states after it has been transferred to classical data. Thus, it is important to be aware of additional classical calculations after the actual quantum computation in order to not lose the gained advantage by introducing a quantum step.
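The shot-scaling argument of Sec. III-A can also be illustrated directly: estimating a \(2^{n}\)-outcome distribution from a finite number of shots leaves a sampling error that shrinks only as \(1/\sqrt{\text{shots}}\). A small synthetic experiment (the random distribution merely stands in for \(p(v)\) of (2)):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 10
p = rng.random(2 ** n)
p /= p.sum()  # synthetic stand-in for the distribution p(v) of Eq. (2)

for shots in (1_000, 100_000):
    counts = rng.multinomial(shots, p)      # simulated measurement record
    err = np.abs(counts / shots - p).sum()  # L1 distance to the target
    print(f"shots={shots:>7}: L1 sampling error = {err:.3f}")
```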
### _Mid-Circuit measurements_

As described in Section II-B, besides the data qubits from the visible and hidden registers, additional ancillary qubits are necessary in order to ensure a successful sampling. Naively, for \(n\) qubits in the visible and \(m\) qubits in the hidden layer, it requires an additional \(nm\) ancillary qubits to generate the probability distribution. In this scenario, all measurements of the data and ancillary qubits can be performed at the end of the circuit, as is usual for most circuits. But by reusing a single ancillary qubit for all connections between visible and hidden layers, the required qubit resources reduce from \(\mathcal{O}(n^{2})\) to \(\mathcal{O}(n)\). This, however, requires both the measurement and the fast, reliable relaxation of the ancilla during the execution of the circuit. Algorithmically, neither the relaxation nor the mid-circuit measurement pose a problem. Including these features from a hardware and software perspective, however, is more challenging since they inherently affect the way quantum circuits and their execution results must be represented. Yet in order to efficiently implement the QBM, these features are necessary requirements for the hardware and software stack and are not yet supported by some, which limited the available options for the framework of choice. As a step further, allowing for mid-circuit measurements would also open up the possibility of including conditional operations or operation layers on the register, based on the measurement results, as is, in principle, intended in the discussed QBM algorithm as well.

### _Ansatz universality_

The ansatz for the target wave function given in (1) allows for the generation of real amplitudes with different signs for the basis states in order to approximate the ground state of the problem Hamiltonian. Since the amplitudes of the ground state of an arbitrary Hamiltonian can be complex-valued, the ansatz itself does not allow for an arbitrarily good approximation of the actual ground state. As with many optimization problems, it is in fact difficult to compare the quality of the best known solution in the context of the method with the globally optimal solution. In order to address this issue, the originally proposed algorithm has been extended to allow for relative phases between basis states (see [11, 12]). This can be done by including an imaginary part in the sign node in (4) according to \[s(v)=\tanh\left(\sum\nolimits_{k}(c_{k}+i\gamma_{k})v_{k}+d+i\delta\right)\, \tag{20}\] with \(\{\gamma_{k}\},\delta\) being \(n+1\) additional parameters. But by using a phase node, the covariant structure of the analytic gradient in (12) is lost, since real and imaginary parts of the involved quantities then enter separately, so the shifts in (15)-(17) no longer cancel automatically. Besides that, the sign and phase nodes pose another not directly apparent issue, which various Quantum Machine Learning models might struggle with. This will be discussed next.
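Before that, note that the extended node (20) itself is cheap to evaluate classically; a toy numpy sketch (parameter values invented for illustration) that assigns a complex factor to each visible-layer basis state:

```python
import numpy as np

def phase_node(v, c, gamma, d, delta):
    """Complex-valued sign/phase node of Eq. (20).

    v holds rows of sigma_z eigenvalues (+1/-1) of the visible qubits;
    c, gamma, d, delta are the real parameters of the extended ansatz.
    """
    return np.tanh(v @ (c + 1j * gamma) + (d + 1j * delta))

# Two visible qubits: one complex factor per basis state, allowing
# relative phases between the four states.
states = np.array([[+1, +1], [+1, -1], [-1, +1], [-1, -1]])
c, gamma = np.array([0.3, -0.8]), np.array([0.1, 0.4])
print(phase_node(states, c, gamma, d=0.2, delta=-0.5))
```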
### _Ansatz expressivity_

Now, even when assuming that the ground state of a given problem Hamiltonian has real-valued amplitudes, the algorithm proposed by Xia and Kais [6] would still not be able to approximate any ground state arbitrarily well. This is due to the fact that, on the one hand, the most general real-valued \(n\)-qubit wave function has \(2^{n}-1\) degrees of freedom - one amplitude for each of the \(2^{n}\) states minus one fixed amplitude due to normalization. On the other hand, the number of parameters making up the model scales as \(\mathcal{O}(n^{2})\). This _information gap_ becomes especially apparent for the expressivity of the sign node: Ideally, the node is supposed to realize relative signs between contributing states. But since it is built from only \(n+1\) parameters, it is not able to realize relative signs for all of the \(2^{n}\) possible states. For certain Hamiltonians this already posed a major issue for the solution quality obtained with as few as 2 qubits in the visible layer. Note that even by replacing the sign node with a phase node, the limited expressivity is not resolved, since this merely doubles the number of available parameters, which can only contribute to the imaginary part of the amplitude. In general, when building variational models, it is difficult to find a good compromise between expressivity and the number of parameters involved in building the model. It is reasonable to assume that, in the context of the QBM algorithm, good approximations of the ground state of a Hamiltonian can only be found if it is composed of only a few basis states and if there is only a small number of relative signs between basis states.

### _Width & Depth_

As stated at the end of Section II-C, the width and depth requirements necessary for implementing the QBM algorithm scale as \(\mathcal{O}(n^{2})\) when assuming equally large visible and hidden layers (i.e., \(m\sim\mathcal{O}(n)\)). At first glance, this scaling seems fairly tolerable. However, when implementing said algorithm and trying to execute it on current quantum devices, one might stumble over some points which are not necessarily obvious: For example, in the NISQ era, the prefactor of the depth scaling plays a non-negligible role. Of course, in the context of complexity theory any prefactors and non-dominant terms can be neglected, but for implementing algorithms on today's NISQ devices, e.g. just doubling the depth of an algorithm has a huge impact on the quality of the results. This prefactor actually becomes quite large for the necessary doubly-controlled rotation gates when they are decomposed into gates which can be executed on quantum backends. This is necessary because current backends support only a limited number of one- and two-qubit gates, into which any other gate described in an algorithm must be decomposed. As a small example, assume that a backend supports \(R_{y}\)-gates and CNOTs. Following the decomposition rules of Fig. 3(a) and Fig. 3(b), a single doubly-controlled rotation gate requires \(8\) CNOT- and \(6\) \(R_{y}\)-gates.

Fig. 3: Decomposition of (a) a doubly-controlled rotation gate into controlled-rotation gates and CNOTs and (b) a controlled-rotation gate into one-qubit rotations and CNOTs. All gates shown are \(R_{y}\)-rotation gates with the argument denoted in the box, although these decompositions hold for \(R_{x}\)- and \(R_{z}\)-rotations as well (for \(R_{x}\)-rotations replace the CNOTs in (b) with CZ-gates).
Thus, in order to implement all \(n^{2}\) entangling layers necessary for generating the quadratic terms in the probability distribution, it actually requires \(4n^{2}(2+3\cdot 2)=32n^{2}\) two-qubit gates and \(4n^{2}(3\cdot 2)=24n^{2}\) one-qubit gates. In addition to the gate decomposition, even more two-qubit SWAP-gates are necessary if the interacting qubits cannot be directly entangled due to the quantum processor's topology. Referring to the work of Leymann and Barzen [4], this example was intended to point out that the choice of a particular backend has a major impact on the circuit depth and, thus, the quality of the results. By analyzing the structure of an algorithm and the features of available backends, it is possible to improve on the solution quality.

## IV Conclusion & Outlook

To gain insight into the practicability of theoretically formalized quantum algorithms on current NISQ devices, we have implemented an algorithm for a quantum Boltzmann machine (QBM) proposed by Xia and Kais [6] and systematically summarized obstacles and limitations we have faced in the process. Thereby, we have identified discrepancies between the domain of theoretical algorithm design and the practical application of quantum algorithms for actual use cases on current quantum hardware. The systematically presented obstacles, limitations, and initially identified mitigation ideas can guide algorithm developers and practitioners to apply and implement a QBM, as well as similar algorithms. One of the key findings is the following: besides issues when transferring quantum data into classical data for further processing, e.g., when sampling from a distribution stored on the quantum register, we pointed out the desirable support of quantum hardware and quantum software for mid-circuit measurements. In our opinion, this feature holds potential for novel quantum algorithms, especially when considering the interchangeability of whole circuit operations conditioned on (multiple) measurement results. Based on the decomposition of gates necessary for the algorithm at hand, we argue that the choice of an appropriate backend supporting a favorable set of gates is crucial for reducing the circuit depth and improving on the quality of results. In this context, we would like to draw attention to promising automated backend and implementation selection approaches based on algorithm and hardware properties, as described in the work by Salm et al. [13]. Whilst experimenting with the implementation of different Hamiltonians (as described in Section III-E), the question has come up of how the solution quality might be estimated based on properties of the Hamiltonian and the ansatz of the variational circuit. Furthermore, we are going to share our implementation of the QBM via the collaborative quantum software platform PlanQK [14, 15] to present and discuss our findings with further developers, quantum algorithm experts, and scientists in the quantum algorithm and quantum software development community. We are eager to abstract and generalize the essence of our findings along with proven mitigation ideas into design patterns for quantum algorithms and contribute them to the body of knowledge on quantum pattern languages started by Weigold et al. [16].

## Acknowledgements

This work was partially funded by the project PlanQK (01MK20005N) supported by the German Federal Ministry for Economic Affairs and Climate Action.
2309.11724
Emotion-Aware Prosodic Phrasing for Expressive Text-to-Speech
Prosodic phrasing is crucial to the naturalness and intelligibility of end-to-end Text-to-Speech (TTS). There exist both linguistic and emotional prosody in natural speech. As the study of prosodic phrasing has been linguistically motivated, prosodic phrasing for expressive emotion rendering has not been well studied. In this paper, we propose an emotion-aware prosodic phrasing model, termed _EmoPP_, to mine the emotional cues of the utterance accurately and predict appropriate phrase breaks. We first conduct objective observations on the ESD dataset to validate the strong correlation between emotion and prosodic phrasing. Then the objective and subjective evaluations show that EmoPP outperforms all baselines and achieves remarkable performance in terms of emotion expressiveness. The audio samples and the code are available at https://github.com/AI-S2-Lab/EmoPP.
Rui Liu, Bin Liu, Haizhou Li
2023-09-21T01:51:10Z
http://arxiv.org/abs/2309.11724v1
# Emotion-Aware Prosodic Phrasing for Expressive Text-to-Speech

###### Abstract

Prosodic phrasing is crucial to the naturalness and intelligibility of end-to-end Text-to-Speech (TTS). There exist both linguistic and emotional prosody in natural speech. As the study of prosodic phrasing has been linguistically motivated, prosodic phrasing for expressive emotion rendering has not been well studied. In this paper, we propose an emotion-aware prosodic phrasing model, termed _EmoPP_, to mine the emotional cues of the utterance accurately and predict appropriate phrase breaks. We first conduct objective observations on the ESD dataset to validate the strong correlation between emotion and prosodic phrasing. Then the objective and subjective evaluations show that EmoPP outperforms all baselines and achieves remarkable performance in terms of emotion expressiveness. The audio samples and the code are available at [https://github.com/AI-S2-Lab/EmoPP](https://github.com/AI-S2-Lab/EmoPP).

Rui Liu\({}^{1}\), Bin Liu\({}^{1}\), Haizhou Li\({}^{2,3}\)
\({}^{1}\) Inner Mongolia University, Hohhot, China
\({}^{2}\) Shenzhen Research Institute of Big Data, School of Data Science, The Chinese University of Hong Kong, Shenzhen, China
\({}^{3}\) National University of Singapore, Singapore
[email protected], [email protected], [email protected]

Prosodic Phrasing, Emotion, Text-to-Speech (TTS)

## 1 Introduction

Prosodic phrasing aims to break a long utterance into prosodic units using phrase break prediction [1]. Over the last few years, Text-to-Speech (TTS) models have made significant improvements [2, 3, 4] with the help of end-to-end architectures. Note that prosodic phrasing is often the first step in generating a prosody pattern, such as intonation and duration modeling [5]. Any errors made in the prosodic phrasing are propagated to the downstream prosodic models, resulting in unnatural speech. Therefore, prosodic phrasing is critical in improving the naturalness and intelligibility of TTS systems [6]. Traditional prosodic phrasing approaches mainly focus on the following two categories: 1) rich linguistic feature extraction and 2) effective architecture design. For the first category, some works attempted to incorporate part of speech (POS) [7], semantic and syntactic structure [8], contextual information [9], various high-level embedding representations [10], and even multi-modal knowledge [11] as enriched input features. For the second category, researchers tried to build the prosodic phrasing model with conditional random fields (CRF) [12], deep neural networks (DNNs) [13], recurrent neural networks (RNNs) [14], bidirectional long short-term memory (BiLSTM) [15], and self-attention-based transformer networks [16], which allow for learning the long-term time dependencies and sequential characteristics in text. The above work has contributed greatly to improving the naturalness and intelligibility of TTS. However, the influence of prosodic phrasing on expressive modeling, especially emotion, in TTS has not received much attention. Actually, different phrase breaks in the same utterance will express different emotions, and in turn, different emotional states will affect the placement of phrase breaks in an utterance [17]. When expressing nervous or anxious emotions, people may add more breaks to the sentence, making the voice more rhythmic and compact. There may be fewer breaks when expressing relaxed or composed emotions.
Therefore, investigating the relationship between emotion and prosodic phrasing and incorporating this knowledge into the expressive TTS system to enhance its emotional expression will be the focus of this paper. In this paper, we propose an emotion-aware prosodic phrasing model, termed _EmoPP_, to contribute to the expressive modeling of TTS. Specifically, EmoPP consists of a text encoder, an emotion predictor, and a decoder. The text encoder and emotion predictor aim to extract the linguistic feature and the emotion state from the input utterance, respectively. The decoder takes both the linguistic feature and the emotion state to predict the final phrase breaks relating to the emotional speech. In this way, EmoPP mines the emotional cues of the utterance accurately and predicts appropriate phrase breaks. The objective experimental results on the IEMOCAP dataset suggest that our EmoPP outperforms all baselines in terms of break prediction accuracy. The subjective listening experiments with an expressive TTS model further validate our method. The main contributions of this work can be summarized as follows: 1) We propose a novel emotion-aware prosodic phrasing model _EmoPP_ for expressive TTS; 2) We incorporate emotion information into the phrase break prediction model to learn the relationship between emotion and prosodic phrasing; 3) The objective and subjective experimental results validate our EmoPP. To our knowledge, this is the first emotion-aware prosodic phrasing scheme for expressive modeling of TTS.

## 2 Emotion-Specific Prosodic Phrase Breaks

We first conduct objective observations on the ESD dataset [18] to validate the strong correlation between emotion and prosodic phrasing. For ESD, we note that each text was read aloud with five emotion categories (neutral, happy, angry, sad, and surprise), allowing us to easily compare the prosodic phrasing differences of the same utterance under different emotional states. The original ESD dataset consists of 29 hours of recordings by 10 native English speakers and 10 native Chinese speakers. We select only the English subset, comprising 350 text transcriptions and 17,500 audio recordings, and extract the phrase breaks in the utterances for all audio. Specifically, we construct an automatic break extraction pipeline that includes _Force Alignment_ and _Break Label Generation_. _Force Alignment_ employs the Montreal Forced Aligner (MFA)1 to align the audio signal and word sequence. In _Break Label Generation_, following [19] and according to the MFA results, we mark a word as "1" (break) if it is followed by a silence segment of more than 30 milliseconds; otherwise it is marked as "0" (non-break). Footnote 1: [https://montreal-forced-aligner.readthedocs.io/en/latest/](https://montreal-forced-aligner.readthedocs.io/en/latest/) Let \(P^{n}\), \(P^{h}\), \(P^{a}\), \(P^{sa}\), \(P^{su}\) denote the prosodic phrase break sequences of the five emotion states for the same sentence from one speaker. To make a comprehensive comparison of the similarities and differences in the prosodic phrasing of different emotions, we sample two emotion categories, \(i\) and \(j\), from the five emotional states and calculate the Simple Matching Coefficient (SMC) [20], denoted as \(\xi_{i,j}\), between the phrase break sequence pair from the two emotions. \[\xi_{i,j}=\frac{1}{NM}\sum_{s=1}^{M}\sum_{t=1}^{N}S\!M\!C(P^{i}_{t,s},P^{j}_{t,s}) \tag{1}\] where \(N\) and \(M\) denote the numbers of utterances and speakers in ESD, respectively.
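For concreteness, the per-sentence matching entering (1) can be computed as follows (a minimal sketch with toy break sequences; the actual pipeline operates on the MFA-derived labels described above):

```python
import numpy as np

def smc(breaks_i, breaks_j):
    """Simple Matching Coefficient between two binary break sequences."""
    breaks_i, breaks_j = np.asarray(breaks_i), np.asarray(breaks_j)
    return np.mean(breaks_i == breaks_j)

# Break sequences of the same sentence under two emotions
# (1 = break after the word, 0 = no break); toy values for illustration.
p_angry = [0, 1, 0, 0, 1, 0, 1]
p_sad   = [0, 1, 0, 1, 1, 0, 1]
print(smc(p_angry, p_sad))  # 6 of 7 positions match -> ~0.857
```

Averaging such scores over all utterances and speakers yields \(\xi_{i,j}\).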
Note that the speaker-specific prosodic phrasing [21, 22] is not our focus in this work. Therefore, we finally average the SMC scores from different texts and different speakers, as shown in Table 1. It is observed that the correlation coefficients between prosodic phrase breaks for any two different emotions are less than 1, while those for the same emotion pairs are always 1. This suggests that the prosodic phrase-breaking patterns vary across emotions, indicating that prosodic phrase breaks may be emotion-specific. To this end, we will study the emotion-aware prosodic phrasing scheme to achieve expressive TTS.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline & Angry & Happy & Neutral & Sad & Surprise \\ \hline Angry & 1.00 & 0.91 & 0.92 & 0.92 & 0.91 \\ Happy & - & 1.00 & 0.91 & 0.90 & 0.90 \\ Neutral & - & - & 1.00 & 0.91 & 0.91 \\ Sad & - & - & - & 1.00 & 0.90 \\ Surprise & - & - & - & - & 1.00 \\ \hline \hline \end{tabular} \end{table} Table 1: Simple Matching Coefficient (SMC) between prosodic phrase breaks of different emotions.

## 3 EmoPP: Methodology

The study on prosodic pause prediction with deep neural networks has achieved laudable results. However, such prosodic phrase breaks are typically linguistically rather than emotionally motivated. In this work, we introduce a novel approach that first extracts the speaker's emotional state from the text. This emotion information is combined with the text and used as an input for predicting prosodic pauses that align with the emotional context of the text.

### Overall Architecture

The overall architecture of our model is shown in Fig. 1. Our EmoPP consists of a text encoder, an emotion predictor, and a decoder. The text encoder aims to extract the linguistic feature from the input text. The emotion predictor seeks to infer the emotional category of the input text. The decoder takes both the linguistic feature and the emotional cues as input to predict the emotion-aware phrase breaks.

#### 3.1.1 Text Encoder

We use \(X\) to denote the input text. In view of the powerful semantic modeling of BERT [23], we adopt BERT as our text encoder to extract the word-level linguistic feature \(\mathcal{H}_{lin}\) of the input text. \[\mathcal{H}_{lin}=Enc_{text}(X) \tag{2}\]

#### 3.1.2 Emotion Predictor

The emotion predictor consists of a RoBERTa layer and a linear layer. Note that RoBERTa is a variant of BERT with new training strategies [24], such as removing the _next sentence prediction_ objective and training on longer sequences. Owing to the exceptional performance of RoBERTa in various emotion classification tasks [25], the RoBERTa layer is added to infer the emotional cues from the input text. After that, the linear layer is used to predict the final emotion category label. The predicted emotion labels are then embedded into an emotion embedding \(\mathcal{H}_{emo}\) through an embedding layer. \[\mathcal{H}_{emo}=Pre_{emotion}(X) \tag{3}\] At last, the emotion embedding \(\mathcal{H}_{emo}\) is concatenated with the linguistic feature \(\mathcal{H}_{lin}\) of the text encoder to form a joint embedding \(\mathcal{H}\), which will be fed into the decoder. \[\mathcal{H}=concat(\mathcal{H}_{lin},\mathcal{H}_{emo}) \tag{4}\] where \(concat(\cdot)\) denotes concatenation of \(\mathcal{H}_{lin}\) and \(\mathcal{H}_{emo}\) along the last dimension.

#### 3.1.3 Decoder

The decoder consists of a BiLSTM layer and a linear layer. The BiLSTM reads \(\mathcal{H}\) to summarize long-term time dependencies and sequential characteristics into a representative feature.
To prevent overfitting, dropout layers are applied after the BiLSTM layer. The output from the BiLSTM layer is passed through a linear layer to generate logits that are used to predict whether a pause exists at each word within the text. Finally, we obtain the final phrase break sequence \(Y\). \[Y=Dec(\mathcal{H}) \tag{5}\]

### Loss Functions

It's worth mentioning that the emotion predictor is jointly trained with the whole network, which allows the EmoPP to perform both emotion prediction and prosodic pause prediction. Therefore, the total loss function includes two parts, \(\mathcal{L}_{emo}\) and \(\mathcal{L}_{pp}\). \(\mathcal{L}_{emo}\) aims to make the output of the emotion predictor close to the true emotion category of the corresponding speech of that text, and \(\mathcal{L}_{pp}\) is used to make the output of the decoder close to the true prosodic phrase break sequence. \[\mathcal{L}=\mathcal{L}_{emo}+\alpha\cdot\mathcal{L}_{pp} \tag{6}\] where \(\alpha\) is the balance factor between the two loss terms. In this way, we can leverage emotional information to aid in prosodic break prediction, thereby improving the model's ability to generate a prosodic break sequence that is consistent with the emotional expression of the utterance.

## 4 Experiments and Results

We validate the EmoPP with the IEMOCAP [26] dataset. To ensure class balance, we select data corresponding to five commonly observed emotions: neutral, happy, angry, sad, and surprised. Moreover, following the automatic break extraction pipeline introduced in Section 2, we derive the phrase break sequence for all utterances of IEMOCAP as the phrase break prediction training data.

### Experimental Setup

For the baseline model, the dimension of word embeddings was 300. The hidden size and projection size for each BiLSTM layer were both 512. For the proposed model, we configured the BERT as bert-base-uncased2 and the RoBERTa as roberta-base3. The dimensions of the hidden sequence and emotion embeddings were 768. During training, each mini-batch had 16 sentences. We used the Adam optimizer and set the initial value of the dynamic learning rate to \(1\times 10^{-5}\). The balance factor in Eq. 6 is set to 0.7 empirically. We set the number of training epochs to 10. The models that performed best on the validation set during these iterations were saved for comparison. To prevent training instability or gradient explosion when the norm of the gradients becomes excessively large, we employ gradient clipping and set the norm threshold of the gradients to 10. Footnote 2: [https://huggingface.co/bert-base-uncased/blob/main/config.json](https://huggingface.co/bert-base-uncased/blob/main/config.json) Footnote 3: [https://huggingface.co/roberta-base/blob/main/config.json](https://huggingface.co/roberta-base/blob/main/config.json)

### Comparative Study

We develop four systems for a comparative study: 1) **BiLSTM**[8]: the classical system that takes BiLSTM as the backbone; 2) **BERT + BiLSTM**[27]: the advanced system that adopts BERT to extract the linguistic feature; 3) **EmoPP (Ours)**: the proposed emotion-aware prosodic phrasing model; and 4) **w/o RoBERTa**: the ablation system that validates the RoBERTa module of the emotion predictor, in which we replace RoBERTa with a simple linear layer.
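A rough PyTorch sketch of the model just described is given below. It follows Secs. 3.1-3.2 and the hyperparameters above, but the dropout rate, the hard-argmax label embedding, the shared tokenization, and all names are assumptions made for brevity; the released code at https://github.com/AI-S2-Lab/EmoPP is authoritative:

```python
import torch
import torch.nn as nn
from transformers import BertModel, RobertaModel

class EmoPPSketch(nn.Module):
    """Sketch of EmoPP: BERT text encoder, RoBERTa emotion predictor,
    and a BiLSTM + linear decoder (Eqs. (2)-(5))."""

    def __init__(self, n_emotions=5, emo_dim=768, hidden=512):
        super().__init__()
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.emo_encoder = RobertaModel.from_pretrained("roberta-base")
        self.emo_head = nn.Linear(768, n_emotions)          # emotion logits
        self.emo_embed = nn.Embedding(n_emotions, emo_dim)  # label -> H_emo
        self.decoder = nn.LSTM(768 + emo_dim, hidden,
                               batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(0.5)                      # rate assumed
        self.break_head = nn.Linear(2 * hidden, 2)          # break / no-break

    def forward(self, ids, mask):
        # NOTE: BERT and RoBERTa normally require separate tokenizations;
        # sharing `ids` here only keeps the sketch short.
        h_lin = self.text_encoder(ids, attention_mask=mask).last_hidden_state
        pooled = self.emo_encoder(ids,
                                  attention_mask=mask).last_hidden_state[:, 0]
        emo_logits = self.emo_head(pooled)             # supervised by L_emo
        h_emo = self.emo_embed(emo_logits.argmax(-1))  # embed predicted label
        h = torch.cat([h_lin,
                       h_emo.unsqueeze(1).expand(-1, h_lin.size(1), -1)],
                      dim=-1)                          # Eq. (4)
        out, _ = self.decoder(h)                       # Eq. (5)
        return self.break_head(self.dropout(out)), emo_logits
```

With `ce = nn.CrossEntropyLoss()`, the joint objective of Eq. (6) then reads `loss = ce(emo_logits, emo_labels) + 0.7 * ce(break_logits.flatten(0, 1), break_labels.flatten())`.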
### Objective Evaluations

We report the objective results in terms of Precision (P), Recall (R), and F-score (F), which is defined as the harmonic mean of P and R. F values range from 0 to 1, with a higher value indicating better performance. We randomly select 100 test samples from the test set and report all results in Table 2. We observe that EmoPP performs best in terms of phrase break prediction accuracy. Specifically, the Precision, Recall, and F1-Score achieve the best values among all systems. This suggests that our EmoPP incorporates the emotional cues and contributes to generating emotion-aware phrase breaks for the utterance. To demonstrate that our performance improvement is not due to the addition of a RoBERTa with a huge number of parameters, we can check the results of the last row of Table 2. The ablation results show that the F1-Score performance dropped by a large margin. More importantly, although the F1-Score of \(w/o\) RoBERTa is lower than that of EmoPP, its value is still higher than that of the BiLSTM baseline. We can find that the emotional cues of the utterance still play a key role in prosodic phrasing, which is encouraging.

Figure 1: The overall architecture of EmoPP, consisting of a text encoder, an emotion predictor, and a decoder.

### Subjective Evaluations

To further validate our EmoPP in terms of human perception, we build two emotional TTS systems that take both the input text and the phrase break information as input. As shown in Fig. 2, the phrase break information of (a) is obtained by the BiLSTM model, while that of (b) is obtained by our EmoPP. The emotional TTS is trained with an emotional conversational TTS dataset, DailyTalk [28]4, and implemented following this project5. We invite 10 volunteers, and each volunteer is asked to listen to 100 samples to rate the prosody expressiveness with a 5-scale Expressive Mean Opinion Score (EMOS). Note that EMOS focuses only on the emotion expressiveness performance for all samples. Footnote 4: We attempted to train the emotional TTS model using the IEMOCAP dataset. However, the synthesized speech produced significant noise. Since IEMOCAP was not originally designed for TTS purposes, it is not optimal for our subjective test. Footnote 5: [https://github.com/keonlee9420/DailyTalk](https://github.com/keonlee9420/DailyTalk) The results are reported in Table 3. We observe that the EMOS score of the "TTS with EmoPP" system is higher than that of "TTS with BiLSTM", indicating that the emotion-aware prosodic phrasing indeed contributes to the expressive rendering of emotion. The subjective evaluation further supports the effectiveness of our EmoPP in terms of expressive modeling of TTS.

### Visualization Analysis

In this section, we visualize the mel-spectrogram features to further validate the EmoPP intuitively. We select the synthesized speech for the text "You have a business here. I said what the hell is this?" from TTS models (a) and (b) in Fig. 2. The plotted mel-spectrogram features are shown in Fig. 3. We find that with the addition of the emotion-aware phrase break after the word "said", the "angry" emotion expressed in this utterance is more pronounced, enhancing the emotional expressiveness of the speech.

## 5 Conclusion

In this paper, we proposed an emotion-aware prosodic phrasing scheme, termed EmoPP, for expressive modeling of TTS. The emotion predictor, designed in addition to the text encoder and decoder, allows EmoPP to mine the emotional cues of the utterance accurately and predict appropriate phrase breaks.
The objective and subjective experimental results suggest that our EmoPP outperforms all baselines in terms of break prediction accuracy and expressive prosody rendering for emotional TTS. In future work, we will further improve the model architecture and validate it on more datasets.

\begin{table} \begin{tabular}{c|c c c} \hline **Systems** & **Precision** & **Recall** & **F1-Score** \\ \hline BiLSTM & 75.08 & 73.08 & 73.90 \\ BERT + BiLSTM & 78.49 & 76.73 & 77.48 \\ \hline **EmoPP (Ours)** & **78.95** & **77.95** & **78.43** \\ \hline \(w/o\) RoBERTa & 77.76 & 73.57 & 74.95 \\ \hline \end{tabular} \end{table} Table 2: Performance comparison of the baseline models and the EmoPP model on the IEMOCAP dataset.

Figure 3: Visualizations of the mel-spectrograms generated by the two TTS systems. The red boxes indicate the phrase breaks.

\begin{table} \begin{tabular}{c|c} \hline **Systems** & **EMOS** \\ \hline TTS with BiLSTM & 3.84 \(\pm\) 0.09 \\ TTS with EmoPP & 4.09 \(\pm\) 0.05 \\ \hline \end{tabular} \end{table} Table 3: EMOS results of the two TTS systems.

Figure 2: Two TTS systems with different phrase breaks, predicted by the BiLSTM baseline and the proposed EmoPP, respectively.
2309.07146
Transmission matrix parameter estimation of COVID-19 evolution with age compartments using ensemble-based data assimilation
The COVID-19 pandemic and its multiple outbreaks have challenged governments around the world. Much of the epidemiological modeling was based on pre-pandemic contact information of the population, which changed drastically due to governmental health measures, so-called non-pharmaceutical interventions made to reduce transmission of the virus, like social distancing and complete lockdown. In this work, we evaluate an ensemble-based data assimilation framework applied to a meta-population model to infer the transmission of the disease between different population age groups. We perform a set of idealized twin experiments to investigate the performance of different possible parameterizations of the transmission matrix. These experiments show that it is not possible to unambiguously estimate all the independent parameters of the transmission matrix. However, under certain parameterizations, the transmission matrix in an age-compartmental model can be estimated. These estimated parameters lead to an increase in forecast accuracy in the age-group compartments when assimilating age-dependent accumulated cases and deaths observed in Argentina, compared to a single-compartment model, and to reliable estimations of the effective reproduction number. The age-dependent data assimilation and forecasting of virus transmission may be important for an accurate prediction and diagnosis of health care demand.
Santiago Rosa, Manuel Pulido, Juan Ruiz, Tadeo Cocucci
2023-09-06T19:42:57Z
http://arxiv.org/abs/2309.07146v1
# Transmission matrix parameter estimation of COVID-19 evolution with age compartments using ensemble-based data assimilation

###### Abstract

The COVID-19 pandemic and its multiple outbreaks have challenged governments around the world. Much of the epidemiological modeling was based on pre-pandemic contact information of the population, which changed drastically due to governmental health measures, so-called non-pharmaceutical interventions made to reduce transmission of the virus, like social distancing and complete lockdown. In this work, we evaluate an ensemble-based data assimilation framework applied to a meta-population model to infer the transmission of the disease between different population age groups. We perform a set of idealized twin experiments to investigate the performance of different possible parameterizations of the transmission matrix. These experiments show that it is not possible to unambiguously estimate all the independent parameters of the transmission matrix. However, under certain parameterizations, the transmission matrix in an age-compartmental model can be estimated. These estimated parameters lead to an increase in forecast accuracy in the age-group compartments when assimilating age-dependent accumulated cases and deaths observed in Argentina, compared to a single-compartment model, and to reliable estimations of the effective reproduction number. The age-dependent data assimilation and forecasting of virus transmission may be important for an accurate prediction and diagnosis of health care demand.

## 1 Introduction

Governments around the world have had to make several difficult decisions with the spread of the SARS-COV-2 virus in early 2020. Different flavors of social distancing measures, from restrictions localized to at-risk populations to general lockdowns, were implemented to mitigate the propagation of COVID-19, at the expense of a decline in productivity. A lockdown may have a strong impact on the epidemic propagation, with a flattening of the active cases curve. On the other hand, it also has a negative impact on education and social activities. Furthermore, a large COVID-19 outbreak also affects the economy, as evidenced in the case of widespread and strictly enforced sick leaves. Therefore, decision makers need to evaluate carefully the trade-off between socio-economic well-being and sanitary conditions. There is a need to develop real-time decision-making tools which can monitor the situation of the pandemic and predict the evolution of the disease at different scales: from neighborhoods and cities to states and nationwide. The epidemiological predictions may help to prevent overloading of the health system: analyses of thresholds and trends in the number of active cases may be used by governments to implement different non-pharmaceutical interventions which can prevent the collapse of healthcare availability. Research on COVID-19 spread monitoring and modelling (e.g. [1]) had a strong political impact worldwide: several governments around the globe opted for various actions after it. However, the disparate COVID-19 evolution in different countries made clear that continuous, data-based monitoring of the local spread was required to adopt timely distancing measures. This work is the result of a project from a grant call for COVID-19 research of the National Research Agency in Argentina, in which real-time prediction of the propagation, and in particular of the epidemic peaks, around the country was one of the main objectives.
COVID-19 propagation has been modeled through epidemiological models, most commonly population compartmental models, like Susceptible-Exposed-Infected-Recovered (SEIR) models. In some cases, these models may give good estimations, particularly at the initial phase of an outbreak. However, the virus propagation is subject to the complexity of human interactions and individually varying viral loads [2], which are difficult to describe with compartmental models. Even the most advanced meta-population models (e.g. GLEAM [3]) and agent-based models [4] represent very crudely the transmission dynamics of the virus, since it depends on said interactions between individuals, which are difficult to model and (most importantly) predict in a realistic fashion. Furthermore, social life changed significantly through the evolution of the pandemic because of several factors (government decisions, news, social status). The accumulated data about the epidemic is also rather limited and prone to errors: detection policies have changed with time, cases occurring on weekends are reported with delay, hospital discharge dates are lacking, etc. On top of the mentioned sources of uncertainty, there is a large amount of undetected cases: a large number of individuals do not suffer noticeable symptoms and/or do not report them, or, on a smaller scale, the tests give false negatives [5]. Since data are incomplete and noisy and models suffer from misrepresentation of the underlying complex processes, the idea of combining model and data becomes appealing. The main aim of real-time model-data fusion techniques, referred to as sequential inference or data assimilation, is to combine very diverse sources of information considering their uncertainties. In particular, data and epidemiological models are considered with their uncertainty, and the techniques aim to: determine the epidemiological state of the population, estimate the optimal model parameters, and quantify the optimal model uncertainty represented via stochastic processes, using the observational evidence. Some of the most advanced techniques for prediction and risk assessment are those associated with weather forecasting, implemented in environmental prediction centers and national weather services. These agencies need to model climate disasters including flash floods, extreme droughts and heat waves. There is a plethora of observational instruments of the atmosphere, such as satellites, airplanes, radiosondes, and meteorological radars. Data is being generated continuously by these instruments and needs to be fused with numerical model predictions. Furthermore, there are substantial regions on Earth which are poorly observed (e.g. vast areas over the southern oceans). State-of-the-art data assimilation methods combining numerical models and data are essential to propagate information between different variables, both spatially and temporally, for weather forecasting [6]. This process is conducted in real time. There is a standardized protocol for meteorological data acquisition, storage, and modelling for an optimal communication and collaboration between countries and/or state agencies. The authors of [7] propose to organize similar international protocols for epidemiological modeling. Several works apply data assimilation techniques to epidemiological modeling. Shaman et al. [8, 9] use an ensemble-based data assimilation framework to model influenza propagation. The state evolution of an epidemiological model, i.e. a
SIRS model, is combined with direct and indirect data (e.g. the level of web activity related to the illness) from the epidemic. At the same time, parameters of the system are learned online as the observations become available. In that work, they use a variant of the ensemble Kalman filter (EnKF). An EnKF-based estimation and forecast of cholera cases, applied to a SIRB model (the B stands for the concentration of _V. cholerae_ in water reservoirs) divided into communities, is conducted in [10]. The model is forced with the amount of rainfall each community experienced and, by assimilating weekly cases and deaths, it allows new cases to be forecast. With the necessity of monitoring the spread of COVID-19, and because of the worldwide abundance of data, several works used data assimilation to estimate the spread of the SARS-CoV-2 virus. Li et al. [11] use the iterated filter-ensemble adjustment Kalman filter to assimilate COVID-19 data within China using a meta-population model and mobility data. They propose the estimation of the undocumented (asymptomatic) infections fraction together with the rate of transmission of the undocumented infections. They estimate the undocumented rate to be 86%. Engbert et al. [12] use an EnKF for regional transmission modeling. They propose maximizing the likelihood to estimate time-independent parameters in a stochastic SEIR model to capture the dynamics of the epidemic at regional levels. Evensen et al. [13] applied an ensemble Kalman smoother technique to a meta-population model. The evolution of epidemiological parameters is estimated over a long time period assuming a prior density for them. The technique is able to capture the abrupt change in the reproduction number in several countries after lockdown measures. Chinazzi et al. [14] use a meta-population epidemiological model combining the individual spread between regions via flight information. The reproduction number (\(R_{0}\)) is estimated using approximate Bayesian computation, varying \(R_{0}\) and comparing the resulting simulations with the observed number of imported cases. There is a strong dependence between the severity of COVID-19 symptoms and age. Infections among children and young people often result in asymptomatic cases. On the other hand, adults aged over 60 develop the most severe, and sometimes lethal, cases. Transmission effects have also been associated with age [15], [16], [17]. While children under 10 years old appear to have a low susceptibility to infection, people over 60 are highly susceptible. Identifying age-dependence in the virus transmission is essential for policy making using non-pharmaceutical interventions, e.g. school opening/closing [13]. Estimating the amount of contacts between individuals for a particular population is a challenge, and it is usually achieved by statistically significant population surveys. Klepac et al. [18] use the data collected from a smartphone application in the UK to infer social interactions: the data contain the contact history of each user labeled by agegroups, so the authors have an empirical statistical contact matrix of the population. This was used in an agent-based model (ABM) to simulate an influenza-like outbreak for the BBC documentary _Contagion_. Arregui et al. [19] use surveys from eight countries [20] to extrapolate known contact matrices to other countries. These works use a fixed contact matrix to study the evolution of epidemics, and there is no estimation of time-varying contact rates.
The authors of [13] use a base matrix \(\mathcal{C}\) modulated by a time-dependent coefficient \(R(t)\), in which case the transmission matrix is \(\Lambda=R(t)\mathcal{C}\). The base matrix is normalized in such a way that \(R(t)\) is the effective reproduction number. This work aims to study alternatives to a time-independent transmission matrix, proposing time-varying parameterizations. Alongside these parameters, we also estimate other relevant quantities, like the effective reproduction number and the fractions of detected cases and deaths, using the age-structured data of the virus spread. The changes over time in the transmission matrix are also estimated (e.g. changes in mobility in one of the agegroups considered). To this end, we combine a meta-population SEIRHD model with a stochastic EnKF to assimilate age-structured cumulative cases and deaths. Finally, we forecast the age-dependent propagation of COVID-19. The outline of this article is as follows: In section 2 we present our model and introduce the data assimilation framework. In section 3 we give details of the real-world data utilized, present the general experimental details and show the different contact matrix parameterizations used. In section 4 we present and discuss the results; each subsection corresponds to a different experiment, including synthetic and real-world data experiments. In section 5 we draw the conclusions of our investigation.

## 2 Technique details

### Compartmental epidemiological model

The evolution of COVID-19 is modeled for the whole population of a region, which is assumed to be isolated. The model we used is an extension of a basic SEIR (Susceptible, Exposed, Infectious, Recovered) model, where a closed population (i.e. no births, deaths, immigration or emigration) is divided into \(n\) agegroups. A detailed description of classic SEIR models may be found in [21]. The variables considered are \(S_{j}\) (susceptible), \(E_{j}\) (exposed but not infectious), \(I_{j}\) (infected), \(M_{j}\) (mild symptoms), \(T_{j}\) (severe symptoms), \(C_{j}\) (critical symptoms), \(R_{j}\) (recovered) and \(D_{j}\) (deaths). The index \(j=1,\,...,n\) is used to indicate the agegroup. The flow between epidemiological categories of the model is shown in Fig 1. Infected individuals in the agegroup \(j\), \(I_{j}\), can interact with susceptible individuals in the agegroup \(k\), \(S_{k}\), with a transmission rate \(\lambda_{jk}\). The individuals of the susceptible classes \(S_{j}\) that are exposed to the disease are moved to the exposed compartment \(E_{j}\). The individuals in this compartment do not transmit the virus. After a mean incubation time \(\tau^{E}\), the exposed individuals move to the infected group \(I_{j}\). In this stage, individuals can spread the virus to susceptible persons during the period \(\tau^{I}\). After that, the individuals transit to the compartments \(T_{j}\), \(C_{j}\) or \(M_{j}\) with probabilities \(f_{j}^{T}\), \(f_{j}^{C}\) and \(1-f_{j}^{T}-f_{j}^{C}\), respectively. The group \(T_{j}\) contains the individuals presenting severe cases that require hospitalization and, after a time \(\tau^{T}\), recover from the disease, moving to the recovered compartment \(R_{j}\). The compartment \(C_{j}\) (critical) represents the individuals with severe cases that require hospitalization and, after a time \(\tau^{C}\), die and move to the dead compartment \(D_{j}\).
The compartment \(M_{j}\) consists of the individuals who present mild symptoms and require no hospitalization; after a time \(\tau^{M}\), they transit to the recovered compartment. After a period \(\tau^{R}\), individuals from the recovered compartment become susceptible again, given that SARS-CoV-2 immunity diminishes substantially after 5-7 months [22]. The compartments are designed to characterize the COVID-19 infection dynamics. Individuals are unable to transmit the virus in the initial incubation phase and are then infectious during a period. They are also expected to be isolated once the symptoms are apparent (or they have tested positive). Therefore, once the individuals transit to \(M_{j}\), \(T_{j}\) or \(C_{j}\) they are expected to be isolated and do not spread the disease; only individuals in the compartment \(I_{j}\) do. The model parameters are the transmission matrix parameters \(\lambda_{jk}\) (the number of contacts that a person in group \(j\) has with persons in group \(k\), in a period of time \(\Delta_{t}\), multiplied by the probability of a contact resulting in an infection), the average time an individual stays in each of the epidemiological states, \(\tau^{E}\), \(\tau^{I}\), \(\tau^{M}\), \(\tau^{T}\), \(\tau^{C}\) and \(\tau^{R}\), and the fractions of infections \(f_{j}^{T}\) and \(f_{j}^{C}\) of \(I_{j}\) moving to \(T_{j}\) and \(C_{j}\), respectively. The population of each age compartment is constrained by the total population of the agegroup.

Figure 1: Diagram of the compartmental model. An individual moves to the next compartment after a period \(\tau^{\mathcal{X}}\), which depends on the compartment \(\mathcal{X}\).

The resulting model equations are

\[\begin{aligned} N_{j}&=S_{j}+E_{j}+I_{j}+M_{j}+T_{j}+C_{j}+R_{j}+D_{j}\\ \frac{\partial S_{j}}{\partial t}&=-\frac{S_{j}}{\tau^{I}N_{j}}\,\sum_{k=1}^{n}\lambda_{jk}\,I_{k}+\frac{R_{j}}{\tau^{R}}\\ \frac{\partial E_{j}}{\partial t}&=\frac{S_{j}}{\tau^{I}N_{j}}\,\sum_{k=1}^{n}\lambda_{jk}\,I_{k}-\frac{E_{j}}{\tau^{E}}\\ \frac{\partial I_{j}}{\partial t}&=\frac{E_{j}}{\tau^{E}}-\frac{I_{j}}{\tau^{I}}\\ \frac{\partial T_{j}}{\partial t}&=f_{j}^{T}\frac{I_{j}}{\tau^{I}}-\frac{T_{j}}{\tau^{T}}\\ \frac{\partial C_{j}}{\partial t}&=f_{j}^{C}\frac{I_{j}}{\tau^{I}}-\frac{C_{j}}{\tau^{C}}\\ \frac{\partial M_{j}}{\partial t}&=(1-f_{j}^{T}-f_{j}^{C})\frac{I_{j}}{\tau^{I}}-\frac{M_{j}}{\tau^{M}}\\ \frac{\partial D_{j}}{\partial t}&=\frac{C_{j}}{\tau^{C}}\\ \frac{\partial R_{j}}{\partial t}&=\frac{M_{j}}{\tau^{M}}+\frac{T_{j}}{\tau^{T}}-\frac{R_{j}}{\tau^{R}}\end{aligned}\tag{1}\]
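To make the model concrete, the sketch below integrates Eq. (1) with a simple forward-Euler step. It is a minimal illustration rather than the authors' implementation: the function and variable names are our own, and the default parameter values are the fixed ones listed in Table 2 below.

```python
import numpy as np

def seirhd_step(state, Lam, dt=1.0, tauE=4.0, tauI=5.0, tauM=7.0,
                tauT=15.0, tauC=15.0, tauR=150.0,
                fT=(0.1, 0.05, 0.26), fC=(0.002, 0.009, 0.095)):
    """One forward-Euler step of the age-structured SEIRHD model of Eq. (1).

    state: dict with arrays S, E, I, M, T, C, R, D, one entry per agegroup.
    Lam:   (n, n) transmission matrix with entries lambda_jk.
    """
    S, E, I, M, T, C, R, D = (state[k] for k in "SEIMTCRD")
    fT, fC = np.asarray(fT), np.asarray(fC)
    N = S + E + I + M + T + C + R + D            # population of each agegroup
    infection = S / (tauI * N) * (Lam @ I)       # S_j/(tau^I N_j) * sum_k lambda_jk I_k
    dS = -infection + R / tauR
    dE = infection - E / tauE
    dI = E / tauE - I / tauI
    dM = (1 - fT - fC) * I / tauI - M / tauM
    dT = fT * I / tauI - T / tauT
    dC = fC * I / tauI - C / tauC
    dR = M / tauM + T / tauT - R / tauR
    dD = C / tauC
    derivs = dict(S=dS, E=dE, I=dI, M=dM, T=dT, C=dC, R=dR, D=dD)
    return {k: state[k] + dt * derivs[k] for k in "SEIMTCRD"}
```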
Table 1 summarizes the variables and the parameters, and Table 2 shows the numeric values of all the fixed parameters except for the transmission matrix, which is one of the variables to be estimated.

\begin{table} \begin{tabular}{|l|l|} \hline \multicolumn{2}{|l|}{Variables} \\ \hline \(S_{j}\) & Susceptible individuals \\ \(E_{j}\) & Exposed individuals (non-contagious yet) \\ \(I_{j}\) & Infected individuals (contagious) \\ \(M_{j}\) & Infected with mild symptoms (isolated) \\ \(T_{j}\) & Individuals with severe symptoms that will eventually recover (isolated) \\ \(C_{j}\) & Individuals with critical symptoms that will eventually die (isolated) \\ \(R_{j}\) & Recovered \\ \(D_{j}\) & Dead \\ \hline \multicolumn{2}{|l|}{Parameters} \\ \hline \(\lambda_{jk}\) & Transmission rate from agegroup \(k\) to agegroup \(j\) \\ \(\tau^{E}\) & Incubation period. \\ \(\tau^{I}\) & Infection period. \\ \(\tau^{M}\) & Recovery period for mild infections. \\ \(\tau^{T}\) & Recovery period for severe infections. \\ \(\tau^{C}\) & Time until death. \\ \(\tau^{R}\) & Time until immunity vanishes. \\ \(f_{j}^{T}\) & Fraction of the infected individuals in the agegroup \(j\) that develops severe symptoms. \\ \(f_{j}^{C}\) & Fraction of the infected individuals in the agegroup \(j\) that eventually dies. \\ \(N_{j}\) & Total number of individuals in the agegroup \(j\). \\ \hline \end{tabular} \end{table} Table 1: Model variables and parameters.

The parameters controlling the propagation of a disease in a meta-population model are the transmission matrix elements. In a population divided into age compartments, they represent the interaction between the infected and susceptible agegroups and hence constitute the main driver of the disease evolution. One of the central objectives of this work is to parameterize and estimate the transmission matrix to obtain a better representation of the propagation between agegroups. In Eq. (1), the elements of the transmission matrix are not independent [19]: the total amount of contacts that individuals of group \(j\) have with individuals of group \(k\) has to be equal to the total amount of contacts that individuals of group \(k\) have with individuals of group \(j\):

\[\lambda_{jk}N_{j}=\lambda_{kj}N_{k}. \tag{2}\]

The most relevant parameter in epidemiological modeling is the basic reproduction number. It represents the mean number of new infected individuals caused by one infected person in a totally susceptible population. The basic reproduction number may be estimated in compartmental models by linearising the dynamics of the infected differential subsystem, which is the part of the model that governs the production of new infections when all individuals are susceptible (in a SEIR model, for example, this subsystem comprises the compartments E and I). The resulting Jacobian matrix is known as the _next generation matrix_ [23], whose spectral radius corresponds to the basic reproduction number \(R_{0}\). If the linearization of the infected subsystem is conducted at time \(t\), the spectral radius of the resulting matrix is known as the effective reproduction number \(R_{\text{eff}}\). This represents the number of secondary cases that an infected individual produces at time \(t\), given the part of the population that remains susceptible at that time. A review of the topic can be found in [24]. Let us assume \(m\) compartments in a compartmental epidemiological model whose individuals can transmit the disease, and define the vector \(\mathbf{x}=(x_{1},x_{2},\,...\,,x_{m})\), which in our model is \((E_{1},E_{2},\,...\,,E_{n},I_{1},I_{2},\,...\,,I_{n})\). If \(F_{i}\) is defined as the rate of appearance of new exposed individuals and \(V_{i}\) the balance of the entry and exit (by the natural progression of the disease) of individuals in the \(i\)-th compartment, then the rate of change of a variable \(x_{i}\) in the model is given by

\[\frac{\partial x_{i}}{\partial t}=F_{i}(\mathbf{x})-V_{i}(\mathbf{x}). \tag{3}\]
\begin{table} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Agegroup-dependent parameters} \\ \hline Agegroup & 1 & 2 & 3 \\ Age range & 0-29 & 30-64 & 65-103 \\ \(f^{T}\) & 0.1 & 0.05 & 0.26 \\ \(f^{C}\) & 0.002 & 0.009 & 0.095 \\ \hline \multicolumn{4}{|c|}{Global parameters} \\ \hline \(\tau^{E}\) & \multicolumn{3}{c|}{4} \\ \(\tau^{I}\) & \multicolumn{3}{c|}{5} \\ \(\tau^{M}\) & \multicolumn{3}{c|}{7} \\ \(\tau^{T}\) & \multicolumn{3}{c|}{15} \\ \(\tau^{C}\) & \multicolumn{3}{c|}{15} \\ \(\tau^{R}\) & \multicolumn{3}{c|}{150} \\ \hline \end{tabular} \end{table} Table 2: Numeric values of the model parameters. All time scales (\(\tau\)'s) are expressed in days.

Next, the Jacobian matrices of the dynamical system, \(\mathbb{F}\) and \(\mathbb{V}\), are defined as

\[[\mathbb{F}]_{ij}=\frac{\partial F_{i}}{\partial x_{j}},\qquad[\mathbb{V}]_{ij}=\frac{\partial V_{i}}{\partial x_{j}}. \tag{4}\]

The next generation matrix \(\mathbb{G}\) is then defined as \(\mathbb{G}=\mathbb{F}\mathbb{V}^{-1}\). The \(ij\) element of \(\mathbb{G}\) is interpreted as the rate at which infected individuals in \(x_{j}\) produce new infections in \(x_{i}\), times the average amount of time they spend in the compartment \(j\). The effective reproduction number \(R_{\text{eff}}\) is the largest absolute eigenvalue, i.e. the spectral radius, of the next generation matrix. In this work, the next generation matrix and the resulting effective reproduction number are inferred with the data assimilation system as diagnostic information of the epidemic.

### State-parameter estimation with ensemble-based data assimilation methods

The evolution of epidemiological variables can be modeled as a partially observed time-evolving process, i.e. a hidden Markov model. Within this framework, the evolution of the state of the system can be written as

\[\mathbf{x}_{k+1}=\mathcal{M}(\mathbf{x}_{k})+\boldsymbol{\eta}_{k}, \tag{5}\]

where \(\mathbf{x}_{k}\) is the state of the system at time \(k\), \(\mathcal{M}()\) is the dynamical model and \(\boldsymbol{\eta}_{k}\) is the model error. The second equation forming the hidden Markov model corresponds to the observational map. The observations \(\mathbf{y}_{k}\) are related to the state \(\mathbf{x}_{k}\) by the observation operator \(\mathbf{H}_{k}\), which maps the space of state variables to the observational space:

\[\mathbf{y}_{k}=\mathbf{H}_{k}\mathbf{x}_{k}+\boldsymbol{\epsilon}_{k}, \tag{6}\]

where \(\boldsymbol{\epsilon}_{k}\) is the observation error. In this work, \(\mathbf{H}_{k}\) is assumed linear, but the method can be generalized to the non-linear case. In filtering theory, the estimation problem involves obtaining the conditional probability density function (pdf) of \(\mathbf{x}_{k}\) knowing the current and past observations \(\mathbf{Y}_{k}=(\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{k})\), denoted by \(p(\mathbf{x}_{k}|\mathbf{Y}_{k})\) (a.k.a. the filtering or analysis distribution). We can obtain the prediction pdf by performing a forecast step

\[p(\mathbf{x}_{k}|\mathbf{Y}_{k-1})=\int d\mathbf{x}_{k-1}\,p(\mathbf{x}_{k}|\mathbf{x}_{k-1})\,p(\mathbf{x}_{k-1}|\mathbf{Y}_{k-1}); \tag{7}\]

then, using Bayes' theorem, the posterior density conditioned on the set of observations is obtained:

\[p(\mathbf{x}_{k}|\mathbf{Y}_{k})=\frac{p(\mathbf{y}_{k}|\mathbf{x}_{k})p(\mathbf{x}_{k}|\mathbf{Y}_{k-1})}{\int d\mathbf{x}_{k}\,p(\mathbf{y}_{k}|\mathbf{x}_{k})p(\mathbf{x}_{k}|\mathbf{Y}_{k-1})}. \tag{8}\]
Eqs. (7) and (8) can be solved sequentially every time new observations \(\mathbf{y}_{k}\) are available, but they have to be integrated over the entire state space, which is usually computationally intractable. However, using a sample-based representation of the distributions, the forecast step can be approximated by a Monte Carlo approach by simply evolving every sample point forward with the model \(\mathcal{M}()\). In this work we use the EnKF, which is a Monte Carlo non-linear extension of the Kalman filter [25]. The analysis distribution is represented by an ensemble of possible states. The resulting analysis state members are of the form

\[\mathbf{x}^{\text{a},(i)}=\mathbf{x}^{\text{f},(i)}+\mathbf{K}(\mathbf{y}^{(i)}-\mathbf{H}\mathbf{x}^{\text{f},(i)}) \tag{9}\]

\[\mathbf{K}=\mathbf{P}^{\text{f}}\mathbf{H}^{\top}(\mathbf{H}\mathbf{P}^{\text{f}}\mathbf{H}^{\top}+\mathbf{R})^{-1} \tag{10}\]

where \(\mathbf{R}\) is the observational covariance matrix (assumed known), and the forecast error covariance \(\mathbf{P}^{\text{f}}\) is estimated from the ensemble of forecasted state vectors:

\[\overline{\mathbf{x}}^{\text{f}}=\frac{1}{m}\sum_{i=1}^{m}\mathbf{x}^{\text{f},(i)}\,,\qquad\mathbf{P}^{\text{f}}=\frac{1}{m-1}\sum_{i=1}^{m}(\mathbf{x}^{\text{f},(i)}-\overline{\mathbf{x}}^{\text{f}})(\mathbf{x}^{\text{f},(i)}-\overline{\mathbf{x}}^{\text{f}})^{\top}. \tag{11}\]

The analysis ensemble mean,

\[\overline{\mathbf{x}}^{\text{a}}=\frac{1}{m}\sum_{i=1}^{m}\mathbf{x}^{\text{a},(i)}, \tag{12}\]

provides a point estimate of the state of the system. In Eq. (9), the observation vector is perturbed with Gaussian noise: \(\mathbf{y}^{(i)}=\mathbf{y}+\boldsymbol{\mu}^{(i)}\), where \(\boldsymbol{\mu}^{(i)}\sim N(\mathbf{0},\mathbf{R})\). This is required for the sample covariance of the analysis state members to be consistent with the expected analysis covariance [26]. During the analysis update, the EnKF can result in non-physical values for some model parameters and ensemble members (e.g. negative values for the transmission matrix elements). This is a consequence of the assumption of Gaussian forecast errors in the EnKF. To avoid this complication, we force the lower limit of all the estimated parameters to 0 in each ensemble member. The observation operator is linear, and the conversion from state space to observation space is as follows: we assume that all variables except \(S_{j}\) and \(E_{j}\) are partially documented. This is achieved with a parameter \(0<\gamma_{j}<1\) in the observational operator, \(\mathbf{H}\), which accounts for the sub-detection of cases. In other words, we assume there is a sub-detection bias in some observational variables. This parameter depends on the agegroup, since symptoms tend to increase with age, so that the amount of asymptomatic cases is larger for children. In the agegroup \(j\), the relation between the cumulative observed cases (\(y_{j}^{c}\)) and observed deaths (\(y_{j}^{d}\)) and the state variables at time \(k\) is

\[\begin{pmatrix}y_{j}^{c}\\ y_{j}^{d}\end{pmatrix}=\begin{pmatrix}0&0&\gamma_{j}&\gamma_{j}&\gamma_{j}&\gamma_{j}&\gamma_{j}&\gamma_{j}\\ 0&0&0&0&0&0&0&1\end{pmatrix}\begin{pmatrix}S_{j}\\ E_{j}\\ I_{j}\\ M_{j}\\ T_{j}\\ C_{j}\\ R_{j}\\ D_{j}\end{pmatrix}+\boldsymbol{\epsilon} \tag{13}\]

where \(\boldsymbol{\epsilon}\) is the observational error.
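The analysis update of Eqs. (9)-(11), together with the observation operator of Eq. (13), can be sketched in a few lines of Python. This is our own minimal illustration, not the authors' code; the per-agegroup ordering (S, E, I, M, T, C, R, D) of the state vector is an assumption.

```python
import numpy as np

def build_H(gamma):
    """Observation operator of Eq. (13): per agegroup, observed cumulative
    cases are gamma_j * (I+M+T+C+R+D) and observed deaths are D."""
    n = len(gamma)
    H = np.zeros((2 * n, 8 * n))
    for j in range(n):
        H[2 * j, 8 * j + 2:8 * j + 8] = gamma[j]   # I, M, T, C, R, D columns
        H[2 * j + 1, 8 * j + 7] = 1.0              # deaths assumed fully observed
    return H

def enkf_analysis(Xf, y, H, R, rng):
    """Stochastic EnKF update of Eqs. (9)-(11); Xf is the (n_state, m) forecast ensemble."""
    m = Xf.shape[1]
    A = (Xf - Xf.mean(axis=1, keepdims=True)) / np.sqrt(m - 1)   # ensemble anomalies
    PfHt = A @ (H @ A).T                                         # P^f H^T without forming P^f
    K = PfHt @ np.linalg.inv(H @ PfHt + R)                       # Kalman gain, Eq. (10)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T  # perturbed observations
    return Xf + K @ (Y - H @ Xf)                                 # analysis members, Eq. (9)
```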
The infected compartments are active variables which can increase and decrease over time, because a fraction of the population will transit from \(I_{j}\) to \(M_{j}\), \(T_{j}\) or \(C_{j}\), and then will be accumulated in \(R_{j}\) or \(D_{j}\); that is why these variables need to be considered for the accumulated cases. For the parameter estimation, the model parameters \(\boldsymbol{\theta}\) and the state variables \(S_{j}\), \(E_{j}\), \(I_{j}\), \(M_{j}\), \(T_{j}\), \(C_{j}\), \(R_{j}\), \(D_{j}\) are put together in an augmented state vector \(\mathbf{x}\). Then, the model parameters are estimated in the same way as the state variables, using the EnKF. This technique is known as the _augmented state_. A review of parameter estimation using various data assimilation methods based on the state augmentation approach can be found in [27]. The fractions of detected cases \(\gamma_{j}\) are also estimated in this way. Although these parameters are not part of the model equations, their estimation can be conducted in the same way as for the model parameters. The parameters in ensemble-based data assimilation are estimated through their correlations with the observed variables. Therefore, parameter estimation depends crucially on an accurate quantification of the augmented error covariance matrix. While chaotic dynamics drives the evolution of state variables, leading to an increase in their ensemble spread, persistence is assumed for the time evolution of the parameters. Because of this, an inflation method is required to prevent the parameter ensemble spread from collapsing in an ensemble data assimilation cycle (e.g. Ruiz et al. 2013 [27]). We performed preliminary simulations to evaluate the use of multiplicative inflation in the EnKF framework. Even when we used two independent inflation factors, one for the parameters and one for the state variables [28], it was not possible to find a suitable set of inflation factors: they resulted in filter divergence or poor estimation performance. Because of this, we opted for another approach: to model the parameter evolution of each ensemble member as an independent auto-regressive process or correlated random walk [29] with correlation \(\rho\) and standard deviation \(\sigma\), which is applied only to the estimated parameters \(\boldsymbol{\theta}^{(i)}\) (no inflation is applied to the state variables):

\[\boldsymbol{\theta}_{k+1}^{(i)}=\bar{\boldsymbol{\theta}}_{k}+\rho\,\left(\boldsymbol{\theta}_{k}^{(i)}-\bar{\boldsymbol{\theta}}_{k}\right)+\sigma\sqrt{1-\rho^{2}}\,\eta(0,1) \tag{14}\]

where \(\eta\left(0,1\right)\) is a random Gaussian number with zero mean and unitary standard deviation, and \(\bar{\boldsymbol{\theta}}_{k}\) is the parameter ensemble mean. The inflation is added before the analysis step.
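A minimal sketch of this correlated random walk, applied columnwise to the parameter block of the ensemble (the array layout and names are our assumptions):

```python
import numpy as np

def inflate_parameters(theta, rho=0.999, sigma=0.05, rng=np.random.default_rng(0)):
    """Correlated random walk of Eq. (14); theta is the (n_params, m) parameter
    block of the ensemble, one column per member."""
    theta_bar = theta.mean(axis=1, keepdims=True)      # ensemble mean
    noise = rng.standard_normal(theta.shape)           # eta(0, 1)
    return theta_bar + rho * (theta - theta_bar) + sigma * np.sqrt(1.0 - rho**2) * noise
```

With \(\rho\) close to one, a member's deviation from the ensemble mean is largely preserved, while the small stochastic component keeps the parameter spread from collapsing.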
To summarize our estimation method, the EnKF methodology is represented concisely in Algorithm 1.

```
Inputs: \(\mathbf{H}\), \(\mathbf{R}\), \(\mathcal{M}()\) and \(\mathbf{x}^{\mathrm{a},(i)}=\mathbf{x}^{\mathrm{f},(i)}(t=t_{0})\), \(i=1,...,m\)   \(\triangleright\) Inputs and ensemble initialization
for \(t_{k}=1,2,...\)
    \(\mathbf{x}^{\mathrm{f},(i)}=\mathcal{M}(\mathbf{x}^{\mathrm{a},(i)})\)   \(\triangleright\) Ensemble forecast
    \(\mathbf{P}^{\mathrm{f}}=\frac{1}{m-1}\sum_{i=1}^{m}(\mathbf{x}^{\mathrm{f},(i)}-\overline{\mathbf{x}}^{\mathrm{f}})(\mathbf{x}^{\mathrm{f},(i)}-\overline{\mathbf{x}}^{\mathrm{f}})^{\top}\)   \(\triangleright\) Forecast covariance
    \(\mathbf{K}=\mathbf{P}^{\mathrm{f}}\mathbf{H}^{\top}(\mathbf{H}\mathbf{P}^{\mathrm{f}}\mathbf{H}^{\top}+\mathbf{R})^{-1}\)   \(\triangleright\) Kalman gain
    \(\mathbf{y}^{(i)}=\mathbf{y}+\boldsymbol{\mu}^{(i)}\)   \(\triangleright\) Perturbed observations
    \(\boldsymbol{\theta}_{t_{k}}^{(i)}=\bar{\boldsymbol{\theta}}_{k}+\rho\,(\boldsymbol{\theta}_{k}^{(i)}-\bar{\boldsymbol{\theta}}_{k})+\sigma\sqrt{1-\rho^{2}}\,\eta(0,1)\)   \(\triangleright\) Inflate parameters, Eq. (14)
    \(\mathbf{x}^{\mathrm{a},(i)}=\mathbf{x}^{\mathrm{f},(i)}+\mathbf{K}(\mathbf{y}^{(i)}-\mathbf{H}\mathbf{x}^{\mathrm{f},(i)})\)   \(\triangleright\) Analysis
end for
```
**Algorithm 1** Stochastic ensemble Kalman Filter

## 3 Experimental details

### Transmission matrix parameterizations

For an \(n\times n\) transmission matrix there are \(\frac{n^{2}+n}{2}\) independent parameters to be estimated, instead of \(n^{2}\), because of the restriction (2). In our case we use three agegroups, so the resulting transmission matrix is

\[\Lambda=\begin{pmatrix}\lambda_{11}&\lambda_{12}&\lambda_{13}\\ \frac{N_{1}}{N_{2}}\lambda_{12}&\lambda_{22}&\lambda_{23}\\ \frac{N_{1}}{N_{3}}\lambda_{13}&\frac{N_{2}}{N_{3}}\lambda_{23}&\lambda_{33}\end{pmatrix} \tag{15}\]

where the parameters \(\lambda_{ij}\) depend on time. As is shown in the experiments in Section 4, the parameters of (15) are not identifiable if only information on the accumulated infection cases in each group is available, without information about which agegroup caused the new exposures. To overcome this limitation, we propose a parameterization of the transmission matrix with fewer parameters:

\[\Lambda=\begin{pmatrix}\lambda_{1}&\alpha\sqrt{\lambda_{1}\lambda_{2}}&\alpha\sqrt{\lambda_{1}\lambda_{3}}\\ \frac{N_{1}}{N_{2}}\alpha\sqrt{\lambda_{2}\lambda_{1}}&\lambda_{2}&\alpha\sqrt{\lambda_{2}\lambda_{3}}\\ \frac{N_{1}}{N_{3}}\alpha\sqrt{\lambda_{3}\lambda_{1}}&\frac{N_{2}}{N_{3}}\alpha\sqrt{\lambda_{3}\lambda_{2}}&\lambda_{3}\end{pmatrix} \tag{16}\]

From now on, we call this matrix the parameterized transmission matrix. This parameterization is a particular case of (15) where the upper-diagonal parameters \(ij\) are defined as a function of the diagonal elements of row \(i\) and column \(j\), \(\lambda_{ij}=\alpha\sqrt{\lambda_{i}\lambda_{j}}\), and the lower-diagonal parameters are defined by the constraint (2). The parameter \(\alpha\) controls the relative importance of inter-agegroup and intra-agegroup infections, with lower values giving more weight to the latter.
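The following sketch assembles the matrix of Eq. (16) for an arbitrary number of agegroups; the function and argument names are illustrative, not from the paper.

```python
import numpy as np

def parameterized_transmission_matrix(lam, alpha, N):
    """Transmission matrix of Eq. (16): diagonal lambda_j, upper triangle
    alpha*sqrt(lambda_i*lambda_j), lower triangle fixed by Eq. (2)."""
    lam, N = np.asarray(lam, float), np.asarray(N, float)
    Lam = alpha * np.sqrt(np.outer(lam, lam))   # off-diagonal entries
    np.fill_diagonal(Lam, lam)                  # diagonal entries
    for j in range(len(lam)):
        for k in range(j):                      # enforce lambda_jk N_j = lambda_kj N_k
            Lam[j, k] = Lam[k, j] * N[k] / N[j]
    return Lam
```

For example, `parameterized_transmission_matrix([1.6, 1.8, 1.4], 0.4, [2.2e7, 1.8e7, 4.8e6])` reproduces the structure of Eq. (16) for the three agegroups described below.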
### Data

We use three agegroups in the ranges of \([0,30)\), \([30,65)\) and \([65,111]\) years. This division is motivated by the fact that we want to represent agegroups with different activities: the activities of children and young individuals are mainly school and university, the adults form the working agegroup, and the senior population is assumed to be mainly retired. At the same time, these groups grossly represent different health profiles, with the senior population being the most likely to develop severe symptoms, while the first agegroup is expected to have minor symptoms. The total population is assumed to be \(44.8\) million, divided into the three agegroups as \(2.2\times 10^{7}\), \(1.8\times 10^{7}\) and \(4.8\times 10^{6}\), which represent the approximate number of people within the aforementioned agegroups in Argentina (taken from the last population census).

#### 3.2.1 Synthetic observations

The synthetic observations are generated by evolving a meta-population model with a prescribed "true" transmission matrix. Cumulative infected cases and deaths disaggregated by agegroups are assumed to be observed daily during a period of \(300\) days. The model uses a transmission matrix of the form (15), and the parameters \(\lambda_{ij}\) are defined as

\[\boldsymbol{\lambda}=\begin{cases}[1.6,1.8,1.4,0.5,0.4,0.3]&\text{if }t\in[0,80)\,\text{d}\\ \,&\text{if }t\in[80,140)\,\text{d}\\ \,&\text{if }t\in[140,300]\,\text{d}\end{cases} \tag{17}\]

where \(\boldsymbol{\lambda}=[\lambda_{11},\lambda_{22},\lambda_{33},\lambda_{12},\lambda_{13},\lambda_{23}]\). The decrease in the transmission matrix parameters at \(t=80\) mimics the effect of a lockdown. Then, the increase at time \(140\) represents a relaxation to normal conditions but with some sanitary measures (e.g. social distancing, mandatory use of masks in public spaces, etc.). These conditions result in a double-outbreak situation as observed in Argentina (and several other countries) in the first year of the pandemic. Note that the relative changes in the parameters are different for different agegroups (i.e. not proportional). We chose on purpose a transmission matrix that cannot be fully represented by the parameterization (16), so that the model used in the estimation is not perfect (some structural uncertainty is introduced in the parameterization process). Another motivation was to represent the different levels of mobility that were found in different agegroups. The true values of the fractions of detected cases \(\gamma_{j}\), \(j=1,2,3\), are taken to be \(0.15\), \(0.2\) and \(0.3\), corresponding to the young, adult and senior agegroups. A reference single-population detection fraction was estimated in [11]. Intuitively, we expect a higher fraction of symptomatic cases for the elder agegroups, as they are the most vulnerable population. The fraction of deaths \(f_{j}^{C}\) of each agegroup is assumed to be \(0.002\), \(0.05\) and \(0.1\) [30]. Synthetic observations are generated by taking daily values from the true system evolution and adding observational error realizations from zero-mean Gaussian noise with standard deviation proportional to the true value, up to limiting values. After some preliminary experiments, we set the standard deviation of the accumulated cases observational error to \(\max(0.05\;y_{j}^{c},\,100)\), where \(y_{j}^{c}\) indicates the observed cumulative cases for every agegroup \(j\). We assume that deaths are well documented, so the standard deviation of the deaths observational error is \(\min(0.05\;y_{j}^{d},\,5)\); in this way, the deaths observational error reaches its upper limit after some time.
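A minimal sketch of this observation-generation step (function and variable names are illustrative):

```python
import numpy as np

def synthetic_obs(cases_true, deaths_true, rng=np.random.default_rng(0)):
    """Perturb true cumulative cases/deaths with the heteroscedastic Gaussian
    noise used in the twin experiments."""
    sd_cases = np.maximum(0.05 * cases_true, 100.0)   # floor of 100 for cases
    sd_deaths = np.minimum(0.05 * deaths_true, 5.0)   # cap of 5 for deaths
    return (cases_true + rng.normal(0.0, sd_cases),
            deaths_true + rng.normal(0.0, sd_deaths))
```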
#### 3.2.2 Real world observations

For the real-world experiments we use epidemiological data from Argentina collected by the National Health Surveillance System (SNVS, for its acronym in Spanish). The SNVS dataset is openly available ([http://datos.salud.gob.ar/dataset/covid-19-casos-registrados-en-la-republica-argentina](http://datos.salud.gob.ar/dataset/covid-19-casos-registrados-en-la-republica-argentina)) and consists of all the reported tests from public and private testing facilities. The available information for each case is, among other data, the date of the test, the province of residence, age, and whether the person required hospitalization, intensive care and/or respiratory support. The first case of SARS-CoV-2 in Argentina was reported on March 3, 2020. Just 16 days after this, on March 19, 2020, a nationwide lockdown was established. The data used in the real-world assimilation experiments are daily cumulative cases and deaths aggregated over the selected agegroups.

## 4 Results

We present our results in the following order:

* In subsection 4.1 we evaluate the model and data assimilation framework with twin experiments.
* In subsection 4.2 we apply the methodology to COVID-19 data of Argentina.
* In subsection 4.3 we conduct forecasts to examine the performance of the meta-population model coupled with the EnKF using the real observations.

### Experiments with synthetic observations

The objective of the twin experiments is to evaluate the data assimilation-based parameter estimation in a context in which the true parameters are known and errors in the estimation can be accurately computed. The assimilation filter estimates all the variables of the system and the parameters of the transmission matrix, which are augmented to the system state vector. The dimension of the augmented state vector is 24, and the amount of estimated parameters is six in the case of the parameterized transmission matrix: three belonging to the parameterized transmission matrix and three corresponding to the fractions of detected cases. In the case of the transmission matrix (15), there are nine estimated parameters: six from the matrix and three from the fractions of detected cases, so that the augmented state vector dimension is 27. As mentioned, the EnKF for parameter estimation requires an inflation approach for the parameter spread [27]. The filter was able to track the observations using the correlated random walk (14) for high values of \(\rho\) (\(0.999\)) and \(\sigma\) in the range \([0.001,0.2]\). We measured the RMSE of the estimation compared to the true values of the cases, of the deaths, and of the transmission matrix parameters. Each RMSE showed a different optimal value of \(\sigma\). We took \(\sigma=0.05\) and \(\rho=0.999\), which results in almost optimal estimates of the parameterized transmission matrix parameters while at the same time giving good estimations of the state variables. The same random walk parameter values are used in the real data experiments (section 4.2). Fig 2 shows the estimated parameters for the twin experiments using the six-parameter transmission matrix (15). To examine the identifiability and the sensitivity to initial conditions of the estimated parameters, three experiments with different a priori densities of the parameters at \(t=0\) are shown. Some of the time variability of the true parameters is captured; however, the different experiments converge to different estimated parameter values.
The estimations of the parameters are dependent on the initial condition, in the sense that different initial conditions of the parameters result in different estimations of the parameters at later times (\(>100\) d), and none of the three experiments is able to estimate precisely the true parameters (Fig 2). The reason for this is that an increase in the rate of cases, say in the agegroup 1, may be ascribed by the assimilation system to a change in the parameter \(\lambda_{12}\) or to a change in \(\lambda_{11}\) and \(\lambda_{22}\). Both scenarios result in the same infection rates, so that the information provided by the observations is not enough to identify the actual scenario. For instance, the green curves of \(\lambda_{33}\) in Fig 2 present an underestimation at the beginning of the lockdown; this underestimation is balanced with the overestimation of \(\lambda_{13}\) and \(\lambda_{23}\), leading to an evolution of the number of cases consistent with the observations. We point out that the estimation of the observed variables, the cases and deaths, is equally accurate for all these experiments. For this reason we conclude that the six-parameter transmission matrix is not identifiable using age-dependent observations of cases and deaths. There is some delay, in the estimated transmission matrix parameters shown in Fig 2, between the abrupt change due to the lockdown measure (both at the beginning and at the end) that we imposed on the true parameters and the captured change in the estimated parameters. Estimated parameters start to adjust to these abrupt changes a few days after the change, and they converge to a new value 20-30 days after. The reason for this is that parameters in ensemble-based assimilation systems are estimated through their correlation with observed variables, so that these state-parameter correlations take some cycles to adapt to abrupt changes. This behavior can be reduced by tuning up the amount of inflation, at the expense of an increased spread in the estimated parameter and state variable ensembles. Overall, the amplitude of the abrupt change is rather well estimated beyond the mentioned delay. Fig 3 shows the estimated daily new cases (left panels) and deaths (right panels) of the young (upper panels), adult (middle panels) and senior (lower panels) agegroups, using the full transmission matrix (15). The three experiments with different initial conditions of the estimated parameters give similar results (the curves of the three experiments are indistinguishable in Fig 3). In the three experiments, the EnKF is able to keep track of the observations of cases and deaths in all the agegroups, even though the transmission matrix parameters are not identifiable. The ensemble dispersion in the senior agegroup is relatively larger because its population is almost five times lower than in the other agegroups, and all the agegroups have the same observation error upper limit, so that the relative error of the estimation is higher.

Figure 2: Estimated parameters of the full transmission matrix. Left panels: diagonal parameters. Right panels: off-diagonal parameters. Colored curves represent estimations with different initial conditions, and black curves represent the true parameter values. Shades around colored curves represent the parameter spread.

Given that the transmission matrix parameters are not identifiable using the matrix form (15), we conduct estimation experiments using the proposed parameterization (16).
We took \(\alpha=0.4\) in (16), which represents significant intra-group contagions. Fig 4 shows the estimated parameters of the parameterized transmission matrix; the left panels show the values of the diagonal, and the right ones show the values of the upper off-diagonal.

Figure 3: Estimated incident cases (left panels) and deaths (right panels) of the young (upper panels), adult (middle panels) and senior (lower panels) agegroups for the full transmission matrix experiment. Colored curves represent estimations with different initial conditions, red dots represent observations and black curves represent the true parameter values. Shades around colored curves represent the corresponding variable spread.

The three experiments converge to the same parameter values, independently of the initial condition. The true values of the parameters cannot be estimated precisely because this parameterization is not able to exactly fit the structure of the true transmission matrix. Because of this, parameters representing intra-group contacts are systematically underestimated, while the inter-group interactions are overestimated. Note that this bias could be alleviated with a lower \(\alpha\) value; however, this optimization based on the true transmission matrix values cannot be conducted in realistic cases. The parameter estimates in Fig 4 also show a delay in the representation of the sudden parameter changes found at the beginning and at the end of the lockdown period, as found in Fig 2. Fig 5 shows the fraction of detected cases of each agegroup (left panels). We expect these parameters to be correlated with the observed accumulated cases and deaths. Therefore, the system should be able to constrain them. The true values of \(\gamma_{j}\) are accurately estimated by the assimilation system, regardless of the initial condition. The spurious peaks estimated in the parameterized transmission matrix at the lockdown transitions are also found in the \(\gamma_{j}\) parameters around time 80 and, with much less intensity, at 170.

Figure 4: Estimated parameters of the parameterized transmission matrix. Left panels: diagonal parameters. Right panels: off-diagonal parameters. Colored curves represent estimations with different initial conditions, and black curves represent the true parameter values. Shades around colored curves represent the parameter ensemble.

In the previously shown experiment, we estimated a parameterized transmission matrix and the fractions of detected cases \(\gamma_{j}\). The cases, deaths and the parameters \(\lambda_{j}\) can also be estimated alongside the fractions of deaths \(f_{j}^{C}\) instead of \(\gamma_{j}\). To illustrate this, we fix \(\gamma_{j}\) equal to the true values and perform three experiments that estimate the transmission matrix and the fraction of deaths. The parameters \(\lambda_{j}\) are similar to the ones shown in Fig 4. The obtained \(f_{j}^{C}\) estimates are shown in the right panels of Fig 5. In all experiments the estimated parameters converge to the true values, and the sudden change in the estimations is again observed at the times where the true transmission matrix parameters change. In Fig 5 we can see the effect of the lower bound on the estimated parameter \(f_{1}^{C}\): the true value is near zero, and statistically some of the filter corrections tend to be negative, which are then corrected (all the values are positive).
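Before turning to the results in Fig 6, the following sketch shows how the next-generation-matrix diagnostic of Section 2 can be evaluated for this model: it linearizes the (E, I) subsystem of Eq. (1) at the current state and returns the spectral radius of \(\mathbb{G}=\mathbb{F}\mathbb{V}^{-1}\). This is our reading of Eqs. (3)-(4) applied to Eq. (1), with assumed variable names.

```python
import numpy as np

def effective_reproduction_number(S, N, Lam, tauE=4.0, tauI=5.0):
    """R_eff as the spectral radius of G = F V^{-1} for the (E, I) subsystem."""
    n = len(S)
    F = np.zeros((2 * n, 2 * n))                 # rate of appearance of new exposures
    F[:n, n:] = (S / N)[:, None] * Lam / tauI    # d(dE_j/dt)/dI_k = S_j lam_jk/(tau^I N_j)
    V = np.zeros((2 * n, 2 * n))                 # natural progression terms
    V[:n, :n] = np.eye(n) / tauE                 # E_j leaves at rate 1/tau^E
    V[n:, :n] = -np.eye(n) / tauE                # ...and enters I_j
    V[n:, n:] = np.eye(n) / tauI                 # I_j leaves at rate 1/tau^I
    G = F @ np.linalg.inv(V)                     # next generation matrix
    return np.max(np.abs(np.linalg.eigvals(G)))  # spectral radius
```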
Fig 6 shows the effective reproduction number computed with the next generation matrix for the experiments corresponding to the parameterized transmission matrix (left panel) and to the six-parameter matrix (right panel). In both cases, the true values of \(R_{\text{eff}}\) can be accurately estimated with both parameterizations (apart from the delay in parameter changes), even when the true transmission matrix is not reproducible by the parameterized transmission matrix. This result can be interpreted as follows: our parameterized transmission matrix is flexible enough to capture the system \(R_{\text{eff}}\) and its temporal evolution, and at the same time has a dimensionality low enough to allow its parameters to be identifiable from the available observations.

Figure 5: Left: estimated fraction of detected cases of each agegroup. Right: estimated fraction of deaths of each agegroup. Both groups of plots correspond to different simulations. Colored curves represent estimations with different initial conditions, and black curves represent the true parameter values. Shades around colored curves represent the parameter ensemble.

### Real data experiments

An experiment is conducted with the same assimilation system as in the previous section using the real dataset of Argentina. Contrary to the twin experiments, the observations may be biased and the observational error covariance is unknown. Indeed, the observed cases are highly noisy. One of the sources of the noise is the fact that testing and reporting diminish on weekends, resulting in an under-reporting of cases during weekends and likely an over-reporting on Mondays and Tuesdays due to delayed reports. The observations consist of accumulated cases and deaths for each agegroup in the time interval from 2020/03/03 to 2021/09/18 (564 days). We estimate the parameterized transmission matrix (16) with \(\alpha=0.5\). The time-dependent fraction of deaths is also estimated in the real-observation experiments to account for two effects: first, the data correspond to a time interval of almost 1.5 years, a time in which the SARS-CoV-2 virus mutated several times, changing the severity of the symptoms; and second, the improvements in symptom treatments in the health system and the start of the vaccination period. The fractions of detected cases were assumed to be fixed at 0.2, 0.3 and 0.4. The observation error is set to \(\max(0.05\ y_{1}^{c},\,400)\), \(\max(0.05\ y_{2}^{c},\,500)\) and \(\max(0.05\ y_{3}^{c},\,50)\) in the young, adult and senior agegroups respectively, and \(\max(0.05\ y_{i}^{d},\,5)\) for the death observations in all agegroups. Fig 7 shows the incident cases (left panels) and incident deaths (right panels) of the young (top panels), adult (middle panels) and senior (bottom panels) agegroups, respectively. The filter is able to keep track of the observations of each agegroup, since the cases and deaths are estimated correctly. As in the twin experiments, we use three sets of initial conditions; they yield the same estimation of cases and deaths (in Fig 7 only the green one is visible). The high-frequency cycle found in the estimations of cases and deaths corresponds to the weekly observation cycle. If required, this effect can be mitigated by increasing the observational error of the cases, at the expense of an increase of the uncertainty of the estimations.

Figure 6: Estimated effective reproduction number using the parameterized transmission matrix (left) and the six-parameter transmission matrix (right). Both groups of plots correspond to different simulations. Colored curves represent estimations with different initial conditions, and black curves represent the true parameter values. Shades around colored curves represent the parameter ensemble.
The estimated deaths in the young agegroup have a high dispersion because of the relatively few observed cases compared to the other agegroups, and because of their relative errors. Although the estimated accumulated number of deaths is always positive, the daily change in the number of deaths is sometimes negative for some ensemble members in the young compartment. This non-physical behavior is a consequence of the updates introduced by the observations, which may eventually result in a reduction of the estimated number of deaths in order to better fit the observed values. Fig 8 shows the three independent parameters \(\lambda_{i}\), \(i=1,2,3\), of the parameterized transmission matrix (left panels), and the upper off-diagonal parameters \(\lambda_{ij}=\alpha\sqrt{\lambda_{i}\lambda_{j}}\) (right panels). All different initial conditions yield the same estimations of the parameters. There is a predominance of the parameters \(\lambda_{1}\) and \(\lambda_{2}\), given that the majority of the cases occur in the first two agegroups. Consequently, the interaction parameter between young and adult individuals, \(\lambda_{12}\), is higher compared to \(\lambda_{13}\) and \(\lambda_{23}\).

Figure 7: Estimated incident cases (left-side panels) and deaths (right-side panels) in the real data experiments using the parameterized transmission matrix. Young agegroup: upper panels. Adult agegroup: middle panels. Senior agegroup: bottom panels. Colored curves represent estimations with different initial conditions and red dots represent observations. Shades around colored curves represent the estimated variable uncertainty.

Fig 9 shows the fraction of deaths of each agegroup. Once more we can see the independence of the estimation from the initial condition. The estimated parameter of the young population presents a high ensemble dispersion because of the few deaths observed in that agegroup.

Figure 8: Estimated parameters of the parameterized transmission matrix. Left panels: diagonal parameters. Right panels: off-diagonal parameters. Colored curves represent estimations with different initial conditions, and shades around colored curves represent the parameter spread.

The estimated fractions of deaths are much higher than the reference values 0.002, 0.05 and 0.1 [30] for the young, adult and senior agegroups. This discrepancy may be a consequence of the under-detection of cases: the fraction of deaths needs to rise for the system to make sense of the lack of cases. The lower bound on the estimated parameters may also contribute to this effect. Fig 10 shows the estimated effective reproduction number \(R_{\rm eff}\). The estimated parameter does not depend on the chosen initial conditions. The estimated periods where \(R_{\rm eff}>1\) correspond to increases in the cases up to the peaks.

Figure 9: Estimated fraction of deaths of each agegroup. Colored curves represent estimations with different initial conditions, and shades around colored curves represent the corresponding variable spread.

### Forecasts

To evaluate the potential use of the estimated parameters for decision making, we conducted an evaluation of the performance of the resulting forecasts using the estimated parameterized transmission matrix on the COVID-19 data of Argentina. The methodology is as follows. First, linear and quadratic fits are performed on the last 15 days of the estimated parameterized transmission matrix values to obtain the parameter tendencies. Then, these tendencies are projected 30 days forward, starting from the last value of the analysis (the current day). Finally, the 30-day forecasts are conducted with the free evolution of the model, using the projected parameterized transmission matrix and starting from the current analysis state.
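A minimal sketch of this parameter-extrapolation step (array layout and names are illustrative; clipping at zero mirrors the non-negativity bound used throughout):

```python
import numpy as np

def extrapolate_parameters(lam_history, lead=30, window=15, degree=1):
    """Fit the last `window` days of each estimated parameter with a polynomial
    of the given degree (1: linear, 2: quadratic) and project it `lead` days
    forward. lam_history: (n_days, n_params) array of analysis values."""
    t = np.arange(window)
    t_future = np.arange(window, window + lead)
    projected = [np.polyval(np.polyfit(t, series, degree), t_future)
                 for series in lam_history[-window:].T]
    return np.clip(np.array(projected).T, 0.0, None)   # (lead, n_params), non-negative
```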
We compare the forecasts to the assimilation analysis that uses the entire set of observations over time, taken as the truth. Fig 11 shows some 30-day forecasts performed at different times of the pandemic. We can see that some forecasts are accurate but some diverge from what actually happened (during the tendency changes).

Figure 10: Upper panel: total incident cases in Argentina. Lower panel: estimated reproduction number using COVID-19 data of Argentina. Different colors indicate different initial conditions, and shades around colored curves represent the corresponding variable spread. Vertical lines point out periods where \(R_{\rm eff}>1\).

To evaluate the performance of the forecasts, we compute the root mean square error of the incident cases forecast over 400 different forecasts, initialized one day apart, for each agegroup. To investigate the impact of considering the interactions among different agegroups in the system, we repeat the forecast experiments using a SEIR model with no agegroup division. The relative initial number of infected individuals in the meta-population model is used as the fraction of the population to obtain the well-mixed model predictions for each agegroup. The initial condition of the well-mixed forecast is the sum over the agegroups of the meta-population model. The forecasts cover the time window from July 2020 to the end of August 2021, featuring two peaks of the infection, so that there is a wide variety of epidemic behaviors, as shown in Fig 11. Fig 12 shows the relative RMSE as a function of the lead time. The behaviour of the forecasts is similar across the agegroups. In the first 15 days the forecasts are similar, but from day 16 to 30 there is an advantage of the quadratic and linear meta-population forecasts, closely followed by the constant well-mixed forecast, except for the senior agegroup, where the constant well-mixed forecast performs slightly worse than the other two. The constant meta-population and linear well-mixed forecasts show similar performance, while the least accurate by far is the quadratic well-mixed one.

Figure 11: Forecasts (orange curves) conducted in different stages of the pandemic in Argentina, observed cumulative cases (shown as incidence/daily cases; red dots) and analysis of cases (blue curves) using the EnKF with multiplicative inflation. Shades around curves represent ensemble members.

## 5 Conclusions

In this work we used an ensemble Kalman filter applied to a meta-population compartmental model to monitor epidemiological parameters of the SARS-CoV-2 virus and to conduct forecasts. We sequentially calibrated the parameters of the model using state augmentation strategies. Crucially, unlike recent works which use constant transmission matrix parameters or work directly with well-mixed models, we provided a time-dependent parameterization of the transmission matrix that was identifiable by the system.
Besides, in the context of data assimilation, it allows us to detect nontrivial parameter variations and interactions between agegroups which could not be modeled assuming a time-independent transmission matrix.

Figure 12: Relative average root mean square error of the quadratic (orange), linear (blue) and constant (red) forecasts using the parameterized transmission matrix (solid plots) and the well-mixed model (dashed plots). Upper panel: young people agegroup. Middle panel: adult people agegroup. Lower panel: senior people agegroup.

Furthermore, other important epidemiological parameters were recovered, such as the mortality, the fraction of undocumented cases and the effective reproduction number, the last one diagnosed using the next generation matrix. The assimilation technique can be used as a tool for the monitoring and prediction of current and future contagious diseases. We validated the technique with synthetic and real observations of accumulated cases and deaths in Argentina. Three agegroups are used, but the technique can be applied to more agegroups containing narrower age ranges for a more precise analysis. Attempting to estimate the full transmission matrix results in the non-identifiability of the parameters. To solve this problem we introduced the parameterization of the transmission matrix (16), whose number of parameters grows linearly with the number of agegroups. This parameterization introduces a single inter-group transmission parameter, \(\alpha\), which in our experiments was fixed, but which could in principle be estimated by performing forecasts on a validation data set (the past evolution of the pandemic up to the 'current' pandemic day) and minimizing the relative root mean square error as a function of \(\alpha\) at an a priori defined forecast lead time. In the EnKF framework, we assume errors are Gaussian, which may not be appropriate for some model parameters. Because of this, some model parameters have to be forced to remain within their physically meaningful range. Some model parameters (parameterized transmission matrix, fraction of detected cases and fraction of deaths) are forced to be non-negative to avoid a non-physical evolution of the model. This conflicts with the Gaussian assumption, particularly when the spread of the variable or parameter is close to the boundaries of its meaningful range. This is the case for the fraction of deaths in the young population. A non-parametric data assimilation framework, like the mapping particle filter [31], can be applied to avoid this limitation and to represent the non-Gaussian density of the near-zero parameters. Furthermore, the variables are assumed to evolve with a smooth behavior, which is achieved for a relatively large number of individuals (country-level observations). In the case of city-level populations, the behavior of the age-meta-population model within the EnKF framework may not be robust; increased granularity in agegroups and contacts can be achieved by using epidemiological agent-based models. Recently, Cocucci et al. (2022) [32] used an EnKF combined with an ABM using mean-field data to infer the COVID-19 pandemic in the city of Buenos Aires, Argentina. Schneider et al. (2022) [33] used a complex agent-based network model to assimilate synthetic data at the individual level. The use of the meta-population model resulted in an improvement of the forecasts of new cases at up to 30-day lead times compared to well-mixed models, which do not account for the interaction of compartments among different agegroups.
This highlights the importance of disaggregating information in both data and model. The age-dependent forecasts may be of interest considering that epidemiological models were used by governments in pandemic decision making. We evaluated different parameter regression functions for the transmission matrix values, which are then extrapolated temporally to conduct the forecasts. Up to 15-day lead times, there is practically no difference in the forecast accuracy between the three regression functions (constant, linear, quadratic), but for longer lead times, the quadratic and linear regression functions give the extrapolated values which result in the most accurate forecasts. Our framework could be greatly improved by including hospitalizations as an observed variable. If reliable data on check-in and check-out hospitalizations were available, relevant quantities could be estimated, like average hospitalization times and use of hospital beds, as well as parameters like the fraction of hospitalizations and the fraction of intensive care cases.
2309.08275
User Power Measurement Based IRS Channel Estimation via Single-Layer Neural Network
One main challenge for implementing intelligent reflecting surface (IRS) aided communications lies in the difficulty to obtain the channel knowledge for the base station (BS)-IRS-user cascaded links, which is needed to design high-performance IRS reflection in practice. Traditional methods for estimating IRS cascaded channels are usually based on the additional pilot signals received at the BS/users, which increase the system training overhead and also may not be compatible with the current communication protocols. To tackle this challenge, we propose in this paper a new single-layer neural network (NN)-enabled IRS channel estimation method based on only the knowledge of users' individual received signal power measurements corresponding to different IRS random training reflections, which are easily accessible in current wireless systems. To evaluate the effectiveness of the proposed channel estimation method, we design the IRS reflection for data transmission based on the estimated cascaded channels in an IRS-aided multiuser communication system. Numerical results show that the proposed IRS channel estimation and reflection design can significantly improve the minimum received signal-to-noise ratio (SNR) among all users, as compared to existing power measurement based designs.
He Sun, Weidong Mei, Lipeng Zhu, Rui Zhang
2023-09-15T09:36:22Z
http://arxiv.org/abs/2309.08275v1
# User Power Measurement Based IRS Channel Estimation via Single-Layer Neural Network ###### Abstract One main challenge for implementing intelligent reflecting surface (IRS) aided communications lies in the difficulty to obtain the channel knowledge for the base station (BS)-IRS-user cascaded links, which is needed to design high-performance IRS reflection in practice. Traditional methods for estimating IRS cascaded channels are usually based on the additional pilot signals received at the BS/users, which increase the system training overhead and also may not be compatible with the current communication protocols. To tackle this challenge, we propose in this paper a new single-layer neural network (NN)-enabled IRS channel estimation method based on only the knowledge of users' individual received signal power measurements corresponding to different IRS random training reflections, which are easily accessible in current wireless systems. To evaluate the effectiveness of the proposed channel estimation method, we design the IRS reflection for data transmission based on the estimated cascaded channels in an IRS-aided multiuser communication system. Numerical results show that the proposed IRS channel estimation and reflection design can significantly improve the minimum received signal-to-noise ratio (SNR) among all users, as compared to existing power measurement based designs. ## I Introduction Intelligent reflecting surface (IRS) has recently emerged as a candidate technology for the future six-generation (6G) wireless communication systems due to its capability of realizing smart and reconfigurable propagation environment cost-effectively [1]. Specifically, an IRS consists of a large number of passive reflecting elements with independently tunable reflection coefficients, which can be jointly designed to alter the phase and/or amplitude of its incident signal to achieve high performance passive beamforming for various purposes, such as signal boosting, interference cancellation, target sensing, etc [1, 2, 3]. To this end, IRS passive beamforming or in general passive reflection should be properly designed. In the existing literature, there are two main approaches for IRS passive beamforming design, which are based on channel estimation pilots and user signal power measurements, respectively. In the former approach, the cascaded base station (BS)-IRS-user/user-IRS-BS channels are first estimated based on the downlink/uplink pilots received at the users/BS with time-varying IRS training reflections, and then the IRS reflection for data transmission is optimized based on the estimated IRS cascaded channels [4, 5, 6, 7]. Alternatively, the authors in [8] proposed to train a deep neural network (NN) to directly learn the mapping from the received pilot signals to the optimal IRS reflection. However, the above pilot-based designs require additional training pilots for IRS channel estimation or NN training, which not only increases the system training overhead but also may not be compatible with the current cellular transmission protocols that cater to the user-BS direct channel (without IRS) estimation only. To efficiently integrate IRS into current wireless systems without the need of changing their protocols, the latter approach designs IRS reflection for data transmission based on the received (pilot or data) signal power measurements at each user's receiver with time-varying IRS reflections, which can be easily obtained in existing wireless systems. 
For example, passive beam training for IRS-aided millimeter-wave (mmWave) systems [9, 10] and conditional sample mean (CSM)-based IRS reflection for IRS-aided sub-6 GHz systems [11] have been proposed. In particular, it was shown in [11] that in the single-user case, the CSM method can achieve an IRS passive beamforming gain in the order of the number of IRS reflecting elements, which is identical to that under perfect channel state information (CSI) [1]. However, the number of random IRS reflections needed for CSM to obtain sufficient user power measurement samples is very large (hundreds or even thousands) for even the single-user case, which still results in high implementation overhead and large training delay. The fundamental reason for CSM's low efficiency lies in its lack of IRS channel information extraction from the users' power measurements. In this paper, we propose a new IRS cascaded channel estimation and IRS reflection design method based on users' power measurements similar to CSM in [11]. However, different from CSM, we first estimate the IRS cascaded channels based on user power measurements and then design the IRS reflection for data transmission based on the estimated channels. This thus overcomes the aforementioned inefficacy of CSM due to the lack of channel information extraction.

Fig. 1: IRS-aided multicasting with users' power measurements.

In particular, our proposed IRS channel estimation method based on user power measurements leverages a simple single-layer NN formulation. Specifically, we first reveal that for any given IRS reflection, the received signal power at each user can be equivalently modeled as the output of a single-layer NN, with its weights corresponding to the coefficients of the cascaded BS-IRS-user channel. Inspired by this, we optimize the weights of the single-layer NN to minimize the mean squared error (MSE) between its output and each user's power measurement via the stochastic gradient descent method, thereby estimating the cascaded BS-IRS-user channel. Next, to evaluate the effectiveness of the proposed channel estimation method, we design the IRS reflection for data transmission based on the estimated cascaded channels in an IRS-aided multiuser multicast communication system, as shown in Fig. 1. We aim to optimize the IRS reflection to maximize the minimum received signal-to-noise ratio (SNR) among all users and solve this problem efficiently by applying various optimization techniques. Numerical results show that the proposed IRS channel estimation and IRS reflection design can yield much better performance than existing user power measurement based schemes such as CSM.

_Notations_: Scalars, vectors and matrices are denoted by lower/upper case, boldface lower case and boldface upper case letters, respectively. For any scalar/vector/matrix, \((\cdot)^{*}\), \((\cdot)^{T}\) and \((\cdot)^{H}\) respectively denote its conjugate, transpose and conjugate transpose. \(\mathbb{C}^{n\times m}\) and \(\mathbb{R}^{n\times m}\) denote the sets of \(n\times m\) complex and real matrices, respectively. \(\|\cdot\|\) denotes the Euclidean norm of a vector, and \(|\cdot|\) denotes the cardinality of a set or the amplitude of a complex scalar. \(j=\sqrt{-1}\) denotes the imaginary unit. \(\mathrm{Re}(\cdot)\) and \(\mathrm{Im}(\cdot)\) denote the real and imaginary parts of a complex vector/number, respectively. \(\mathbf{V}\succeq\mathbf{0}\) indicates that \(\mathbf{V}\) is a positive semidefinite matrix.
\(\text{Tr}(\cdot)\) denotes the trace of a matrix. The distribution of a circularly symmetric complex Gaussian (CSCG) random variable with zero mean and covariance \(\sigma^{2}\) is denoted by \(\mathcal{CN}(0,\sigma^{2})\).

## II System Model and Problem Formulation

As shown in Fig. 1, we consider an IRS-aided multicast communication system, where a single-antenna BS (or multi-antenna BS with fixed downlink precoding) transmits a common message to \(K\) single-antenna users (or independent messages to different users over orthogonal frequency bands), with the help of an IRS consisting of \(N\) reflecting elements. It is assumed that there is a central controller in the system (the BS or another dedicated unit) which can collect the users' received signal power measurements and thereby optimize the IRS passive reflection. Let \(U_{k}\) denote the \(k\)-th user, \(k\in\mathcal{K}\triangleq\{1,2,...,K\}\). In this paper, we consider quasi-static block-fading channels and focus on a given fading block, during which all the channels involved are assumed to be constant. The baseband equivalent channel from the BS to the IRS, that from the BS to \(U_{k}\), and that from the IRS to \(U_{k}\) are denoted by \(\mathbf{h}_{BI}\in\mathbb{C}^{N\times 1}\), \(h_{BU_{k}}\in\mathbb{C}\) and \(\mathbf{h}_{IU_{k}}^{H}\in\mathbb{C}^{1\times N}\), respectively. Let \(\mathbf{\Theta}=\text{diag}(e^{j\theta_{1}},...,e^{j\theta_{N}})\) denote the reflection matrix of the IRS, where \(\theta_{i}\) denotes the phase shift of its \(i\)-th reflecting element, \(1\leq i\leq N\). Due to hardware constraints, we consider that the phase shift of each reflecting element can only take a finite number of discrete values in the set \(\Phi_{\alpha}=\{\omega,2\omega,3\omega,...,2^{\alpha}\omega\}\), where \(\alpha\) is the number of bits used to uniformly quantize the continuous phase shift in \((0,2\pi]\), and \(\omega=\frac{2\pi}{2^{\alpha}}\) [12]. Let \(P\) denote the transmit power of the BS. The effective channel from the BS to \(U_{k}\) is expressed as \[g_{k}=\sqrt{P}\left(h_{BU_{k}}+\mathbf{h}_{IU_{k}}^{H}\mathbf{\Theta}\mathbf{h}_{BI}\right),\ k\in\mathcal{K}, \tag{1}\] where we have incorporated the effect of the BS transmit power \(P\) into the BS-\(U_{k}\) effective channel, since it may be practically unknown to the central controller. Let \(\bar{\mathbf{v}}^{H}=\left[e^{j\theta_{1}},...,e^{j\theta_{N}}\right]\) denote the passive reflection of the IRS, and \(\bar{\mathbf{h}}_{k}=\text{diag}(\mathbf{h}_{IU_{k}}^{H})\mathbf{h}_{BI}\) denote the cascaded BS-IRS-\(U_{k}\) channel. As such, the channel in (1) can be simplified as \[g_{k}=\sqrt{P}h_{BU_{k}}+\sqrt{P}\bar{\mathbf{v}}^{H}\bar{\mathbf{h}}_{k},\ k\in\mathcal{K}. \tag{2}\] By extending the IRS passive reflection vector into \(\mathbf{v}^{H}=\left[1,\bar{\mathbf{v}}^{H}\right]\) and stacking the direct and cascaded BS-\(U_{k}\) channels into \(\mathbf{h}_{k}^{H}=\sqrt{P}\left[h_{BU_{k}}^{*},\bar{\mathbf{h}}_{k}^{H}\right]\), the baseband equivalent channel in (2) can be further simplified as \[g_{k}=\mathbf{v}^{H}\mathbf{h}_{k},\ k\in\mathcal{K}. \tag{3}\] Let \(s\in\mathbb{C}\) denote the transmitted symbol (pilot or data) at the BS with \(|s|^{2}=1\). Hence, the received signal at \(U_{k}\) is given by \[y_{k}=g_{k}s+n_{k},\ k\in\mathcal{K}, \tag{4}\] where \(n_{k}\sim\mathcal{CN}(0,\sigma^{2})\) denotes the complex additive white Gaussian noise (AWGN) at \(U_{k}\) with power \(\sigma^{2}\).
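Before proceeding to the SNR and problem formulation, the following is a minimal numerical sketch (not from the paper) verifying that the effective channel in (1) equals the compact cascaded form in (3); all channel realizations are randomly drawn placeholders.

```python
import numpy as np

# Toy check that the effective channel (1) equals the compact form (3):
# g_k = sqrt(P) (h_BU + h_IU^H Theta h_BI) = v^H h_k.
rng = np.random.default_rng(0)
N, P = 8, 1.0
h_BI = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h_IU = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # h_IU^H is the 1 x N row channel
h_BU = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)

theta = 2 * np.pi * rng.random(N)
Theta = np.diag(np.exp(1j * theta))

g_direct = np.sqrt(P) * (h_BU + h_IU.conj() @ Theta @ h_BI)        # Eq. (1)

vH = np.concatenate([[1.0], np.exp(1j * theta)])                   # v^H = [1, e^{j theta_1}, ...]
h_k = np.sqrt(P) * np.concatenate([[h_BU], h_IU.conj() * h_BI])    # stacked direct + cascaded channel
g_compact = vH @ h_k                                               # Eq. (3): v^H h_k
print(np.isclose(g_direct, g_compact))                             # True
```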
Accordingly, the received SNR at \(U_{k}\) is \[\text{SNR}_{k}=\frac{|g_{k}|^{2}}{\sigma^{2}}=\frac{\mathbf{v}^{H}\mathbf{G}_{k}\mathbf{v}}{\sigma^{2}},\ k\in\mathcal{K}, \tag{5}\] where \(\mathbf{G}_{k}=\mathbf{h}_{k}\mathbf{h}_{k}^{H}\) denotes the covariance matrix of \(\mathbf{h}_{k}\). In this paper, we aim to optimize the IRS passive reflection to maximize the minimum received SNR among all \(K\) users. The associated optimization problem is thus formulated as \[\text{(P1):}\ \max_{\mathbf{v}}\ \min_{k\in\mathcal{K}}\ \frac{\mathbf{v}^{H}\mathbf{G}_{k}\mathbf{v}}{\sigma^{2}} \tag{6a}\] \[\text{s.t.}\ \theta_{i}\in\Phi_{\alpha},\ i=1,...,N. \tag{6b}\] Solving (P1) requires the knowledge of the channel covariance matrices \(\mathbf{G}_{k},k\in\mathcal{K}\), which are difficult to acquire in practice. To tackle this challenge, we first estimate the cascaded channels based on the users' received signal power measurements (to be presented in the next subsection). Specifically, we assume that the IRS applies randomly generated phase shifts of its reflecting elements (subject to (6b)) to reflect the BS's signals to all \(K\) users simultaneously. Let \(M\) and \(\mathbf{v}_{m}\), \(m\in\mathcal{M}\triangleq\{1,2,...,M\}\), denote the number of random reflection sets generated and its \(m\)-th reflection set, respectively. In the meantime, the users independently measure the power of their received signals corresponding to each IRS reflection set and send the results to the central controller (see Fig. 1). For each reflection set, we consider that \(U_{k}\) takes \(Q\) samples of its received signal to calculate the RSRP based on them. Note that in practice, it usually holds that \(Q\gg 1\) since the IRS's reflection switching rate is usually much lower than the symbol rate of each user. Thus, the power measurement of \(U_{k}\) under the IRS's \(m\)-th reflection set is given by \[\bar{p}_{k}(\mathbf{v}_{m})=\frac{1}{Q}\sum_{q=1}^{Q}\left|g_{k,m}s+n_{k}(q)\right|^{2},\ k\in\mathcal{K},\ m\in\mathcal{M}, \tag{7}\] where \(g_{k,m}=\mathbf{v}_{m}^{H}\mathbf{h}_{k}\) and \(n_{k}(q)\sim\mathcal{CN}(0,\sigma^{2})\) denotes the \(q\)-th sampled AWGN at \(U_{k}\). Let \(\mathcal{P}_{k}=[\bar{p}_{k}(\mathbf{v}_{1}),\bar{p}_{k}(\mathbf{v}_{2}),...,\bar{p}_{k}(\mathbf{v}_{M})]\) denote the collection of \(U_{k}\)'s received signal power measurements under the \(M\) reflection sets of the IRS. After the above power measurements, each user \(U_{k}\) reports \(\mathcal{P}_{k}\) to the central controller, which estimates \(\mathbf{h}_{k},k\in\mathcal{K}\) as presented next.

### _NN-enabled Channel Estimation_

For any given IRS reflection set \(\mathbf{v}\), the desired signal power at each user \(U_{k}\) is given by \[p_{k}(\mathbf{v})=\left|\mathbf{v}^{H}\mathbf{h}_{k}\right|^{2}.
\tag{8}\] It is worth mentioning that if the number of samples \(Q\) is sufficiently large, we have \(\bar{p}_{k}(\mathbf{v})\approx p_{k}(\mathbf{v})+\sigma^{2}\). Thus, we aim to estimate \(\mathbf{h}_{k},k\in\mathcal{K}\) based on \(\bar{p}_{k}(\mathbf{v}_{m}),m\in\mathcal{M}\). To this end, note that (8) can be modeled as a single-layer NN, explained as follows. In particular, this NN takes the reflection pattern \(\mathbf{v}\) and the cascaded channel \(\mathbf{h}_{k}\) as its input and weights, respectively, while the nonlinear activation function at the output layer is the squared amplitude of \(\mathbf{v}^{H}\mathbf{h}_{k}\), as given in (8). However, as both \(\mathbf{v}\) and \(\mathbf{h}_{k}\) are complex numbers in general, such a single-layer NN requires the implementation in the complex domain. To avoid this issue, we express (8) equivalently in the real domain as \[p_{k}(\mathbf{v})=\left|\mathbf{v}^{H}\mathbf{h}_{k}\right|^{2}=\left\|\mathbf{x}^{T}\mathbf{R}_{ k}\right\|^{2}, \tag{9}\] where \(\mathbf{x}\) consists of the real and imaginary parts of \(\mathbf{v}\), i.e., \(\mathbf{x}^{T}=\left[\begin{array}{cc}\mathrm{Re}\left(\mathbf{v}^{T}\right),& \mathrm{Im}\left(\mathbf{v}^{T}\right)\end{array}\right]\), and \(\mathbf{R}_{k}\) denotes the real-valued cascaded channel, i.e., \[\mathbf{R}_{k}=\left[\begin{array}{cc}\mathrm{Re}\left(\mathbf{h}_{k}\right)& \mathrm{Im}\left(\mathbf{h}_{k}\right)\\ \mathrm{Im}\left(\mathbf{h}_{k}\right)&-\mathrm{Re}\left(\mathbf{h}_{k}\right)\end{array} \right]\in\mathbb{R}^{(2N+2)\times 2}. \tag{10}\] Based on (9), we can construct an equivalent single-layer NN to (8) in the real-number domain. Specifically, as shown in Fig. 2, the input of this single-layer NN is \(\mathbf{x}\). Let \(W_{k,i,j}\) denote the weight of the edge from the \(i\)-th input to the \(j\)-th neuron in the hidden layer, with \(i=1,2,...,2N+2\) and \(j=1,2\). The two neurons at the hidden layer of this NN are given by \[\left[\begin{array}{c}a_{k}\\ b_{k}\end{array}\right]^{T}=\mathbf{x}^{T}\mathbf{W}_{k}, \tag{11}\] where \(\mathbf{W}_{k}\in\mathbb{R}^{(2N+2)\times 2}\) denotes the weight matrix of this NN, with \(W_{k,i,j}\) being its entry in the \(i\)-th row and the \(j\)-th column. Finally, the activation function at the output layer is given by the squared norm of (11), and the output of this NN is \[\hat{p}_{k}(\mathbf{v})=a_{k}^{2}+b_{k}^{2}=\left\|\mathbf{x}^{T}\mathbf{W}_{k}\right\|^{ 2}. \tag{12}\] By comparing (12) with (9), it is noted that this real-valued NN can imitate the received signal power by \(U_{k}\). In particular, if \(\mathbf{R}_{k}=\mathbf{W}_{k}\), we have \(\hat{p}_{k}(\mathbf{v})=p_{k}(\mathbf{v})\). Motivated by this, we propose to recover \(\mathbf{R}_{k}\) (and \(\mathbf{h}_{k}\)) by estimating the weight matrix \(\mathbf{W}_{k}\) via training this single-layer NN. To this end, we consider that \(\mathbf{W}_{k}\) takes a similar form to \(\mathbf{R}_{k}\) in (10), i.e., \[\mathbf{W}_{k}=\left[\begin{array}{cc}\mathbf{w}_{1,k}&\mathbf{w}_{2,k}\\ \mathbf{w}_{2,k}&-\mathbf{w}_{1,k}\end{array}\right], \tag{13}\] where \(\mathbf{w}_{1,k}\in\mathbb{R}^{N+1}\) and \(\mathbf{w}_{2,k}\in\mathbb{R}^{N+1}\) correspond to \(\mathrm{Re}\left(\mathbf{h}_{k}\right)\) and \(\mathrm{Im}\left(\mathbf{h}_{k}\right)\) in (10), respectively. With (13), we present the following lemma. 
**Lemma 1**: _If_ \[\|\mathbf{x}^{T}\mathbf{W}_{k}\|^{2}=\|\mathbf{x}^{T}\mathbf{R}_{k}\|^{2} \tag{14}\] _holds for any \(\mathbf{x}\in\mathbb{R}^{2N+2}\), we have \(\mathbf{h}_{k}=\mathbf{w}_{k}e^{j\phi_{k}}\), where \(\mathbf{w}_{k}=\mathbf{w}_{1,k}+j\mathbf{w}_{2,k}\) and \(\phi_{k}\in[0,2\pi)\) denotes an arbitrary phase._

Proof: By substituting (13) into the left-hand side of (14), we have \[\|\mathbf{x}^{T}\mathbf{W}_{k}\|^{2}=\left|\mathbf{v}^{H}\mathbf{w}_{k}\right|^{2}=\mathbf{v}^{H}\mathbf{w}_{k}\mathbf{w}_{k}^{H}\mathbf{v}. \tag{15}\] Next, by substituting (9) and (15) into (14), we have \[\mathbf{v}^{H}\mathbf{w}_{k}\mathbf{w}_{k}^{H}\mathbf{v}=\left|\mathbf{v}^{H}\mathbf{h}_{k}\right|^{2}=\mathbf{v}^{H}\mathbf{h}_{k}\mathbf{h}_{k}^{H}\mathbf{v},\ \forall\mathbf{v}\in\mathbb{C}^{N+1}, \tag{16}\] which implies that \[\mathbf{v}^{H}\left(\mathbf{h}_{k}\mathbf{h}_{k}^{H}-\mathbf{w}_{k}\mathbf{w}_{k}^{H}\right)\mathbf{v}=0,\ \forall\mathbf{v}\in\mathbb{C}^{N+1}. \tag{17}\] For (17) to hold for any \(\mathbf{v}\in\mathbb{C}^{N+1}\), it should be satisfied that \(\mathbf{h}_{k}\mathbf{h}_{k}^{H}=\mathbf{w}_{k}\mathbf{w}_{k}^{H}\). As such, we have \(\mathbf{h}_{k}=\mathbf{w}_{k}e^{j\phi_{k}}\). The proof is thus completed.

It follows from Lemma 1 that we can estimate \(\mathbf{h}_{k}\) by training the single-layer NN in Fig. 2 to estimate \(\mathbf{W}_{k}\) first. Although we cannot derive the exact \(\mathbf{h}_{k}\) due to the presence of the unknown phase \(\phi_{k}\), the objective function of (P1) only depends on the channel covariance matrix \(\mathbf{G}_{k}\), and we have \(\mathbf{G}_{k}=\mathbf{h}_{k}\mathbf{h}_{k}^{H}=\mathbf{w}_{k}\mathbf{w}_{k}^{H},k\in\mathcal{K}\). As such, the unknown common phase does not affect the objective function of (P1). It should also be mentioned that Lemma 1 requires that (14) holds for any \(\mathbf{x}\in\mathbb{R}^{2N+2}\) or \(\mathbf{v}\in\mathbb{C}^{N+1}\). However, due to (6b), the discrete IRS passive reflection set can only take a finite number of values in a subspace of \(\mathbb{C}^{N+1}\), and \(\mathbf{h}_{k}=\mathbf{w}_{k}e^{j\phi_{k}}\) may not always hold in such a subspace. Nonetheless, the proposed design is still effective, as will be explained in Remark 1 later.

Fig. 2: Single-layer NN architecture for \(U_{k}\).

To estimate \(\mathbf{w}_{k}\) or \(\mathbf{W}_{k}\), we can train the NN in Fig. 2 by using the stochastic gradient descent method to minimize the MSE between its output and the training data. In particular, we can make full use of each user's power measurements, i.e., \(\mathcal{P}_{k},k\in\mathcal{K}\), as the training data. Specifically, we divide them into two data sets, namely, the training set and validation set. The training set consists of \(M_{0}\) (\(M_{0}<M\)) entries of \(\mathcal{P}_{k}\), while the remaining \(M-M_{0}\) entries of \(\mathcal{P}_{k}\) are used as the validation set to evaluate the model fitting accuracy. Accordingly, the MSE for the training data is set as the following loss function, \[\mathcal{L}_{\mathbf{W}_{k}}=\frac{1}{M_{0}}\sum_{m=1}^{M_{0}}{(\hat{p}_{k}(\mathbf{v}_{m})-\bar{p}_{k}(\mathbf{v}_{m}))^{2}}. \tag{18}\] Given this loss function, we can use backward propagation [14] to iteratively update the NN weights. Specifically, with (13), the weight matrix \(\mathbf{W}_{k}\) can be expressed by a vector \(\mathbf{\gamma}_{k}=\begin{bmatrix}\mathbf{w}_{1,k}^{T},\mathbf{w}_{2,k}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{2N+2}\).
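Before writing down the update rule, the following minimal sketch (not from the paper) numerically verifies the real-valued reformulation in (9)-(10) that underlies this single-layer NN; the random \(\mathbf{v}\) and \(\mathbf{h}\) are placeholders.

```python
import numpy as np

# Numerical check of (9)-(10): |v^H h|^2 = ||x^T R||^2 with
# x = [Re(v); Im(v)] and R built from h as in Eq. (10).
rng = np.random.default_rng(2)
N = 8
h = (rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)) / np.sqrt(2)
v = np.exp(1j * 2 * np.pi * rng.random(N + 1))
v[0] = 1.0                                       # extended reflection vector

x = np.concatenate([v.real, v.imag])             # x^T = [Re(v^T), Im(v^T)]
R = np.block([[h.real[:, None],  h.imag[:, None]],
              [h.imag[:, None], -h.real[:, None]]])   # Eq. (10)

lhs = np.abs(np.vdot(v, h)) ** 2                 # |v^H h|^2
rhs = np.sum((x @ R) ** 2)                       # ||x^T R||^2
print(np.isclose(lhs, rhs))                      # True
```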
Let \(\mathbf{\gamma}_{k,t}\) denote the updated value of \(\mathbf{\gamma}_{k}\) after the \(t\)-th iteration. As such, the iteration proceeds as \[\mathbf{\gamma}_{k,t+1}=\mathbf{\gamma}_{k,t}-\rho F\left(\mathbf{\gamma}_{k,t}\right), \tag{19}\] where \(\rho>0\) denotes the learning rate, and \(F\left(\mathbf{\gamma}_{k}\right)=\frac{\partial\mathcal{L}_{\mathbf{W}_{k}}}{\partial\mathbf{\gamma}_{k}}\) denotes the derivative of the loss function \(\mathcal{L}_{\mathbf{W}_{k}}\) with respect to \(\mathbf{\gamma}_{k}\). Here, \(F\left(\mathbf{\gamma}_{k}\right)\) can be calculated using the chain rule, \[F\left(\mathbf{\gamma}_{k}\right)=\frac{\partial\mathcal{L}_{\mathbf{W}_{k}}}{\partial\hat{p}_{k}}\left[\frac{\partial\hat{p}_{k}}{\partial a_{k}},\;\frac{\partial\hat{p}_{k}}{\partial b_{k}}\right]\left[\frac{\partial a_{k}}{\partial\mathbf{\gamma}_{k}},\;\frac{\partial b_{k}}{\partial\mathbf{\gamma}_{k}}\right]^{T}, \tag{20}\] where \(\frac{\partial\mathcal{L}_{\mathbf{W}_{k}}}{\partial\hat{p}_{k}}\) can be calculated based on (18), while the other four derivatives in (20) can be calculated based on (11) as \[\frac{\partial a_{k}}{\partial\mathbf{\gamma}_{k}}=\left[1,\cos\left(\theta_{1}\right),\cdots,\cos\left(\theta_{N}\right),0,-\sin\left(\theta_{1}\right),\cdots,-\sin\left(\theta_{N}\right)\right]^{T},\] \[\frac{\partial b_{k}}{\partial\mathbf{\gamma}_{k}}=\left[0,\sin\left(\theta_{1}\right),\cdots,\sin\left(\theta_{N}\right),1,\cos\left(\theta_{1}\right),\cdots,\cos\left(\theta_{N}\right)\right]^{T},\] \[\frac{\partial\hat{p}_{k}}{\partial a_{k}}=2a_{k},\text{ and }\;\frac{\partial\hat{p}_{k}}{\partial b_{k}}=2b_{k}. \tag{21}\] The NN training process terminates after \(Z\) rounds of iterations, and the weight matrix of the NN is determined as \[\mathbf{W}_{k}^{\star}=\arg\min_{1\leq t\leq Z}\left(\sum_{m=M_{0}+1}^{M}{(\hat{p}_{k,t}(\mathbf{v}_{m})-\bar{p}_{k}(\mathbf{v}_{m}))^{2}}\right), \tag{22}\] based on the validation set, where \(\hat{p}_{k,t}(\mathbf{v}_{m})=\|\mathbf{x}_{m}^{T}\mathbf{W}_{k,t}\|^{2}\) denotes the output of the NN after the \(t\)-th iteration, and \(\mathbf{W}_{k,t}\) denotes the updated version of \(\mathbf{W}_{k}\) after the \(t\)-th iteration. Based on the above, the complex-valued cascaded channel can be estimated as \(\mathbf{w}_{k}^{\star}=\mathbf{w}_{1,k}^{\star}+j\mathbf{w}_{2,k}^{\star}\).

**Remark 1**: _In the case with one-bit IRS phase shifts, i.e., \(\alpha=1\), the cascaded channel \(\mathbf{h}_{k}\) may not be estimated as \(\mathbf{w}_{k}e^{j\phi_{k}}\). This is because in this case, we have \(\mathbf{v}^{\star}=\mathbf{v}\), which results in_ \[\mathbf{v}^{H}\mathbf{h}_{k}\mathbf{h}_{k}^{H}\mathbf{v}=\mathbf{v}^{H}\mathbf{h}_{k}^{\star}\mathbf{h}_{k}^{T}\mathbf{v}. \tag{23}\] _Based on (17), we may estimate \(\mathbf{h}_{k}^{\star}\) as \(\mathbf{w}_{k}e^{j\phi_{k}}\), while the actual channel \(\mathbf{h}_{k}\) should be estimated as \(\mathbf{w}_{k}^{\star}e^{-j\phi_{k}}\). However, this does not affect the efficacy of the proposed design, since both estimations lead to the same received signal power due to (23)._

### _IRS Reflection Optimization_

After estimating \(\mathbf{h}_{k},k\in\mathcal{K}\), we can substitute them into (6a) and solve (P1) accordingly. Next, we present the optimal and suboptimal algorithms to solve (P1) in the cases of \(K=1\) and \(K>1\), respectively.
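Before presenting these algorithms, the channel-estimation step just described can be illustrated with the self-contained sketch below: synthetic power measurements are generated per (7), and the complex weight vector is fit by plain full-batch gradient descent on the loss (18), with validation-based model selection as in (22). The sizes, learning rate, noise level, and the use of a full-batch update in place of the stochastic one are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

# Self-contained sketch of the training loop (18)-(22) for one user:
# generate measurements per (7), then fit w_k by gradient descent.
rng = np.random.default_rng(3)
N, M, M0, Q, sigma2, rho, Z = 8, 100, 80, 10, 1e-3, 2e-3, 5000
phases = np.pi / 2 * np.arange(1, 5)                # Phi_alpha with alpha = 2

h = (rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)) / np.sqrt(2)
VH = np.hstack([np.ones((M, 1)),
                np.exp(1j * rng.choice(phases, size=(M, N)))])  # rows are v_m^H
g = VH @ h
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, Q))
                               + 1j * rng.standard_normal((M, Q)))
p_bar = np.mean(np.abs(g[:, None] + noise) ** 2, axis=1)        # Eq. (7), |s|^2 = 1

w = 0.1 * (rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1))
best_w, best_val = w.copy(), np.inf
for _ in range(Z):
    c = VH[:M0] @ w
    err = np.abs(c) ** 2 - p_bar[:M0]                           # p_hat - p_bar, training set
    w = w - rho * (4.0 / M0) * (VH[:M0].conj().T @ (err * c))   # gradient of (18) w.r.t. (w1, w2)
    val = np.sum((np.abs(VH[M0:] @ w) ** 2 - p_bar[M0:]) ** 2)  # validation error, cf. (22)
    if val < best_val:
        best_val, best_w = val, w.copy()

G_hat = np.outer(best_w, best_w.conj())   # estimates G_k = h_k h_k^H up to a common phase
print(np.abs(np.vdot(best_w, h)) / (np.linalg.norm(best_w) * np.linalg.norm(h)))
```

The final line prints the correlation between the estimate and the true channel, which is insensitive to the unrecoverable common phase \(\phi_{k}\), consistent with Lemma 1.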
First, if \(K=1\), it has been shown in [15] that (P1) can be optimally solved by applying a geometry-based method, and the details are thus omitted. However, if \(K>1\), problem (P1) is generally difficult to solve optimally. Next, we consider combining the SDR technique [16] and the successive refinement method [12] to solve it. First, let \(\mathbf{V}=\mathbf{v}\mathbf{v}^{H}\) denote the covariance matrix of \(\mathbf{v}\), with \(\mathbf{V}\succeq\mathbf{0}\). Problem (P1) can be equivalently reformulated as \[\text{(P2):}\ \max_{\mathbf{V}}\ \xi \tag{24a}\] \[\text{s.t.}\ \text{Tr}(\mathbf{\hat{G}}_{k}\mathbf{V})\geq\xi,\ \forall\,k, \tag{24b}\] \[\text{rank}(\mathbf{V})=1, \tag{24c}\] \[\mathbf{V}\succeq\mathbf{0}, \tag{24d}\] \[\theta_{i}\in\Phi_{\alpha},\ i=1,...,N, \tag{24e}\] where \(\xi\) is an auxiliary variable. Problem (P2) is still difficult to solve due to the rank-one and discrete-phase constraints in (24c) and (24e), respectively. Next, we relax both constraints and thereby transform (P2) into a semidefinite programming (SDP) problem, which can be optimally solved by the interior-point algorithm [17]. However, the obtained solution may not be rank-one, and its entries may not satisfy the discrete constraint (24e). In this case, we can apply the Gaussian randomization method jointly with the solution quantization to construct a rank-one solution that satisfies (24e), denoted as \(\hat{\mathbf{v}}\). Based on the initial passive reflection \(\hat{\mathbf{v}}\), we successively refine \(\theta_{i}\) by enumerating the elements in \(\Phi_{\alpha}\), with \(\theta_{j},j\neq i,j=1,2,...,N\) being fixed, until convergence is reached.

### _Complexity Analysis_

In the proposed IRS channel estimation and IRS reflection design method, the computational complexity is mainly due to the NN training procedures for channel estimation and the passive reflection optimization for solving (P2). In particular, the training complexity depends on the size of the NN structure. In the NN for \(U_{k},k\in\mathcal{K}\), as shown in Fig. 2, the number of neurons is \(2\), and the number of weights is \(2N+2\), which entails the complexity for all \(K\) users in the order of \(\mathcal{O}\left(KN\right)\) [18]. Furthermore, in the passive reflection optimization, the SDR-based initialization incurs the complexity of \(\mathcal{O}\left((K+N)^{3.5}\right)\), while the successive refinement incurs the complexity of \(\mathcal{O}\left(KN\right)\). Thus, the overall complexity of the proposed design is dominated by the SDR, i.e., \(\mathcal{O}\left((K+N)^{3.5}\right)\). In practice, we can apply the successive refinement only for solving (P2) with affordable performance loss (to be shown in Section IV via simulation), while the complexity is decreased significantly to \(\mathcal{O}\left(KN\right)\), thus reducing the overall complexity to linear over \(N\).

## IV Numerical Results

### _Simulation Setup_

Consider a three-dimensional Cartesian coordinate system in meters (m) with \(K\) users, where the BS is deployed at \((50,-200,20)\), while the locations of all users are randomly generated in a square area with the coordinates of its four corner points given by \((0,0,0),(10,0,0),(10,10,0)\) and \((0,10,0)\), respectively. The IRS is equipped with a uniform planar array (UPA) and assumed to be parallel to the \(y\)-\(z\) plane, with \(N=N_{y}\times N_{z}\) reflecting elements, where \(N_{y}\) and \(N_{z}\) denote the numbers of reflecting elements along the axes \(y\) and \(z\), respectively.
We set \(N_{y}=N_{z}=8\) and half-wavelength spacing for the adjacent IRS reflecting elements. The location of the reference point of the IRS is set as \((-2,-1,0)\). Let \(\beta_{0,k}\), \(\beta_{1}\) and \(\beta_{2,k}\) denote the path loss (in dB) of the BS-\(U_{k}\), BS-IRS and IRS-\(U_{k}\) channels, respectively, which are set to \(\beta_{0,k}=33+37\text{log}_{10}(d_{0,k})\), \(\beta_{1}=30+20\text{log}_{10}(d_{1})\) and \(\beta_{2,k}=30+20\text{log}_{10}(d_{2,k})\), respectively, with \(d_{0,k}\), \(d_{1}\) and \(d_{2,k}\) denoting the distance from the BS to \(U_{k}\), that from the BS to the IRS, and that from the IRS to \(U_{k}\). We assume Rayleigh fading for the BS-\(U_{k}\) channel, i.e., \(h_{U_{k}}=10^{-\beta_{0,k}/20}\zeta_{k}\), where \(\zeta_{k}\) denotes the small-scale fading following \(\mathcal{CN}(0,1)\). In addition, a multipath channel model is assumed for the BS-IRS and IRS-\(U_{k}\) channels, and the BS-IRS channel is expressed as \[\mathbf{h}_{BI}=\sqrt{\frac{\varepsilon_{BI}}{1+\varepsilon_{BI}}}\mathbf{h}_{LoS}+ \sqrt{\frac{1}{1+\varepsilon_{BI}}}\mathbf{h}_{NLoS}, \tag{25}\] where \(\varepsilon_{BI}\) is the ratio of the line-of-sight (LoS) path power to that of the non-LoS (NLoS) path. \(\mathbf{h}_{LoS}\) and \(\mathbf{h}_{NLoS}\) denote the LoS and NLoS components, which are respectively given by \[\mathbf{h}_{LoS} =10^{-\beta_{1}/20}e^{\frac{-j2\pi d_{1}}{\lambda}}\mathbf{u}_{N}( \vartheta_{0},\varphi_{0}), \tag{26a}\] \[\mathbf{h}_{NLoS} =\sqrt{\frac{1}{L}}\sum_{l=1}^{L}\kappa_{l}\mathbf{u}_{N}(\vartheta_{ l},\varphi_{l}), \tag{26b}\] where \(\lambda\) denotes the wavelength. In (26), \(L\) denotes the number of NLoS multipath components, \(\kappa_{l}\) denotes the amplitude of the \(l\)-th multipath component following \(\mathcal{CN}(0,10^{-\beta_{l}/10})\), and \(\mathbf{u}_{N}(\vartheta_{l},\varphi_{l})\) denotes the steering vector of the \(l\)-th path from the BS to the IRS with \(\vartheta_{l}\in[0,\pi]\) and \(\varphi_{l}\in[0,\pi]\) denoting the azimuth and the elevation angles of arrival at the IRS in this path, respectively. In particular, let \(\mathbf{e}(\gamma,n)=[1,e^{-j\pi\gamma},e^{-j2\pi\gamma},...,e^{-j(n-1)\pi\gamma }]^{T}\) denote the steering vector function of a uniform linear array with \(n\) elements and directional cosine \(\gamma\). As such, we have \(\mathbf{u}_{N}(\vartheta_{l},\varphi_{l})=\mathbf{e}(\sin(\vartheta_{l})\text{sin}( \varphi_{l}),N_{y})\otimes\mathbf{e}(\cos(\vartheta_{l}),N_{z})\), where \(\otimes\) denotes the Kronecker product. The IRS-\(U_{k}\) channel can be expressed similarly and we denote by \(\varepsilon_{IU_{k}}\) the ratio of its LoS path power to that of the NLoS counterpart. We set \(L=5\), \(\varepsilon_{BI}=10\) and \(\varepsilon_{IU_{k}}=1\). The number of power measurements obtained by one user under each IRS reflection set is \(Q=10\). The transmit power is \(P=30\) dBm, and the noise power is \(\sigma^{2}=-90\) dBm. All results are averaged over \(10^{3}\) realizations of channels and user locations. ### _Benchmark Schemes_ We adopt the CSM [11] and random-max sampling (RMS) [19] methods as benchmark schemes, both of which design the IRS passive reflection for data transmission based on the users' power measurements, but without estimating the IRS cascaded channels by further exploiting the power measurements. 
Specifically, the RMS method sets the IRS reflection as the one that maximizes the minimum received signal power among all users over \(M\) random IRS reflection sets, i.e., \[\mathbf{v}^{\text{RMS}}=\mathbf{v}_{m^{*}},\quad\text{with}\quad m^{*}=\arg\max_{m\in\mathcal{M}}\ \min_{k\in\mathcal{K}}\bar{p}_{k}(\mathbf{v}_{m}). \tag{27}\] Moreover, the CSM method first calculates the sample mean of the minimum power measurement among all users conditioned on \(\theta_{i}=\psi,\psi\in\Phi_{\alpha}\), i.e., \[\mathbb{E}[p|\theta_{i}=\psi]=\frac{1}{|\mathcal{A}_{i}(\psi)|}\sum_{\mathbf{v}\in\mathcal{A}_{i}(\psi)}\min_{k\in\mathcal{K}}\bar{p}_{k}(\mathbf{v}), \tag{28}\] where \(\mathcal{A}_{i}(\psi)\) denotes a subset of the \(M\) random reflection sets with \(\theta_{i}=\psi\), \(i=1,2,...,N\). Finally, the phase shift of the \(i\)-th reflecting element is set as \[\theta_{i}^{\text{CSM}}=\arg\max_{\psi\in\Phi_{\alpha}}\mathbb{E}[p|\theta_{i}=\psi],\ i=1,...,N. \tag{29}\] In addition, the IRS passive reflection design based on perfect CSI, obtained by solving (P2) with \(\mathbf{\hat{G}}_{k}\) replaced by \(\mathbf{G}_{k},k\in\mathcal{K}\), is included as the performance upper bound to evaluate the efficacy of the proposed scheme.

### _Simulation Results_

We evaluate the received SNR by different schemes in both single-user and multiuser scenarios. For the proposed scheme, we first use the single-layer NN to estimate \(\mathbf{h}_{k},k\in\mathcal{K}\), and then apply the geometry-based method and the SDR method to optimize the IRS passive reflection in the single-user and multiuser scenarios, respectively (labeled as "NN-GE" and "NN-SDR"), as presented in Section III-C. In addition, for both scenarios, we also show the performance by directly applying the successive refinement method, where the IRS passive reflection is initialized based on RMS given in (27) (labeled as "NN-SR").

Fig. 3: Received SNR versus the number of IRS reflection sets with \(K=1\) and \(\alpha=1\).

Fig. 4: Received SNR versus the number of IRS reflection sets with \(K=1\) and \(\alpha=2\).

First, Fig. 3 and Fig. 4 show the received SNR under different schemes in the single-user case with the number of controlling bits for IRS phase shifts \(\alpha=1\) and \(\alpha=2\), respectively. It is observed that both the NN-GE and NN-SR methods significantly outperform the benchmark schemes by fully exploiting the users' power measurements for channel estimation. In particular, with increasing \(M\), the performance of our proposed scheme quickly converges to the upper bound achievable with perfect CSI. Moreover, the SNR performance improves by increasing \(\alpha\) from 1 to 2, as expected, thanks to the higher phase-shift resolution for both channel estimation and reflection design. Furthermore, the small gap between the NN-SDR and NN-SR schemes demonstrates that the IRS passive reflection can be more efficiently optimized with linear complexity over \(N\) if small performance loss is tolerable. Next, Fig. 5 and Fig. 6 show the minimum received SNR among \(K=5\) users with \(\alpha=1\) and \(\alpha=2\), respectively. Similar observations for the single-user case can be made for the multiuser case. In addition, it is observed that CSM performs worse than RMS in the multiuser case, due to its inability to adapt to more complex utility functions such as that given in (28) when \(K>1\).
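For reference, the two benchmark rules in (27)-(29) admit a very short implementation. The sketch below uses synthetic, noiseless power data and placeholder sizes; it is an illustration of the selection rules, not the simulation code used for the figures.

```python
import numpy as np

# Sketch of the benchmarks: RMS (27) keeps the best training reflection;
# CSM (28)-(29) sets each phase to the value with the largest conditional
# sample mean of the minimum user power. Assumes every phase in Phi_alpha
# appears at least once per element among the M training reflections.
rng = np.random.default_rng(6)
N, M, K, alpha = 8, 200, 3, 1
phases = 2 * np.pi * np.arange(1, 2**alpha + 1) / (2**alpha)
Theta = rng.choice(phases, size=(M, N))                 # random training phase shifts
H = (rng.standard_normal((K, N + 1))
     + 1j * rng.standard_normal((K, N + 1))) / np.sqrt(2)

VH = np.hstack([np.ones((M, 1)), np.exp(1j * Theta)])   # m-th row is v_m^H
P = np.abs(VH @ H.T) ** 2                               # noiseless p_k(v_m), shape (M, K)
worst = P.min(axis=1)                                   # min_k p_k(v_m)

theta_rms = Theta[np.argmax(worst)]                     # Eq. (27)
theta_csm = np.array([
    phases[np.argmax([worst[np.isclose(Theta[:, i], psi)].mean() for psi in phases])]
    for i in range(N)])                                 # Eqs. (28)-(29)
print(theta_rms, theta_csm, sep="\n")
```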
## V Conclusion

In this paper, we proposed a new IRS channel estimation method based on user received signal power measurements with randomly generated IRS reflections, by exploiting a simple single-layer NN formulation. Numerical results showed that the IRS passive reflection design based on the estimated IRS channels can significantly outperform the existing power measurement based schemes and approach the optimal performance assuming perfect CSI, with significantly reduced power measurements, in an IRS-aided multiuser communication system. The proposed IRS channel estimation and reflection optimization approach can be extended to other setups, such as a multi-antenna BS with adaptive precoding, multi-antenna user receivers, and multiple IRSs, which will be studied in future work.
2309.13133
Zero-One Laws for Random Feasibility Problems
We introduce a general random model of a combinatorial optimization problem with geometric structure that encapsulates both linear programming and integer linear programming. Let $Q$ be a bounded set called the feasible set, $E$ be an arbitrary set called the constraint set, and $A$ be a random linear transform. We define and study the $\ell^q$-margin, $M_q := d_q(AQ, E)$. The margin quantifies the feasibility of finding $y \in AQ$ satisfying the constraint $y \in E$. Our contribution is to establish strong concentration of the margin for any $q \in (2,\infty]$, assuming only that $E$ has permutation symmetry. The case of $q = \infty$ is of particular interest in applications -- specifically to combinatorial ``balancing'' problems -- and is markedly out of the reach of the classical isoperimetric and concentration-of-measure tools that suffice for $q \le 2$. Generality is a key feature of this result: we assume permutation symmetry of the constraint set and nothing else. This allows us to encode many optimization problems in terms of the margin, including random versions of: the closest vector problem, integer linear feasibility, perceptron-type problems, $\ell^q$-combinatorial discrepancy for $2 \le q \le \infty$, and matrix balancing. Concentration of the margin implies a host of new sharp threshold results in these models, and also greatly simplifies and extends some key known results.
Dylan J. Altschuler
2023-09-22T18:39:29Z
http://arxiv.org/abs/2309.13133v3
# Zero-one laws for random feasibility problems ###### Abstract. We introduce a general random model of a combinatorial optimization problem with geometric structure that encapsulates both linear programming and integer linear programming. Let \(Q\) be a bounded set called the feasible set, \(E\) be an arbitrary set called the constraint set, and \(A\) be a random linear transform. We define and study the _\(\ell^{q}\)-margin_, \[\mathcal{M}_{q}:=d_{\ell^{q}}\left(AQ,E\right)\,.\] The margin quantifies the feasibility of finding \(y\in AQ\) satisfying the constraint \(y\in E\). Our contribution is to establish strong concentration of the \(\ell^{q}\)-margin for any \(q\in(2,\infty]\), assuming only that \(E\) has permutation symmetry. The case of \(q=\infty\) is of particular interest in applications--specifically to combinatorial "balancing" problems--and is markedly out of the reach of the classical isoperimetric and concentration-of-measure tools that suffice for \(q\leq 2\). Generality is a key feature of this result: we assume permutation symmetry of the constraint set and nothing else. This allows us to encode many optimization problems in terms of the margin, including random versions of: the closest vector problem, integer linear feasibility, perceptron-type problems, \(\ell^{q}\)-combinatorial discrepancy for \(2\leq q\leq\infty\), and matrix balancing. Concentration of the margin implies a host of new sharp threshold results in these models, and also greatly simplifies and extends some key known results. D.J. Altschuler Department of Mathematical Sciences, Carnegie Mellon University [email protected] ## 1. Introduction Balancing covariates in experimental design, sparsifying a graph, and training a single-layer neural net are seemingly disparate combinatorial optimization problems. Nonetheless, they, as well as a wide range of other problems, can be recast as generalizations of the Closest Vector Problem, a core problem in integer programming. _Find the \(\ell^{q}\)-closest point of a set \(Q\) to another set \(E\)._ This is highly non-trivial for general \(Q\) and \(E\); crucially, properties like convexity are not assumed. A natural random model is to take a random linear transformation of either \(Q\) or \(E\). **Definition 1** (Generalized Random Feasibility).: Let \(Q\subset\mathbb{R}^{N}\) and \(E\subset\mathbb{R}^{M}\) be sets called the feasible set and constraint set, respectively. Fix a matrix \(A\in\mathbb{R}^{M\times N}\) with independent standard normal entries. The \(\ell^{q}\)-margin \(\mathcal{M}_{q}(A):=\mathcal{M}_{q,Q,E}(A)\) is defined as: \[\mathcal{M}_{q}(A):=\min_{\sigma\in Q}d_{\ell^{q}}\left(A\sigma,E\right)\,,\] and the \((A,Q,E)\)-feasibility problem is the task of determining if \(\mathcal{M}_{q}\) is zero. The word margin is chosen in analogy to terminology from the literature of perceptron models. The margin quantifies the "distance to satisfiability" for the following program: find \(\sigma\in Q\) with \(A\sigma\in E\). In this program, \(A\) and \(E\) encode a random set of constraints on \(Q\), and the margin \(\mathcal{M}_{q}(A)\) measures the least amount by which the constraints can be violated when optimizing over \(\sigma\in Q\). In particular, the margin is zero if and only if this program is feasible. The exponent \(q\) controls how heavily the largest violations are penalized.
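As a concrete (and purely illustrative) instance of Definition 1, the brute-force sketch below computes the \(\ell^{q}\)-margin for a tiny random problem with \(Q\) the normalized discrete cube and \(E=\{0\}\); the enumeration is exponential in \(N\), so it only serves to make the definition tangible.

```python
import numpy as np
from itertools import product

# Brute-force l^q-margin of Definition 1 for Q = N^{-1/2}{-1,+1}^N, E = {0},
# in which case M_q = min_sigma ||A sigma||_q. Exhaustive, illustration only.
def margin(A, q=np.inf):
    N = A.shape[1]
    return min(np.linalg.norm(A @ (np.array(s) / np.sqrt(N)), ord=q)
               for s in product([-1.0, 1.0], repeat=N))

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 10))   # M = 12 constraints, N = 10 variables
print(f"l^inf margin: {margin(A):.3f}")
```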
In the special case of the feasible set \(Q\) being a subset of the integer lattice and the constraint set \(E\) being a rectangle \([-\infty,b_{1}]\times\ldots\times[-\infty,b_{M}]\) for some vector \(b\in\mathbb{R}^{M}\), we recover the canonical form of random integer linear feasibility. Namely, the margin is zero if and only if there exists \(\sigma\in Q\) satisfying \[(A\sigma)_{i}\leq b_{i}\,,\quad\forall\,i\in[M]\,.\] If \(Q\) is \(\mathbb{R}^{N}\) instead, we recover random linear programming. The technical contribution of this article is strong concentration bounds for \(\mathcal{M}_{q}\) under the assumptions that \(E\) has sufficient permutation symmetry and \(Q\) is bounded. Concentration for the margin can be interpreted as a sharp threshold as follows. Define the \(\ell^{q}\)-expansion of the constraint set \(E\) by \[E_{\delta}:=\left\{x\in\mathbb{R}^{M}\ :\ d_{q}(x,E)\leq\delta\right\}\,.\] The \(\ell^{q}\)-margin is exactly the smallest \(\delta\) so that \(E_{\delta}\cap AQ\) is non-empty. Thus, if the margin has fluctuations on some vanishing scale, then as one expands \(E\) with respect to \(\ell^{q}\), the probability that it contains a point of \(AQ\) will jump from zero to one in a vanishing window. It is worth highlighting that this notion of sharp threshold is _non-asymptotic_ in an important sense. The dimension is held fixed while \(\delta\) is varied. The existence of a sequence of problems indexed by \(N\) or \(M\) is not assumed. In the examples we will introduce shortly, this will be a significant departure from previous work and a source of increased generality.

### Main Results

**Definition 2**.: Say a set \(E\subset\mathbb{R}^{n}\) has _permutation symmetry_ if for any permutation \(\pi\in S_{n}\) and \(x\in\mathbb{R}^{n}\), \[(x_{1},\dots,x_{n})\in E\quad\text{if and only if}\quad\left(x_{\pi(1)},\dots,x_{\pi(n)}\right)\in E\,.\] This is a combinatorial notion of regularity. Typical examples include Cartesian products \(E:=(E_{0})^{M}\) or the \(\ell^{p}\) unit ball for any \(p\). **Theorem 1** (Main result: concentration of the margin).: _There is a universal constant \(C>0\) so the following holds. Let \(Q\subset\mathbb{R}^{N}\) be a subset of the Euclidean unit ball and \(E\) have permutation symmetry. For \(q\in[2,\infty]\),_ \[\operatorname{Var}\left(\mathcal{M}_{q}(A)\right)\leq\frac{C}{1+\left(\frac{1}{2}-\frac{1}{q}\right)\log M}\,. \tag{1.1}\] By homogeneity, the assumption that \(Q\) is a subset of the unit ball could be replaced by multiplying the right-hand side of Eq. (1.1) by a factor of \(\max_{\sigma\in Q}\|\sigma\|_{2}^{2}\). At least for \(q=\infty\), our result cannot be improved without further assumptions. Letting \(Q\) be a singleton on the \(\ell^{2}(\mathbb{R}^{N})\)-unit sphere and \(E\) be the origin in \(\mathbb{R}^{M}\) makes this clear (see Chapter 5, section 6 of [20]). However, Theorem 1 should be seen as an important but purely qualitative improvement over the trivial bound of \(\operatorname{Var}\left(\mathcal{M}_{q}(A)\right)\leq 1\), which follows from the Gaussian Poincare inequality (see details in the proof of Theorem 1). In many of the \((A,Q,E)\)-feasibility problems that appear in actual applications, it should be the case that \(\operatorname{Var}\left(\mathcal{M}_{q}(A)\right)\leq 1/\mathrm{poly}(M)\). The assumption of permutation symmetry on \(E\) is a serious restriction.
Unfortunately, it cannot be completely dropped: letting \(Q\) be a singleton and \(E=\mathbb{R}^{M-1}\times[-1,1]\), clearly the fluctuations of \(\mathcal{M}_{\infty}\) are order one. Nonetheless, it is possible to somewhat relax the permutation symmetry condition on \(E\). The following extension allows for imposing several different types of constraints on a feasibility problem. **Theorem 2** (Block symmetry suffices).: _There is a universal constant \(C>0\) so that the following holds. Let \(Q\subset\mathbb{R}^{N}\) be a subset of the Euclidean unit ball and let \(E=E_{1}\times\dots\times E_{k}\) where \(E_{i}\subset\mathbb{R}^{M_{i}}\) has permutation symmetry for each \(i\) and \(\sum_{i=1}^{k}M_{i}=M\). Abbreviate \(m:=\min_{i\in[k]}M_{i}\). For each \(q\in[2,\infty]\),_ \[\operatorname{Var}\left(\mathcal{M}_{q}(A)\right)\leq C\left(1+\frac{1}{2}\log\left(\frac{m^{1-\frac{2}{q}}}{k}\lor 1\right)\right)^{-1}\,. \tag{1.2}\] **Remark 1**.: Let us briefly highlight the generality of Theorems 1 and 2. While there are permutation symmetry requirements on \(E\), there is nearly complete freedom for \(Q\). The feasible set can be discrete, continuous, or even a singleton. Canonical examples include the sphere, solid cube, discrete cube, bounded subset of the integer lattice, and arbitrary subsets (such as level-sets of an arbitrary function) of any of the previous examples. Our results hold without distinguishing between these situations. Our main technical tool is Talagrand's \(L^{1}\)-\(L^{2}\) (Gaussian) hypercontractive inequality, given below as Lemma 2. There is a long and rich history of this inequality being used to prove similar sharp thresholds in various settings. See the expositions of Kalai [35] and Chatterjee [20] for a wealth of examples in the Boolean and Gaussian settings, respectively. In particular, our result is similar to the classical theorems of Friedgut and Bourgain [30] and Friedgut and Kalai [31] that establish a sharp threshold for any permutation-transitive monotone Boolean function. The assumption of permutation-transitivity is used in their theorems for the same purpose as in ours.

### Future Directions

Some open problems that seem attractive but quite challenging:

1. Obtain polynomial improvements over the given rates in the case that \(Q\) has large cardinality and is well-spread (i.e. the average inner product over all pairs of elements of \(Q\) is bounded away from one). A natural place to start is the dynamical variance identity given in Lemma 2.1 of [20].
2. Explore the quantitative trade-off between the permutation symmetry of \(E\) and the concentration of the margin.
3. Extend these results to other disorders for the matrix \(A\). Independent Boolean or Rademacher entries are natural and may be easy. Independent columns would be quite interesting but potentially harder.

### Notation

The phrase "almost every" will always be with respect to the Gaussian or Lebesgue measure. Since they are mutually absolutely continuous, this is without ambiguity. We say a function \(F:\mathbb{R}^{d}\to\mathbb{R}\) is \((L,q)\)-Lipschitz if \(|F(x)-F(y)|\leq L\|x-y\|_{q}\) for all \(x\) and \(y\). The set \(E\) will always be assumed closed.

## 2. Applications and Previous Works
### Random Integer Programming

Average-case linear programming and integer linear programming have been the subject of intense study over the past few decades, in large part due to the enormous gap between worst-case guarantees and average-case empirical performance of (integer) linear programming algorithms [23, 46, 50]. Much of the work on integer programming focuses on understanding integrality gaps. Both the random setting [17, 18, 19, 27] and deterministic setting have been extensively considered. (The latter is too rich to introduce here; we refer the reader to [26].) The study of integrality gaps often utilizes the notions of "sensitivity" and "proximity," which quantify the distance between the vertices and lattice points contained in a polytope [5, 22]. Much more closely related are the Shortest Vector Problem (SVP) and Closest Vector Problem (CVP) [3, 26]. The CVP asks: given a lattice and a target vector \(v\), find \(w\) in the lattice minimizing \(\|v-w\|_{2}\). The SVP asks the same with \(v=0\) and the origin removed from the lattice. The author is not aware of work prior to this article on random versions of these problems.

### Combinatorial Discrepancy

Combinatorial discrepancy arises as a fundamental quantity in a plethora of fields including combinatorics, geometry, optimization, information theory, and experimental design [21, 39, 49]. For an \(M\times N\) matrix \(A\), the combinatorial discrepancy \(\operatorname{disc}(A)\) is given by \[\operatorname{disc}(A):=\min_{\sigma\in\frac{1}{\sqrt{N}}\{-1,+1\}^{N}}\|A\sigma\|_{\infty}\,.\] There are a number of outstanding open conjectures that seem out of reach of current tools, motivating much recent interest in random [1, 2, 7, 9, 15, 19, 28, 29, 32, 33, 43, 44, 47, 55] and semi-random models [4, 12, 13, 14]. It is a general feature in random discrepancy that the second moment method will yield a coarse threshold [9], but not a sharp threshold (unless \(M\ll N\) [55]). The second moment method is generically difficult to improve upon. However, a recent series of breakthroughs overcame this technical hurdle and established extremely precise control in the square regime \(M\asymp N\). Perkins and Xu [43] and Abbe, Li, and Sly [2] simultaneously obtained a sharp threshold for the discrepancy of Gaussian and Rademacher matrices, respectively, as well as strong control over the number of solutions. Subsequently, the discrepancy of a Gaussian matrix was shown by the author to concentrate within a \(\mathcal{O}\left(\log(N)/N\right)\) window [6]; the logarithm has been removed by Sah and Sawhney [47]. The mentioned breakthroughs are all quite technically involved. Our main theorem recovers a very simple, direct, and different proof (albeit with a highly non-optimal rate) of the recently established sharp threshold for discrepancy. Additionally, we also obtain some truly new results in related settings. A natural generalization of combinatorial discrepancy, the \(\ell^{q}\) discrepancy of a matrix is given by \[\operatorname{disc}_{q}(A):=\min_{\sigma\in\{-1,+1\}^{N}}\|A\sigma\|_{q}\,.\] While \(\ell^{q}\) discrepancy has certainly been extensively studied for deterministic matrices (see e.g. [8, 25] for a list of references; the literature dates back to at least a question of Dvoretsky in 1960, and is too vast to introduce here), to the best of our knowledge nothing is known about the random setting.
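As a toy numerical illustration of \(\ell^{q}\) discrepancy in the random setting (not from the paper), the sketch below estimates the mean and spread of \(\operatorname{disc}_{q}\) for a small Gaussian matrix by exhaustive enumeration over sign vectors; the sizes are kept tiny purely for tractability.

```python
import numpy as np
from itertools import product

# Toy Monte Carlo for the l^q discrepancy of a small Gaussian matrix.
# Exhaustive over sign vectors, so only feasible for tiny N.
def disc_q(A, q):
    N = A.shape[1]
    return min(np.linalg.norm(A @ np.array(s), ord=q)
               for s in product([-1.0, 1.0], repeat=N))

rng = np.random.default_rng(0)
vals = [disc_q(rng.standard_normal((10, 10)), q=4) for _ in range(100)]
print(f"mean = {np.mean(vals):.3f}, std = {np.std(vals):.3f}")
```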
Similarly to the \(q=\infty\) setting, it is natural to expect the second moment method to yield a coarse threshold. Any such future result can be automatically upgraded to a sharp threshold by applying our main result: **Theorem 3** (Sharp threshold for \(\ell^{q}\) discrepancy).: _Let \(A\in\mathbb{R}^{N\times N}\) with independent standard normal entries and \(2\leq q\leq\infty\). Then:_ \[\frac{\mathbb{E}\left[\operatorname{disc}_{q}(A)\right]}{\sqrt{\operatorname{ Var}\left(\operatorname{disc}_{q}(A)\right)}}\geq\Omega\left(N^{\frac{1}{q}} \sqrt{1+\left(\frac{1}{2}-\frac{1}{q}\right)\log N}\right)\,.\] Proof.: The variance of \(\mathcal{M}_{q}\) can be upper-bounded by applying Theorem 1 with \(E=\{0\}\) and \(Q\) the normalized discrete cube \(N^{-1/2}\left\{-1,+1\right\}^{N}\). The expectation of \(\mathcal{M}_{q}\) is lower-bounded by \(N^{1/q}\) via a simple application of Markov's inequality. Indeed, it suffices to show that for all \(\sigma\in Q\) that \(\mathbb{P}\left[\left\|A\sigma\right\|_{q}<cN^{1/q}\right]<2^{-(1+c^{\prime})N}\). For a particular \(\sigma\), we have \(A\sigma=:Y\) is distributed as a standard normal in \(\mathbb{R}^{M}\). Let \(\mathcal{G}\) denote the event that at least \(\epsilon n\) entries of \(Y\) are greater than \(1\). On the event \(\mathcal{G}\), we have \(\|A\sigma\|_{q}\geq\epsilon^{1/q}N^{1/q}\). Standard tail bounds for the binomial distribution yield \[\mathbb{P}\left[\mathcal{G}^{c}\right]\leq\exp\left\{-\left|\Omega\left(N \epsilon\log(1/\epsilon)\right)\right|\right\}\leq 2^{-2N}\,,\] where the second inequality follows by taking \(\epsilon\) sufficiently small. Taking \(c=\epsilon^{1/q}\) and \(c^{\prime}=1\), we are done. The result and proof remain essentially unchanged if we optimize discrepancy over any of the feasible sets given in Remark 1 rather than taking \(Q\) as the discrete hypercube. For example, another natural problem for which, to the best of our knowledge, there are no published results is the relaxation of random discrepancy to the sphere rather than the discrete cube. This is analogous to the relaxation of "Ising" spin glasses to "spherical" spin glasses in statistical physics. **Theorem 4** (Sharp threshold for the symmetric spherical perceptron).: _For any \(q\in[2,\infty]\) and \(\alpha\in(0,\infty)\), let \(A\in\mathbb{R}^{\alpha N\times N}\) have iid standard normal entries._ \[\operatorname{Var}\left(\min_{\sigma\in S^{N-1}}\|A\sigma\|_{q}\right)= \mathcal{O}\left(\frac{1}{1+\left(\frac{1}{2}-\frac{1}{q}\right)\log N}\right)\,.\] It seems reasonable to expect the second moment method to yield a coarse threshold in this setting as well, which can thus be automatically promoted to a sharp threshold. We also note a related work of Minzer, Sah, and Sawhney [41] that appeared during the writing of this article. They use the Boolean version of Talagrand's \(L^{1}\)-\(L^{2}\) inequality to establish a sharp threshold in the "perfectly friendly bisection" problem. They note that their methods can be used to show concentration of random \(\ell^{\infty}\) discrepancy with Bernoulli disorder. ### Perceptron Models The Binary Perceptron problem is an enduring model of the classification power of a single-layer neural net. First introduced in the 1960's, it remains a fascinating yet stubborn source of open problems. 
The binary perceptron is exactly the \((A,Q,E)\)-feasibility problem with parameters: \[q=\infty\,,\quad Q=N^{-1/2}\left\{-1,+1\right\}^{N}\,,\quad E=[K,\infty)^{\alpha N}\] In the literature surrounding the perceptron, the following approach is taken: fix \(K\) and ask for the largest \(\alpha\), called the _capacity_, such that the above problem is feasible. This exploits the fact that there is a natural sequence of problems indexed by \(\alpha\). Namely, add independent rows to \(A\) and extend \(E\) by taking Cartesian products with more copies of \([K,\infty)\). Concretely, the capacity \(\boldsymbol{\alpha_{\star}}\) is the random variable \[\max\left\{\alpha:\exists\sigma\in Q,\ \langle A_{i},\sigma\rangle>0,\quad\forall i\in[\alpha N]\right\}\,,\] where \(A_{i}\) is the \(i\)'th row of \(A\). Concentration of \(\boldsymbol{\alpha_{\star}}\) was shown somewhat recently in two impressive and technically involved works [42, 56]. Xu used the Fourier-analytic pseudo-junta theorem of Hatami to show concentration of \(\boldsymbol{\alpha_{\star}}\), establishing the first sharp threshold result for the perceptron [56]. Subsequently, Nakajima and Sun established a sharp threshold for a wide variety of related models [42]; their methods extend some prior work of Talagrand [52, 54]. The generalization studied by Nakajima and Sun corresponds to letting \(E:=(E_{0})^{\alpha N}\), where \(E_{0}\) can be any set that satisfies a mild structural assumption1. In our notation, previous work can be summarized as: Footnote 1: We will not make this precise. See Assumption 1.2 and Theorem 1.2 of [42] for the full statement. **Theorem 5** (Sharp threshold for the capacity of the generalized perceptron [56, 42]).: _Let \(q=\infty\), \(Q=\{-1,+1\}^{N}\) and \(E_{\alpha}:=(E_{0})^{\alpha N}\). Then there exists some sequence \(a_{c}:=a_{c}(N,E_{0})\) such that: for any \(\epsilon>0\) and \(N\) sufficiently large, the \((A,Q,E_{\alpha})\)-feasibility problem is satisfiable with high probability for \(\alpha<a_{c}-\epsilon\) and unsatisfiable with high probability for \(\alpha>a_{c}+\epsilon\)._ We take a dual approach: fix an \(\alpha\), and look for the largest \(K\) so that there exists \(\sigma\in Q\) with \(A\sigma\in E\). Applying Theorem 1 readily recovers a similar sharp threshold in the parameter \(K\). Additionally, we gain a significant increase in the generality of the constraint sets. As previously mentioned, our notion of sharp threshold for the margin is _non-asymptotic_; it is well-defined for a fixed dimension. Thus we need not assume that \(E\) has product structure \((E_{0})^{\alpha N}\). Instead, we only need \(E\) to be permutation invariant. (For example: the \(\ell^{2}\)-unit sphere is permutation invariant but not a product set). Recall the notation \(E_{\delta,q}\) for the \(\ell^{q}\)-metric expansion of the set \(E\), \[E_{\delta}:=\left\{x\in\mathbb{R}^{\alpha N}\ :\ d_{\infty}(x,E)\leq\delta\right\}\,.\] Our result is the following immediate consequence of Theorem 1: **Theorem 6** (Sharp threshold for the margin of the generalized perceptron).: _Let \(2<q\leq\infty\), \(Q\) be a feasible set and \(E\) be permutation invariant.
_Then there exists some sequence \(K_{c}:=K_{c}(N)\) such that: for any \(\epsilon>0\) and \(N\) sufficiently large, the \((A,Q,E_{K})\)-feasibility problem is unsatisfiable with high probability for \(K<K_{c}-\epsilon\) and satisfiable with high probability for \(K>K_{c}+\epsilon\)._

Note that the setting considered here is a significant generalization of the setting of Theorem 5. Not only do we allow for permutation-invariant constraint sets rather than just product sets, we are also able to take \(Q\) to be, e.g., the sphere. While the positive spherical perceptron is already well-understood [54, 53, 48], to the best of our knowledge a sharp threshold result was not previously known for the negative spherical perceptron. (The "positive" and "negative" spherical perceptron refer to parameter regimes for \(\alpha\) for which \(K_{c}\) is positive or negative, respectively. The negative perceptron is conjectured to exhibit a property called "full replica symmetry breaking", which makes it difficult to analyze.)

A natural question is whether these two different notions of sharp threshold considered in Theorem 5 and Theorem 6 coincide when they are both defined--i.e. when \(E=(E_{0})^{\alpha N}\) for some \(E_{0}\subset\mathbb{R}\). Very loosely speaking, if \(K_{c}\) and \(\boldsymbol{\alpha}_{\star}\) are of the same order, then a sharp threshold in the margin and a sharp threshold in the capacity are equivalent. This is called the "proportional regime" in constraint satisfaction literature. For the familiar reader, a slightly more precise statement: the log-number of satisfying \(\sigma\) needs to decrease at asymptotically the same rate if we add \(\delta N\) new constraints or decrease \(K\) by \(\delta\), for any positive constant \(\delta\). A formal and rigorous statement of this is available in the special case of \(Q=N^{-1/2}\{-1,+1\}^{N}\) and \(E_{0}=[-K,K]\), given by Lemma 3.2 of [6]. We conclude with a final generalization: Theorem 2--the extension of our main theorem to block-symmetric constraint sets--immediately yields a sharp threshold for the margin of perceptron problems with "random labels," such as the model raised in [36]. This corresponds to a feasibility problem with constraint set \(E=(E_{0})^{M_{1}}\times(E_{0}^{c})^{M-M_{1}}\) and \(M_{1}\asymp M\).
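To make the dual viewpoint concrete at toy scale, the following sketch (ours; the helper `margin` is hypothetical and not from any cited implementation) fixes \(\alpha\) and computes the largest margin \(K\) for which the \((A,Q,E_{K})\)-problem with \(E_{K}=[K,\infty)^{\alpha N}\) is feasible, namely \(\max_{\sigma\in Q}\min_{i}\langle A_{i},\sigma\rangle\):

```python
import itertools
import numpy as np

def margin(A):
    """K* = max over sigma in N^{-1/2}{-1,+1}^N of min_i <A_i, sigma>."""
    N = A.shape[1]
    best = -np.inf
    for signs in itertools.product([-1.0, 1.0], repeat=N):
        sigma = np.array(signs) / np.sqrt(N)
        best = max(best, np.min(A @ sigma))
    return best

rng = np.random.default_rng(1)
N, alpha = 10, 2.0
M = int(alpha * N)  # number of constraints
samples = [margin(rng.standard_normal((M, N))) for _ in range(50)]
print(f"K* ~ {np.mean(samples):.3f} (std {np.std(samples):.3f})")
```

For \(\alpha\) well above the \(K=0\) capacity, as here, the computed margin will typically be negative, i.e. this toy run sits in the negative-perceptron regime discussed above.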
### Matrix Balancing

The "Matrix Spencer" conjecture of R. Meka asks the following: given \(N\) symmetric matrices \(A_{1},\ldots,A_{N}\) of dimension \(d\times d\), each of operator norm at most one, determine the feasibility of finding \(\sigma\in\frac{1}{\sqrt{N}}\left\{-1,1\right\}^{N}\) such that \[\left\|\sum_{i}\sigma_{i}A_{i}\right\|_{\text{op}}\leq 1\,.\] This problem has interesting applications to quantum communication complexity [34] and graph sparsification [16, 45]. There has been some recent progress [34, 11, 24, 38] in the low-rank setting. Very recently, the random version of this problem has been considered; a lower bound by the first moment method is given in Theorem 1.13 of [37]. Due to the extremely strong lower-tail concentration of the operator norm of a GOE, the main regime of interest is \(N\asymp d^{2}\). Only when \(N\) is at least this large does optimizing over \(\sigma\) allow \(\|\sum_{i}\sigma_{i}A_{i}\|\) to be changed to first order; conversely, if \(N\) is much larger, the discrepancy will be vanishing [10].

**Theorem 7** (Sharp threshold for matrix balancing).: _Let \(A_{1},\ldots,A_{N}\) be \(d\times d\) GOE matrices, and define_ \[\operatorname{disc}(A_{1},\ldots,A_{N}):=\min_{\sigma\in N^{-1/2}\{-1,+1\}^{N}}\frac{1}{\sqrt{d}}\left\|\sum\sigma_{i}A_{i}\right\|_{\mathrm{op}}\,.\] _Then, for \(N\asymp d^{2}\),_ \[\frac{\mathbb{E}\left[\operatorname{disc}(A_{1},\ldots,A_{N})\right]}{\sqrt{\operatorname{Var}\left(\operatorname{disc}(A_{1},\ldots,A_{N})\right)}}\geq\Omega\left(\sqrt{d}\ \right)\,. \tag{2.1}\]

The proof is deferred to the end of the next section. Matrix balancing can be seen as an integer feasibility problem as follows: flatten each matrix \(A_{i}\) into a \(d^{2}\times 1\) column vector and let \(A=(A_{1},\ldots,A_{N})\). Let the constraint set \(E\subset\mathbb{R}^{d^{2}}\) be the flattening of the \(d\times d\) operator norm ball, and let \(Q\) be the discrete cube \(N^{-1/2}\left\{-1,+1\right\}^{N}\). However, matrix balancing is not easily quantified in terms of the \(\ell^{q}\)-margin and does not quite fit nicely into the framework of Theorem 1 for two reasons. First, \(E\) does not have much permutation symmetry since the operator norm of a matrix is only invariant under permuting rows or columns, not arbitrary entries. Second, Theorem 1 would give results on the distance to \(E\) in terms of entry-wise norms of the matrix \(M\), which is not quite right. Nonetheless, the ideas behind the proof of Theorem 1 are extremely general and easy modifications suffice here.

## 3. Preliminaries

Before proving our main theorems, we introduce our main tools as well as some technical observations that will help us apply them. We begin with the celebrated Gaussian Poincare inequality.

**Lemma 1** (Gaussian Poincare).: _Let \(n\) be finite, \(f:\mathbb{R}^{n}\to\mathbb{R}\) be an absolutely continuous function, and \(\gamma^{n}\) denote the standard Gaussian measure on \(\mathbb{R}^{n}\)._ \[\operatorname{Var}\left(f\right)\leq\mathbb{E}_{\gamma^{n}}\left[\|\nabla f\|_{2}^{2}\right]\,.\]

The Gaussian Poincare inequality is often quite useful, but sometimes fails to give optimal rates. In his influential monograph [20], Chatterjee gives the name "superconcentration" to the variance of a random variable being far smaller than the Poincare inequality implies. Chatterjee also shows the equivalence of superconcentration to the fascinating properties of "chaos" and "multiple valleys." It remains a major open problem in this area to establish general methods for providing polynomial improvements over the Gaussian Poincare inequality. However, there is a general tool for obtaining logarithmic improvements:

**Lemma 2** (Talagrand's \(L^{1}\)-\(L^{2}\) inequality; Theorem 5.1 of [20]).: _There is some constant \(C\) so that the following holds. Let \(\gamma^{n}\) be the Gaussian measure on \(\mathbb{R}^{n}\) for any \(n\), and \(f:\mathbb{R}^{n}\to\mathbb{R}\) any absolutely continuous function._ \[\operatorname{Var}\left(f\right)\leq C\sum_{i=1}^{n}\|\partial_{i}f\|_{L^{2}(\gamma^{n})}^{2}\left(1+\log\left(\frac{\|\partial_{i}f\|_{L^{2}(\gamma^{n})}}{\|\partial_{i}f\|_{L^{1}(\gamma^{n})}}\right)\right)^{-1}\,. \tag{3.1}\] _In particular,_ \[\operatorname{Var}\left(f\right)\leq C\left(\sum_{i=1}^{n}\|\partial_{i}f\|_{L^{2}(\gamma^{n})}^{2}\right)\left(1+\frac{1}{2}\log\left(\frac{\sum_{i}\|\partial_{i}f\|_{L^{2}(\gamma^{n})}^{2}}{\sum_{i}\|\partial_{i}f\|_{L^{1}(\gamma^{n})}^{2}}\right)\right)^{-1}\,. \tag{3.2}\]

The usual formulation of Talagrand's inequality is Eq. (3.1). Here, Eq.
(3.2) will be more convenient for us. It readily follows from Eq. (3.1) by applying Jensen's inequality to the function \(g(x)=(1+\log(x)/2)^{-1}\), which is concave on \((0,1)\). The details can be found in the proof of Theorem 5.4 of [20]. Talagrand's inequality is based on hypercontractivity--a way of quantifying the extreme smoothing properties of the heat flow. Roughly speaking, the Poincare inequality can fail to yield optimal rates because it forces a heavy quadratic penalty on values of \(X\) where \(f\) has large derivative, even if the Gaussian measure assigns little mass to these locations. Hypercontractivity allows for mitigation of this penalty.

We plan to apply Talagrand's inequality to the variance of the margin. This requires differentiating the margin. Since the margin is defined in terms of a distance between two sets, this will consist of two tasks. First, interchanging a derivative and an infimum. Second, differentiating the \(\ell^{q}\) distance. The former will be accomplished by the classical "envelope theorem" of Milgrom and Segal (Theorem 1 in [40]).

**Lemma 3** (Envelope Theorem).: _Let \(f(x,t)\) be a map \(f:X\times[0,1]\to\mathbb{R}\) where \(X\) is a subset of \(\mathbb{R}^{n}\). Define the value function_ \[V(t):=\sup_{x\in X}f(x,t)\,.\] _For any \(t\in(0,1)\) and any \(x^{*}\in\operatorname*{arg\,max}_{x\in X}f(x,t)\), if \(V^{\prime}(t)\) and the partial derivative \(f_{t}(x^{*},t)\) both exist,_ \[V^{\prime}(t)=f_{t}(x^{*},t)\,.\]

For the latter task of differentiating the \(\ell^{q}\) distance, we collect some easy regularity results on \(\ell^{q}\) distances. In what follows, let \(E\subset\mathbb{R}^{M}\) be an arbitrary closed set and define \[f_{q}(x):=d_{q}(x,E)\,.\] Recall our notation that a function \(F:\mathbb{R}^{M}\to\mathbb{R}\) is called \((L,q)\)-Lipschitz if it is Lipschitz continuous with constant \(L\) with respect to \(\ell^{q}\) perturbations to the input.

**Proposition 1**.: _Fix \(q\in[2,\infty)\) and a non-empty closed set \(E\subset\mathbb{R}^{M}\). Let \(f_{q}(x):=d_{q}(x,E)\) be the \(\ell^{q}\) distance between \(x\) and \(E\)._

* \(f_{q}\) _is_ \((1,q)\)_-Lipschitz continuous everywhere._
* \(f_{q}\) _is absolutely continuous everywhere and continuously differentiable for Lebesgue a.e._ \(x\)_._
* _Let_ \(x\) _be a point of differentiability for_ \(f_{q}(x)\)_. Then there is a unique_ \(\ell^{q}\)_-projection of_ \(x\) _onto_ \(E\)_; denote it by_ \(z\)_. Letting_ \(v:=x-z\) _and_ \(i\in[M]\)_, we additionally have:_ \[\big{|}(\nabla f_{q}(x))_{i}\big{|}=\frac{|v_{i}|^{q-1}}{\|v\|_{q}^{q-1}}\,.\]

These are standard facts, but we give a proof for completeness.
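Before the proof, the third claim is easy to sanity-check numerically. The sketch below (ours, for illustration only) takes \(E\) to be a finite set of points, so that the \(\ell^{q}\)-projection is just the nearest point, and compares the closed-form gradient against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(2)
q, M = 3.0, 5
E = rng.standard_normal((10, M))  # a finite closed "constraint set"
x = rng.standard_normal(M)

def f_q(p):
    return min(np.linalg.norm(p - z, ord=q) for z in E)

# The l^q-projection of x onto E (unique at a generic random x).
z = min(E, key=lambda z: np.linalg.norm(x - z, ord=q))
v = x - z
grad = np.sign(v) * np.abs(v) ** (q - 1) / np.linalg.norm(v, ord=q) ** (q - 1)

h = 1e-6
fd = np.array([(f_q(x + h * e) - f_q(x - h * e)) / (2 * h) for e in np.eye(M)])
print(np.max(np.abs(grad - fd)))  # agreement up to finite-difference error
```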
Proof of Proposition 1.: Let \(z\in E\) be arbitrary and let \(x\) and \(y\) be outside \(E\). Fix \(q\geq 2\) and set \(d:=d_{q}\). By triangle inequality, \[d(x,E)\leq d(x,z)\leq d(x,y)+d(y,z)\,.\] Taking an infimum over \(z\in E\) yields \(d(x,E)-d(y,E)\leq d(x,y)\). Reversing the roles of \(x\) and \(y\) yields a symmetric bound, completing the proof of the first claim. By equivalence of norms (i.e. since \(\|w\|_{q}\leq\|w\|_{2}\leq\sqrt{M}\,\|w\|_{q}\) for any \(q\in[2,\infty]\)), we have that \(f_{q}\) is Lipschitz continuous with respect to the Euclidean norm. This implies a.e. differentiability by Rademacher's theorem, as well as absolute continuity everywhere. Continuity of the derivative will follow by explicit computation. Let \(x\) be a point of differentiability of \(f_{q}\) and \(y\) be an \(\ell^{q}\)-metric projection of \(x\) onto \(E\). By definition of the derivative, there exists a unique vector \(v\) such that for \(w\) sufficiently near \(x\), \[f_{q}(w)-f_{q}(x)=v\cdot(w-x)+\mathrm{o}\left(\|w-x\|\right)\,. \tag{3.3}\] Since we just showed \(f_{q}\) is \((1,q)\)-Lipschitz, then \(\|v\|_{q^{*}}\leq 1\) where \(q^{*}\) is the conjugate exponent to \(q\). Letting \(w=x+\epsilon(y-x)\), we obtain by homogeneity of the \(\ell^{q}\) norm \[|f_{q}(w)-f_{q}(x)|=\epsilon\|y-x\|_{q}\,.\] Rearranging Eq. (3.3), \[|v\cdot(w-x)|=\epsilon\|y-x\|_{q}+\mathrm{o}\left(\epsilon\right)\,. \tag{3.4}\] Since \(\|v\|_{q^{*}}\leq 1\), we have by Holder's inequality \[|v\cdot(w-x)|\leq\|w-x\|_{q}\|v\|_{q^{*}}=\epsilon\|y-x\|_{q}\|v\|_{q^{*}}\leq\epsilon\|y-x\|_{q}\,. \tag{3.5}\] But, Eq. (3.4) shows that Holder's inequality Eq. (3.5) actually holds with equality, up to some vanishing error. For \(q<\infty\), it is a standard fact [51] that this implies \[|v_{i}|=\frac{|y-x|_{i}^{q-1}}{\|y-x\|_{q}^{q-1}}\,.\] In summary, given a projection \(y\) of \(x\) onto \(E\), we can explicitly solve for the derivative. It is clear that if there is another projection \(y^{\prime}\neq y\), this would contradict the uniqueness of \(v\). With the formula for the derivative of \(f_{q}\) established, continuity of the derivative is now clear by inspection.

Although similar statements hold for \(q=\infty\), there is a lack of uniqueness of the projection onto \(E\), which makes direct analysis more difficult--although definitely still possible. We sidestep this issue with the standard approach of simply taking \(q\) large. Let us make this precise.

**Proposition 2**.: _Let \(X\) and \(Y\) be random variables with finite second moments. Then:_ \[\operatorname{Var}\left(X+Y\right)\leq 2\left(\operatorname{Var}\left(X\right)+\operatorname{Var}\left(Y\right)\right) \tag{3.6}\] _and_ \[\operatorname{Var}\left(X\right)\leq 2\left(\mathbb{E}\left[|X-Y|^{2}\right]+\operatorname{Var}\left(Y\right)\right) \tag{3.7}\]

Proof.: Both are trivial consequences of the inequality \((a+b)^{2}\leq 2(a^{2}+b^{2})\), valid for all real \(a\) and \(b\).

**Proposition 3**.: _For any \(q\geq\log(M)^{2}\), we have_ \[\mathcal{M}_{\infty}=\mathcal{M}_{q}\left(1+\mathcal{O}\left(\frac{1}{\log M}\right)\right)\,.\]

In particular, combining Proposition 2 and Proposition 3 will allow us to study the variance of \(\mathcal{M}_{\infty}\) via the variance of \(\mathcal{M}_{\log(M)^{2}}\), which is easier to differentiate due to Proposition 1. We omit the proof of Proposition 3 since it is an immediate application of the classical fact: \[\|\cdot\|_{\infty}=\|\cdot\|_{q}\left(1+\mathcal{O}\left(\frac{1}{\log M}\right)\right)\,.\]

We conclude our preliminary discussion by illustrating the utility of the Poincare inequality with a quick proof of Theorem 7, the sharp threshold for the random matrix balancing problem. The following classical fact is needed.

**Lemma 4**.: _Let \(A\) be a symmetric matrix of full rank with distinct eigenvalues. Let \(\lambda\) be the largest eigenvalue and let \(u\) be the corresponding unique eigenvector of unit norm._ \[\frac{\partial}{\partial A_{ij}}\lambda=u_{i}u_{j} \tag{3.8}\]

Proof of Lemma 4.: Implicitly differentiate the formula \[Au=\lambda u\,,\] and then left multiply by \(u^{t}\). Using that \(u^{t}du=(d\|u\|_{2}^{2})/2=0\), and then using symmetry of \(A\) so that \(u^{t}A=\lambda u^{t}\), we obtain the result.
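Lemma 4 is also easy to verify numerically in directional form: by linearity, \(\frac{d}{dt}\lambda_{\max}(A+tS)\big|_{t=0}=\sum_{ij}u_{i}u_{j}S_{ij}=u^{t}Su\) for any symmetric direction \(S\). A short sketch (ours, illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6
A = rng.standard_normal((d, d)); A = (A + A.T) / 2  # symmetric, a.s. simple spectrum
S = rng.standard_normal((d, d)); S = (S + S.T) / 2  # symmetric direction

u = np.linalg.eigh(A)[1][:, -1]  # unit eigenvector of the top eigenvalue
h = 1e-6
fd = (np.linalg.eigvalsh(A + h * S)[-1] - np.linalg.eigvalsh(A - h * S)[-1]) / (2 * h)
print(abs(fd - u @ S @ u))  # ~1e-9
```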
Proof of Theorem 7.: We apply the classical Gaussian Poincare inequality to \(\operatorname{disc}(A)\). Clearly the operator norm of \(\sum A_{i}\sigma_{i}\) for a particular \(\sigma\) is Lipschitz in \(A\), and thus so is \(\operatorname{disc}(A)\). By Rademacher's theorem, both are then differentiable for almost every \(A\). So, applying Lemma 3, for almost every \(A\) there is a vector \(\sigma^{*}\) with \(\operatorname{disc}(A)=\frac{1}{\sqrt{d}}\|\sum_{i}\sigma_{i}^{*}A_{i}\|_{\mathrm{op}}\) and, for each \(k\in[N]\) and \(i,j\in[d]\), \[\frac{\partial}{\partial(A_{k})_{ij}}\operatorname{disc}(A)=\frac{\sigma_{k}^{*}}{\sqrt{d}}\,u_{i}^{*}u_{j}^{*}\,.\] Here, \(u^{*}\) denotes the (unit) eigenvector associated with the top eigenvalue of \(\sum\sigma_{i}^{*}A_{i}\). Then: \[\operatorname{Var}\left(\operatorname{disc}(A)\right)\leq\mathbb{E}\left[\|\nabla\operatorname{disc}(A)\|_{2}^{2}\right]=\frac{1}{d}\ \|\sigma^{*}\|_{2}^{2}\,\|u^{*}\|_{2}^{4}=\frac{1}{d}\,.\] This is far from sharp and can easily be improved, even polynomially. But it suffices for a sharp threshold. This concludes the upper bound on the variance. The lower bound that \(\operatorname{disc}(A)\geq\Omega\left(1\right)\) follows from a first moment method (see Theorem 1.13 of [37]).

## 4. Concentration of the margin

Proof of Theorem 1.: By Proposition 3 in conjunction with Proposition 2, we may assume without loss of generality that \(q\leq\log(M)^{2}\). In particular, \(q\) is finite (but possibly \(M\)-dependent). Let us verify the differentiability of the margin. Define the function \(g(A,\sigma)=d(A\sigma,E)\). For any matrix \(B\in\mathbb{R}^{M\times N}\), we have by triangle inequality: \[|g(A,\sigma)-g(B,\sigma)|\leq d_{q}(A\sigma,B\sigma)\leq\|A-B\|_{2,q}\|\sigma\|_{2}\leq\|A-B\|_{2,q}\,.\] The final inequality follows from the theorem assumption that the feasible set is bounded in \(\ell^{2}(\mathbb{R}^{N})\). By equivalence of matrix norms, we have that \(g\) is \(\ell^{2}\)-Lipschitz with some finite (possibly \(N\)-dependent) constant. By Rademacher's theorem, \(\nabla_{A}g(A,\sigma)\) exists for almost every \(A\). Similarly, the margin is also Lipschitz and thus differentiable for almost every \(A\). Indeed, by triangle inequality: \[|\mathcal{M}_{q}(A)-\mathcal{M}_{q}(B)|\leq\sup_{\sigma\in Q}\|A\sigma-B\sigma\|_{q}\leq\|A-B\|_{2,q}\,.\] By Proposition 1 and the chain rule, we have for each \(\sigma\in Q\) and almost every \(A\) the identity: \[\big{|}\partial_{A_{ij}}g(A,\sigma)\big{|}=|\sigma_{j}|\frac{|v_{i}|^{q-1}}{\|v\|_{q}^{q-1}}\,,\quad\forall\;i\in[M],\ j\in[N] \tag{4.1}\] where \(z\) is the unique \(\ell^{q}\)-projection of \(A\sigma\) onto \(E\) and \(v=A\sigma-z\). So, for almost every \(A\) and any \(\sigma^{*}\) with \(g(A,\sigma^{*})=\mathcal{M}_{q}(A)\), Lemma 3 yields: \[\big{|}(\nabla\mathcal{M}_{q}(A))_{ij}\big{|}=|\sigma^{*}_{j}|\frac{|v_{i}|^{q-1}}{\|v\|_{q}^{q-1}}\,,\quad\forall\;i\in[M],\ j\in[N]\,.\] Here \(v\) again denotes the difference between \(A\sigma^{*}\) and its unique \(\ell^{q}\)-projection onto \(E\). We turn to bounding the variance of \(\mathcal{M}_{q}\) using Talagrand's \(L^{1}\)-\(L^{2}\) inequality. The quantities involved in Eq. (3.2) of Lemma 2 are: \[a_{ij}^{2}:=\mathbb{E}\left[\big{|}\partial_{A_{ij}}\mathcal{M}_{q}(A)\big{|}\right]^{2}\] \[b_{ij}^{2}:=\mathbb{E}\left[\big{|}\partial_{A_{ij}}\mathcal{M}_{q}(A)\big{|}^{2}\right]\,.\] By Holder's inequality, the \(q\) and \((q-1)\) norm of \(v\) cannot be too far apart: \[\|v\|_{q}^{q-1}\leq\|v\|_{q-1}^{q-1}\leq M^{\frac{1}{q}}\,\|v\|_{q}^{q-1}\,. \tag{4.2}\]
Additionally, by permutation-invariance of the constraint set \(E\), \[a_{ij}^{2}=\mathbb{E}\left[|\sigma^{*}_{j}|\,\frac{|v_{i}|^{q-1}}{\|v\|_{q}^{q-1}}\right]^{2}=\left(\frac{1}{M}\,\mathbb{E}\left[|\sigma^{*}_{j}|\,\frac{\|v\|_{q-1}^{q-1}}{\|v\|_{q}^{q-1}}\right]\right)^{2}\,. \tag{4.3}\] Combining Eq. (4.2) and Eq. (4.3), \[\sum_{ij}a_{ij}^{2}=\frac{1}{M}\sum_{j}\ \mathbb{E}\left[|\sigma^{*}_{j}|\,\frac{\|v\|_{q-1}^{q-1}}{\|v\|_{q}^{q-1}}\right]^{2}\leq M^{\frac{2}{q}-1}\sum_{j}\ \mathbb{E}\left[|\sigma^{*}_{j}|\right]^{2}\leq M^{\frac{2}{q}-1}\,\mathbb{E}\left[\|\sigma^{*}\|_{2}^{2}\right]\leq M^{\frac{2}{q}-1}\,. \tag{4.4}\] The second inequality uses Jensen's inequality termwise, and the last follows from the assumption that the feasible set is bounded. By monotonicity of \(\ell^{p}\) norms, we also have \[\sum_{ij}b_{ij}^{2}=\sum_{j}\ \mathbb{E}\left[|\sigma^{*}_{j}|^{2}\left(\frac{\|v\|_{2(q-1)}}{\|v\|_{q}}\right)^{2(q-1)}\right]\leq\sum_{j}\ \mathbb{E}\left[|\sigma^{*}_{j}|^{2}\right]\leq 1\,. \tag{4.5}\] Finally, it is readily checked that for any \(a>0\), the following is monotone increasing in \(x\) on \([a^{-1},\infty)\): \[f(x)=\frac{x}{1+\frac{1}{2}\log(ax)}\,.\] By Jensen's inequality and Eq. (4.5), we have \(\sum_{ij}a_{ij}^{2}\leq\sum_{ij}b_{ij}^{2}\leq 1\). So, (in order of the inequalities below) applying Eq. (3.2) of Lemma 2, then monotonicity of \(f\), and then Eq. (4.4) yields: \[\operatorname{Var}\left(\mathcal{M}_{q}\right)\leq\frac{C\sum_{ij}b_{ij}^{2}}{1+\frac{1}{2}\log\left(\frac{\sum_{ij}b_{ij}^{2}}{\sum_{ij}a_{ij}^{2}}\right)}\leq\frac{C}{1+\frac{1}{2}\log\left(\frac{1}{\sum_{ij}a_{ij}^{2}}\right)}\leq\frac{C}{1+\frac{1}{2}\log\left(M^{1-\frac{2}{q}}\right)}\,.\]

**Remark 2**.: The reason that it works to apply Holder's inequality so crudely is the following dichotomy: we are already done by Poincare if \(\sum_{ij}b_{ij}^{2}\ll 1\). On the other hand, if \(\sum_{ij}b_{ij}^{2}\approx 1\) then the \(q\) and \(2(q-1)\) norms of \(v\) are not very far apart, so \(v\) must be a highly structured vector, in which case the \(q\) and \(q-1\) norms of \(v\) must also be comparable. Then \(b^{2}\gg a^{2}\) and Talagrand's inequality easily yields a logarithmic improvement.

Proof of Theorem 2.: We adopt the notation and proof of Theorem 1. The first section on differentiability of the margin remains unchanged. Partition \([M]\) into \(I_{1},\ldots,I_{k}\) so that \(|I_{j}|=M_{j}\) by setting \(I_{1}=[M_{1}]\), \(I_{2}=(M_{1}+1,\ldots,M_{1}+M_{2})\), and so on. Let \(w^{(1)}:=v_{I_{1}}\) be the \(M_{1}\)-dimensional vector induced by taking the subset of \(v\) with indices in \(I_{1}\). Define \(w^{(j)}:=v_{I_{j}}\) similarly. Then the concatenation \((w^{(1)},\ldots,w^{(k)})\) is simply \(v\). Arguing as in the proof of Theorem 1, it is easy to check by permutation symmetry: \[\sum_{i\in I_{t}}\sum_{j}a_{ij}^{2}=\frac{1}{M_{t}}\sum_{j}\mathbb{E}\left[|\sigma_{j}^{*}|\left(\frac{\|w^{(t)}\|_{q-1}}{\|v\|_{q}}\right)^{q-1}\right]^{2}\leq M_{t}^{\frac{2}{q}-1}\sum_{j}\mathbb{E}\left[|\sigma_{j}^{*}|\left(\frac{\|w^{(t)}\|_{q}}{\|v\|_{q}}\right)^{q-1}\right]^{2}\leq M_{t}^{\frac{2}{q}-1}\,.\] Note that by Jensen's inequality \(\sum_{ij}b_{ij}^{2}\geq\sum_{ij}a_{ij}^{2}\) always. Thus, by Eq.
(3.2) of Lemma 2, we have: \[\operatorname{Var}\left(\mathcal{M}_{q}\right) \leq C\left(\sum_{ij}b_{ij}^{2}\right)\left(1+\frac{1}{2}\log \left(\frac{\sum_{ij}b_{ij}^{2}}{\sum_{ij}a_{ij}^{2}}\lor 1\right)\right)^{-1}\] \[\leq C\left(1+\frac{1}{2}\log\left(\frac{1}{\sum_{t=1}^{k}M_{t}^ {\frac{2}{q}-1}}\lor 1\right)\right)^{-1}\] \[\leq C\left(1+\frac{1}{2}\log\left(\frac{\left(\min_{t\in[k]}M_ {t}\right)^{1-\frac{2}{q}}}{k}\lor 1\right)\right)^{-1}\,.\] From the first to the second line we have used monotonicity in \(x\), for any \(a>0\) and all \(x\in\mathbb{R}\), of the function \[f(x)=\frac{x}{1+\frac{1}{2}\log(ax\lor 1)}\,.\] ## Acknowledgments I am deeply grateful to Jonathan Niles-Weed for invaluable support, advice, and mentorship throughout all stages of this project. I also benefited from useful and encouraging conversations with Paul Bourgade, Guy Bresler, Brice Huang, Mark Sellke, Joel Spencer, Nike Sun, Konstantin Tikhomirov, and Will Perkins during the long exploratory phase of this project. Finally, the monograph [20] of Chatterjee was an important source of inspiration for this work. This article is adapted from a chapter of the author's dissertation at NYU and was supported by a MacCracken fellowship, an NSF GRFP grant, and NSF grant DMS-2015291.
2309.03320
CoNeS: Conditional neural fields with shift modulation for multi-sequence MRI translation
Multi-sequence magnetic resonance imaging (MRI) has found wide applications in both modern clinical studies and deep learning research. However, in clinical practice, it frequently occurs that one or more of the MRI sequences are missing due to different image acquisition protocols or contrast agent contraindications of patients, limiting the utilization of deep learning models trained on multi-sequence data. One promising approach is to leverage generative models to synthesize the missing sequences, which can serve as a surrogate acquisition. State-of-the-art methods tackling this problem are based on convolutional neural networks (CNN) which usually suffer from spectral biases, resulting in poor reconstruction of high-frequency fine details. In this paper, we propose Conditional Neural fields with Shift modulation (CoNeS), a model that takes voxel coordinates as input and learns a representation of the target images for multi-sequence MRI translation. The proposed model uses a multi-layer perceptron (MLP) instead of a CNN as the decoder for pixel-to-pixel mapping. Hence, each target image is represented as a neural field that is conditioned on the source image via shift modulation with a learned latent code. Experiments on BraTS 2018 and an in-house clinical dataset of vestibular schwannoma patients showed that the proposed method outperformed state-of-the-art methods for multi-sequence MRI translation both visually and quantitatively. Moreover, we conducted spectral analysis, showing that CoNeS was able to overcome the spectral bias issue common in conventional CNN models. To further evaluate the usage of synthesized images in clinical downstream tasks, we tested a segmentation network using the synthesized images at inference.
Yunjie Chen, Marius Staring, Olaf M. Neve, Stephan R. Romeijn, Erik F. Hensen, Berit M. Verbist, Jelmer M. Wolterink, Qian Tao
2023-09-06T19:01:58Z
http://arxiv.org/abs/2309.03320v3
# CoNeS: Conditional neural fields with shift modulation for multi-sequence MRI translation

###### Abstract

Multi-sequence magnetic resonance imaging (MRI) has found wide applications in both modern clinical studies and deep learning research. However, in clinical practice, it frequently occurs that one or more of the MRI sequences are missing due to different image acquisition protocols or contrast agent contraindications of patients, limiting the utilization of deep learning models trained on multi-sequence data. One promising approach is to leverage generative models to synthesize the missing sequences, which can serve as a surrogate acquisition. State-of-the-art methods tackling this problem are based on convolutional neural networks (CNN) which usually suffer from spectral biases, resulting in poor reconstruction of high-frequency fine details. In this paper, we propose Conditional Neural fields with Shift modulation (CoNeS), a model that takes voxel coordinates as input and learns a representation of the target images for multi-sequence MRI translation. The proposed model uses a multi-layer perceptron (MLP) instead of a CNN as the decoder for pixel-to-pixel mapping. Hence, each target image is represented as a neural field that is conditioned on the source image via shift modulation with a learned latent code. Experiments on BraTS 2018 and an in-house clinical dataset of vestibular schwannoma patients showed that the proposed method outperformed state-of-the-art methods for multi-sequence MRI translation both visually and quantitatively. Moreover, we conducted spectral analysis, showing that CoNeS was able to overcome the spectral bias issue common in conventional CNN models. To further evaluate the usage of synthesized images in clinical downstream tasks, we tested a segmentation network using the synthesized images at inference. The results showed that CoNeS improved the segmentation performance when some MRI sequences were missing and outperformed other synthesis models. We concluded that neural fields are a promising technique for multi-sequence MRI translation. Our code is available at [https://github.com/cyjdswx/CoNeS.git](https://github.com/cyjdswx/CoNeS.git).

Keywords: Neural fields, Magnetic Resonance Imaging, generative models, image-to-image translation, segmentation

## 1 Introduction

Multi-sequence magnetic resonance imaging (MRI) plays a key role in radiology and medical image computing. One advantage of MRI is the availability of various pulse sequences, such as T1-weighted MRI (T1), T2-weighted MRI (T2), T1-weighted with contrast (T1ce), and T2-fluid-attenuated inversion recovery MRI (FLAIR), which can provide complementary information to clinicians (Cherubini et al., 2016). The importance of multi-sequence MRI has also been demonstrated in recent deep learning research (Cercignani and Bouyagoub, 2018), which showed that the more sequences are used for segmentation, the better the results that can be obtained. However, due to clinical restrictions on the use of contrast agents and the diversity in imaging protocols in different medical centers, it is difficult and time-consuming to always obtain exactly the same MRI sequences for training and inference, which may damage the generalization and performance of deep learning segmentation models. One way to tackle this problem is to generate missing sequences from existing images based on the information learned from a set of paired images, known as image-to-image translation.
Like in other computer vision tasks, convolutional neural networks (CNNs) with an encoder and decoder architecture are normally used for this specific task (Sevetlidis et al., 2016; Joyce et al., 2017; Wei et al., 2019). Despite the significant improvement over traditional non-deep-learning methods, these methods still suffer from the limitation of a pixel-wise loss function, such as the L1 or MSE loss, which tends to produce blurry results with undesirable loss of details in image structures (Isola et al., 2017; Dalmaz et al., 2022). To overcome this limitation, generative adversarial networks (GANs) were introduced for image-to-image translation and rapidly became a benchmark training protocol for medical image translation (Isola et al., 2017; Nie et al., 2018; Armanious et al., 2020; Sharma and Hamarneh, 2019). GANs improve translation results both visually and quantitatively owing to the adversarial learning loss, which penalizes the images that are correctly classified by the discriminator. However, research showed that generative models that use a CNN as a backbone network, consisting of ReLU activation functions and transposed or up-convolutional layers, usually suffer from spectral biases (Rahaman et al., 2019; Durall et al., 2020). Therefore, these generative models fit low-frequency signals first and may again fail to capture details in image structures during training. Transformers, which instead use multi-head self-attention blocks and multi-layer perceptrons (MLPs), have gained tremendous attention in computer vision research (Liu et al., 2023; Jiang et al., 2021). Due to the absence of convolutional layers, transformers show great potential for preserving fine details and long-range dependencies and have recently been applied to medical image translation (Liu et al., 2023; Dalmaz et al., 2022). However, despite the numerous efforts made by these studies, such as hybrid architectures and image patch-based processing, the training of transformers is still considered heavy and data-demanding (Dosovitskiy et al., 2021; Esser et al., 2021). The inherently high computational complexity of the transformer block and the expensive memory cost of low-level tasks, such as denoising and super-resolution, further complicate the application of transformers in medical image translation (Chen et al., 2021). To address these limitations, we propose image-to-image translation using neural fields (Xie et al., 2022). In contrast to CNN or transformer-based methods, a neural field represents the target images on a continuous domain using a coordinate-based network, which can be conditioned on the information extracted from the source images. We previously proposed an image-to-image translation approach based on neural fields (Chen et al., 2023). Here, we substantially extend this model by proposing **C**onditional **N**eural fields with **S**hift modulation (CoNeS). In contrast to traditional deep learning computer vision techniques, CoNeS parameterizes the target images as neural fields that can be queried on a grid to provide pixel-wise predictions. Specifically, we use an MLP as the decoder to map the voxel coordinates to the intensities on the target images. To capture instance-specific information, we condition the neural fields on the latent codes extracted from the source images. By applying shift modulation, the neural fields can be further varied across the coordinates to enhance their ability to preserve high-frequency signals.
Although plenty of work has shown great progress in medical image translation, most previous works have been evaluated based on image similarity metrics, and only a few papers have evaluated the benefits of using synthesized images for downstream analysis. Amirrajab et al. (2023) fine-tuned a segmentation model with synthesized cardiac images to improve the performance on different modalities; Skandarani et al. (2020) introduced a variational auto-encoder (VAE) based network for image-translation-based data augmentation to improve the generalization capabilities of a segmentation model. In practice, however, it would be more straightforward and beneficial to use the synthesized images directly without fine-tuning or training a new network. In this study, we perform downstream experiments using a pre-trained segmentation model to further evaluate different image translation models.

The main contributions of our work are:

* We developed a novel generative adversarial network for medical image translation based on conditional neural fields. In the proposed model, we build neural fields on the coordinates across the image to fit the target image. To improve the performance and the stability of the model, we introduce shift modulation, which conditions the neural fields on the output of a hypernetwork.
* We evaluated the proposed model by synthesizing various MRI sequences on two paired multi-sequence brain MRI datasets. The results show that the proposed model outperforms state-of-the-art methods both visually and quantitatively. We additionally performed spectral analysis, which indicates that our method is not affected by spectral biases in the way that traditional CNN-based generative models are.
* We compared different medical image translation models in downstream tasks by testing a segmentation model with the synthesized images. Our experiments indicate that by applying image translation, we can improve segmentation performance for incomplete MRI acquisition, and our synthesized images outperform the state-of-the-art methods.

## 2 Related work

**Missing MRI sequences.** Several studies have dealt with the missing MRI sequences problem in medical image analysis (Azad et al., 2022a). One early idea was to translate all available sequences into a shared latent space for downstream analysis. Following this idea, Havaei et al. (2016) developed the Hetero-Modal Image Segmentation (HeMIS) method, where sequence-specific convolutional layers are applied to each image sequence to establish a common representation, which enables robust segmentation when some images are missing. Later, Hu et al. (2020) and Azad et al. (2022b) introduced knowledge distillation to improve the segmentation performance in the same situation. In such a model, a network using all modalities as input (teacher network) and another network using a subset of them (student network) are optimized synchronously. During training, information from all modalities is distilled from the teacher to the student network to improve the performance of the student network. Recently, Liu et al. (2023b) developed a transformer-based model for Alzheimer's classification that can handle missing data scenarios. All these models managed to build a robust model for situations in which only a subset of the modalities is available. However, since the missing MRIs are not explicitly constructed, it is still difficult for medical doctors to interpret and make decisions with these methods in clinical practice.
**Image-to-image translation.** Image-to-image translation, on the contrary, focuses on synthesizing missing images from existing ones based on prior knowledge learned from the dataset. By predicting the missing images, clinicians can offer comprehensive diagnoses and also find an explanation of the results in downstream analysis. Recent progress in generative modeling, such as generative adversarial networks (GAN), variational auto-encoders (VAE), and diffusion models, has shown extraordinary capabilities in image generation (Isola et al., 2017; Kawar et al., 2022). In the domain of medical image translation, Dar et al. (2019) proposed pGAN based on a conditional GAN combined with an adversarial loss, a pixel-wise loss, and a perceptual loss. Sharma and Hamarneh (2019) proposed a multi-modal GAN (MM-GAN) that extends the idea by using multi-modality imputation, which ensures the ability to handle multiple input and output modalities. Inspired by the recent progress of the transformer model, Dalmaz et al. (2022) proposed ResViT based on a hybrid architecture that consists of convolutional operators and transformer blocks. Although promising, most studies focus on the image quality of the output images, and only a few have extended their work to the use of synthesized images in downstream tasks (Iglesias et al., 2013; Van Tulder and de Bruijne, 2015; Amirrajab et al., 2023).

**Neural fields.** Neural fields, also known as implicit neural representations (INRs) or coordinate-based networks, are increasingly popular in computer vision and medical image analysis (Xie et al., 2022; Molaei et al., 2023). The core idea behind neural fields is that neural networks are not used to learn an operator between signals, as in CNNs or vision transformers, but to _represent_ a complex signal on a continuous spatial or spatiotemporal domain. Neural fields can be used to solve a wide range of problems, including 3D scene reconstruction and generative shape modeling. Park et al. (2019) proposed DeepSDF, which learns a continuous signed distance function to represent 3D surfaces. Wolterink et al. (2022) proposed to use INRs to represent a transformation function for deformable image registration. One distinguishing benefit of using neural fields is the capability to handle data with variable resolution because of the absence of up-sampling architectures. Inspired by this, Chen et al. (2021b) proposed a Local Implicit Image Function (LIIF) for image super-resolution, which also shows the potential for handling image generation. Recently, Shaham et al. (2021) developed Spatially-Adaptive Pixelwise Networks (ASAP-Net), which is most relevant to our work, to speed up image-to-image translation by locally conditioned MLPs. Different from prior work, the neural fields in CoNeS are conditioned on a latent code varying across the coordinates through shift modulation, inspired by Dupont et al. (2022). Specifically, CoNeS consists of a global MLP shared by the whole image and a varying latent code, which determines pixel-wise affine transformations to modulate the neural fields.

## 3 Methods

### Model overview

To formalize the problem, suppose we have a set of MRI sequences: \(\mathcal{M}=\{M^{1},M^{2},\ldots,M^{N}\}\). Suppose all sequences are available in the training dataset, while \(N_{t}\) sequences are missing in the test dataset.
We can denote the sequence set as \(\mathcal{M}=\{\mathcal{M}_{t},\mathcal{M}_{s}\}\), where \(\mathcal{M}_{t}=\{M_{t}^{1},\ldots,M_{t}^{N_{t}}\}\) refers to the missing sequences (target domain) and \(\mathcal{M}_{s}=\{M_{s}^{1},\ldots,M_{s}^{N-N_{t}}\}\) refers to the sequences that are available (source domain). For each patient, let \(\mathbf{I}_{t}=\{I_{t}^{i}\}_{i=0}^{N_{t}}\), where \(I_{t}^{i}\in M_{t}^{i}\), be the missing images and \(\mathbf{I}_{s}=\{I_{s}^{i}\}_{i=0}^{N-N_{t}}\), where \(I_{s}^{i}\in M_{s}^{i}\), be the available images. We assume that all images from an instance are co-registered so that there is no extra deformation between the images. As a result, our problem is identical to learning a mapping function \(\Phi:\mathbf{I}_{s}\rightarrow\mathbf{I}_{t}\) using the training dataset, which can be applied to all patients in the test dataset to generate the corresponding missing images \(\mathbf{I}_{t}\).

Figure 1: The overall architecture of CoNeS. The generator in the proposed model consists of a hypernetwork and a coordinate-based network. We condition the coordinate-based network on a varying latent code, which is generated by the hypernetwork, across coordinates via shift modulation. The conditional discriminator, which takes both the source images and real/fake images as input, further improves the performance of the generator. The proposed model is optimized using a reconstruction loss \(L_{\text{rec}}\), an adversarial loss \(L_{\text{adv}}\), a feature matching loss \(L_{\text{fm}}\) and latent code regularization \(L_{\text{reg}}\).

Similar to traditional GAN models, the proposed model consists of a generator that performs the mapping and a discriminator that aims to tell the real target image and the synthesized one apart. As introduced in pix2pix (Isola et al., 2017), we apply a conditional discriminator that takes the source images as extra input and aims to classify each image patch in the proposed model. The overall architecture of our approach is shown in Fig. 1. In the following section, we introduce how to use a coordinate-based network to model conditional neural fields for image-to-image translation.

### Coordinate-based network

In a typical neural field algorithm, the target quantity is represented as a continuous function with respect to coordinates. Specifically in our problem, we train an MLP that takes coordinates as input and outputs image intensities. Given a normalized d-dimensional coordinate \(\mathbf{x}\in\mathbb{R}^{d}\), where each component lies in [-1,1], we use \(\mathbf{t}_{\mathbf{x}}=\{t_{\mathbf{x}}^{i}\}\) and \(\mathbf{s}_{\mathbf{x}}=\{s_{\mathbf{x}}^{i}\}\) to denote the intensities at position \(\mathbf{x}\), where \(t_{\mathbf{x}}^{i}\) refers to the intensity value in \(I_{t}^{i}\) and \(s_{\mathbf{x}}^{i}\) refers to the intensity value in \(I_{s}^{i}\), respectively. Hence, the function \(\Phi\) can be formulated as a pixel-wise mapping that generates intensities over a d-dimensional space: \[\mathbf{t}_{\mathbf{x}}=\Phi(\mathbf{x};\mathbf{z}), \tag{1}\] where \(\mathbf{z}\) is a latent code that contains instance-specific information. By applying the mapping function \(\Phi\) to all the coordinates, we can obtain the estimated target images \(\hat{\mathbf{I}}_{t}=\{\hat{I}_{t}^{i}\}\).

A network directly operating on the Cartesian coordinates tends to fit the low-frequency signals first and, as a result, fails to reconstruct the high-frequency image details (Mildenhall et al., 2021; Rahaman et al., 2019). One popular approach to overcome this problem is to map the Cartesian coordinates to a higher dimensional space via positional encoding \(\gamma:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\). In the proposed model, we use sinusoidal functions to perform positional encoding as follows (Zhong et al., 2020): \[\gamma(\mathbf{x})=[\gamma_{1}(\mathbf{x}),\gamma_{2}(\mathbf{x}),\dots,\gamma_{m}(\mathbf{x})], \tag{2}\] \[\gamma_{2i}(\mathbf{x})=\sin(2^{i-1}\pi\mathbf{x}), \tag{3}\] \[\gamma_{2i+1}(\mathbf{x})=\cos(2^{i-1}\pi\mathbf{x}), \tag{4}\] where \(m\) is a frequency parameter. Positional encoding can also be seen as Fourier feature mapping, which enables the network to fit the neural field containing high-frequency variation.
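The encoding of Eqs. (2)-(4) is only a few lines in practice. The sketch below is an illustration of ours, not the official implementation; in particular, reading \(m\) as the number of frequency bands, so that a \(d\)-dimensional coordinate maps to \(2md\) features, is our interpretation:

```python
import torch

def positional_encoding(x: torch.Tensor, m: int = 6) -> torch.Tensor:
    # x: (..., d) normalized coordinates in [-1, 1]; output: (..., 2 * m * d).
    feats = []
    for i in range(1, m + 1):
        feats.append(torch.sin(2 ** (i - 1) * torch.pi * x))  # Eq. (3)
        feats.append(torch.cos(2 ** (i - 1) * torch.pi * x))  # Eq. (4)
    return torch.cat(feats, dim=-1)

ys, xs = torch.meshgrid(torch.linspace(-1, 1, 128),
                        torch.linspace(-1, 1, 128), indexing="ij")
coords = torch.stack([ys, xs], dim=-1)    # a 2D coordinate grid (d = 2)
print(positional_encoding(coords).shape)  # torch.Size([128, 128, 24])
```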
### Conditional neural fields

To let the neural field adapt to different input images, we condition it on a set of latent codes \(\mathbf{z}\), which contain instance-specific information. In the proposed model, we introduce a hypernetwork \(H\) that generates the latent code from the source images: \(\mathbf{z}=H(\mathbf{I}_{s})\). By extracting \(\mathbf{z}\), we can then vary and adapt the neural fields to different instances. Below, we explain how we obtain the latent code \(\mathbf{z}\) and how the proposed method parameterizes the neural fields with the conditioning via \(\mathbf{z}\).

#### 3.3.1 Hypernetwork

A hypernetwork refers to an extra neural network that generates parameters for the main network (Ha et al., 2017). The main network behaves like a typical neural network, while the hypernetwork encodes information from the inputs and transfers the information to the main network via the generated parameters. For clarity, we use \(\mathbf{z}_{i}=[\mathbf{\alpha}_{i},\mathbf{\beta}_{i}]\) to denote the latent code used by the i-th layer of the MLP, where \(\mathbf{\alpha}_{i}\) are the weights and \(\mathbf{\beta}_{i}\) are the biases, both generated by \(H\). Hence, for each layer of the MLP, we have: \[\mathbf{l}_{i+1}=\phi(\mathbf{\alpha}_{i}\mathbf{l}_{i}+\mathbf{\beta}_{i}), \tag{5}\] where \(\mathbf{l}_{i}\) is the input feature of the i-th layer, and \(\phi\) is the activation function. Inspired by ASAP-Net (Shaham et al., 2021), we vary the neural field of each pixel by varying the latent code across the coordinates, which can be denoted as \(\mathbf{z}_{i}(\mathbf{x})=[\mathbf{\alpha}_{i}(\mathbf{x}),\mathbf{\beta}_{i}(\mathbf{x})]\), to improve the representation capability. We use \(H_{\mathbf{x}}\) to represent the latent code mapping for each pixel, and thus, \(\Phi\) can be denoted as: \[\mathbf{t}_{\mathbf{x}}=\Phi(\gamma(\mathbf{x});\mathbf{z}(\mathbf{x}))=\Phi(\gamma(\mathbf{x});H_{\mathbf{x}}(\mathbf{I}_{s})), \tag{6}\] and each layer of the MLP can be denoted as: \[\mathbf{l}_{i+1}(\mathbf{x})=\phi(\mathbf{\alpha}_{i}(\mathbf{x})\mathbf{l}_{i}(\mathbf{x})+\mathbf{\beta}_{i}(\mathbf{x})), \tag{7}\] where \(\mathbf{l}_{i}(\mathbf{x})\) refers to the i-th input feature at position \(\mathbf{x}\). Different from ASAP-Net, we adapt the bottom-up pathway from the feature pyramid network (Lin et al., 2017) as the hypernetwork \(H\), which outputs the latent code \(\mathbf{z}(\mathbf{x})\) for each pixel with a feasible memory cost (detailed in Section 4.2).

#### 3.3.2 Shift modulation

By conditioning neural fields on varying latent codes across the coordinates, we can improve the representation capability of the network and better model the structural details (Xie et al., 2022; Peng et al., 2020).
However, the number of parameters also increases with the spatial expansion of the neural fields, which may induce high computational costs and damage the performance due to over-fitting. This problem may become worse with larger input images. To compact the model while maintaining spatially varying neural fields, we propose to condition the neural network through feature-wise linear modulation (FiLM) (Perez et al., 2018). Instead of generating all parameters of the MLP per pixel, an affine transformation (scale and shift) is applied to every neuron of a single, global MLP. Thus, each layer of this single MLP can be denoted as: \[\mathbf{l}_{i+1}(\mathbf{x})=\mathbf{\alpha}_{i}(\mathbf{x})\phi(\mathbf{w}_{i}\mathbf{l}_{i}(\mathbf{x})+\mathbf{b}_{i})+\mathbf{\beta}_{i}(\mathbf{x}), \tag{8}\] where \(\mathbf{w}_{i}\) and \(\mathbf{b}_{i}\) are the trainable parameters shared by all coordinates, and \(\mathbf{\alpha}_{i}(\mathbf{x})\) and \(\mathbf{\beta}_{i}(\mathbf{x})\) are again generated by \(H\). Note that \(\mathbf{\alpha}_{i}(\mathbf{x})\) is now a diagonal matrix, and thus, we can obtain a modified neural field for each coordinate with fewer parameters. Research shows that by using shifts only, which is so-called shift modulation, we can achieve comparable results with half of the parameters (Dupont et al., 2022). Hence in practice, we split the parameters of the MLP into two parts: trainable weights and biases, and additional biases generated from \(H\). The final formulation of each layer in the MLP is: \[\mathbf{l}_{i+1}(\mathbf{x})=\phi(\mathbf{w}_{i}\mathbf{l}_{i}(\mathbf{x})+\mathbf{b}_{i}+\mathbf{\beta}_{i}(\mathbf{x})), \tag{9}\] where \(\mathbf{\beta}_{i}(\mathbf{x})\) equals \(\mathbf{z}_{i}(\mathbf{x})\). The parameters of \(H\) are optimized together with the MLP during training. In the experimental section, we will show that by using shift modulation our model can achieve better performance at reduced complexity.

#### 3.3.3 Intensity concatenation

In addition to shift modulation, we also condition the neural fields on the source images directly. Different from the latent codes, the pixel intensities provide first-hand uncoded local information. We concatenate the image intensities from all the source images as an additional input of the MLP. The mapping function of the neural fields therefore becomes: \[\mathbf{t}_{\mathbf{x}}=\Phi(\gamma(\mathbf{x}),\mathbf{s}_{\mathbf{x}};H_{\mathbf{x}}(\mathbf{I}_{s})) \tag{10}\]
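A minimal sketch of the resulting decoder makes the mechanics of Eq. (9) explicit: a single global MLP is shared by all pixels, and only the per-layer shifts vary per pixel. This is our illustration, not the official implementation linked in the abstract; the exact layer count and the placement of the output layer are assumptions:

```python
import torch
import torch.nn.functional as F
from torch import nn

class ShiftModulatedMLP(nn.Module):
    """Eq. (9): a global MLP whose hidden units are shifted per pixel by
    beta_i(x), sliced from the hypernetwork output z(x)."""

    def __init__(self, in_dim, hidden=64, n_layers=5, out_dim=1):
        super().__init__()
        dims = [in_dim] + [hidden] * (n_layers - 1)
        self.layers = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, feats, betas):
        # feats: (P, in_dim) per-pixel inputs gamma(x) concatenated with s_x;
        # betas: one (P, hidden) tensor beta_i(x) per layer, from H(I_s).
        h = feats
        for layer, beta in zip(self.layers, betas):
            h = F.leaky_relu(layer(h) + beta, negative_slope=0.2)  # Eq. (9)
        return torch.tanh(self.out(h))  # intensities constrained to [-1, 1]

pixels = 4096
mlp = ShiftModulatedMLP(in_dim=24 + 3)               # encoding + 3 source intensities
feats = torch.randn(pixels, 27)
betas = [torch.randn(pixels, 64) for _ in range(5)]  # stand-in for the hypernetwork
print(mlp(feats, betas).shape)                        # torch.Size([4096, 1])
```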
### Loss function

Like the standard GAN model, the discriminator and the generator in the proposed model are optimized alternately. In each iteration, we train the discriminator using the hinge loss (Lim and Ye, 2017): \[L_{D}=\mathbb{E}[\max(0,1-D(\mathbf{I}_{t},\mathbf{I}_{s}))]+\mathbb{E}[\max(0,1+D(\hat{\mathbf{I}}_{t},\mathbf{I}_{s}))], \tag{11}\] where \(D\) is the discriminator and \(\mathbb{E}\) is the expectation over the whole dataset. The generator is trained by a loss function \(L\) that contains a reconstruction loss, an adversarial loss, a feature matching loss, and latent code regularization.

**Reconstruction loss.** To ensure the synthesized images are as close to the real images as possible, we apply a reconstruction loss that maximizes the similarity between output images and ground truth. We use the \(\ell_{1}\) loss function, and calculate the average loss across the \(N_{t}\) missing images: \[L_{\text{rec}}=\frac{1}{N_{t}}\sum_{i=0}^{N_{t}}\mathbb{E}[\|\hat{I}_{t}^{i}-I_{t}^{i}\|_{1}], \tag{12}\]

**Adversarial loss.** The adversarial loss is applied to enforce that the generated images are good enough to fool the discriminator. For the generator, it is defined as: \[L_{\text{adv}}=-\mathbb{E}[\log D(\hat{\mathbf{I}}_{t},\mathbf{I}_{s})], \tag{13}\]

**Feature matching loss.** To stabilize the training, we apply a feature matching loss introduced by Wang et al. (2018). Specifically, we feed both the real and generated images to the discriminator and extract the intermediate features from each forward pass. The two groups of intermediate features are matched using the \(\ell_{1}\) loss function. Hence, the feature matching loss is defined as: \[L_{\text{fm}}=\mathbb{E}\sum_{i=1}^{T}\frac{1}{N_{i}}[\|D_{i}(\mathbf{I}_{t},\mathbf{I}_{s})-D_{i}(\hat{\mathbf{I}}_{t},\mathbf{I}_{s})\|_{1}], \tag{14}\] where \(D_{i}\) denotes the i-th intermediate layer of the discriminator, \(T\) is the number of layers, and \(N_{i}\) is the number of elements in the i-th feature map (Wang et al., 2018).

**Latent code regularization.** Last, we apply the \(\ell_{2}\) norm to \(\mathbf{z}\) as a latent code regularization to stabilize the training: \[L_{\text{reg}}=\|H_{\mathbf{x}}(\mathbf{I}_{\mathbf{s}})\|_{2} \tag{15}\]

**Overall loss.** The overall loss function then becomes \[L=\lambda_{\text{rec}}L_{\text{rec}}+\lambda_{\text{adv}}L_{\text{adv}}+\lambda_{\text{fm}}L_{\text{fm}}+\lambda_{\text{reg}}L_{\text{reg}}, \tag{16}\] where \(\lambda_{\text{rec}}\), \(\lambda_{\text{adv}}\), \(\lambda_{\text{fm}}\), and \(\lambda_{\text{reg}}\) are the weights of the loss functions.
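For completeness, the generator objective of Eq. (16) can be assembled as below. This is a hedged sketch of ours: the reduction conventions, such as the averaging in Eq. (14) and the sigmoid applied to the discriminator logits in Eq. (13), are assumptions rather than the repository's exact code:

```python
import torch
from torch import nn

l1 = nn.L1Loss()

def generator_loss(fake, real, feats_fake, feats_real, z,
                   lam_rec=100.0, lam_adv=1.0, lam_fm=10.0, lam_reg=10.0):
    # feats_*: discriminator feature maps on the (synthesized, source) and
    # (real, source) pairs; the last entry is taken as the patch predictions.
    loss_rec = l1(fake, real)                                           # Eq. (12)
    loss_adv = -torch.log(torch.sigmoid(feats_fake[-1]) + 1e-8).mean()  # Eq. (13)
    loss_fm = torch.stack([l1(f, r.detach())                            # Eq. (14)
                           for f, r in zip(feats_fake[:-1], feats_real[:-1])]).mean()
    loss_reg = z.pow(2).sum().sqrt()                                    # Eq. (15)
    return (lam_rec * loss_rec + lam_adv * loss_adv                     # Eq. (16)
            + lam_fm * loss_fm + lam_reg * loss_reg)
```

The weights default to the values reported in Section 4.2.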
## 4 Experiments and results

### Dataset

To evaluate the proposed translation model, we conducted experiments on two datasets: (1) BraTS 2018 (Menze et al., 2014) and (2) an in-house Vestibular Schwannoma MRI (VS) dataset (Neve et al., 2022).

**BraTS 2018.** BraTS 2018 is a multi-sequence brain MRI dataset for tumor segmentation. The dataset consists of 285 patients for training and 66 patients for validation. Each patient has four co-registered MRI sequences: T1 (1-6 mm slice thickness), T1ce (1-6 mm slice thickness), T2 (2-6 mm slice thickness) and FLAIR (2-6 mm slice thickness). The tumor mask that includes the non-enhanced tumor, the enhanced tumor, and the edema was delineated by experts from multiple centers as a segmentation ground truth. All the scans in BraTS 2018 are resampled to 1 mm isotropic resolution.

**Vestibular schwannoma MRI dataset.** The VS dataset consists of MRI scans of patients with vestibular schwannoma, which is a benign tumor arising from the neurilemma of the vestibular nerve. 191 patients were collected from 37 different hospitals using 12 different MRI scanners. In our study, 147 patients are selected for training and the remaining 44 patients are the validation set. All patients have a gadolinium-enhanced T1-weighted MRI (shortened to T1ce) and a high-resolution T2 (shortened to T2). The spatial resolution of the T1ce ranges from \(0.27\times 0.27\times 0.9\) mm to \(1.0\times 1.0\times 5.0\) mm, and the spatial resolution of T2 scans ranges from \(0.23\times 0.23\times 0.5\) mm to \(0.7\times 0.7\times 1.8\) mm. The intra- and extrameatal tumor was manually delineated by four radiologists. Different from BraTS 2018, there are only two sequences available in the VS dataset, and the high resolution of the T2 offers better visibility of the lesion but may also degrade the image quality, which makes the image translation more challenging on this dataset.

### Experimental setup

**Network architecture.** The hypernetwork \(H\) of the proposed model is adapted from the feature pyramid network (Lin et al., 2017). \(H\) consists of four convolutional modules containing [2, 4, 23, 3] residual blocks in sequence. Each convolutional module is followed by a \(3\times 3\) convolutional smoothing layer and up-sampling layer, computing a feature map that is downscaled by a factor of 2. We take the output of the last module, which has the same size as the input resolution, as the latent code \(z\). Adapted from (Shaham et al., 2021), the MLP in the proposed model contains five 64-channel layers. The Leaky ReLU function with a negative slope of 0.2 is applied as the activation function after all intermediate layers. The output layer is followed by a Tanh function which can constrain the range of the intensities to [-1, 1]. The discriminator is a convolutional neural network that contains five \(4\times 4\) convolutional blocks, each followed by a Leaky ReLU function except for the last layer. The strides and number of filters of the blocks are [2, 2, 2, 1, 1] and [64, 128, 256, 512, 1], respectively. Like pix2pix (Isola et al., 2017), the discriminator down-samples the inputs by 8 and penalizes structures at the scale of patches.

**Pre-processing.** Registration was applied to the VS dataset before training. We considered the T1ce as the fixed image and performed rigid registration with the T2, for which we used the Elastix software (Klein et al., 2009). All images from both datasets were then normalized to the range of [-1,1], and the background was cropped based on the bounding box of the foreground before training to reduce the image size. Both sequences in the VS dataset were resampled to \(0.29\times 0.29\) mm in-plane resolution, which is the median value of the T1ce domain. During training, random cropping was conducted on the images, with a cropping size of \(160\times 128\) for BraTS 2018 and a cropping size of \(320\times 320\) for the VS dataset, respectively.

**Implementation details.** All experiments were conducted using Python 3.10 and PyTorch 1.12.1 on a mixed computation server equipped with Nvidia Quadro RTX 6000 and Nvidia Tesla V100 GPUs. The models were trained by the Adam optimizer using the Two Time-scale Update Rule (TTUR) training scheme, in which the generator and discriminator have different initial learning rates (Heusel et al., 2017). We found that an initial learning rate of \(1\times 10^{-4}\) for the generator and an initial learning rate of \(4\times 10^{-4}\) for the discriminator worked best for our experiments. The learning rates were further decayed using a linear learning rate scheduler. Adapted from the choices of loss weights and parameters in Shaham et al. (2021), we set \(\lambda_{\mathrm{adv}}=1.0\), \(\lambda_{\mathrm{rec}}=100.0\), \(\lambda_{\mathrm{reg}}=10.0\), \(\lambda_{\mathrm{fm}}=10.0\) and the frequency parameter \(m=6\) for positional encoding. Lastly, we focus on 2D image translation in this paper and hence use 2D coordinates (\(d=2\)).

**Benchmark overview.** We compared our model with the following state-of-the-art methods: (1) pix2pix: pix2pix is a GAN-based image translation model, which consists of a UNet-based generator and a patch-based discriminator (Isola et al., 2017); (2) pGAN: pGAN is a GAN-based image translation model using a ResNet backbone that follows an encoder-bottleneck-decoder architecture (Dar et al., 2019).
Perceptual loss is introduced to improve the results; (3) ResViT: ResViT is an image translation model that combines pGAN with a transformer-based information bottleneck (Dalmaz et al., 2022); (4) ASAP-Net: ASAP-Net is a neural field-based image translation model (Shaham et al., 2021). Different from the proposed model, ASAP-Net parameterizes patch-wise neural fields, which are conditioned through a UNet-shape hypernetwork without a shared MLP. For all implementations, we used the official GitHub repositories provided by the authors.

### Multi-sequence MRI translation

We first examined the quality of the images generated from the proposed model. Theoretically, CoNeS can be applied to any number of missing or present sequences by adapting input and output channels to \(N_{s}\) and \(N_{t}\). For simplicity, we assumed one sequence was missing for all the patients during inference (\(N_{t}=1\)), and thus, we trained models that generate one MRI sequence from the other sequences in the dataset for evaluation. Specifically, four image translation experiments were performed on BraTS 2018: (1) T1, T2, FLAIR \(\rightarrow\) T1ce (shortened to T1ce translation); (2) T1ce, T2, FLAIR \(\rightarrow\) T1 (shortened to T1 translation); (3) T1ce, T1, FLAIR \(\rightarrow\) T2 (shortened to T2 translation); and (4) T1ce, T1, T2 \(\rightarrow\) FLAIR (shortened to FLAIR translation).

Figure 2: Comparison results of different image translation models on BraTS 2018: (a) T1, T2, FLAIR \(\rightarrow\) T1ce; (b) T1ce, T2, FLAIR \(\rightarrow\) T1; (c) T1ce, T1, FLAIR \(\rightarrow\) T2; (d) T1ce, T1, T2 \(\rightarrow\) FLAIR. For each translation experiment, three examples are selected for display. Each odd column shows the ground truth and translation results of the different models. Zoomed-in results indicated in red rectangles are shown below the whole images.

We used two different metrics for quantitative analysis in our study: peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Both the synthesized images and real images were normalized to [0,1] before evaluation. A Wilcoxon signed-rank test between each benchmark and the proposed model was performed on all image translation experiments. The quantitative results are listed in Table 1. As shown in the table, the proposed model performs significantly better (\(p<.05\)) than other state-of-the-art methods in most metrics, except that pGAN obtains higher PSNR on T1 translation. Using T1ce translation on BraTS 2018 as an example, the PSNR and SSIM of the proposed model are 31.2 dB and 0.951, which outperforms pix2pix by 1.1 dB PSNR and 1.0% SSIM, pGAN by 0.5 dB PSNR and 0.8% SSIM, ResViT by 2.0 dB PSNR and 1.6% SSIM, and ASAP-Net by 0.4 dB PSNR and 0.3% SSIM. Translation examples are shown in Fig. 2, in which we can see that the proposed model can recover more detailed structures, such as the contrast-enhanced tumor in T1ce, which is clinically highly relevant.

Both PSNR and SSIM measure global similarity, while the quality of the region around the tumor is more clinically interesting. To further evaluate the proposed model, we cropped the images using the bounding box of the tumor region and then evaluated the similarity of these sub-images using the aforementioned metrics. The bounding box was generated from the segmentation results of nnUNet (Isensee et al., 2021) for the reason that the segmentation ground truths of the BraTS 2018 validation set are not available.

Figure 3: Comparison results of different image translation models on the VS dataset: (a) T2 \(\rightarrow\) T1ce; (b) T1ce \(\rightarrow\) T2. For each translation experiment, three examples are selected for display. Each odd column shows the ground truth and translation results of the different models. Zoomed-in results indicated in red rectangles are shown below the whole images.
The bounding box was generated from the segmentation results of nnUNet (Isensee et al., 2021), because the segmentation ground truths of the BraTS 2018 validation set are not available. The results are listed in Table 2. As we can see, the proposed model also performs significantly better (\(p<.05\)) in most tasks within this sub-region, which is consistent with our observation from the zoomed-in results in Fig. 2. We observed that the performance of the proposed model decreased after cropping due to the lack of background. Again using T1ce translation as an example, the PSNR and SSIM of the proposed model are 20.9 dB and 0.667, which outperforms pix2pix by 1.0 dB PSNR and 5.7% SSIM, pGAN by 0.3 dB and 2.1% SSIM, ResViT by 0.7 dB and 5.5% SSIM, and ASAP-Net by 0.5 dB and 3.3% SSIM.

\begin{table} \begin{tabular}{l c c c c c c c c} \multirow{2}{*}{**model**} & \multicolumn{2}{c}{**T1ce translation**} & \multicolumn{2}{c}{**T1 translation**} & \multicolumn{2}{c}{**T2 translation**} & \multicolumn{2}{c}{**FLAIR translation**} \\ \cline{2-9} & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\ \hline \multirow{2}{*}{pix2pix} & 30.1 & 0.941 & 27.0 & 0.945 & 28.0 & 0.926 & 27.6 & 0.910 \\ & \(\pm 2.65^{\dagger}\) & \(\pm 0.014^{\dagger}\) & \(\pm 3.69\) & \(\pm 0.013^{\dagger}\) & \(\pm 2.63^{\dagger}\) & \(\pm 0.062^{\dagger}\) & \(\pm 3.03^{\dagger}\) & \(\pm 0.097^{\dagger}\) \\ \hline \multirow{2}{*}{pGAN} & 30.7 & 0.943 & **27.5** & 0.945 & 29.2 & 0.943 & 28.5 & 0.916 \\ & \(\pm 3.18^{\dagger}\) & \(\pm 0.015^{\dagger}\) & \(\pm 3.72\) & \(\pm 0.015^{\dagger}\) & \(\pm 2.80^{\dagger}\) & \(\pm 0.020^{\dagger}\) & \(\pm 3.26^{\dagger}\) & \(\pm 0.095^{\dagger}\) \\ \hline \multirow{2}{*}{ResViT} & 29.2 & 0.935 & 25.0 & 0.918 & 26.6 & 0.923 & 24.7 & 0.876 \\ & \(\pm 2.37^{\dagger}\) & \(\pm 0.014^{\dagger}\) & \(\pm 2.60^{\dagger}\) & \(\pm 0.014^{\dagger}\) & \(\pm 2.30^{\dagger}\) & \(\pm 0.020^{\dagger}\) & \(\pm 2.08^{\dagger}\) & \(\pm 0.092^{\dagger}\) \\ \hline \multirow{2}{*}{ASAP-Net} & 30.8 & 0.948 & 27.3 & 0.948 & 28.6 & 0.940 & 28.4 & 0.916 \\ & \(\pm 2.97^{\dagger}\) & \(\pm 0.017^{\dagger}\) & \(\pm 3.79\) & \(\pm 0.015^{\dagger}\) & \(\pm 2.74^{\dagger}\) & \(\pm 0.019^{\dagger}\) & \(\pm 3.10^{\dagger}\) & \(\pm 0.098^{\dagger}\) \\ \hline CoNeS & **31.2** & **0.951** & 27.3 & **0.953** & **29.6** & **0.950** & **29.1** & **0.926** \\ (proposed) & \(\pm\)**3.11** & \(\pm\)**0.017** & \(\pm 4.03\) & \(\pm\)**0.014** & \(\pm\)**3.03** & \(\pm\)**0.021** & \(\pm\)**2.99** & \(\pm\)**0.097** \\ \end{tabular} \end{table}
Table 1: Quantitative comparison of different image translation models on BraTS 2018. The mean value and standard deviation of PSNR and SSIM are reported. The highest values per column are indicated in boldface; the \(\dagger\) after each metric of the benchmarks indicates a significant difference (\(p<.05\)) compared to the proposed method.

Figure 3: Comparison results of different image translation models on the VS dataset: (a) T2 \(\rightarrow\) T1ce; (b) T1ce \(\rightarrow\) T2. For each translation experiment, three examples are selected for display. Each odd column shows the ground truth and translation results of the different models. Zoomed-in results indicated in red rectangles are shown below the whole images.

Next, we performed two image translation experiments on the VS dataset: (1) T2 \(\rightarrow\) T1ce (shortened to T1ce translation) and (2) T1ce \(\rightarrow\) T2 (shortened to T2 translation). We again evaluated the entire image as well as the cropped region around the tumor, similar to BraTS 2018.
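The tumor-region cropping used for this sub-image evaluation can be sketched as follows; the binary tumor mask is assumed to come from the nnUNet segmentations mentioned above, and the helper name is ours, not the paper's code.

```python
import numpy as np

def crop_to_mask(image, mask):
    """Crop a 2D image to the bounding box of a binary tumor mask."""
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return image[r0:r1 + 1, c0:c1 + 1]

# PSNR/SSIM are then computed on crop_to_mask(real, mask) and
# crop_to_mask(fake, mask) exactly as for the whole images.
```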
\begin{table} \begin{tabular}{l c c c c c c c c} \multirow{2}{*}{**model**} & \multicolumn{2}{c}{**T1ce translation**} & \multicolumn{2}{c}{**T1 translation**} & \multicolumn{2}{c}{**T2 translation**} & \multicolumn{2}{c}{**FLAIR translation**} \\ \cline{2-9} & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\ \hline \multirow{2}{*}{pix2pix} & 19.9 & 0.610 & 15.7 & 0.636 & 19.9 & 0.658 & 18.1 & 0.585 \\ & \(\pm 3.30^{\dagger}\) & \(\pm 0.092^{\dagger}\) & \(\pm 4.41\) & \(\pm 0.089^{\dagger}\) & \(\pm 2.85^{\dagger}\) & \(\pm 0.082^{\dagger}\) & \(\pm 3.31^{\dagger}\) & \(\pm 0.081^{\dagger}\) \\ \hline \multirow{2}{*}{pGAN} & 20.6 & 0.646 & **16.1** & 0.663 & 20.8 & 0.721 & 18.9 & 0.636 \\ & \(\pm 3.73^{\dagger}\) & \(\pm 0.096^{\dagger}\) & \(\pm 4.61\) & \(\pm 0.099\) & \(\pm 3.23^{\dagger}\) & \(\pm 0.093^{\dagger}\) & \(\pm 3.70^{\dagger}\) & \(\pm 0.086^{\dagger}\) \\ \hline \multirow{2}{*}{ResViT} & 20.2 & 0.612 & 15.1 & 0.599 & 19.5 & 0.672 & 17.0 & 0.545 \\ & \(\pm 3.56^{\dagger}\) & \(\pm 0.098^{\dagger}\) & \(\pm 4.32^{\dagger}\) & \(\pm 0.090^{\dagger}\) & \(\pm 3.50^{\dagger}\) & \(\pm 0.093^{\dagger}\) & \(\pm 3.26^{\dagger}\) & \(\pm 0.111^{\dagger}\) \\ \hline \multirow{2}{*}{ASAP-Net} & 20.4 & 0.634 & 15.7 & 0.626 & 20.3 & 0.669 & 18.5 & 0.593 \\ & \(\pm 3.67^{\dagger}\) & \(\pm 0.115^{\dagger}\) & \(\pm 4.48\) & \(\pm 0.103^{\dagger}\) & \(\pm 3.05^{\dagger}\) & \(\pm 0.089^{\dagger}\) & \(\pm 3.60^{\dagger}\) & \(\pm 0.086^{\dagger}\) \\ \hline CoNeS & **20.9** & **0.667** & 15.8 & **0.666** & **21.5** & **0.739** & **19.6** & **0.663** \\ (proposed) & \(\pm\)**3.66** & \(\pm\)**0.099** & \(\pm 4.44\) & \(\pm 0.094\) & \(\pm\)**3.35** & \(\pm\)**0.095** & \(\pm\)**3.49** & \(\pm\)**0.084** \\ \end{tabular} \end{table}
Table 2: Quantitative comparison of different image translation models after cropping on BraTS 2018. The mean value and standard deviation of PSNR and SSIM are reported. The highest values per column are indicated in boldface; the \(\dagger\) after each metric of the benchmarks indicates a significant difference (\(p<.05\)) compared to the proposed method.

Both quantitative results are listed in Table 3 and Table 4. All models struggle with the VS dataset and show decreased performance compared to BraTS 2018, and CoNeS still performs significantly better (\(p<.05\)) on most of the metrics. Taking T1ce translation as an example, CoNeS obtains a PSNR of 21.9 dB and an SSIM score of 0.638, which outperforms pix2pix by 0.8 dB PSNR and 3.6% SSIM, pGAN by 0.3 dB PSNR and 0.3% SSIM, ResViT by 0.9 dB PSNR and 6.3% SSIM, and ASAP-Net by 1.5 dB PSNR and 8.6% SSIM. Qualitatively, we can observe improved synthesized images using the proposed model, as shown in Fig. 3. It is worth pointing out that although pGAN obtained a better SSIM score (0.575) in T2 translation, the visualization suggests that our results contain more informative details, while pGAN's results are blurry.

### Spectral analysis

Research has shown that CNN-based generative models with up-sampling layers usually struggle to reproduce the spectral distribution correctly (Durall et al., 2020; Anokhin et al., 2021). On the contrary, coordinate-based networks like CoNeS build a direct pixel-to-pixel mapping without any up-sampling layer.
\begin{table} \begin{tabular}{l c c c c} **model** & \multicolumn{2}{c}{**T1ce translation**} & \multicolumn{2}{c}{**T2 translation**} \\ \hline & PSNR & SSIM & PSNR & SSIM \\ \hline pix2pix & \(14.8\pm 3.28\) & \(0.415\pm 0.122^{\dagger}\) & \(16.6\pm 1.72^{\dagger}\) & \(0.321\pm 0.084^{\dagger}\) \\ \hline pGAN & \(14.2\pm 3.31^{\dagger}\) & \(0.417\pm 0.133^{\dagger}\) & \(16.8\pm 1.98^{\dagger}\) & \(0.372\pm 0.134\) \\ \hline ResViT & \(14.5\pm 2.75\) & \(0.400\pm 0.106^{\dagger}\) & \(16.7\pm 1.66^{\dagger}\) & \(0.342\pm 0.099^{\dagger}\) \\ \hline ASAP-Net & \(13.0\pm 2.93^{\dagger}\) & \(0.340\pm 0.132^{\dagger}\) & \(15.3\pm 1.64^{\dagger}\) & \(0.300\pm 0.101^{\dagger}\) \\ \hline CoNeS (proposed) & \(\mathbf{15.0\pm 3.17}\) & \(\mathbf{0.451\pm 0.118}\) & \(\mathbf{17.3\pm 1.58}\) & \(\mathbf{0.379\pm 0.101}\) \\ \end{tabular} \end{table}
Table 4: Quantitative comparison of different image translation models after cropping on the VS dataset. The mean value and standard deviation of PSNR and SSIM are reported. The highest values per column are indicated in boldface; the \(\dagger\) after each metric of the benchmarks indicates a significant difference (\(p<.05\)) compared to the proposed method.

\begin{table} \begin{tabular}{l c c c c} **model** & \multicolumn{2}{c}{**T1ce translation**} & \multicolumn{2}{c}{**T2 translation**} \\ \hline & PSNR & SSIM & PSNR & SSIM \\ \hline pix2pix & \(21.1\pm 1.39\) & \(0.602\pm 0.068\) & \(21.4\pm 1.78\) & \(0.506\pm 0.121\) \\ \hline pGAN & \(21.6\pm 1.55\) & \(0.635\pm 0.077\) & \(22.2\pm 2.04\) & \(\mathbf{0.575\pm 0.131}\) \\ \hline ResViT & \(21.0\pm 1.58\) & \(0.575\pm 0.090\) & \(21.5\pm 1.80\) & \(0.489\pm 0.110\) \\ \hline ASAP-Net & \(20.4\pm 1.24\) & \(0.552\pm 0.061\) & \(20.9\pm 1.97\) & \(0.500\pm 0.117\) \\ \hline CoNeS (proposed) & \(\mathbf{21.9\pm 1.69}\) & \(\mathbf{0.638\pm 0.077}\) & \(\mathbf{22.6\pm 2.03}\) & \(0.560\pm 0.126\) \\ \end{tabular} \end{table}
Table 3: Quantitative comparison of different image translation models on the VS dataset. The mean value and standard deviation of PSNR and SSIM are reported. The highest values per column are indicated in boldface; all metrics of the benchmarks in this table show significant differences (\(p<.05\)) compared to the proposed method.

In this section, we further evaluated the synthesized images in the frequency domain to demonstrate the improvement we obtained, by performing spectral analysis on the T1ce translation models of both datasets. Specifically, we applied a 2D Fourier transform to all synthesized results as well as the real images, and then calculated a 1D representation of the 2D spectrum using azimuthal integration (Durall et al., 2020). Azimuthal integration is defined as an integration over the radial frequencies:

\[\text{AI}(\omega_{k})=\int_{0}^{2\pi}\left\|\mathcal{F}(\omega_{k}\cos\theta,\omega_{k}\sin\theta)\right\|_{2}\,d\theta, \tag{17}\]

for \(k=0,\ldots,M/2-1\), where \(\mathcal{F}(m,n)\) is the Fourier transform of a 2D image, \(\theta\) is the angle in radians, \(\omega_{k}\) is the spatial frequency, and \(M\) is the side length of a square image.

Figure 4: Spectral analysis of different image translation models. (a) and (b) show the analysis results on BraTS 2018 and the VS dataset, respectively. For each analysis, the Fourier transforms of the different synthesized images and the real image are shown in the top row. The bottom row shows the spectral distribution, in which the high-frequency range is zoomed in by the red rectangle.
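A discrete counterpart of Eq. (17) can be sketched in a few lines of NumPy; this radial-binning approximation is our own illustration, not the exact analysis script used in the paper.

```python
import numpy as np

def azimuthal_integration(image):
    """Discrete 1D spectral profile of a square 2D image.

    Approximates Eq. (17) by summing the FFT magnitudes over all pixels
    at each integer radial frequency k = 0, ..., M/2 - 1.
    """
    M = image.shape[0]
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    # Radial frequency of each pixel relative to the spectrum center.
    y, x = np.indices(spectrum.shape)
    radius = np.hypot(x - M // 2, y - M // 2).astype(int)
    # Sum the magnitudes over all angles at each radius.
    profile = np.bincount(radius.ravel(), weights=spectrum.ravel())
    return profile[: M // 2]

# Averaging azimuthal_integration(img) over a dataset (after a log transform
# of the 2D spectrum for visualization) yields curves like those in Fig. 4.
```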
A log transformation was performed on the 2D spectrum for better visualization, and we calculated the average 1D representation over the dataset to avoid biased sampling. As shown in Fig. 4, both ASAP-Net and CoNeS, which are coordinate-based networks, can reproduce the spectrum over the entire frequency range on BraTS 2018. Specifically, all the spectral curves are very close in the low-frequency range (spatial frequency \(<50\)), which enables the generative models to reconstruct the general structure of the images. However, the spectral curves of the GAN-based models drop dramatically in the high-frequency range (spatial frequency \(>75\)), while the curves of ASAP-Net and CoNeS remain close to the real distribution. This shows that neural fields are able to overcome the spectral bias issue of convolutional neural networks. On the VS dataset, all the models yield higher spectrum magnitudes in the high-frequency range compared to the real images, which suggests that these translation models might add high-frequency noise to the synthesized images. Consistent with the similarity measurement results, ASAP-Net is not robust enough to reproduce the spectrum on the VS dataset and may induce more artifacts. On the contrary, CoNeS still outputs images whose spectrum is closest to that of the real images among all the translation models. The results indicate that, by using neural fields conditioned via shift modulation, CoNeS is able to keep the representation capability and reproduce the spectral distribution.

### Synthesized images for tumor segmentation

Figure 5: The results of the segmentation experiments: (a) a segmentation example on BraTS 2018 and (b) an example on the VS dataset. The rows show the segmentation results with different MRI sequences replaced. The columns show the ground truth (for BraTS 2018, segmentation results with full sequences) and segmentation results using different synthesized images.

To further examine the impact of synthesized images in downstream analysis, we performed tumor segmentation using the synthesized images at inference. To do this, we first adopted the architecture from nnUNet (Isensee et al., 2021) and trained a segmentation network that uses all the sequences in the dataset as input. Note that all the images were normalized to a range of [-1, 1] during training to make the input channels consistent with the synthesized images. During inference, we tested the segmentation model with synthesized images and compared the results with the performance of the model when filling the missing channel with zeros, called zero imputation in our experiments. For simplicity, we again assumed one specific sequence was missing and replaced this sequence while keeping the rest unchanged. Similar to the image translation experiments, we compared the segmentation performance using synthesized images generated by the proposed model to that obtained with the other images via the Wilcoxon signed-rank test. The tests were performed for each MRI sequence (T1ce, T1, T2, and FLAIR) on BraTS 2018. The performance was evaluated using three specific categories: 1) enhanced tumor (ET); 2) tumor core (TC; non-enhanced tumor and edema); and 3) whole tumor (WT; enhanced tumor, non-enhanced tumor, and edema). The Dice score and 95% Hausdorff distance (95% HD) of all three categories are reported for quantitative evaluation in Table 5. We can see that the presence of sequences dramatically influences the performance of the segmentation model.
For instance, when the T1ce is missing, the Dice score of the enhanced tumor is 0.068, because the enhancement information is only visible in the T1ce. As expected, most of the metrics show that inference with synthesized images performs worse than inference with full sequences. However, we also noticed that when the real T2 or FLAIR were replaced with synthesized ones, we obtained a lower mean 95% HD. This occurs due to the influence of certain outliers. For example, sometimes the model identifies the enhanced tumor at the wrong position using real images, leading to a large 95% HD, while the other inferences using synthesized images completely miss the tumor. When we removed three such outliers, the mean 95% HD of the enhanced tumor became 2.99 mm, which is better than the others. The best results among all the inferences using synthesized images (including zero imputation) for each sequence are highlighted in Table 5. The results indicate that using synthesized images for inference can significantly improve the segmentation performance, and that the synthesized images of our model yield the best segmentation performance, with a significant difference (\(p<.05\)), among all the translation models. Using the inferences without T1ce as an example, the Dice scores of the proposed model are 0.386, 0.870, and 0.662 for the enhanced tumor, the whole tumor, and the tumor core, respectively.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \multicolumn{5}{c|}{input sequences} & \multicolumn{3}{c|}{**Dice**} & \multicolumn{3}{c}{**95\% HD (mm)**} \\ \hline T1ce & T1 & T2 & FLAIR & method & ET & WT & TC & ET & WT & TC \\ \hline ✓ & ✓ & ✓ & ✓ & ✓ & 0.770 & 0.888 & 0.822 & 4.49 & 6.27 & 8.92 \\ \hline ✗ & ✓ & ✓ & ✓ & zero imputation & 0.068\({}^{\dagger}\) & 0.845\({}^{\dagger}\) & 0.362\({}^{\dagger}\) & 27.9\({}^{\dagger}\) & 8.80\({}^{\dagger}\) & 18.1\({}^{\dagger}\) \\ \(\circ\) & ✓ & ✓ & ✓ & pix2pix & 0.191\({}^{\dagger}\) & 0.850\({}^{\dagger}\) & 0.537\({}^{\dagger}\) & 15.0\({}^{\dagger}\) & 8.10\({}^{\dagger}\) & 15.0\({}^{\dagger}\) \\ \(\circ\) & ✓ & ✓ & ✓ & pGAN & 0.317\({}^{\dagger}\) & 0.858\({}^{\dagger}\) & 0.598\({}^{\dagger}\) & 14.9\({}^{\dagger}\) & 8.01\({}^{\dagger}\) & 14.0\({}^{\dagger}\) \\ \(\circ\) & ✓ & ✓ & ✓ & ResViT & 0.223\({}^{\dagger}\) & 0.858\({}^{\dagger}\) & 0.555\({}^{\dagger}\) & 15.0\({}^{\dagger}\) & 7.87\({}^{\dagger}\) & 14.1\({}^{\dagger}\) \\ \(\circ\) & ✓ & ✓ & ✓ & ASAP-Net & 0.332\({}^{\dagger}\) & 0.866\({}^{\dagger}\) & 0.597\({}^{\dagger}\) & 13.3\({}^{\dagger}\) & **6.95** & 13.2\({}^{\dagger}\) \\ \(\circ\) & ✓ & ✓ & ✓ & CoNeS & **0.386** & **0.870** & **0.662** & **13.1** & 7.23 & **13.0** \\ \hline ✓ & ✗ & ✓ & ✓ & zero imputation & 0.717\({}^{\dagger}\) & 0.865\({}^{\dagger}\) & 0.753\({}^{\dagger}\) & 7.86\({}^{\dagger}\) & 11.3\({}^{\dagger}\) \\ ✓ & \(\circ\) & ✓ & ✓ & pix2pix & 0.747 & 0.869\({}^{\dagger}\) & 0.780\({}^{\dagger}\) & 5.07\({}^{\dagger}\) & 7.70\({}^{\dagger}\) & 9.84\({}^{\dagger}\) \\ ✓ & \(\circ\) & ✓ & ✓ & pGAN & 0.747\({}^{\dagger}\) & 0.868\({}^{\dagger}\) & 0.779\({}^{\dagger}\) & 5.60\({}^{\dagger}\) & 7.61\({}^{\dagger}\) & 10.2\({}^{\dagger}\) \\ ✓ & \(\circ\) & ✓ & ✓ & ResViT & 0.751\({}^{\dagger}\) & 0.869\({}^{\dagger}\) & 0.784\({}^{\dagger}\) & **4.62** & 7.39\({}^{\dagger}\) & 9.75\({}^{\dagger}\) \\ ✓ & \(\circ\) & ✓ & ✓ & ASAP-Net & 0.753\({}^{\dagger}\) & 0.881 & 0.806 & 5.43\({}^{\dagger}\) & **6.73** & 9.39 \\ ✓ & \(\circ\) & ✓ & ✓ & CoNeS & **0.764** & **0.885** & **0.808** & 5.30 & 7.05 & **8.94** \\
\hline ✓ & ✓ & ✗ & ✓ & zero imputation & 0.748\({}^{\dagger}\) & 0.835\({}^{\dagger}\) & 0.752\({}^{\dagger}\) & 5.64\({}^{\dagger}\) & 8.67\({}^{\dagger}\) & 11.6\({}^{\dagger}\) \\ ✓ & ✓ & \(\circ\) & ✓ & pix2pix & 0.761 & 0.862\({}^{\dagger}\) & 0.784\({}^{\dagger}\) & 3.90\({}^{\dagger}\) & 7.53\({}^{\dagger}\) & 9.82\({}^{\dagger}\) \\ ✓ & ✓ & \(\circ\) & ✓ & pGAN & 0.767 & 0.872\({}^{\dagger}\) & 0.797\({}^{\dagger}\) & 3.83\({}^{\dagger}\) & 7.34\({}^{\dagger}\) & 9.27 \\ ✓ & ✓ & \(\circ\) & ✓ & ResViT & 0.759 & 0.855\({}^{\dagger}\) & 0.788\({}^{\dagger}\) & 4.19\({}^{\dagger}\) & 8.18\({}^{\dagger}\) & 9.74 \\ ✓ & ✓ & \(\circ\) & ✓ & ASAP-Net & 0.764 & 0.880\({}^{\dagger}\) & 0.817 & 3.84\({}^{\dagger}\) & 6.50\({}^{\dagger}\) & 9.05 \\ ✓ & ✓ & \(\circ\) & ✓ & CoNeS & **0.778** & **0.886** & **0.829** & **3.15** & **6.01** & **8.34** \\ \hline ✓ & ✓ & ✓ & ✗ & zero imputation & 0.679\({}^{\dagger}\) & 0.403\({}^{\dagger}\) & 0.690\({}^{\dagger}\) & 27.8\({}^{\dagger}\) & 30.3\({}^{\dagger}\) & 23.2\({}^{\dagger}\) \\ ✓ & ✓ & ✓ & \(\circ\) & pix2pix & 0.760 & 0.805\({}^{\dagger}\) & 0.771\({}^{\dagger}\) & **3.74** & 9.75\({}^{\dagger}\) & 11.1\({}^{\dagger}\) \\ ✓ & ✓ & ✓ & \(\circ\) & pGAN & 0.766 & 0.833\({}^{\dagger}\) & 0.777\({}^{\dagger}\) & 4.60 & 8.59\({}^{\dagger}\) & 10.3\({}^{\dagger}\) \\ ✓ & ✓ & ✓ & \(\circ\) & ResViT & 0.783 & 0.768\({}^{\dagger}\) & 0.752\({}^{\dagger}\) & 5.16\({}^{\dagger}\) & 11.7\({}^{\dagger}\) & 11.5\({}^{\dagger}\) \\ ✓ & ✓ & ✓ & \(\circ\) & ASAP-Net & **0.785** & 0.823\({}^{\dagger}\) & 0.808\({}^{\dagger}\) & 3.80\({}^{\dagger}\) & 9.36\({}^{\dagger}\) & **9.36\({}^{\dagger}\)** \\ ✓ & ✓ & ✓ & \(\circ\) & CoNeS & 0.768 & **0.853** & **0.809** & 4.30 & **7.56** & 9.38 \\ \end{tabular} \end{table}
Table 5: Results of using different images for segmentation inference on BraTS 2018. The real sequences used are indicated by ✓, the missing ones by ✗, and the ones replaced by synthesized images by \(\circ\). The mean of Dice scores and 95% HD (mm) of the enhanced tumor (ET), the whole tumor (WT), and the tumor core (TC) are reported. The highest values per column are indicated in boldface; the \(\dagger\) after each metric of the benchmarks indicates a significant difference (\(p<.05\)) compared to inference using synthesized images from CoNeS.

In comparison, the proposed model outperforms zero imputation by 31.8%, 2.5%, and 30.0%, pix2pix by 19.5%, 2.0%, and 12.5%, pGAN by 6.9%, 1.2%, and 6.4%, ResViT by 16.3%, 1.2%, and 10.7%, and ASAP-Net by 5.4%, 0.4%, and 6.5%. The 95% HDs of the proposed model are 13.1 mm, 7.23 mm, and 13.0 mm for the enhanced tumor, the whole tumor, and the tumor core, respectively. In comparison, the proposed model outperforms zero imputation by 14.8 mm, 1.57 mm, and 5.1 mm, pix2pix by 1.9 mm, 0.87 mm, and 2.0 mm, pGAN by 1.8 mm, 0.78 mm, and 1.0 mm, and ResViT by 1.9 mm, 0.64 mm, and 1.1 mm. Although ASAP-Net obtained a lower 95% HD (6.95 mm) in the whole tumor, we did not observe a significant difference between it and the proposed model. Some example segmentation results are presented in Fig. 5. It is worth noting that the synthesized T1 of CoNeS performs better in segmentation than the one from pGAN, although pGAN achieved a higher PSNR in the translation experiment. We also performed the same segmentation experiments on the VS dataset. We evaluated the performance using three specific categories: 1) intrameatal tumor; 2) extrameatal tumor; and 3) the whole tumor (including intra- and extrameatal tumor).
The Dice score and 95% HD of all three categories are reported in Table 6. Similarly to BraTS 2018, all the synthesized images compensate for the performance loss due to the dropped sequences, and the proposed model performs significantly better (\(p<.05\)) than the other models. For instance, the synthesized T1ce generated by the proposed model obtained Dice scores of 0.567, 0.714, and 0.749 for the intrameatal tumor, the extrameatal tumor, and the whole tumor, respectively. In comparison, the proposed model outperforms zero imputation by 56.6%, 68.4%, and 72.1%, pix2pix by 9.6%, 2.8%, and 3.6%, pGAN by 12.6%, 3.8%, and 8.8%, ResViT by 2.7%, 0.1%, and 3.0%, and ASAP-Net by 25.9%, 22.2%, and 24.1%. The 95% HDs of the proposed model are 2.33 mm, 3.54 mm, and 4.05 mm for the intrameatal tumor, the extrameatal tumor, and the whole tumor, respectively. These results outperform zero imputation by 5.71 mm, 25.37 mm, and 30.05 mm, pix2pix by 0.21 mm, 2.13 mm, and 2.45 mm, pGAN by 0.15 mm, 3.03 mm, and 3.82 mm, and ASAP-Net by 0.77 mm, 3.72 mm, and 8.35 mm. We observed that ResViT obtained a lower 95% HD (3.32 mm) in the extrameatal tumor; however, the proposed model still performs better than ResViT in most of the experiments. Example segmentation results are displayed in Fig. 5.

### Ablation study

As shown in Table 7, although the intensity information is already encoded in the latent code, conditioning the network on the intensity directly can still add extra information and improve performance. We next demonstrated the stability of the models by comparing the loss curves of the ablated models. Both the adversarial loss \(L_{\mathrm{adv}}\) and the total loss \(L\) are shown in Fig. 6. We observe that both losses of the models using the full hypernetwork fluctuated substantially, and \(L_{\mathrm{adv}}\) increased midway through training. On the contrary, both loss curves of the models using shift modulation remained stable throughout learning. The experiments suggest that, by reducing the number of parameters generated, shift modulation is able to improve the stability of the image translation model.
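To make the contrast concrete, the following minimal PyTorch sketch shows a single MLP layer conditioned via shift modulation; the layer sizes and names are illustrative assumptions on our part, not the released implementation.

```python
import torch
import torch.nn as nn

class ShiftModulatedLayer(nn.Module):
    """One 64-channel MLP layer conditioned via shift modulation.

    The weight matrix is shared across all pixels; the hypernetwork only
    produces a small per-pixel additive shift, instead of generating the
    entire weight matrix as a full hypernetwork would.
    """

    def __init__(self, channels: int = 64):
        super().__init__()
        self.linear = nn.Linear(channels, channels)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, h: torch.Tensor, shift: torch.Tensor) -> torch.Tensor:
        # h, shift: (num_pixels, channels); shift is derived from the
        # per-pixel latent code z (e.g. by slicing its channels).
        return self.act(self.linear(h) + shift)
```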
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \multicolumn{2}{c|}{input sequences} & \multirow{2}{*}{method} & \multicolumn{3}{c|}{**Dice**} & \multicolumn{3}{c}{**95\% HD (mm)**} \\ \cline{4-9} T1ce & T2 & & IT & ET & WT & IT & ET & WT \\ \hline ✓ & ✓ & \(\sim\) & 0.761 & 0.853 & 0.896 & 1.34 & 1.71 & 1.45 \\ \hline ✗ & ✓ & zero imputation & \(0.001^{\dagger}\) & \(0.030^{\dagger}\) & \(0.028^{\dagger}\) & \(8.04^{\dagger}\) & \(28.91^{\dagger}\) & \(34.1^{\dagger}\) \\ \(\circ\) & ✓ & pix2pix & \(0.471^{\dagger}\) & \(0.686^{\dagger}\) & \(0.713^{\dagger}\) & \(2.54\) & \(5.67^{\dagger}\) & \(6.50\) \\ \(\circ\) & ✓ & pGAN & \(0.441^{\dagger}\) & \(0.676^{\dagger}\) & \(0.661^{\dagger}\) & \(2.48^{\dagger}\) & \(6.57^{\dagger}\) & \(7.87^{\dagger}\) \\ \(\circ\) & ✓ & ResViT & \(0.540^{\dagger}\) & \(0.713^{\dagger}\) & \(0.719\) & \(2.36\) & \(\mathbf{3.32^{\dagger}}\) & \(5.73\) \\ \(\circ\) & ✓ & ASAP-Net & \(0.308^{\dagger}\) & \(0.492^{\dagger}\) & \(0.508^{\dagger}\) & \(3.10^{\dagger}\) & \(7.26^{\dagger}\) & \(12.4^{\dagger}\) \\ \(\circ\) & ✓ & CoNeS & \(\mathbf{0.567}\) & \(\mathbf{0.714}\) & \(\mathbf{0.749}\) & \(\mathbf{2.33}\) & \(3.54\) & \(\mathbf{4.05}\) \\ \hline ✓ & ✗ & zero imputation & \(0.184^{\dagger}\) & \(0.397^{\dagger}\) & \(0.400^{\dagger}\) & \(4.06^{\dagger}\) & \(18.0^{\dagger}\) & \(22.2^{\dagger}\) \\ ✓ & \(\circ\) & pix2pix & \(0.713^{\dagger}\) & \(0.856^{\dagger}\) & \(0.874^{\dagger}\) & \(1.54\) & \(2.19^{\dagger}\) & \(2.09\) \\ ✓ & \(\circ\) & pGAN & \(0.701^{\dagger}\) & \(0.839^{\dagger}\) & \(0.844^{\dagger}\) & \(1.82^{\dagger}\) & \(2.60^{\dagger}\) & \(2.58^{\dagger}\) \\ ✓ & \(\circ\) & ResViT & \(0.716^{\dagger}\) & \(0.831^{\dagger}\) & \(0.862\) & \(1.61\) & \(2.48^{\dagger}\) & \(2.32\) \\ ✓ & \(\circ\) & ASAP-Net & \(0.677^{\dagger}\) & \(0.834^{\dagger}\) & \(0.854^{\dagger}\) & \(1.95^{\dagger}\) & \(2.63^{\dagger}\) & \(2.45^{\dagger}\) \\ ✓ & \(\circ\) & CoNeS & \(\mathbf{0.746}\) & \(\mathbf{0.858}\) & \(\mathbf{0.878}\) & \(\mathbf{1.40}\) & \(\mathbf{2.09}\) & \(\mathbf{1.96}\) \\ \end{tabular} \end{table}
Table 6: Results of using different images for segmentation inference on the VS dataset. The real sequences used are indicated by ✓, the missing ones by ✗, and the ones replaced by synthesized images by \(\circ\). The mean of Dice scores and 95% HD (mm) of the intrameatal tumor (IT), the extrameatal tumor (ET), and the whole tumor (WT) are reported. The highest values per column are indicated in boldface; the \(\dagger\) after each metric of the benchmarks indicates significant differences (\(p<.05\)) compared to inference using synthesized images from CoNeS.

\begin{table} \begin{tabular}{c c c c c} shift modulation & intensity & \#param generated & PSNR & SSIM \\ \hline no & no & 14.5k & \(29.6\pm 2.13^{\dagger}\) & \(0.933\pm 0.013^{\dagger}\) \\ \hline no & yes & 14.5k & \(29.9\pm 2.32\) & \(0.938\pm 0.013^{\dagger}\) \\ \hline yes & no & 0.26k & \(\mathbf{30.0\pm 2.13}\) & \(0.938\pm 0.013^{\dagger}\) \\ \hline yes & yes & 0.26k & \(\mathbf{30.0\pm 2.22}\) & \(\mathbf{0.941\pm 0.014}\) \\ \end{tabular} \end{table}
Table 7: Quantitative comparison of ablated models on BraTS 2018. The mean value and standard deviation of PSNR and SSIM are reported.
The highest values per column are indicated in boldface; the \(\dagger\) after each metric indicates a significant difference (\(p<.01\)) compared to the baseline model (the bottom row).

## 5 Discussion and conclusion

In this work, we proposed CoNeS, a novel conditional neural fields-based model for MRI translation. We modeled the image translation problem using neural fields, which are conditioned on the source images, and learned latent codes through a coordinate-based network. The proposed model adapts the predicted neural fields by varying the latent codes across coordinates to ensure better local representations. We then introduced a shift modulation strategy for the conditioning to reduce the model complexity and stabilize the training. We compared the proposed model with state-of-the-art image translation models, and our experiments showed that CoNeS performs better on the entire image as well as in the tumor region, which is clinically relevant. Through visualization results, we also showed that the proposed method can reproduce more structural details, while the other methods' results are visually blurrier.

We performed a spectral analysis to demonstrate the improvements in image translation when using neural fields. As expected, all the CNN-based models and ResViT, which is a hybrid transformer model containing transposed convolution layers during decoding, were unable to reproduce high-frequency signals due to their spectral bias (Rahaman et al., 2019; Durall et al., 2020). In contrast, the proposed model was able to preserve the high-frequency information and reconstruct the spectrum over the entire frequency range on both datasets. We also observed that ASAP-Net, a neural field-based benchmark, did not show consistent performance across the two datasets and could not reproduce the spectral distribution on the VS dataset either. These results are consistent with prior studies demonstrating that the full hypernetwork, in which all the parameters of the main network are generated, is sensitive to its initialization and difficult to optimize (Chang et al., 2019).

Figure 6: Training loss curves of the ablated models: (a) adversarial loss; (b) total generator loss, including reconstruction loss, adversarial loss, feature matching loss, and latent code regularization. The models using shift modulation show more stable training losses than the models using a full hypernetwork.

The ablation studies further indicated that, compared to a full hypernetwork, conditioning via shift modulation can make the training of neural fields more stable and maintain the representation capability.

To evaluate the value of synthesized MRI in downstream analysis, we performed tumor segmentation experiments. We first demonstrated that dropping sequences during inference of a segmentation model can significantly damage the performance, which shows the complementary importance of multiple MRI sequences in segmentation. We next tested the segmentation model using different synthesized images and compared the results with inference using incomplete input images. The experiments demonstrated that image translation models can significantly improve segmentation accuracy by replacing the missing input channel with synthesized images. Furthermore, the images generated by our proposed CoNeS model performed best among the state-of-the-art methods in most of the experiments, which is consistent with the visual improvement observed in the translation experiments.
Nevertheless, we found that synthesized images cannot fully replace real images, and a baseline model trained on all real images performed best. One limitation of our work is that separate models are required to deal with various incomplete MRI scans, in which the sequences are randomly excluded (Li et al., 2023). Future work includes adapting the proposed model to randomly incomplete MRI scans by incorporating techniques such as learning disentangled representations (Shen et al., 2020) or latent representation fusion (Chartsias et al., 2017). Moreover, the choice of the positional encoding frequency \(m\) may bias the network to fit the signal of a certain bandwidth (Wang et al., 2021). To ease the optimization and improve the generalization, it may be worthwhile to integrate periodic activation functions (Sitzmann et al., 2020) in our design instead of positional encoding for better representation capability. In summary, we presented a neural fields-based model that synthesizes missing MRI sequences from other sequences with excellent performance, and which can be further integrated into downstream analysis. All experiments showed improved performance compared to state-of-the-art translation models, while the spectral analysis and ablation studies demonstrated the strengths of the proposed model over traditional CNN and neural field models. Neural fields hold great promise for MRI translation to solve the missing MRI sequence problem in the clinic.

## Ethical Standards

The work follows appropriate ethical standards in conducting research and writing the manuscript, following all applicable laws and regulations regarding the treatment of animals or human subjects.

## Conflicts of Interest

We declare we do not have conflicts of interest.
2309.05989
Latest results from Daya Bay using the full dataset
The Daya Bay Reactor Neutrino Experiment was designed with the primary goal of precisely measuring the neutrino mixing parameter, $\theta_{13}$. Eight identically-designed gadolinium-doped liquid scintillator detectors installed in three underground experimental halls measure the reactor antineutrinos from six nuclear reactors at different distances. Until its shutdown at the end of 2020, Daya Bay experiment has acquired nearly 6 million inverse beta decay candidates with neutron captured on gadolinium. In this talk, the latest neutrino oscillation analysis results based on full data will be presented. The resulting oscillation parameters are $\sin^{2}2\theta_{13}$ = 0.0851 $\pm$ 0.0024, $\Delta m^{2}_{32}$ = (2.466 $\pm$ 0.060) $\times$ $10^{-3}$ ${\rm eV}^{2}$ for the normal mass ordering or $\Delta m^{2}_{32}$ = -(2.571 $\pm$ 0.060) $\times$ $10^{-3}$ ${\rm eV}^{2}$ for the inverted mass ordering, which are the most precise measurement of $\theta_{13}$ and $\Delta m^{2}_{32}$ so far. Moreover, latest results on other topics such as the search of high energy reactor neutrino is included as well.
Zhiyuan Chen
2023-09-12T06:35:36Z
http://arxiv.org/abs/2309.05989v1
# Latest results from Daya Bay using the full dataset

###### Abstract

The Daya Bay Reactor Neutrino Experiment was designed with the primary goal of precisely measuring the neutrino mixing parameter \(\theta_{13}\). Eight identically designed gadolinium-doped liquid scintillator detectors installed in three underground experimental halls measure the reactor antineutrinos from six nuclear reactors at different distances. Until its shutdown at the end of 2020, the Daya Bay experiment acquired nearly 6 million inverse beta decay candidates with the neutron captured on gadolinium. In this talk, the latest neutrino oscillation analysis results based on the full data set are presented. The resulting oscillation parameters are \(\sin^{2}2\theta_{13}=0.0851\pm 0.0024\), \(\Delta m^{2}_{32}=(2.466\pm 0.060)\times 10^{-3}\)eV\({}^{2}\) for the normal mass ordering or \(\Delta m^{2}_{32}=-(2.571\pm 0.060)\times 10^{-3}\)eV\({}^{2}\) for the inverted mass ordering, which are the most precise measurements of \(\theta_{13}\) and \(\Delta m^{2}_{32}\) so far. Moreover, the latest results on other topics, such as the search for high-energy reactor neutrinos, are included as well.

## 1 The Daya Bay Reactor Neutrino Experiment

The Daya Bay Reactor Neutrino Experiment was designed to measure the mixing angle \(\theta_{13}\) via the investigation of reactor antineutrino disappearance, resulting from neutrino oscillation, at a baseline of about 2 km. It began data taking in late 2011 and finished operation at the end of 2020. To reduce systematic uncertainties, Daya Bay performed a relative measurement using the far/near ratio. As shown in Figure 1 (left), Daya Bay used eight antineutrino detectors (ADs) to detect \(\overline{\nu}_{e}\)s emitted from the reactors at the Daya Bay-Ling Ao nuclear power facility in China. The ADs were installed in three underground experimental halls and were submerged in water pools to reduce ambient radiation, as shown in Fig. 1 (right). Each pool was optically divided into inner (IWS) and outer (OWS) water Cherenkov detectors, which detect cosmic-ray muons and serve as muon veto systems. Four layers of resistive plate chambers (RPCs) cover the top of each water pool to provide another independent muon detector. As the neutrino target, 20 tonnes of liquid scintillator doped with 0.1% gadolinium by weight (GdLS) were contained in a 3-m-diameter acrylic cylinder, enclosed inside a 4-m-diameter acrylic cylinder filled with 22 tonnes of undoped liquid scintillator (LS), in each AD. 192 photomultiplier tubes (PMTs) were installed on the barrel surface of each AD to detect optical photons generated in the scintillator. Reactor antineutrinos are detected via the inverse beta decay (IBD) reaction: \(\overline{\nu}_{e}+p\to e^{+}+n\). The positron deposits its energy quickly and forms the prompt signal; the neutron is captured on a nucleus and forms the delayed signal. The coincidence of the prompt and delayed signals suppresses backgrounds substantially.

## 2 Neutrino Oscillation Results

In this section, we report a new measurement of \(\sin^{2}2\theta_{13}\) and \(\Delta m^{2}_{32}\) using a final sample of \(5.55\times 10^{6}\) IBD candidates with the final-state neutron captured on gadolinium (nGd), acquired in 3158 days of operation [1]. Details of the analysis process and techniques can be found in Refs. [2, 3].

Figure 1: (Left) Layout of the Daya Bay experiment. Two near experimental halls, EH1 and EH2, monitor the reactor neutrino flux and spectrum, while the far hall, EH3, observes the oscillation driven by the \(\theta_{13}\) mixing angle. (Right) Sketch of the detectors in one of the near halls. The ADs are installed in a water pool and covered with RPCs.

In this Proceeding, we focus on the improvements to the analysis techniques. In terms of the energy response of the detector, a correction for the nonlinear response of the electronics was applied to each channel. This correction was derived from the waveform output of a flash-ADC readout system running in parallel with the default ADC system of EH1-AD1 in 2016 [4]. A new source of PMT flashers was observed in the 7-AD operation period that was not rejected by the previous criteria. By utilizing the characteristic charge pattern and temporal distribution of these new flashers, we proposed additional selection criteria that removed over 99% of this instrumental background with a negligible IBD selection inefficiency.

The largest correlated background is the \(\beta\)-n decay of the cosmogenic radioisotopes \({}^{9}\)Li/\({}^{8}\)He. In order to determine this background, we paired muons with all IBD candidates within \(\pm\)2 s. To improve the discrimination of \({}^{9}\)Li/\({}^{8}\)He from other processes, candidate events were separated into several samples based on the visible energy deposited by the muon in the AD and the distance between the prompt and delayed signals, \(\Delta r\). The rates and energy spectra of the dominant cosmogenic radioisotopes were extracted with a simultaneous fit to 12 two-dimensional histograms defined by the different muon samples in the three experimental halls for the two \(\Delta r\) regions. We simply measured the sum of these two radioisotopes, given the comparable lifetimes of \({}^{9}\)Li and \({}^{8}\)He. This method provides higher statistics and a better determination of the low-energy part of the \(\beta\) spectrum of \({}^{9}\)Li/\({}^{8}\)He than the previous determination, while reducing the rate uncertainty to less than 25%.

Due to the gradual loss of functional PMTs near the top of the water pools, the muon detection efficiency of the water pools dropped with time, particularly in the 7-AD period. As a consequence, a new muon-induced background, termed "muon-x", became significant. The muon-x background was caused by low-energy muons that passed through the IWS undetected. These events typically consisted of the muon as the prompt signal, and a Michel electron from muon decay, a product of muon capture, or a spallation neutron as the delayed signal. The muon-x background was efficiently suppressed by rejecting events with a delayed signal less than 410 \(\mu\)s after a muon identified with a more stringent IWS PMT-hit multiplicity requirement of 6 \(<\) nHit \(\leq\) 12, which led to a \(<\)0.1% loss in livetime. To determine the rates of these two backgrounds, the prompt-energy spectra of the IBD-candidate sample were extended to 250 MeV and were fitted to the spectra of the previously described fast-neutron sample and the muon-x sample with IWS nHit = 7. Through extrapolation, we obtained their rates in the range of 0.7 MeV \(<E_{p}<\) 12 MeV.
We extracted the oscillation parameters using the survival probability of three-flavor oscillation, given by \[\begin{split} P=& 1-\cos^{4}\theta_{13}\sin^{2}2\theta_{12}\sin^{2}\Delta_{21}\\ &-\sin^{2}2\theta_{13}(\cos^{2}\theta_{12}\sin^{2}\Delta_{31}+\sin^{2}\theta_{12}\sin^{2}\Delta_{32}),\end{split} \tag{1}\] where \(\Delta_{ij}=\Delta m^{2}_{ij}L/(4\hbar cE)\) with \(\Delta m^{2}_{ij}\) in eV\({}^{2}\), \(L\) is the baseline in meters between an AD and a reactor core, and \(E\) is the energy of the \(\overline{\nu}_{e}\) in MeV. Alternatively, for short baselines of a few kilometers, the survival probability can be parametrized as \[P=1-\cos^{4}\theta_{13}\sin^{2}2\theta_{12}\sin^{2}\Delta_{21}-\sin^{2}2\theta_{13}\sin^{2}\Delta_{ee}. \tag{2}\] Here, the effective mass-squared difference \(\Delta m^{2}_{ee}\) is related to the wavelength of the oscillation observed at Daya Bay. Fitting method B, described in Ref. [2], was used in this work. Figure 2 shows the covariance contours in the \(\Delta m^{2}_{ee}-\sin^{2}2\theta_{13}\) space. The best-fit point, with \(\chi^{2}\)/ndf = 559/517, yields \(\sin^{2}2\theta_{13}=0.0851\pm 0.0024\), and \(\Delta m^{2}_{32}=(2.466\pm 0.060)\times 10^{-3}\)eV\({}^{2}\) for the normal mass ordering or \(\Delta m^{2}_{32}=-(2.571\pm 0.060)\times 10^{-3}\)eV\({}^{2}\) for the inverted mass ordering [1]. As shown in Fig. 3, the best-fit prompt-energy distribution is in excellent agreement with the observed spectra in each experimental hall. The normalized signal rate of the three halls as a function of \(L_{\rm eff}/\langle E_{\overline{\nu}_{e}}\rangle\), with the best-fit curve superimposed, is plotted in Fig. 4, where \(L_{\rm eff}\) and \(\langle E_{\overline{\nu}_{e}}\rangle\) are the effective baseline and the average \(\overline{\nu}_{e}\) energy, respectively. The oscillation pattern related to \(\theta_{13}\) is unambiguous.

Figure 2: Error ellipses in the \(\Delta m^{2}_{ee}-\sin^{2}2\theta_{13}\) space with the best-fit point indicated. The error bars display the one-dimensional 1-standard-deviation confidence intervals. The colored contours correspond to 1, 2, and 3 standard deviations.

Figure 3: The measured prompt-energy spectra of EH1, EH2, and EH3, with the best-fit and no-oscillation curves superimposed in the upper panels. The shapes of the backgrounds are apparent in the spectra with a logarithmic ordinate shown in the insets.

## 3 First Evidence of High Energy Reactor Antineutrinos

High-energy reactor antineutrinos are likely generated by only a handful of short-lived \(\beta\)-decay nuclei with high end-point energies, such as \({}^{88,90}\)Br and \({}^{94,96,98}\)Rb. Nevertheless, all previous measurements by reactor antineutrino experiments have focused on \(E_{p}<\) 8 MeV, due to the rarity of signals in the higher-energy region and the large contamination from cosmogenic backgrounds. In spite of their rarity, high-energy reactor antineutrinos serve as a significant background for future measurements, such as that of the diffuse supernova neutrino background expected to permeate the Universe. Moreover, direct measurements of high-energy antineutrinos can provide a valuable new perspective for nuclear data validation, relevant well beyond the bounds of neutrino physics. In this section, we report the first measurement of high-energy reactor antineutrinos at Daya Bay, with nearly 9000 inverse beta decay candidates in the prompt-energy region of 8-12 MeV from 1958 days of data collection [5].
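For illustration only, the effective two-flavor form in Eq. (2) can be evaluated with a short script. The \(\theta_{13}\) value below is the best fit quoted above, while \(\Delta m^{2}_{ee}\) and the solar parameters are set to typical values (an assumption on our part); this is of course a sketch, not the collaboration's fitting framework.

```python
import numpy as np

def survival_probability(L_over_E, sin2_2t13=0.0851, dm2_ee=2.5e-3,
                         sin2_2t12=0.851, dm2_21=7.53e-5):
    """Electron-antineutrino survival probability, Eq. (2).

    L_over_E: baseline over energy in m/MeV; mass splittings in eV^2.
    Uses Delta_ij = 1.267 * dm2_ij * L / E for L in m and E in MeV.
    """
    d21 = 1.267 * dm2_21 * L_over_E
    dee = 1.267 * dm2_ee * L_over_E
    cos2_t13 = 0.5 * (1.0 + np.sqrt(1.0 - sin2_2t13))  # cos^2(theta_13)
    return (1.0 - cos2_t13 ** 2 * sin2_2t12 * np.sin(d21) ** 2
            - sin2_2t13 * np.sin(dee) ** 2)

# Example: survival probability near the far-hall scale (L ~ 1650 m, E ~ 4 MeV).
print(survival_probability(1650.0 / 4.0))
```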
The main backgrounds with \(E_{p}\) in 8-12 MeV are from muon decays, cosmogenic fast neutrons, and cosmogenic isotope decays. A multivariate analysis is performed to statistically distinguish 2500 signal events from the background. The hypothesis of no reactor antineutrinos with neutrino energy above 10 MeV is rejected with a significance of 6.2 standard deviations. We observed a 29% antineutrino flux deficit in the prompt-energy region of 8-11 MeV compared to a recent model prediction. Additionally, our work provides the unfolded antineutrino spectrum above 7 MeV as a data-based reference for other experiments, as shown in Fig. 5.

## 4 Summary

To sum up, we have made a new measurement of \(\sin^{2}2\theta_{13}\) with a precision of 2.8%, with the mass-squared differences reaching a precision of about 2.4%. The reported \(\sin^{2}2\theta_{13}\) will likely remain the most precise measurement of \(\theta_{13}\) in the foreseeable future and be a crucial input for next-generation neutrino experiments studying the mass hierarchy and CP violation. Moreover, the hypothesis of no reactor antineutrinos with energy above 10 MeV is rejected with a significance of \(6.2\sigma\). More importantly, the measured energy region of reactor antineutrinos is extended above 10 MeV by direct measurement for the first time. Other recent results, such as the joint analysis of reactor antineutrino spectra with PROSPECT and the measurement of the evolution of the flux and spectrum, are not covered here; they can be found in Refs. [6, 7].

Figure 4: Measured disappearance probability as a function of the ratio of the effective baseline \(L_{\rm eff}\) to the mean antineutrino energy \(\langle E_{\overline{\nu}_{e}}\rangle\).
2309.05751
Compressive Mahalanobis Metric Learning Adapts to Intrinsic Dimension
Metric learning aims at finding a suitable distance metric over the input space, to improve the performance of distance-based learning algorithms. In high-dimensional settings, it can also serve as dimensionality reduction by imposing a low-rank restriction to the learnt metric. In this paper, we consider the problem of learning a Mahalanobis metric, and instead of training a low-rank metric on high-dimensional data, we use a randomly compressed version of the data to train a full-rank metric in this reduced feature space. We give theoretical guarantees on the error for Mahalanobis metric learning, which depend on the stable dimension of the data support, but not on the ambient dimension. Our bounds make no assumptions aside from i.i.d. data sampling from a bounded support, and automatically tighten when benign geometrical structures are present. An important ingredient is an extension of Gordon's theorem, which may be of independent interest. We also corroborate our findings by numerical experiments.
Efstratios Palias, Ata Kabán
2023-09-11T18:15:51Z
http://arxiv.org/abs/2309.05751v3
# The Effect of Intrinsic Dimension on Learning a Mahalanobis Metric under Compression

###### Abstract

Metric learning aims at finding a suitable distance metric over the input space, to improve the performance of distance-based learning algorithms. In high-dimensional settings, it can also serve as dimensionality reduction by imposing a low-rank restriction on the learnt metric. In this paper, we consider the problem of learning a Mahalanobis metric, and instead of training a low-rank metric on high-dimensional data, we use a randomly compressed version of the data to train a full-rank metric in this reduced feature space. We give theoretical guarantees on the error for Mahalanobis metric learning, which depend on the stable dimension of the data support, but not on the ambient dimension. Our bounds make no assumptions aside from i.i.d. data sampling from a bounded support, and automatically tighten when benign geometrical structures are present. An important ingredient is an extension of Gordon's theorem, which may be of independent interest. We also corroborate our findings by numerical experiments.

## 1 Introduction

In clustering and classification, numerous distance-based algorithms have been proposed. While the Euclidean metric is the "standard" notion of distance between numerical vectors, it does not always result in accurate learning. This can be due, e.g., to the presence of many dependent features, noise, or features with large ranges that dominate the distances [1]. Metric learning aims at lessening this caveat by linearly transforming the feature space in a way that properly weights all important features and discards redundant ones. In its most common form, metric learning focuses on learning a Mahalanobis metric [2, 3, 4]. Metric learning algorithms can be divided into two types based on their purpose [1]. _Distance-based metric learning_ aims to increase the distances between instances of different classes (inter-class distances) and decrease the distances inside the same class (intra-class distances). On the other hand, _classifier-based metric learning_ focuses on directly improving the performance of a particular classification algorithm, and is therefore dependent on the algorithm in question. Examples of both types of algorithms abound in the literature; see e.g. [5] and references therein. Despite the success of Mahalanobis metric learning, high dimensionality of the data is a provable bottleneck that arises fairly often in practice. The work of [1] has shown, through both upper and lower bounds, that the sample complexity of Mahalanobis metric learning in the general case increases linearly with the data dimension. In addition, so does the computational complexity. Indeed, high dimensionality is known to quickly degrade the performance of machine learning algorithms in practice. This means that, even if a suitable distance metric is found, the subsequent algorithm might still perform poorly. All these issues are collectively known as the _curse of dimensionality_ [6]. It has been observed, however, that many real-world data sets do not fill their ambient spaces evenly in all directions; instead, their vectors cluster along a low-dimensional subspace with less mass in some directions, or have many redundant features [7]. We refer to these data sets, in a broad sense, as having a low _intrinsic dimension_ (low-ID).
Due to their lower information content, it is intuitively expected that learning from such a data set should be easier, both statistically and computationally. One of the most popular ways to take advantage of a low ID is to compress the original data set into a low-dimensional space and then proceed with learning in this smaller space [8]. Random projections are a widely used compression method with attractive theoretical guarantees. These guarantees are universal, in the sense of being oblivious to the data being compressed: all instances are subjected to a random linear mapping that does not significantly distort Euclidean distances, while reducing subsequent computing time. There has been much research on controlling the loss of accuracy with random projections for various learning algorithms; see e.g. [9, 10]. Another advantage is that no pre-processing step is necessary beforehand, making random projections simple to implement [8]. In the case of Mahalanobis metric learning, an additional motivation is to reduce the number of parameters to be estimated.

### Our contributions

We consider the problem of learning a Mahalanobis metric from random projections (RP) of the data, and for the case of Gaussian RP give the following theoretical guarantees:

* a high-probability uniform upper bound on the generalisation error;
* a high-probability upper bound on the excess empirical error of the learnt metric, relative to the empirical error of the metric learnt in the original space.

The quantities in these two theoretical guarantees (given in Theorems 5 and 7, respectively) capture a trade-off in compressive learning of a Mahalanobis metric: as the projection dimension decreases, the first quantity becomes lower and the second becomes higher. Most importantly, unlike metric learning in the original high-dimensional space, we find that neither of these two quantities depends on the ambient dimension explicitly, but only through a notion of ID, namely the so-called _stable dimension_, defined in Definition 2. The stable dimension is a robust version of the classical algebraic dimension, which "averages" the spread of a set across different directions. This shows that the aforementioned trade-off can be reduced, should the stable dimension be low. We corroborate our theoretical findings with numerical experiments on synthetic data in order to show the extent to which the stable dimension plays a role in the effectiveness of metric learning in practice. We also experiment with real data sets, to illustrate the aforementioned trade-off between accuracy and complexity. As an important ingredient of our analysis, we revisit a well-known result due to Gordon [11] that uniformly bounds the maximum and minimum norms of vectors in the compressed unit sphere under a Gaussian RP. We extend this result into a dimension-free version, for arbitrary domains, in Lemma 4, which may be of independent interest. While this analytic tool is specific to Gaussian RPs, we find experimentally that other Johnson-Lindenstrauss projections behave similarly in the Mahalanobis metric learning problem.

## 2 Main Results: Metric Learning Under Gaussian Random Projection

**Notation.** We denote scalars and vectors by lowercase letters, and matrices by capital letters. The Euclidean norm of a vector is denoted \(\|\cdot\|\), whereas the Frobenius norm of a matrix is denoted \(\|\cdot\|_{F}\). The trace of a matrix is denoted \(\operatorname{tr}(\cdot)\).
We let \(\sigma_{\min}(\cdot)\) and \(\sigma_{\max}(\cdot)\) be respectively the smallest and largest singular values of a matrix. \(I_{n}\) denotes the \(n\times n\) identity matrix, and \(0_{n}\) denotes the \(n\)-dimensional zero vector. The notation \(\mathcal{N}(\mu,\Sigma)\) stands for the Gaussian distribution with mean vector \(\mu\) and covariance matrix \(\Sigma\). We denote by \(\mathbb{E}\) the expectation of a random variable (or random vector). \(\mathbf{1}\{\cdot\}\) is the indicator function, that equals \(1\) if its argument is true, and \(0\) otherwise. For a set \(T\), we write \(T-T:=\{x-x^{\prime}:x,x^{\prime}\in T\}\). We denote by \(\mathcal{S}^{n-1}\) the \(n\)-dimensional unit sphere. We now formally introduce the problem of Mahalanobis metric learning, as well as the random compression that we use. Let \(\mathcal{X}\times\mathcal{Y}\) be the instance space, where \(\mathcal{X}\subset\mathbb{R}^{d}\) is the feature space and \(\mathcal{Y}=\{-1,1\}\) is the set of labels. We consider the usual setting where all instances are assumed to have been sampled i.i.d. from a fixed but unknown distribution \(\mathcal{D}\) over \(\mathcal{X}\times\mathcal{Y}\). For our derivations, the diameter1 of \(\mathcal{X}\) is assumed finite, that is \(\operatorname{diam}(\mathcal{X})<\infty\). Footnote 1: Recall that for a set \(T\subset\mathbb{R}^{d}\), its diameter is defined as \(\operatorname{diam}(T):=\sup_{x,x^{\prime}\in T}\|x-x^{\prime}\|\). The goal of Mahalanobis metric learning is to learn a matrix \(M\in\mathbb{R}^{d\times d}\), such that the Mahalanobis distance between any two instances \(x,x^{\prime}\), i.e. \(\|Mx-Mx^{\prime}\|\), is larger if \(x,x^{\prime}\) have different labels and smaller if \(x,x^{\prime}\) share the same label. For the purpose of dimensionality reduction, given a fixed \(k\), where \(k\leq d\), we let \(R\in\mathbb{R}^{k\times d}\) be our random projection (RP) matrix. We assume that each datum instance is available only in its RP-ed form (as in compressed sensing applications). We will be referring to \(d\) and \(k\) as the _ambient dimension_ and the _projection dimension_ respectively. While there are several possible choices for the random matrix \(R\) (i.e. the data sensing matrix), in our theoretical analysis we employ the _Gaussian random projection_. That is, the elements of \(R\) are drawn i.i.d. from \(\mathcal{N}(0,1/k)\). The motivation for this choice is twofold: it is known to have the ability to approximately preserve the relative distances among the original data with high probability [12, 13], and in addition it allows us to employ some specialised theoretical results for tighter guarantees. However, in the experimental section, we will also test other types of distance-preserving RPs and observe similar behaviour to the Gaussian RP in the problem of compressively learning a Mahalanobis metric. Next, we define the hypothesis classes of Mahalanobis metrics. Let \[\mathcal{M}:=\{M_{0}\in\mathbb{R}^{d\times d}:\sigma_{\max}(M_{0})=1/ \operatorname{diam}(\mathcal{X})\} \tag{1}\] be the hypothesis class in the ambient space, and \[\mathcal{M}_{k}:=\{M\in\mathbb{R}^{k\times k}:\sigma_{\max}(M)=1/ \operatorname{diam}(\mathcal{X})\} \tag{2}\] be the hypothesis class in the compressed space2\(R\mathcal{X}\), where the constraints on \(\sigma_{\max}\) are to avoid arbitrary scaling, and to make our main results scale-invariant. 
Footnote 2: With a slight abuse of notation, if \(T\) is a set and \(A\) is a conformable matrix, we write \(AT:=\{Ax:x\in T\}\).

Let \[(\mathcal{X}\times\mathcal{Y})^{2}\supset T:=\{((x_{2i-1},y_{2i-1}),(x_{2i},y_{2i}))\}_{i=1}^{n} \tag{3}\] be a training set of \(n\) pairs of instances that contains no pairs of duplicate elements. Also, let \(\ell_{l,u}:\mathbb{R}\times\{0,1\}\to[0,1]\) be a distance-based loss function defined as \[\ell_{l,u}(x,y):=\begin{cases}\min\{1,\rho(x-u)_{+}\}&\text{if }y=1,\\ \min\{1,\rho(l-x)_{+}\}&\text{if }y=0,\end{cases} \tag{4}\] where \((\cdot)_{+}:=\max\{\cdot,0\}\), and \(\rho,l,u\) are positive numbers with \(l<u\). Note that \(\ell_{l,u}\) is \(\rho\)-Lipschitz in its first argument, a property we exploit later in the derivations. This loss function penalizes small inter-class distances and large intra-class distances, and is a common choice for distance-based metric learning [1]. We next define the true error of a hypothesis \(M\in\mathcal{M}_{k}\), given the matrix \(R\), as \[L_{\mathcal{D}}^{R}(M):=\underset{((x,y),(x^{\prime},y^{\prime}))\sim\mathcal{D}^{2}}{\mathbb{E}}\ell_{l,u}(\|MRx-MRx^{\prime}\|^{2},\mathbf{1}\{y=y^{\prime}\}), \tag{5}\] and its empirical error, given the training set \(T\) from (3), as \[\hat{L}_{T}^{R}(M):=\frac{1}{n}\sum_{i=1}^{n}\ell_{l,u}(\|MRx_{2i-1}-MRx_{2i}\|^{2},\mathbf{1}\{y_{2i-1}=y_{2i}\}). \tag{6}\] For a hypothesis \(M_{0}\in\mathcal{M}\), the true and empirical errors are defined analogously, by omitting \(R\) and considering the original vectors. That is, the true error is defined as \[L_{\mathcal{D}}(M_{0}):=\underset{((x,y),(x^{\prime},y^{\prime}))\sim\mathcal{D}^{2}}{\mathbb{E}}\ell_{l,u}(\|M_{0}x-M_{0}x^{\prime}\|^{2},\mathbf{1}\{y=y^{\prime}\}), \tag{7}\] and the empirical error is defined as \[\hat{L}_{T}(M_{0}):=\frac{1}{n}\sum_{i=1}^{n}\ell_{l,u}(\|M_{0}x_{2i-1}-M_{0}x_{2i}\|^{2},\mathbf{1}\{y_{2i-1}=y_{2i}\}). \tag{8}\] We would first like to upper bound the generalisation error \((L_{\mathcal{D}}^{R}(M)-\hat{L}_{T}^{R}(M))\) uniformly, for all \(M\in\mathcal{M}_{k}\), with high probability with respect to the random draws of \(R\). To this end, let us introduce some complementary definitions and results that appear in the derivations.

**Definition 1** (Gaussian width [14, Definition 7.5.1]).: _Let \(T\subset\mathbb{R}^{d}\) be a set, and \(g\sim\mathcal{N}(0_{d},I_{d})\). The Gaussian width of \(T\) is defined as_ \[\omega(T):=\mathbb{E}\sup_{x\in T}g^{\top}x, \tag{9}\] _and the squared version of the Gaussian width of \(T\) is defined as_ \[\psi(T):=\sqrt{\mathbb{E}\sup_{x\in T}(g^{\top}x)^{2}}. \tag{10}\]

Definition 1 allows us to introduce a more robust version of the algebraic dimension, as follows.

**Definition 2** (Stable dimension [14, Definition 7.6.2]).: _The stable dimension of a set \(T\subset\mathbb{R}^{d}\), with \(0<\operatorname{diam}(T)<\infty\), is defined as_ \[s(T):=\frac{\psi(T-T)^{2}}{\operatorname{diam}(T)^{2}}. \tag{11}\]

Intuitively, we can view the stable dimension as a more "detailed" version of the algebraic dimension that takes into account the relative spread of a set across different directions. It is straightforward to show that for any bounded set \(T\subset\mathbb{R}^{d}\), \(s(T)\leq d\) (see again [14, Section 7.6]). However, the stable dimension can be much lower than the algebraic dimension, even if the latter is allowed to be infinite.
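For intuition, the quantities in Definitions 1 and 2 can be estimated by Monte Carlo when \(T\) is a finite sample; the following NumPy sketch is our own illustration under that finite-sample reading, not code from the paper.

```python
import numpy as np

def stable_dimension(X, n_draws=1000, seed=0):
    """Monte Carlo estimate of s(T) for a finite sample T = rows of X (n x d).

    psi(T - T)^2 = E sup_{z in T-T} (g^T z)^2 is estimated by averaging, over
    Gaussian draws g, the maximum of (g^T z)^2 over all pairwise differences z.
    Note: this materializes all n^2 pairwise differences -- fine for a sketch.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    diffs = (X[:, None, :] - X[None, :, :]).reshape(-1, d)
    g = rng.standard_normal((n_draws, d))
    psi_sq = np.mean(np.max((g @ diffs.T) ** 2, axis=1))
    diam_sq = np.max(np.sum(diffs ** 2, axis=1))
    return psi_sq / diam_sq

# Sanity check: a cloud spread mainly along 3 directions out of 50 gives an
# estimate much smaller than the ambient dimension d = 50.
X = np.random.default_rng(1).standard_normal((100, 50))
X *= np.array([10.0] * 3 + [0.1] * 47)
print(stable_dimension(X))
```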
As we shall see, the stable dimension of the data support appears in the upper bounds we derive for the generalisation error and for the excess empirical error. We will also be using the following lemma about the relation between \(\omega(\cdot)\) and \(\psi(\cdot)\). **Lemma 3** ([14, Section 7.6]).: _For any set \(T\subset\mathbb{R}^{d}\),_ \[\omega(T-T)\leq\psi(T-T). \tag{12}\] The backbone of our two main results is an extension of the upper bound of the well-known Gordon's theorem [11] (see also Theorem 5.6 in [15]) from the unit sphere to arbitrary sets.3 While we are aware of more general results that assume sub-gaussian random matrices (e.g. [14, Section 9.1]), we offer a simpler proof for the Gaussian case that is free of any unspecified constants, and can thus be of interest in its own right. This is provided in the following lemma.
Footnote 3: The proofs of all our new results are deferred to the supplementary material.
**Lemma 4**.: _Let \(R\in\mathbb{R}^{k\times d}\) be a matrix with elements i.i.d. from \(\mathcal{N}(0,1)\), and let \(T\subset\mathbb{R}^{d}\) be a set such that \(\sup_{x\in T}\|x\|=b\). Also, let \(a(k):=\mathbb{E}\left\|z_{k}\right\|\), where \(z_{k}\sim\mathcal{N}(0_{k},I_{k})\). Then, for any \(\epsilon>0\), with probability at least \(1-\exp(-\epsilon^{2}/2b^{2})\), we have_ \[\sup_{x\in T}\|Rx\|\leq ba(k)+\omega(T)+\epsilon, \tag{13}\] _where \(\omega(\cdot)\) is the Gaussian width from Definition 1._ It is well-known that \(\frac{k}{\sqrt{k+1}}\leq a(k)\leq\sqrt{k}\). Applying Lemma 4, we can derive the following uniform, high-probability upper bound for the generalisation error of the compressed hypothesis class. **Theorem 5** (Compressed generalisation error).: _Let \(R\in\mathbb{R}^{k\times d}\), with elements i.i.d. from \(\mathcal{N}(0,1/k)\), \(T\subset(\mathcal{X}\times\mathcal{Y})^{2}\) be the training set defined in (3), \(\mathcal{M}_{k}\) be the hypothesis class defined in (2), \(L^{R}_{\mathcal{D}}\) be the compressed true error defined in (5), and \(\hat{L}^{R}_{T}\) be the compressed empirical error defined in (6). Then, for any \(0<\epsilon<1\), and for all \(M\in\mathcal{M}_{k}\), with probability at least \(1-\epsilon\), we have_ \[L^{R}_{\mathcal{D}}(M)-\hat{L}^{R}_{T}(M)\leq 2\rho\sqrt{\frac{k}{n}}\left(1+ \sqrt{\frac{s(\mathcal{X})}{k}}+\sqrt{\frac{2\ln\frac{2}{\epsilon}}{k}} \right)^{2}+\sqrt{\frac{\ln\frac{2}{\epsilon}}{2n}}, \tag{14}\] _where \(s(\cdot)\) is the stable dimension from Definition 2._ We can see that the ambient dimension does not appear in the generalisation bound, and is instead replaced by the stable dimension of the data support. This implies that, unless the data support fills the whole ambient space, the empirical error calculated in the compressed space is closer to the true error in the compressed space than a bound in terms of the ambient dimension would suggest. The behaviour of this bound with \(k\) and \(n\) is as expected, since higher values of \(k\) result in more complex hypothesis classes, whereas a larger \(n\) reduces the discrepancy between the true and empirical errors.
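The empirical quantity \(\hat{L}_{T}^{R}(M)\) appearing in Theorem 5 is directly computable from data. A minimal sketch (ours; names are illustrative) following the definitions (4) and (6):

```python
import numpy as np

def ell(x, y, rho, l, u):
    """The loss of Eq. (4): x is a squared distance, y = 1 for same-label pairs.

    y = 1 penalises distances above u; y = 0 penalises distances below l.
    The value is clipped to [0, 1] and is rho-Lipschitz in x.
    """
    margin = x - u if y == 1 else l - x
    return min(1.0, rho * max(margin, 0.0))

def compressed_empirical_error(M, R, pairs, rho, l, u):
    """Hat L_T^R(M) of Eq. (6), averaged over pairs ((x1, y1), (x2, y2))."""
    total = 0.0
    for (x1, y1), (x2, y2) in pairs:
        sq = float(np.sum((M @ (R @ (x1 - x2))) ** 2))  # ||MRx1 - MRx2||^2
        total += ell(sq, int(y1 == y2), rho, l, u)
    return total / len(pairs)
```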
**Remark 6**.: For learning a Mahalanobis metric in the original data space, previous work of [1] implies the following uniform upper bound on the generalisation error. For any \(0<\epsilon<1\), and for all \(M_{0}\in\mathcal{M}\), with probability at least \(1-\epsilon\), we have \[L_{\mathcal{D}}(M_{0})-\hat{L}_{T}(M_{0})\leq 2\rho\sqrt{\frac{d}{n}}+\sqrt{ \frac{\ln\frac{1}{\epsilon}}{2n}}, \tag{15}\] where \(L_{\mathcal{D}}\) and \(\hat{L}_{T}\) are respectively the true error defined in (7) and the empirical error defined in (8). If, in addition, a Frobenius norm constraint is imposed on the class (1) (which we did not impose), then \(d\) is replaced in the bound by the upper bound of the Frobenius norm constraint. Although our uniform bound for \(\mathcal{M}_{k}\) in Theorem 5 is similar in flavour to this latter result under the norm constraint, its purpose is different. In [1], one tries to learn a metric with a low Frobenius norm. In our case, we are instead interested in quantifying the trade-off induced by the random projection between the generalisation error and the excess empirical error (see Theorem 7 below for the latter), without norm constraints. An advantage we gain is that we do not need to know a bound on the Frobenius norm of the metric beforehand; instead, we only need to set the projection dimension \(k\). Besides this, of course, the main gain lies in the time and space savings of learning a \(k\times k\) instead of a \(d\times d\) matrix. However, a generalisation bound is not the complete story when we work with the RP-ed data, as there is usually a trade-off between accuracy and complexity. Intuitively, we can expect that, as the projection dimension \(k\) decreases, we obtain a lower complexity of the compressed hypothesis class, but a higher empirical error (due to the potential distortion that results from the compression). We already upper bounded the former in Theorem 5, so we next upper bound the latter, with high probability, as follows. **Theorem 7** (Excess empirical error).: _Let \(R\in\mathbb{R}^{k\times d}\), with elements i.i.d. from \(\mathcal{N}(0,1/k)\), \(T\subset(\mathcal{X}\times\mathcal{Y})^{2}\) be the training set defined in (3), \(\mathcal{M}\) and \(\mathcal{M}_{k}\) be the hypothesis classes defined in (1) and (2) respectively, \(\hat{L}_{T}\) be the empirical error defined in (8), and \(\hat{L}_{T}^{R}\) be the compressed empirical error defined in (6). Then, for any \(0<\epsilon<1\), and for all \(M\in\mathcal{M}_{k}\) and \(M_{0}\in\mathcal{M}\), with probability at least \(1-\epsilon\), we have_ \[\hat{L}_{T}^{R}(M)-\hat{L}_{T}(M_{0})\leq\rho\left(1+\sqrt{\frac{s(\mathcal{ X})}{k}}+\sqrt{\frac{2\ln\frac{1}{\epsilon}}{k}}\right)^{2}. \tag{16}\] Examining the bound in Theorem 7, we can see that it does not depend on the ambient dimension, but on the stable dimension of the data support, just like the bound in Theorem 5. This means that if the empirical error in the ambient space is small, the empirical error in the compressed space scales with the stable dimension instead of the ambient dimension. It is also decreasing in \(k\), as expected. Finally, the sample size \(n\) does not appear at all, as it is assumed to be the same for training both \(M\) and \(M_{0}\), and cancels out in the derivation. **Remark 8**.: The motivation behind generalising Gordon's theorem to our Lemma 4 was to make our main results dimension-free. Indeed, applying the original Gordon's theorem in our derivations of Theorems 5 and 7, we would obtain the same formulas, but with \(d\) in place of \(s(\mathcal{X})\). As we already mentioned, it can be the case that \(s(\mathcal{X})\ll d\); thus our results adapt to a notion of low intrinsic dimension (ID), and unveiling such a low-ID dependence was the overall goal of our paper.
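To get a feel for the trade-off quantified by Theorems 5 and 7, one can tabulate the two right-hand sides as functions of \(k\). The sketch below (ours) does this for illustrative parameter values, which are assumptions and not values used in the paper:

```python
import numpy as np

# Trade-off: the generalisation term (14) grows with k, while the excess
# empirical term (16) shrinks with k. Parameter values are illustrative.
rho, n, s_X, eps = 1.0, 2000, 20.0, 0.05

def gen_bound(k):      # right-hand side of Eq. (14)
    c = (1 + np.sqrt(s_X / k) + np.sqrt(2 * np.log(2 / eps) / k)) ** 2
    return 2 * rho * np.sqrt(k / n) * c + np.sqrt(np.log(2 / eps) / (2 * n))

def excess_bound(k):   # right-hand side of Eq. (16)
    return rho * (1 + np.sqrt(s_X / k) + np.sqrt(2 * np.log(1 / eps) / k)) ** 2

for k in (5, 10, 20, 50, 100, 200):
    print(f"k={k:4d}  generalisation <= {gen_bound(k):.3f}"
          f"  excess empirical <= {excess_bound(k):.3f}")
```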
To summarise, a Gaussian random projection incurs a lower generalisation gap for Mahalanobis metric learning, but induces an excess empirical error, compared to learning the metric in the ambient space. In our bounds, both quantities depend on the stable dimension of the data support instead of the ambient dimension, so these bounds automatically tighten when the stable dimension is low. We next illustrate the effects that the stable dimension has on metric learning in numerical experiments.
## 3 Experiments
In this section we conduct numerical experiments to validate our theoretical guarantees in practice, on both synthetic and benchmark data sets, when learning a Mahalanobis metric in compressed settings. To design our experiments, let us recall that we derived theoretical guarantees for two quantities: * the generalisation error of metric learning under Gaussian random projection; and * the excess empirical error incurred relative to that of metric learning in the ambient space. Both of these quantities were found to depend on \(k\) and \(s(\mathcal{X})\) instead of \(d\). The main goal of our experiments is to find how much distortion is incurred by different choices of the projection dimension \(k\), and how it is affected by \(s(\mathcal{X})\). The motivation is that if the distortion is minimal for some \(k\), we can enjoy almost the same empirical performance as in the ambient space, but with a much lower time complexity, as we operate in the compressed space. Therefore, the trade-off between accuracy and complexity can be minimised by choosing an appropriate value for \(k\). Due to space constraints, in our figures we only report the error rates achieved by the compressive algorithm, and omit the computational time, which is clearly strictly increasing in \(k\). We start with a brief overview of our experimental setup.4 We first choose the original data set in the ambient space. We then perform a Gaussian random projection and train a metric using Large Margin Nearest Neighbour (LMNN) [3] in the compressed space. Finally, we use 1-Nearest Neighbours (1-NN) to evaluate the quality of the learned Mahalanobis metric on the compressed set, and report the out-of-sample test error. We repeat this process 10 times independently, for a number of choices of the projection dimension.
Footnote 4: The formal steps for all of our experiments are detailed in the Appendix.
### Experiments with synthetic data sets
Synthetic data allow easy control of the stable dimension of their support; hence, they let us test the explanatory power of our theoretical results. We take the data support to be an ellipsoid of the form \(\mathcal{X}=A\mathcal{S}^{d-1}\), where \(A\in\mathbb{R}^{d\times d}\) with \(\sigma_{\max}(A)=1\) is a diagonal, positive-definite matrix (without loss of generality, since the algorithm is rotation-invariant). We vary the stable dimension \(s(\mathcal{X})\) by considering different rates of decay of the diagonal elements of \(A\). We generate a sample set of 2000 instances, \(\{x_{i}\}_{i=1}^{2000}\), sampled uniformly at random over \(\mathcal{X}\), and employ a train/test ratio of 80%/20%. By construction, in this setting the stable dimension of \(\mathcal{X}\) has the closed-form expression \(s(\mathcal{X})=(\|A\|_{F}/\sigma_{\max}(A))^{2}\) [14, Section 7.6].
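A minimal sketch of this construction (ours; mapping uniform sphere samples through \(A\) is used here as a simple stand-in for uniform sampling over \(\mathcal{X}\), which the paper does not specify in detail), together with the closed-form stable dimension:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 500
a = 1.0 / np.arange(1, d + 1)                   # i-th diagonal element of A: 1/i
a = a / a.max()                                 # enforce sigma_max(A) = 1
z = rng.normal(size=(n, d))
z /= np.linalg.norm(z, axis=1, keepdims=True)   # uniform on the sphere S^{d-1}
X = z * a                                       # points on the ellipsoid A S^{d-1}
s_X = np.sum(a ** 2) / a.max() ** 2             # s(X) = (||A||_F / sigma_max(A))^2
print(f"s(X) = {s_X:.2f} for d = {d}")          # far below d for fast decay
```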
Hence, according to our theoretical results, we expect that increasing \(d\) should not blow up the out-of-sample test error, as long as \(s(\mathcal{X})\) does not increase significantly. We employ the Gaussian random projection in these experiments. Further results with other types of random projections for the same sets are included in the Appendix. We want to compare the out-of-sample test error in the compressed space with the error in the ambient space, across several choices of \(k\). For this reason, we consider settings where the empirical error in the ambient space is small, so that it is enough to examine only the empirical error in the compressed space, saving computational time. For the purpose of maintaining a small (but not zero) empirical error in the ambient space, we considered linearly separable class supports, where 1-NN can achieve almost perfect classification. Specifically, the original labels were set to \(y_{i}:=\text{sign}(w^{\top}x_{i})\) for all \(i\in[2000]\), where \(w\) was sampled from \(\mathcal{N}(0_{d},I_{d})\) and then fixed for each value of \(d\).
Figure 1: Out-of-sample error of 1-NN on synthetic data sets, with metric learning (solid lines) and without metric learning (dashed lines), for several choices of \(d\) and \(s(\mathcal{X})\). The data support was \(\mathcal{X}=A\mathcal{S}^{d-1}\), where \(A\in\mathbb{R}^{d\times d}\) is a diagonal matrix, and the legend shows its \(i\)-th diagonal element, for \(i\in[d]\). The curves represent averages over 10 independent Gaussian random projections. The error bars show intervals of one standard error.
Figure 1 shows the empirical results obtained. As expected from the theory, we see that the error is affected by the stable dimension, which, in turn, depends on the rate of decay of the eigenvalues of \(A\) (shown in the legends), and is unaffected by the ambient dimension. To confirm this, we repeated the experiments with different values of \(d\) and different decay rates of the eigenvalues of \(A\). We can see that going from \(d=100\) to \(d=500\) incurs a small increase in the error for all decay rates. This is because \(s(\mathcal{X})\) increases considerably when the eigenvalues do not decay fast enough, and so does the error. When going from \(d=500\) to \(d=1000\), however, all rates of decay retain about the same error, as only small eigenvalues are added and \(s(\mathcal{X})\) increases only slightly. This observation, of course, does not consider the small fluctuations that result from the randomness of the compression. In addition, comparing the solid and dashed lines in Figure 1, we see that metric learning appears to be useful even for small values of \(k\), provided \(s(\mathcal{X})\) is small. While random compression is expected to make the covariance of the data distribution more "spherical", and can potentially distort any separation that existed, if the support is highly anisotropic, i.e., \(s(\mathcal{X})\) is low, then the separability can still be preserved to a high degree. Thus, a learnt metric can still outperform the classic Euclidean metric. Indeed, we can see that, in the \((1/i)\)-case, the dashed lines diverge from the respective solid lines as \(k\) increases. An exception to the above is when \(s(\mathcal{X})\) is too small and the empirical error without metric learning is already low, so learning a metric does not significantly improve it.
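A self-contained sketch of the skeleton of this experiment follows (ours, not the paper's code). It reports only the Euclidean 1-NN baseline (the dashed lines); a metric learner such as LMNN from the metric-learn package would be fitted on the training split at the marked line, which we omit to keep the sketch dependency-light:

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 500
a = 1.0 / np.arange(1, d + 1)                       # eigenvalue decay 1/i
z = rng.normal(size=(n, d))
X = (z / np.linalg.norm(z, axis=1, keepdims=True)) * a
w = rng.normal(size=d)                              # fixed for this value of d
y = np.where(X @ w >= 0, 1, -1)                     # linearly separable labels

for k in (5, 10, 25, 50, 100):
    errs = []
    for r in range(10):                             # 10 independent projections
        Xk = GaussianRandomProjection(n_components=k, random_state=r).fit_transform(X)
        Xtr, Xte, ytr, yte = train_test_split(Xk, y, test_size=0.2, random_state=r)
        # An LMNN-style metric learner would be fitted on (Xtr, ytr) here.
        knn = KNeighborsClassifier(n_neighbors=1).fit(Xtr, ytr)
        errs.append(1.0 - knn.score(Xte, yte))
    print(f"k={k:3d}  mean 1-NN error {np.mean(errs):.3f} +/- {np.std(errs):.3f}")
```

Note that scikit-learn's GaussianRandomProjection draws its components from \(\mathcal{N}(0,1/k)\), matching the theoretical setup above.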
### Experiments with benchmark data sets
Benchmark data sets serve to test the usefulness and effectiveness of metric learning under compression in a more general context, and its adaptability to noisy settings. In real data, the value of the stable dimension of the support is unknown, but one may expect some structure that metric learning can exploit. We follow the same experimental protocol as for the synthetic data sets (80%/20% split) and compute the empirical error for varying degrees of compression. We want to test whether the trade-off can be minimised by some value of \(k\). Our test experiments are somewhat inspired by the evaluation idea in [1], where noise features were appended to low-dimensional data to test the abilities of metric learning. We start from three benchmark UCI data sets with moderate ambient dimension from [16]: Ionosphere (2 labels, 33 features, 351 instances), Wine (3 labels, 13 features, 178 instances), and Sonar (2 labels, 60 features, 208 instances). For each set, we normalised its features to \([0,1]\), embedded it into a higher-dimensional ambient space, and added Gaussian noise with variance \(\gamma\) to all features and all instances. This simulates the "noisy subspace hypothesis", in which the data cluster in a noisy low-dimensional subspace of the ambient space [17, Section 1.1]. We aim to test whether the Gaussian random projection is still able to preserve information from the features that span the underlying subspace. We also repeated the experiments for different values of \(\gamma\), to test how easily metric learning can adapt in each case. In the Appendix, we repeat these experiments for different compression schemes, to draw comparisons with the Gaussian. Figure 2 shows the results. As we can see, the higher the noise variance \(\gamma\), the higher the average error incurred by the algorithm. However, in almost all cases, there seems to be a lower bound for \(k\), above which the performance stops improving significantly. This means that the trade-off between accuracy and complexity can be minimised by choosing that value of \(k\) (e.g. by employing cross-validation-type procedures). In the Appendix, we revisit this setting for higher-dimensional embeddings. Regarding the performance of metric learning compared to the Euclidean metric, it is not straightforward to draw any conclusions, because it depends on the unknown structure of the data and the available sample size, although in the higher-noise regime we see consistent outperformance from learning the metric.
Figure 2: Out-of-sample error of 1-NN classification with metric learning (solid lines) and with the Euclidean metric (dashed lines), on benchmark UCI data sets. All data sets were normalised to \([0,1]\), embedded into 100 dimensions, and had i.i.d. Gaussian random noise of variance \(\gamma\) (shown in the legend) added to each of their instances. A train/test ratio of 80%/20% was used. The curves represent averages over 10 independent Gaussian random projections. The error bars show intervals of one standard error.
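A sketch of the noisy embedding step under the stated protocol (ours; zero-padding into the ambient space is our assumption, as the paper does not fix the embedding, and the random matrix below only stands in for the real UCI features):

```python
import numpy as np

def embed_with_noise(X, ambient_dim, gamma, seed=0):
    """Normalise features to [0, 1], zero-pad to `ambient_dim`, add N(0, gamma) noise.

    Mimics the "noisy subspace" setup: the original features span a
    low-dimensional subspace of the ambient space, perturbed everywhere by noise.
    """
    rng = np.random.default_rng(seed)
    X01 = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)
    E = np.zeros((X.shape[0], ambient_dim))
    E[:, : X.shape[1]] = X01                 # original features fill the first coordinates
    return E + rng.normal(scale=np.sqrt(gamma), size=E.shape)

wine_like = np.random.default_rng(1).uniform(size=(178, 13))   # stand-in for Wine
print(embed_with_noise(wine_like, ambient_dim=100, gamma=0.05).shape)  # (178, 100)
```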
## 4 Related work
Mahalanobis metric learning was introduced in [2] and has attracted a significant amount of research since. Shortly after its introduction, two of the most popular metric learning algorithms were proposed: Large Margin Nearest Neighbour (LMNN) [3] and Information Theoretic Metric Learning (ITML) [4]. Generalisations and extensions of metric learning algorithms have also been well-studied. We refer the reader to the surveys in [18, 19] for a more detailed review of metric learning algorithms. There have also been attempts to learn non-linear metrics (e.g. [20, 21]), as well as to train neural networks for metric learning, known as deep metric learning (see [22] for a survey). Metric learning has also been applied to other fields, e.g. collaborative filtering [23] and facial recognition [24]. Much recent literature has been devoted to mitigating the undesirable effects of the curse of dimensionality on metric learning. A typical approach is to train a low-dimensional metric in the ambient space; this was demonstrated to improve the classification performance (see e.g. [25] and the references therein). A notable thread of research, which our work builds upon, appears in a NeurIPS paper [1]. The authors consider both distance-based and classifier-based Mahalanobis metric learning, and show that in general its sample complexity necessarily grows with the dimension of the features in the data, unless a Frobenius norm constraint is imposed on the hypothesis class of Mahalanobis metrics, in which case a smaller sample size is required for a low generalisation error, provided that the problem admits such a constraint. Because the optimal constraint is not known in advance, their choice of Frobenius norm constraint is somewhat arbitrary. In a closely related model, namely a quadratic classifier class, [26] found that the nuclear-norm constraint leads to the ability of the error to adapt to a notion of intrinsic dimension of the data (the effective rank of the true covariance), while the Frobenius norm constraint was shown to lack this ability. Their bound still has a mild logarithmic dependence on the ambient dimension. All of the above methods (and most others) work with the full data set, which can be limiting with high-dimensional data. In real-world high-dimensional settings, it is common to only have access to a compressed version of the original features. This can be either due to space constraints, or because a lower-dimensional data set is easier to work with from a computational point of view. Learning from, or reconstructing, compressed observations has been well-studied in the literature, in a field known as _compressed sensing_ [27]. In fact, the geometric structures that make compressed sensing easier have also been examined in the analysis of learning tasks to some extent, e.g. [9]. Novel data acquisition sensors from compressed sensing enable collecting data in a randomly compressed form, alleviating the need to select and discard significant fractions of it during pre-processing [28]. In this work we only assume access to an already compressed version of the original data and never need the original. Yet, the guarantees we provide are relative to the original high-dimensional parameters. To the best of our knowledge, distance-based metric learning has so far eluded a systematic study under compression. This paper aims to lessen this gap between theory and practice, to quantify the extra loss we suffer from the random compression, and to identify the conditions that reduce that loss.
## 5 Conclusions and Future Work
We considered Mahalanobis metric learning when working with a randomly compressed version of the data. We derived high-probability theoretical guarantees for its generalisation error, as well as for its excess empirical error, under Gaussian random projection.
We showed theoretically that both quantities are unaffected by the ambient dimension, and instead depend on the stable dimension of the data support. We supported these findings with experiments on both synthetic and benchmark data sets, in conjunction with Nearest Neighbour classification, using its empirical performance to evaluate the learnt metric. In this work we only considered properties of the support of the data. Future work may focus on effects from other distributional traits. This may be particularly useful in settings where the covariance of the distribution is far from isotropic, and the data support is only bounded with high probability. Related work has been done for quadratic classifiers in [26], which showed that the effective rank of the covariance matrix (a measure of ID) affects the generalisation error. The second-moment matrix is usually unknown, so it would be insightful to see how metric learning can automatically adapt to particular structure in that matrix. Another possible extension is to study the setting where each compressed instance is perturbed by random noise. Metric learning under noisy regimes has already been examined, e.g. in [29], but only in the ambient space. Considering the effect of noise on metric learning under compression may also be of interest in many real-world settings.
2309.14884
To Do or Not to Do: Semantics and Patterns for Do Activities in UML PSSM State Machines
State machines are used in engineering many types of software-intensive systems. UML State Machines extend simple finite state machines with powerful constructs. Among the many extensions, there is one seemingly simple and innocent language construct that fundamentally changes state machines' reactive model of computation: doActivity behaviors. DoActivity behaviors describe behavior that is executed independently from the state machine once entered in a given state, typically modeling complex computation or communication as background tasks. However, the UML specification or textbooks are vague about how the doActivity behavior construct should be appropriately used. This lack of guidance is a severe issue as, when improperly used, doActivities can cause concurrent, non-deterministic bugs that are especially challenging to find and could ruin a seemingly correct software design. The Precise Semantics of UML State Machines (PSSM) specification introduced detailed operational semantics for state machines. To the best of our knowledge, there is no rigorous review yet of doActivity's semantics as specified in PSSM. We analyzed the semantics by collecting evidence from cross-checking the text of the specification, its semantic model and executable test cases, and the simulators supporting PSSM. We synthesized insights about subtle details and emergent behaviors relevant to tool developers and advanced modelers. We reported inconsistencies and missing clarifications in more than 20 issues to the standardization committee. Based on these insights, we studied 11 patterns for doActivities detailing the consequences of using a doActivity in a given situation and discussing countermeasures or alternative design choices. We hope that our analysis of the semantics and the patterns help vendors develop conformant simulators or verification tools and engineers design better state machine models.
Márton Elekes, Vince Molnár, Zoltán Micskei
2023-09-26T12:30:51Z
http://arxiv.org/abs/2309.14884v3
# To Do or Not to Do: Semantics and Patterns for Do Activities in UML PSSM State Machines
###### Abstract
State machines are used ubiquitously in engineering software-intensive systems. UML State Machines extend simple finite state machines with powerful constructs. Among the many extensions, there is one seemingly simple and innocent language construct that fundamentally changes state machines' reactive model of computation: doActivity behaviors. DoActivity behaviors describe behavior that is executed independently from the state machine once entered in a given state, typically modeling complex computation or communication as background tasks. However, the UML specification or textbooks are vague about how the doActivity behavior construct should be appropriately used. This lack of guidance is a severe issue as, when improperly used, doActivities can cause concurrent, non-deterministic bugs that are especially challenging to find and could ruin a seemingly correct software design. The Precise Semantics of UML State Machines (PSSM) specification introduced detailed operational semantics for state machines. To the best of our knowledge, there is no rigorous review yet of doActivity's semantics as specified in PSSM. We analyzed the semantics by collecting evidence from cross-checking the text of the specification, its semantic model and executable test cases, and the simulators supporting PSSM. We synthesized insights about subtle details and emergent behaviors relevant to tool developers and advanced modelers. We reported inconsistencies and missing clarifications in more than 20 issues to the standardization committee. Based on these insights, we studied 11 patterns for doActivities detailing the consequences of using a doActivity in a given situation and discussing countermeasures or alternative design choices. We hope that our analysis of the semantics and the patterns help vendors develop conformant simulators or verification tools and engineers design better state machine models.
UML, PSSM, state machine, semantics, concurrency, pattern.
## 1 Introduction
State machines are used ubiquitously in engineering software-intensive systems [1, 2, 3]. There are numerous state machine variants starting from Harel's statecharts [4] to SCXML [5]. The Unified Modeling Language (UML) [6] introduced a state machine variant specifically targeting software design, which evolved significantly in the last decades with each subsequent release. New language elements were added, and the semantics of state machines was refined, especially with the publication of the Precise Semantics of UML State Machines (PSSM) specification [7]. UML State Machines extend simple finite state machines with powerful constructs that help to design complex software systems. Composite and orthogonal states introduce hierarchy and concurrency; entry/exit behaviors and transition effects make it possible to describe detailed behavior. However, there is one seemingly simple and innocent language construct that fundamentally changes state machines' reactive model of computation: _doActivity behaviors_. DoActivity behaviors describe behavior that is executed independently from the state machine once entered in a given state, typically used to model _"computation or continuous activity that take time to complete and that may be interrupted by event"_[8].
DoActivity behaviors are especially significant if state machines are used to model detailed, _executable behavior_ that can be later used for simulation [9], code generation [10] or verification [11]. DoActivity behaviors are preferred [12] to express long-running tasks that otherwise would block the event processing of the state machine. **Motivation** However, the UML specification or textbooks are vague about how the doActivity behavior construct should be properly used. This lack of guidance is a severe issue, as a doActivity behavior is a powerful construct that can cause internal and external effects even when the state machine is in a stable configuration. When improperly used, doActivities can introduce the worst problems: concurrent, non-deterministic bugs that are especially challenging to find and could ruin a seemingly correct software design. The Precise Semantics of UML State Machines (PSSM) specification offered a detailed operational semantics and execution model to answer numerous questions about the semantics of state machines. Publishing PSSM was a massive leap towards executable, precise semantics that could be the solid basis for simulators or code generators. However, the specification is more oriented toward tool developers; "everyday" model users have a hard time grasping the big picture from the detailed operational rules and the subtle interactions of the language elements [13]. There are numerous academic works on the informal and formal semantics of state machines [14, 15, 16]. However, most of them are either for older versions of the specification (pre-PSSM) or skip the semantics of doActivities. To the best of our knowledge, there is no rigorous review of the newest, precise semantics of UML State Machines. Moreover, there are no clear guidelines on how to use or not use doActivity behaviors in state machines to avoid serious errors that are later nearly impossible to catch with simulation or testing. Our goal is to provide an analysis of PSSM-based semantics targeted at advanced model users (software engineers and researchers), and patterns to help design effective and correct state machines using doActivities. **Method** Following our previously recommended method for assessing modeling language semantics [13], we specifically focused on doActivity behaviors in this paper. We collected evidence [17] by reviewing and cross-checking the text of the specification, its semantic model and executable test cases, and the simulators supporting PSSM (Eclipse Moka and Cameo Simulation Toolkit). We investigated available modeling guidelines [12], industrial models [18], and repositories of open-source models [19]. **Contributions** By synthesizing these sources and insights, we made the following contributions in this paper. **Semantics**: We compiled an analysis of doActivity behaviors' operational semantics from the fUML and PSSM specifications, highlighting previously unreported subtle details and emergent behaviors relevant to tool developers and advanced modelers. The identified challenging or ambiguous parts are backed by evidence from the normative text of the specification, PSSM's test cases, or test executions in simulators (Section 3). **Patterns**: Based on these insights, we systematically investigated 11 patterns for doActivities, detailing the consequences of using a doActivity in a given situation and recommending countermeasures or alternative design choices.
The patterns are modular, take into account potential combinations of elements, and are built up from simple states to state machines with composite states having doActivities in orthogonal regions (Section 4). The semantic insights emphasize that doActivities _fundamentally change the reactive nature of state machines_ by performing externally visible actions when the state machine is waiting, or by accepting previously received events without initiating a new run-to-completion step. Moreover, as doActivities execute independently on their own thread, they introduce _concurrency and non-deterministic choices_ into every state machine that uses them. Therefore, engineers must always consider alternative traces and concurrency issues when adding a doActivity to a state machine. Our analysis and the described patterns help engineers design better state machine models, and vendors develop conformant simulators or verification tools. We reported the main issues found in the PSSM specification to OMG1, and based on the findings, we plan to improve the future SysMLv2 modeling language2.
Footnote 1: [https://issues.omg.org/issues/spec/PSSM/1.0](https://issues.omg.org/issues/spec/PSSM/1.0)
Footnote 2: The second author is a member of the committee responsible for the SysMLv2 specifications and co-author of [20].
**Structure** Section 2 illustrates state machine semantics and typical questions with doActivities. Section 3 presents a deep dive into PSSM semantics structured around the lifecycle of a doActivity. Section 4 recommends patterns for when and when not to use doActivities. The patterns are described in a practical format, without going into the details of the operational semantics described in the previous section. Finally, Section 5 summarizes the related work, and Section 6 concludes the paper.
## 2 Illustrating state machine semantics
This section introduces an example state machine created to illustrate the subtle semantic questions and possible issues when using doActivity behaviors. Fig. 1 shows a state machine describing the behavior of a measuring component of a fictitious system, inspired by the industrial modeling practices of the Thirty Meter Telescope (TMT) [18, 21].
Fig. 1: A state machine modeling the behavior of a fictitious measuring component. Notes show the names of elements.
The example illustrates the main patterns for using doActivities in engineering practice [12, 18]: doActivity behaviors could represent long-running computations, or could communicate and wait for external events that would otherwise block processing further events (i.e., placing an accept event action in an entry behavior or a transition effect could block the whole RTC step). The example is extended by outgoing log events to make the execution observable. The challenging part of UML state machine semantics is that there could be many inherent _nondeterministic choices_ and _concurrent executions_3 even when a simple input sequence is received. Simulation tools usually produce one execution trace for the given input: Fig. 2 shows a sequence diagram produced by a simulator tool depicting how the current state of the component changes and what signals are sent when receiving the turnOn, measure event sequence. Circled numbers denote the steps in the sequence diagram. Let us follow this seemingly simple execution!
Footnote 3: According to UML: "concurrent execution simply means that there is no required order in which the nodes must be executed; a conforming execution may execute the nodes sequentially in either order or may execute them in parallel" [6, 15.2.3.2]
1. The state machine alternates between stable state configurations and _run-to-completion steps_ (RTC steps, i.e., no other events are dispatched until the processing of the current one is completed). When the component starts, the initial RTC step begins, in which firing transition \(\mathsf{T}_{0}\) leaves the initial pseudostate and the component arrives at the Standby state. This is the initial _stable state configuration_. 2. The state machine has an associated _event pool_ from which events (e.g., signal receptions) are dispatched; then matching transitions are collected and triggered. 3. After receiving a turnOn signal (step 1), the state machine starts its first RTC step, traversing several transitions forming a _compound transition_ [6, 14.2.3.8.4]. 4. First, the source state (Standby) is exited, then the effect of the transition is executed (not present here), and finally, the target state (Active) is entered. That state is a _composite state_ and comprises two _orthogonal regions_, which are entered and executed concurrently. 5. After the Active state is entered, its doActivity behavior is started. The initial substates, Wait1 and Wait2, are entered concurrently (i.e., their order is not defined) and their entry behaviors are executed (steps 2 and 3). At this point the RTC step initiated by turnOn is finished. 6. The doActivity behavior (step 4) executes concurrently with the other behaviors of state Active, and may continue even after the RTC step. 7. Receiving a measure signal (step 5) starts a new RTC step and triggers firing the \(\mathsf{T}_{\mathsf{m1}}\) and \(\mathsf{T}_{\mathsf{m2}}\) transitions in both regions concurrently. 8. The Wait states are exited and their exit behaviors are executed (steps 6 and 7), then the transitions are traversed with their effects executed (steps 8 and 9), and finally the respective Measure states are entered, i.e., their entry behaviors are executed (steps 10 and 11) and then their doActivities are started. The RTC step completes, and a new stable configuration is reached (although the doActivities are still executing). 9. Note that Fig. 2 depicts one possible ordering, but there are numerous potential interleavings. In the following, we describe the steps in each region separately. 10. The doActivity of MeasureTemp performs the measurement (step 12), waits for a confirmation (the tempOk signal, step 14), and then sends tempCompleted (step 16) to the state machine. 11. The doActivity of MeasureGravity is similar (step 13), but instead of self-signaling it models the behavior differently by using a _completion transition_, which is triggered by the _completion event_ of its source state. The MeasureGravity state generates a completion event after all of its internal activities, i.e., here its entry and doActivity behaviors, have completed. The doActivity in this case waits until confirmation of the measurement, a gravityOk signal, is received (step 15); then MeasureGravity is completed and it emits a completion event (not depicted). Completion events have priority over regular events. 12. tempCompleted and the completion event trigger exiting the respective Measure states and executing their exit behaviors (steps 17 and 19). In each region, the respective Wait state is then re-entered and its entry behavior is executed (steps 18 and 20).
Fig. 2: Sequence diagram for simulating the state machine in Fig. 1 using Cameo Simulation Toolkit (CST). Input events: turnOn, measure.
Fig. 3 summarizes the execution steps reconstructed from the sequence diagram (Fig. 2) in a simplified form: alternating RTC steps and stable configurations, with the actual content of the event pool and the traversed transitions.
Fig. 3: Simplified execution steps based on the trace in Fig. 2. Notation: gray columns: RTC steps; white columns: stable configurations; boldface in the event pool: dispatched event. (Trivial RTC steps where CEs are discarded or events are forwarded to the doActivity are omitted.)
For simplicity, the figure omits those trivial RTC steps that dispatch and discard completion events in the absence of completion transitions. This view can illustrate the (significant) changes of the event pool, e.g., that completion events are put to the front of the pool, or that doActivities can add or handle events from the event pool even during stable configurations (see SC2). However, this figure again captures one possible execution of one simulated trace (Fig. 2). We made arbitrary choices about the otherwise unobservable execution details that are not specified even in the PSSM test cases (e.g., we cannot observe the exact start or finish of the doActivity in the simulator). This state machine seems unsophisticated, but even for such an example it is complicated to understand the exact order and timing of concurrent executions, or to identify all valid alternative traces. Starting from the steps where the doActivity and the orthogonal regions send and receive signals, following the executions depicted in the sequence diagram is quite convoluted. Numerous questions could arise to clarify the relations between the doActivity and the state machine, such as the following. * A doActivity starts concurrently with other behaviors in a composite state. When exactly? * What happens if a doActivity and the state machine wait for the same event? Are there priorities? * A doActivity is aborted if its containing state is exited. Can it be aborted even before executing any behavior? The UML specification describes general semantic rules, but such specific cases are sometimes hard to answer. The PSSM specification attempts to answer such questions.
## 3 Deep dive into PSSM semantics
This section presents the subtle details of the semantics defined in PSSM, organized according to the lifecycle of a doActivity (starting, executing, event handling, and finalization). Understanding doActivity semantics is especially challenging, as both the event-based reactive state machine semantics from PSSM and the data-flow-based activity semantics from fUML [22] have to be considered. Following our previously defined method [13], we synthesized these insights by reviewing the specification, cross-checking the definition of the operational semantics with the test cases, executing the test cases in Cameo Simulation Toolkit, and examining the source code and debugger executions of the Papyrus Moka4 [23] reference implementation.
Footnote 4: [https://marketplace.eclipse.org/content/papyrus-moka](https://marketplace.eclipse.org/content/papyrus-moka)
We highlight the parts that necessitate careful consideration when developing state machine models (e.g., concurrent behaviors or non-deterministic choices in priorities), and inconsistencies in the specification artifacts that could cause misunderstanding between engineers or tool vendors. The supplementary material of the paper [17] contains detailed artifacts collected from the specifications and simulator executions (e.g., tables, screenshots, and models).
### _Overview of the operational semantics in PSSM_
The fUML specification defines an operational semantics for activities and actions, which is extended for state machines in PSSM.
These specifications define an execution model for a subset of the UML language. The execution model includes an abstract execution engine (fUML classes Locus and Executor), classes for event handling (e.g., SM_Object and EventAccepter), and semantic visitors for each supported syntax class (e.g., StateActivation for State). Operations of the semantic visitors encode the semantics of the given element (e.g., the enter operation of StateActivation for entering a state). Fig. 4 captures the most important semantic classes and their connections relevant to doActivities. The following paragraphs will introduce their main roles, and the subsequent subsections will go into more detail. Note that, to ease understanding, several classes and associations are left out (see the detailed instance model in the supplementary material).
Fig. 4: Overview of the semantic classes defined in fUML and PSSM for state machines and doActivities. _Notation_: rectangles: UML (meta)classes; other vertices: implicit concepts in PSSM; solid lines: associations; dashed lines: dependencies.
_Syntax_: Syntax classes are classes from the UML specification. They include classes like Region and State for state machines (SM Syntax box), or AcceptEventAction and Activity for activities (DoAct Syntax box). _Execution & Visitors_: Each syntax class has an appropriate semantic visitor (*Activation classes). The visitors of the Activity and StateMachine top-level behavior classes are called ActivityExecution and StateMachineExecution, which are responsible for starting the respective behavior and serve as a container for the other visitors. The execution model includes classes for semantic concepts that are not directly represented in a UML model. For example, StateMachineConfiguration describes the active state configuration of a state machine in a recursive structure, while Token is an explicit representation of the token game in activities. ActivityNodeActivationGroup groups nodes activated or suspended together (top-level nodes form a group, and further groups are created for structured activities). _SM Semantic Objects_: These classes represent the active objects whose behavior is executed. An SM_Object is an active object with a state machine as its classifier behavior5. It has operations for sending and receiving event occurrences6. Once an instance of SM_Object is started, an SM_ObjectActivation is created for managing event handling. The superclass from fUML (ObjectActivation) has an event pool: a pool of EventOccurrences that the object has received. SM_ObjectActivation extends this with a separate deferred event pool. EventAccepters can be registered to receive specific event occurrences. Dispatching events from the event pool is handled by a dispatch loop7. Once an event occurrence is dispatched, the registered accepters are checked to see whether they match the current event. If more than one accepter matches, then one of them is chosen according to a predefined choice strategy (which can be FirstChoiceStrategy or even a non-deterministic choice). Contrary to activities, SM_ObjectActivation registers _only one_ StateMachineEventAccepter for the whole state machine, which examines the actual state configuration and handles transition priorities and conflicts.
Footnote 5: To simplify the description, we will skip the case where a standalone behavior is directly executed without a context object.
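To make the dispatch mechanism tangible, the following toy sketch (ours, heavily simplified; it is not a conformant implementation of the execution model, and the class and event names are illustrative) mimics a single dispatch step with registered accepters and a pluggable choice strategy:

```python
import random

class EventAccepter:
    """Simplified stand-in for the EventAccepter semantic class."""
    def __init__(self, owner, matches):
        self.owner, self.matches = owner, matches

def dispatch_next_event(pool, accepters, choose=random.choice):
    """Toy dispatch step: pop an event, collect matching accepters, pick one."""
    event = pool.pop(0)
    matching = [a for a in accepters if a.matches(event)]
    if not matching:
        return f"{event!r} discarded"            # no accepter waits for it
    return f"{event!r} accepted by {choose(matching).owner}"

pool = ["tempOk"]
accepters = [
    EventAccepter("state machine", lambda e: e in ("measure", "tempOk")),
    EventAccepter("doActivity", lambda e: e == "tempOk"),
]
print(dispatch_next_event(pool, accepters))      # the winner is a (non-deterministic) choice
```

The same toy model already hints at the conflicts discussed later: when a doActivity's accepter and the state machine's accepter both match, the outcome depends on the choice strategy.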
_DoActivity Semantic Objects_: PSSM introduces doActivity-specific specializations of fUML semantic classes. DoActivityContextObject references the context object of the state machine (SM_Object) and can access its structural features. As event occurrences cannot be sent directly to the doActivity [7, 8.5.6], handling events is a two-phase process. If the doActivity wants to wait for an event, it first registers (typically) an AcceptEventActionEventAccepter with its DoActivityContextObjectActivation. Next, it encapsulates the original accepter in a DoActivityExecutionEventAccepter and registers the encapsulating accepter with the state machine. If the state machine dispatches an event that matches the doActivity's accepter, then the encapsulating accepter adds the event occurrence to the event pool of the doActivity. Once that event occurrence is dispatched asynchronously in the context of the doActivity, the original accepter can handle it in an RTC step of the doActivity [22, 8.8.1]. _Tasks & Threading_: The fUML and PSSM specifications define a very generic, concurrent execution model. However, it does not contain an explicit definition of execution threads; only some partial ordering constraints are defined (e.g., an RTC step is not finished until the entry behaviors are executed). An execution tool does not need to execute all possible concurrent executions in parallel; as long as it produces a legal trace satisfying the partial orderings, the tool conforms to the specification. The specification explicitly states that each active object, each doActivity, and the transmission of each EventOccurrence runs asynchronously on its own thread. These executions can be fully parallel, i.e., the actions of a DoActivity can be executed in parallel with the actions of the state machine's transition effect8. Actions not contained in isolated regions can see intermediate results of other activities [22, 8.10.1], and even simple actions might not be atomic [6, 16.2.3.1].
Footnote 8: As there is no parallel implementation available, we could not cross-check this. For example, Moka defines meta-tasks for event sending and event dispatching, and schedules these tasks sequentially. However, the traces of PSSM tests Transition 019 and Entering 011 suggest such concurrency.
**Insights** Threading in fUML/PSSM is complex: * A simple state machine has one thread of execution, triggered by dispatching a new event. * However, an activity used in a transition effect or in an entry or exit behavior could contain concurrent actions. * Moreover, if there is an orthogonal region, then activities inside the different regions are concurrent and can interleave with each other (even the exit-effect-entry steps in a transition firing are not atomic). * Each doActivity has its own thread of execution, independent from the RTC step of its state machine.
### _Starting a doActivity behavior_
A doActivity commences execution when the State is entered and the _entry_ Behavior has completed [6, 14.2.3.4.3]. The important details are the following: * what exactly "starting a doActivity" means, * what steps are finished in the state machine's RTC step, * what behaviors are concurrent with the doActivity. The UML specification explicitly states that the execution of a doActivity is concurrent with the entry Behaviors of substates.
The fUML and PSSM specifications refine and clarify what "commencing execution" means. When a classifier behavior is created and started, its execution does not run immediately. Instead, during startBehavior, the object activation of the doActivity registers a specific ClassifierBehaviorInvocationEventAccepter and adds an InvocationEventOccurrence to its event pool9. These steps are performed in the RTC step of the state machine. Later, this invocation event occurrence is dispatched asynchronously and handled in a so-called initial RTC step of the doActivity, in which the elements of the activity are activated and fired.
Footnote 9: This mechanism was added in fUML 1.2, and its rationale is explained in the issue fUML12-35.
Fig. 6 illustrates starting a doActivity by showing a timeline of a possible order of state changes and execution steps consistent with the sequence diagram observed during simulation (Fig. 2). The figure shows the event pool, the active states in each region, and their associated behaviors being executed. The example points out two challenges when reasoning about even one possible trace of a state machine. 1. _Lack of standard notation_: We used a format similar to timing diagrams, but even that needed to be modified, e.g., to depict that entering a state takes time while the entry is executed. Having such a detailed view is essential to understand the relations of each step. 2. _Observability issues_: The figure features several arbitrary choices that cannot be derived from simulation results [13], e.g., the exact start and end of steps in the activities, especially when the doActivity finishes. Fig. 6 only depicts one possible ordering for a given trace, but due to concurrency, numerous steps could interleave. We recommend thinking about partial orders and alternative traces for an RTC step. Fig. 5 focuses on the ordering constraints of the concurrent steps executed during RTC1: even starting the doActivity can be in parallel with entering each of the sub-regions. **Insights** Nothing is guaranteed to be executed from the doActivity during the state machine's RTC step, and even starting the doActivity is concurrent with behaviors from substates. There is no guarantee that anything will ever be executed from the started doActivity (see Section 3.5).
Fig. 5: Ordering constraints of concurrent steps during RTC1 from Fig. 2. The doActivity is only started, but its execution takes place asynchronously on its own thread. Note that these actions are not atomic; their internal steps can interleave.
Fig. 6: Detailed steps for entering the Active state. The prepareInstruments doActivity is started, then its behavior is asynchronously executed. Circled numbers in orange represent events observed during simulation (Fig. 2).
### _Executing a doActivity behavior_
As discussed in Sections 3.1 and 3.2, a doActivity executes asynchronously from the state machine. This execution can happen concurrently with state machine RTC steps (the one in which it was started and possibly later steps as well) or even between RTC steps, when the state machine is waiting. If executed during a state machine RTC step, the doActivity can interleave10 with any other behaviors that may be associated with the State [6, 14.2.3.4].
Footnote 10: Potential interleaving is exemplified by PSSM test cases Transition 017 or Deferred 006-C. Note that any part of the doActivity can interleave, and not just the parts that come after waiting for an event, as some PSSM tests might suggest.
Specifically, the doActivity runs concurrently with:
* other doActivities in any active states (either parent states, nested substates, or other orthogonal regions), * entry/exit behaviors and transition effects in other regions or nested substates, * internal transitions in the doActivity's state, other regions, parent states, or nested substates. Moreover, the doActivity can send signals to external components during execution. If this happens between RTC steps, then the state machine performs externally visible actions in what should be stable configurations. **Insights** DoActivity runs during and between RTC steps. * Parts of a doActivity can interleave with any running behaviors of the state machine or other doActivities. * Externally visible actions in a doActivity can continue execution after the RTC step of the state machine.
### _Handling events in a doActivity behavior_
Since doActivities can accept and wait for events, the exact timing, ordering, and priorities are important to understand. **Timing of accepters** A state machine registers a single event accepter as soon as it starts. A doActivity registers an event accepter only after it fires an accept event action during its execution. The fact that the doActivity can only receive events from the state machine's event pool alters its behavior compared to a standalone activity. 1) The doActivity may receive an event that was pending in the state machine's event pool even before the start of the doActivity. 2) If there are long-running actions before the accept action, then a doActivity can "miss" an event it could handle, because the state machine may discard the event due to the lack of a matching accepter (see the sketch below). **Conflict and priorities** Without doActivities, there is only a single StateMachineEventAccepter responsible for handling events. Using doActivities that wait for events changes this simple situation: doActivities can register competing event accepters; doActivities in composite states can each register their accepters at the same time; and even one doActivity can register several accepters if it contains multiple concurrent AcceptEventActions. These multiple accepters can wait for the same event, resulting in a _conflict_. One might expect a clear conflict resolution strategy similar to that of conflicting transitions, where a substate's behavior always overrides the parent state's behavior. However, from the semantics of the dispatchNextEvent operation of ObjectActivation, it seems to us that there are _no priorities_ between these accepters, and one accepter is chosen _non-deterministically11_. There are no explicit PSSM test cases exemplifying this, but simulation in Moka [17] confirms competing accepters12.
Footnote 11: More precisely, the choice is made according to the current choice strategy. Even if FirstChoiceStrategy is used, the result is hard to predict, as the order in which concurrent doActivities register accepters is arbitrary. Moreover, the state machine's accepter gets removed and re-registered after each event dispatch; thus, it can be the very last registered accepter in some cases.
Footnote 12: See the supplementary Moka screenshots [17].
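The "missed event" issue from the timing discussion above can be replayed with a small toy timeline (ours; the sequential simulation and all names are illustrative, and we assume a state where the state machine's own accepter does not match the event either):

```python
# The doActivity registers its accepter only when its AcceptEventAction fires,
# so an event dispatched earlier finds no matching accepter and is discarded.
pool, accepters = ["tempOk"], []       # doActivity still busy before its accept action

event = pool.pop(0)                    # dispatch happens while no accepter matches
if event in accepters:
    print(f"{event} accepted")
else:
    print(f"{event} discarded: no matching accepter yet")

accepters.append("tempOk")             # the accept event action fires only now: too late
```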
**Defer** Some conflicts can be resolved by deferring, as suggested by the specification13. The defer keyword can be used to "capture" an event in a given state, place it in the deferred pool, and only release it once the state deferring the event is left. A state machine can defer an event if the event matches a deferrable trigger of an active state and no overriding transition consumes it (see the priority rules below).
Footnote 13: "… in general, an executing doActivity Behavior will compete with the executing StateMachine that invoked it to accept EventOccurrences dispatched from the same eventPool. Nevertheless, in some situations it is necessary to ensure that a doActivity is able to accept certain EventOccurrences instead of the StateMachine. To allow this, a deferred Trigger should be used on the State that owns the doActivity, in which case any EventOccurrences deferred while the StateMachine is in that State may be consumed by the executing doActivity." [7, 8.5.6]
Priorities between transitions and defer presumably follow the same principles as firing priorities between transitions: transitions from direct and indirect substates have priority over transitions from containing states, while other transitions have no priority difference [6, 14.2.3.9.4]. In the presence of doActivities, the above is extended with two rules [7, 8.5.6]. _1) If the state machine is about to defer an event for which a doActivity has also registered an accepter, the state machine is not allowed to defer it, and the event can be accepted by the doActivity._ To analyze priorities when using defer, Fig. 7 shows transitions, all triggered by event e, in various positions relative to the deferring state Sa1.
Fig. 7: DoActivity and transitions triggered by the same event **e** with event deferral in state Sa1.
Consider that the state machine is in the S[Sa1, Sb1] state configuration. Assume that each transition marked with a note exists in a separate state machine on its own (so that it is only in conflict with the defer and not with the other transitions). There are two cases: A) \(\mathsf{T}_{-1}\) has lower priority, and \(\mathsf{T}_{\perp}\), in an orthogonal region (other than the deferring state), has no order of priority. Thus the state machine is about to defer but is not allowed to, due to the waiting doActivity accepter. The result is that the state machine's event accepter does not match the event, and the doActivity can accept it. B) \(\mathsf{T}_{0}\) is an overriding transition, and \(\mathsf{T}_{1}\), originating from a nested state, has higher priority. Therefore, no defer occurs and the state machine matches the event. The result is that both the state machine and the doActivity match the event, and therefore (possibly against the modeler's intention) one of the accepters is chosen non-deterministically. In fact, the presence of an unguarded \(\mathsf{T}_{0}\) renders the defer completely useless. With a guarded \(\mathsf{T}_{0}\) or any transition like \(\mathsf{T}_{1}\), the defer will be ineffective whenever these transitions are enabled. To summarize, defer will give priority to the doActivity only in case A)14. Note, however, that there is no mechanism to give priority to the state machine's transitions.
Footnote 14: We were not able to cross-check this conclusion using PSSM tests, as there is no such specific example. The closest tests are Deferred 002 and Deferred 003, but without doActivities.
_2) If the state machine has already deferred an event for which the doActivity registers an accepter, then that event can be accepted by the doActivity directly from the deferred event pool [7, 8.5.6]._ However, it is not discussed in PSSM when this event acceptance happens with respect to the state machine (during an RTC step or between RTC steps in stable configurations). On the one hand, both the semantic description of PSSM and the code of Moka confirm that event accepters in a doActivity can remove events from the deferred pool directly when registering, without considering the state machine's status.
As a doActivity has an independent thread of execution, removing an event from the deferred pool can happen between RTC steps without any reaction in the state machine (see test Deferred 006-B, where there is no respective RTC step for the state machine). On the other hand, the state machine can perform an RTC step while the doActivity is about to register an accepter for an event in the deferred pool. Is the doActivity allowed to remove the event in this case? It is hard to give a definite answer, as the locking mechanism for event pools is not described in the standards. However, Moka, for example, implements the deferred pool as a simple List without any mutual exclusion.

**Insights**: Event handling in doActivities is complicated. * DoActivities register accepters during execution and not when starting; thus, some events can be "missed". * Event accepters for the state machine and doActivities _always_ compete for events. In conflicting situations, a non-deterministic choice is made unless defer is used (but even defer resolves only some cases). * There is no mechanism to give priority to the state machine's transitions over the doActivity. * DoActivity can remove events from the deferred pool even during RTC steps and between RTC steps.

### _Finalization of a doActivity behavior_

The execution of a doActivity can be finalized in two ways: * _Completion_: the doActivity completes its execution, * _Destruction_: its containing state is exited, and the doActivity is aborted.

In the completion case, the execution finishes naturally (completing all node activations or executing a final node), and a completion event may be generated. Note that PSSM contains a specific clause that after a doActivity RTC step, if there are no more event accepters registered, then the doActivity completes and notifies its SM_Object [7, 8.5.6]. Regarding the destruction case, as the doActivity starts its execution asynchronously, the doActivity may not even execute any of its actions before aborting. This is illustrated in PSSM test cases Event 017-B and Terminate 002\({}^{15}\).

Footnote 15: Note that the Behavior 003-A test case shows that the doActivity is only aborted after waiting for an event, but we think this is an error and the test case misses an alternative trace.

Moreover, as UML explicitly states that actions are not atomic, it would be important to see whether an abort can happen during the execution of an action (e.g., for long-running actions or for modifying structural features). However, there are no such PSSM test cases, and it is not possible to validate this hypothesis with currently available non-parallel implementations.

Footnote 16: If prepareInstruments contains an accept waiting for an event, then placing it in the entry is definitely a bad practice. The entry behavior is executed as part of the state machine's RTC step, which blocks dispatching further events. Thus it will cause a deadlock, according to our understanding. However, Cameo Simulation Toolkit waits for and dispatches a new event even in an entry, while Moka skips the AcceptEventAction and completes the activity.

The doActivity is always aborted before executing the exit behavior of its state. The destroy call responsible for the abortion is likely a synchronous call ("Exiting a StateActivation involves the following sequential steps" [7, 8.5.5]). But properly implementing this synchronization is not trivial.

**Insights**: A doActivity can be destroyed any time during its execution, possibly even during an action. * DoActivity can be aborted before executing anything. * DoActivity must be fully aborted before the exit behavior is executed, which needs synchronization.

### _Takeaway messages_

As this section explained in detail, having a doActivity in a state machine has fundamental consequences on the computation model applied, which is usually not evident for engineers designing and modeling with state machines. Taking into account these insights about the semantics, we might wonder whether there are any issues with the state machine of Fig. 1. For example, once we know that a doActivity can be abruptly aborted, can we be sure that prepareInstruments finishes preparations before a measurement? How should we refactor it? Would it be better to place this activity in the entry behavior?\({}^{16}\) The next section answers such questions by proposing practical patterns for using doActivities in certain modeling situations.

Fig. 7: DoActivity and transitions triggered by the same event **e** with event deferral in state Sa1.

**Key insights** Using doActivities fundamentally changes the reactive nature of state machines. 1. doActivities can compete with the state machine for events (without doActivities, there is always only one event accepter for the state machine). 2. doActivities can perform externally visible actions between the state machine's RTC steps. 3. doActivities can accept previously received and deferred events without initiating an RTC step.

## 4 DoActivity patterns

We systematically collected patterns describing how doActivities can - but not necessarily should - be used in state machines w.r.t. other modeling elements (e.g., from using a doActivity in a simple state to multiple doActivities in regions). They are not meant to be design patterns, but rather patterns used in static analysis. For each pattern, we list possible intentions why a modeler could use a doActivity in the given context. Based on insights from previous sections, we highlight consequences and issues, then discuss potential countermeasures and advice. The patterns are modular, and only the more significant issues are repeated. Therefore, advice should be combined if more patterns are applicable.

### _DoActivity in a simple state_

#### 4.1.1 Basic case

**Pattern:** In Fig. 8, a simple state has a doActivity behavior (without an accept event action). The state has an outgoing transition (an external transition with an explicit trigger).

**Possible intention:** In the state there is a long-running activity representing an idle task or an interruptible, non-blocking background process, which can run while in \(\mathsf{S1}\).

**Potential issues:** It is important to note that this pattern means the implication "while in \(\mathsf{S1}\), the doActivity can run" and not the reverse "while the doActivity runs, \(\mathsf{S1}\) is active". In fact, the doActivity is an independent unit of execution with its own lifecycle. I1. There is no guarantee when it will actually perform its actions. We only know that it starts the execution after the state's entry behavior has completed (if any). I2. The doActivity is aborted once the state is exiting, and we cannot be sure what part of the execution, i.e., which actions, have finished before the abort. In the worst case, the doActivity might not have performed anything. I3. There is no guarantee about the atomicity of actions in doActivities, i.e., it is possible that the doActivity is aborted during \(\mathsf{task1}\) and only parts of it have finished.
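The lifecycle described above can be mimicked with a few lines of Python. This is our own simplification under the assumption of cooperative, per-action abort checks (UML does not even guarantee atomicity within one action), not normative PSSM behavior.

```python
# Illustrative sketch of a doActivity lifecycle: started asynchronously
# after the entry behavior, aborted when the state is exited - possibly
# before any action has run.
import threading, time

abort = threading.Event()

def do_activity():
    for action in ("task1-part1", "task1-part2", "task1-part3"):
        if abort.is_set():   # cooperative abort check between actions
            return
        time.sleep(0.01)     # stand-in for a long-running action
        print("finished", action)

t = threading.Thread(target=do_activity)
t.start()                    # starts only after the entry behavior (if any)
abort.set()                  # state is exited: the doActivity is aborted
t.join()                     # exit behavior must wait for the full abort
print("exit behavior runs only after the doActivity is fully aborted")
```

Depending on scheduling, zero, some, or all parts complete before the abort, which is exactly the non-determinism the insights above warn about.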
**Discussion:** If the intention is to "wrap" the execution of the doActivity in a state, or the intention was to wait for the doActivity to finish and then wait for the arrival of an event, the model should explicitly enforce this by either using a _completion transition_ (Section 4.1.2) or _self-signaling_ (Section 4.1.3). Also note that how abort is implemented is not a precisely defined part of UML. During the development of execution environments, special care should be taken to enforce the rules for abort, e.g., that the execution of the doActivity is fully aborted before running the exit behavior.

#### 4.1.2 DoActivity and completion transition

**Pattern:** In Fig. 9a, a simple state has a doActivity behavior (without an accept event action). The state has an outgoing completion transition to move to the next state (\(\mathsf{S2}\)) once the doActivity has finished. \(\mathsf{S2}\) waits for event \(\mathsf{e1}\) to continue.

**Possible intention:** In the state there is a long-running activity that the modeler wants to be completely finished before continuing on event \(\mathsf{e1}\). Thus, a single completion transition (an outgoing transition without a trigger) is used.

**Potential issues:** A single completion transition fulfills the expectation that the doActivity should be finished before moving to the next state (\(\mathsf{I1}\)-\(\mathsf{I3}\)). I4. In the absence of other outgoing transitions from \(\mathsf{S1}\), the state machine will stay indefinitely in \(\mathsf{S1}\) if the doActivity runs forever. There is no means to abort. I5. If there are incoming events, the state machine will dispatch and discard them due to the lack of an enabled transition. This means an \(\mathsf{e1}\) arriving before an unusually long doActivity completes will be missed.

**Discussion:** Fig. 9b shows a possible solution for the issues above. Explicit outgoing transition(s) (\(\mathsf{e}_{\mathsf{abort}}\)) can be used to abort the doActivity and exit state \(\mathsf{S1}\). Defer is used to collect events (\(\mathsf{e1}\)) that need to be processed after the doActivity finishes (see Section 4.2.2). Based on gray literature\({}^{17}\) and our experience, some engineers suggest avoiding completion transitions, partly because a transition without a trigger might be confusing for users familiar with other state machine variants\({}^{18}\). Self-signaling might be an alternative solution (Section 4.1.3).

Footnote 17: For example: Completion transitions and implicit triggers – [https://www.webel.com.au/node/2651](https://www.webel.com.au/node/2651)

Footnote 18: SCXML uses done.state.id for similar constructs.

Fig. 8: A state with doActivity and transition.

Fig. 9: A doActivity and a completion transition.

#### 4.1.3 DoActivity using self-signaling

**Pattern:** In Fig. 10, a simple state has a doActivity behavior (without an accept event action). The doActivity notifies the state machine about the completion by sending a signal. The state uses \(\mathsf{cont}\) events to continue to the next states. Defer is used to collect events to be processed after the doActivity.

**Possible intention:** The doActivity can signal the completion of its behavior to the state machine by sending signals. Outgoing transitions triggered by these signals are used to leave the state. Self-signals are similar to a completion event, but here the doActivity can use multiple signals (cont*) to differentiate based on the results of its execution.

**Potential issues:** Lack of abort and missed events are solved similarly as in Section 4.1.2.
However, self-signaling introduces other problems. I6. The cont* events will be put into the event pool like any other incoming signal and will be processed later, once they get dispatched. As signals are sent asynchronously, they can be arbitrarily delayed or reordered [22, 8.8.1]. Moreover, the fUML/PSSM execution model does not assume that the communication is reliable, i.e., that signals are never lost or duplicated [22, 2.3].

**Discussion:** Self-signaling has the advantage that all transitions have explicit, named triggers. This could be important in the case of complex doActivities, which can send different signals to trigger different transitions (cont*) [24]. Using a completion transition - if available in the modeling tool - solves I6. Otherwise, such signals should be handled specially to ensure that they are not delayed or lost. In addition to defining internal modeling guidelines, the whole toolchain should guarantee the special behavior of the self-signaling event, e.g., simulators, code generators, test environments or manual implementations.

Fig. 10: A doActivity with self-signaling.

#### 4.1.4 A state with internal transition

**Pattern:** In Fig. 11, a simple state has an internal transition and a doActivity behavior (without an accept event action).

**Possible intention:** A state machine can use internal transitions to react to incoming events without aborting the doActivity. Contrary to self transitions, when firing an internal transition, the state is not exited and re-entered, thus the exit and entry behaviors are not executed, and the doActivity executing a long-running task is not interrupted.

**Potential issues:** The execution of a doActivity might be concurrent with the firing of an internal transition. I7. As there is no guarantee that actions are atomic, the execution of the task1 behavior in the doActivity and the task2 effect in the internal transition can overlap.

**Discussion:** I7 might cause problems if the tasks are not independent, e.g., they use shared variables, send out signals that should not interleave, or the internal transition effect expects that the execution of (some parts of) the doActivity has completed (see I1 and I14 later).

Fig. 11: A state with doActivity and internal transition.

### _DoActivity accepting events_

#### 4.2.1 A doActivity with an accept event action

**Pattern:** In Fig. 12(a), a simple state has a doActivity that includes an accept event action triggered by an event. The state has an outgoing transition triggered by another event. In Fig. 12(b), the same event triggers an outgoing transition.

**Possible intention:** A doActivity can contain accept event actions, which can be used to react to external events, e.g., wait for a reply message during communication.

**Potential issues:** In this case the doActivity's reactive behavior interferes with the reactivity of the state machine. I8. In Fig. 12(a), if the doActivity has not reached the accept event action by the time event e1 is dispatched, the doActivity cannot accept the event, and the event is discarded due to the lack of a matching accepter. I9. In Fig. 12(b), the situation is more complicated when both the state machine and the doActivity compete for the same event e1. Unlike transitions in the state machine, where substates have priority, there is no priority defined between the state machine and its doActivities waiting for the same event. As a consequence, one of them will be non-deterministically chosen. I10. An explicit wait point might suggest that the doActivity can be aborted only when it is waiting for an event\({}^{19}\). However, there is no guarantee that any action in the doActivity is completely executed before an outgoing transition (e.g., e2) aborts the doActivity. (See I1-I3.)

Footnote 19: E.g., PSSM test case Behavior 003-A seems to assume that abort only happens while waiting for events, but test case Event 017-B shows a possible abort before the doActivity execution could be observed.

**Discussion:** In the non-conflicting case (Fig. 12(a)), using defer to delay events for later processing in a doActivity can solve issue I8 (see Section 4.2.2 for details). In the conflicting case (Fig. 12(b)), the specification does not define priority between the state machine and the doActivity. Hence, avoid using the same or overlapping triggering events in doActivities and the state machine.

Fig. 12: A doActivity with an accept event action.

#### 4.2.2 A doActivity with an accept event action and the state defers the event

**Pattern:** In Fig. 13(a), a simple state has a doActivity behavior that includes an accept event action triggered by an event e1. The state defers this event. The state has an outgoing transition triggered by a different event, which targets state S2 with an outgoing transition on e1. A special case is depicted in Fig. 13(b), where state S1 has an outgoing transition on e1 but also defers the same event.

**Possible intention:** If there is no transition triggered by an event and the event would be discarded, defer can be used to avoid discarding and to delay the processing of the event to a later stage, e.g., to an upcoming state or to a later action in a doActivity.

Fig. 13: An accept event action and deferring an event.

**Potential issues:** While defer ought to be a solution to harmonize the reactive behavior of the state machine and the doActivity, it comes with its own caveats. I11. While S1 is active, it defers all incoming e1 events. When a doActivity registers event accepters, it will first check the deferred event pool, even if an event of the same type is present in the normal event pool. These events can differ, e.g., if the events have parameters (e.g., counters), and the doActivity is not guaranteed to get the latest event. I12. While S1 is active, it defers _all_ incoming e1 events. If the doActivity consumes only one or a few events, then exiting S1 releases the remaining deferred e1 events. If the next states (here S2) have transitions triggered by e1, the released e1 events will be dispatched first in S2, before new incoming events. I13. In the special case where events e1 and e2 are the same (Fig. 13(b)), the outgoing transition overrides the defer, and we cannot ensure that the doActivity receives the event instead of the state machine. Before the doActivity registers an accepter, only the state machine can and will dispatch the event (similar to I8); after the registration they both compete for it (similar to I9).

**Discussion:** In the non-conflicting case (Fig. 13(a)), defer is a solution to keep e1 events dispatched in state S1 when the doActivity is not yet able to process them. These events are delayed and not discarded. However, in the conflicting case (Fig. 13(b)), defer _cannot be used_ to prioritize between the state machine and its doActivities due to the overriding transition. The situation is even more complicated because priority depends on the exact timing of the event: before the doActivity accepter registration, the state machine accepts it; after the registration, they compete for it.
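Issue I8 can be replayed with a few lines of Python. This is an illustrative model of ours (not PSSM machinery, and it deliberately collapses the state machine's own accepter into a simple discard): an e1 dispatched before the doActivity registers its accepter is simply lost.

```python
# Sketch of issue I8 (our own simplified model): an event dispatched before
# the doActivity's accepter exists is discarded, so the doActivity misses it.
pool = ["e1"]       # e1 arrived while the doActivity was still running task1
accepters = []      # the doActivity has not reached its AcceptEventAction yet

def dispatch():
    event = pool.pop(0)
    if event in accepters:
        print(event, "delivered")
    else:
        print(event, "discarded: no matching accepter")

dispatch()                  # e1 is lost here
accepters.append("e1")      # too late: the accept event action now waits
print("doActivity waits for an e1 that was already discarded")
```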
Therefore, pay attention when deferring an event if the state or one of its substates has an outgoing transition triggered by the same event, because that renders the defer useless. Defer does not change how the doActivity runs; it only delays incoming events for a later accept. I1-I3 about the uncertainty of when the doActivity starts and which actions are executed before the abort are still applicable here.

### _DoActivity in a composite state_

#### 4.3.1 DoActivity in a parent state

**Pattern:** In Fig. 14, a doActivity is present in a composite state. The states have outgoing transitions, and the composite state has a completion transition.

**Possible intention:** A composite state is a concise representation of the shared behavior of multiple states. A long-running activity is executed in the doActivity, but the system can continue its other tasks by handling external events in a more complex way than with internal transitions (Section 4.1.4).

**Potential issues:** While concurrency is usually modeled with orthogonal regions, doActivities alone can introduce concurrency for composite states, which should be noted. I14. The doActivity has its own separate lifecycle, and there are no guarantees about when it will start its execution or what parts are finished by the time a substate/transition behavior is executed. See I1. I15. The doActivity can run concurrently with the state machine. The doActivity and the effects of transitions to/from substate S2 can concurrently execute steps observable from outside (e.g., sending signals) or can affect each other (e.g., via shared variables). See I7. I16. A composite state completes if _all_ of its internal activities have completed, i.e., the entry and doActivity Behaviors, and all of its regions have reached a FinalState; so either the doActivity or the region can delay the firing of the completion transition T\({}_{\text{e}}\).

**Discussion:** Orthogonal regions clearly show concurrent behaviors, but doActivities themselves, without orthogonal regions, can introduce concurrency for composite states, which should be considered. Care should be taken not to expect anything in the doActivity to be already performed by the time a nested transition is triggered. For example, initialization required by the substates should be put into separate states to ensure it is finished. This bad practice can be observed in the example in Fig. 1. Also expect that the doActivity can overlap with any behavior in the region.

Fig. 14: A doActivity in a composite parent state.

#### 4.3.2 DoActivity in a substate

**Pattern:** In Fig. 15, a doActivity is present in a substate of a composite state. The states have outgoing transitions, and the composite state has a completion transition.

**Potential issues:** This is very similar to Section 4.1.1 (I1-I3); the main difference is that not only direct outgoing transitions can abort the doActivity, but transitions from the parent state too. See also I16. If the doActivity accepts event e2, then it will compete with the parent state's outgoing transition (I9). However, if S2 defers e2, then the doActivity should receive the event.

Fig. 15: A doActivity in a substate.

#### 4.3.3 Multiple doActivities with accept event actions

**Pattern:** DoActivities are present in a composite state and also in its substate. The states have outgoing transitions.

**Potential issues:** As the state machine's behaviors and doActivities can run concurrently, more doActivities in a state machine can cause many more possible execution traces, all of which should be considered (I15). Both doActivities and the state machine can wait for the same event, in which case all doActivities and the state machine compete for it. Unlike transitions from substates, which have priority, there is no priority defined between S1's doActivity and a transition from the S2 substate, or the doActivity of the S2 substate, if they wait for the same event (I9). Which one gets selected depends on the exact order of accepter registrations and event dispatching (I8). See also I14 and I16.

Fig. 16: Multiple doActivities with accept event actions.

#### 4.3.4 A doActivity with an accept event action and the composite state defers the event

**Pattern:** In Fig. 17, a composite state defers an event and its doActivity has an accept event action for it.

**Possible intention:** A long-running activity is running in the doActivity while the substates handle other tasks and external events. The doActivity waits for event e1 at some point in its execution. Defer is used to delay the event until the doActivity can process it.

**Potential issues:** Deferring in a composite state presents new issues on top of existing ones (I8, I11 and I13-I15). I17. Whether the S1 state can defer the e1 event (and thus whether the doActivity can consume it) changes depending on the current state configuration (i.e., it cannot defer in S1[S3] due to the higher-priority transition from S3). I18. If an event has been deferred, the doActivity can dispatch the event directly from the deferred event pool while the state machine is busy in an RTC step processing another event in a substate (e.g., e3 in state S2).

**Discussion:** As whether the state machine can defer an event is re-evaluated upon every new dispatch, the result of defer can change between events depending on which substate is active. In S1[S2], the doActivity will get a new e1, but in S1[S3] it will compete with the state machine. Consider the other case (I18), where the state machine has deferred an event e1 because the doActivity has not reached its AcceptEventAction yet. If later the doActivity registers an accepter, then the doActivity "steals" the event from the deferred event pool even if the state machine is still in the middle of an RTC step and has not reached a stable state configuration, e.g., it is processing event e3 in state S1[S2]. This behavior contradicts the UML clause that an event is not dispatched while the state machine is busy processing the previous one [6, 14.2.3.9.1], and the obvious expectation that the event pool(s) are stable during RTC steps and only new events can be added to them. This is rather surprising because, when an RTC step is running, it is not clear whether the event is still to be deferred as in the source state or should be released back based on the new target state. Moreover, the state machine does not have such a "shortcut" (I18); if it moves to S1[S3], it cannot accept a deferred e1 event, only a newly dispatched one.

Fig. 17: DoActivity and composite state deferring events.

### _Composite states and orthogonal regions with doActivities_

**Pattern:** There is a composite state with multiple orthogonal regions and doActivities at different hierarchy levels. DoActivities can accept events, and the composite state has a completion transition.

**Possible intention:** Composite states comprise the shared behavior of the included states. Orthogonal regions can be used to model different aspects of state machine behavior running concurrently.

Fig. 18: Orthogonal regions and doActivities.
**Potential issues:** All the concurrency and complexity that composite states, orthogonal regions and doActivities separately introduce can be present if we combine these modeling elements. Therefore, we refer to the issues mentioned above that are also applicable here. Orthogonal regions have the advantage of explicitly showing the modeler's intention that the regions are concurrent with each other. The modeler should consider all possible concurrency conflicts (e.g., between doActivities, entry/exit actions, (internal) transition effects) and the execution traces (I15). Event acceptance in the doActivity depends on the exact order of accepter registration and event dispatching (I8). See also I14 and I16.

**Discussion:** Special care should be taken to consider all possible dependency, timing and concurrency issues, as even a small model can exhibit these kinds of issues. This complexity cries out for tool support [11, 15] that would help the modelers to highlight the possible concurrent behavior, exact conditions for event accepters, shared variables used in concurrent settings, etc. Unfortunately, such advanced tooling is still only scarcely available.

### _Issue categories_

This section summarizes the doActivity patterns and issues mentioned above. Table I shows which issue is relevant to which pattern. The issues are grouped into categories based on the phases of the doActivity, as examined in Section 3:

* **Start** [Section 3.2]: I1 points out that when the doActivity actually starts its execution is uncertain due to its separate lifecycle.
* **Executing** [Section 3.3]: I7, I14 and I15 call attention to activities concurrent with doActivities (e.g., internal transitions, substates and their transitions, orthogonal regions), which might cause problems, e.g., if they use the same variables, send out interleaving signals, or depend on potentially unfinished parts of the doActivity.
* **Event handling** [Section 3.4]: I5, I8 and I9 show that the timing and priority of event accepters in the state machine and the doActivities are complex. I6 describes problems with signals in the case of self-signaling. I11-I13, I17 and I18 draw attention to event deferral, which should be used with care: it depends on state configuration and accepters; deferred events have priority over normal ones both in a doActivity and when they are no longer deferred; a doActivity can dispatch a deferred event while the state machine is busy executing an RTC step; and transitions can override defer.
* **Finalization** [Section 3.5]: I2-I4 and I10 show that the doActivity can run without limits until exiting the state aborts it. Abort might happen before executing any action or even during an action. I16 emphasizes the conditions needed for a state to complete.

## 5 Related work

### _Usage of doActivities in UML state machines_

Although doActivity behaviors are mentioned in most textbooks, limited public information is available about their usage in engineering practice. We analyzed public datasets about UML usage, but found them less relevant. According to our experience originating from industrial R&D projects, doActivities become more important in complex, executable models typical in embedded or safety-critical domains.
Most publicly available UML artifacts are for simpler models mainly used for high-level documentation, in which doActivities and other complex language constructs are not used. For example, Langer et al. [25] present a study on the usage of UML sublanguages and model elements in 150 Enterprise Architect models collected using Google Search; but state machines are not detailed. To find further real-world UML state machine models, we analyzed the Lindholmen Dataset [19], the largest still available dataset, containing about 93 thousand UML files from 24 thousand GitHub repositories. We used the "state" search keyword for file paths and found 2635 file versions from 734 GitHub repositories. We filtered on UML images and models containing complex constructs (e.g., entry/exit actions, doActivities, composite states), and we manually inspected the remaining 96 images and 288 XMI/UML files more thoroughly. We excluded repositories of UML modeling tools, since they contain mostly UML metamodels, test and sample models. The remaining models were mainly home assignments for university courses and sample UML models, and there were some illustrative diagrams to document the project. None of them was a system designed using UML state machines. The authors of the Lindholmen dataset also warn that GitHub contains a high amount of student and toy repositories. This warning confirms our experience that open repositories are not the preferred choice for storing complex design models, and mining such repositories cannot be used to estimate the usage of complex UML elements realistically.

A more realistic source of information about industry practices is, for example, the guidelines and patterns of the OpenSE Cookbook [12], which collects the experiences of numerous model-based systems engineering projects from NASA JPL and ESO. A significantly extended, v2 version of the cookbook is available online [24]. The cookbook shows patterns of how doActivities are used to model long-running tasks in simulation and duration analysis, or how self-signaling and completion transitions combine activities and state machines. Moreover, the cookbook presents complex, real-world examples from the TMT model [18] (e.g., it is worth looking at the Procedure Executive and Analysis Software state machine containing numerous doActivities).

### _Guidelines and patterns for UML state machines_

Several style guides and validation rules are available for UML state machines, but they do not provide detailed patterns for doActivities. Ambler [26] provides useful but high-level advice for arranging and naming UML elements. Torre et al. [27] collected 687 consistency rules for various UML diagrams. The SAIC Digital Engineering Validation Tool\({}^{20}\) is one of the most extensive public validation rule sets. However, none of the rules mention doActivities.

Footnote 20: [https://www.saiic.com/digital-engineering-validation-tool](https://www.saiic.com/digital-engineering-validation-tool)

Das and Dingel [28] collected state machine conventions, patterns and antipatterns for UML-RT. Similarly, they provided consequences and possible refactorings for each pattern. But as UML-RT does not have doActivities, our results are orthogonal. Alenazi et al. [29] collected 42 mistakes mentioned in studies of modeling with SysML, but they also do not cover doActivities.

### _Semantics for UML state machines_

**Execution and simulation** Ciccozzi et al.
[30] provide a survey about execution approaches and tools for UML models. Most of the approaches support class, state machine and activity diagrams. However, only 13 of the 82 solutions are based on the precise fUML semantics, and the survey does not mention PSSM. Micskei et al. [31] collected available tools and experiences about executable UML, specifically about fUML activities and the Alf action language. Guermazi et al. [23] report the lessons learned while implementing fUML and Alf modeling in Eclipse Papyrus and the Moka execution plug-in. Regarding the representation of possible execution traces (Fig. 5), most approaches do not detail the exact steps while firing a complex transition. The closest approach was by Pinter and Majzik [32], who used PERT graphs to express the execution dependencies between basic steps.

**Code generation** DoActivity semantics is also interesting from a code generation perspective. A systematic review of papers between 1992-2010 performed by Dominguez et al. [10] shows that only a minority of the approaches support concurrency and even fewer support doActivities to a minimum extent. Metz et al. [33] introduce code generation concepts, but only for UML 1.1 statechart diagrams, including interruptible doActivities that run on separate threads and are interrupted by incoming events, automatic (completion) transitions; they also mention deferred events. They define modeling constraints, e.g., proposing interruptible activities to be modeled as doActivities and non-interruptible activities as entry behaviors, which are atomic in their work. Pham et al. [34] provide C++ code generation for UML composite structures and state machines in Papyrus. The approach supports concurrency, including doActivities on separate threads, RTC steps, and an event pool with completion and deferred events. Event accepters are not mentioned. They compared the Moka simulation of 66 PSSM test cases and the results from their tool to verify the semantics. However, it is not detailed how they produced the same execution trace both in Moka and their tool for concurrent behaviors where multiple possible traces exist.

**Verification** Andre et al. [15] provide a survey of UML state machine semantics formalizations for model checking purposes from 1997 to 2021. They collected 45 translation-based and 16 operational semantics, and categorized whether they support the 17 non-trivial UML features chosen. We give an overview of the relevant UML 2.x papers where either doActivity support was explicitly mentioned or advanced elements are supported (RTC, orthogonal regions). Fecher and Schonborn [35] give a translation from UML state machines into the "core state machines" semantic domain using natural language. Although it is one of the most complete approaches, it does not define RTC steps properly and does not support entry/exit behaviors. Andre et al. [36] propose a formalization using colored Petri nets (CPN) that supports RTC steps and certain concurrency aspects, without support for deferred events, and only simple states can have doActivities. The semantics is non-standard because a doActivity can be executed as often as wished. Liu et al. [37] propose a formal operational semantics using Labeled Transition Systems (LTS) for all features of UML version 2.4.1 (except time events) and implemented their approach in the USMMC model checker. The semantics support deferred events and doActivities.
The doActivities are in real concurrency with other behaviors of the containing state; however, how event acceptance works in doActivities is not mentioned. Due to unclarities in UML, they restrict the possible executions of (compound) transitions in orthogonal regions by treating certain parts as atomic instead of executing them completely in parallel as PSSM does. This restriction also applies to entry/exit behaviors. The limitation of their work comes from the assumptions made to resolve UML unclarities, and from the lack of a formally defined action language, which would be useful for the definition and analysis of entry/do/exit behaviors and transition effects. Abdelhalim et al. [38] formalize UML activity diagrams using Communicating Sequential Processes (CSP) for model checking. They model the event pool and waiting event accepters following the fUML standard. However, they do not consider activities in the context of state machines.

### _DoActivities in other state machine variants_

Some other state machine variants also have doActivities or similar features. Harel's statecharts [4] from 1987 already had a doActivity-like feature: an activity can be carried out continuously _throughout_ the system being in a specific state, i.e., it is started on entering and stopped on leaving that state. In the original version, most of the issues discussed were not present, since there were no detailed executable activity definitions. Other variants like UML-RT, a UML profile for real-time systems, do not support doActivities. In SCXML [5], the closest construct to doActivity is _invoke_, which is used to create an instance of an external service. The invoked service can be interrupted if the containing state is exited, and the state is left only when the service is completely stopped. The state machine and the service communicate by sending events. We are not aware of works focusing on invoke-related concurrency issues. Crane and Dingel [14] compare Harel, UML and Rhapsody state machines, and emphasize that modelers should be aware of the different interpretations of models in the various formalisms, especially regarding different execution behaviors.

## 6 Conclusion

This paper presented the use of doActivity behaviors in UML state machines and made two contributions. 1) We examined the operational semantics defined in the PSSM specification, and based on an analysis of available artifacts, we synthesized several insights about the complex interplay of doActivity and state machine semantics. Some of these insights are not mentioned anywhere in the description or conformance tests of the specification and, to the best of our knowledge, have not been reported in the literature and are unknown to most engineers (e.g., the priority of accepters). These observations might seem to be minor details, but they can introduce serious non-deterministic errors that are especially hard to detect or debug. 2) Based on the semantic insights, we systematically collected 11 patterns of using doActivities, highlighted things to consider in each situation, and recommended countermeasures to potential issues. The descriptions of the patterns are practical, and we hope they offer actionable advice to engineers without requiring deep knowledge of the operational semantics. Such semantic details are essential when state machines are used for detailed, executable design. DoActivities are inherently concurrent and non-deterministic elements.
Numerous alternative traces shall be considered and reasoned about when doActivities are used even in simple state machines. If the engineers who develop or implement these models are unaware of all the possible behaviors, or tools working with the models do not conform to the specification, then the conflicting understanding of the semantics could lead to numerous problems [39]. Our patterns and guidelines aim to mitigate these issues. As future directions, we plan to conduct a case study on employing the patterns in an industrial setting and will analyze the redesigned semantics of the new SysMLv2 language [40] to make recommendations regarding do actions, contributing to the Execution Annex of KerML [20]. ## Acknowledgments Project no. 2019-1.3.1-KK-2019-00004 has been implemented with the support provided from the National Research, Development and Innovation Fund of Hungary, financed under the 2019-1.3.1-KK funding scheme.
2309.03446
Skew Product Groups for 2-Groups of Maximal Class
Skew morphisms, which generalise automorphisms for groups, provide a fundamental tool for the study of regular Cayley maps and, more generally, for finite groups with a complementary factorisation $X = GY$, where $Y$ is cyclic and core-free in $X$. $X$ is called the skew product group associated with $G$ and $Y$. In this paper, we classify skew product groups for the maximal class 2-groups.
Wenjuan Luo, Hao Yu
2023-09-07T02:00:02Z
http://arxiv.org/abs/2309.03446v1
###### Abstract

Skew morphisms, which generalise automorphisms for groups, provide a fundamental tool for the study of regular Cayley maps and, more generally, for finite groups with a complementary factorisation \(X=GY\), where \(Y\) is cyclic and core-free in \(X\). \(X\) is called the skew product group associated with \(G\) and \(Y\). In this paper, we classify skew product groups for the maximal class 2-groups.

**Skew Product Groups for 2-Groups of Maximal Class** Wenjuan Luo and Hao Yu1

Footnote 1: Corresponding author: [email protected]. This work is supported in part by the National Natural Science Foundation of China (12071312).

**Keywords** skew product groups, 2-groups, regular Cayley map, skew morphism

**MSC(2010)** 20F19, 20B20, 05E18, 05E45.

Capital Normal University, School of Mathematical Sciences, Beijing 100048, People's Republic of China

## 1 Introduction

All groups in this paper are assumed to be finite. A _skew-morphism_ of a group \(G\) is a permutation \(\sigma\) on \(G\) having the properties that \(\sigma(1_{G})=1_{G}\) and that there exists an integer-valued function \(\pi\) on \(G\) such that \(\sigma(gh)=\sigma(g)\sigma^{\pi(g)}(h)\) for all \(g,h\in G\). The function \(\pi\) is called the _power function_ associated with \(\sigma\). Note that if \(\pi(g)=1\) for all \(g\in G\), then the skew-morphism \(\sigma\) is an automorphism of \(G\). Thus skew morphisms generalise the concept of automorphisms for groups. The investigation of skew-morphisms is related to at least the following two topics.

(1) _Group factorizations_: Use \(L_{G}:=\{L_{g}\mid g\in G\}\) to denote the left regular representation of \(G\). Then \(\sigma\), \(L_{g}\in\mathrm{Sym}(G)\). For any \(g,h\in G\), we have \[(\sigma L_{g})(h)=\sigma(gh)=\sigma(g)\sigma^{\pi(g)}(h)=(L_{\sigma(g)}\sigma^{\pi(g)})(h),\] and so \(\sigma L_{g}=L_{\sigma(g)}\sigma^{\pi(g)}\). Therefore, \(\langle\sigma\rangle L_{G}\subseteq L_{G}\langle\sigma\rangle\). Since \(|\langle\sigma\rangle L_{G}|=|L_{G}\langle\sigma\rangle|\), we have \(\langle\sigma\rangle L_{G}=L_{G}\langle\sigma\rangle\), which implies that \(X:=L_{G}\langle\sigma\rangle\) is a subgroup of \(\mathrm{Sym}(G)\), called the _skew-product_ of \(L_{G}\) by \(\sigma\); see [4, 42]. Moreover, one can show that \(\langle\sigma\rangle\) is core-free in \(X\), meaning that there is no nontrivial normal subgroup of \(X\) contained in \(\langle\sigma\rangle\). Conversely, let \(X\) be a finite group admitting a factorization \(X=GY\) with \(G\cap Y=1\) and \(Y=\langle y\rangle\) being cyclic and core-free in \(X\). Then for any \(g\in G\), there exist a unique \(g^{\prime}\in G\) and a unique \(i\in\{1,2,\ldots,|Y|-1\}\) such that \(yg=g^{\prime}y^{i}\). This induces a permutation \(\sigma\) on \(G\) by \(\sigma(g)=g^{\prime}\), and an integer-valued function \(\pi\) on \(G\) by \(\pi(g)=i\). Then one may check that \(\sigma\) is a skew-morphism of \(G\) with power function \(\pi\).
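The defining identity is easy to test exhaustively on small groups. The following Python sketch, our own illustration and not part of the paper, brute-forces all skew-morphisms of a small cyclic group \(\mathbb{Z}_{n}\) (written additively, so \(\sigma(0)=0\)) directly from the definition.

```python
# Illustrative sketch (not from the paper): brute-force all skew-morphisms of
# the cyclic group Z_n, written additively, directly from the definition
# sigma(g + h) = sigma(g) + sigma^{pi(g)}(h) with sigma(0) = 0.
from itertools import permutations

def sigma_powers(sigma, n):
    # Return the list [sigma^0, sigma^1, ...] of all distinct powers.
    powers, cur = [tuple(range(n))], tuple(sigma)
    while cur != powers[0]:
        powers.append(cur)
        cur = tuple(sigma[x] for x in cur)
    return powers

def is_skew_morphism(sigma, n):
    powers = sigma_powers(sigma, n)
    for g in range(n):
        # look for a power-function value pi(g) that works for every h
        if not any(all(sigma[(g + h) % n] == (sigma[g] + p[h]) % n
                       for h in range(n))
                   for p in powers):
            return False
    return True

for p in permutations(range(1, 5)):        # permutations of Z_5 fixing 0
    sigma = (0,) + p
    if is_skew_morphism(sigma, 5):
        print(sigma)                       # prints every skew-morphism of Z_5
```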
(2) _Cayley maps_: The concept of skew morphism was first introduced as a fundamental tool for the study of regular _Cayley maps_ [16]. Let \(G\) be a group and let \(S\) be a subset of \(G\) such that \(1_{G}\not\in S\), \(S=S^{-1}\) and \(G=\langle S\rangle\). Let \(\rho\) be a cycle on \(S\). A Cayley map \(\mathcal{M}=\mathrm{CM}(G,S,\rho)\) is a 2-cell embedding of the Cayley graph \(\mathrm{Cay}(G,S)\) into an orientable closed surface such that, at each vertex \(g\) of \(\mathcal{M}\), the local orientation \(R_{g}\) of the darts \((g,gx)\) incident with \(g\) agrees with \(\rho\) on \(S\), that is, \(R_{g}(g,gx)=(g,gx^{\rho})\) for all \(g\in G\) and \(x\in S\). The automorphism group \(\mathrm{Aut}\left(\mathcal{M}\right)\) of a Cayley map \(\mathcal{M}\) contains a vertex-regular subgroup induced by left multiplication by the elements of \(G\), which acts semi-regularly on the darts of \(\mathcal{M}\). If \(\mathrm{Aut}\left(\mathcal{M}\right)\) is regular, then the map \(\mathcal{M}\) is called a _regular Cayley map_. It was shown by Jajcay and Siran that a Cayley map \(\mathcal{M}\) is regular if and only if \(\rho\) extends to a skew-morphism of \(G\); see [16, Theorem 1]. Thus the problem of determining all regular Cayley maps of a group \(G\) is equivalent to the problem of determining all skew-morphisms of \(G\) containing a generating orbit which is closed under taking inverses. Therefore, it is sufficient for us to consider skew product groups \(X=GY\) with \(G\cap Y=1\) and \(Y=\langle y\rangle\) being cyclic and core-free in \(X\).

Now we are ready to recall the history of the study of skew-morphisms of groups. An interesting and important problem in this area is the determination of the skew-morphisms of a given family of groups. The problem seems challenging because even the skew-morphisms of the cyclic groups have not yet been completely determined. For partial results on cyclic groups, see [4, 5, 8, 18, 19, 24]. For finite nonabelian simple groups and finite nonabelian characteristically simple groups, they were classified in [2] and [3], respectively, and for elementary abelian \(p\)-groups, a global structure was characterized in [9]. Building on the efforts of several authors working on regular Cayley maps (see [4, 12, 25, 20, 21, 22, 31, 23, 33, 34, 38, 39, 40, 41, 42]), the final classification of skew product groups of dihedral groups was given in [12]. For generalized quaternion groups, there are some partial results; see [13] and [26]. A 2-group of order \(2^{n}\geq 8\) is said to be of maximal class if it has nilpotency class \(n-1\). In this paper, we shall classify skew product groups for 2-groups of maximal class.

Given the _skew product group_ \(L_{G}\langle\sigma\rangle\) of \(L_{G}\) by \(\sigma\), for the purpose of this paper, we may define the _skew product group_ \(X:=G\langle\sigma\rangle\) of \(G\) by \(\sigma\) as follows: every element of \(X\) is uniquely written as \(g\sigma^{i}\), where \(g\in G\) and \(i\) is a nonnegative integer less than the order of \(\sigma\); for each pair of elements \(a\sigma^{i},b\sigma^{j}\in X\), we have \((a\sigma^{i})(b\sigma^{j})=a\sigma^{i}(b)\sigma^{\sum_{k=0}^{i-1}\pi(\sigma^{k}(b))+j}\). It is straightforward to check, using the definition of the skew-morphism \(\sigma\), that \(X\) is indeed a group with the operation defined above. Sometimes we simply call \(X\) a skew-product group of \(G\) for short.

Throughout this paper, set \(C=\langle c\rangle\) and \[\begin{array}{l}Q=\langle a,b\mid a^{2n}=1,b^{2}=a^{n},a^{b}=a^{-1}\rangle\cong Q_{4n},\,n\geq 2,\\ D=\langle a,b\mid a^{n}=b^{2}=1,a^{b}=a^{-1}\rangle\cong D_{2n},\,n\geq 2.\end{array}\tag{1}\]

Let \(G\in\{Q,D\}\) and let \(X=X(G)=GC=\langle a,b\rangle\langle c\rangle\) be a group. In Theorem 1.3, a classification of \(X(Q)\) is given, provided that \(C\) is core-free.
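As an independent sanity check of the multiplication rule above, the sketch below (our own, not the paper's code) builds the skew product of \(\mathbb{Z}_{5}\) by \(\langle\sigma\rangle\) and verifies associativity by brute force. We deliberately choose \(\sigma\) to be an automorphism, so that \(\pi\equiv 1\) is obviously a valid power function; genuine skew-morphisms have non-constant \(\pi\), but the multiplication rule is the same.

```python
# Sketch (ours): realize the product rule
# (a s^i)(b s^j) = a * s^i(b) * s^{ sum_{k=0}^{i-1} pi(s^k(b)) + j }
# for G = Z_5 (additive) and sigma(x) = 2x, an automorphism, so pi = 1.
n = 5
sigma = [2 * x % n for x in range(n)]

def sigma_pow(i, x):                  # apply sigma i times
    for _ in range(i):
        x = sigma[x]
    return x

d = 4                                  # order of sigma: 2^4 = 16 = 1 (mod 5)
pi = lambda g: 1                       # power function of an automorphism

def mul(p, q):
    (a, i), (b, j) = p, q
    exp = sum(pi(sigma_pow(k, b)) for k in range(i)) + j
    return ((a + sigma_pow(i, b)) % n, exp % d)

X = [(g, i) for g in range(n) for i in range(d)]
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x in X for y in X for z in X)
print("X = G<sigma> is a group of order", len(X))   # 20 = |Z_5| * o(sigma)
```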
For skew product groups of \(p\)-groups, we have the following characterization:

**Theorem 1.1**: _Let \(X=GC\) be a group, where \(G\) is a \(p\)-group and \(C\) is a cyclic group such that \(G\cap C=1\). Set \(C=C_{1}\times C_{2}\), where \(C_{1}\) is the Sylow \(p\)-subgroup of \(C\). If \(C_{X}=1\), then \(F(X)=O_{p}(X)=G_{1}C_{1}\), where \(G_{1}=O_{p}(X)\cap G\neq 1\) and \(G_{1}C_{1}\rtimes C_{2}\lhd X\)._

**Theorem 1.2**: _Let \(X=GC\) be a group, where \(C\) is a cyclic group, and suppose that \(G\) is a maximal class 2-group and \(|G|=2^{n}\geq 32\). Assume that \(G\cap C=1\) and that \(C_{X}=1\). Then \(X\) is a \(2\)-group._

**Theorem 1.3**: _Let \(X=GC\) be a 2-group, where \(G\) is a maximal class group, \(C\) is a cyclic group and \(G\cap C=1\). If \(C_{X}=1\), then \(G_{X}\) is \(\langle a_{0}\rangle\), \(\langle a^{2},b\rangle\) or \(G\), where \(a_{0}=a^{2^{n-2}}\)._

**Theorem 1.4**: _Let \(X=GC\) be a 2-group, where \(G\) is a maximal class group, \(C\) is a cyclic group and \(G\cap C=1\). Let \(R\) be the set of defining relations of \(G\). Then \(X\) is isomorphic to one of the following groups:_

1. \(X=\langle a,b,c\mid R,a^{c}=a^{r},b^{c}=a^{s}b\rangle\), _where_ \(r^{2^{m}}\equiv 1\ (\mathrm{mod}\ 2^{n-1})\)_, and_ \(r^{2^{m-1}}\not\equiv 1\ (\mathrm{mod}\ 2^{n-1})\) _or_ \(s\frac{r^{2^{m-1}}-1}{r-1}\not\equiv 0\ (\mathrm{mod}\ 2^{n-1})\)_. Moreover, if_ \(G\) _is a semidihedral 2-group, then_ \(2\mid s\)_;_
2. \(X=\langle a,b,c\mid R,(a^{2})^{c}=a^{2r},\,c^{b}=a^{2s}c,\,c^{a}=a^{2t}b^{u}c^{v}\rangle\), _where_ \(r^{2^{m}}\equiv 1\ (\mathrm{mod}\ 2^{n-2})\)_,_ \(s\sum_{l=1}^{2^{m}}r^{l}\equiv 0\ (\mathrm{mod}\ 2^{n-2})\)_, and either_ 1. \(u=0\)_,_ \(r^{v-1}\equiv 1\ (\mathrm{mod}\ 2^{n-2})\)_,_ \((s+2t)r\equiv(1-r)+s\sum_{l=1}^{v}r^{l}\ (\mathrm{mod}\ 2^{n-2})\)_,_ \(t\sum_{l=1}^{2^{m}}r^{l}\equiv 0\ (\mathrm{mod}\ 2^{n-2})\)_,_ \(v^{2}\equiv 1\ (\mathrm{mod}\ 2^{m})\) _and_ \(1-r\equiv tr+t\sum_{l=1}^{v}r^{l}\ (\mathrm{mod}\ 2^{n-2})\)_; or_ 2. \(u=1\)_,_ \(r^{v-1}+1\equiv 0\ (\mathrm{mod}\ 2^{n-2})\)_,_ \((sr+1-r)\sum_{l=0}^{v-1}r^{l}\equiv(s+2t+1)r\ (\mathrm{mod}\ 2^{n-1})\)_,_ \((t(1-r^{-1})+s\sum_{l=0}^{v-1}r^{l})\sum_{l=0}^{2^{m-1}-1}r^{2l}\equiv 0\ (\mathrm{mod}\ 2^{n-2})\)_, and_ \(r^{2}[t(1-r^{-1})+s\frac{r^{v}-1}{r-1}]\frac{r^{v-1}-1}{r^{2}-1}+2^{n-3}i\equiv 0\ (\mathrm{mod}\ 2^{n-2})\)_;_
3. \(X=\langle a,b,c\mid R,(a^{2})^{c^{2}}=a^{2},(c^{2})^{a}=a^{2s}c^{-2},(c^{2})^{b}=a^{2u}c^{2},a^{c}=bc^{2y}\rangle\), _where_ \(sy\equiv 1+i2^{n-3}\ (\mathrm{mod}\ 2^{n-2})\) _and_ \(yu\equiv-1\ (\mathrm{mod}\ 2^{n-3})\)_; here_ \(i=1\) _if_ \(G\) _is a generalized quaternion group and_ \(i=0\) _if_ \(G\) _is either a dihedral group or a semidihedral group._
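The congruence conditions in Theorem 1.4(1) are easy to sanity-check numerically. The following sketch is our own aid (not part of the proofs); it enumerates parameter pairs \((r,s)\) satisfying the conditions for a small instance, with \(2^{m}\) denoting \(|C|\) as in the theorem.

```python
# Numerical sanity check (ours) for Theorem 1.4(1): a^c = a^r, b^c = a^s b
# needs r^(2^m) = 1 (mod 2^(n-1)), while core-freeness of <c> needs
# r^(2^(m-1)) != 1 or s*(r^(2^(m-1)) - 1)/(r - 1) != 0 (mod 2^(n-1)).
def valid(n, m, r, s):
    M = 2 ** (n - 1)                       # o(a) = 2^(n-1)
    if pow(r, 2 ** m, M) != 1:
        return False                       # c would not act with order dividing 2^m
    half = 2 ** (m - 1)
    # geometric sum 1 + r + ... + r^(half-1) = (r^half - 1)/(r - 1) mod M,
    # computed as a sum to avoid modular division
    geom = sum(pow(r, l, M) for l in range(half)) % M
    return pow(r, half, M) != 1 or (s * geom) % M != 0

# e.g. n = 5 (so o(a) = 16) and m = 2 (so o(c) = 4):
print([(r, s) for r in range(1, 16, 2) for s in range(16)
       if valid(5, 2, r, s)][:6])
```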
## 2 Preliminaries

In this section, the notation and elementary facts used in this paper are collected.

### Notation

In this paper, all the groups are supposed to be finite. We set up the notation below, where \(G\) and \(H\) are groups, \(M\) is a subgroup of \(G\), \(n\) is a positive integer and \(p\) is a prime number.

\(|G|\) and \(\mathrm{o}(g)\): the order of \(G\) and of an element \(g\) in \(G\), resp.; \(H\leq G\) and \(H<G\): \(H\) is a subgroup of \(G\) and \(H\) is a proper subgroup of \(G\), resp.; \([G:H]\): the set of cosets of \(G\) relative to a subgroup \(H\); \(H\lhd G\) and \(H\ \mathrm{char}\ G\): \(H\) is a normal and a characteristic subgroup of \(G\), resp.; \(G^{\prime}\) and \(Z(G)\): the derived subgroup and the center of \(G\), resp.; \(M_{G}\): the core of \(M\) in \(G\), which is the maximal normal subgroup of \(G\) contained in \(M\); \(G\rtimes H\): a semidirect product of \(G\) by \(H\), in which \(G\) is normal; \(G.H\): an extension of \(G\) by \(H\), where \(G\) is normal; \(C_{G}(M)\): the centralizer of \(M\) in \(G\); \(N_{G}(M)\): the normalizer of \(M\) in \(G\); \(\mathrm{Syl}_{p}(G)\): the set of all Sylow \(p\)-subgroups of \(G\); \([a,b]:=a^{-1}b^{-1}ab\), the commutator of \(a\) and \(b\) in \(G\); \(\Omega_{1}(G)\): the subgroup \(\langle g\in G\mid g^{p}=1\rangle\) of \(G\), where \(G\) is a \(p\)-group; \(\mho_{n}(G)\): the subgroup \(\langle g^{p^{n}}\mid g\in G\rangle\) of \(G\), where \(G\) is a \(p\)-group; \(O_{p}(G)\) and \(O_{p^{\prime}}(G)\): the maximal normal \(p\)-subgroup and \(p^{\prime}\)-subgroup of \(G\), resp.; \(F(G)\): the Fitting subgroup of \(G\) (the product of all nilpotent normal subgroups of \(G\)).

### Elementary facts

**Proposition 2.1**: _[14, Theorem 11.9] Let \(G\) be a maximal class group with \(|G|=2^{n}\). Then \(G\) is isomorphic to one of the following three groups:_ * \(D_{2^{n}}:=\langle a,b\mid a^{2^{n-1}}=b^{2}=1,\,a^{b}=a^{-1}\rangle,\,n\geq 3\)_;_ * \(Q_{2^{n}}:=\langle a,b\mid a^{2^{n-1}}=1,\,b^{2}=a^{2^{n-2}},\,a^{b}=a^{-1}\rangle,\,n\geq 3\)_;_ * \(SD_{2^{n}}:=\langle a,b\mid a^{2^{n-1}}=b^{2}=1,\,a^{b}=a^{-1+2^{n-2}}\rangle,\,n\geq 4\)_._

**Lemma 2.2**: _Let \(G\) be a maximal class group with \(|G|=2^{n}\), where \(n\geq 5\). Then \(\mathrm{Aut}\,(G)\) is a \(2\)-group._

**Proposition 2.3**: _[14, Theorem 4.5] Let \(H\) be a subgroup of \(G\). Then \(N_{G}(H)/C_{G}(H)\) is isomorphic to a subgroup of \(\mathrm{Aut}\,(H)\)._

**Proposition 2.4**: _[29, Theorem] If \(G\) is a transitive permutation group of degree \(n\) with a cyclic point-stabilizer, then \(|G|\leq n(n-1)\)._

**Proposition 2.5**: _[15, Satz 1 and Satz 2] Let \(G=AB\) be a group, where both \(A\) and \(B\) are abelian subgroups of \(G\). Then_ 1. \(G\) _is meta-abelian, that is,_ \(G^{\prime}\) _is abelian;_ 2. _if_ \(G\neq 1\)_, then_ \(A\) _or_ \(B\) _contains a normal subgroup_ \(N\neq 1\) _of_ \(G\)_._

**Proposition 2.6**: _[14, Kap. III, Satz 4.2(b)] Suppose that \(G\) is a solvable group and \(F(G)\) is the Fitting subgroup of \(G\). Then \(C_{G}(F(G))\leq F(G)\)._

**Proposition 2.7**: _[8, Remark 1.2(i)] The order of each skew morphism of a cyclic group of order \(2^{n}\) is equal to \(2^{m}\) for some \(m<n\), and so the corresponding skew-product group is a bicyclic \(2\)-group with a core-free cyclic factor._

## 3 Proof of Theorem 1.1

**Lemma 3.1**: _Let \(X=GC\) be a group, where \(G\) is a \(p\)-group and \(C\) is a cyclic group such that \(G\cap C=1\). Set \(C=C_{1}\times C_{2}\), where \(C_{1}\in\mathrm{Syl}_{p}(C)\). If \(C_{X}=1\), then \(F(X)=O_{p}(X)=G_{1}C_{1}\), where \(G_{1}=O_{p}(X)\cap G\neq 1\) and \(G_{1}C_{1}\rtimes C_{2}\lhd X\)._

**Proof** Since \(X\) is a product of two nilpotent groups, it is solvable and so \(F(X)\neq 1\). Note that \(O_{p^{\prime}}(X)\leq C\). Thus \(O_{p^{\prime}}(X)=1\) as \(C_{X}=1\). Then \(F(X)=O_{p}(X)\).
Let \(P=GC_{1}\in\mathrm{Syl}_{p}(X)\). Obviously, \(O_{p}(X)=\bigcap_{x\in C_{2}}P^{x}\), hence \(C_{1}\leq O_{p}(X)\) and so \(O_{p}(X)=(O_{p}(X)\cap G)C_{1}\). Note that \(G_{1}:=O_{p}(X)\cap G\neq 1\) as \(C_{X}=1\). Let \(\overline{X}=X/O_{p}(X)=\overline{G}\,\overline{C}_{2}\). Observe that \(O_{p}(\overline{X})=1\), which implies \(F(\overline{X})=O_{p^{\prime}}(\overline{X})\leq\overline{C}_{2}\). By Proposition 2.6, we have \(\overline{C}_{2}\leq C_{\overline{X}}(F(\overline{X}))\leq F(\overline{X})\leq\overline{C}_{2}\), and therefore \(O_{p^{\prime}}(\overline{X})=\overline{C}_{2}\). Thus \(O_{p}(X)\rtimes C_{2}=G_{1}C_{1}\rtimes C_{2}\lhd X\). \(\square\)

## 4 Proof of Theorem 1.2

Note that both \(D_{4}\) and \(D_{8}\) admit a skew-morphism of order \(3\). However, when \(n\geq 4\), we have the following results.

**Lemma 4.1**: _Let \(X=GC\) be a group, where \(C\) is a cyclic group, and suppose that \(G\) is a maximal class group and \(|G|=2^{n}\geq 32\). Assume that \(G\cap C=1\) and that \(C_{X}=1\). Then \(X\) is a \(2\)-group._

**Proof** The result is true for \(n=5\), and so we assume that \(n>5\) and proceed by induction on \(n\). Set \(C=C_{1}\times C_{2}\), where \(C_{1}\in\mathrm{Syl}_{2}(C)\). Obviously, \(GC_{1}\) is a Sylow \(2\)-subgroup of \(X\). By Lemma 3.1, we have \(F(X)=O_{2}(X)\) and \(O_{2}(X)\rtimes C_{2}\lhd X\).

Assume that \(O_{2}(X)<P\). Set \(G_{1}:=O_{2}(X)\cap G\) and take \(X_{1}:=G_{1}C\). Then \(G_{1}<G\), and observe that \(G_{1}\) is a cyclic group or a maximal class group. For the former case, \(X_{1}/C_{X_{1}}\) is a \(2\)-group by Proposition 2.7, and hence \(C_{2}\ \mathrm{char}\ X_{1}\lhd X\). Then \(C_{2}\lhd X\). Since \(C_{X}=1\), we get \(C_{2}=1\), as desired. For the latter case, \(32\leq|G_{1}|=\frac{|G|}{2}\), so by the induction hypothesis, \(X_{1}/C_{X_{1}}\) is a \(2\)-group, and hence \(C_{2}\ \mathrm{char}\ G_{1}C\lhd X\). Obviously, \(C_{2}=1\) as \(C_{X}=1\).

Now assume that \(O_{2}(X)=P\). Let \(G=\langle a,b\rangle\) and \(C_{1}=\langle c_{1}\rangle\), where \(\mathrm{o}(a)=2^{n-1}\). Set \(a_{0}=a^{2^{n-2}}\). Note that \(P=GC_{1}=\langle a,b,c_{1}\rangle\). Let \(\Phi(P)\) be the Frattini subgroup of \(P\). Observe that \(|P:\Phi(P)|\) is either \(4\) or \(8\), as \(P=\langle a,b,c_{1}\rangle\). Let \(\overline{X}=X/\Phi(P)=\overline{G}\,\overline{C}\). For the former case, \(|\overline{G}|=|G/(\Phi(P)\cap G)|=2\), and then \(G\cap\Phi(P)=\langle a^{2},b\rangle\) or \(\langle a\rangle\). If \(G\cap\Phi(P)=\langle a^{2},b\rangle\), then \(\overline{X}=(\langle\overline{a}\rangle\times\langle\overline{c}_{1}\rangle)\rtimes\langle\overline{c}_{2}\rangle\), and therefore \(\langle\overline{c}_{2}\rangle\lhd\overline{X}\). Note that \(\langle a^{2},b\rangle C=\Phi(P)C_{2}\lhd X\). Since \(32\leq|\langle a^{2},b\rangle|=\frac{|G|}{2}\), by the induction hypothesis, \(C_{2}\ \mathrm{char}\ \langle a^{2},b\rangle C\), and therefore \(C_{2}\lhd X\). Observe that \(C_{2}=1\) as \(C_{X}=1\). If \(\Phi(P)\cap G=\langle a\rangle\), then \(\overline{X}=(\langle\overline{b}\rangle\times\langle\overline{c}_{1}\rangle)\rtimes\langle\overline{c}_{2}\rangle\), and therefore \(\langle\overline{c}_{2}\rangle\lhd\overline{X}\). Note that \(\langle a\rangle C=\Phi(P)C_{2}\lhd X\). By Proposition 2.7, \(C_{2}\ \mathrm{char}\ \langle a\rangle C\), and therefore \(C_{2}\lhd X\). Note that \(C_{2}=1\) as \(C_{X}=1\). For the latter case, \(\langle a,c_{1}\rangle=\langle a\rangle\langle c_{1}\rangle\) is a subgroup of \(X\). Observe that \(a^{2}\in\Phi(P)\) and \(c_{1}^{2}\in\Phi(P)\).
Then \(\Phi(P)=\langle a^{2}\rangle\langle c_{1}^{2}\rangle\). If \(C_{2}\neq 1\), then \(|c_{1}|<|a|\), and therefore \(\mho_{m}(P)=\langle a^{2^{m}}\rangle\neq 1\). Since \(\langle a_{0}\rangle\ \mathrm{char}\ \mho_{m}(P)\ \mathrm{char}\ P\ \mathrm{char}\ X\), we get \(\langle a_{0}\rangle\lhd X\). Since \(32\leq|G/\langle a_{0}\rangle|=\frac{|G|}{2}\) and \(G/\langle a_{0}\rangle\) is a maximal class \(2\)-group, by the induction hypothesis, \(\langle a_{0}\rangle\langle c_{2}\rangle/\langle a_{0}\rangle\lhd X/\langle a_{0}\rangle\), and therefore \(\langle a_{0}\rangle\langle c_{2}\rangle\lhd X\). Note that \(C_{2}\lhd X\) as \(C_{2}\ \mathrm{char}\ \langle a_{0}\rangle\langle c_{2}\rangle\lhd X\), contradicting \(C_{X}=1\). \(\square\)

## 5 Proof of Theorem 1.3

_Notation:_ Recall \(D_{2^{n}}=\langle a,b\mid a^{2^{n-1}}=b^{2}=1,\,a^{b}=a^{-1}\rangle\), \(Q_{2^{n}}=\langle a,b\mid a^{2^{n-1}}=1,\,b^{2}=a^{2^{n-2}},\,a^{b}=a^{-1}\rangle\) and \(SD_{2^{n}}=\langle a,b\mid a^{2^{n-1}}=b^{2}=1,\,a^{b}=a^{-1+2^{n-2}}\rangle\). Assume \(n\geq 5\). Let \(X=GC\) be a \(2\)-group, where \(G\in\{D_{2^{n}},Q_{2^{n}},SD_{2^{n}}\}\), \(C=\langle c\rangle\cong\mathbb{Z}_{2^{m}}\), \(G\cap C=1\) and \(C_{X}=1\). Then \(X\) is a skew product group of \(G\). By Proposition 2.4, we get \(m<n\), that is, \(\mathrm{o}(c)\leq\mathrm{o}(a)\). Set \(a_{0}:=a^{2^{n-2}}\) and \(z:=c^{2^{m-1}}\). Recall that \(\Phi(X)\) is the Frattini subgroup of \(X\). From \(X=\langle a,b,c\rangle\) it follows that \(d(X)\leq 3\). We prove Theorem 1.3 via the following three lemmas.

**Lemma 5.1**: \(G_{X}\neq 1\)_._

**Proof** Since \(X\) is a \(2\)-group, we get \(Z(X)\neq 1\). Then for any \(gc^{k}\in Z(X)\) with \(gc^{k}\neq 1\), we have \(g\neq 1\) as \(C_{X}=1\). Since \([gc^{k},c]=[g,c]^{c^{k}}=1\), we have \([g,c]=1\) and \(g\in\bigcap_{c^{i}\in C}G^{c^{i}}=G_{X}\). Thus \(G_{X}\neq 1\). \(\square\)

**Lemma 5.2**: _If \(G_{X}\leq\langle a\rangle\) and \(|G_{X}|\geq 4\), then \(\langle a\rangle\langle c\rangle<X\)._

**Proof** Suppose that \(\langle a\rangle\langle c\rangle\) is not a group. Then \(X=\langle a,c\rangle\), as \(\langle a\rangle\langle c\rangle\subseteq\langle a,c\rangle\) and \(|\langle a\rangle\langle c\rangle|=\frac{|X|}{2}\), and thus \(|\Phi(X)|=\frac{|X|}{4}\). Observe that \(G<G\Phi(X)<X\) as \(G<X\) and \(C<X\). Then \(\Phi(X)\cap G=\langle a^{2},b_{1}\rangle\) for some \(b_{1}\in G\setminus\langle a\rangle\), because \(2\leq|G\Phi(X)/\Phi(X)|\leq 4\). Note that \(\Phi(X)=\langle a^{2},b_{1}\rangle\langle c^{2}\rangle\) as \(c^{2}\in\Phi(X)\). Since \(|G_{X}|\geq 4\) and \(G_{X}\leq\langle a\rangle\), we have \(\langle a^{2^{n-3}}\rangle\ \mathrm{char}\ G_{X}\lhd X\), and so \(\langle a^{2^{n-3}}\rangle\lhd X\). Let \(H=C_{X}(\langle a^{2^{n-3}}\rangle)\). Note that \(X/H\lesssim\mathrm{Aut}\left(\langle a^{2^{n-3}}\rangle\right)\) and \(|\mathrm{Aut}\left(\langle a^{2^{n-3}}\rangle\right)|=2\), and therefore \(|X/H|\leq 2\). Then \(\Phi(X)<H\), and so \([a^{2^{n-3}},b_{1}]=1\), a contradiction. Thus \(\langle a\rangle\langle c\rangle<X\). \(\square\)

**Lemma 5.3**: _If \(\langle a\rangle\langle c\rangle\leq X\), then \(G_{X}\) is either \(\langle a^{2},b\rangle\) or \(G\)._

**Proof** On the contrary, assume that \(G_{X}\) is neither \(G\) nor \(\langle a^{2},b\rangle\). Then \(G_{X}\leq\langle a\rangle\). Let \(z\) be defined as above.
Pick \(a_{2}\in G_{X}\) such that \(\langle c\rangle\langle a_{2}^{2}\rangle/\langle a_{2}^{2}\rangle\) is core-free in \(X/\langle a_{2}^{2}\rangle\), but \(\langle c\rangle\langle a_{2}\rangle/\langle a_{2}\rangle\) has a nontrivial core, say \(\langle c^{i}\rangle\langle a_{2}\rangle/\langle a_{2}\rangle\), in \(X/\langle a_{2}\rangle\). Then in \(\overline{X}:=X/\langle a_{2}^{2}\rangle\), \(\Omega_{1}(\langle\overline{a}_{2}\rangle\times\langle\overline{c}^{i}\rangle)=\langle\overline{c}^{2i}\rangle\lhd\overline{X}\), which implies \(c^{i}=z\). In particular, \(\langle a_{2}\rangle\rtimes\langle z\rangle\lhd X\). Considering the conjugation action of \(\overline{G}\) on \(\langle\overline{a}_{2},\overline{z}\rangle\cong D_{4}\), there exists an involution \(\overline{a^{i}b}\in\overline{G}\) exchanging \(\overline{z}\) and \(\overline{a}_{2}\overline{z}\) (for simplicity we denote \(\overline{a^{i}b}\) by \(\overline{b}\)). Since \(\overline{X}=\overline{GC}=(\langle\overline{a}\rangle\langle\overline{c}\rangle)\rtimes\langle\overline{b}\rangle\), first we write \(\overline{c}^{\overline{b}}=\overline{a}^{s}\overline{c}^{t}\), where \(t\neq 0\). Then \[\overline{c}=\overline{c}^{\overline{b}^{2}}=(\overline{a}^{s}\overline{c}^{t})^{\overline{b}}=\overline{a}^{-s}(\overline{a}^{s}\overline{c}^{t})^{t}=\overline{c}^{t}(\overline{a}^{s}\overline{c}^{t})^{t-1},\] that is, \((\overline{a}^{s}\overline{c}^{t})^{t-1}=\overline{c}^{1-t}\). Then we have \[(\overline{c}^{t-1})^{\overline{b}}=(\overline{c}^{\overline{b}})^{t-1}=(\overline{a}^{s}\overline{c}^{t})^{t-1}=\overline{c}^{1-t}.\] If \(t\neq 1\), then \(\overline{z}^{\overline{b}}\in\langle\overline{c}^{t-1}\rangle^{\overline{b}}=\langle\overline{c}^{t-1}\rangle\), contradicting \(\overline{z}^{\overline{b}}=\overline{a}_{2}\overline{z}\). So \(t=1\), that is, \(\overline{c}^{\overline{b}}=\overline{a}^{s}\overline{c}\). Secondly, we write \(\overline{c}^{\overline{b}}=\overline{c}^{t_{1}}\overline{a}^{s_{1}}\). With the same arguments, we may get \(t_{1}=1\) and \(\overline{c}^{\overline{b}}=\overline{c}\overline{a}^{s_{1}}\). Therefore, we have \[\overline{a}^{s}\overline{c}=\overline{c}^{\overline{b}}=\overline{c}\overline{a}^{s_{1}},\] that is, \((\overline{a}^{s})^{\overline{c}}=\overline{a}^{s_{1}}\). Clearly \(\langle\overline{a}^{s}\rangle=\langle\overline{a}^{s_{1}}\rangle\), that is, \(\overline{c}\) normalises \(\langle\overline{a}^{s},\overline{b}\rangle\). Then \[\langle\overline{a}^{s},\overline{b}\rangle\leq\cap_{\overline{c}^{t}\in\langle\overline{c}\rangle}\overline{G}^{\overline{c}^{t}}=\cap_{\overline{x}\in\overline{X}}\overline{G}^{\overline{x}}=\overline{G}_{\overline{X}}<\langle\overline{a}\rangle,\] a contradiction. \(\Box\) ## 6 Classification To prove Theorem 1.3, set \(R:=\{a^{2n}=c^{m}=1,\,b^{2}=a^{n},\,a^{b}=a^{-1}\}.\) Then we shall deal with the five cases in Theorem 1.1 in the following five subsections, separately. Let \(A=G.\langle t\rangle\) where \(G\lhd A\) and \(t^{l}=g\in G\). Then \(t\) induces an automorphism \(\tau\) of \(G\) by conjugation. Recall that by the cyclic extension theory of groups, this extension is valid if and only if \[\tau^{l}=\mathop{\rm Inn}(g)\quad\mbox{and}\quad\tau(g)=g.\] ### \(G\lhd X\) **Lemma 6.1**: _Suppose that \(G\lhd X\) and \(C_{X}=1\). Then_ \[X=\langle a,b,c|R,a^{c}=a^{r},b^{c}=a^{s}b\rangle,\] _where \(r^{2^{m}}\equiv 1(2^{n-1})\), and \(r^{2^{m-1}}\not\equiv 1(2^{n-1})\) or \(s\frac{r^{2^{m-1}}-1}{r-1}\not\equiv 0(2^{n-1})\). 
Moreover, if \(G\) is a semidihedral \(2\)-group, then \(2\mid s\)._ **Proof** Since \(G\lhd X\), we set \(a^{c}=a^{r}\) and \(b^{c}=a^{s}b\). Let \(\pi\in\mbox{\rm Aut}\,(G)\) be such that \(\pi(a)=a^{r}\) and \(\pi(b)=a^{s}b\). Then \(o(\pi(a))=o(a)\) and \(\pi^{2^{m}}=1\), that is, \(r^{2^{m}}\equiv 1(2^{n-1})\). Note that if \(G\) is a semidihedral \(2\)-group, then \(o(b)=o(\pi(b))=o(a^{s}b)=2\), and so \(2\mid s\). Ensure \(\langle c\rangle_{X}=1\): \(a^{z}=a^{c^{2^{m-1}}}=a^{r^{2^{m-1}}}\neq a\) or \(b^{z}=b^{c^{2^{m-1}}}=a^{s\frac{r^{2^{m-1}}-1}{r-1}}b\neq b\), that is, \(r^{2^{m-1}}\not\equiv 1(2^{n-1})\) or \(s\frac{r^{2^{m-1}}-1}{r-1}\not\equiv 0(2^{n-1})\). \(\square\) ### \(G_{X}=\langle a^{2},b\rangle\) **Lemma 6.2**: _Suppose that \(G_{X}=\langle a^{2},b\rangle\). Then_ \[X=\langle a,b,c|R,(a^{2})^{c}=a^{2r},\,c^{b}=a^{2s}c,\,c^{a}=a^{2t}b^{u}c^{v}\rangle,\] _where \(r^{2^{m}}\equiv 1(\mbox{\rm mod }2^{n-2})\), \(s\sum_{l=1}^{2^{m}}r^{l}\equiv 0(\mbox{\rm mod }2^{n-2})\), and either_ * \(u=0\), \(r^{v-1}\equiv 1(\mbox{\rm mod }2^{n-2})\), \((s+2t)r\equiv(1-r)+s\sum_{l=1}^{v}r^{l}(\mbox{\rm mod }2^{n-2})\), \(t\sum_{l=1}^{2^{m}}r^{l}\equiv 0(\mbox{\rm mod }2^{n-2})\), \(v^{2}\equiv 1(\mbox{\rm mod }2^{m})\) and \(1-r\equiv tr+t\sum_{l=1}^{v}r^{l}(\mbox{\rm mod }2^{n-2})\); _or_ * \(u=1\), \(r^{v-1}+1\equiv 0(\mbox{\rm mod }2^{n-2})\), \((sr+1-r)\sum_{l=0}^{v-1}r^{l}\equiv(s+2t+1)r(\mbox{\rm mod }2^{n-1})\), \((t(1-r^{-1})+s\sum_{l=0}^{v-1}r^{l})\sum_{l=0}^{2^{m-1}-1}r^{2l}\equiv 0(\mbox{\rm mod }2^{n-2})\) and \(r^{2}[t(1-r^{-1})+s\frac{r^{v}-1}{r-1}]\frac{r^{v-1}-1}{r^{2}-1}+2^{n-3}i\equiv 0(\mbox{\rm mod }2^{n-2})\). **Proof** \(X=((\langle a^{2}\rangle\rtimes\langle c\rangle).\langle b\rangle).\langle a\rangle\). Set \((a^{2})^{c}=a^{2r}\), \(c^{b}=a^{2s}c\), \(c^{a}=a^{2t}b^{u}c^{v}\), where \(u\in\{0,1\}\). We now determine the parameters \(r,s,t,u\) and \(v\) by analysing three extensions. (1) \(\langle a^{2}\rangle\rtimes\langle c\rangle\), where \((a^{2})^{c}=a^{2r}\). Set \(\pi_{1}\in\mbox{\rm Aut}\,(\langle a^{2}\rangle)\) such that \(\pi_{1}(a^{2})=a^{2r}\). As mentioned before, this extension is valid if and only if \(\mbox{\rm o}(\pi_{1}(a^{2}))=\mbox{\rm o}(a^{2})=2^{n-2}\) and \(\pi_{1}^{2^{m}}=1\), that is \[r^{2^{m}}\equiv 1(\mbox{\rm mod }2^{n-2}). \tag{2}\] (2) \((\langle a^{2}\rangle\rtimes\langle c\rangle).\langle b\rangle\), where \(c^{b}=a^{2s}c\). Set \(\pi_{2}\in\mbox{\rm Aut}\,(\langle a^{2}\rangle\rtimes\langle c\rangle)\): \(a^{2}\to a^{-2}\) and \(c\to a^{2s}c\). This extension is valid if and only if the following three equalities hold: (i) \(\pi_{2}\) preserves \((a^{2})^{c}=a^{2r}\), as desired. (ii) \({\rm o}(\pi_{2}(c))=2^{m}\): \[(a^{2s}c)^{2^{m}}=c^{2^{m}}(a^{2s})^{c^{2^{m}}}\cdots(a^{2s})^{c}=c^{2^{m}}a^{2s\sum_{l=1}^{2^{m}}r^{l}}=1,\] that is \[s\sum_{l=1}^{2^{m}}r^{l}\equiv 0({\rm mod}\ 2^{n-2}). \tag{3}\] (iii) \(\pi_{2}^{2}={\rm Inn}(b^{2})\): Since \(b^{2}\in Z(X)\), we get \(c=c^{b^{2}}=(a^{2s}c)^{b}=a^{-2s}a^{2s}c\), as desired. (3) \(((\langle a^{2}\rangle\rtimes\langle c\rangle).\langle b\rangle).\langle a\rangle\), where \(c^{a}=a^{2t}b^{u}c^{v}\) and \(u\in\{0,1\}\). Set \(\pi_{3}\in{\rm Aut}\,((\langle a^{2}\rangle\rtimes\langle c\rangle).\langle b\rangle)\): \(a^{2}\to a^{2}\), \(b\to ba^{2}\) and \(c\to a^{2t}b^{u}c^{v}\). We divide the proof into two cases according to \(u\). _Case 1: \(u=0\)._ In this case, we get \(c^{a}=a^{2t}c^{v}\) and \(\pi_{3}(c)=a^{2t}c^{v}\). 
(i) \(\pi_{3}\) preserves \((a^{2})^{c}=a^{2r}\): \[r^{v-1}\equiv 1({\rm mod}\ 2^{n-2}). \tag{4}\] (ii) \(\pi_{3}\) preserves \(c^{b}=a^{2s}c\): \[(c^{b})^{a} = (c^{a})^{ba^{2}}=(a^{2t}c^{v})^{ba^{2}}=(a^{-2t}(a^{2s}c)^{v})^{a^{2}}\] \[= (a^{-2t}c^{v}a^{2s\sum_{l=1}^{v}r^{l}})^{a^{2}}=a^{-2t}(ca^{2-2r})^{v}a^{2s\sum_{l=1}^{v}r^{l}}\] \[= a^{-2t}c^{v}a^{(2-2r)\sum_{l=0}^{v-1}r^{l}}a^{2s\sum_{l=1}^{v}r^{l}}\] \[= c^{v}a^{-2tr^{v}+2(1-r^{v})+2s\sum_{l=1}^{v}r^{l}},\] \[(a^{2s}c)^{a} = a^{2(s+t)}c^{v}=c^{v}a^{2(s+t)r^{v}},\] that is \[(s+2t)r\equiv(1-r)+s\sum_{l=1}^{v}r^{l}({\rm mod}\ 2^{n-2}). \tag{5}\] (iii) \({\rm o}(\pi_{3}(c))=2^{m}\): \[1=(a^{2t}c^{v})^{2^{m}}=a^{2t\sum_{l=1}^{2^{m}}r^{vl}},\] that is \[t\sum_{l=1}^{2^{m}}r^{l}\equiv 0({\rm mod}\ 2^{n-2}). \tag{6}\] (iv) \(\pi_{3}^{2}={\rm Inn}(a^{2})\): \[ca^{2-2r}={\rm Inn}(a^{2})(c)=\pi_{3}^{2}(c)=(a^{2t}c^{v})^{a}=a^{2t}(a^{2t}c^{v})^{v}=c^{v^{2}}a^{2tr^{v^{2}}+2t\sum_{l=1}^{v}r^{vl}},\] that is \[v^{2}\equiv 1({\rm mod}\ 2^{m})\quad{\rm and}\quad 1-r\equiv tr+t\sum_{l=1}^{v}r^{l}({\rm mod}\ 2^{n-2}). \tag{7}\] _Case 2: \(u=1\)._ In this case, we get \(c^{a}=a^{2t}bc^{v}\) and \(\pi_{3}(c)=a^{2t}bc^{v}\). (i) \(\pi_{3}\) preserves \((a^{2})^{c}=a^{2r}\): \[r^{v-1}+1\equiv 0({\rm mod}\ 2^{n-2}). \tag{8}\] (ii) \(\pi_{3}\) preserves \(c^{b}=a^{2s}c\): \[\begin{array}{lll}(c^{b})^{a}&=&(c^{a})^{ba^{2}}=(a^{2t}bc^{v})^{ba^{2}}=(a^{-2t}b(a^{2s}c)^{v})^{a^{2}}\\ &=&a^{-2t}a^{-4}b(a^{2s}ca^{2-2r})^{v}\\ &=&a^{-2t-4}b(ca^{2sr+2-2r})^{v}\\ &=&a^{-2t-4-2(sr+1-r)r^{-v}\sum_{l=0}^{v-1}r^{l}}bc^{v}\\ &=&a^{-2t-4+2(sr+1-r)r^{-1}\sum_{l=0}^{v-1}r^{l}}bc^{v},\\ (a^{2s}c)^{a}&=&a^{2(s+t)}bc^{v},\end{array}\] that is \[(sr+1-r)\sum_{l=0}^{v-1}r^{l}\equiv(s+2t+1)r({\rm mod}\ 2^{n-1}). \tag{9}\] (iii) \({\rm o}(\pi_{3}(c))=2^{m}\): \[1=(a^{2t}bc^{v})^{2^{m}}=(a^{2t}bc^{v}ba^{-2t}c^{v})^{2^{m-1}}=a^{2r^{2}(t(1-r^{-1})+s\sum_{l=0}^{v-1}r^{l})\sum_{l=0}^{2^{m-1}-1}r^{2l}},\] that is \[(t(1-r^{-1})+s\sum_{l=0}^{v-1}r^{l})\sum_{l=0}^{2^{m-1}-1}r^{2l}\equiv 0({\rm mod}\ 2^{n-2}). \tag{10}\] (iv) \(\pi_{3}^{2}={\rm Inn}(a^{2})\): \[ca^{2-2r}={\rm Inn}(a^{2})(c)=\pi_{3}^{2}(c)=(a^{2t}bc^{v})^{a}=a^{2t-2}b(a^{2t}bc^{v})^{v}=ca^{-2r+2r^{2}[t(1-r^{-1})+s\frac{r^{v}-1}{r-1}]\frac{r^{v-1}-1}{r^{2}-1}}a_{0}^{i},\] where \(i=\frac{v+1}{2}\) if \(G\) is a generalized quaternion group, and \(i=0\) if \(G\) is not a generalized quaternion group. Hence \[r^{2}[t(1-r^{-1})+s\frac{r^{v}-1}{r-1}]\frac{r^{v-1}-1}{r^{2}-1}+2^{n-3}i\equiv 0(2^{n-2}). \tag{11}\] (4) Ensure \(\langle c\rangle_{X}=1\): If \(u=0\), then \(z^{a}=(c^{2^{m-1}})^{a}=(a^{2t}c^{v})^{2^{m-1}}=za^{2t\frac{r^{2^{m-1}}-1}{r-1}}\neq z\) or \(z^{b}=(c^{2^{m-1}})^{b}=(a^{2s}c)^{2^{m-1}}=za^{2s\frac{r^{2^{m-1}}-1}{r-1}}\neq z\), that is, \(t\frac{r^{2^{m-1}}-1}{r-1}\not\equiv 0(2^{n-2})\) or \(s\frac{r^{2^{m-1}}-1}{r-1}\not\equiv 0(2^{n-2})\). If \(u=1\), then \(z^{a}=(c^{2^{m-1}})^{a}=(a^{2t}bc^{v})^{2^{m-1}}=za^{2r^{2}[t(1-r^{-1})+s\frac{r^{v}-1}{r-1}]\frac{r^{2^{m-1}}-1}{r^{2}-1}}\neq z\) or \(z^{b}=za^{2s\frac{r^{2^{m-1}}-1}{r-1}}\neq z\), that is, \([t(1-r^{-1})+s\frac{r^{v}-1}{r-1}]\frac{r^{2^{m-1}}-1}{r^{2}-1}\not\equiv 0\) or \(s\frac{r^{2^{m-1}}-1}{r-1}\not\equiv 0\). \(\Box\) ### \(|G_{X}|=2\) **Lemma 6.3**: _Suppose that \(n\geq 5\) and \(|G_{X}|=2\). Then \(X=KC=((\langle a^{2},b\rangle\!\rtimes\!\langle c^{2}\rangle).\langle a\rangle).\langle c\rangle\), where \(K=G\langle c^{2}\rangle\). 
Moreover, \(K^{\prime}=\langle a^{2}\rangle\times\langle c_{1}^{2}\rangle\), \(G_{K}=\langle a^{2},b\rangle\) and \(\langle a^{2}\rangle\rtimes\langle c_{1}\rangle\lhd X\), where \(c_{1}:=c^{2}\)._ **Proof** Suppose that \(G_{X}=\langle a_{0}\rangle\). Consider the faithful permutation representation of \(X\) on \([X:G]\). If \(M,\,M^{c}\leq G\), then \(|M|\leq 4\), and so \(Gc^{-1}G\) contains at least \(\frac{|G|}{4}=2^{n-2}\) cosets of \(G\); hence \(1+2^{n-2}\leq|[X:G]|=2^{m}\). Note that \(m=n-1\) as \(n-2<m<n\). Then \(o(a)=o(c)\). Since \(X/G_{X}=X/\langle a_{0}\rangle=(G/\langle a_{0}\rangle)(C\langle a_{0}\rangle/\langle a_{0}\rangle)\) and \(|G/\langle a_{0}\rangle|=|C\langle a_{0}\rangle/\langle a_{0}\rangle|\), the core of \(C\langle a_{0}\rangle/\langle a_{0}\rangle\) in \(X/\langle a_{0}\rangle\) is \(\langle z\rangle\langle a_{0}\rangle/\langle a_{0}\rangle\). Set \(\overline{X}=X/\langle a_{0},z\rangle=\overline{GC}\), where \(\overline{G}\cong D_{2^{n-1}}\) and \(\overline{C}\) is core-free. Let \(\overline{H}=\overline{G}_{\overline{X}}\), where \(\langle a_{0},z\rangle\leq H\), and note that \(H\lhd X\). Observe that \(\overline{G}_{\overline{X}}\) is either dihedral or cyclic. Then we have the following two cases: _Case 1: \(\overline{G}_{\overline{X}}\) is dihedral._ In this case, \(\overline{G}_{\overline{X}}\) is either \(\overline{G}\) or \(\langle\overline{a}^{2},\overline{b}\rangle\). Then \(H=\langle a,b\rangle\rtimes\langle z\rangle\) or \(H=\langle a^{2},b\rangle\rtimes\langle z\rangle\). Note that \(\langle a^{4}\rangle\leq H^{\prime}<G\) and \(H^{\prime}\operatorname{char}H\lhd X\), and thus \(\langle a^{4}\rangle\leq H^{\prime}\leq G_{X}\lhd X\). Since \(G_{X}=\langle a_{0}\rangle\), we have \(a^{4}=a_{0}\), and so \(n=4\), contradicting \(n\geq 5\). _Case 2: \(\overline{G}_{\overline{X}}\) is cyclic._ Assume that \(\overline{G}_{\overline{X}}=\langle\overline{a}^{i}\rangle\) is cyclic. By Lemma 5.2, we get \(\overline{G}_{\overline{X}}=\langle\overline{a}_{1}\rangle\), and therefore \(H:=\langle a_{1}\rangle\rtimes\langle z\rangle\lhd X\). Observe that \(H\cong D_{8}\) or \(H\cong\mathbb{Z}_{4}\times\mathbb{Z}_{2}\). If \(H\cong D_{8}\), then \(\langle a_{1}\rangle\operatorname{char}H\lhd X\), which implies \(\langle a_{1}\rangle\leq G_{X}\), a contradiction. Then \(H\cong\mathbb{Z}_{4}\times\mathbb{Z}_{2}\). Note that \(X/C_{X}(\langle a_{0}\rangle\times\langle z\rangle)\cong\mathbb{Z}_{2}\) and \(c\in C_{X}(\langle a_{0}\rangle\times\langle z\rangle)\). By Lemma 5.3, \(\langle a\rangle\langle c\rangle\) is not a group, and hence \(a\not\in C_{X}(\langle a_{0}\rangle\times\langle z\rangle)\). Set \(a_{1}^{c}=a_{1}^{i}z\) and \(z^{a}=a_{0}z\), where \(i\in\{1,-1\}\). Note that \(\langle a^{2},c^{2}\rangle\leq C_{X}(H)\) as \(a_{1}^{c^{2}}=a_{1}^{i^{2}}=a_{1}\) and \(z^{a^{2}}=z\). Since \([a_{1},b]\neq 1\) and \([a,z]\neq 1\), we have \(G/(C_{X}(H)\cap G)\cong\mathbb{Z}_{2}\times\mathbb{Z}_{2}\). If \(X=GC_{X}(H)\), then \(\langle a_{1}\rangle\lhd X\), a contradiction. Thus \(GC_{X}(H)<X\) and \(\frac{|X|}{|C_{X}(H)|}\geq 2^{3}\). Note that \(X/C_{X}(H)\lesssim\operatorname{Aut}(H)\cong D_{8}\). Hence \(C_{X}(H)=\langle a^{2}\rangle\langle c^{2}\rangle\lhd X\) and \(X/C_{X}(H)\cong D_{8}\). Let \(K:=GC_{X}(H)=G\langle c^{2}\rangle\). Note that \(a_{1}\in G_{K}\); by Theorem 1.3, \(G_{K}=G\) or \(G_{K}=\langle a^{2},b\rangle\) for some \(b\in G\setminus\langle a\rangle\). 
If \(G_{K}=G\), then \(\langle a^{2}\rangle\leq K^{\prime}\leq G\), and so \(K^{\prime}\lhd X\), and hence \(\langle a^{2}\rangle\leq G_{X}\), a contradiction. Thus \(G_{K}=\langle a^{2},b\rangle\) and \(X=((\langle a^{2},b\rangle\rtimes\langle c^{2}\rangle).\langle a\rangle)\langle c\rangle\). Note that \(\langle a^{2}\rangle\lhd K\) as \(\langle a^{2}\rangle\operatorname{char}G_{K}\lhd K\). Obviously, \(\langle a^{2}\rangle\lhd C_{X}(H)\), and hence \(C_{X}(H)^{\prime}\leq\langle a^{2}\rangle\). Note that \(C_{X}(H)^{\prime}\lhd X\) as \(C_{X}(H)^{\prime}\operatorname{char}C_{X}(H)\lhd X\). Then \(C_{X}(H)^{\prime}\leq\langle a_{0}\rangle\). Thus \([a^{2},c^{4}]=1\). Since \(K/(\langle a^{2}\rangle\rtimes\langle c^{2}\rangle)\cong\mathbb{Z}_{2}^{2}\) and \(G<K\), this means that \(K^{\prime}\leq\Phi(K)\leq\langle a^{2}\rangle\langle c^{2}\rangle\), and so we set \(K^{\prime}=\langle a^{2}\rangle\times\langle c^{4j}\rangle\) for some integer \(j\). Since \(G_{X}=\langle a_{0}\rangle\) and \(\Omega_{o(c^{4j})}(K^{\prime})\operatorname{char}K\lhd X\), we get \(\Omega_{o(c^{4j})}(K^{\prime})=\langle a_{0}\rangle\), and so \(K^{\prime}=\langle a^{2}\rangle\times\langle c^{4}\rangle\). \(\Box\) **Lemma 6.4**: _Suppose that \(|G_{X}|=2\). Then_ \[X=\langle a,b,c|R,(a^{2})^{c^{2}}=a^{2},(c^{2})^{a}=a^{2s}c^{-2},(c^{2})^{b}=a^{2u}c^{2},a^{c}=bc^{2y}\rangle,\] _where \(sy\equiv 1+i2^{n-3}(\operatorname{mod}\,2^{n-2})\) and \(yu\equiv-1(\operatorname{mod}\,2^{n-3})\), with \(i=1\) if \(G\) is a generalized quaternion group and \(i=0\) if \(G\) is either a dihedral or a semidihedral group._ **Proof** Since \(X/(\langle a^{2}\rangle\rtimes\langle c^{2}\rangle)=\langle\overline{a},\overline{b}\rangle\langle\overline{c}\rangle\cong D_{8}\), we can choose \(\overline{b}\) such that, in this quotient, \(\overline{a}^{\overline{c}}=\overline{b}\) and \(\overline{b}^{\overline{c}}=\overline{a}.\) Set \(c_{1}:=c^{2}\) and \(a_{1}:=a^{2}\). Noting \(\langle a^{2},b\rangle\lhd\langle a,b,c_{1}\rangle\), we can set \[a_{1}^{c_{1}}=a_{1}^{r},c_{1}^{a}=a_{1}^{s}c_{1}^{t},c_{1}^{b}=a_{1}^{u}c_{1}\quad\text{and}\quad a^{c}=bc_{1}^{y}.\] Then one can check \(b^{c}=a^{1-2sr}c_{1}^{1-t-y}\). Set \(H:=\langle a_{1}\rangle\rtimes\langle c_{1}\rangle\). Then \(H^{\prime}\leq\langle a_{1}\rangle\). Since \(H^{\prime}\operatorname{char}H\lhd X\) and \(|G_{X}|=2\), we get \(H^{\prime}\leq\langle a_{0}\rangle\), which implies \(c_{1}^{2}\in C_{X}(a_{1})\). If \(y\equiv 0(\operatorname{mod}\,2^{n-1})\), then \(\operatorname{o}(a)=\operatorname{o}(a^{c})=\operatorname{o}(b)\) is either \(2\) or \(4\), which implies \(|G|=4\) or \(8\), a contradiction. Therefore, \(y\not\equiv 0(\operatorname{mod}\,2^{n-1})\). We now determine the parameters \(r,s,t,u\) and \(y\) by analysing the following extensions. (1) \(\langle a_{1}\rangle\rtimes\langle c_{1}\rangle\), where \(a_{1}^{c_{1}}=a_{1}^{r}\). Set \(\pi_{1}\in\operatorname{Aut}\left(\langle a^{2}\rangle\right)\) such that \(\pi_{1}(a^{2})=a^{2r}\). As mentioned before, this extension is valid if and only if \(\operatorname{o}(\pi_{1}(a^{2}))=\operatorname{o}(a^{2})=2^{n-2}\) and \(\pi_{1}^{2}=1\), that is, \(r\) is either \(1\) or \(1+2^{n-3}\). (2) \((\langle a_{1}\rangle\rtimes\langle c_{1}\rangle).\langle b\rangle\), where \(c_{1}^{b}=a_{1}^{u}c_{1}\). Set \(\pi_{2}\in\operatorname{Aut}\left(\langle a_{1}\rangle\rtimes\langle c_{1}\rangle\right)\): \(a_{1}\to a_{1}^{-1}\) and \(c_{1}\to a_{1}^{u}c_{1}\). 
Note that \(b^{2}\in\langle a_{0}\rangle\leq Z(X)\); thus one can check that \(\pi_{2}\) preserves \(a_{1}^{c_{1}}=a_{1}^{r}\), that \(\operatorname{o}(\pi_{2}(c_{1}))=2^{n-2}\) and that \(\pi_{2}^{2}=\operatorname{Inn}(b^{2})\). (3) \(((\langle a_{1}\rangle\rtimes\langle c_{1}\rangle).\langle b\rangle).\langle a\rangle\), where \(c_{1}^{a}=a_{1}^{s}c_{1}^{t}\). Set \(\pi_{3}\in\operatorname{Aut}\left((\langle a_{1}\rangle\rtimes\langle c_{1}\rangle).\langle b\rangle\right)\): \(a_{1}\to a_{1}\), \(c_{1}\to a_{1}^{s}c_{1}^{t}\) and \(b\to ba_{1}\) or \(ba_{1}a_{0}\). (i) \(\pi_{3}\) preserves \((a^{2})^{c}=a^{2r}\), as desired. (ii) \(\pi_{3}\) preserves \(c_{1}^{b}=a_{1}^{u}c_{1}\), that is, \((a_{1}^{s}c_{1}^{t})^{ba}=a_{1}^{u}c_{1}\): \[a_{1}^{u}c_{1} = (a_{1}^{s}c_{1}^{t})^{ba} = a_{1}^{-s}c_{1}^{t^{2}}a_{1}^{(u+s)\sum_{l=1}^{t}r^{l}},\] that is \[t^{2}\equiv 1(\operatorname{mod}\,2^{n-2})\quad\text{and}\quad(u+s)(r+1)\frac{t-1}{2}\equiv 0(\operatorname{mod}\,2^{n-2}). \tag{12}\] (iii) \(\operatorname{o}(\pi_{3}(c_{1}))=2^{n-2}\): \[1=(a_{1}^{s}c_{1}^{t})^{2^{n-2}}=a_{1}^{s\sum_{l=1}^{2^{n-2}}r^{l}},\] that is \[s\sum_{l=1}^{2^{n-2}}r^{l}\equiv 0(\mbox{mod }2^{n-2}). \tag{13}\] (iv) \(\pi_{3}^{2}=\mbox{Inn}(a_{1})\): Recall that \(\mbox{Inn}(a_{1})(c_{1})=c_{1}a_{1}^{1-r}\). \[c_{1}a_{1}^{1-r}=\mbox{Inn}(a_{1})(c_{1})=\pi_{3}^{2}(c_{1})=(a_{1}^{s}c_{1}^{t})^{a}=c_{1}^{t^{2}}a_{1}^{sr+s\sum_{l=1}^{t}r^{l}},\] that is \[t^{2}\equiv 1(\mbox{mod }2^{n-2})\quad\mbox{and}\quad 1-r\equiv sr+s\sum_{l=1}^{t}r^{l}(\mbox{mod }2^{n-2}). \tag{14}\] (4) \(((\langle a^{2},b\rangle\rtimes\langle c^{2}\rangle).\langle a\rangle).\langle c\rangle\), where \(a^{c}=bc_{1}^{y}\) and \(b^{c}=a^{1-2sr}c_{1}^{1-t-y}\). Set \(\pi_{4}\in\mbox{Aut}\,(\langle a,b,c_{1}\rangle)\): \(a\to bc_{1}^{y}\), \(b\to a^{1-2sr}c_{1}^{1-t-y}\) and \(c_{1}\to c_{1}\). Let \(i=1\) if \(G\) is a generalized quaternion group and \(i=0\) if \(G\) is either a dihedral or a semidihedral group. We need to carry out the following five steps: (i) \(\mbox{o}(\pi_{4}(a))=2^{n-1}\): Since \(a_{0}\in Z(X)\), we only show \((a^{c})^{2^{n-2}}=a_{0}\): \[(bc^{2y})^{2^{n-2}}=a_{1}^{u2^{n-3}\sum_{l=1}^{y}r^{l}}=a_{1}^{2^{n-3}},\] that is \[u\sum_{l=1}^{y}r^{l}\equiv 1(\mbox{mod }2), \tag{15}\] which implies that both \(u\) and \(y\) are odd. (ii) \(\pi_{4}\) preserves \(a_{1}^{c_{1}}=a_{1}^{r}\): \[(a_{1}^{c_{1}})^{c}=c^{2y}a^{u\sum_{l=1}^{y}r^{l}+i2^{n-3}}\quad\mbox{and}\quad(a_{1}^{r})^{c}=c^{2yr}a^{u\sum_{l=1}^{y}r^{l}+i2^{n-3}},\] that is \[2y(r-1)\equiv 0(\mbox{mod }2^{n-2}). \tag{16}\] (iii) \(\pi_{4}\) preserves \(c_{1}^{a}=a_{1}^{s}c_{1}^{t}\): \[(c_{1}^{a})^{c}=a_{1}^{ur}c_{1}\quad\mbox{and}\quad(a_{1}^{s}c_{1}^{t})^{c}=a_{1}^{s(ur\sum_{l=1}^{y}r^{l}+i2^{n-3})}c_{1}^{t+2ys},\] that is \[ur\equiv s(ur\sum_{l=1}^{y}r^{l}+i2^{n-3})(\mbox{mod }2^{n-2})\quad\mbox{and}\quad 1\equiv t+2ys(\mbox{mod }2^{n-2}). \tag{17}\] (iv) \(\pi_{4}\) preserves \(c_{1}^{b}=a_{1}^{u}c_{1}\): \[(c_{1}^{b})^{c}=c_{1}^{t}a_{1}^{sr+2(sr-1)(1-r)}\quad\text{and}\quad(a_{1}^{u}c_{1})^{c}=c_{1}^{2yu}a_{1}^{u(ur\sum_{l=1}^{y}r^{l}+i2^{n-3})}c_{1},\] that is, \[t\equiv 1+2yu(\text{mod }2^{n-2})\quad\text{and}\quad sr\equiv u^{2}\sum_{l=1}^{y}r^{l}+i2^{n-3}(\text{mod }2^{n-2}). \tag{18}\] (v) \(\pi_{4}^{2}=\text{Inn}(c_{1})\): Recall \(\text{Inn}(c_{1})(a)=a^{1-2sr}c_{1}^{1-t}\), \(\text{Inn}(c_{1})(a_{1})=a_{1}^{r}\) and \(\text{Inn}(c_{1})(b)=a_{1}^{ur}b\). 
\[a^{1-2sr}c^{2-2t}=\text{Inn}(c_{1})(a)=\pi_{4}^{2}(a)=b^{c}c^{2y}=a^{1-2sr}c^{2-2t-2y+2y},\] as desired; \[a_{1}^{r}=\text{Inn}(c_{1})(a^{2})=\pi_{4}^{2}(a^{2})=(c_{1}^{2y}a_{1}^{ur\sum_{l=1}^{y}r^{l}+i2^{n-3}})^{c}=c_{1}^{2y+2yur\sum_{l=1}^{y}r^{l}}a_{1}^{(u\sum_{l=1}^{y}r^{l})^{2}},\] that is \[2y+2yur\sum_{l=1}^{y}r^{l}\equiv 0(\text{mod }2^{n-2})\quad\text{and}\quad r\equiv(u\sum_{l=1}^{y}r^{l})^{2}(\text{mod }2^{n-2}); \tag{19}\] and \[a^{2ur}b=\text{Inn}(c_{1})(b)=\pi_{4}^{2}(b)=(a^{1-2sr}c_{1}^{1-t-y})^{c}=a_{1}^{-su\sum_{l=1}^{y}r^{l}+i2^{n-3}}c_{1}^{-2sy}bc_{1}^{1-t},\] as desired. Now we are ready to determine the parameters by summarizing Eqs. (12)-(19). By Eqs. (17) and (18), we get \(2(u+s)\equiv 0(\text{mod }2^{n-2})\) and \(s^{2}r\equiv u^{2}(\text{mod }2^{n-2})\). Noting that \(s^{2}\equiv u^{2}(\text{mod }2^{n-2})\) and \(u\) is odd, we get that \(s\) is odd and \(r=1\). Inserting \(r=1\) into Eqs. (12)-(19), we get \(t=-1\) by Eq. (14). Then we get \(sy\equiv 1+i2^{n-3}(\text{mod }2^{n-2})\) by Eq. (17) and \(yu\equiv-1(\text{mod }2^{n-3})\) by Eq. (18). \(\Box\)
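As a computational cross-check, separate from the proofs above, the presentations of \(D_{2^{n}}\), \(Q_{2^{n}}\) and \(SD_{2^{n}}\) recalled at the start of Section 5 can be verified for the small case \(n=5\). The sketch below assumes SymPy's coset enumeration for finitely presented groups and its permutation-group lower central series; the expected outputs in the comments apply to this small case only.

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup
from sympy.combinatorics.named_groups import DihedralGroup

# Presentations of D_{2^n}, Q_{2^n}, SD_{2^n} for n = 5, so each should
# define a group of order 2^5 = 32 (here a has order 16).
F, a, b = free_group("a, b")
presentations = {
    "D_32":  [a**16, b**2, b**-1 * a * b * a],          # a^b = a^{-1}
    "Q_32":  [a**16, b**2 * a**-8, b**-1 * a * b * a],  # b^2 = a^{2^{n-2}}
    "SD_32": [a**16, b**2, b**-1 * a * b * a**-7],      # a^b = a^{-1+2^{n-2}}
}
for name, rels in presentations.items():
    assert FpGroup(F, rels).order() == 32, name

# Maximal class means nilpotency class n - 1; for the dihedral group of
# order 32 the lower central series should have orders 32, 8, 4, 2, 1.
D = DihedralGroup(16)  # SymPy's DihedralGroup(k) has order 2k
series = D.lower_central_series()
print([H.order() for H in series])           # expected: [32, 8, 4, 2, 1]
print("nilpotency class:", len(series) - 1)  # expected: 4 = n - 1
```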
2309.13655
Adaptation of the super resolution SOTA for Art Restoration in camera capture images
Preserving cultural heritage is of paramount importance. In the domain of art restoration, developing a computer vision model capable of effectively restoring deteriorated images of art pieces was difficult, but strong computer vision state-of-the-art models are now available. Traditional restoration methods are often time-consuming and require extensive expertise. The aim of this work is to design an automated solution based on computer vision models that can enhance and reconstruct degraded artworks, improving their visual quality while preserving their original characteristics and artifacts. The model should handle a diverse range of deterioration types, including but not limited to noise, blur, scratches, fading, and other common forms of degradation. We adapt the current state of the art for image super-resolution, based on the Diffusion Model (DM), and fine-tune it for image art restoration. Our results show that, instead of fine-tuning multiple models for different kinds of degradation, fine-tuning one super-resolution model is sufficient. We train it on multiple datasets to make it robust. Code: https://github.com/Naagar/art_restoration_DM
Sandeep Nagar, Abhinaba Bala, Sai Amrit Patnaik
2023-09-24T14:47:29Z
http://arxiv.org/abs/2309.13655v3
# Adaptation of the super resolution SOTA for Art Restoration in camera capture images ###### Abstract Preserving cultural heritage is of paramount importance. In the domain of art restoration, developing a computer vision model capable of effectively restoring deteriorated images of art pieces was difficult, but strong computer vision state-of-the-art models are now available. Traditional restoration methods are often time-consuming and require extensive expertise. The aim of this work is to design an automated solution based on computer vision models that can enhance and reconstruct degraded artworks, improving their visual quality while preserving their original characteristics and artifacts. The model should handle a diverse range of deterioration types, including but not limited to noise, blur, scratches, fading, and other common forms of degradation. We adapt the current state of the art for image super-resolution, based on the Diffusion Model (DM), and fine-tune it for image art restoration. Our results show that, instead of fine-tuning multiple models for different kinds of degradation, fine-tuning one super-resolution model is sufficient. We train it on multiple datasets to make it robust. Code: https://github.com/Naagar/art_restoration_DM Art restoration, Computer vision, Image restoration, Super-resolution, Diffusion Models. ## I Introduction Preserving cultural heritage is of paramount importance. While history has preserved countless masterpieces, the ravages of time have left many artworks faded, damaged, or on the brink of disappearance. Traditional restoration methods are often time-consuming and require extensive expertise. By resurrecting damaged or obscured artworks, we breathe life back into these forgotten stories, reviving the narratives that have shaped our collective consciousness. The domain of image art restoration (IR) holds significant importance within the low-level vision discipline, aiming to enhance the perceptual quality of images that have suffered from a wide array of degradations. This intricate task operates as a versatile and interpretable solution to a range of inverse problems, utilizing readily available denoising techniques as implicit image priors, as demonstrated by [1]. Within the realm of low-level vision research, IR has persistently remained a focal point, contributing substantially to the enhancement of image aesthetics, as evidenced by the work of [2]. In the context of deep learning advancements, a plethora of IR methodologies have harnessed the power of datasets tailored for diverse IR challenges, such as super-resolution (DIV2K, Set5, Set14), rain removal (Rain800, Rain200, Raindrop, DID-MDN), and motion deblurring (REDS, GoPro) [2]. Notably, the emergence of diffusion models (DM) has ushered in a new paradigm within generative models, catalyzing remarkable breakthroughs across various visual generation tasks. The diffusion model, as demonstrated by [3], excels through a sequential application of denoising networks to replicate the image synthesis process. Capitalizing on the exceptional generative prowess of diffusion models, we employ them as a benchmark for image restoration. Traditional supervised learning approaches hinge upon extensive collections of distorted/clean image pairs, while zero-shot methods predominantly rely on known degradation modes. However, these methodologies encounter limitations in real-world scenarios characterized by diverse and unknown distortions. 
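To make the "sequential application of denoising networks" above concrete, the following is a minimal, generic DDPM-style reverse step; the noise-prediction network `eps_model` and the schedule tensors are placeholders, and this is a textbook sketch rather than the StableSR implementation.

```python
import torch

def ddpm_reverse_step(x_t, t, eps_model, alphas, alphas_bar):
    # One denoising step x_t -> x_{t-1}; eps_model predicts the injected
    # noise, and alphas/alphas_bar are the usual schedule terms indexed by t.
    eps = eps_model(x_t, t)
    alpha_t, abar_t = alphas[t], alphas_bar[t]
    mean = (x_t - (1 - alpha_t) / torch.sqrt(1 - abar_t) * eps) / torch.sqrt(alpha_t)
    if t == 0:
        return mean                        # final step: no added noise
    sigma_t = torch.sqrt(1 - alpha_t)      # a common variance choice
    return mean + sigma_t * torch.randn_like(x_t)
```

Restoration-oriented variants condition each such step on the degraded observation instead of sampling unconditionally.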
To address this concern, some researchers have extended diffusion models to accommodate blind/real-world image restoration scenarios by integrating real-world distortion simulations and kernel-based techniques. This expansion seeks to bridge the gap between diffusion models and the complexity of real-world image restoration challenges, offering a potential avenue for more effective applications in practical settings. About the challenge: This work is part of the _Competitions @ ICETCI 2023_ link. Motivation: By resurrecting damaged or obscured artworks, we breathe life back into these forgotten stories, reviving the narratives that have shaped our collective consciousness. The participants are required to develop an innovative model that can automate the restoration process and ensure the longevity of art pieces for future generations. Objective: The objective of the challenge is to design and implement an advanced computer vision model tailored for the restoration of deteriorated art images. Participants are encouraged to explore various techniques, architectures, and training methodologies to develop a robust and efficient solution. ## II Related Work Traditional image restoration methods: Diffusion-based image restoration techniques rely on partial differential and variational methods, grounded in geometric image models. These methods employ edge information in the damaged area to guide the diffusion direction, propagating known information to the target region. While effective for minor image damage, they may yield fuzzy results when handling extensive damage or complex textures [4]. Deep learning based methods: Convolutional neural networks (CNNs) possess remarkable capabilities for learning and representing image features, enabling effective prediction of missing image content. The image restoration process primarily relies on supervised learning methods [5]. In contrast to CNNs, which face challenges in supervised image restoration learning, autoencoders (AEs) are artificial neural networks proficient at unsupervised learning, effectively learning and expressing input data [6]. AEs try to regenerate the image from the latent vector and often fail to remove unstructured noise/distortion. The GAN-based image restoration method differs from the convolutional autoencoder based method [7]: the GAN-based method generates the restored image directly through the generator, whose input can be a random noise vector, whereas the former takes the whole damaged image and generates the repaired region from it. While GANs excel in generating high-quality images, training them can be challenging, primarily due to the complexity of the loss function employed in the training process. Normalizing flows (NFs) [8], another branch of deep learning and generative models, are also used for image restoration. NFs are based on invertible CNN layers [9, 10], but they are slow and cost more computation for high-quality input images as compared to CNNs, GANs, and VAEs. NFs work better for deblurring due to their invertibility and tractable nature [11]. Image restoration (IR) represents an essential and demanding endeavor within the realm of low-level vision. Its objective is to enhance the perceptual quality of images afflicted by diverse degradation types. Notably, the diffusion model has made remarkable strides in the visual generation of AIGC, prompting a natural inquiry: "Can the diffusion model enhance image restoration?" [2]. 
Motivated by this question, we use Diffusion Model (DM) based super-resolution to solve the problem of art restoration in images. ## III Method Description In the realm of artistic representation, the integrity of artworks can be compromised by a multitude of factors such as motion-induced disruptions, various forms of noise, application of filters, and even the intrusion of water. This degradation or distortion also extends to the process of encapsulating art within images, further entailing inherent discrepancies within the representation. Consequently, the task of restoring genuine artistic essence and preserving the authentic artifacts presents a formidable challenge due to the reliance on these images as the solitary source of information regarding the artwork. The endeavor of rejuvenating impaired artifacts is intricate, marked by its irrevocable nature. This compels the exploration of computer vision models as a recourse, harnessing their capacity to leverage embedded attributes within the images. Among the array of techniques, diffusion models emerge as the paramount state-of-the-art method for both image generation and restoration. In particular, the application of image super-resolution models proves to be salient in the restoration process. These models, in their pursuit of enhanced resolution, inherently address a broad spectrum of prevailing distortions and degradations present in the captured images. It is imperative to acknowledge that the acquisition of images itself is predisposed to quality deterioration, often stemming from the intricacies of the capturing apparatus. This encompasses the introduction of supplementary noise and filters, thereby exacerbating the challenges intrinsic to preserving the fidelity of art images. We therefore propose to use the super-resolution SOTA (StableSR) [12] to restore the art, and we fine-tune the StableSR model for art restoration and reconstruction. Furthermore, to verify and compare the StableSR model against other existing super-resolution SOTA, we also test the sample images using the ResShift [13] super-resolution model (see Fig. 4). ## IV Experimental Results Within this section, we present the outcomes of our experimentation and conduct a comparative analysis between the ground-truth images and their restored counterparts from StableSR (see Figs. 1 and 2). The ensuing paragraphs elaborate on the results obtained through this dual-model approach, shedding light on the performance of StableSR in terms of image restoration. ## V Conclusion and Future work In conclusion, this endeavor has spotlighted the efficacy of contemporary diffusion models in the realm of image restoration (IR), harnessing their robust generative potential to amplify both structural and textural revitalization. The initial phase of this work entailed leveraging pre-trained weights to establish a foundational baseline, followed by the progressive evolution of the diffusion model for IR applications, with a specific focus on the adaptation of StableSR through a systematic fine-tuning process. This research has further delved into the comprehensive categorization of ten distinct distortions, shedding light on their nuances through the lens of training strategies and degradation scenarios. Through meticulous analysis, we undertook a comparative assessment of existing works, encompassing both super-resolution and IR domains. Each approach was dissected with precision, affording an intricate taxonomy that delineated their respective strengths and weaknesses. 
The evaluation process involved an overview of prevalent datasets and evaluation metrics within the diffusion model-based IR landscape. This culminated in a comprehensive comparison of two cutting-edge open-source state-of-the-art (SOTA) methodologies, evaluated through a fusion of distortion and perceptual metrics across three quintessential tasks: image super-resolution, deblurring, and inpainting. Remarkably, our observations highlighted the effectiveness of training diffusion models on specialized datasets tailored to distinct degradation types. This strategy yielded commendable outcomes, particularly in scenarios mirroring the noise or degradation patterns akin to the training data. As we steer toward future prospects, addressing the challenges inherent in diffusion model-based IR entails exploring diverse baseline datasets and refining training strategies. By doing so, the realm of diffusion models can be further optimized for achieving superior outcomes, marking a promising direction for future exploration. ## Acknowledgment Competitions @ ICETCI 2023.
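As a closing illustration of the recipe in Section III, a hypothetical fine-tuning loop is sketched below; the model interface (`num_timesteps`, `q_sample`, the conditioning argument) and all hyperparameters are assumptions for exposition, not the released StableSR training code.

```python
import torch
import torch.nn.functional as F

def finetune_sr_diffusion(model, loader, steps=1000, lr=1e-5):
    # Hypothetical loop: `model` is assumed to be a pretrained conditional
    # SR diffusion model; `loader` yields (degraded, clean) image pairs
    # built by applying synthetic degradations (noise, blur, scratches,
    # fading) to clean artworks.
    opt = torch.optim.AdamW(model.parameters(), lr=lr)  # small LR: fine-tuning
    for _, (degraded, clean) in zip(range(steps), loader):
        t = torch.randint(0, model.num_timesteps, (clean.size(0),))
        noise = torch.randn_like(clean)
        noisy = model.q_sample(clean, t, noise)         # forward diffusion
        pred = model(noisy, t, cond=degraded)           # condition on input
        loss = F.mse_loss(pred, noise)                  # epsilon-prediction
        opt.zero_grad(); loss.backward(); opt.step()
```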
2309.07620
Neural Field Representations of Articulated Objects for Robotic Manipulation Planning
Traditional approaches for manipulation planning rely on an explicit geometric model of the environment to formulate a given task as an optimization problem. However, inferring an accurate model from raw sensor input is a hard problem in itself, in particular for articulated objects (e.g., closets, drawers). In this paper, we propose a Neural Field Representation (NFR) of articulated objects that enables manipulation planning directly from images. Specifically, after taking a few pictures of a new articulated object, we can forward simulate its possible movements, and, therefore, use this neural model directly for planning with trajectory optimization. Additionally, this representation can be used for shape reconstruction, semantic segmentation and image rendering, which provides a strong supervision signal during training and generalization. We show that our model, which was trained only on synthetic images, is able to extract a meaningful representation for unseen objects of the same class, both in simulation and with real images. Furthermore, we demonstrate that the representation enables robotic manipulation of an articulated object in the real world directly from images.
Phillip Grote, Joaquim Ortiz-Haro, Marc Toussaint, Ozgur S. Oguz
2023-09-14T11:29:25Z
http://arxiv.org/abs/2309.07620v1
# Neural Field Representations of Articulated Objects for Robotic Manipulation Planning ###### Abstract Traditional approaches for manipulation planning rely on an explicit geometric model of the environment to formulate a given task as an optimization problem. However, inferring an accurate model from raw sensor input is a hard problem in itself, in particular for articulated objects (e.g., closets, drawers). In this paper, we propose a _Neural Field Representation_ (NFR) of articulated objects that enables manipulation planning directly from images. Specifically, after taking a few pictures of a new articulated object, we can forward simulate its possible movements, and, therefore, use this neural model directly for planning with trajectory optimization. Additionally, this representation can be used for shape reconstruction, semantic segmentation and image rendering, which provides a strong supervision signal during training and generalization. We show that our model, which was trained only on synthetic images, is able to extract a meaningful representation for unseen objects of the same class, both in simulation and with real images. Furthermore, we demonstrate that the representation enables robotic manipulation of an articulated object in the real world directly from images. **Video: [https://phgrote.github.io/nfr/](https://phgrote.github.io/nfr/)** ## I Introduction Robots could support humans with everyday chores like cleaning if they were able to reliably interact with articulated objects such as closets and drawers. Every concrete interaction with the environment (e.g., the opening of a closet) can be formalized as a constrained minimization problem. By defining the objective function in terms of manipulation features, which map the environment to numerical quantities (e.g., the position of an object), we are not limited to solving only for the robot's own movement, but are also able to optimize, for instance, the location of other objects within the environment. In order to formulate such optimization problems the robot needs a good representation of objects in the scene. In general, this representation has to be inferred from raw sensory inputs like images or point clouds. Traditional approaches represent objects explicitly, for instance as a mesh or a combination of geometric shapes (e.g., spheres, boxes, etc.). The dynamic behavior of articulated objects is modeled explicitly as well, e.g., by inferring the location of the rotational axes for revolute joints [1, 2, 3] or by estimating how the perceived object relates to a known canonical representation [4] or prototype [5]. Similar to the work of Eisner et al. [6], we investigate the use of implicit representations for articulated objects, demonstrate how such representations can be inferred from raw RGB images, and show how they can be used for manipulation planning. An implicit neural field representation can be inferred from raw sensory RGB input by minimizing the loss between rendered and observed images, thereby making the depth sensors that traditional approaches rely on dispensable. We encode this representation by a low-dimensional structured latent code. The proposed structure of the latent code allows us to manipulate the latent code in a predictable way in order to _simulate_ the whole range of motion of a perceived object. Finally, we show that this representation can be transformed to a semantic 3D keypoint representation [7] to enable category-level manipulation using existing manipulation planning frameworks [8]. 
The proposed interaction with an articulated object is depicted in Fig. 1. To summarize, our main contributions in this work are: * Framework for generating neural field representations conditioned on a structured latent code, which enables the forward simulation of possible movements * Integrated architecture to extract implicit object representations from posed images, in order to generate images, semantically labeled point clouds and keypoint predictions for arbitrary articulations * Integration of the neural representation within a sequential manipulation planning framework We evaluate our approach in multiple ways. First, we demonstrate the generative capabilities by interpolating between different representations and by generating new representations for unobserved articulations. Next, we evaluate the prediction of keypoint positions, which is essential for manipulation planning. We demonstrate in simulation as well as on a real robot that we are able to manipulate an articulated object based on the representation extracted from posed images. Finally, we show that our method is robust to out-of-distribution scenarios, i.e., it can infer good representations from real RGB images, even though our architecture was trained on synthetic images with different camera parameters. ## II Related Work ### _Implicit Representations in Robotics_ Implicit representations are gaining popularity within the robotics community. They have been used for long-horizon planning from visual inputs [9], navigation [10], pose estimation [11] and reinforcement learning [12]. Furthermore, they are capable of predicting how articulated parts move under kinematic constraints without knowing the explicit kinematic model [6]. Instead of adopting NeRF as in [10, 11, 12], we are using _Scene Representation Networks_ (SRN) [13] as an underlying implicit representation in order to encode surface distances directly. By adopting an auto-decoder approach instead of encoding observations directly [12, 9] we are robust to out-of-distribution scenarios. Instead of using a static representation for pose estimation [10, 11], this work focuses on how to manipulate an inferred representation in order to predict how the articulation of the object affects the position of keypoints. Finally, by predicting the handle position our method does not require a suction-type gripper as in [6]. ### _Implicit Representations for articulated objects_ Representing 3D objects as continuous and differentiable implicit functions is a well-established field of research [14, 15, 16, 17, 18, 19, 13, 20, 21]. This line of research typically focuses on static objects, but representing dynamic articulated objects is starting to emerge as a new direction [22, 23, 24, 25]. Mu et al. [23] propose to use an _Articulated Signed Distance Function_ (A-SDF), a learned _Signed Distance Function_ (SDF) based on the work of Park et al. [18], to represent articulated objects. In regard to the separation of shape code and articulation code, our approach is similar, but instead of using an SDF as an implicit function we are using a more general function \(\Phi\), which maps spatial coordinates to feature vectors. This allows us to formulate the reconstruction loss on images rather than point clouds. Thus we do not have to assume that point cloud data is available. Su et al. [24] extend NeRF [16] for learning a 3D representation of the human body from 2D observations. 
While they refine an initial estimation of the articulation given by an off-the-shelf estimator, we estimate the articulation directly, without the need for an additional estimator. Learning the motion constraints through interaction is addressed by [22]. Our approach does not require additional interaction with a new instance from a learned category in order to perform motion planning. The study by Tseng et al. [25] addresses articulated objects by extracting an explicit kinematic model of the perceived object by fitting a rotation axis between intersecting parts. In contrast, our approach directly generates keypoint representations for different articulations in order to perform motion planning. ### _Articulated Objects_ The manipulation of articulated objects is a well-known problem. In order to enable robotic manipulation it has been proposed to extract an explicit kinematic model from demonstration, either using fiducial markers [2, 26] or by tracking features within the observation [27, 28]. Others proposed to extract an explicit kinematic model through interactive perception [29, 30, 31, 32, 33]. Another line of research assumes knowledge of the kinematic structure of a broader category and only adjusts the parametrization to the observed instance from observed depth data [34, 4]. Instead of extracting an explicit model of the kinematic structure, we use an implicit representation. We are able to infer representations from posed RGB images and do not require depth data. Furthermore, our method does not require any interactions with the perceived object in order to construct a model of its kinematic structure. Fig. 1: Interaction with articulated objects: First, the robot observes a new object (a); The latent code \(\mathbf{z}\) is found by minimizing the image loss between the observed real images and the generated images (b); We predict keypoints \(\mathbf{k}\) by forward simulating the motion (c); Finally, all keypoints are used to formulate an optimization problem (d) and (e). ## III Background Our approach extends _Scene Representation Networks_ [13] for manipulation planning of articulated objects. We first summarize the original framework, and present our contributions and extensions in Sec. IV. We represent objects implicitly with a neural field: a function \(\Phi_{\theta}\in\mathcal{X}\), which maps 3D spatial coordinates \(\mathbf{x}\) to \(n\)-dimensional feature vectors \(\mathbf{v}\), \[\Phi_{\theta}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{n},\ \mathbf{x}\mapsto \Phi_{\theta}(\mathbf{x})=\mathbf{v}. \tag{1}\] This function is implemented with a neural network parameterized by the weight vector \(\theta\in\mathbb{R}^{l}\). Given \(\Phi_{\theta}\), we can render images using a differentiable rendering algorithm \(\Theta_{\psi}\), for any camera extrinsic \(\mathbf{E}\) and intrinsic \(\mathbf{K}\) parameters: \[\begin{split}\Theta_{\psi}:\mathcal{X}\times\mathbb{R}^{3\times 4 }\times\mathbb{R}^{3\times 3}&\rightarrow\mathbb{R}^{H\times W \times 3},\\ (\Phi,\mathbf{E},\mathbf{K})&\mapsto\Theta(\Phi, \mathbf{E},\mathbf{K})=\hat{\mathcal{I}}.\end{split} \tag{2}\] We generate images by mapping the feature vectors at all surface points to their corresponding RGB values. Surface points are obtained by querying \(\Phi_{\theta}\) repeatedly and mapping the corresponding feature vectors to step sizes along camera rays (differentiable raymarching). \(\Theta_{\psi}\) is implemented by multiple neural networks and we collect all weights in one weight vector \(\psi\).
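A minimal PyTorch sketch of the field in Eq. (1) is given below for concreteness; the layer sizes and feature dimension are illustrative assumptions, not the architecture used in [13].

```python
import torch.nn as nn

class NeuralField(nn.Module):
    # Phi_theta of Eq. (1): 3D coordinates -> n-dimensional feature vectors;
    # the renderer Theta_psi of Eq. (2) queries this field along camera rays.
    def __init__(self, n_features=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),
        )

    def forward(self, x):      # x: (..., 3) spatial coordinates
        return self.mlp(x)     # v: (..., n_features)
```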
By using a hypernetwork \(H_{\phi}\) [35] it is possible to find a \(k\)-dimensional subspace of the space of weights of the neural field \(\theta\in\mathbb{R}^{l}\), which allows us to represent objects with a \(k\)-dimensional latent code \(\mathbf{z}\), \[H_{\phi}:\mathbb{R}^{k}\rightarrow\mathbb{R}^{l},\ \mathbf{z}\mapsto H_{\phi}( \mathbf{z})=\theta\, \tag{3}\] with \(k<l\), which suffices to represent all instances of a certain object class \(\mathcal{K}\subset\mathcal{X}\) [13]. By implementing \(H_{\phi}\) as a neural network and optimizing the weight vector \(\phi\), we are learning a suitable prior of 3D surfaces. This prior is necessary to estimate a plausible 3D surface shape given a (possibly small) set of 2D images [36]. Given a set of posed images \(\mathcal{D}=\{\mathcal{C}_{j}\}_{j=1}^{N_{\text{sub}}}\) with \(\mathcal{C}_{j}:=\{(\mathcal{I}_{j,i},\mathbf{E}_{j,i},\mathbf{K}_{j,i})\}_{i= 1}^{N_{\text{sup}}}\) of different objects, we can learn to represent objects of a given class by training the latent codes and all other weights jointly using the auto-decoder framework: \[\operatorname*{arg\,min}_{\mathbf{z}_{j},\phi,\psi}\sum_{j=1}^{N_{\text{sub }}}\sum_{i=1}^{N_{\text{sup}}}||\Theta(\Phi_{H(\mathbf{z}_{j};\phi)},\mathbf{ E}_{j,i},\mathbf{K}_{j,i};\psi)-\mathcal{I}_{j,i}||_{2}^{2}. \tag{4}\] The latent code \(\mathbf{z}_{\text{new}}\) for a previously unseen instance \(\mathcal{C}:=\{(\mathcal{I}_{i},\mathbf{E}_{i},\mathbf{K}_{i})\}_{i=1}^{N_{ \text{sup}}}\) is obtained through optimization as well: \[\operatorname*{arg\,min}_{\mathbf{z}_{\text{new}}}\sum_{i=1}^{N_{\text{sup}} }\|\Theta(\Phi_{H(\mathbf{z}_{\text{new}};\phi)},\mathbf{E}_{i},\mathbf{K}_{ i};\psi)-\mathcal{I}_{i}\|_{2}^{2}. \tag{5}\] Because the latent code \(\mathbf{z}\) is not generated by encoding the observations but is found through optimization instead, this approach is referred to as an auto-decoder framework [37, 18]. Due to this additional optimization step, auto-decoding is slower than a feed-forward encoder approach. However, auto-decoding is more robust in certain out-of-distribution scenarios [36]. For instance, auto-decoding is able to infer good latent codes with low reconstruction loss even if the camera poses of the observations were not seen during training [13]. These benefits have contributed to the wide adoption of the auto-decoder approach [38, 39, 18, 40, 41, 13, 42, 43]. ## IV Neural Scene Representations for Articulated Objects For manipulation planning of articulated objects, we propose a method that can forward simulate the possible motions of a given object. With our extensions to SRNs [13], namely the structured latent code, keypoint prediction and semantic labeling, we obtain a novel architecture which enables the desired forward simulation of motion (Fig. 2). Furthermore, by forward simulating the motion of the object and predicting keypoints for arbitrary articulations we are able to perform manipulation planning. In the following sections we explain our extensions, the training of the whole model and how previously unseen objects are handled. ### _Latent code_ We define the latent code of the object instance as \[\mathbf{z}:=\begin{bmatrix}\mathbf{z}_{\text{art}}\\ \mathbf{z}_{\text{obj}}\end{bmatrix}. \tag{6}\] The latent code comprises two distinct parts: the articulation code \(\mathbf{z}_{\text{art}}\) and the object code \(\mathbf{z}_{\text{obj}}\). The articulation code encodes the articulation, and the object code encodes the shape and the appearance of the object. 
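The structured code of Eq. (6) can be sketched as follows; the object-code dimension is an arbitrary illustrative choice, and the unit-circle normalization applied to \(\mathbf{z}_{\text{art}}\) anticipates the parameterization described in the next paragraph.

```python
import torch
import torch.nn.functional as F

z_art = torch.randn(2, requires_grad=True)    # articulation code (2-D)
z_obj = torch.randn(256, requires_grad=True)  # shape/appearance code (assumed size)

def latent_code(z_art, z_obj):
    # Project z_art onto the unit circle, then stack as in Eq. (6);
    # both parts stay trainable for the auto-decoder optimization of Eq. (5).
    return torch.cat([F.normalize(z_art, dim=-1), z_obj])

z = latent_code(z_art, z_obj)  # input to the hypernetwork H of Eq. (3)
```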
Instead of mapping the whole range of a single joint to the interval \(I:=[0,1]\) we use a two-dimensional representation \(\mathbf{z}_{\text{art}}\in\mathbb{R}^{2}\). This allows us to introduce a normalization layer to transform \(\mathbf{z}_{\text{art}}\). The transformed articulation code \(\hat{\mathbf{z}}_{\text{art}}\) lies on a unit circle, and in order to avoid discontinuities one half of the unit circle represents all possible articulations, while the second half of the unit circle mirrors the first. With the proposed normalization we ensure a uniform distribution of \(\hat{\mathbf{z}}_{\text{art}}\), even though we are using a Gaussian prior on \(\mathbf{z}_{\text{art}}\). The proposed parameterization was motivated by the work of Salimans and Kingma [44] for improving the speed of convergence. ### _Forward Simulation by Latent Code Manipulation_ After training, when we see a new object instance we first optimize the complete latent code by minimizing the image reconstruction loss. Now, we can simulate the movement by modifying the articulation code, while keeping the object code constant. For each new code, we can simulate the movement by generating images and segmentation masks and by predicting keypoint positions. Finally, the information generated by simulating the movement is used for manipulation planning with trajectory optimization (Sec. V). Fig. 2: Overview: latent code \(\mathbf{z}\) is mapped via \(H\) to \(\Phi\), which is queried repeatedly in order to extract surface points through differentiable raymarching (b); Feature vectors at surface points are mapped to RGB values to generate RGB images (a) and to semantic labels (c); 3D positions of keypoints (e.g. handle) are directly obtained from \(\mathbf{z}\) (d). ### _Semantic Segmentation_ Using differentiable raymarching we are able to generate a multiset of feature vectors \(\mathcal{V}=\{\mathbf{v}_{N}^{p}\}_{p=1}^{H\times W}\) (Fig. 2b). These feature vectors can be mapped to RGB colors via \(\Psi_{1}\) (Fig. 2a) or to semantic labels via \(\Psi_{2}\) (Fig. 2c). Thus, we are able to generate semantically labeled point clouds of the object from arbitrary viewpoints (\(\mathbf{E}\), \(\mathbf{K}\)) _and_ arbitrary articulations. ### _Keypoint Prediction_ A latent code representation \(\mathbf{z}\) can be used to predict the 3D positions of specific keypoints (Fig. 2d). For instance, on our closet dataset we defined the center of the handle, the hinge joints and a goal location inside the closet as keypoints. In order to predict the keypoint positions for an arbitrary articulation \(q\), we first generate a new latent code \(\mathbf{z}_{\text{new}}\) and then map this generated latent code \(\mathbf{z}_{\text{new}}\), via the neural network \(\Gamma_{\gamma}\), to the predicted keypoint positions \(\mathbf{k}_{\text{new}}\). ### _Training_ Here, we describe the training of our framework on closets, including the data generation process and the loss function used. #### IV-E1 Data Generation We generated a dataset containing \(N_{\text{closet}}=1000\) closet models with varying shapes and appearances. For each closet model we generated \(N_{\text{art}}=100\) uniformly distributed articulations of the door between \(0^{\circ}\) (\(q=0\)) and \(90^{\circ}\) (\(q=1\)). Thus, in total our dataset \(\mathcal{D}\) is composed of \(M=N_{\text{closet}}\cdot N_{\text{art}}=100000\) instances. 
For each instance we generated \(N_{\text{view}}=10\) posed images with a resolution of \(128\times 128\) using NViSII [45], a scriptable tool for photorealistic image generation. Additionally, we varied the lighting conditions. For each instance we generated the ground-truth position of the handle, the hinges and the goal location inside the closet. #### IV-E2 Loss Function We optimize all object codes \(\{\mathbf{z}_{\text{obj}}^{l}\}_{l=1}^{N_{\text{closet}}}\) and the weights of all networks (\(\phi\): hypernetwork; \(\gamma\): keypoint prediction; \(\theta_{\text{RM}}\): raymarching; \(\psi_{1}\): RGB rendering; \(\psi_{2}\): semantic labelling) jointly, \[\underset{\begin{subarray}{c}\{\mathbf{z}_{\text{obj}}^{l}\}_{l=1}^{N_{ \text{closet}}}\\ \phi,\gamma\\ \theta_{\text{RM}},\psi_{1},\psi_{2}\end{subarray}}{\arg\min}\sum_{l=1}^{N_{ \text{closet}}}\sum_{k=1}^{N_{\text{art}}}\sum_{i=1}^{N_{\text{view}}} \mathcal{L}_{\text{SRN}}+\lambda_{1}\mathcal{L}_{\text{SEG}}+\lambda_{2} \mathcal{L}_{\text{KP}}. \tag{7}\] In this formulation \(\mathcal{L}_{\text{SRN}}\) comprises the image loss (\(\mathcal{L}_{\text{img}}\)) and two regularization terms; for more details, please refer to [13]. The other loss components are defined as follows: \[\mathcal{L}_{\text{SEG}} =\text{CE}(\Theta_{\text{SEG}}(\Phi_{H(\mathbf{z};\phi)},\mathbf{E}_{i}^{l,k},\mathbf{K}_{i}^{l,k};\theta_{\text{RM}},\psi_{2}),\mathcal{J}_{i}^{l,k}),\] \[\mathcal{L}_{\text{KP}} =||\Gamma(\mathbf{z};\gamma)-\mathcal{P}^{l,k}||_{2}^{2},\] where \(\text{CE}(\cdot)\) is the cross-entropy loss between the predicted segmentation generated via \(\Theta_{\text{SEG}}\) and the ground-truth segmentation \(\mathcal{J}_{i}^{l,k}\). During training, the ground-truth articulation codes \(q^{l,k}\) are used. The ground-truth keypoint positions \(\mathcal{P}^{l,k}\) and images \(\mathcal{I}_{i}^{l,k}\) are provided during training as well. The weights \(\lambda_{1}\) and \(\lambda_{2}\) control the relative importance of each loss term during training. ### _Inference_ Given a trained model and a set of images of a previously unseen articulated object, we are able to find the corresponding latent code of the object by minimizing the image loss. In contrast to the training phase, the weight vectors of all neural networks are kept constant. Additionally, since we do not have access to the ground-truth semantic segmentation and the positions of the keypoints, we set \(\lambda_{1}=\lambda_{2}=0\). ## V Manipulation Planning with Neural Representations In order to perform manipulation planning we integrated our neural field representation of articulated objects with the constraint-based trajectory optimization and manipulation planning framework used within _Logic-Geometric Programming_ (LGP) [8]. With this framework our method works as follows: 1. The robot takes a few pictures of an unseen closet. 2. The latent code that corresponds to the closet is computed by minimizing the image reconstruction loss. 3. Movement of the closet is simulated by interpolating the articulation component of the latent code, from the estimated current value to a desired value. During this forward-simulation of the neural model, the trajectory of a set of keypoints is predicted and stored (a minimal sketch of this step follows the list). 4. The predicted keypoint trajectory is used to define a trajectory optimization problem. 5. The optimization problem is solved with constrained optimization, and the robot executes the resulting motion. 
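The promised sketch of the forward-simulation step (3) is given below; `decode_keypoints` stands in for the trained network \(\Gamma_{\gamma}\), and the particular angle convention for the articulation code is an assumption consistent with Sec. IV-A.

```python
import torch

def articulation_code(q):
    # Map q in [0, 1] to a point on the unit circle (cf. Sec. IV-A).
    angle = torch.pi * torch.as_tensor(q)
    return torch.stack([torch.cos(angle), torch.sin(angle)])

def simulate_keypoints(z_obj, q_start, q_goal, decode_keypoints, T=20):
    # Interpolate the articulation code from the estimated to the target
    # value and record the predicted keypoints for trajectory optimization.
    trajectory = []
    for q in torch.linspace(q_start, q_goal, T):
        z = torch.cat([articulation_code(q), z_obj])  # structured latent code
        trajectory.append(decode_keypoints(z))        # Gamma_gamma(z) -> keypoints
    return torch.stack(trajectory)  # (T, n_keypoints, 3), defines the constraints
```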
Predicting these keypoint positions allows the motion constraints of the object to be taken into account. As depicted in Fig. 3, for each articulation the positions of the hinges and the target location remain constant, while the handle moves along an object-specific trajectory. By mapping consecutive articulations to corresponding time steps, we can define different tasks such as the _opening_ or _closing_ of a closet. Specifically, the interaction with the object is discretized into \(T\in\mathbb{N}\) steps. Given an inferred articulation code and a target, we can map each intermediate step to a specific articulation using a linear interpolation in latent space. By mapping the \(T\) latent codes (combinations of the interpolated articulation and the inferred object code) to keypoint positions, we are able to formulate a constrained minimization problem. Fig. 3: Interaction with a closet: four input images (left); simulated motion: handle positions in red, hinge positions in blue, and goal location in magenta (middle); final interaction on a real robot (right). ## VI Experimental Evaluation We evaluate our framework in multiple ways. First, we evaluate the ability of our learned model to render images. Next, we demonstrate that, given a latent code representation, the motion of an articulated object can be simulated. Furthermore, we evaluate the keypoint estimation of an observed object as well as the keypoint prediction for arbitrary articulations. Finally, we demonstrate the robotic manipulation of different object classes in simulation and on a real robot. Our model was trained on the training dataset described in Sec. IV-E. For our evaluation we use two different datasets: \(\mathcal{D}_{\text{SYNT}}\) and \(\mathcal{D}_{\text{REAL}}\). Both datasets have \(N_{\text{view}}=10\) views of each particular instance. \(\mathcal{D}_{\text{SYNT}}\) was generated similarly to the training dataset. For \(\mathcal{D}_{\text{REAL}}\) we manually collected posed images from a single (real) closet. ### _Image Rendering_ Latent code representations are found through the minimization of the image loss between the observed image and the image rendered by our trained model. Fig. 5 confirms that, for our trained model, interpolations between latent codes correspond to semantically meaningful and smooth interpolations in image space, which is required in order to find good latent representations for a broad range of objects. Furthermore, our framework is also able to find good latent code representations for real images, and we can simulate the whole range of motion of the perceived object (Fig. 4). The ability to interpolate between latent codes, the generalization to real images, and the ability to simulate the motion confirm that we have learned a strong prior for the given object category. ### _Keypoint Estimation and Forward Simulation of Motion_ In this section we evaluate the keypoint estimation and prediction. First, we describe the baseline used. Next, we evaluate the keypoint estimation of the observed object and the keypoint prediction for arbitrary articulations. #### VI-B1 Baseline As a baseline we trained a standard image encoder \(\mathcal{E}\) similar to the one used in [9], which adopted the U-net architecture [46] with ResNet-34 [47] as its downward path. Each image \(\mathcal{I}_{i}\), together with its pose \(\mathbf{E}_{i}\) with \(i\in\{1,...,N_{\text{view}}\}\), of a single instance is encoded.
The final latent code \(\mathbf{z}_{\text{ResNet}}\) is obtained by taking the average of all \(N_{\text{view}}\) image encodings. The neural network \(\Gamma_{\text{ResNet}}\) maps latent codes to keypoint positions, _and_ estimates the current articulation \(q\) of the perceived object explicitly. In contrast to our approach, the baseline implementation is not capable of generating new representations for different articulations of the observed object and thus can only infer features, e.g., keypoints, for the perceived object. To compare the baseline to our model, we must provide additional knowledge about the geometric properties and behavior of any given object. For example, on our closet dataset we assume a vertical axis of rotation at the hinge position. Only with this additional assumption can the baseline model predict the positions for different articulations. #### VI-B2 Keypoint Estimation of Observed Configurations In this section we evaluate the estimated keypoint positions for observed objects. We compare our implementations, with and without articulation code normalization (Sec. IV-A), and the ResNet baseline. Our results in Fig. 6 show that, with the proposed normalization of the articulation code, we achieve results comparable to a classic image encoder. All methods achieve subcentimeter accuracy, while our method provides additional benefits such as generating point clouds with semantic annotations and generating estimates for arbitrary articulations for objects with unknown dynamic behavior. Fig. 4: Forward simulation of motion: (a) shows one input image of the perceived object; the top row in (b) shows the RGB renderings generated with \(\Theta\), whereas the bottom row depicts the semantic segmentation generated by \(\Theta_{\text{SEG}}\). Fig. 5: Interpolation of latent codes across articulation code and object code: the latent codes corresponding to the left- and rightmost image were found through optimization. Fig. 6: Articulation and keypoint estimation: (a) shows the prediction error on the estimated articulation angle; (b) shows the RMSE error on the predicted keypoint positions. #### VI-B3 Forward Simulation of Motion Using a latent code \(\mathbf{z}\) obtained from the synthetic dataset, we are able to simulate the whole range of motion by generating new latent codes for arbitrary articulations \(q\in[0,1]\). For each generated latent code, the predicted handle position is shown in Fig. 7. With a traditional image encoder we are not able to compute handle positions for arbitrary articulations \(q\) directly. Thus, based on the image encoding of the corresponding instance, we estimated only the current articulation and the position of the hinges. Additionally, for comparison, we predict the position of the handle and hinge joint for different articulations based on the explicit geometric model we provided. Both approaches perform well in predicting the handle positions for arbitrary articulations on synthetic images, but if we compare their predictive performance on real objects, our approach outperforms the baseline, which diverges from the true path (Fig. 7). Here we use the data from \(\mathcal{D}_{\text{REAL}}\). Those images and the corresponding camera parameters are drawn from a different distribution than the one present in the synthetic dataset. Since our approach minimizes the reconstruction loss, it is able to generalize to this out-of-distribution scenario.
### _Motion Planning_ Finally, we describe the integration of all parts for manipulation planning in simulation and on a real robot. #### VI-C1 Simulation Given only a small set of images from different viewpoints, we are able to estimate the current position of all keypoints and to simulate their movement during interaction with the robot. Only those keypoint predictions are used during trajectory optimization. After planning, we check that the handle is grasped correctly and that the motion does not violate the geometric constraints of the object. #### VI-C2 Real Robot For manipulation planning on a real robot, we take ten images from different viewpoints. Based on the latent code that minimizes the image loss, we predict ten waypoints to formulate and solve the corresponding trajectory optimization (see Sec. V). Finally, we execute the plan using a position-based controller. Thus, by forward-simulating the motion, the robot is able to perform the desired object manipulation even without an explicit kinematic model of the perceived object, as shown in Fig. 1 and in the accompanying video. ### _Generalization to Different Object Categories_ Our approach generalizes to different object categories. We trained a different model to manipulate drawers (Fig. 8). Note that objects of this class impose a different movement constraint compared to the closets. With our method we can predict the handle positions for the entire range of motion and perform manipulation planning for drawers as well. ## VII Conclusion In this work, we have proposed a method for finding implicit representations of articulated objects by minimizing the image loss between observed and rendered images. As we have shown, this approach is robust to out-of-distribution scenarios and generalizes to real images and previously unobserved camera parameters. The structured latent code enables motion planning by predicting keypoint positions through forward simulation of the motion of observed objects. Finally, we demonstrated manipulation planning in simulation and on a real robot. A current limitation is that we trained separate models for different object classes (e.g., closets and drawers). In future work, we will address this limitation by training a single general model on data from multiple diverse object classes. Furthermore, in this work we considered only objects with a single joint. How our approach scales to complex objects with multiple joints is another interesting direction for further research.
2309.17433
DREAM: Decentralized Reinforcement Learning for Exploration and Efficient Energy Management in Multi-Robot Systems
Resource-constrained robots often suffer from energy inefficiencies, underutilized computational abilities due to inadequate task allocation, and a lack of robustness in dynamic environments, all of which strongly affect their performance. This paper introduces DREAM - Decentralized Reinforcement Learning for Exploration and Efficient Energy Management in Multi-Robot Systems, a comprehensive framework that optimizes the allocation of resources for efficient exploration. It advances beyond conventional heuristic-based task planning. The framework incorporates Operational Range Estimation using Reinforcement Learning to perform exploration and obstacle avoidance in unfamiliar terrains. DREAM further introduces an Energy Consumption Model for goal allocation, thereby ensuring mission completion under constrained resources using a Graph Neural Network. This approach also ensures that the entire Multi-Robot System can survive for an extended period of time for further missions, compared to the conventional approach of randomly allocating goals, which compromises one or more agents. Our approach adapts to prioritizing agents in real-time, showcasing remarkable resilience against dynamic environments. This robust solution was evaluated in various simulated environments, demonstrating adaptability and applicability across diverse scenarios. We observed a substantial improvement of about 25% over the baseline method, leading the way for future research in resource-constrained robotics.
Dipam Patel, Phu Pham, Kshitij Tiwari, Aniket Bera
2023-09-29T17:43:41Z
http://arxiv.org/abs/2309.17433v1
DREAM: Decentralized Reinforcement Learning for Exploration and Efficient Energy Management in Multi-Robot Systems ###### Abstract Resource-constrained robots often suffer from energy inefficiencies, underutilized computational abilities due to inadequate task allocation, and a lack of robustness in dynamic environments, all of which strongly affect their performance. This paper introduces _DREAM_ - Decentralized Reinforcement Learning for Exploration and Efficient Energy Management in Multi-Robot Systems, a comprehensive framework that optimizes the allocation of resources for efficient exploration. It advances beyond conventional heuristic-based task planning. The framework incorporates Operational Range Estimation using Reinforcement Learning to perform exploration and obstacle avoidance in unfamiliar terrains. DREAM further introduces an Energy Consumption Model for goal allocation, thereby ensuring mission completion under constrained resources using a Graph Neural Network. This approach also ensures that the entire Multi-Robot System can survive for an extended period of time for further missions, compared to the conventional approach of randomly allocating goals, which compromises one or more agents. Our approach adapts to prioritizing agents in real-time, showcasing remarkable resilience against dynamic environments. This robust solution was evaluated in various simulated environments, demonstrating adaptability and applicability across diverse scenarios. We observed a substantial improvement of about 25% over the baseline method, leading the way for future research in resource-constrained robotics. ## I Introduction In the domain of robotics, addressing the unique challenges and opportunities presented by resource-constrained robots is of paramount significance. Such robots are expected to employ their computational ability and sensor capabilities to execute tasks in the most energy-efficient manner. Integral to these challenges are issues related to energy consumption, path planning, and coordinated decision-making, especially in multi-robot scenarios. As robotic systems are increasingly applied in diverse fields, ranging from environmental monitoring and natural-disaster response [1] to search & rescue missions [2], addressing these resource constraints has become crucial. Constructing an energy consumption model that comprehensively factors in the variables impacting energy use within a robot poses significant challenges. This complexity arises from the necessity to conduct precise system identification procedures and accurately quantify each energy expense. The operational range estimation of a robot refers to the distance within which it can function given its resource constraints. Energy should be treated as a predetermined resource constraint, with mission execution optimized within these defined limits. This model, therefore, needs to be adaptable and capable of responding to real-time changes. In the context of cooperative Multi-Robot Systems (MRS), the necessity for efficient resource management becomes more complex and crucial. Compared to a single robot, an MRS setup offers improved reliability and scalability. It can cover large areas and complete tasks quickly and efficiently, making it suitable for applications in diverse fields. However, this also leads to challenges associated with goal allocation and trajectory coordination under constrained resources.
Efficient task allocation in an MRS involves assigning suitable roles to individual robots based on their specific capabilities and mission requirements. Trajectory coordination, on the other hand, involves planning the paths of individual robots in a way that they can collaborate effectively without interfering with each other's tasks. This requires careful planning and real-time adaptability to avoid collisions and ensure smooth cooperation among the robots with limited resources. This makes decentralized navigation essential, where each robot is equipped to make independent decisions based on local information. It can adjust its path dynamically, responding to real-time changes in the environment. Fig. 1: _Three Robots (in black) navigating to their respective Goal positions using the Refined TD3 (RTD3) Model. Upon initiation, the GNN model uses all the Robots' states to do goal allocation, which accounts for minimizing the cumulative energy used by the system at the end of the mission._ We propose a Refined Twin Delayed Deep Deterministic Policy Gradient (RTD3)-based model for obstacle avoidance and exploration in Multi-Robot Systems. We also propose a Graph Neural Network (GNN)-based model that leverages real-time operational data for instantaneous goal allocation. In summary, our model prioritizes developing robust path-planning mechanisms under strict energy constraints, which drives the development of DREAM. The key contributions of our work can be summarized as follows:

* Introduced a **Refined TD3** structure, leveraging the **Reward Categorized Replay Buffer**, resulting in a **75%** reduction in model parameters.
* Developed a GNN-based **Energy Management Model** for adaptive mission planning based on real-time energy availability, enhancing mission success and system lifespan.
* Expanded navigation capabilities for multi-agent, goal-driven exploration and collaborative mapping, using a single agent-goal pair for training.
* Amplified the model's versatility across varying environments, regardless of the robot-goal pair count, leveraging the benefits of **Curriculum Learning**.

## II Related Works The increasing complexity and scalability of problems in robotics have necessitated the deployment of MRS, with agents discovering solutions using learning mechanisms [3, 4, 5]. Such systems have gained tremendous attention in recent years, with applications ranging from complex system modeling to smart grids and computer networks. Nonetheless, MRS presents inherent challenges, including agent coordination, security, and task allocation [4, 5]. Reinforcement learning techniques have been extensively used in multiagent systems. For example, [3] provides a comprehensive survey of multi-agent reinforcement learning. Likewise, [6] and [7] explore deep reinforcement learning methods for multi-agent domains, with the latter introducing the Mean Field Reinforcement Learning approach to address the scalability issue. Interestingly, [7] also establishes a relationship between the learning of an individual agent's optimal policy and the dynamics of the population, highlighting the mutual reinforcement between the two. In the context of swarm robotics, inspired by natural swarms' collaborative intelligence, [8] investigates a fault-tolerant pattern formation algorithm. The paper sheds light on how different agents can form a geometric pattern and maintain it, even in the presence of faulty agents.
In a related study, [9] proposes a technique for the identification of biased-measurement agents in a network of mobile robots, highlighting the impact of errors on system performance. Energy efficiency and optimization have also been a significant focus in multi-agent systems and robotics. For instance, [10] offers an energy-aware path planning algorithm for UAVs, while [11] examines energy optimization for optimal motion planning for a dual-arm industrial robot. Further, [12] describes an energy management system for mobile robots used for search and rescue that integrates a battery and supercapacitor to manage power sharing. Effective inter-agent communication is pivotal in decentralized multi-robot coordination. Traditional methods have employed discrete communication strategies, such as signal binarization [13]. The infusion of attention mechanisms into GNNs offers a promising direction to process information-dense graphs [14]. While attention on static graphs has shown potential, its efficacy in dynamic multi-agent graphs remains to be fully explored [15]. Deep learning advancements have enabled robots to anticipate their environments and conduct directed explorations. Notable works include exploring navigation via deep Q-learning [16], utilizing RGB images to anticipate maps and actions [17, 18], and using learned predictions to map surroundings [19]. Our research extends the work of Cimurs et al. [20] to Multi-Robot Systems under strict resource constraints. Our proposal integrates a streamlined motion policy based on raw Lidar data with a holistic global navigation strategy. Our approach not only aims for goal-driven exploration in uncharted terrains along with obstacle avoidance, but also performs multi-robot mapping with limited resources so as not to compromise any agent during a mission. ## III Methodology To optimize resource allocation in multi-agent systems, we use the RTD3 model for multi-agent goal-based exploration and collaborative mapping. We also implement a GNN model that performs goal allocation for each robot. This section details our implementation and highlights the key improvements and contributions. ### _Refined TD3 Architecture_ #### III-A1 **Actor Network** The architecture of our Actor network is designed to capture the complex state-action relationship. The Actor network embodies the policy function of the agent. Given a state \(s\), the Actor predicts the optimal action \(a\) that the agent should execute. The input to the network consists of 4 neurons along with the lidar input. These four neurons make up the agent's state space - [\(d_{goal},\theta_{goal},v,\omega\)], which correspond to the distance to the goal, angle to the goal, linear velocity, and angular velocity, respectively, at each time step in the simulation. The first layer consists of 256 neurons, followed by a 128-neuron layer (compared to Cimurs et al. [20], who use 800 and 600 neurons, respectively). Both layers are followed by layer normalization, ReLU activation, and a dropout of 0.2. Lastly, the output layer produces 2 action values - \(\left[v\left(\frac{(a+1)}{2}\right),\omega\right]\). The Actor function \(\pi\) can be represented as: \[a=\pi(s;\theta_{\pi})\] where \(\theta_{\pi}\) denotes the parameters of the Actor network and \(s\) denotes the state. Our model has one-fourth the number of model parameters, which is essentially due to incorporating the Reward Categorized Replay Buffer and Curriculum Learning.
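A minimal PyTorch sketch of the Actor just described (34-dimensional input: 30 binned Lidar values plus the 4 state values) follows; the maximum linear velocity `max_v` and the exact ordering of normalization, activation, and dropout are our assumptions:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """RTD3 actor after Sec. III-A1: 34-d state -> (v, omega)."""
    def __init__(self, state_dim=34, max_v=1.0):
        super().__init__()
        self.max_v = max_v  # assumed velocity scale, not stated in the paper
        self.body = nn.Sequential(
            nn.Linear(state_dim, 256), nn.LayerNorm(256), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(256, 128), nn.LayerNorm(128), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(128, 2), nn.Tanh(),  # raw actions a in [-1, 1]^2
        )

    def forward(self, s):
        a = self.body(s)
        v = self.max_v * (a[:, 0] + 1.0) / 2.0     # rescaled to [0, max_v]
        return torch.stack([v, a[:, 1]], dim=1)    # [linear v, angular omega]
```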
These concepts are discussed in detail in the following sections. #### III-A2 **Critic Network** The Critic's role is pivotal in value-based reinforcement learning. It estimates the Q-value of a given state-action pair, guiding the Actor's policy optimization. Our architecture employs a twin-network approach (\(Q_{1}\) and \(Q_{2}\)) to alleviate overestimation bias, a common pitfall in Q-learning methods; the bias is mitigated by taking \(\min(Q_{1},Q_{2})\). The Critic network is similar to the Actor network, except that it has two separate pathways for the state and the action; the action is transformed through a 256-neuron layer. The state and action representations are then concatenated and processed to produce the Q-value: \[Q(s,a;\theta_{Q})=V_{a,s}\] where \(Q(s,a;\theta_{Q})\) represents the Q-value function parameterized by \(\theta_{Q}\) and \(V_{a,s}\) denotes the expected return when taking action \(a\) in state \(s\). ### _Reward Categorized Replay Buffer_ The quality of stored experiences can profoundly influence an agent's learning trajectory. Our approach incorporates a Reward Categorized Replay Buffer (RCRB) that categorizes experiences based on their reward outcomes. This buffering scheme makes training the Actor-Critic networks faster than in the original model. The categories comprise a Positive Buffer (\(B_{+}\)), a Neutral Buffer (\(B_{0}\)), and a Negative Buffer (\(B_{-}\)). Depending on the reward value, the experience tuple comprising \((s,a,r,s^{\prime})\) is sorted into one of these buffers. When training the agent, a balanced batch of experiences is required. The number of samples from each category is determined proportionally based on the current size of each buffer. This ensures that even rare experiences have a fair chance of being sampled, promoting a balanced learning experience for the agent, as described below: \[n_{+}=\left\lfloor\frac{\mathrm{len}(B_{+})}{N}\times b\right\rfloor,\quad n_{0}=\left\lfloor\frac{\mathrm{len}(B_{0})}{N}\times b\right\rfloor,\quad n_{-}=b-n_{+}-n_{0}\] where \(N\) is the total number of experiences in the buffer and \(b\) is the batch size for sampling. Figure 2 illustrates this approach. This categorization offers several distinct advantages:

1. **Balanced Sampling:** Traditional buffers dilute sparse yet informative experiences in a vast sea of frequent outcomes. RCRB ensures a balanced representation, preventing bias towards commonly occurring rewards.
2. **Efficient Learning:** By providing a representative sample of the environment's dynamics, RCRB can expedite the agent's learning, leading to faster convergence.
3. **Optimal Memory Utilization:** RCRB's balanced storage prevents the over-representation of any specific reward category, ensuring efficient memory use.

### _Training Pipeline - Refined TD3_ Central to the DREAM framework is the deployment of the RTD3 algorithm, a state-of-the-art approach for predicting continuous actions. The design of our model, combined with our custom architectural choices, forms the backbone of our agent's learning mechanism. #### III-C1 **Curriculum Learning** We commenced our training regimen with a basic environment setup involving one robot-goal pair and a few obstacles at fixed positions. This enabled the model to grasp the fundamentals of obstacle avoidance. The complexity of the environment was then gradually increased, with obstacle poses varied in each episode to simulate real-world conditions. Figure 3 showcases this curriculum learning approach [21].
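Before continuing with the curriculum details, here is a minimal sketch of the reward-categorized sampling rule of Sec. III-B. Splitting experiences by the sign of the reward is our assumption, since the paper does not state the category thresholds:

```python
import random

class RewardCategorizedReplayBuffer:
    """RCRB sketch: three sub-buffers, sampled in proportion to their sizes."""
    def __init__(self):
        self.pos, self.neu, self.neg = [], [], []  # B_+, B_0, B_-

    def add(self, s, a, r, s_next, done):
        buf = self.pos if r > 0 else self.neg if r < 0 else self.neu
        buf.append((s, a, r, s_next, done))

    def sample(self, b):
        N = len(self.pos) + len(self.neu) + len(self.neg)
        n_pos = len(self.pos) * b // N   # floor(len(B_+)/N * b)
        n_neu = len(self.neu) * b // N   # floor(len(B_0)/N * b)
        n_neg = b - n_pos - n_neu        # remainder assigned to B_-
        batch = (random.sample(self.pos, min(n_pos, len(self.pos)))
                 + random.sample(self.neu, min(n_neu, len(self.neu)))
                 + random.sample(self.neg, min(n_neg, len(self.neg))))
        random.shuffle(batch)            # avoid ordering by category
        return batch
```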
Another aspect involved spawning the goal and home positions in close proximity to the robot's initiation point. As the model showcased increased competence, we expanded the distances to account for the challenge of prolonged journeys. This extension demanded that the model account for increased energy consumption, navigation challenges, and optimal path selection over elongated distances. Fig. 2: _Representation of the Reward Categorized Replay Buffer, which accounts for Positive, Neutral, and Negative buffers based on the input experience replay._ Fig. 3: _From left to right - The complexity of the environment and learning process increases as the training progresses. Initially, the obstacles are set at a fixed location. As time progresses, the number of obstacles grows, and their placement becomes random. A similar trend is observed for the robot's odometry, goal, and home positions._ #### III-C2 **Critic Update** The target Q-values for the Critic's update are computed using a modified Bellman equation: \[Q_{target}=r+\gamma(1-d)\min_{i=1,2}Q_{i,target}(s^{\prime},\pi(s^{\prime}))\] Here, \(r\) represents the reward, \(\gamma\) is the discount factor, \(d\) is the termination flag, and \(s^{\prime}\) is the subsequent state. The innovation lies in the use of the minimum Q-value from the two target Critic networks, a strategy that curbs overestimation bias. The primary Critic networks then predict the Q-values for the original actions. The Mean Squared Error between these predicted and target Q-values forms the loss, thereby guiding the optimization of the Critic's parameters.

```
Init: Actor, Critic networks; Target networks
Init: Reward Categorized Replay Buffer (RCRB)
Set: Curriculum Level <- 1
for episode <- 1 to N do
    Adjust environment by Curriculum Level
    Init environment; Obtain initial state s
    while !done do
        a <- pi(s; theta_pi) + noise
        Execute a; Get r, s', done
        Store (s, a, r, s', done) in RCRB
        Sample from RCRB: states, actions, rewards, next_states, dones
        Q_target <- r + (1 - done) * gamma * min(Q_targets)
        Update Critic with MSE loss using Q_target
        if iteration mod delay == 0 then
            Update Actor by maximizing Q-values
            Soft update target networks
        end if
        s <- s'
    end while
    Modify battery parameters using actions, state
    Save networks at intervals or on improvement
    if Performance meets Curriculum Level then
        Increment Curriculum Level
    end if
end for
```

**Algorithm 1** Overview of the Refined TD3 Approach

#### III-C3 **Actor Update** Unlike the Critic's frequent updates, the Actor undergoes parameter refinement every two iterations, a design choice made to ensure stability. The Actor's objective is to maximize the expected Q-values from one of the Critic networks. Given the Critic's representation of the environment's value structure, this maximization ensures that the Actor's policy aligns with high-reward trajectories. The pseudocode for training the RTD3 approach is provided in Algorithm 1. #### III-C4 **Reward Policy** The reward policy for this algorithm is designed according to the following function: \[r(s_{t},a_{t})=\begin{cases}r_{goal}&\text{if }D_{goal}<D_{thresh}\\ r_{collision}&\text{if }L_{min}<C_{thresh}\ \text{(collision)}\\ r_{nothing}&\text{otherwise}\end{cases}\] Here, \(r\) is the reward of the state-action pair \((s_{t},a_{t})\) at timestep \(t\).
If the distance to the goal \(D_{goal}\) was less than the threshold \(D_{thresh}\), \(r_{goal}=200\) was awarded. If the closest Lidar reading was less than the threshold \(C_{thresh}\), a collision was registered and \(r_{collision}=-100\) was awarded. If the robot encountered neither of these conditions, \(v-|\omega|\) was awarded to nudge the system to move more linearly than in the angular direction and to ensure the agent learned to reach its goal in minimal steps. ### _Obstacle Avoidance & Navigation_ Since the robot used the Lidar scan as input from the environment, it received high-dimensional sensory data reflecting the distances to nearby obstacles in a 180-degree view. This data can be utilized as the state space. However, since 180 values constitute a high-dimensional input for the model to learn from, with most of it being redundant information, we represent the environment with only 30 values. This is achieved by dividing the 180-degree view into 30 segments. The Lidar data provides the state space describing the immediate environment. Hence, the total state space for this algorithm consists of environment and robot states, totaling 34 values, which become the input to the network. The agent trained in the RTD3 network takes an action from a set of possible actions at every timestep to maximize cumulative reward. The reward function provides positive feedback for actions that move the robot closer to its goal, since it rewards moving linearly while avoiding obstacles. As the agent iterates through numerous episodes, it learns an optimal policy that maps states to actions, optimizing the trajectory to the goal while ensuring obstacle-free navigation. The network, when trained for a sufficient number of episodes, generalizes well to unseen environments, allowing the robot to navigate and avoid obstacles in novel scenarios using only its Lidar. ### _Multi-Robot Decentralized Exploration & Mapping_ We previously highlighted that our network was primarily designed to train a single agent to reach its target goal. This choice to focus on one agent rather than multiple agents simultaneously was made to simplify the training process. This allowed the agent to deeply understand the environment's intricacies, including obstacles and the goal's location. Once the training was complete, we deployed this model on three robots, each navigating to its own goal. In every episode, the environment, as well as the robot and goal poses, were randomized. The state space comprised 30 Lidar readings and the agent's state values. This setup enabled the robots to navigate around both static and dynamic obstacles (other robots). We utilized the _slam toolbox_ [22] ROS2 package [23] for mapping. These tools enabled collaborative mapping, using laser scans and odometry sensors to produce an occupancy grid map of the environment. Each robot was set to publish its parent TF transform as \(/robot\_x/odom\). To facilitate collaborative mapping, we established additional TF layers, connecting these individual robots under a single parent. Specifically, \(robot\_x/odom\) was made a child of \(robot\_x/map\), and all such TFs were linked to the main \(map\) frame. This structure allowed for map visualization in RViz2, as shown in Figure 4. Initially, goals were randomly set before exploration began. Robots followed waypoints, moving towards the goals while mapping their surroundings. Once a goal was reached, that area was marked as explored and a new goal was set, continuing this cycle for all robots.
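A minimal sketch of the state compression and reward just described follows; aggregating each segment by its minimum reading and the specific threshold values are our assumptions, as the paper does not specify them:

```python
import numpy as np

def lidar_to_state(scan, n_bins=30):
    """Compress a 180-beam Lidar scan into 30 segment values (Sec. III-D).
    Each 6-degree segment is summarized by its closest reading (assumed)."""
    segments = np.array_split(np.asarray(scan), n_bins)
    return np.array([seg.min() for seg in segments])

def reward(d_goal, min_lidar, v, w, d_thresh=0.3, c_thresh=0.2):
    """Piecewise reward of Sec. III-C4; thresholds here are illustrative."""
    if d_goal < d_thresh:
        return 200.0        # r_goal: target reached
    if min_lidar < c_thresh:
        return -100.0       # r_collision: too close to an obstacle
    return v - abs(w)       # favor linear motion over turning
```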
### _Battery Modeling_ In multi-agent systems, understanding and predicting battery behavior is critical. With the integration of energy management in the DREAM framework, simulating a battery's behavior allows for more informed decisions regarding energy allocation. In this work, we utilized the Doyle-Fuller-Newman (DFN) model to simulate and visualize battery discharge over time. The DFN model provides a comprehensive representation of a lithium-ion battery by capturing the intricate electrochemical processes occurring within it. Leveraging the PyBaMM library [24], we readily instantiate this model to serve as the foundation for our battery life simulation. Our target simulation duration was set to 900 seconds, i.e., 15 minutes of battery life for each robot. The DFN model's default parameters are updated to reflect our calculated constant discharge current. This simulated battery was utilized in training the GNN model, since battery level is one of the important factors for goal allocation. This module was not included in training the RL model, as one of the rewards already encouraged reaching the target quickly. ### _GNN for Energy Management & Goal Allocation_ In the realm of decentralized multi-robot systems, understanding and predicting the individual goals of robots is paramount. Inspired by the work of Li et al. [25], our approach leverages Graph Neural Networks (GNN) to model the relationship between different robots and predict their respective goals. This facilitates mission completion under constrained-resource conditions while remaining decentralized. #### III-G1 Data Collection & Preprocessing The framework is designed to simulate scenarios where three robots are assigned three distinct goals. The overarching objective is to pair each robot with a goal such that the cumulative energy consumption across all robots is minimized while maintaining the overall lifespan of the entire MRS. **State space:** the state space includes robot odometry \((r_{x},r_{y})\), home \((h_{x},h_{y})\), and goal \((g_{x},g_{y})\) locations. **Energy Management Model:** The energy consumed by a robot is contingent on its linear or angular movement and the distance to the goal. Given \(R\) robots and \(G\) goals, we want to assign each robot to a goal. Each robot-goal pair has an associated energy consumption, which depends on the distance and orientation difference between the robot's current state and the goal. For each robot \(i\) and goal \(j\) pair, the energy \(E_{ij}\) is computed based on the distance and orientation difference: \[E_{ij}=d_{target}\times battery_{straight}+\theta_{target}\times battery_{turn}\] where \(d_{target}\) is the normalized distance between the robot and the goal, and \(\theta_{target}\) accounts for the difference in orientation between the robot's current pose and its desired goal pose. We set \(battery_{straight}=10\) and \(battery_{turn}=1.25\times battery_{straight}\). The goal here is to find a binary assignment matrix \(A\) of size \(R\times G\) (with \(A_{ij}=1\) if robot \(i\) is assigned to goal \(j\)) that minimizes the overall energy consumption: \[A^{*}=\operatorname*{arg\,min}_{A}\sum_{i=1}^{R}\sum_{j=1}^{G}A_{ij}E_{ij}\] where \(E_{ij}\) represents the energy consumed by robot \(i\) to reach goal \(j\). This methodology determines the permutation of robot-goal assignments that optimally reduces energy consumption at both the collective and individual levels. This is achieved by deploying a graph convolutional neural network to predict the goal and home allocations, one after another.
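For intuition, the sketch below brute-forces the energy-minimal assignment for a small energy matrix; this exhaustive search is the reference solution that the learned allocator approximates, and the energy values here are made up for illustration:

```python
import itertools
import numpy as np

def optimal_assignment(E):
    """Minimize sum_i E[i, perm[i]] over all robot-goal pairings.
    Feasible by enumeration here, since R = G = 3 in our setting."""
    R, G = E.shape
    best_perm, best_cost = None, np.inf
    for perm in itertools.permutations(range(G), R):  # goal j for each robot i
        cost = sum(E[i, j] for i, j in enumerate(perm))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

# Illustrative energies E_ij (arbitrary values, not from the paper)
E = np.array([[12.0, 30.5, 22.1],
              [25.3, 11.8, 19.7],
              [18.4, 21.0,  9.6]])
print(optimal_assignment(E))  # -> ((0, 1, 2), 33.4)
```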
#### III-G2 Network Architecture The architecture consists of multiple layers, beginning with graph convolutional (GCN) layers followed by a fully connected layer. The network takes robot states, home locations, and goal positions as inputs. The inputs are concatenated and processed through two GCN layers. The first GCN layer takes the initial feature vector and transforms it into a higher-dimensional representation (128 dimensions) using the GCNConv operation, followed by a rectified linear unit (ReLU) activation function. Subsequently, the second GCN layer refines the representation to a lower-dimensional space (64 dimensions) using the same convolutional operation and activation function. To prevent overfitting, dropout regularization is applied with a dropout rate of 0.5 during training. Finally, a fully connected layer maps the learned features to the prediction space, with the output reshaped to size \(3\times 2\), corresponding to the indices of the goal and home locations for each robot. This architecture is designed to learn the mapping between world states and the optimal goal allocation conditioned on robot battery levels. #### III-G3 Training Pipeline - GCN We use the Adam optimizer with a learning rate of 0.005 to update the network weights. The model parameters are initialized using Xavier uniform initialization for the convolutional layers. This initialization method scales the weights based on the number of input and output units, which can aid in achieving faster convergence. We design a special loss function to optimize energy consumption while maintaining the lifespan of each robot in the system. To this end, we compute the total energy of each robot on two journeys: from the current location to the goal, and from the goal to home. Specifically, the network optimizes the total energy consumed by the whole system and also guarantees that each robot has enough battery to reach home. Any assignment that results in one or more stranded robots (low battery) during the mission will be heavily penalized by our design. Consequently, the network will favor solutions that are optimal at both the global and individual levels. Fig. 4: _RViz2 showing three robots navigating to their respective goals while mapping the environment collaboratively._ #### III-G4 Model Insights The graph-based paradigm offers notable advantages in terms of scalability and adaptability, particularly in accommodating a variable number of robots within a system. In contrast to conventional methodologies, which typically involve random goal assignment or demand high computational complexity, the proposed model can efficiently predict individual goals for each robot based on their collective states. ## IV Experiments ### _System Setup_ We used a computer equipped with an NVIDIA GeForce RTX 3060 graphics card, 32 GB of RAM, and a 12th Gen Intel Core i7 12700K x 20 CPU running Ubuntu 22.04.3 and ROS2 Humble. We trained the Refined TD3 network using PyTorch [26] in the Gazebo simulator [27] for 7000 iterations. ### _Training in Simulation_ The training pipeline was set up using ROS2, while Gazebo and RViz2 were used for visualization of the laser scan, goal pose, home pose, robot odometry, and camera. The Pioneer P3-DX robot was used in the simulation. During the testing phase, three such robots were used with the same suite of sensors. Once training starts, Gazebo and RViz2 are disabled to accelerate the training process.
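Stepping back briefly to the allocation network of Sec. III-G2, a minimal PyTorch Geometric sketch is given below; the graph construction (one node per robot) and the per-node output head are our assumptions about details the description leaves open:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class AllocationGCN(torch.nn.Module):
    """Goal/home-allocation network after Sec. III-G2 (one graph node per robot)."""
    def __init__(self, in_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, 128)   # features -> 128-d representation
        self.conv2 = GCNConv(128, 64)       # refine to 64-d
        self.head = torch.nn.Linear(64, 2)  # per robot: [goal index, home index]

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.head(x)                 # shape (num_robots, 2), e.g., (3, 2)
```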
One training episode is considered concluded when the robot reaches a goal, a collision is detected, or 250 steps are taken. As shown in Fig. 3, the training was carried out in a simulated 10\(\times\)10 m environment, as previously discussed. We reward the robot for reaching the goal, penalize it for collisions, and give a minor reward otherwise. This encourages the robot to move, initially favoring forward motion over rotation. Despite early collisions, moving yields higher rewards than being stationary. The robot soon learns that avoiding obstacles, even if it means turning, is better than crashing. Hence, there is no need for a battery-saving penalty. The state space does not include a home location, since home is just another goal for the robot; this was confirmed during evaluation, as the policy is independent of the type of goal. The model was also not trained with dynamic obstacles, only with an environment randomized on every reset; this randomization nevertheless made the model robust enough to avoid dynamic obstacles such as other robots, which was the primary reason not to train the three robots simultaneously. This was also confirmed in testing, as the robots would swerve upon encountering one another in their paths. ### _Results_ To evaluate the effectiveness of our GNN-based goal allocation strategy versus a random allocation method, we introduce the Goal Allocation Evaluation Metric (GAEM). This metric is designed to measure the efficiency of different methods by comparing the overall energy consumption of robot teams in various environments. **Metric Definition:** Given a multi-robot system (MRS), let \(E_{rg}^{GNN}\) and \(E_{rg}^{random}\) denote the energy consumption of robot \(r\) to reach goal \(g\) when goals are assigned using the GNN and random methods, respectively. The efficiency metric \(\epsilon\) for a particular environment is defined as: \[\epsilon=\frac{\sum_{r=1}^{R}E_{rg}^{random}-\sum_{r=1}^{R}E_{rg}^{GNN}}{\sum_{r=1}^{R}E_{rg}^{random}}\] A higher \(\epsilon\) indicates a more significant energy saving using the GNN method compared to the random allocation, and \(\epsilon\) directly gives the fractional reduction in energy consumption of GNN over random goal allocation. Table I shows that this evaluation extends across various environments and numbers of agents, showcasing the strength and consistency of the GNN approach. The reduction in energy consumption not only improves the sustainability of the robot team but also enhances the efficiency and reliability of operations at the individual level.
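As a quick sanity check, \(\epsilon\) can be recomputed directly from the averages reported in Table I; a minimal sketch, here using the environment-1 values:

```python
def gaem(E_random, E_gnn):
    """GAEM: fractional energy reduction of GNN vs. random goal allocation."""
    return (sum(E_random) - sum(E_gnn)) / sum(E_random)

# Environment 1 averages from Table I
print(f"{100 * gaem([124.67], [93.4]):.2f}%")  # -> 25.08%
```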
In this work, we have not addressed certain scenarios where robot(s) become incapable during a mission due to hardware or software issues. To alleviate this, one or more of the remaining agents would take over the task of the robot(s) which got compromised in order not to sacrifice the entire mission. This will be addressed in future work. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Env** & **Avg E\({}^{random}\)** & **Avg E\({}^{GNN}\)** & \(\epsilon\) **(\% Reduction)** \\ \hline 1 & 124.67 & 93.4 & 25.08 \\ 2 & 136.01 & 102.56 & 24.59 \\ 3 & 157.32 & 120.98 & 23.09 \\ 4 & 163.71 & 123.16 & 24.76 \\ 5 & 122.83 & 96.67 & 21.29 \\ \hline \end{tabular} \end{table} TABLE I: _Comparison of energy consumption for five different environments run five times each using random and GNN goal allocation methods._
2304.00158
Distinguishing X-ray Stars vs. AGN through ML
Modern X-ray telescopes have detected hundreds of thousands of X-ray sources in the universe. However, current methods to classify these sources using the X-ray data themselves suffer problems - detailed X-ray spectroscopy of individual sources is too time-consuming, while hardness ratios often lack accuracy, and can be difficult to use effectively. These methods fail to use the power of X-ray CCD detectors to identify X-ray emission lines and distinguish line-dominated spectra (from chromospherically active stars, supernova remnants, etc.) from continuum-dominated ones (e.g., compact objects or active galactic nuclei [AGN]). In this paper, we probe the use of artificial neural networks (ANN) in differentiating Chandra spectra of young stars in the Chandra Orion Ultradeep Project (COUP) survey from AGN in the Chandra Deep Field South (CDFS) survey. We use these surveys to generate 100,000 artificial spectra of stars and AGN and train our ANN models to separate the two kinds of spectra. We find that our methods reach an accuracy of approx. 92% in classifying simulated spectra of moderate-brightness objects in typical exposures, but their performance slightly decreases on the observed COUP and CDFS spectra (approx. 91%), due in large part to the relatively high background of these long-exposure datasets. We also investigate the performance of our methods with changing properties of the spectra such as the net source counts, the relative contribution of background, the absorption column of the sources, etc. We conclude that these methods have substantial promise for application to large X-ray surveys.
Pavan R. Hebbar, Craig O. Heinke
2023-03-31T22:30:14Z
http://arxiv.org/abs/2304.00158v1
Machine learning applied to X-ray spectra: separating stars in the Orion nebula cluster from active galactic nuclei in CDFS ###### Abstract Modern X-ray telescopes have detected hundreds of thousands of X-ray sources in the universe. However, current methods to classify these sources using the X-ray data themselves suffer problems -- detailed X-ray spectroscopy of individual sources is too time-consuming, while hardness ratios often lack accuracy and can be difficult to use effectively. These methods fail to use the power of X-ray CCD detectors to identify X-ray emission lines and distinguish line-dominated spectra (from chromospherically active stars, supernova remnants, etc.) from continuum-dominated ones (e.g., compact objects or active galactic nuclei [AGN]). In this paper, we probe the use of artificial neural networks (ANN) in differentiating _Chandra_ spectra of young stars in the _Chandra_ Orion Ultradeep Project (COUP) survey from AGN in the _Chandra_ Deep Field South (CDFS) survey. We use these surveys to generate 100,000 artificial spectra of stars and AGN, and train our ANN models to separate the two kinds of spectra. We find that our methods reach an accuracy of \(\sim 92\%\) in classifying simulated spectra of moderate-brightness objects in typical exposures, but their performance decreases on the observed COUP and CDFS spectra (\(\sim 91\%\)), due in large part to the relatively high background of these long-exposure datasets. We also investigate the performance of our methods with changing properties of the spectra, such as the net source counts, the relative contribution of background, the absorption column of the sources, etc. We conclude that these methods have substantial promise for application to large X-ray surveys. X-ray surveys (1824) -- X-ray identification (1817) -- X-ray stars (1823) -- X-ray active galactic nuclei (2035) -- Neural networks (1933) Pavan R. Hebbar and Craig O. Heinke ## 1 Introduction The X-ray sky consists of a variety of extremely hot objects (\(kT\sim 1\) keV). These sources include chromospherically active stars (e.g., Gudel, 2004; Preibisch & Feigelson, 2005), supernova remnants (SNRs; e.g., Vink, 2012), isolated neutron stars (NSs; e.g., Kaspi et al., 2006; Pavlov et al., 2002), X-ray binaries (XRBs; e.g., Remillard & McClintock, 2006; Campana et al., 1998), and active galactic nuclei (AGN; e.g., Netzer, 2015; Padovani et al., 2017). These different kinds of X-ray sources emit radiation through distinct physical processes -- active stars and SNRs emit X-rays predominantly from thermal bremsstrahlung and line radiation, young isolated NSs show thermal blackbody-like emission, NSs with strong magnetic fields can also produce synchrotron radiation, and X-ray binaries and AGN cool via inverse Compton scattering (see Bradt, 2014, for a detailed review of the emission processes). X-ray sources, especially compact objects, host exotic environments with extremely hot temperatures, high densities, and immensely strong gravitational and magnetic fields (see Remillard & McClintock, 2006; Lattimer & Prakash, 2007; Turner & Miller, 2009, for detailed reviews). The surfaces of young NSs can have temperatures \(T\sim 10^{6}\) K, the centres of NSs can reach densities \(\rho\sim 10^{14}-10^{15}\) g cm\({}^{-3}\), magnetars can host magnetic fields of \(10^{14}-10^{15}\) G (Olausen & Kaspi, 2014), and black holes (BHs) exert gravitational fields that test the limits of modern physics.
Thus, studying these compact objects can help us understand important physics, such as nuclear forces and the interaction of matter with strong magnetic fields. X-ray emission is universal from these sources due to their temperatures and the high-energy processes involved in emitting radiation, thus making X-ray surveys ideal to search for and study compact objects. However, our knowledge of many of these sources has been constrained by the small numbers of identified objects. Over the last decades, we have launched several powerful X-ray telescopes to observe these high-energy X-ray sources, including the _Chandra X-ray Observatory_ (CXO or _Chandra_, for short), the _XMM-Newton_ telescope, the _eROSITA_ instrument, etc. These instruments record the position, time of arrival, and energy of each X-ray photon detected. These telescopes have detected hundreds of thousands of X-ray sources in the sky -- e.g., the _Chandra_ Source Catalog (CSC) has \(\sim\)300,000 unique sources (Evans et al., 2010), the XMM-Newton Serendipitous Source Catalog has detected \(\sim\)600,000 X-ray sources (Webb et al., 2020), and the eROSITA Final Equatorial Depth Survey (eFEDS) has detected \(\sim\)30,000 sources (Brunner et al., 2022), with several million X-ray sources expected in the final eROSITA survey (Predehl et al., 2021). However, most of these sources have not been studied in detail, and their source types remain unknown. Detailed individualized X-ray spectroscopy to model the X-ray emission of individual X-ray sources, understand their properties, and detect compact objects can be time-consuming for these large catalogs. Automated spectroscopic catalogs (e.g., Corral et al., 2015) can be useful, though these also have limits (the selection of models, and the time and storage required for the fitting). Hardness ratios, which compare the number of X-ray photons in the soft X-ray band (say, 0.5-2 keV) and the hard X-ray band (say, 2-6 keV), can also be used to estimate the properties of the X-ray source (Yokogawa et al., 2000; Prestwich et al., 2003; Brassington et al., 2008), and quantile analysis provides an alternative with substantial benefits (Hong et al., 2004). However, hardness ratios do not have strong discriminatory power for faint (and sometimes even moderately bright) sources, which can lead to inaccurate classification of X-ray sources (e.g., Hebbar et al., 2019). Thus, we need efficient ways to identify the X-ray sources detected in large X-ray surveys. CCD detectors, which are used in most X-ray telescopes, have an energy resolution of \(\Delta E\sim\) 0.1-0.2 keV (Fano, 1947; Struder et al., 2001; Predehl et al., 2021). Such moderate-resolution X-ray spectra are capable of detecting and resolving emission and absorption lines from elements such as neon (Ne), magnesium (Mg), silicon (Si), and iron (Fe). The coronae of active stars emit significant X-ray energy through the Ne-K & Fe-L lines. (Both Ne & Fe emit at \(\sim\) 1 keV. The dominant line among them depends on conditions in the stellar coronae. The comparison between the relative strengths of these lines is not important for the moderate-resolution spectra used here.) The spectra of SNRs also have bright emission lines from Mg, Si, S, etc. Most AGN have a continuum-dominated inverse Compton power-law spectrum from the hot corona. Some AGN also have additional soft energy components and fluorescent emission lines from the surrounding colder gas.
However, these line features are usually much fainter than the continuum component, except for the Fe-K line at \(\sim\) 6.4 keV (which can be redshifted) (Matt et al., 1997; Garcia et al., 2013). NSs typically have continuum X-ray spectra with no emission lines. Typical spectra of these sources are shown in Fig. 1. Figure 1: Example spectra of an isolated neutron star (4U 0142+61), active star (COUP 053445.2-052504), SNR (Cassiopeia A), and AGN (Centaurus A). We also show the approximate positions of the Fe-L, Ne-K, Mg-K, Si-K, S-K, and Fe-K emission lines. The spectra of active stars and SNRs are dominated by these lines, while those of AGN and NSs are largely continuous. AGN can sometimes show an Fe-K line. Thus, an ability to identify these emission lines and distinguish between continuum- and line-dominated X-ray spectra will allow us to understand the nature of the X-ray source. The large datasets used in astronomy, minimal privacy and ethical concerns in sharing data, etc., make astronomy research a great potential application for machine learning (ML). Machine learning algorithms have been used for calculating photometric redshifts (e.g., Carrasco Kind & Brunner, 2013), classification of galaxies and identification of AGN using optical observations (e.g., Cavuoti et al., 2014; Rozo et al., 2016; Chattopadhyay et al., 2019, etc.), classification of supernovae (Moller & de Boissiere, 2020) and modelling their X-ray spectra (Parker et al., 2022; Matzeu et al., 2022), identification of exoplanets (Shallue & Vanderburg, 2018), etc. Recently, Yang et al. (2021), Schneider et al. (2021), and Tranin et al. (2022) have used ML algorithms to identify the multiwavelength counterparts of X-ray sources in CSC v2.0, eFEDS, and XMM-Newton data, respectively, using properties such as angular separation, X-ray flux, X-ray hardness ratios, Gaia magnitude and colors, etc., and to classify the X-ray sources. While such methods work efficiently for sources far from the Galactic plane, it can be very time-consuming to find multiwavelength counterparts of X-ray sources in highly absorbed or crowded regions such as the Galactic bulge and globular clusters. We also anticipate that combining information from the X-ray spectra themselves along with multiwavelength information will provide more accurate source identification. ### Artificial neural networks Artificial neural networks (ANNs) are supervised learning algorithms that use hidden layers to learn non-linear representations of the data. In a classical feed-forward ANN, a weight matrix is applied to the input data, transforming its dimensionality. Then, we apply non-linear activation functions on this transformed input to calculate the values of nodes in the first hidden layer. This process can be repeated on the first hidden layer to calculate the next layer, and so on, and finally, the output layer. Typical activation functions include the Rectified Linear Unit (ReLU) and its variants, the sigmoid function, the softmax function, etc. The choice of activation functions depends on the problem and the nature of the output needed (see Bishop & Nasrabadi, 2007, for a detailed reference). In particular, using the sigmoid and softmax functions, which give values between 0 and 1, on the output layer gives a probabilistic interpretation for binary and multi-class classification. As non-linear classifiers, ANNs can learn complex relationships in the data and thus lead to better classification accuracy. The probabilistic nature of the output allows us to change the threshold based on our requirements. ANNs can also be modified for semi-supervised training through auto-encoders, and can use physical expressions for better interpretation of the results and the training process. Thus, ANNs can be adapted to a wide variety of problems. However, ANNs need a large dataset for training and are prone to overfitting. Thus, it is important that we use proper regularization to avoid spurious weights and test the trained model with a separate dataset that is not used for training (a test set).
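To make the setup concrete, a minimal PyTorch sketch of such a binary spectral classifier is shown below; the number of spectral channels and the layer widths are purely illustrative and are not the architecture adopted in this paper:

```python
import torch
import torch.nn as nn

n_bins = 64  # number of spectral channels after binning (illustrative)

# Feed-forward ANN: two ReLU hidden layers, dropout for regularization,
# and a sigmoid output interpreted as P(AGN); P(star) = 1 - P(AGN).
classifier = nn.Sequential(
    nn.Linear(n_bins, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(16, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()  # binary cross-entropy on the sigmoid output

# Classify one (normalized) spectrum; the 0.5 threshold can be adjusted.
spectrum = torch.rand(1, n_bins)
is_agn = classifier(spectrum) > 0.5
```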
The probabilistic nature of the output allows us to change the threshold based on our requirement. ANNs can also be modified for semi-supervised training through auto-encoders, and use physical expressions for better interpretation of the results and the training process. Thus ANNs can be adapted to a wide variety of problems. However, ANNs need a large dataset for training and are prone to overfitting. Thus, it is important that we use proper regularization to avoid spurious weights and test the trained model with a separate dataset that is not used for training (a test set). In this paper, we aim to develop ANN algorithms that distinguish different kinds of X-ray sources based on their moderate-resolution X-ray spectra from CCDs. We test if ANNs can differentiate young stars from AGN, in simulated and real Chandra X-ray Observatory data. We describe our methods of data generation and the setup of our analysis in SS2. We show the results of our classification and investigate the robustness of our ANN model in SS3. We discuss the implications of our results and its broader application in SS4. We summarize the results of our analysis and discuss future prospects in SS5. ## 2 Data Acquisition and Analysis We use ANNs as a supervised training algorithm to test their feasibility. Training and testing a complex ANN requires a large dataset of labelled sources. Thus, we rely on XSPEC fakeit simulations of the spectra of stars and AGN for the training. The _Chandra_ Orion Ultradeep Project (COUP, Getman et al., 2005) and the _Chandra_ Deep Field South (CDFS, Giacconi et al., 2002) surveys are ideal for extracting the properties of stars and AGN for the simulated spectra. Most of the sources in the COUP survey of a star forming region are young stars, while the majority of the sources detected in the CDFS are AGN. All the COUP observations were conducted in January 2003, thus removing the effects of the changing soft energy response of _Chandra_ ACIS. The CDFS observations were taken during the years 2000, 2007 and 2010. We only consider the observations in year 2000 for our analysis as they are the ones used in the CDFS AGN spectral properties catalog (Tozzi et al., 2006). This also ensures a better soft energy response for our AGN data. With a few hundred sources detected in each of the COUP and CDFS surveys, they provide us samples of observed spectra to test our results. We use the X-ray sources in the _Chandra_ Orion Ultradeep Project (COUP, Getman et al., 2005) survey for our ensemble of young stars. The COUP survey used the ACIS-I instrument of _Chandra_ for an exposure time of 838 ks and detected 1616 X-ray point sources, that are mostly active young stars, and modeled them using one or two-component thermal plasma spectra. We utilize their catalog of objects identified as stars, and do not consider sources that are marked uncertain. The COUP catalog specifies the best-fitting parameter values of these models for each COUP spectrum. We use the distribution of these properties to extract the general properties of X-rays from young stars. Since we are interested in getting the distribution of the properties of the COUP spectra, we also omit sources with marginal (null hypothesis property from \(\chi^{2}\) is between 0.05 and 0.005) and poor (null hypothesis probability from \(\chi^{2}\) is less than 0.005) fits. 
These poor fits could be due to contamination from surrounding sources, high absorption leading to poor fitting of lines, spectra more complicated than that of a thermal-equilibrium plasma, etc. Using the membership information of the COUP catalog provided by Getman et al. (2005a), we only select sources within the Orion Nebula Cluster. This gives us a set of 1045 sources. We also remove sources with flags for: deviations in the emission lines (presence of narrow spectral features not explained by the model, probably due to different elemental abundances; 62 such sources), soft or hard excess (mostly due to poor subtraction of a non-uniform background around weak sources; 214 such sources), confusion from nearby sources (54 such sources), two components with different absorption column densities (6 such sources), or a poor fit (poor \(\chi^{2}\) statistics and/or based on visual examination; 89 such sources). Note that these sources are only removed for the creation of the distribution of spectral properties, not for testing our trained ML model. This filtering leaves us with 679 sources. Of these 679 sources, 406 were modeled with single plasma models and 273 were fit with two-temperature plasma models.

We use the distributions of \(N_{H}\) (absorption column densities), \(kT_{1}\), \(kT_{2}\) (temperatures of the plasma [the stellar atmosphere in this case]), and the emission measures of the components of these COUP X-ray spectral models to simulate the spectra of 100,000 young stars in our sample. We fix the abundance values to 0.3 times the solar values in accordance with Getman et al. (2005c). We show the distribution of parameters used to simulate our active star spectra in Fig. 2. From Figs. 2a & 2b, we notice that 90% of the stars have \(\log N_{H}\) (in cm\({}^{-2}\)) \(\in(20.82,22.68)\) and \(kT_{1}\in(0.43,3.3)\) keV. We use the tbabs*apec and tbabs*(apec+apec) models in XSPEC v12.11.1 and the fakeit command to simulate spectra with an exposure time of 1 megasecond (Ms). (The tbabs component models the absorption with Wilms et al. (2000) abundances, and apec models the X-ray emission from plasma in thermal equilibrium.) Tsujimoto et al. (2005) detected a 6.4 keV Fe fluorescence line in seven of the COUP stars. However, this fluorescent emission is only present in young stellar objects with significant disks, and is weak (usually \(<\)150 eV, and weaker than the nearby 6.7 keV Fe line). Therefore, we do not add this line to our simulated stellar spectra.

The properties of the AGN X-ray spectra are selected from the _Chandra_ Deep Field South AGN spectral properties catalog (CDFSAGNCXO, Tozzi et al., 2006). This catalog lists the properties of 321 high-redshift AGN in the 1 Ms _Chandra_ Deep Field South (CDFS) survey. Most of these sources were fit with an absorbed power-law model (Compton thin or 'C-thin' model). For 8 sources, the fit included an additional soft power-law component (same slope as the absorbed power-law but no absorption; soft component or 'Soft-C' model), and 14 AGN were fit with a reflection-dominated model (Compton thick or 'C-thick' model). Tozzi et al. (2006) fixed the Galactic absorption (from our Galaxy) to \(N_{H,Gal}\) = 8 \(\times\) 10\({}^{19}\) cm\({}^{-2}\). Accordingly, we use the C-thin model for 93%, the C-thick model for 4%, and the Soft-C model for 3% of the 100,000 artificial AGN spectra that we generated. For models where the intrinsic absorption of the host galaxy could not be constrained [i.e. where Tozzi et al.
(2006) reported an intrinsic absorption column density \(N_{H,int}=0\)], we use a value of 10\({}^{19}\) cm\({}^{-2}\). Note that the effects of Galactic absorption and the response of _Chandra_ dominate the soft spectra of these AGN.

Fe line emission has been detected in 34% of well-detected CDFSAGNCXO AGN with spectroscopic redshifts, with equivalent widths between 100-3000 eV (Liu et al., 2017). Since we desire our ML algorithms to account for the possibility of Fe-K emission in the AGN spectra, we introduce an Fe-K line in 50% of our simulated AGN spectra. This may or may not be representative of the AGN population in the CDFS sample or across the universe (it is unclear, since our modelled equivalent-width distribution reaches down to small values that may not be detected in, e.g., the CDFS sample), but it ensures that AGN with and without lines are represented in our sample. Our goal is that our model can identify AGN with or without the Fe-K line. The equivalent width of the Fe-K emission line was assumed to follow a uniform distribution between 100-3000 eV. The position of the Fe-K line is calculated based on the chosen redshift for each AGN and a rest-frame energy of 6.4 keV.

We show the properties of our AGN sample in Fig. 3. We notice that 90% of the sources have \(\log N_{H}\) (in cm\({}^{-2}\)) \(\in(19.05,24.2)\). Among sources where \(N_{H}\) could be properly constrained, the range is (21.37, 24.2), with \(z\in(0.01,2.56)\) and \(\Gamma\in(1.2,2.2)\). We use the XSPEC models cflux*tbabs*ztbabs*pegpwrlw, tbabs*pegpwrlw + cflux*ztbabs*pegpwrlw, and cflux*tbabs*pexrav for the C-thin, Soft-C and C-thick AGN, respectively (we use cflux on the absorbed flux, as the CDFSAGNCXO catalog only reports the absorbed flux values), to generate an artificial AGN spectral sample with an exposure time of 1 Ms. The tbabs and ztbabs components model the absorption from the Galaxy (fixed to 8\(\times\)10\({}^{19}\) cm\({}^{-2}\) in the direction of the CDFS sample) and the redshifted intrinsic absorption in the host galaxy, respectively. The pegpwrlw component models the power-law emission from the AGN. We add an additional tbabs*pegpwrlw component to model the soft component in Soft-C AGN. In these AGN, we use the same power-law indices for the two components, in accordance with the spectral fitting in Tozzi et al. (2006). We use the pexrav model to generate the reflected AGN spectra in C-thick AGN. We add an additional tbabs*ztbabs*gaussian component for AGN with an Fe-K line.

We cross-match the X-ray positions of the COUP and CDFSAGNCXO sources with the CSC v2.0 catalog using a search radius of 3.0\({}^{\prime\prime}\), and download the spectra from the COUP and CDFS observations. We are able to retrieve the spectra of 1373 stars out of the 1616 COUP sources, and 296 AGN out of the 321 CDFS sources. (We fail to extract all sources because the CSC pipeline masks the corners of the ACIS detectors, where the background is very high, to minimize the detection of erroneous sources.) The CDFS observations were taken in four epochs: 1 Ms between 1999-2000, 1 Ms in 2007, 2 Ms in 2010, and 3 Ms in 2014. Since the soft-energy response of _Chandra_ ACIS has been degrading, we only use the 1999-2000 observations in this paper. These are also the observations used by Tozzi et al. (2006) to study the AGN properties. In order to generate the artificial spectra of our sample, we use the response matrix and effective area files of the cross-matched COUP and CDFS sources.
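A minimal sketch of this simulation step using PyXspec (the Python interface to XSPEC) is given below. This is an illustrative reconstruction, not the script used for the paper: the uniform draws stand in for the empirical COUP/CDFS parameter distributions (over the 90% ranges quoted above), the file names and normalisations are placeholders, and, for simplicity, the Fe-K Gaussian sits behind the same absorbers rather than being a separate tbabs*ztbabs*gaussian component. A working HEASOFT/PyXspec installation is assumed.

```python
import numpy as np
from xspec import AllData, FakeitSettings, Model

rng = np.random.default_rng(0)

def fake_star():
    """One young-star spectrum: absorbed single-temperature thermal plasma."""
    m = Model("tbabs*apec")
    m.TBabs.nH = 10**(rng.uniform(20.82, 22.68) - 22.0)  # nH in 1e22 cm^-2 units
    m.apec.kT = rng.uniform(0.43, 3.3)                   # keV
    m.apec.Abundanc = 0.3                                # 0.3 solar abundances
    m.apec.norm = 1e-4                                   # placeholder normalisation
    fs = FakeitSettings(response="coup_src.rmf", arf="coup_src.arf",
                        exposure=1.0e6, fileName="fake_star.pha")  # 1 Ms
    AllData.fakeit(1, [fs])

def fake_cthin_agn():
    """One Compton-thin AGN; an Fe-K line is added for half the sample."""
    z = rng.uniform(0.01, 2.56)
    m = Model("tbabs*ztbabs*(pegpwrlw + zgauss)")
    m.TBabs.nH = 8e19 / 1e22                             # Galactic column (CDFS)
    m.zTBabs.nH = 10**(rng.uniform(21.37, 24.2) - 22.0)  # intrinsic absorption
    m.zTBabs.Redshift = z
    m.pegpwrlw.PhoIndex = rng.uniform(1.2, 2.2)
    m.zgauss.LineE = 6.4                                 # rest-frame energy, keV
    m.zgauss.Redshift = z                                # observed at 6.4/(1+z)
    m.zgauss.norm = 1e-6 if rng.random() < 0.5 else 0.0  # line in ~50% of AGN
    fs = FakeitSettings(response="cdfs_src.rmf", arf="cdfs_src.arf",
                        exposure=1.0e6, fileName="fake_agn.pha")
    AllData.fakeit(1, [fs])

fake_star()
fake_cthin_agn()
```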
We show the mean spectra of these sources in Fig. 4. We only used sources with net counts greater than 100 in Fig. 4 to reduce the noise. We also show the distribution of the net counts and the background contribution of these sources in Figs. 5 & 6.

Figure 2: Distribution of the parameters used to simulate the spectra of stars for training our ANN. Since the Orion nebula is close to the Galactic plane, \(N_{H}\) is high. Most stars have \(kT<2\) keV. The few sources with \(kT\approx 14\) keV are likely to be heavily absorbed stars where emission lines could not be fit (and a higher-temperature plasma could fit the relatively hard spectra). We see some correlation between \(kT_{1}\) and \(kT_{2}\), and a strong correlation between \(EM_{1}\) and \(EM_{2}\). Since we match the fluxes of stars and AGN, specifying the emission measure values only contributes to maintaining the ratio of emission from the two plasmas.

We note a few caveats of using the COUP and CDFS surveys. Both surveys have exposure times of \(\sim 1\) Ms, much longer than the exposure times of typical _Chandra_ observations. Thus, the net counts of sources in COUP are higher than those of typical stars, and the faint AGN in the CDFS have larger background-to-net count ratios than most detected AGN. From Fig. 5, we notice that the COUP stars are \(\sim 10\) times brighter than the CDFS AGN, implying that the spectra of our sample stars will have a higher signal-to-noise ratio than our sample AGN. Therefore, we use the flux distribution of the CDFS AGN in the 0.5-10 keV regime to simulate the spectra of stars, so that both AGN and stellar spectra have similar signal-to-noise ratios. We do this to check if our ML method is inherently better at picking out AGN or stars. However, this does not imply that all our sources have the same noise: the \(\sim 2\) orders of magnitude range in the flux of the CDFS AGN ensures that we can check our performance with varying net counts and background contributions.

We attempt to analyze how the background can affect the performance of our methods by simulating three kinds of spectra for AGN and stars: without background, with the observed background, and with a reduced background rate (i.e., we increased the flux of the sources by a factor of 10 and reduced the exposure to 100 ks). We consider a reduced background because the targets we selected are unrepresentative of typical _Chandra_ observations (as noted above), so the reduced-background analysis will be more helpful for estimating potential future performance.

Figure 3: Properties of the AGN sample used for training and testing our ANN.

### Setting up the ANN

We incorporate a sequential artificial neural network (ANN) model using TensorFlow (Abadi et al., 2015) for our primary analysis. In this model, we add one hidden layer with ten nodes (we also tried other configurations, but found no significant increase in the accuracy with a higher number of nodes/layers) and a Rectified Linear Unit (ReLU) activation function (we show the neural network architecture in Fig. 7). The output layer has a sigmoid activation, ensuring the probabilistic interpretation of the output values. As the problem of classifying the X-ray spectra is essentially a Bernoulli problem (i.e.
an X-ray spectrum could be from an AGN with probability \(p\) and from a star with probability \(1-p\)), we use a binary cross-entropy loss for training. The binary cross-entropy loss is defined by

\[H_{p}(D)=-\frac{1}{N}\sum_{i=1}^{N}\left[y_{i}\log p(y_{i})+(1-y_{i})\log(1-p(y_{i}))\right], \tag{1}\]

where \(H_{p}(D)\) signifies the loss calculated over the dataset \(D\), \(N\) is the number of input data, \(y_{i}\in\{0,1\}\) (we chose 0 for stars and 1 for AGN), and \(p(y_{i})\) is the probability that \(y_{i}=1\). We use the Adam optimization algorithm (Kingma & Ba, 2014) for training the model, since it results in less computation time and faster convergence. We fix the L1 regularization value to \(\lambda=0.001\) to avoid over-fitting of the data (we tested various values). We consider the 0.3-8.0 keV energy interval while training our ANN classification model, to limit the background contribution (not to be confused with the 0.5-10 keV flux values reported in Tozzi et al., 2006 and used in our simulations). In order to limit the Poisson noise, we only use simulated spectra with more than 100 net counts in the 0.3-10 keV energy range for training (this net-count criterion is not applied when simulating sources, but only when training and testing the ANN). We use 80% of the spectra for training the model and the remaining 20% as our test set. We perform 10-fold cross-validation while training the model, to ensure that we do not overfit the data. We then apply the trained ANN model to classify the spectra of the test set and evaluate our performance. We perform 20 such iterations to study the behaviour of our ANN model.

Figure 4: Normalized (divided by total counts) mean spectra of the observed COUP stars (solid blue) and CDFS AGN (solid purple). For the purpose of plotting, we only show the mean spectra calculated from sources with net counts greater than 100. From the figure, we see that the spectra of stars show a prominent Ne-K/Fe-L emission line at \(\sim\)1 keV, along with slight hints of Mg-K (\(\sim\)1.3 keV), Si-K (\(\sim\)1.8 keV), and S-K (\(\sim\)2.3 keV), while the AGN show no such lines. Since the CDFS AGN have lower flux than the COUP stars in general, we notice that the spectra of AGN have more noise (both background and Poisson) than the stars.

Figure 5: Histogram of the net counts of the observed COUP stars (solid blue) and CDFS AGN (solid purple). We have normalized the histograms such that the area under both histograms is unity (to avoid distortion due to there being more COUP sources than CDFS AGN). We notice that the COUP stars are typically \(\sim\) 10 times brighter than the CDFS AGN.

Figure 6: Normalized histogram of the ratio of background-to-net counts for the observed COUP stars and CDFS AGN. Since the CDFS AGN are mostly faint, most of them have a very high background contribution.

For several ML applications, the input is standardized by subtracting from each feature the mean of that feature across all classes, and dividing the difference by the standard deviation of that feature. In general, this allows for better fitting of the weights. We also performed the ANN classification with standardized input, but the improvement in classification accuracy is \(\lesssim 1\%\). In astrophysics, having unequal numbers of sources in each source class is common, which can lead to the mean being biased toward one particular class. Therefore, we choose to present results that do not use standardized inputs.
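A minimal TensorFlow/Keras sketch of this setup is shown below (one possible reading of the architecture: the text describes a single sigmoid output, while Fig. 7 describes an equivalent two-node softmax output). Array names, epoch counts, and the validation split are illustrative.

```python
import tensorflow as tf

# One hidden layer with ten ReLU nodes and L1 regularization (lambda = 0.001),
# and a sigmoid output interpreted as P(AGN); 527 input channels correspond
# to the 0.3-8.0 keV ACIS spectra.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(527,)),
    tf.keras.layers.Dense(10, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l1(0.001)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",              # Adam (Kingma & Ba, 2014)
              loss="binary_crossentropy",    # Eq. (1), with 0 = star, 1 = AGN
              metrics=["accuracy"])

# Hypothetical usage with an (n_spectra, 527) array X and 0/1 labels y:
# model.fit(X_train, y_train, epochs=50, validation_split=0.1)
# p_agn = model.predict(X_test).ravel()      # probabilities; threshold at 0.5
```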
We analyze the results of our classification using the following metrics (for the purpose of this paper, we consider AGN as positives and stars as negatives):

* **Accuracy:** The total fraction of correct predictions across the entire set. \[\text{Accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{FN}}\] (2)
* **Recall:** The ratio of true positives to the total number of positives in the dataset. In our case, this signifies the fraction of AGN identified correctly. \[\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}}\] (3)
* **True Negative Rate (TNR):** The ratio of true negatives to the total number of negatives in the dataset, i.e. the fraction of stars identified correctly. \[\text{TNR}=\frac{\text{TN}}{\text{TN}+\text{FP}}\] (4)
* **Precision:** The ratio of true positives to the number of samples that have been classified as positive. In our case, it is the fraction of true AGN among the spectra classified as AGN. \[\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}}\] (5)
* **Negative Predictive Value (NPV):** The ratio of true negatives to the number of samples that have been classified as negative, i.e. the fraction of true stars among the spectra identified as stars. \[\text{NPV}=\frac{\text{TN}}{\text{TN}+\text{FN}}\] (6)

In the above equations, TP, TN, FP, and FN stand for the true positives (AGN identified correctly), true negatives (stars identified correctly), false positives (stars identified as AGN), and false negatives (AGN identified as stars), respectively.

We also evaluate the performance of the trained ML model on the observed data. This set consists of all COUP stars (including sources that have been flagged for deviations in emission lines, poorly or marginally fit spectra, soft/hard excess, etc.; only extragalactic sources are removed) and CDFS AGN that have more than 100 counts. Since there are only 108 such CDFS AGN, as compared to 679 COUP stars, we randomly select 108 COUP stars so that both classes are equally sampled. Note that we select a new random sample of COUP stars for each of the 20 iterations to avoid any hidden biases.

Figure 7: Architecture of the ANN model. The input layer consists of 527 nodes, corresponding to the number of channels in the _Chandra_ ACIS-S spectra in the energy range 0.3–8.0 keV. We use one hidden layer with 10 nodes, which are fully connected to the input layer. The hidden layer uses a ReLU activation and L1 regularization with \(\lambda=0.001\). These hidden nodes are then used to calculate the values in the two output nodes, which use a softmax activation function and signify the probability of the source being an active star or an AGN.

## 3 Results

We first analyze the spectra of AGN and stars without including the background. We show the mean spectra of stars and AGN in Fig. 8. From the figure, we notice that AGN in general have harder spectra than their stellar counterparts (note that some AGN in our sample also have soft spectra, as seen from Fig. 3a). In our case this is further amplified due to their higher (intrinsic) \(N_{H}\) and the presence of redshifted Fe-K lines in the CDFS AGN spectra. We also notice that the spectra of stars have dominant Ne, Fe, Mg, Si, and S lines. The feature at \(\sim\)2 keV in the mean AGN spectrum is due to a combination of \(N_{H}\), \(z\), the position of the Fe-K line, and the response of the _Chandra_ ACIS detector, and is not a real emission line. We show the distribution of the net counts in the 0.3–8.0 keV energy range in Fig. 9.
We see that the stars have higher count rates than the AGN, on average by a factor of \(\sim 5\), even though their fluxes in the 0.5-10 keV interval are similar. This is because of the softer spectra of stars in comparison to AGN (i.e., we match the X-ray fluxes of stars and AGN in the 0.5-10 keV range while simulating spectra, but consider 0.3-8.0 keV for the classification and analysis). Applying our ANN model to this dataset gives us an overall accuracy of \(\sim\)89%, recall of \(\sim\)83%, TNR of \(\sim\)92%, precision of \(\sim\)86%, and NPV of \(\sim\)91%. However, when this trained model is used to classify the set of observed spectra, we get an accuracy of only \(\sim\)81%. The decreased accuracy is due to the poor performance of the classification algorithms at low net counts, where the contribution of the background cannot be ignored.

Next, we explore the effect of background on our simulations. For this purpose, we simulate the spectra of AGN and stars using background levels similar to those of the COUP and CDFS sources. Fig. 10 shows the histogram of the ratio of background counts to net counts for the simulated sources in the 0.3-8.0 keV energy range. (The background counts have been scaled such that they correspond to the source extraction region.) We notice that the stars and AGN in the COUP and CDFS catalogs have a high background in general, with many sources having background counts \(>\) 0.1 times their net counts. Our classification accuracy is \(\sim\) 89% on these simulated spectra. When we apply the trained model to the observed spectra of the CDFS and COUP sources, we get a similar classification accuracy of \(\sim\)89%. This performance is better than that of the ANN model trained on simulated spectra without background.

Figure 8: Normalized mean of the artificially generated AGN and star spectra simulated without including the background. These spectra are similar to those in Fig. 4, but less noisy, since they are the mean of 100,000 spectra. With less noise, the emission features in the spectra of stars are clearer. The emission-line-like features in the AGN spectra are due to a combination of the intrinsic absorption columns, redshifts, and the _Chandra_ response. Redshifted Fe-K emission lines cause the appearance of bumps in the mean AGN spectra in the 2–4 keV energy range.

Figure 9: Distribution of total counts, in the 0.3–8.0 keV energy range, in the simulated sample of AGN and stars generated without including the background. Despite equating the 0.5-10 keV fluxes of simulated stars and AGN, stars have higher X-ray photon counts since they have softer spectra than AGN.

Figure 10: Distribution of the background-to-net count ratio in the simulated AGN and star spectra when the observed background is included. The plot only represents spectra with net counts greater than 100 in the 0.3–8.0 keV energy range. Many of these spectra have a high contribution from the background, as these sources are fainter than typical sources detected by _Chandra_.

The high background in our simulated flux is due to the low fluxes of the CDFS AGN (as well as the off-axis positions of a substantial portion of the sources). In general, most X-ray sources detected by _Chandra_ have a lower background contribution than these AGN. For instance, a typical _Chandra_ observation is of order 30 ks, rather than the 1 Ms of the CDFS, reducing the typical background by a factor of 33. However, more _Chandra_ sources with \(>\)100 counts can be found in longer observations; we use 100 ks observations as our standard as a compromise.
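As a back-of-the-envelope check (ours, not from the paper) of the reduced-background construction described earlier: raising the source flux by a factor of 10 while cutting the exposure from 1 Ms to 100 ks keeps the expected net counts fixed, but background counts scale with exposure alone and therefore drop tenfold. The rates below are placeholders.

```python
net_rate = 1e-3   # source count rate (counts/s); placeholder value
bkg_rate = 1e-4   # background rate in the extraction region; placeholder value

deep = {"net": net_rate * 1e6, "bkg": bkg_rate * 1e6}          # 1 Ms survey
shallow = {"net": 10 * net_rate * 1e5, "bkg": bkg_rate * 1e5}  # 10x flux, 100 ks

print(deep["net"], shallow["net"])   # 1000.0 1000.0  -> same net counts
print(deep["bkg"] / shallow["bkg"])  # 10.0           -> background reduced x10
```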
To consider the performance of our classification algorithms for typical stars and AGN, we thus consider simulated spectra of stars and AGN where the background contribution has been reduced by a factor of 10. We show the distributions of the net counts and of the background-to-net count ratio in Figs. 11 & 12, respectively. These values of background are more typical of X-ray sources detected by _Chandra_. Applying our ANN model to these spectra gives an overall accuracy of \(\sim\)92%, recall and precision of \(\sim\)91%, and TNR and NPV of \(\sim\)92.5%. Applying the trained model to the observed spectra gives an accuracy of \(\sim\)91%. One of the reasons why the model trained on the reduced background performs better on the observed data, as compared to the model trained on the observed background, could be that the lower noise from the background contribution helps better constrain the values of the best-fit model. We summarize the results of our classification on the simulated and the observed data in Table 1.

We also look at the certainty with which our model identifies AGN vs. stars. Fig. 13 shows the distribution of probabilities of the input source being an AGN, as estimated by our model. From the figure, we note that our model is able to distinguish most AGN and stars with very high confidence. We used a cutoff of 50% to classify sources as AGN/stars in this paper. For situations where we need a purer sample of AGN, we can use a higher cutoff, or we can use a lower cutoff if we want to lower the number of AGN that could be identified as stars.

| Type of Dataset | Testing data | Accuracy | Recall | TNR | Precision | NPV |
|---|---|---|---|---|---|---|
| No background | Simulated spectra | (88.9 ± 0.9)% | (83 ± 5)% | (92 ± 3)% | (86 ± 3)% | (91 ± 2)% |
| | Observed spectra | (81 ± 3)% | (67 ± 7)% | (93 ± 5)% | (92 ± 6)% | (74 ± 4)% |
| Observed background | Simulated spectra | (88.7 ± 0.6)% | (86 ± 3)% | (91 ± 2)% | (87 ± 2)% | (90 ± 2)% |
| | Observed spectra | (89 ± 3)% | (84 ± 5)% | (95 ± 2)% | (94 ± 3)% | (85 ± 4)% |
| Reduced background | Simulated spectra | (91.7 ± 0.8)% | (91 ± 2)% | (92 ± 2)% | (91 ± 2)% | (93 ± 2)% |
| | Observed spectra | (91 ± 2)% | (90 ± 3)% | (92 ± 4)% | (91 ± 3)% | (91 ± 3)% |
| Reduced background, net counts > 1 | Simulated spectra | (86 ± 1)% | (86 ± 3)% | (85 ± 3)% | (85 ± 2)% | (86 ± 2)% |
| | Observed spectra | (82 ± 3)% | (77 ± 4)% | (87 ± 4)% | (86 ± 4)% | (80 ± 3)% |

Table 1: Performance of our machine learning model on different datasets. The error bars correspond to one standard deviation.

Figure 11: Distribution of net counts of star and AGN spectra, simulated with a reduction of the background contribution by a factor of 10.

Figure 12: Distribution of the background-to-net count ratio in the AGN and star spectra simulated with the reduced background. The y-axis is normalized such that the figure shows a density function. This distribution is more typical of the X-ray sources detected by _Chandra_.

### Variation of performance with properties of sources
Since the model trained on the reduced-background spectra, which represent typical _Chandra_ sources, gives us the best results for both simulated and observed spectra, we use it to analyze whether our classification model preferentially selects stars or AGN with certain properties. We show the performance metrics with respect to different properties of stars and AGN in Fig. 14.

We first analyze the performance with respect to the net counts and the background contribution (background-to-net count ratio). The performance improves as the net counts increase: as the net counts increase, the Poisson noise decreases, and hence the emission lines can be more easily identified. Similarly, as the background contribution decreases, the background noise can be distinguished from the true emission lines, leading to better differentiation of stars and AGN. In general, we get good results (better than 90%) for sources with net counts greater than 200 and/or a background-to-net count ratio smaller than 0.05.

We then study the performance with respect to the absorption column densities used for the AGN and star spectra. We see that the true negative rate (i.e., the fraction of stars detected correctly) decreases steeply for \(N_{H}\gtrsim 10^{22}\) cm\({}^{-2}\). This is because high absorption column densities block the soft X-rays, which are most important for the detection of the Ne, Fe-L, Mg and Si lines that identify the spectra as stars. We expect such high \(N_{H}\) only for stars very close to the Galactic plane and those in dense nebulae. The detection of AGN suffers only slightly with increasing \(N_{H}\). This is because as \(N_{H}\) increases, the count rate at softer energies decreases, thus increasing the Poisson noise, which could be confused with lines. At \(N_{H}\gtrsim 10^{24}\) cm\({}^{-2}\), the performance improves because of the reprocessed X-ray emission from the Compton-thick model we use in this range.

The detection of stars also decreases with an increase in the plasma temperature used to model the X-ray emission. For high temperatures (\(kT>2\) keV), the continuum emission dominates while the lines fade, making it harder to detect the line emission. However, we expect that most stars have \(kT<2\) keV (e.g. Gudel, 2004); higher temperatures are mostly due to improper fitting of highly absorbed sources.

The detection of AGN varies with the hardness of the X-ray emission. Our algorithm seems to detect harder AGN at a higher rate. This is expected, since most CDFS AGN have \(\Gamma=1.75\pm 0.02\). We notice that the performance is better on Compton-thick AGN, and on AGN with an additional soft component, as compared to Compton-thin AGN. The detection fraction of AGN increases slightly with the redshift of the AGN. The decrease in recall at \(z>4\) may be due to very few data points in these bins. We notice that our model detects AGN with an Fe-K line more efficiently than those without Fe-K emission. This can be deduced from the increasing recall as the equivalent width of the Fe-K emission line increases. The change in recall with the position of the Fe-K emission line seems to be consistent within the error bars.

### Performance at low counts

We also test the performance of the ML algorithms on sources with reduced background and net counts less than 100. For this purpose, we select all sources with net counts \(>1\) in our simulated and observed datasets to train and test our performance. The overall accuracy is 86% on the simulated sample and 82% on the observed spectra.
The recall, TNR, precision and NPV have values similar to the accuracy. We list the values of the performance metrics for the observed and simulated data in Table 1. We also show the variation in accuracy with net counts in Fig. 15. The accuracy is very poor (\(\lesssim 50\%\)) for net counts \(\lesssim 8\). The accuracy is \(\sim 60\%\), \(\sim 74\%\), and \(\sim 83\%\) for net counts of \(\sim 8-21\), \(\sim 21-55\), and \(\sim 55-146\), respectively. For these low-count sources, including the information from their positions in the sky and the detection of multiwavelength counterparts and their properties, along with the classification probabilities from the X-ray spectra alone, will help in improving the overall classification accuracy.

Figure 13: Distribution of probabilities of the source being an AGN, as calculated by our trained model. Notice that most of the stars and AGN have been correctly identified with a certainty of more than 95%. In this article we have used a cutoff of 50% to differentiate stars and AGN.

Figure 14: Variation in the performance of our ANN model for different properties of the sources. In each case the histogram represents the number of spectra in the given binning of the property. The y-axis on the right corresponds to the values of the confusion matrix (1-Accuracy, 1-Recall/TNR, etc.).

Figure 15: Change in the accuracy with net counts for simulated spectra with net counts \(>1.0\). From the figure, we see that our algorithm does not perform well on sources with 1–10 net counts (accuracy marginally better than 50%). The accuracy is \(\gtrsim 70\%\) for sources with net counts \(\gtrsim 20\) and \(\gtrsim 80\%\) for net counts \(\gtrsim 70\).

### Weights applied by ANN on energy bins

Since we use a simple ANN model with only one hidden layer, the weights assigned to the input layer can be interpreted as the relative importance of each energy channel for classifying the source as a star or an AGN. We show the weights assigned to each energy channel in Fig. 16 (a short extraction sketch is given below). From this figure, we see that the X-ray spectra in the energy range \(\sim 0.8\)-1.4 keV are strongly selected, indicating the preference given to the Ne, Fe, Mg, and Si lines. The bumps in the weights at energies \(\gtrsim 2\) keV are probably due to the model searching for the redshifted Fe-K line; we notice that training the ANN model using only the spectra without an Fe-K line reduces the weights in this regime. Thus, our model indeed tries to identify emission lines in the spectra to classify the source, and does not use the hardness ratio of the spectra alone.

Figure 16: Weights given to each energy channel in the first layer of a trained ANN model. We notice that the model assigns larger weights (in magnitude) to the spectra around the positions of the Ne, Fe, Mg, and Si lines. The different curves correspond to the weights of the ten nodes in our ANN.
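The per-channel weights of Fig. 16 can be read directly off a trained Keras model such as the one sketched earlier; a minimal sketch is below. It assumes the variable `model` from the earlier sketch (already trained, with the hidden Dense layer as the first entry of `model.layers`), and the linear channel-to-energy mapping is hypothetical.

```python
import numpy as np

# Kernel of the hidden layer: shape (527, 10), one column per hidden node
W1 = model.layers[0].get_weights()[0]

# Hypothetical linear channel-to-energy mapping over the 0.3-8.0 keV band
energies = np.linspace(0.3, 8.0, W1.shape[0])

# Channels carrying the largest absolute weight in any of the ten hidden nodes
top = np.argsort(np.abs(W1).max(axis=1))[::-1][:10]
print(np.sort(energies[top]))  # expected to cluster near the ~0.8-1.4 keV lines
```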
### Applying the trained ANN model to observed data

From the values in Table 1, we see that while the ANN models trained on the data simulated with the observed background or with the reduced background yield only slightly poorer performance on the observed data as compared to that on the simulated spectra, the model trained on spectra without background performs very poorly on the observed spectra. Comparing the performance of these models with respect to the net counts and the background contribution of the observed spectra, shown in Fig. 17, gives us more insight into this behavior. We see that the performance of the ANN model trained on the spectra without background contribution degrades steeply at low net counts (\(\lesssim 1000\)) or when the ratio of background-to-net counts is high (\(\gtrsim 0.02\)). This is because the contribution of the background is high in this regime; this background noise can be confused with emission lines, leading to many AGN being classified as stars. Looking at the performance of the ML models trained with background (observed and/or reduced) with respect to the background-to-net count ratio also allows us to understand the slightly poorer overall accuracy on the observed data. Comparing Fig. 17d and Fig. 14d, we see that the observed AGN have a much higher background than the simulated AGN. Both plots show that the accuracy is greater than 90% for background counts \(\lesssim 5\%\) of the net counts from the source.

### Using ML to detect extragalactic sources in COUP

Using the ANN model trained on the data simulated with the reduced background, we try to identify the extragalactic sources detected in the COUP catalog. Among the sources marked as extragalactic in Getman et al. (2005a), 63 sources have net counts greater than 100. Our ANN model is able to identify \(\sim\)40 of these 63 extragalactic sources. This corresponds to a recall of \(\sim\)63%, which is much lower than the recall on the CDFS spectra. This poorer performance arises because most AGN in the COUP catalog are heavily absorbed and have a high background (60% of the extragalactic sources have background-to-net count ratios \(\gtrsim\)0.3, and 84% have ratios \(\gtrsim\)0.1).

### Changing response of Chandra

The response function of _Chandra_ has been changing over the years, as the effective area at soft energies degrades. Thus the same star could have different emission-line strengths (especially \(<\) 2 keV) across the years. We generate 10,000 artificial spectra each of stars and AGN, with the ACIS-I aimpoint responses from the _Chandra_ Cycle 25 call for proposals, to check if our model could be applied if similar _Chandra_ observations were taken in 2023. We find that our model (trained on the COUP responses) can classify the simulated stars having \(>\)100 net counts with an accuracy of \((84\pm 3)\%\), retrieve \((77\pm 7)\%\) of the AGN (recall), and identify \((90\pm 3)\%\) of the stars (TNR); i.e., the performance in identifying stars is only slightly decreased, but that for AGN decreases substantially. We will explore the training of ML models with the changing response of _Chandra_ in future work.

## 4 Discussions

### Incorrect classification of Sources

From our results, we notice that the trained ANN model uses a combination of emission-line strengths (interpreted from Figs. 14g, 14h & 16) and the hardness of the spectra (Fig. 14i). Fig. 18 shows the mean spectra of stars and AGN that have been correctly/incorrectly identified.

Figure 17: Performance of the trained ML model with net counts, and with background contribution, on the observed spectra.

Figure 18: Mean normalized spectra of correctly identified stars and AGN, compared to those that were not classified properly. Notice that the stars that have been identified as AGN do not show strong emission lines, are heavily absorbed, and are harder than the stars that have been correctly identified.
From the figure, we notice that incorrectly classified stars show stronger absorption, show fainter emission lines, and/or are harder than correctly identified stars, i.e., their spectra are not line-dominated. AGN that have been identified as stars seem to show higher absorption and/or are softer than correctly identified AGN.

We further explore the misclassification of softer AGN. The poor recall of AGN with \(\Gamma>2.0\) arises because these AGN are affected more by increasing \(N_{H}\): for \(N_{H}\lesssim 10^{21}\) cm\({}^{-2}\), all AGN have similar recall (\(\sim\)98-99%), but at \(N_{H}\sim 10^{22}\) cm\({}^{-2}\), the recall for AGN with \(\Gamma>2.0\) decreases to \(\sim\)72%, while the recall for hard AGN with \(\Gamma<2.0\) is \(\sim\)96%. This can be explained as follows: classification of AGN with hardness similar to that of stars will mainly be based on the identification of soft X-ray emission lines, which can be confusing at high absorption. Our sample of soft AGN with \(\Gamma>2.0\) needs a net count of \(\gtrsim 3000\) photons, or a strong Fe line with equivalent width \(\gtrsim 1500\) eV, to be detected with a recall of \(>90\%\). The improvement in the detection of AGN with an increasing fraction of soft continuum emission also explains why our performance is better on AGN with a soft component and on Compton-thick AGN.

### Applications

Based on our results, our algorithm works very well on sources with sufficient X-ray photons to enable the detection of the Fe-L, Mg and Si lines against the background and Poisson noise. Based on the distribution of weights (Fig. 16) and the decreased accuracy of classification on stars with \(kT>2\) keV, we see that the trained ANN picks out the energies corresponding to emission lines and identifies line-dominated X-ray spectra. Thus, ANNs could provide an efficient and practical approach to classify serendipitous X-ray sources.

X-ray emission from most isolated NSs in our Galaxy can be fit with a power-law model with photon index \(\Gamma\approx 1\)-2, a blackbody of temperature 0.05-0.3 keV, or a combination of these, and does not show any lines (Pavlov et al., 2002). Those which are dominated by the power-law will then be similar to AGN without the Fe-K lines. Based on Fig. 14l, we expect that our model should be able to identify such NSs from chromospherically active stars in our Galaxy with a good recall (80-90% if they follow a similar distribution of net counts, column density, and background contribution). Our results indicate that the additional presence of softer components (e.g. a blackbody) does not affect our performance. Many neutron star X-ray spectra are dominated by a blackbody-like thermal component. Discriminating between blackbody-like spectra and soft thermal plasma spectra (e.g. APEC models with temperatures \(<\)2 keV) will be even easier, as these low-temperature plasma spectra are even more dominated by lines than the harder spectra.

X-ray binaries have spectra similar to those of AGN in the energy range of 0.1-10 keV. Thus our results can also be applied to identifying X-ray binaries. However, distinguishing X-ray binaries from AGN based on their spectra alone would be more challenging, and we would need to use information from the location of the source (X-ray binaries are primarily found along the Galactic plane and in globular clusters), X-ray variability, and multi-wavelength observations to properly differentiate X-ray binaries and AGN.
The presence of a fluorescent Fe line will also be important for distinguishing AGN at significant redshift from XRBs. Our results show that ANNs can detect the Fe line irrespective of the redshift. Another application of this method would be in identifying millisecond pulsars (MSPs) in highly absorbed regions like the Galactic center and bulge (where soft X-ray sources are obscured). MSPs have hard power-law spectra (\(\Gamma\sim 1\)) from the intrabinary shock between the NS and its companion, or from magnetospheric emission. Identifying the Fe-K line would allow us to distinguish MSPs from other accreting hard X-ray sources like cataclysmic variables. The X-ray spectra of SNRs can also be modeled by thermal plasma components with high metal abundances. Young SNRs (where the reverse shock has not reached the core) usually have a cooler component with \(kT<2\) keV, and strong Mg and Si lines. Thus ANN models that can pick out these lines will be able to differentiate young SNRs in distant galaxies from low-luminosity AGN (see for instance Hebbar et al., 2019).

### Current and future missions

In this work, we have focused on _Chandra_ observations with similar response functions. Our results in §3.6 indicate that we need more detailed modeling of the responses during training before we can combine the spectra from multiple _Chandra_ observations to increase the total number of counts. This will enable us to classify the faint sources more accurately. _XMM-Newton_ has \(\approx\) 5-10 times the effective area of _Chandra_ at 1 keV (depending on the year), but also has a higher background for point-source extraction. Thus for individual observations we would expect only similar or slightly improved performance. However, _XMM-Newton_'s soft-energy response is more stable than _Chandra_'s, implying that spectra from multiple observations of faint sources can be readily combined to increase the sensitivity for detecting emission lines. _eROSITA_ has similar characteristics to _XMM-Newton_ but has observed a wider portion of the sky, and we can expect similar results.

Our results also point to the characteristics of future missions that would be ideal for this work. The Resolve instrument onboard _XRISM_, set to be launched later this year, will have an excellent spectral resolution, allowing the detection of abundant emission lines in the spectra of X-ray sources. However, its effective area at soft energies is lower than _Chandra_'s, and it has a higher background (for point-source spectroscopy). Thus, studying faint serendipitous sources will be difficult with _XRISM_. Proposed X-ray missions like the AXIS probe, with \(\sim\)10 times the effective area of _Chandra_ and excellent angular resolution throughout the field, will allow for the detection of higher counts from faint sources, and will allow classifying them accurately. The higher effective area will also lead to the detection of more serendipitous sources, making machine learning methods much more efficient than traditional methods. X-ray missions like the Line Emission Mapper and the Athena X-IFU would be the most ideal for our work. Their large effective area in the soft X-rays (\(<\) 2 keV) and spectral resolution of a few electron-volts will allow for the detection of individual emission lines from elements with different ionization states, even in faint sources.
## 5 Conclusion

We discussed the application of a neural network model to distinguishing _Chandra_ X-ray spectra of AGN in the CDFS and stars in the Orion Nebula Cluster. We are able to achieve accuracy, recall and precision of \(\sim\)92% on the simulated spectra and \(\sim\)91% on the observed spectra. The algorithm is most efficient when the net counts in the 0.3-8.0 keV regime are \(\gtrsim\) 200, the background contribution is \(\lesssim\) 5%, the stars have absorption column densities \(N_{H}<10^{22}\) cm\({}^{-2}\) and \(kT\lesssim 2\) keV, and the AGN are hard, with power-law index \(\Gamma\leq 2\), and have a strong Fe-K emission line with equivalent width \(\gtrsim\) 0.5 keV. We also tested the robustness of our method against the changing soft-energy response of _Chandra_ ACIS, and found that the performance of our model, trained on the COUP and CDFS responses, decreased on spectra simulated with the responses from the Cycle 25 _Chandra_ Call for Proposals, especially in identifying AGN. Thus, combining multiple observations of faint sources would need more detailed modelling of the _Chandra_ responses. Applying these algorithms to _XMM-Newton_ observations will allow us to utilize its larger effective area and near-constant response to increase the signal-to-noise ratio for faint sources.

In this paper, we have only used the X-ray spectra for classifying the source. For sources with known optical/radio counterparts, we can use their position, variability, and multiwavelength properties, in addition to the output of our machine learning methods (in place of the commonly used hardness ratios), to improve the accuracy of the classification. X-ray catalogs have a unique property: while hundreds to thousands of sources have available spectra, only a few of them are labelled. Such datasets allow the use of semi-supervised learning, such as auto-encoders. These algorithms use clustering methods (i.e., the distribution of features in the unlabelled data), representations of the data in smaller dimensions, etc., along with the information from the classified/labelled sources, to efficiently train the ML model. Using these kinds of training algorithms would allow us to study the data without the need to simulate artificial spectra. We plan to extend this method to distinguish and identify other kinds of X-ray spectra, like those of neutron stars and SNRs, from stars and AGN. We would also like to check the performance of our model on sources that have slightly different properties than the ones used in training. These methods will be especially useful with future higher-spectral-resolution X-ray data from upcoming microcalorimeter missions, such as Athena.

## Acknowledgements

The authors thank Dr. Abram Hindle for very constructive suggestions. COH is supported by NSERC Discovery Grant RGPIN-2016-04602. This work has made use of data obtained from the Chandra Data Archive and the Chandra Source Catalog, version 2.0.

## Data Availability

The COUP source list and the membership catalogues used to extract the properties of the stars are available at the VizieR online databases (Getman et al., 2005d,b). The catalogue of AGN in the CDFS can also be found in the VizieR online catalogue (Tozzi et al., 2007). The _Chandra_ data themselves are available via the _Chandra_ Source Catalog, [https://cxc.cfa.harvard.edu/csc/](https://cxc.cfa.harvard.edu/csc/), and/or the _Chandra_ Data Archive, [https://cxc.cfa.harvard.edu/cda/](https://cxc.cfa.harvard.edu/cda/).
The spectra of simulated stars and their properties are available upon reasonable request to the first author.
2310.20471
Measure-dependent non-linear diffusions with superlinear drifts: asymptotic behaviour of the first exit-times
In this paper, we study a McKean-Vlasov SDE living in $\mathbb{R}^d$ in the reversible case, without assuming any type of convexity for the confinement or interaction potentials. A Kramers' type law for the exit-time from a domain of attraction is established. Namely, in the small-noise regime, the limit in probability of the first exit-time behaves exponentially. This result is established using the large deviations principle as well as an improved coupling method. Having removed the convexity assumption, this work is a major improvement of the previously known results for the exit-time problem, a review of which is provided in the paper.
Ashot Aleksian, Julian Tugaut
2023-10-31T14:04:45Z
http://arxiv.org/abs/2310.20471v1
Measure-dependent non-linear diffusions with superlinear drifts: asymptotic behaviour of the first exit-times

###### Abstract

In this paper, we study a McKean-Vlasov SDE living in \(\mathbb{R}^{d}\) in the reversible case, without assuming any type of convexity for the confinement or interaction potentials. A Kramers' type law for the exit-time from a domain of attraction is established. Namely, in the small-noise regime, the limit in probability of the first exit-time behaves exponentially. This result is established using the large deviations principle as well as an improved coupling method. Having removed the convexity assumption, this work is a major improvement of the previously known results for the exit-time problem, a review of which is provided in the paper.

**Key words:** Measure-dependent diffusions; Large deviations principle; Freidlin-Wentzell theory; Multi-well landscape

**2020 AMS subject classifications:** Primary: 60H10; Secondary: 60J60, 60K35

## 1 Introduction

Let us consider \((X_{t}^{\sigma},\ t\geq 0)\), a measure-dependent stochastic process (also called a McKean-Vlasov diffusion [16, 17]), solution of the following stochastic differential equation (SDE):

\[\mathrm{d}X_{t}^{\sigma}=\sigma\mathrm{d}B_{t}-\nabla V(X_{t}^{\sigma})\,\mathrm{d}t-\nabla F\ast\mu_{t}^{\sigma}(X_{t}^{\sigma})\,\mathrm{d}t\,,\quad X_{0}^{\sigma}=x_{\mathrm{init}}\in\mathbb{R}^{d}. \tag{1.1}\]

Here \((B_{t},\,t\geq 0)\) stands for the \(d\)-dimensional Brownian motion, \(\mu_{t}^{\sigma}\) denotes the law of \(X_{t}^{\sigma}\), \(V\) represents the environment, which is assumed to be a multi-well function (also called the confinement potential in this work), and \(F\) is the interaction potential, describing the form and strength of the interaction of the process with its law. This specific form of the McKean-Vlasov diffusion is also known in the literature under the name of self-stabilizing diffusion or SSD (see [15]). The aim of this study is to describe how long the stochastic process stays in a domain \(\mathcal{D}\), which is a neighborhood of a local minimum of \(V\), before its first exit from this neighborhood. Therefore, the main object of interest in this paper is the following stopping time:

\[\tau^{\sigma}_{\mathcal{D}}:=\inf\{t\geq 0:\ X^{\sigma}_{t}\notin\mathcal{D}\}. \tag{1.2}\]

The precise assumptions under consideration are given later on.

### Organization of the paper

The current section is followed by a presentation and discussion of the assumptions on the potentials \(V\) and \(F\) and on the domain \(\mathcal{D}\), the exit-time from which is considered. An existing result on the existence and uniqueness of the process under almost identical assumptions is provided in Section 1.3, with a discussion of how its proof can be adapted to our case. The large deviations principle for this system is provided in Section 1.4. The main results of this paper are formulated in Section 2: namely, the Kramers' type law for the exit-time and the exit-location results, for both the bounded and unbounded cases of the domain \(\mathcal{D}\). These results are followed by Section 2.4, comparing them to previously known results for the exit-time problem in the case of the self-stabilizing diffusion, and Section 2.5, discussing open questions and possible extensions of our findings. Section 3 contains intermediate lemmas that are necessary for the proof of the main theorem of the paper. These lemmas are proved in Section 5. Section 4 contains the proof of the main theorem provided in Section 2.
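Although not part of the paper, a quick numerical illustration may help fix ideas before the assumptions are stated. The Python sketch below simulates the standard \(N\)-particle (mean-field) approximation of (1.1), replacing the law \(\mu_{t}^{\sigma}\) by the empirical measure of \(N\) interacting copies, via an Euler-Maruyama scheme, and records the first time one tagged particle leaves a domain \(\mathcal{D}\), as in (1.2). The double-well \(V\) and quadratic \(F\) are the examples discussed below; all numerical values are illustrative.

```python
import numpy as np

def grad_V(x):
    # Double-well confinement V(x) = x**4/4 - x**2/2 (see the example below)
    return x**3 - x

def grad_F(x, alpha=0.5):
    # Quadratic attractive interaction F(x) = (alpha/2) * x**2
    return alpha * x

def exit_time(sigma=0.7, N=500, dt=1e-3, T=200.0, x_init=-1.0, seed=0):
    """Euler-Maruyama particle approximation of (1.1); returns the first time
    the tagged particle X[0] leaves D = (-inf, 0), a neighborhood of a = -1."""
    rng = np.random.default_rng(seed)
    X = np.full(N, float(x_init))
    for k in range(int(T / dt)):
        # Since grad_F is linear, grad_F * mu(x) reduces to alpha*(x - mean)
        drift = -grad_V(X) - grad_F(X - X.mean())
        X = X + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
        if X[0] >= 0.0:
            return (k + 1) * dt
    return np.inf  # no exit observed before time T

print(exit_time())  # Kramers: typical exit times grow like exp(2H / sigma**2)
```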
### Assumptions

Here, we give the assumptions on the potentials and on the domain.

**Assumption A-1**. _Let us consider the following hypotheses concerning the confinement potential:_

\((V-1)\): _The confinement potential is a regular function_ \(V\in\mathcal{C}^{2}(\mathbb{R}^{d})\)_._

\((V-2)\): \(V\) _is uniformly convex at infinity. Namely, there exist_ \(\theta_{1}>0\) _and_ \(R>0\) _such that for all_ \(x\in\mathbb{R}^{d}\) _satisfying_ \(|x|>R\) _we have_ \[\nabla^{2}V(x)\succeq\theta_{1}\mathrm{Id},\] _where_ \(\mathrm{Id}\) _is the identity matrix._

\((V-3)\): _There exist_ \(r\in\mathbb{Z}_{+}\) _and a constant_ \(C>0\) _such that_ \[|\nabla V(x)|\leq C(1+|x|^{2r-1}),\quad\text{for all}\quad x\in\mathbb{R}^{d}.\]

\((V-4)\): _There exists_ \(a\in\mathbb{R}^{d}\) _such that_ \(\nabla V(a)=0\) _and_ \(\nabla^{2}V(a)\succeq\rho_{1}\mathrm{Id}\) _for some_ \(\rho_{1}>0\)_, where_ \(\mathrm{Id}\) _is the identity matrix._

\((V-5)\): _The function_ \(\nabla V\) _is locally Lipschitz. More precisely, for any_ \(x\in\mathbb{R}^{d}\) _and_ \(y\in\mathbb{R}^{d}\)_, we have:_ \[|\nabla V(x)-\nabla V(y)|\leq C|x-y|(1+|x|^{2r-1}+|y|^{2r-1})\,,\tag{1.3}\] _where_ \(r\) _has been introduced in_ \((V-3)\)_._

Assumption \((V-1)\) is natural since we will use Itô calculus to obtain some of our results; thus, we require that \(V\) is of class \(\mathcal{C}^{2}\). Assumption \((V-2)\) ensures that the confinement potential forces the diffusion to stay in a compact set and thus that the process does not explode. Assumptions \((V-3)\) and \((V-5)\) are required to comply with the theory developed in [2] for ensuring the existence of the self-stabilizing diffusion when the drift is superlinear. Assumption \((V-4)\) means that there is a local minimizer with a non-degenerate Hessian. We point out that \(\nabla V\) is not assumed to be globally Lipschitz.

Assumption A-1 covers a wide range of possible multi-well potentials. An analytical example of a potential \(V\) satisfying Assumption A-1 in dimension \(d=1\) is the classical double-well potential (see Fig. 1)

\[V(x):=\frac{x^{4}}{4}-\frac{x^{2}}{2}.\]

In dimension two, the following function

\[V(x_{1},x_{2}):=\frac{3}{2}\left(1-x_{1}^{2}-x_{2}^{2}\right)^{2}+\frac{1}{3}\left(x_{1}^{2}-2\right)^{2}+\frac{1}{6}\left((x_{1}+x_{2})^{2}-1\right)^{2}+\frac{1}{6}\left((x_{1}-x_{2})^{2}-1\right)^{2}\]

could be an example of a double-well potential satisfying these assumptions. Fig. 2 shows its level sets.
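As a quick check (ours, not the authors'), the one-dimensional double-well example indeed satisfies these hypotheses:

\[V'(x)=x^{3}-x,\qquad V''(x)=3x^{2}-1\geq 2\quad\text{for }|x|\geq 1,\]

so \((V-2)\) holds with \(\theta_{1}=2\) and \(R=1\). Moreover, \(|V'(x)|\leq|x|^{3}+|x|\leq 2(1+|x|^{3})\) gives \((V-3)\) with \(r=2\); the points \(a=\pm 1\) satisfy \(V'(a)=0\) and \(V''(a)=2>0\), giving \((V-4)\) with \(\rho_{1}=2\); and

\[|V'(x)-V'(y)|=|x-y|\,\bigl|x^{2}+xy+y^{2}-1\bigr|\leq 4\,|x-y|\,\bigl(1+|x|^{3}+|y|^{3}\bigr),\]

which is \((V-5)\).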
We now give the assumptions on the interaction potential.

**Assumption A-2**. _Let \(\theta_{1}\) and \(r\) be the positive constants introduced in \((V-2)\) and \((V-3)\). Consider the following hypotheses concerning the interaction:_

\((F-1)\): _The interaction potential is a regular function_ \(F\in\mathcal{C}^{2}(\mathbb{R}^{d})\)_._

\((F-2)\): \(F(0)=0\) _and_ \(\nabla F\) _is rotationally invariant; that is, there exists a continuous function_ \(\phi:[0;\infty)\to\mathbb{R}\) _with_ \(\phi(0)=0\) _such that_ \[\nabla F(x)=\frac{x}{|x|}\phi(|x|).\]

\((F-3)\): _There exists a constant_ \(C^{\prime}>0\) _such that_ \[|\nabla F(x)|\leq C^{\prime}(1+|x|^{2r-1}),\quad\text{for all}\quad x\in\mathbb{R}^{d}\,.\]

\((F-4)\): _The function_ \(\nabla F\) _is locally Lipschitz. More precisely, for any_ \(x\in\mathbb{R}^{d}\) _and_ \(y\in\mathbb{R}^{d}\)_, we have:_ \[|\nabla F(x)-\nabla F(y)|\leq C^{\prime}|x-y|(1+|x|^{2r-1}+|y|^{2r-1})\,.\tag{1.4}\]

\((F-5)\): _There exists a constant_ \(\theta_{2}>0\) _such that for any_ \(x\in\mathbb{R}^{d}\) _we have_ \[\nabla^{2}F(x)\succeq-\theta_{2}\mathrm{Id},\] _where_ \(\mathrm{Id}\) _is the identity matrix. Moreover,_ \(\theta_{1}>\theta_{2}\)_._

Again, Assumption \((F-1)\) is natural since we will use Itô calculus. Assumption \((F-2)\) is taken to ensure existence and uniqueness of the process, following the work [15], where a similar assumption was introduced. We point out that the exact value of \(F(0)\) does not have any effect on our methods; however, taking it equal to \(0\) simplifies the writing. Note that we do not use assumption \((F-2)\) for proving the exit-time result. Assumption \((F-3)\) is required for using the method developed in [2, 15] concerning the existence of the self-stabilizing diffusion when the drift is superlinear. We point out that \(\nabla F\) is not assumed to be globally Lipschitz. Assumption \((F-5)\) is taken in order to guarantee that the attractive behaviour of \(V\) at infinity is not overcome by \(F\), which is essential for the existence and uniqueness results (we provide this result in Section 1.3).

Assumption A-2 covers a wide range of possible interaction potentials defining various behaviours with respect to the law of the process (attractive, repulsive, or a combination of the two). A classical analytical example of an interaction potential in general dimension \(d\) is

\[F(x):=\pm\frac{\alpha}{2}|x|^{2}\,,\]

with \(\alpha>0\). In the case of \(F(x)=\frac{\alpha}{2}|x|^{2}\) (see Fig. 3 for its depiction in \(d=1\)), the interaction potential is globally convex and induces attracting behaviour, whereas with the negative sign it is globally concave and thus repulsive. Another possible example of a potential is

\[F(x):=C\mathrm{e}^{-\frac{\theta}{2}|x|^{2}}\,,\]

with \(\theta>0\) (for its graph in \(d=1\), see Fig. 4). In this case, the function is neither convex nor concave, but, after a careful examination, we can see that it still exhibits repulsive behaviour, though dissipating at infinity. Note that here, despite assumption \((F-2)\), \(F(0)\neq 0\). As was pointed out above, translations of \(F\) do not influence the dynamics of (1.1).

In the following, we introduce the assumptions on the domain. First, we define the effective (in the small-noise limit) potential.

**Definition 1.1**. _Let \(a\) be the local minimizer of \(V\) introduced in A-1. Then \(W_{a}\in\mathcal{C}^{2}(\mathbb{R}^{d})\) such that \(W_{a}:=V+F*\delta_{a}=V+F(\cdot-a)\) is called the effective potential._

The name "effective" comes from the fact that, as will be proved below, before the exit-time from the stable domain \(\mathcal{D}\), for small \(\sigma\), the potential \(V+F*\mu_{t}^{\sigma}\), inducing the drift term of our process, is well approximated by \(W_{a}\). In order to ensure that, in the small-noise limit, our process behaves well around the attractor \(a\), we need to assume that \(a\) is also a stable local minimizer of the effective potential. Consider the following assumption:

**Assumption A-4**. _The matrix \(\nabla^{2}W_{a}(a)=\nabla^{2}V(a)+\nabla^{2}F(0)\) is positive definite._

Note that Assumption A-4, along with the continuity assumptions on \(\nabla^{2}V\) and \(\nabla^{2}F\) (Assumptions A-1 and A-2), implies that we can find an open neighborhood of the point \(a\) in which \(W_{a}\) is convex. Consider:
**Definition 1.2**. _Let \(\rho>0\) be a small enough positive number such that \(W_{a}\) is convex inside \(B_{\rho}(a)\). Let \(C_{W}>0\) be a constant such that for any \(x\in B_{\rho}(a)\):_

\[\nabla^{2}W_{a}(x)=\nabla^{2}V(x)+\nabla^{2}F(x-a)\succeq C_{W}\mathrm{Id}\,,\]

_where \(\mathrm{Id}\) is the identity matrix._

Let us now introduce the assumptions regarding the domain of interest \(\mathcal{D}\subset\mathbb{R}^{d}\), the exit-time from which will be considered below. The first assumption on the domain \(\mathcal{D}\) is the following:

**Assumption A-5**. \(\mathcal{D}\) _is a bounded connected open subset of \(\mathbb{R}^{d}\) containing the point \(a\)._

**Remark 1.3**. _Without loss of generality, we choose \(\rho>0\) from Definition 1.2 to be small enough that we have the strict inclusion \(B_{\rho}(a)\subset\mathcal{D}\)._

The boundedness of the domain \(\mathcal{D}\) will be relaxed later. However, the fact that \(\mathcal{D}\) is connected and open is mandatory and classical from [7, 10]. The following assumptions on \(\mathcal{D}\) are mandatory:

**Assumption A-6**. _The domain \(\mathcal{D}\) contains the deterministic path \((\gamma_{t},\,t\geq 0)\), solution of the following dynamical system:_

\[\frac{\mathrm{d}}{\mathrm{d}t}\gamma_{t}=-\nabla V(\gamma_{t}),\qquad\gamma_{0}=x_{\mathrm{init}}. \tag{1.5}\]

_We assume furthermore that \(\lim_{t\to\infty}\gamma_{t}=a\)._

This assumption is important for the type of exit problem that we consider here, namely exit induced by small noise from a domain of attraction. We will see further, using the large deviations principle (LDP), that for any \(T>0\), the processes \((X^{\sigma}_{t},0\leq t\leq T)\) and \((\gamma_{t},0\leq t\leq T)\) are close in supremum norm with high probability when \(\sigma\) is small enough. In the case where \(T_{1}:=\inf\{t\geq 0:\gamma_{t}\notin\mathcal{D}\}<\infty\), it is easy to show, using the LDP, that \(\tau^{\sigma}_{\mathcal{D}}\approx T_{1}\) for small \(\sigma\). In other words, it is impossible to obtain the Kramers' type law without Assumption A-6.

Now, we present the definition of a stable domain.

**Definition 1.4**. _We say that an open connected subset \(\mathcal{G}\) of \(\mathbb{R}^{d}\) is stable by the vector field \(-\nabla W_{a}\) if for any \(t\geq 0\) and any \(x\in\mathcal{G}\), \(\psi_{t}(x)\in\mathcal{G}\), where the process \(\psi(x)\) is the solution to the following dynamical system:_

\[\psi_{t}(x)=x-\int_{0}^{t}\nabla W_{a}(\psi_{s}(x))\,\mathrm{d}s\,.\]

This leads to a classical assumption on the domain \(\mathcal{D}\) that is standard for the Freidlin-Wentzell theory; see [7, 10].

**Assumption A-7**. _The open domain \(\mathcal{D}\) is stable by the vector field \(-\nabla W_{a}\). Moreover, for any \(z\in\partial\mathcal{D}\), \(\lim_{t\to+\infty}\psi_{t}(z)=a\)._

**Remark 1.5**. _Note that by a continuity argument we can expand the domain \(\mathcal{D}\) in such a way that Assumptions A-6 and A-7 still hold in the enlargement. Namely, for any \(\kappa>0\) small enough there exists an open connected bounded set \(\mathcal{D}^{\mathsf{e}}_{\kappa}\subseteq\{x\in\mathbb{R}^{d}:\inf_{z\in\mathcal{D}}|z-x|<\kappa\}\) such that Assumptions A-6 and A-7 are satisfied for \(\mathcal{D}^{\mathsf{e}}_{\kappa}\).
Obviously, the same holds for constrictions: for any \(\kappa>0\) small enough there exists an open set \(\mathcal{D}^{\mathsf{c}}_{\kappa}\subseteq\{x\in\mathcal{D}:\inf_{z\in\partial\mathcal{D}}|z-x|>\kappa\}\) satisfying Assumptions A-6 and A-7._ _We can also define their exit-costs as \(H^{\mathsf{e}}_{\kappa}:=\inf_{z\in\partial\mathcal{D}^{\mathsf{e}}_{\kappa}}\{W_{a}(z)-W_{a}(a)\}\) and \(H^{\mathsf{c}}_{\kappa}:=\inf_{z\in\partial\mathcal{D}^{\mathsf{c}}_{\kappa}}\{W_{a}(z)-W_{a}(a)\}\) respectively._ ### Existence of the process The problem of existence and uniqueness of the SDE (1.1) was studied in [15]. Mutatis mutandis from [15, Theorem 2.13], we get the following proposition: **Proposition 1.6**.: _Let \(r\) be the positive constant introduced in \((V-3)\). For any \(\sigma\geq 0\), under Assumptions A-1 and A-2, the SDE (1.1) has a unique strong solution that we denote by \((X_{t}^{\sigma},t\geq 0)\). Moreover, there exists a constant \(M>0\), such that_ \[\sup_{0\leq\sigma\leq 1}\sup_{t\geq 0}\mathbb{E}\big[|X_{t}^{\sigma}|^{8r^{2}}\big]\leq M\,. \tag{1.6}\] Note that the assumptions used in [15, Theorem 2.13] are slightly different from ours, particularly for the interaction term. Assumption \((F-2)\) of A-2 allows \(\phi\) to be negative and thus to exhibit repulsive behaviour, while in [15], \(\phi\) is set to be a positive increasing function. To neutralise possible problems that this relaxation could pose, we introduce assumption \((F-5)\). The fact that \(\theta_{1}>\theta_{2}\) guarantees that, regardless of \(\mu^{\sigma}\), the drift term of our process is always attractive outside of a compact set. Namely, for any \(\mu\in\mathcal{P}(\mathbb{R}^{d})\) and for any \(x\in\mathbb{R}^{d}\) such that \(|x|>R\), we have \(\nabla^{2}V(x)+\nabla^{2}F*\mu(x)\succeq(\theta_{1}-\theta_{2})\mathrm{Id}\) and thus \[\langle x;-\nabla V(x)-\nabla F*\mu(x)\rangle\leq-(\theta_{1}-\theta_{2})|x|^{2}.\] This guarantees non-explosiveness of the process in finite time. After this observation, the proof in [15] can be easily adapted to the case of Assumptions A-1 and A-2. ### Large deviations principle The large deviations principle (LDP) for the process (1.1) was also proved in [15]. Unlike in the case of Proposition 1.6, the adaptation of these results to our assumptions on the interaction term is immediate. The authors proved the following result: **Proposition 1.7** ([15, Theorem 3.4]).: _Let \(\gamma\) be the unique solution of the ODE_ \[\frac{\mathrm{d}}{\mathrm{d}t}\gamma_{t}=-\nabla V(\gamma_{t}),\quad\gamma_{0}=x_{\mathrm{init}}.\] _Then for any \(T>0\), the probability measures induced by the processes \((X_{t}^{\sigma},0\leq t\leq T)_{\sigma>0}\) on \(\mathcal{C}([0,T])\) satisfy the LDP with convergence rate \(\frac{\sigma^{2}}{2}\) with the following good rate function:_ \[I_{T}(\varphi):=\frac{1}{4}\int_{0}^{T}|\dot{\varphi}_{t}+\nabla V(\varphi_{t})+\nabla F(\varphi_{t}-\gamma_{t})|^{2}\,\mathrm{d}t\,, \tag{1.7}\] _for any \(\varphi\in\mathcal{H}_{1}\), the set of absolutely continuous functions from \([0;T]\) to \(\mathbb{R}^{d}\) such that \(\varphi(0)=x_{\mathrm{init}}\). Otherwise, \(I_{T}(\varphi):=+\infty\)._ If we denote by \((\nu^{\sigma})_{\sigma>0}\) the family of probability measures induced on \(\mathcal{C}([0,T])\) by \((X_{t}^{\sigma},0\leq t\leq T)\) for respective \(\sigma>0\), then the proposition above takes the following form.
For any measurable subset \(\Gamma\subset\mathcal{C}([0,T])\), we have: \[-\inf_{f\in\mathring{\Gamma}}I_{T}(f)\leq\liminf_{\sigma\to 0}\frac{\sigma^{2}}{2}\log\nu^{\sigma}(\Gamma)\leq\limsup_{\sigma\to 0}\frac{\sigma^{2}}{2}\log\nu^{\sigma}(\Gamma)\leq-\inf_{f\in\overline{\Gamma}}I_{T}(f),\] where \(\mathring{\Gamma}\) and \(\overline{\Gamma}\) denote the interior and the closure of \(\Gamma\) respectively. Note that most authors use the convergence rate \(\sigma^{2}\) and, consequently, the term in front of the integral in (1.7) is \(\frac{1}{2}\) instead of \(\frac{1}{4}\). However, we choose to take as convergence rate the coefficient in front of the Laplacian in the associated partial differential equation. Note also that in this proposition \(\gamma\) represents the deterministic limit of the system (1.1). When \(\sigma\) is small, we expect our process to stay close to \(\gamma\) for fixed time intervals. Thus it does not come as a surprise that it is \(\delta_{\gamma}\) that replaces \(\mu^{\sigma}\) in the rate function. ## 2 Main results In this section, we list the main results of the paper. ### Exit-time We now give the main results concerning the exit-time, for the case when \(\mathcal{D}\) is a bounded domain. **Theorem 2.1**.: _Let \(H\) be the exit-cost introduced in Assumption A-7. Under Assumptions A-1-A-7, the following two results hold_ 1. _Kramers' law: for any_ \(\delta>0\)_, the following limit holds:_ \[\lim_{\sigma\to 0}\mathbb{P}\left[\exp\Big\{\frac{2}{\sigma^{2}}(H-\delta)\Big\}\leq\tau_{\mathcal{D}}^{\sigma}\leq\exp\Big\{\frac{2}{\sigma^{2}}(H+\delta)\Big\}\right]=1\] (2.1) 2. _Exit-location: for any closed set_ \(N\subset\partial\mathcal{D}\) _such that_ \(\inf_{z\in N}\{W_{a}(z)-W_{a}(a)\}>H\)_, the following limit holds:_ \[\lim_{\sigma\to 0}\mathbb{P}\big(X_{\tau_{\mathcal{D}}^{\sigma}}^{\sigma}\in N\big)=0.\] (2.2) The proof of Theorem 2.1 is provided in Section 4. ### Control of the law We now present a result on the control of the law in the case where \(\mathcal{D}\) is bounded. The following theorem rigorously states that, starting from some time that is uniformly bounded in \(\sigma\), the law of the process \(\mu^{\sigma}\) stays close to \(\delta_{a}\) long enough to obtain the result of Theorem 2.1. **Theorem 2.2**.: _Under Assumptions A-1-A-7, for any \(\kappa>0\) small enough there exist \(\overline{T}_{\sf st}(\kappa)>0\) and \(\sigma_{\kappa}>0\) such that_ \[\sup_{0<\sigma<\sigma_{\kappa}}\sup_{t\in\left[\overline{T}_{\sf st}(\kappa);\mathrm{e}^{\frac{2H}{\sigma^{2}}}\right]}\mathbb{W}_{2}(\mu_{t}^{\sigma};\delta_{a})\leq\kappa.\] This theorem can be easily proven using Lemmas 3.1 and 3.6 provided in Section 3; the proof is left to the reader. ### Unbounded case We now present the generalisation of the results above to the case where \(\mathcal{D}\) is not bounded. **Corollary 2.3**.: _If \(\mathcal{D}\) is an open and connected subset of \(\mathbb{R}^{d}\), under Assumptions A-1-A-4 and Assumptions A-6, A-7, the statements of Theorem 2.1 hold._ The control of the law also holds immediately even if \(\mathcal{D}\) is unbounded. **Corollary 2.4**.: _If \(\mathcal{D}\) is an open and connected subset of \(\mathbb{R}^{d}\), under Assumptions A-1-A-4 and Assumptions A-6, A-7, the statement of Theorem 2.2 holds._ The proofs of Corollaries 2.3 and 2.4 are postponed to Section 4. ### Comparison to previous results In the seminal work [15], S. Herrmann, P. Imkeller, and D. Peithmann proved the existence of the self-stabilizing diffusion in the irreversible case.
The assumptions they used correspond to A-1 and A-2 in the case where confinement and interaction are gradients of some regular potentials, except for a slight difference in the interaction term (this difference was discussed in Section 1.3). In the same work, the authors show the exit-time result for the SSD but, in order to do so, they had to assume convexity of both confinement and interaction. The removal of this assumption, which we present in this paper, is a significant improvement over previous results. Note that, unlike [15], we solve the exit-time problem in the reversible case (confinement and interaction are gradients of some regular functions). Nevertheless, our techniques could treat the general situation; see Section 2.5 on the possible extensions of our results. Another difference between our approach and the one presented in the paper [15] is that, after controlling the law of the process \(X^{\sigma}\), we use coupling techniques to prove the exit-time result, while the approach used by S. Herrmann, P. Imkeller, and D. Peithmann consists in reconstructing the Freidlin-Wentzell techniques and taking advantage of the contractivity of the drift. In [24], J. Tugaut focused on the reversible case of the SSD with potentials \(V\) and \(F\) being convex. He proved a result similar to ours using a method different from that of [15]. The approach of [24] was to apply the Freidlin-Wentzell theory without adapting it to the McKean-Vlasov diffusions. In that work, the classical large deviations theory for processes is applied to the associated system of particles \[\mathrm{d}X_{t}^{i,N}=\sigma\,\mathrm{d}B_{t}^{i}-\left(\nabla V(X_{t}^{i,N})+\frac{1}{N}\sum_{j=1}^{N}\nabla F(X_{t}^{i,N}-X_{t}^{j,N})\right)\mathrm{d}t\,, \tag{2.3}\] after which a trajectorial uniform propagation of chaos is established. Using the propagation of chaos, the author obtained the Kramers' type law. In [25], J. Tugaut employed a different method, applicable to the case where the parts of the drift term are not necessarily assumed to be gradients of a regular function, although they remain globally contractive. This method primarily revolves around controlling the law at time \(t\) of \(X^{\sigma}\), denoted as \(\mu_{t}^{\sigma}\). Notably, J. Tugaut demonstrated that this law converges to \(\delta_{a}\) in Wasserstein distance as \(t\to+\infty\). Subsequently, a synchronous coupling with a diffusion, where the drift is represented as \(x\mapsto-\nabla V(x)-\nabla F*\delta_{a}(x)\) instead of \(x\mapsto-\nabla V(x)-\nabla F*\mu_{t}^{\sigma}(x)\), is employed. Exploiting the contractivity, it is straightforward to prove that the two diffusions remain close. Consequently, the exit-time of \(X^{\sigma}\) behaves similarly to that of the coupled diffusion. This approach has been extended to non-convex scenarios in the reversible case, as described in [23]. In this context, \(V\) is not necessarily convex, although \(F\) exhibits sufficient convexity to ensure convexity of the effective potential \(W_{a}=V+F(\cdot-a)\). As a result, coupling between the two diffusions is straightforward, allowing us to infer the exit-time of \(X^{\sigma}\) from that of the coupled diffusion. The convexity assumption on \(W_{a}\) has been removed in [27], though this result is limited to the one-dimensional case. Unfortunately, the method used there cannot be directly extended to the general-dimensional case. Thus, it becomes essential to find an alternative way to control the law. In [26], J. Tugaut demonstrated that \(\mu^{\sigma}\) does not always converge to \(\delta_{a}\).
This limitation arises when \(W_{a}\) fails to reach its global minimum at \(a\); therefore, in order to control the law of the process (at least up to the exit-time), other methods should be used. Despite all these developments, the exit-time problem for the SSD with general (non-convex) coefficients has remained an open problem throughout all these years. We solve it in this paper by significantly improving the coupling method introduced in [25]. ### Discussions on extension In this section, we provide some possible extensions of our results. #### 2.5.1 Non-identity matrix as the diffusion coefficient In this work, we have simplified the study by assuming that the diffusion coefficient takes the form \(\sigma\mathrm{Id}\). However, for certain algorithmic applications such as molecular dynamics, it could be beneficial to consider scenarios where the diffusion coefficient is not directly proportional to the identity matrix, as discussed for example in [6]. To make further progress, it would be a significant improvement to include the scenario where the diffusion coefficient is given by \(\sigma M\), with \(M\) being a non-degenerate matrix. This particular situation has been studied in, for instance, [8, 9, 18]. The techniques developed in the present work can be readily adapted to this non-identity diffusion coefficient case. However, a more challenging extension would involve considering cases where \(M\) is degenerate. This would allow us to address the Langevin kinetic diffusion, where both position and velocity play crucial roles. Combining the techniques we have developed with those from [6], we firmly believe that we can obtain valuable insights into the asymptotic behaviour of the first exit-time. #### 2.5.2 Initial random variable Another possible extension is related to the initial random variable. In the current work, we establish the asymptotic behaviour of the exit-time for \(X_{0}:=x_{\mathrm{init}}\in\mathbb{R}^{d}\). However, for studying the basins of attraction, as was done in [28], it is crucial to consider scenarios where \(\mu_{0}^{\sigma}:=\mathcal{L}(X_{0}^{\sigma})\) is not necessarily a Dirac measure. Specifically, we may be interested in cases where \(\mu_{0}^{\sigma}:=\mu_{0}\), with the measure \(\mu_{0}\) being compactly supported in \(\mathcal{D}\). In this situation, we need to make a slight modification to Assumption A-6. Instead of considering \(\gamma^{\prime}(t)=-\nabla V(\gamma_{t})\), we would need to consider the partial differential equation: \[\frac{\partial}{\partial t}\mu_{t}^{0}=\mathrm{div}\left(\mu_{t}^{0}(\nabla V+\nabla F*\mu_{t}^{0})\right),\] with \(\mu_{0}^{0}=\mu_{0}\). This corresponds to the granular media equation with zero noise. The associated dynamical system that approximates the diffusion \(X^{0}\) on \([0;T]\) (with \(T>0\)) due to the large deviations principle is thus given by: \[\rho_{t}(x_{\mathrm{init}})=x_{\mathrm{init}}-\int_{0}^{t}\nabla V(\rho_{s}(x_{\mathrm{init}}))\,\mathrm{d}s-\int_{0}^{t}\nabla F*\mu_{s}^{0}(\rho_{s}(x_{\mathrm{init}}))\,\mathrm{d}s\,,\] for any \(x_{\mathrm{init}}\in\mathrm{supp}(\mu_{0})\). In this case, Assumption A-6 would become: for any \(x_{\mathrm{init}}\in\mathcal{D}\cap\mathrm{supp}(\mu_{0})\) and for any \(t\geq 0\), we have \(\rho_{t}(x_{\mathrm{init}})\in\mathcal{D}\). The techniques developed in the present work can be seamlessly adapted to handle this situation. #### 2.5.3 Reflexion on the boundary In this work, the diffusion process takes place in the entire phase space \(\mathbb{R}^{d}\).
However, we can consider a subspace of \(\mathbb{R}^{d}\) instead. This could be achieved by introducing a reflection on certain boundaries, as was done, for example, in [22]. Such an extension would be a significant improvement compared to [1], where the uniform convexity of both confinement and interaction potentials was assumed. In the mentioned article, the domain \(\mathcal{G}\) in which the diffusion takes place satisfies \(d\big(\overline{\mathcal{D}};\partial\mathcal{G}\big)>0\), which simplifies the study. We believe that the techniques we have developed could treat this case. However, considering scenarios where \(\mathcal{D}\cap\mathcal{G}^{c}\neq\emptyset\) is more challenging. This could require extending the large deviation techniques to processes with reflection, something that has not been done yet even in the linear case. #### 2.5.4 More accurate estimates In this paper, our focus has been on establishing the Kramers' law, that is, a limit in probability of \(\frac{\sigma^{2}}{2}\log(\tau_{\mathcal{D}}^{\sigma})\) as \(\sigma\) approaches \(0\), as well as the exit-location result. However, in [15], the authors have obtained a more precise estimate, which could be of interest in our context. For example, the so-called Arrhenius law was established, i.e. the convergence \[\frac{\sigma^{2}}{2}\log\mathbb{E}(\tau_{\mathcal{D}}^{\sigma})\xrightarrow[\sigma\to 0]{}H>0.\] Unfortunately, since we do not provide the control of the law of the process after the exit-time, we could not use the standard method to show the Arrhenius law in the current work. Additionally, it is well known, as discussed in [19], that the first exit-time \(\tau_{\mathcal{D}}^{\sigma}\) for a linear (Ito) diffusion satisfies the following limit: \[\frac{\tau_{\mathcal{D}}^{\sigma}}{\mathbb{E}[\tau_{\mathcal{D}}^{\sigma}]}\xrightarrow[\sigma\to 0]{}\mathcal{E}(1),\] where the convergence is meant in law, and \(\mathcal{E}(1)\) is the exponential law with parameter \(1\). The same behaviour for self-stabilizing diffusions has not been established yet, even in the case where both \(V\) and \(F\) are convex. In [3, 4], A. Bovier, M. Eckhoff, V. Gayrard, and M. Klein studied the exit-time problem for linear reversible diffusion processes using a potential-theoretic approach. Using these techniques, the authors could not only establish the Arrhenius law for multi-well potentials in \(\mathbb{R}^{d}\), but also the prefactor of the convergence. Namely, the following equality was established: \[\mathbb{E}[\tau_{\mathcal{D}}^{\sigma}]=C^{*}\mathrm{e}^{\frac{2H}{\sigma^{2}}}\big(1+O(\sigma|\log(\sigma)|)\big),\] where the constant \(C^{*}>0\) depends on the derivatives of the potential \(V\) at the point of attraction \(a\) as well as at the saddle points surrounding the well under consideration. For the explicit form of the prefactor, see [3]. Similar methods could also be used for the self-stabilizing diffusion. However, that would imply studying the associated PDE for the law of the process: \[\frac{\partial}{\partial t}\mu_{t}^{\sigma}=\frac{\sigma^{2}}{2}\Delta\mu_{t}^{\sigma}+\operatorname{div}\left(\mu_{t}^{\sigma}(\nabla V+\nabla F*\mu_{t}^{\sigma})\right),\] which is considered to be a hard problem due to its non-linearity. These questions could be the focus of future studies. #### 2.5.5 Non-reversible case In this work, we have focused on the case where both the confinement and the interaction terms are gradients of some potentials.
However, it would be valuable to consider non-reversible situations of the form: \[X_{t}=X_{0}+\sigma MB_{t}+\int_{0}^{t}a(X_{s})\,\mathrm{d}s+\int_{0}^{t}b*\mu_{s}^{\sigma}(X_{s})\,\mathrm{d}s\,,\] where \(a\) and \(b\) are general vector fields on \(\mathbb{R}^{d}\). It is worth noting that in previous works such as [6, 15, 25], the authors have successfully addressed this problem, but in the contractive (convex confinement and interaction) case. The techniques developed in this paper can readily be adapted to handle the non-reversible case. However, the exit-cost is not explicit in this situation, which is why we have described the reversible case here. #### 2.5.6 More general McKean-Vlasov diffusions A broader class of nonlinear diffusion processes can be considered. For example: \[\mathrm{d}X_{t}=\sigma\mathrm{d}B_{t}-\nabla V(X_{t})\,\mathrm{d}t-b(X_{t},\mu_{t}^{\sigma})\,\mathrm{d}t\,,\] where the nonlinear drift \(b\) takes the form \[b(x,\mu):=\int_{\mathbb{R}^{d}}B(x,y)\mu(\mathrm{d}y).\] Here, the function \(B\) is required to be regular and maps from \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) to \(\mathbb{R}^{d}\). Such a generalization would have significant implications for theoretical purposes (as shown in [14]) as well as applications (see e.g. [13]). We firmly believe that the techniques developed in this work can be adapted to handle a wide range of situations within this framework. For algorithmic applications, it would also be interesting to include jumps in the process, as discussed in [11, 12]. This could be a subject of future studies. #### 2.5.7 Extension on the domain \(\mathcal{D}\) and metastability An important yet challenging extension concerns the domain \(\mathcal{D}\) itself. In this work, we have confined our study to cases where \(\overline{\mathcal{D}}\) is stable under the effective potential \(W_{a}\). However, the most interesting scenario arises when the saddle point lies on the boundary of \(\mathcal{D}\). Moreover, it would be interesting to establish some metastable properties of \(X^{\sigma}\), that is, considering \(t(\sigma)\) as a function of \(\sigma\) and investigating \(X^{\sigma}_{t(\sigma)}\) in a metastable confinement, as was done in [10]. The complexity of this problem in the case of the SSD is that the drift itself (the effective potential) may change after the transition of the process from one metastable state to another. These questions could be the focus of future studies. #### 2.5.8 System of particles For algorithmic applications, it is essential to consider the associated system of particles described by Equation (2.3). In this system, the measure \(\mu_{t}^{\sigma}\) is replaced by the empirical measure \(L_{t}^{\sigma}:=\frac{1}{N}\sum_{j=1}^{N}\delta_{X_{t}^{j,N}}\). In [24], J. Tugaut has obtained the exit-time of the McKean-Vlasov diffusion from the system of particles in the convex case. Consequently, it appears feasible to do the opposite and establish the exit-time of the system of particles based on the exit-time of the McKean-Vlasov diffusion. Similar techniques, such as trajectorial uniform propagation of chaos (see for example [2, 5, 21]), can be used. However, in [24], convexity was essential for controlling the law; this control is now also available in the general situation due to the current work. ## 3 Intermediate results In this preliminary section, we give the key results which allow us to prove the main exit-time theorems of Section 4. Their proofs are given in Section 5.
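Before turning to these intermediate results, we pause for a purely numerical illustration of the objects at play; it is not part of the formal development. The sketch below approximates \(X^{\sigma}\) by the particle system (2.3) with an Euler-Maruyama scheme and records the first exit of a tagged particle from a domain \(\mathcal{D}\). All concrete choices are illustrative assumptions and not prescribed by the paper: the double-well confinement \(V(x)=x^{4}/4-x^{2}/2\) in \(d=1\) with local minimizer \(a=-1\), the quadratic interaction \(F(x)=\frac{\alpha}{2}|x|^{2}\) from Section 1, the domain \(\mathcal{D}=(-2,0)\), and all numerical parameters. For these choices, \(W_{a}(x)=V(x)+\frac{\alpha}{2}(x+1)^{2}\) and the exit-cost equals \(H=\frac{1}{4}+\frac{\alpha}{2}\), so Kramers' law predicts \(\frac{\sigma^{2}}{2}\log\tau_{\mathcal{D}}^{\sigma}\approx H\) for small \(\sigma\).

```python
import numpy as np

# Illustrative sketch (assumed setup, not from the paper): particle system
# (2.3) with V(x) = x^4/4 - x^2/2, F(x) = (alpha/2) x^2, a = -1, D = (-2, 0).
alpha = 0.5
a = -1.0
grad_V = lambda x: x**3 - x   # nabla V
grad_F = lambda x: alpha * x  # nabla F

def tagged_exit_time(sigma, N=100, dt=1e-2, t_max=5e3, seed=0):
    """First exit of the tagged particle X^{0,N} from D = (-2, 0);
    all particles start at the attractor a (Euler-Maruyama scheme)."""
    rng = np.random.default_rng(seed)
    x = np.full(N, a)
    for k in range(1, int(t_max / dt) + 1):
        # drift of particle i: -grad V(x_i) - (1/N) sum_j grad F(x_i - x_j)
        interaction = grad_F(x[:, None] - x[None, :]).mean(axis=1)
        x = x - (grad_V(x) + interaction) * dt \
              + sigma * np.sqrt(dt) * rng.standard_normal(N)
        if not (-2.0 < x[0] < 0.0):
            return k * dt
    return np.inf

# (sigma^2 / 2) log tau should approach H = 1/4 + alpha/2 = 0.5 as sigma -> 0.
for sigma in (0.9, 0.7, 0.5):
    taus = [tagged_exit_time(sigma, seed=s) for s in range(10)]
    print(sigma, np.mean(taus), 0.5 * sigma**2 * np.log(np.mean(taus)))
```

Since \(F\) is quadratic in this illustration, the pairwise interaction reduces to \(\alpha(x_{i}-\bar{x})\), so the \(O(N^{2})\) pairwise computation could be replaced by `alpha * (x - x.mean())`; we keep the general form to mirror (2.3).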
### Stabilisation in finite time Let us define the following two deterministic times for any \(\kappa>0\) small enough: \[T_{\mathsf{st}}^{\sigma}(\kappa):=\inf\left\{t\geq 0\ :\ \mathbb{W}_{2}(\mu_{t}^{\sigma};\delta_{a})\leq\kappa\right\},\] \[S_{\mathsf{st}}^{\sigma}(\kappa):=\inf\left\{t\geq T_{\mathsf{st}}^{\sigma}(\kappa)\ :\ \mathbb{W}_{2}(\mu_{t}^{\sigma};\delta_{a})>\kappa\right\};\] and we let the infima be equal to \(+\infty\) if the respective sets are empty. The first key result consists in obtaining the existence of a time \(T\) such that \(\mathbb{W}_{2}(\mu_{T}^{\sigma};\delta_{a})\) is small and such that \(X_{T}^{\sigma}\) is concentrated around \(a\). **Lemma 3.1**.: _Under Assumptions A-1-A-7, for any \(\kappa>0\) there exist \(\overline{T}_{\mathsf{st}}(\kappa)>0\) and \(\sigma_{\kappa}>0\) such that:_ \[T_{\mathsf{st}}^{\sigma}(\kappa)\leq\overline{T}_{\mathsf{st}}(\kappa)\quad\text{for any }0<\sigma<\sigma_{\kappa}.\] _Moreover,_ \[\lim_{\sigma\to 0}\mathbb{P}\Big(\Big|X_{\overline{T}_{\mathsf{st}}(\kappa)}^{\sigma}-a\Big|>\kappa\Big)=0.\] An important implication of this lemma is that, with high probability, the exit from the domain \(\mathcal{D}\) does not occur before time \(\overline{T}_{\mathsf{st}}(\kappa)\) (see Section 5 for the proof). Consider the following corollary. **Corollary 3.2**.: _Under Assumptions A-1-A-7, for any \(\kappa>0\) the following limit holds:_ \[\lim_{\sigma\to 0}\mathbb{P}\big(\tau_{\mathcal{D}}^{\sigma}\leq\overline{T}_{\mathsf{st}}(\kappa)\big)=0\,.\] ### The coupling method We now introduce the diffusion \(Y^{\sigma}:=(Y_{t}^{\sigma},\,t\geq T_{\mathsf{st}}^{\sigma}(\kappa))\), solution to the following linear SDE: \[\begin{split} Y_{t}^{\sigma}&=X_{T_{\mathsf{st}}^{\sigma}(\kappa)}^{\sigma}+\sigma(B_{t}-B_{T_{\mathsf{st}}^{\sigma}(\kappa)})-\int_{T_{\mathsf{st}}^{\sigma}(\kappa)}^{t}\nabla V(Y_{s}^{\sigma})\,\mathrm{d}s\\ &\quad-\int_{T_{\mathsf{st}}^{\sigma}(\kappa)}^{t}\nabla F\,(Y_{s}^{\sigma}-a)\,\mathrm{d}s\,,\end{split} \tag{3.1}\] where \((B_{t},t\geq 0)\) is the same Brownian motion that drives the main equation (1.1). Note that this SDE has a unique solution (see for example [20, Theorem 10.2.2, p. 255]). Note also that \(Y^{\sigma}\) is a linear diffusion. As a consequence, we can apply the classical Freidlin-Wentzell theory, see [7, 10], for estimating the first exit-time as the diffusion coefficient tends to \(0\). Apart from the processes \((Y^{\sigma})_{0<\sigma<1}\) defined by the SDE (3.1), we also define the following family of Ito diffusions, which will help us study the stochastic properties of \(Y^{\sigma}\). For any \(y\in\mathbb{R}^{d}\) and for any \(0<\sigma<1\), define \((Y^{y,\sigma}_{t},t\geq 0)\) as the unique solution to the following SDE: \[Y^{y,\sigma}_{t}=y+\sigma B_{t}-\int_{0}^{t}\nabla V(Y^{y,\sigma}_{s})\,\mathrm{d}s-\int_{0}^{t}\nabla F(Y^{y,\sigma}_{s}-a)\,\mathrm{d}s\,. \tag{3.2}\] Following the standard notation for diffusions, we will drop the initial point \(y\) for \(Y^{y,\sigma}\), as well as for all random variables that are functions of \(Y^{y,\sigma}\), and put it as a subscript under the probability measure. Namely, for any \(y\in\mathbb{R}^{d}\) we introduce a probability measure \(\mathsf{P}_{y}\) that is simply the restriction of \(\mathbb{P}\) to the measurable space \(\big(\Omega,\sigma(Y^{y,\sigma}_{t}:t\geq 0)\big)\).
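As a small numerical aside, the synchronous coupling just introduced can be made concrete under the illustrative double-well assumptions of the sketch in Section 3 (all names and parameters below are again assumptions for illustration only). The tagged particle of the system (2.3), which approximates \(X^{\sigma}\), and the auxiliary diffusion of (3.1) with frozen drift \(-\nabla V-\nabla F(\cdot-a)\) are driven by the same Brownian increments, and one can monitor the gap \(|X_{t}^{\sigma}-Y_{t}^{\sigma}|\).

```python
import numpy as np

# Synchronous coupling sketch (illustrative assumptions: same V, F, a as in
# the previous snippet). The tagged particle approximating X^sigma and the
# auxiliary linear diffusion Y^sigma of (3.1) share the SAME Brownian noise.
alpha, a, sigma = 0.5, -1.0, 0.5
grad_V = lambda x: x**3 - x
grad_F = lambda x: alpha * x

rng = np.random.default_rng(1)
N, dt, n_steps = 100, 1e-2, 10_000
x = np.full(N, a)   # particle approximation of X^sigma, started at a
y = x[0]            # Y^sigma, coupled to the tagged particle X^{0,N}
max_gap = 0.0
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(N)
    interaction = grad_F(x[:, None] - x[None, :]).mean(axis=1)
    x = x - (grad_V(x) + interaction) * dt + sigma * dW
    # frozen drift -grad V(y) - grad F(y - a), with the same increment dW[0]
    y = y - (grad_V(y) + grad_F(y - a)) * dt + sigma * dW[0]
    max_gap = max(max_gap, abs(x[0] - y))
print("sup_t |X_t - Y_t| over the run:", max_gap)
```

Because the two processes see the same noise, their difference contains no stochastic integral and is differentiable in time; this is precisely the property exploited in the proofs of Lemmas 5.1 and 5.2 below.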
The following proposition is a classical result of Freidlin-Wentzell theory for the exit-time of linear diffusions of the type (3.2). Consider: **Proposition 3.3** ([7], Theorem 5.7.11).: _Let Assumption A-1 be satisfied and let \(G\subset\mathbb{R}^{d}\) be a domain satisfying Assumptions A-5-A-7, with exit-cost \(H_{G}:=\inf\limits_{z\in\partial G}\{W_{a}(z)-W_{a}(a)\}\). Let \(K\subset G\) be a compact set. Define \(\tau_{G}^{Y,\sigma}:=\inf\{t\geq 0:Y^{y,\sigma}_{t}\notin G\}\). Then, for any \(\delta>0\) we have_ \[\lim_{\sigma\to 0}\sup_{y\in K}\mathsf{P}_{y}\bigg(\exp\biggl\{\frac{2(H_{G}-\delta)}{\sigma^{2}}\biggr\}\leq\tau_{G}^{Y,\sigma}\leq\exp\biggl\{\frac{2(H_{G}+\delta)}{\sigma^{2}}\biggr\}\bigg)=1.\] Obviously, this proposition also holds when \(G\) is the domain \(\mathcal{D}^{\mathsf{e}}_{\kappa}\) defined as in Remark 1.5, with \(H_{G}=H^{\mathsf{e}}_{\kappa}:=\inf_{x\in\partial\mathcal{D}^{\mathsf{e}}_{\kappa}}\{W_{a}(x)-W_{a}(a)\}\). Let us now describe how both diffusion processes \(X\) (the targeted diffusion) and \(Y\) (the auxiliary one) are coupled. We are especially interested in describing the distance between them. **Proposition 3.4**.: _Under Assumptions A-1-A-7 there exists \(\eta>0\) such that for any \(\kappa>0\) small enough, we have_ \[\lim_{\sigma\to 0}\mathbb{P}(\sup\big|X^{\sigma}_{t}-Y^{\sigma}_{t}\big|>\kappa)=0,\] _where the supremum is taken over \(t\in\Big[T^{\sigma}_{\mathsf{st}}(\kappa);S^{\sigma}_{\mathsf{st}}(\kappa)\wedge\exp\Bigl\{\frac{2(H+\eta)}{\sigma^{2}}\Bigr\}\Big]\)._ As shown below (Corollary 3.7), this result can be improved by removing the time \(S^{\sigma}_{\mathsf{st}}(\kappa)\), since, as it turns out, the destabilization of the law of the process cannot happen before its exit-time from the domain \(\mathcal{D}\). The following lemma is an important result stating that, at each point of time, the diffusion \(Y^{\sigma}\) is close to \(a\) with high probability. **Lemma 3.5**.: _Let \(\rho\) be the positive constant introduced in Definition 1.2. Under Assumptions A-1-A-7 there exists \(\eta>0\) small enough such that for any \(\kappa>0\) small enough:_ \[\sup\mathbb{P}\big(Y^{\sigma}_{t}\notin B_{\rho/2}(a)\big)=o_{\sigma}(1),\] _where the supremum is taken over \(t\in\Big[T^{\sigma}_{\mathsf{st}}(\kappa);\exp\Bigl\{\frac{2(H+\eta)}{\sigma^{2}}\Bigr\}\Big]\)._ Note that the position of the supremum in Lemma 3.5 is important. Indeed, according to the Freidlin-Wentzell theory for Ito diffusions, the exit-time of \(Y^{\sigma}\) from \(B_{\rho/2}(a)\) is, with high probability, of order \(\exp\bigl\{2H_{\rho/2}/\sigma^{2}\bigr\}\), where \(H_{\rho/2}:=\inf_{z\in\partial B_{\rho/2}(a)}\{V(z)+F(z-a)-V(a)\}\), which means, among other things, that we cannot expect \(\mathbb{P}\left(\sup|Y^{\sigma}_{t}-a|>\frac{\rho}{2}\right)\) to be equal to \(o_{\sigma}(1)\). Instead, what Lemma 3.5 states is that for all \(t\) before the exit of \(Y^{\sigma}\) from a small enlargement \(\mathcal{D}^{\mathsf{e}}_{\kappa}\), the probability that \(Y^{\sigma}\) is not close to \(a\) tends to \(0\). We come back to this description in Section 5.3. ### Control of the law In this section, we present a result regarding the control of the law of the process after the stabilisation time. Consider the following lemma.
**Lemma 3.6**.: _Under Assumptions A-1-A-7 there exists \(\eta>0\) such that for any \(\kappa>0\) small enough there exists \(\sigma_{\kappa}\) such that for any \(0<\sigma<\sigma_{\kappa}\) we have_ \[S^{\sigma}_{\mathsf{st}}(\kappa)>\exp\biggl\{\frac{2(H+\eta)}{\sigma^{2}}\biggr\}.\] This lemma together with Proposition 3.4 immediately gives us the following corollary: **Corollary 3.7**.: _Under Assumptions A-1-A-7 there exists \(\eta>0\) such that for any \(\kappa>0\) small enough, we have_ \[\lim_{\sigma\to 0}\mathbb{P}(\sup\left|X^{\sigma}_{t}-Y^{\sigma}_{t}\right|>\kappa)=0,\] _where the supremum is taken over \(t\in\left[T^{\sigma}_{\mathsf{st}}(\kappa);\exp\Bigl\{\frac{2(H+\eta)}{\sigma^{2}}\Bigr\}\right]\)._ ## 4 Proofs of the main results Here, we give the proofs of the main results. ### Exit-time and exit-location **Step 1.** To prove the lower bound of Kramers' law, consider the following inequality. For any \(\delta>0\) and for fixed \(\kappa>0\) small enough we have \[\mathbb{P}\left(\tau^{\sigma}_{\mathcal{D}}<\exp\biggl\{\frac{2(H-\delta)}{\sigma^{2}}\biggr\}\right)\leq\mathbb{P}(\tau^{\sigma}_{\mathcal{D}}<T^{\sigma}_{\mathsf{st}}(\kappa)) \tag{4.1}\] \[\qquad+\mathbb{P}\Bigl(\tau^{\sigma}_{\mathcal{D}}<\exp\Bigl\{\frac{2(H-\delta)}{\sigma^{2}}\Bigr\},\sup_{t\in[T^{\sigma}_{\mathsf{st}}(\kappa);\mathrm{e}^{\frac{2H}{\sigma^{2}}}]}|X^{\sigma}_{t}-Y^{\sigma}_{t}|\leq\kappa\Bigr)\] \[\qquad+\mathbb{P}\Bigl(\sup_{t\in[T^{\sigma}_{\mathsf{st}}(\kappa);\mathrm{e}^{\frac{2H}{\sigma^{2}}}]}|X^{\sigma}_{t}-Y^{\sigma}_{t}|>\kappa\Bigr).\] By the construction of the domain \(\mathcal{D}_{\kappa}^{\mathsf{c}}\) (see Remark 1.5), \(d(\mathcal{D}_{\kappa}^{\mathsf{c}},\partial\mathcal{D})\geq\kappa\). Let us define \(\delta_{\kappa}:=H-H_{\kappa}^{\mathsf{c}}\). Note that \(H_{\kappa}^{\mathsf{c}}\xrightarrow[\kappa\to 0]{}H\) due to the continuity of the effective potential \(W_{a}\). Therefore, we can choose \(\kappa\) to be small enough such that \(\delta_{\kappa}<\delta\). Then the following inequality holds: \[\mathbb{P}\left(\tau_{\mathcal{D}}^{\sigma}<\exp\biggl\{\frac{2(H-\delta)}{\sigma^{2}}\biggr\},\quad\sup|X_{t}^{\sigma}-Y_{t}^{\sigma}|\leq\kappa\right)\] \[\quad\leq\mathbb{P}\Bigl(\tau_{\mathcal{D}_{\kappa}^{\mathsf{c}}}^{Y,\sigma}<\exp\Bigl\{\frac{2(H-\delta)}{\sigma^{2}}\Bigr\}=\exp\Bigl\{\frac{2(H_{\kappa}^{\mathsf{c}}+\delta_{\kappa}-\delta)}{\sigma^{2}}\Bigr\}\Bigr)\] \[\quad\leq\mathbb{P}(|X_{T_{\mathsf{st}}^{\sigma}(\kappa)}^{\sigma}-a|>\kappa)+\sup_{y\in B_{\kappa}(a)}\mathsf{P}_{y}\left(\tau_{\mathcal{D}_{\kappa}^{\mathsf{c}}}^{Y,\sigma}<\exp\biggl\{\frac{2(H_{\kappa}^{\mathsf{c}}+\delta_{\kappa}-\delta)}{\sigma^{2}}\biggr\}\right)\xrightarrow[\sigma\to 0]{}0,\] where the convergence to \(0\) is due to Lemma 3.1 and Proposition 3.3, since \(\delta_{\kappa}-\delta<0\). The other probabilities in (4.1) converge to \(0\) by Corollaries 3.2 and 3.7. **Step 2.** To prove the upper bound of Kramers' law, consider the set \(\mathcal{D}_{\kappa}^{\mathsf{e}}\) (see Remark 1.5), an enlargement of \(\mathcal{D}\) for small enough \(\kappa>0\). Let \(\eta>0\) be the positive constant defined in Corollary 3.7. Without loss of generality, let us fix positive \(\delta<\eta\). Consider the following inequalities.
\[\mathbb{P}\left(\tau_{\mathcal{D}}^{\sigma}>\exp\biggl\{\frac{2(H+\delta)}{\sigma^{2}}\biggr\}\right)\leq\mathbb{P}(\tau_{\mathcal{D}}^{\sigma}<T_{\mathsf{st}}^{\sigma}(\kappa)) \tag{4.2}\] \[\quad+\mathbb{P}\Bigl(\tau_{\mathcal{D}}^{\sigma}>\exp\Bigl\{\frac{2(H+\delta)}{\sigma^{2}}\Bigr\},\quad\sup_{t\in[T_{\mathsf{st}}^{\sigma}(\kappa);\mathrm{e}^{\frac{2(H+\delta)}{\sigma^{2}}}]}|X_{t}^{\sigma}-Y_{t}^{\sigma}|\leq\kappa\Bigr)\] \[\quad+\mathbb{P}\Bigl(\sup_{t\in[T_{\mathsf{st}}^{\sigma}(\kappa);\mathrm{e}^{\frac{2(H+\delta)}{\sigma^{2}}}]}|X_{t}^{\sigma}-Y_{t}^{\sigma}|>\kappa\Bigr).\] If \(\tau_{\mathcal{D}}^{\sigma}>\exp\Bigl\{\frac{2(H+\delta)}{\sigma^{2}}\Bigr\}\) and \(\sup\Bigl\{|X_{t}^{\sigma}-Y_{t}^{\sigma}|:t\in[T_{\mathsf{st}}^{\sigma}(\kappa);\mathrm{e}^{\frac{2(H+\delta)}{\sigma^{2}}}]\Bigr\}\leq\kappa\), then at the point of time \(\mathrm{e}^{\frac{2(H+\delta)}{\sigma^{2}}}\) the process \(Y^{\sigma}\) is still inside \(\mathcal{D}_{\kappa}^{\mathsf{e}}\). Define \(\delta_{\kappa}:=H_{\kappa}^{\mathsf{e}}-H\), decrease \(\kappa\) if necessary such that \(\delta_{\kappa}<\delta\), and consider \[\mathbb{P}\left(\tau_{\mathcal{D}}^{\sigma}>\exp\biggl\{\frac{2(H+\delta)}{\sigma^{2}}\biggr\},\quad\sup_{t\in[T_{\mathsf{st}}^{\sigma}(\kappa);\mathrm{e}^{\frac{2(H+\delta)}{\sigma^{2}}}]}|X_{t}^{\sigma}-Y_{t}^{\sigma}|\leq\kappa\right)\] \[\quad\leq\mathbb{P}\Bigl(\tau_{\mathcal{D}_{\kappa}^{\mathsf{e}}}^{Y,\sigma}>\exp\Bigl\{\frac{2(H+\delta)}{\sigma^{2}}\Bigr\}=\exp\Bigl\{\frac{2(H_{\kappa}^{\mathsf{e}}-\delta_{\kappa}+\delta)}{\sigma^{2}}\Bigr\}\Bigr)\] \[\quad\leq\mathbb{P}(|X_{T_{\mathsf{st}}^{\sigma}(\kappa)}^{\sigma}-a|>\kappa)+\sup_{y\in B_{\kappa}(a)}\mathsf{P}_{y}\left(\tau_{\mathcal{D}_{\kappa}^{\mathsf{e}}}^{Y,\sigma}>\exp\biggl\{\frac{2(H_{\kappa}^{\mathsf{e}}-\delta_{\kappa}+\delta)}{\sigma^{2}}\biggr\}\right)\xrightarrow[\sigma\to 0]{}0,\] where the convergence to \(0\) holds due to Lemma 3.1 and Proposition 3.3. We finalise the proof of the Kramers' type law by observing that, as in Step 1, all the other probabilities in (4.2) also tend to \(0\) by Corollaries 3.2 and 3.7. **Step 3.** Let us now show the exit-location result. Fix a set \(N\subset\partial\mathcal{D}\) such that \(\inf\limits_{z\in N}\{W_{a}(z)-W_{a}(a)\}>H\). Let us choose \(\xi>0\) to be small enough such that \(\xi<\big(\inf\limits_{z\in N}\{W_{a}(z)-W_{a}(a)\}-H\big)/2\). Let us define the sublevel set \(L^{-}_{H+\xi}:=\{x\in\mathbb{R}^{d}:W_{a}(x)-W_{a}(a)\leq H+\xi\}\) (with a slight abuse of notation, by \(L^{-}_{H+\xi}\) we denote the connected component of this sublevel set that contains \(a\)). By the geometric properties of the effective potential (regularity and growth at infinity), \(L^{-}_{H+\xi}\) satisfies Assumptions A-5-A-7. Thus, after the initial convergence of \(X^{\sigma}\) to \(a\) and of its law \(\mu^{\sigma}_{t}\) to \(\delta_{a}\), the Kramers' type law holds for the exit-time \(\tau^{\sigma}_{L^{-}_{H+\xi}}\), that is, for any \(\delta>0\), \[\lim\limits_{\sigma\to 0}\mathbb{P}\left(\mathrm{e}^{\frac{2(H+\xi-\delta)}{\sigma^{2}}}\leq\tau^{\sigma}_{L^{-}_{H+\xi}}\leq\mathrm{e}^{\frac{2(H+\xi+\delta)}{\sigma^{2}}}\right)=1, \tag{4.3}\] including for \(\delta=\xi/2\). One can easily show geometrically that exiting \(\mathcal{D}\) through the set \(N\) means crossing the boundary \(L_{H+\xi}:=\partial L^{-}_{H+\xi}\) before leaving the domain \(\mathcal{D}\).
Therefore, we get the following inequality: \[\mathbb{P}(X^{\sigma}_{\tau^{\sigma}_{\mathcal{D}}}\in N)\leq\mathbb{P}(\tau^{\sigma}_{\mathcal{D}}\leq T^{\sigma}_{\mathsf{st}}(\kappa))+\mathbb{P}(\tau^{\sigma}_{L^{-}_{H+\xi}}\leq\tau^{\sigma}_{\mathcal{D}}).\] The first probability converges to \(0\) by Corollary 3.2. Let us look at the second probability: \[\mathbb{P}(\tau^{\sigma}_{L^{-}_{H+\xi}}\leq\tau^{\sigma}_{\mathcal{D}})\leq\mathbb{P}\left(\tau^{\sigma}_{\mathcal{D}}\geq\mathrm{e}^{\frac{2(H+\xi/2)}{\sigma^{2}}}\right)+\mathbb{P}\left(\tau^{\sigma}_{L^{-}_{H+\xi}}\leq\tau^{\sigma}_{\mathcal{D}}<\mathrm{e}^{\frac{2(H+\xi/2)}{\sigma^{2}}}\right)\xrightarrow[\sigma\to 0]{}0,\] where the first probability tends to \(0\) by the Kramers' type law (Step 2) and the second probability tends to \(0\) by (4.3) if we take \(\delta=\xi/2\). ### Proof of Corollaries 2.3 and 2.4 We consider an unbounded domain \(\mathcal{D}\) with finite exit-cost \(H>0\). Then, set \(L^{-}_{H+\xi}:=\big\{x\in\mathbb{R}^{d}\ :\ W_{a}(x)-W_{a}(a)\leq H+\xi\big\}\). Let us assume without loss of generality that \(x_{\mathrm{init}}\in L^{-}_{H+\xi}\) (otherwise, the uniform-in-\(\sigma\) convergence in finite time into \(L^{-}_{H+\xi}\) can be easily proven using the LDP, similarly to Lemma 3.1). Let us define \(\mathcal{D}^{\prime}:=\mathcal{D}\bigcap L^{-}_{H+\xi}.\) Clearly, \(\mathcal{D}^{\prime}\) is bounded. Indeed, since \(W_{a}(x)\) tends to infinity as \(|x|\) goes to infinity, the sublevel set \(L^{-}_{H+\xi}\) is compact. The domain \(\mathcal{D}^{\prime}\) is also stable by \(-\nabla W_{a}\), since both the domains \(\mathcal{D}\) and \(L^{-}_{H+\xi}\) are stable by definition. Thus, the domain \(\mathcal{D}^{\prime}\) satisfies all the assumptions of Theorem 2.1 with the height of \(W_{a}\) inside \(\mathcal{D}^{\prime}\) being equal to \(H\). Therefore, for any \(\xi>0\) we have: \[\lim\limits_{\sigma\to 0}\mathbb{P}\left(\mathrm{e}^{\frac{2}{\sigma^{2}}(H-\xi)}\leq\tau^{\prime}(\sigma)\leq\mathrm{e}^{\frac{2}{\sigma^{2}}(H+\xi)}\right)=1\,,\] where \(\tau^{\prime}(\sigma)\) is the first exit-time of \(X^{\sigma}\) from \(\mathcal{D}^{\prime}\); indeed, the exit-cost of \(\mathcal{D}^{\prime}\) equals \(H\). Note that, by construction of the domain \(\mathcal{D}^{\prime}\), and by continuity of \(W_{a}\), for any \(\xi>0\) we have \[\inf\big\{W_{a}(z)-W_{a}(a):z\in\mathsf{Cl}(\partial\mathcal{D}^{\prime}\setminus\partial\mathcal{D})\big\}>H,\] where \(\mathsf{Cl}\) stands for closure. It means that the exit-location result of the main Theorem 2.1 holds for \(N=\mathsf{Cl}(\partial\mathcal{D}^{\prime}\setminus\partial\mathcal{D})\), namely \[\lim_{\sigma\to 0}\mathbb{P}\left(X^{\sigma}_{\tau^{\prime}(\sigma)}\in\mathsf{Cl}(\partial\mathcal{D}^{\prime}\setminus\partial\mathcal{D})\right)=0.\] That essentially means that \[\lim_{\sigma\to 0}\mathbb{P}(\tau^{\prime}(\sigma)=\tau^{\sigma}_{\mathcal{D}})=1,\] which proves Corollary 2.3. The second corollary can be proved the same way by choosing \(\xi>0\) to be small enough such that the set under consideration \(N\subset\mathcal{D}\) lies entirely beyond the level set \(L^{-}_{H+\xi}\). ## 5 Proofs of the intermediate results ### Stabilisation in finite time: Proof of Lemma 3.1 and Corollary 3.2 The proof is based on LDP ideas and the fact that, for small \(\sigma\), the process \(X^{\sigma}\) is attracted towards \(a\). Fix some \(\kappa>0\).
By Assumption A-6, the path of the deterministic solution to the following equation \[\frac{\mathrm{d}}{\mathrm{d}t}\gamma_{t}=-\nabla V(\gamma_{t}),\quad\text{with }\gamma_{0}=x_{\text{init}}, \tag{5.1}\] is contained in \(\mathcal{D}\), i.e. \(\{\gamma_{t},t\geq 0\}\subset\mathcal{D}\), and tends to \(a\). Let us decrease \(\kappa>0\) to be small enough such that the distance between the set \((\gamma_{t},t\geq 0)\) and \(\partial\mathcal{D}\) is strictly greater than \(\kappa/3\). Let us define \(\overline{T}_{\mathsf{st}}(\kappa)\) as the first time when \(\gamma_{t}\in B_{\kappa/3}(a)\). The corresponding inclusions of events give: \[\mathbb{P}\left(\left|X^{\sigma}_{\overline{T}_{\mathsf{st}}(\kappa)}-a\right|>\frac{2\kappa}{3}\right)\leq\mathbb{P}\left(\left|X^{\sigma}_{\overline{T}_{\mathsf{st}}(\kappa)}-\gamma_{\overline{T}_{\mathsf{st}}(\kappa)}\right|>\frac{\kappa}{3}\right)\leq\mathbb{P}(X^{\sigma}\in\Phi),\] where \(\Phi:=\left\{\varphi\in C\left(\left[0;\overline{T}_{\mathsf{st}}(\kappa)\right]\right):\|\varphi-\gamma\|_{\infty}\geq\kappa/3\right\}\). By Proposition 1.7, \[\limsup_{\sigma\to 0}\frac{\sigma^{2}}{2}\log\mathbb{P}(X^{\sigma}\in\Phi)\leq-\inf_{\varphi\in\Phi}I_{\overline{T}_{\mathsf{st}}(\kappa)}(\varphi). \tag{5.2}\] Note that, by the definition of the rate function \(I_{T}\) and by uniqueness of the solution to equation (5.1), the function \(\gamma\) is its unique minimizer, with \(I_{\overline{T}_{\mathsf{st}}(\kappa)}(\gamma)=0\). Since \(I_{T}\) is a good rate function, its infima over closed sets are achieved. Note that \(\gamma\notin\Phi\); thus, \(A:=\inf_{\varphi\in\Phi}I_{\overline{T}_{\mathsf{st}}(\kappa)}(\varphi)>0\). That proves the second statement of Lemma 3.1, since it guarantees that there exists \(\sigma_{\kappa}>0\) small enough such that for any \(0<\sigma<\sigma_{\kappa}\): \[\mathbb{P}\left(\left|X^{\sigma}_{\overline{T}_{\mathsf{st}}(\kappa)}-a\right|>\frac{2\kappa}{3}\right)\leq\mathrm{e}^{-\frac{2A}{\sigma^{2}}}. \tag{5.3}\] For the first statement, consider the following equality: \[\mathbb{W}_{2}^{2}\left(\mu_{\overline{T}_{\mathsf{st}}(\kappa)}^{\sigma};\delta_{a}\right)=\mathbb{E}\left|X_{\overline{T}_{\mathsf{st}}(\kappa)}^{\sigma}-a\right|^{2}=\mathbb{E}\left[\left|X_{\overline{T}_{\mathsf{st}}(\kappa)}^{\sigma}-a\right|^{2}\mathds{1}_{\{X_{\overline{T}_{\mathsf{st}}(\kappa)}^{\sigma}\in B_{2\kappa/3}(a)\}}\right]+\mathbb{E}\left[\left|X_{\overline{T}_{\mathsf{st}}(\kappa)}^{\sigma}-a\right|^{2}\mathds{1}_{\{X_{\overline{T}_{\mathsf{st}}(\kappa)}^{\sigma}\notin B_{2\kappa/3}(a)\}}\right].\] Therefore, by the Cauchy-Schwarz inequality, we can bound the difference between the two measures by: \[\mathbb{W}_{2}^{2}\left(\mu_{\overline{T}_{\mathsf{st}}(\kappa)}^{\sigma};\delta_{a}\right)\leq\frac{4\kappa^{2}}{9}+\sqrt{\mathbb{E}\left|X_{\overline{T}_{\mathsf{st}}(\kappa)}^{\sigma}-a\right|^{4}}\sqrt{\mathbb{P}\left(\left|X_{\overline{T}_{\mathsf{st}}(\kappa)}^{\sigma}-a\right|>\frac{2\kappa}{3}\right)}.\] By Proposition 1.6, there exists \(M>0\) such that \(\sup_{0<\sigma<1}\sup_{t\geq 0}\mathbb{E}|X_{t}^{\sigma}-a|^{4}<M^{2}\). This estimate along with equation (5.3) gives us: \[\mathbb{W}_{2}^{2}\left(\mu_{\overline{T}_{\mathsf{st}}(\kappa)}^{\sigma};\delta_{a}\right)\leq\frac{4\kappa^{2}}{9}+M\mathrm{e}^{-A/\sigma^{2}}.\] That expression can be bounded by \(\kappa^{2}\) if we choose \(\sigma_{\kappa}>0\) to be small enough, which proves Lemma 3.1.
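As a purely numerical aside (under the same illustrative double-well assumptions as the sketches of Section 3, which are not part of the paper), the stabilisation just proved can be observed on the particle approximation: the empirical estimate of \(\mathbb{W}_{2}(\mu_{t}^{\sigma};\delta_{a})\), namely \(\big(\frac{1}{N}\sum_{i}(x_{i}-a)^{2}\big)^{1/2}\), drops below a fixed \(\kappa\) by a time that is essentially independent of \(\sigma\).

```python
import numpy as np

# Illustrative check of Lemma 3.1 (assumed setup as in the earlier sketches):
# the empirical W_2 distance between mu_t and delta_a drops below kappa by a
# time that is essentially independent of sigma.
alpha, a = 0.5, -1.0
grad_V = lambda x: x**3 - x

def empirical_T_st(sigma, kappa=0.3, x_init=-0.4, N=400, dt=1e-2, t_max=20.0):
    rng = np.random.default_rng(0)
    x = np.full(N, x_init)  # x_init lies in the basin of attraction of a
    for k in range(1, int(t_max / dt) + 1):
        # for quadratic F, grad F * mu(x) = alpha * (x - mean(x))
        x = x - (grad_V(x) + alpha * (x - x.mean())) * dt \
              + sigma * np.sqrt(dt) * rng.standard_normal(N)
        if np.sqrt(np.mean((x - a) ** 2)) <= kappa:
            return k * dt   # empirical analogue of T_st(kappa)
    return np.inf

for sigma in (0.3, 0.2, 0.1):
    print(sigma, empirical_T_st(sigma))
```

The returned times should stay of order one for all three noise levels, in line with the uniform bound \(T_{\mathsf{st}}^{\sigma}(\kappa)\leq\overline{T}_{\mathsf{st}}(\kappa)\).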
Corollary 3.2 can also be easily proven by choosing \(\kappa\) such that \[\inf_{t\geq 0}\inf_{z\in\partial\mathcal{D}}|\gamma_{t}-z|>\frac{\kappa}{3}.\] In this case, the following estimate holds: \[\mathbb{P}\left(\tau_{\mathcal{D}}^{\sigma}\leq\overline{T}_{\mathsf{st}}(\kappa)\right)\leq\mathbb{P}(X^{\sigma}\in\Phi)\leq\mathrm{e}^{-\frac{2A}{\sigma^{2}}}\xrightarrow[\sigma\to 0]{}0.\] ### The coupling estimate: Proof of Proposition 3.4 In this section, we prove Proposition 3.4. The idea of the proof is based on the following fact: since the processes \(X^{\sigma}\) and \(Y^{\sigma}\) are coupled through the same Brownian motion, by the properties of convex sets, whenever both \(X^{\sigma}\) and \(Y^{\sigma}\) belong to the set \(B_{\rho}(a)\) (Definition 1.2), the distance between them decreases a.s. (we show this in Lemma 5.1). At the same time, whenever the two processes belong to the region \(\mathcal{D}\setminus B_{\rho}(a)\), their maximum scatter can be controlled in terms of the time spent inside \(\mathcal{D}\setminus B_{\rho}(a)\) (Lemma 5.2 below). The proof is finished by observing that, before exiting \(\mathcal{D}\), the processes \(X^{\sigma}\) and \(Y^{\sigma}\) spend enough time inside \(B_{\rho}(a)\), compared to the total time spent inside \(\mathcal{D}\setminus B_{\rho}(a)\), for the attracting effect to surpass the scattering one. Before proving the proposition rigorously, let us introduce the following notions. Without loss of generality, let us decrease \(\kappa>0\) to be smaller than \(\rho/4\). Let us also fix some enlargement \(\mathcal{D}^{\mathsf{e}}_{R}\) of the domain \(\mathcal{D}\) of some radius \(R>0\) (see Remark 1.5 for the definition). Decrease \(\kappa\), if necessary, so that \(\kappa<R/2\). Consider the following sequence of stopping times: \[\theta_{1}:=\inf\{t\geq T^{\sigma}_{\mathsf{st}}(\kappa):\;Y^{\sigma}_{t}\notin B_{\rho/2}(a)\}, \tag{5.4}\] \[\tau_{m}:=\inf\{t\geq\theta_{m}:Y^{\sigma}_{t}\in B_{\rho/4}(a)\cup\partial\mathcal{D}^{\mathsf{e}}_{R}\},\] \[\theta_{m+1}:=\inf\{t\geq\tau_{m}:Y^{\sigma}_{t}\notin B_{\rho/2}(a)\}.\] We also define the following stopping times, which will allow us to study the behaviour of \(\theta_{i}\), \(\tau_{i}\) for different \(i\) using the strong Markov property of the diffusion \(Y^{\sigma}\). For any \(y\in\mathbb{R}^{d}\), consider: \[\theta_{0}:=\inf\{t\geq 0:\;Y^{y,\sigma}_{t}\notin B_{\rho/2}(a)\}, \tag{5.5}\] \[\tau_{0}:=\inf\{t\geq 0:Y^{y,\sigma}_{t}\in B_{\rho/4}(a)\cup\partial\mathcal{D}^{\mathsf{e}}_{R}\}.\] Consider the following lemma. **Lemma 5.1**.: _Define for some \(K>0\) the following family of mappings \(\varphi_{T}:x\mapsto x\mathrm{e}^{-KT}+o_{\kappa}(1)\) for any \(T>0\), where \(o_{\kappa}(1)\xrightarrow[\kappa\to 0]{}0\). Then, under Assumptions A-1-A-7, there exists a constant \(K>0\) such that for any \(\alpha<\rho/4\), for any \(m\geq 1\), and for any \(\kappa>0\) small enough:_ \[\mathbb{P}\left(\sup_{t\in[\tau_{m};\theta_{m+1}]}|X^{\sigma}_{t}-Y^{\sigma}_{t}|>\varphi_{\theta_{m+1}-\tau_{m}}(\alpha),A\right)=0,\] _where \(A:=\{\theta_{m+1}\leq S^{\sigma}_{\mathsf{st}}(\kappa),\sup_{t\leq\tau_{m}}|X^{\sigma}_{t}-Y^{\sigma}_{t}|\leq\alpha\}\)._ Proof.: Let us define the random time \(\mathcal{T}:=\inf\{t\geq\tau_{m}:X^{\sigma}_{t}\notin B_{\rho}(a)\}\), the first time when \(X^{\sigma}\) leaves the convexity region \(B_{\rho}(a)\). Obviously, for almost every \(\omega\in A\), we have \(\mathcal{T}>\tau_{m}\). **Step 1.** Let us define \(\xi(t):=|X^{\sigma}_{t}-Y^{\sigma}_{t}|^{2}\).
The way the processes \(X^{\sigma}\) and \(Y^{\sigma}\) are coupled ensures that \(\xi\) is differentiable in the usual sense. Its derivative is equal to: \[\xi^{\prime}(t)=-2\langle X^{\sigma}_{t}-Y^{\sigma}_{t};\,\nabla W_{a}(X^{\sigma}_{t})-\nabla W_{a}(Y^{\sigma}_{t})\rangle-2\langle X^{\sigma}_{t}-Y^{\sigma}_{t};\,\nabla F*\mu^{\sigma}_{t}(X^{\sigma}_{t})-\nabla F*\delta_{a}(X^{\sigma}_{t})\rangle.\] Since in this lemma we consider only outcomes such that \(\mathbb{W}_{2}(\mu^{\sigma}_{t};\delta_{a})\leq\kappa\) and \(|X^{\sigma}_{\tau_{m}}-Y^{\sigma}_{\tau_{m}}|\leq\alpha\), i.e. \(\omega\in A\), after integrating over the time interval \([\tau_{m};\theta_{m+1}\wedge\mathcal{T}]\) and applying Assumption A-4 (see also Definition 1.2), we get the following estimate. For any \(t>0\) and for \(\mathbb{P}\)-a.e. \(\omega\in A\cap\{t\in[\tau_{m};\theta_{m+1}\wedge\mathcal{T}]\}\): \[\xi(t)\leq|X^{\sigma}_{\tau_{m}}-Y^{\sigma}_{\tau_{m}}|^{2}-2\int_{\tau_{m}}^{t}\langle X^{\sigma}_{s}-Y^{\sigma}_{s};\,\nabla W_{a}(X^{\sigma}_{s})-\nabla W_{a}(Y^{\sigma}_{s})\rangle\,\mathrm{d}s \tag{5.6}\] \[\quad+2\int_{\tau_{m}}^{t}|X^{\sigma}_{s}-Y^{\sigma}_{s}||\nabla F*\mu^{\sigma}_{s}(X^{\sigma}_{s})-\nabla F*\delta_{a}(X^{\sigma}_{s})|\,\mathrm{d}s\] \[\leq\alpha^{2}-2C_{W}\int_{\tau_{m}}^{t}\xi(s)\,\mathrm{d}s+2\int_{\tau_{m}}^{t}\sqrt{\xi(s)}\Big|\nabla F*\mu^{\sigma}_{s}(X^{\sigma}_{s})-\nabla F*\delta_{a}(X^{\sigma}_{s})\Big|\,\mathrm{d}s\,.\] Since the term \(\Big|\nabla F*\mu^{\sigma}_{s}(X^{\sigma}_{s})-\nabla F*\delta_{a}(X^{\sigma}_{s})\Big|\) is hard to analyse, we study it separately. **Step 2.** By Assumption \((F-4)\) of A-2, we can estimate: \[\int_{\mathbb{R}^{d}}\Big|\nabla F(X_{s}^{\sigma}-z)-\nabla F(X_{s}^{\sigma}-a)\Big|\mu_{s}^{\sigma}(\mathrm{d}z)\leq C^{\prime}\int_{\mathbb{R}^{d}}|z-a|\big(1+|X_{s}^{\sigma}-z|^{2r-1}+|X_{s}^{\sigma}-a|^{2r-1}\big)\mu_{s}^{\sigma}(\mathrm{d}z)\] \[\leq C^{\prime}\int_{\mathbb{R}^{d}}|z-a|\big(1+2^{2r-1}|X_{s}^{\sigma}|^{2r-1}+2^{2r-2}|z|^{2r-1}+2^{2r-2}|a|^{2r-1}\big)\mu_{s}^{\sigma}(\mathrm{d}z).\] In the following, we denote by \(\mathrm{C}\) a generic constant that may depend on \(r\), \(\rho\), and other parameters defined in the assumptions. The bound thus takes the form: \[\mathrm{C}\int_{\mathbb{R}^{d}}|z-a|\big(\mathrm{C}+\mathrm{C}\,|X_{s}^{\sigma}|^{2r-1}+|z|^{2r-1}+|a|^{2r-1}\big)\mu_{s}^{\sigma}(\mathrm{d}z)\leq\mathrm{C}\sqrt{\int_{\mathbb{R}^{d}}|z-a|^{2}\mu_{s}^{\sigma}(\mathrm{d}z)}\sqrt{\mathrm{C}+\mathrm{C}\,|X_{s}^{\sigma}|^{4r-2}+|a|^{4r-2}+\int_{\mathbb{R}^{d}}|z|^{4r-2}\mu_{s}^{\sigma}(\mathrm{d}z)}\,.\] Since we only consider \(\omega\in A\cap\{t\in[\tau_{m};\theta_{m+1}\wedge\mathcal{T}]\}\), \(X^{\sigma}\) belongs to \(B_{\rho}(a)\) and is thus bounded by a constant. Moreover, \(\mathbb{W}_{2}(\mu_{s}^{\sigma};\delta_{a})\leq\kappa\) and \(|Y_{t}^{\sigma}-a|\leq\sup_{z\in\partial\mathcal{D}_{R}^{\mathsf{e}}}|z-a|\) by the definition of the set \(A\). At the same time, by Proposition 1.6, we know that \(\int|z|^{4r-2}\mathrm{d}\mu_{s}^{\sigma}\leq M\) for any time \(s\geq 0\) and for any \(0\leq\sigma\leq 1\). Therefore, for any \(t>0\) and for any \(\omega\in A\cap\{t\in[\tau_{m};\theta_{m+1}\wedge\mathcal{T}]\}\) we have \[\int_{\mathbb{R}^{d}}\Big|\nabla F(X_{s}^{\sigma}-z)-\nabla F(X_{s}^{\sigma}-a)\Big|\mu_{s}^{\sigma}(\mathrm{d}z)\leq\mathrm{C}\kappa.\] **Step 3.** Let us come back to equation (5.6).
Given the calculations in Step 2, the final bound takes the following form: \[\xi(t)\leq\alpha^{2}-2C_{W}\int_{\tau_{m}}^{t}\xi(s)\,\mathrm{d}s+2\kappa\mathrm{C}\int_{\tau_{m}}^{t}\!\!\sqrt{\xi(s)}\,\mathrm{d}s\,.\] This means that, if we introduce the deterministic function \(\psi\) that is the unique solution of the equation \[\psi(u)=\alpha^{2}-2C_{W}\int_{0}^{u}\psi(s)\,\mathrm{d}s+2\kappa\mathrm{C}\int_{0}^{u}\sqrt{\psi(s)}\,\mathrm{d}s\,,\] then \(\xi(\tau_{m}+u)\leq\psi(u)\) for any positive \(u\leq t-\tau_{m}\) and for \(\mathbb{P}\)-a.e. point \(\omega\in A\cap\{t\in[\tau_{m};\theta_{m+1}\wedge\mathcal{T}]\}\). If \(\alpha>\frac{\mathrm{C}\kappa}{2C_{W}}\), we can solve this equation explicitly and get: \[\sqrt{\psi(u)}=\left(\alpha-\frac{\mathrm{C}}{2C_{W}}\kappa\right)\mathrm{e}^{-2C_{W}u}+\frac{\mathrm{C}}{2C_{W}}\kappa. \tag{5.7}\] Otherwise, we can simply bound \(\psi(u)\) by \[\psi(u)\leq\frac{\mathrm{C}}{4C_{W}^{2}}\kappa^{2}, \tag{5.8}\] since \(\psi^{\prime}(u)<0\) whenever \(\psi(u)>\mathrm{C}\kappa^{2}/(4C_{W}^{2})\). In both cases, \(\psi\) can be bounded in the form: \[\sqrt{\psi(u)}\leq\alpha\mathrm{e}^{-2C_{W}u}+o_{\kappa}(1).\] In particular, it means that if there is some random time \(\mathcal{S}\) defined for \(\omega\in A\) and such that for \(\mathbb{P}\)-a.e. \(\omega\in A\) we have \(\tau_{m}\leq\mathcal{S}\leq\theta_{m+1}\wedge\mathcal{T}\), then: \[\sqrt{\xi(\mathcal{S})}\leq\alpha\mathrm{e}^{-2C_{W}(\mathcal{S}-\tau_{m})}+o_{\kappa}(1)\] for \(\mathbb{P}\)-a.e. \(\omega\in A\). **Step 4.** To finalise the proof, let us show that for \(\mathbb{P}\)-a.e. \(\omega\in A\), we have \(\mathcal{T}>\theta_{m+1}\). Indeed, if this is not true, then there exists a set \(B\subseteq A\) with \(\mathbb{P}(B)>0\) such that, for any \(\omega\in B\), \(X_{\mathcal{T}}^{\sigma}\notin B_{\rho}(a)\) but \(Y_{\mathcal{T}}^{\sigma}\in B_{\rho/2}(a)\). Yet, by the derivations of Step 3, for \(\mathbb{P}\)-a.e. \(\omega\in A\): \[|X_{\mathcal{T}}^{\sigma}-Y_{\mathcal{T}}^{\sigma}|\leq\max\left(\alpha;\frac{\mathrm{C}}{2C_{W}}\kappa\right).\] Since \(\alpha<\rho/4\) while the distance between \(B_{\rho/2}(a)\) and \(\partial B_{\rho}(a)\) equals \(\rho/2\), we can choose, without loss of generality, \(\kappa>0\) small enough to obtain a contradiction. That proves the lemma. For the control outside of the set \(B_{\rho}(a)\), consider the following lemma. **Lemma 5.2**.: _Define for some constant \(L>0\) and for any \(T>0\) the following mapping: \(\psi_{T}:x\mapsto x\mathrm{e}^{LT}\). Then, under Assumptions A-1-A-7, there exists a constant \(L>0\) such that for any \(\alpha<\rho/4\), for any \(m\geq 1\), and for any \(\kappa>0\) small enough:_ \[\mathbb{P}\left(\sup_{t\in[\theta_{m};\tau_{m}]}|X_{t}^{\sigma}-Y_{t}^{\sigma}|>\psi_{\tau_{m}-\theta_{m}}(\alpha),A\right)=0,\] _where \(A:=\{\tau_{m}\leq S_{\mathsf{st}}^{\sigma}(\kappa),\;\sup_{t\leq\theta_{m}}|X_{t}^{\sigma}-Y_{t}^{\sigma}|\leq\alpha\}\)._ Proof.: As in the proof of Lemma 5.1, we first introduce \(\xi(t)=|X_{t}^{\sigma}-Y_{t}^{\sigma}|^{2}\) and then differentiate this function with respect to time. The difference is that now we cannot use the convexity properties of the set \(B_{\rho}(a)\). Moreover, we will not be able to provide a good upper bound for \(|X_{t}^{\sigma}|\), since \(Y^{\sigma}\) and \(X^{\sigma}\) drift apart from each other. **Step 1.** The following inequality holds for \(\mathbb{P}\)-a.e.
\(\omega\in A\cap\{t\in[\theta_{m};\tau_{m}]\}\): \[\xi(t)\leq|X_{\theta_{m}}^{\sigma}-Y_{\theta_{m}}^{\sigma}|^{2}-2\int_{\theta_{m}}^{t}\langle X_{s}^{\sigma}-Y_{s}^{\sigma};\nabla V(X_{s}^{\sigma})-\nabla V(Y_{s}^{\sigma})\rangle\,\mathrm{d}s-2\int_{\theta_{m}}^{t}\langle X_{s}^{\sigma}-Y_{s}^{\sigma};\nabla F*\mu_{s}^{\sigma}(X_{s}^{\sigma})-\nabla F(Y_{s}^{\sigma}-a)\rangle\,\mathrm{d}s\,.\] Using the Cauchy-Schwarz inequality, we can obtain the following bound: \[\xi(t)\leq\alpha^{2}+2\int_{\theta_{m}}^{t}\sqrt{\xi(s)}\;\big|\nabla V(X_{s}^{\sigma})-\nabla V(Y_{s}^{\sigma})\big|\,\mathrm{d}s+2\int_{\theta_{m}}^{t}\sqrt{\xi(s)}\;\big|\nabla F*\mu_{s}^{\sigma}(X_{s}^{\sigma})-\nabla F(Y_{s}^{\sigma}-a)\big|\,\mathrm{d}s=:\alpha^{2}+I_{1}+I_{2}.\] Let us consider \(I_{1}\) and \(I_{2}\) separately. In the following, \(\mathrm{C}\) will denote a generic constant that may depend on parameters defined in the assumptions. **Step 2.** For the first expression \(I_{1}\), we use Assumption \((V-5)\) of A-1 and get: \[2\!\int_{\theta_{m}}^{t}\!\!\sqrt{\xi(s)}\,\big|\nabla V(X_{s}^{\sigma})-\nabla V(Y_{s}^{\sigma})\big|\,\mathrm{d}s\leq\mathrm{C}\!\!\int_{\theta_{m}}^{t}\!\!\xi(s)\big(1+|X_{s}^{\sigma}|^{2r-1}+|Y_{s}^{\sigma}|^{2r-1}\big)\,\mathrm{d}s\,.\] By adding and subtracting \(Y_{s}^{\sigma}\) in the expression above, we can upper bound it by \[\mathrm{C}\int_{\theta_{m}}^{t}\xi(s)\left(\mathrm{C}+\mathrm{C}\xi(s)^{\frac{2r-1}{2}}+|Y_{s}^{\sigma}|^{2r-1}\right)\mathrm{d}s\,.\] Moreover, since we consider only those \(\omega\) for which \(t\leq\tau_{m}\), \(Y_{s}^{\sigma}\) belongs to \(\mathcal{D}_{R}^{\mathsf{e}}\), which is a bounded set. Therefore, the upper bound takes the final form: \[I_{1}\leq\mathrm{C}\int_{\theta_{m}}^{t}\xi(s)\left(\mathrm{C}+\xi(s)^{\frac{2r-1}{2}}\right)\mathrm{d}s\,. \tag{5.9}\] **Step 3.** For the second expression \(I_{2}\), let us use Assumption \((F-4)\) of A-2 and get: \[I_{2}\leq\mathrm{C}\int_{\theta_{m}}^{t}\sqrt{\xi(s)}\int_{\mathbb{R}^{d}}|X_{s}^{\sigma}-z-Y_{s}^{\sigma}+a|\big(1+|X_{s}^{\sigma}-z|^{2r-1}+|Y_{s}^{\sigma}-a|^{2r-1}\big)\mu_{s}^{\sigma}(\mathrm{d}z)\,\mathrm{d}s\] \[\leq\mathrm{C}\int_{\theta_{m}}^{t}\int_{\mathbb{R}^{d}}\xi(s)\big(1+|X_{s}^{\sigma}-z|^{2r-1}+|Y_{s}^{\sigma}-a|^{2r-1}\big)\mu_{s}^{\sigma}(\mathrm{d}z)\,\mathrm{d}s+\mathrm{C}\int_{\theta_{m}}^{t}\int_{\mathbb{R}^{d}}\sqrt{\xi(s)}|z-a|\big(1+|X_{s}^{\sigma}-z|^{2r-1}+|Y_{s}^{\sigma}-a|^{2r-1}\big)\mu_{s}^{\sigma}(\mathrm{d}z)\,\mathrm{d}s\,.\] Let us denote the two expressions above as \(A_{1}\) and \(A_{2}\). For \(A_{1}\), we add and subtract \(Y_{s}^{\sigma}\) inside \(|X_{s}^{\sigma}-z|^{2r-1}\) and get: \[A_{1}\leq\mathrm{C}\int_{\theta_{m}}^{t}\int_{\mathbb{R}^{d}}\xi(s)\left(\mathrm{C}+\mathrm{C}\xi(s)^{\frac{2r-1}{2}}+\mathrm{C}|Y_{s}^{\sigma}|^{2r-1}+|z|^{2r-1}+|a|^{2r-1}\right)\mu_{s}^{\sigma}(\mathrm{d}z)\,\mathrm{d}s\,.\] As was pointed out above, since \(t\in[\theta_{m};\tau_{m}]\), \(|Y_{s}^{\sigma}-a|\) is bounded for \(\mathbb{P}\)-a.e. \(\omega\in A\cap\{t\in[\theta_{m};\tau_{m}]\}\). Moreover, by Proposition 1.6, there exists \(M>0\) such that \(\int|z|^{2r-1}\mathrm{d}\mu_{s}^{\sigma}<M\).
Thus: \[A_{1}\leq\mathrm{C}\int_{\theta_{m}}^{t}\xi(s)\left(\mathrm{C}+\xi(s)^{\frac{2r-1}{2}}\right)\mathrm{d}s\,.\] Similarly, for \(A_{2}\): \[A_{2}\leq\mathrm{C}\int_{\theta_{m}}^{t}\int_{\mathbb{R}^{d}}\sqrt{\xi(s)}|z-a|\left(\mathrm{C}+\mathrm{C}\xi(s)^{\frac{2r-1}{2}}+\mathrm{C}|Y_{s}^{\sigma}|^{2r-1}+|z|^{2r-1}+|a|^{2r-1}\right)\mu_{s}^{\sigma}(\mathrm{d}z)\,\mathrm{d}s\,.\] By the Cauchy-Schwarz inequality, and since both \(\int_{\mathbb{R}^{d}}|z|\mathrm{d}\mu_{s}^{\sigma}\) and \(\int_{\mathbb{R}^{d}}|z|^{4r-2}\mathrm{d}\mu_{s}^{\sigma}\) are bounded by a constant, we get: \[A_{2}\leq\mathrm{C}\int_{\theta_{m}}^{t}\sqrt{\xi(s)}\sqrt{\mathrm{C}+\xi(s)^{2r-1}}\,\mathrm{d}s\,,\] which gives the following bound for \(I_{2}\): \[I_{2}\leq\mathrm{C}\int_{\theta_{m}}^{t}\left(\mathrm{C}\xi(s)\left(\mathrm{C}+\xi(s)^{\frac{2r-1}{2}}\right)+\sqrt{\xi(s)}\sqrt{\mathrm{C}+\xi(s)^{2r-1}}\right)\mathrm{d}s\,.\] Since for any \(\beta>1\) and any \(x\geq 0\) we have \(x^{\beta}\leq\sqrt{x}+x^{\beta+1}\), and since \(\sqrt{x}\leq 1+x\), we can roughly bound \(I_{2}\) by the following expression: \[I_{2}\leq\mathrm{C}\int_{\theta_{m}}^{t}\left(\xi(s)^{r+1}+\sqrt{\xi(s)}\right)\mathrm{d}s \tag{5.10}\] **Step 4.** From (5.9) and (5.10) we get that for \(\mathbb{P}\)-a.e. \(\omega\in A\cap\{t\in[\theta_{m};\tau_{m}]\}\): \[\xi(t)\leq\alpha^{2}+\mathrm{C}\int_{\theta_{m}}^{t}\left(\mathrm{C}\xi(s)^{r+1}+\sqrt{\xi(s)}\right)\mathrm{d}s\,.\] Obviously, \(\xi(t)\) is bounded for the respective \(\omega\) and \(t\) by a function of the form: \[\psi(u)=\alpha^{2}+\mathrm{C}\int_{0}^{u}\left(\mathrm{C}\psi(s)^{r+1}+\sqrt{\psi(s)}\right)\mathrm{d}s\,,\] which in turn can be bounded as follows. Note that for each period of time when \(\psi(u)\leq 1\), it is simply bounded by a linear function: \[\psi(u)\leq\alpha^{2}+\mathrm{C}u.\] Otherwise, its upper bound takes the form: \[\psi(u)\leq\alpha^{2}+\mathrm{C}\int_{0}^{u}\psi(s)^{r+1}\,\mathrm{d}s\,,\] whose right-hand side is polynomial in \(\psi\). By choosing the right constant \(L>0\), we can easily bound \(\psi\) by \[\psi(u)\leq\alpha^{2}\mathrm{e}^{2Lu},\] which proves the lemma, using the same approach as in Steps 3 and 4 of the proof of Lemma 5.1. The following lemma establishes the maximum number of excursions of the process \(Y^{\sigma}\) from \(B_{\rho/2}(a)\). Let us define the height of the effective potential inside the ball \(B_{\rho/2}(a)\) as \(Q^{\mathsf{c}}:=\inf_{z\in\partial B_{\rho/2}(a)}\{W_{a}(z)-W_{a}(a)\}\). Recall that \(H^{\mathsf{e}}_{R}:=\inf_{z\in\partial\mathcal{D}^{\mathsf{e}}_{R}}\{W_{a}(z)-W_{a}(a)\}\) is the height of the effective potential inside the set \(\mathcal{D}^{\mathsf{e}}_{R}\). Consider the following lemma: **Lemma 5.3**.: _Let \(N^{*}:=2\left\lceil\exp\bigl\{\frac{2}{\sigma^{2}}(H_{R}^{\mathsf{e}}-Q^{\mathsf{c}}+\kappa)\bigr\}\right\rceil\). Let \(\tau_{N^{*}}\) be defined as in (5.4). Then, for any \(\kappa>0\) small enough:_ 1. \(\mathbb{P}(\tau_{\mathcal{D}_{R}^{\mathsf{e}}}^{Y,\sigma}>\tau_{N^{*}})\xrightarrow[\sigma\to 0]{}0\)_._ 2. _There exists_ \(T_{1}>0\) _such that_ \(\mathbb{P}(\exists i\leq N^{*}:\tau_{i}-\theta_{i}>T_{1})\xrightarrow[\sigma\to 0]{}0\)_._ Proof.: We separate the proof into two steps. **Step 1.** Let us prove the first part of the lemma.
Note that if \(\tau_{N^{*}}\) is less than or equal to \(\exp\bigl\{\frac{2}{\sigma^{2}}(H_{R}^{\mathbf{e}}+\frac{\kappa}{2})\bigr\}\), then necessarily the number of intervals of the form \([\tau_{i-1};\theta_{i}]\) such that \(\theta_{i}-\tau_{i-1}\geq\exp\bigl\{\frac{2}{\sigma^{2}}(Q^{\mathbf{c}}-\frac{\kappa}{2})\bigr\}\) cannot exceed \(N^{*}/2\), by definition of the latter. Based on this observation and using Proposition 3.3, we have
\[\begin{split}\mathbb{P}(\tau_{\mathcal{D}_{R}^{\mathbf{e}}}^{Y,\sigma}>\tau_{N^{*}})&\leq\mathbb{P}\Bigl(\tau_{N^{*}}<\tau_{\mathcal{D}_{R}^{\mathbf{e}}}^{Y,\sigma}<\mathrm{e}^{\frac{2}{\sigma^{2}}(H_{R}^{\mathbf{e}}+\frac{\kappa}{2})}\Bigr)+\mathbb{P}\left(\tau_{\mathcal{D}_{R}^{\mathbf{e}}}^{Y,\sigma}\geq\mathrm{e}^{\frac{2}{\sigma^{2}}(H_{R}^{\mathbf{e}}+\frac{\kappa}{2})}\right)\\ &\leq\mathbb{P}\Bigl(\#\Bigl\{i\leq N^{*}:\theta_{i}-\tau_{i-1}<\mathrm{e}^{\frac{2}{\sigma^{2}}(Q^{\mathbf{c}}-\frac{\kappa}{2})}\Bigr\}>\frac{N^{*}}{2}\Bigr)+o_{\sigma}(1),\end{split} \tag{5.11}\]
where \(o_{\sigma}(1)\) is an infinitesimal with respect to \(\sigma\). Consider
\[\begin{split}\mathbb{P}\Bigl(\#\Bigl\{i\leq N^{*}:\theta_{i}-\tau_{i-1}<\mathrm{e}^{\frac{2}{\sigma^{2}}(Q^{\mathbf{c}}-\frac{\kappa}{2})}\Bigr\}>\frac{N^{*}}{2}\Bigr)&\leq\sum_{k=\left\lceil\frac{N^{*}}{2}\right\rceil}^{N^{*}}\sum_{(i_{1},\ldots,i_{k})}\mathbb{P}\left(\bigcap_{j=1}^{k}\left\{\theta_{i_{j}}-\tau_{i_{j}-1}<\mathrm{e}^{\frac{2}{\sigma^{2}}(Q^{\mathbf{c}}-\frac{\kappa}{2})}\right\}\right)\\ &\leq\sum_{k=\left\lceil\frac{N^{*}}{2}\right\rceil}^{N^{*}}2^{N^{*}}\Bigl(\sup_{y\in B_{\rho/4}(a)}\mathsf{P}_{y}\Bigl(\theta_{0}<\mathrm{e}^{\frac{2}{\sigma^{2}}(Q^{\mathbf{c}}-\frac{\kappa}{2})}\Bigr)\Bigr)^{k},\end{split} \tag{5.12}\]
where \((i_{1},\ldots,i_{k})\) stands for all possible choices of \(k\) numbers \(i_{1}<\cdots<i_{k}\) from the set \(\{1,\ldots,N^{*}\}\). Note that the number of such combinations can be roughly bounded by \(2^{N^{*}}\). We obtain the last inequality in (5.12) from the fact that \(Y^{\sigma}\) is a strong Markov process and from the definition of \(\theta_{0}\) in (5.5). By the exit-time result for diffusions of type \(Y^{\sigma}\) (see Proposition 3.3), for any \(\kappa<\rho/4\):
\[\sup_{y\in B_{\rho/4}(a)}\mathsf{P}_{y}\Bigl(\theta_{0}<\mathrm{e}^{\frac{2}{\sigma^{2}}(Q^{\mathbf{c}}-\frac{\kappa}{2})}\Bigr)=o_{\sigma}(1).\]
Plugging this bound into (5.12) and (5.11), we get:
\[\mathbb{P}(\tau_{\mathcal{D}_{R}^{\mathbf{e}}}^{Y,\sigma}>\tau_{N^{*}})\leq 2^{N^{*}}o_{\sigma}(1)^{\left\lceil\frac{N^{*}}{2}\right\rceil}\frac{1-o_{\sigma}(1)^{\left\lceil\frac{N^{*}}{2}\right\rceil}}{1-o_{\sigma}(1)}+o_{\sigma}(1)=o_{\sigma}(1).\]

**Step 2.** For the second part of the lemma, we use [7, Lemma 5.7.19], that is, the fact that there exists \(T_{1}>0\) large enough such that
\[\limsup_{\sigma\to 0}\frac{\sigma^{2}}{2}\log\sup_{y\in B_{\rho/2}(a)}\mathsf{P}_{y}(\tau_{0}>T_{1})<-(H_{R}^{\mathbf{e}}-Q^{\mathbf{c}}+1). \tag{5.13}\]
Consider the following equations:
\[\mathbb{P}(\exists i\leq N^{*}:\tau_{i}-\theta_{i}>T_{1})\leq\sum_{i=1}^{N^{*}}\mathbb{P}(\tau_{i}-\theta_{i}>T_{1})\leq N^{*}\sup_{y\in B_{\rho/2}(a)}\mathsf{P}_{y}(\tau_{0}>T_{1}),\]
where the last inequality is due to the Markov property of the diffusion \(Y^{\sigma}\).
Finally, by (5.13), we get:
\[\mathbb{P}(\exists i\leq N^{*}:\tau_{i}-\theta_{i}>T_{1})\leq 2\left(\mathrm{e}^{2(H_{R}^{\mathbf{e}}-Q^{\mathbf{c}}+\kappa)/\sigma^{2}}+1\right)\mathrm{e}^{-2(H_{R}^{\mathbf{e}}-Q^{\mathbf{c}}+1)/\sigma^{2}}\xrightarrow[\sigma\to 0]{}0,\]
which proves the lemma if \(\kappa\) is chosen to be small enough.

Now we are ready to prove Proposition 3.4.

Proof of Proposition 3.4.: Since, by Lemma 5.3, each time spent outside of \(B_{\rho/2}(a)\) is bounded by a constant \(T_{1}>0\) with high probability, we are interested in the composition
\[\psi_{T_{1}}\circ\phi_{t}(x)=x\mathrm{e}^{(LT_{1}-Kt)}+o_{\kappa}(1)\mathrm{e}^{LT_{1}}\leq x\mathrm{e}^{(LT_{1}-Kt)}+o_{\kappa}(1). \tag{5.14}\]
Let us introduce the following mapping:
\[\Psi_{t}(x):=x\mathrm{e}^{LT_{1}-Kt}+o_{\kappa}(1).\]
Then the results of Lemmas 5.1 and 5.2 can be rewritten in the following form: for any \(\kappa>0\) small enough, for any \(m\geq 1\) and for any \(\alpha<\kappa\):
\[\mathbb{P}\Bigg(\sup_{t\in[\tau_{m};\tau_{m+1}]}|X_{t}^{\sigma}-Y_{t}^{\sigma}|>\Psi_{\theta_{m+1}-\tau_{m}}(\alpha);\ \tau_{m+1}\leq S_{\mathsf{st}}^{\sigma}(\kappa),\ \sup_{t\leq\tau_{m}}|X_{t}^{\sigma}-Y_{t}^{\sigma}|\leq\alpha\Bigg)=0. \tag{5.15}\]
Let us now come back to the statement of the proposition. Fix some \(0<\eta<H_{R}^{\mathbf{e}}-H\). Note that, if \(\sup_{t}|X_{t}^{\sigma}-Y_{t}^{\sigma}|>\alpha\) for \(t\in[T_{\mathsf{st}}^{\sigma}(\kappa);S_{\mathsf{st}}^{\sigma}(\kappa)\wedge\mathrm{e}^{\frac{2}{\sigma^{2}}(H+\eta)}]\), then this must happen for some \(t\) belonging to one of the intervals of the form \([\tau_{k-1};\tau_{k}]\) occurring before \(S_{\mathsf{st}}^{\sigma}(\kappa)\wedge\mathrm{e}^{\frac{2}{\sigma^{2}}(H+\eta)}\). Moreover, since we know, by Lemma 5.3, that \(\tau_{N^{*}}\) happens after \(S_{\mathsf{st}}^{\sigma}(\kappa)\wedge\mathrm{e}^{\frac{2}{\sigma^{2}}(H+\eta)}\) with high probability, the number of intervals of the form \([\tau_{k-1};\tau_{k}]\) during which \(|X_{t}^{\sigma}-Y_{t}^{\sigma}|\) can surpass the level \(\alpha\) is bounded by \(N^{*}\).
Given these observations, consider the following chain of estimates:
\[\begin{split}&\mathbb{P}\left(\sup\left\{\left|X_{t}^{\sigma}-Y_{t}^{\sigma}\right|:t\in\left[T_{\mathsf{st}}^{\sigma}(\kappa);S_{\mathsf{st}}^{\sigma}(\kappa)\wedge\exp\left\{\frac{2}{\sigma^{2}}(H+\eta)\right\}\right]\right\}>\alpha\right)\\ &\quad\leq\mathbb{P}\left(\tau_{N^{*}}\leq S_{\mathsf{st}}^{\sigma}(\kappa)\wedge\exp\biggl\{\frac{2}{\sigma^{2}}(H+\eta)\biggr\}\right)+\mathbb{P}(\exists m\leq N^{*}:\tau_{m}-\theta_{m}>T_{1})\\ &\quad\quad+\mathbb{P}\Bigl(\exists k^{*}\leq N^{*}:\sup_{t\in[\tau_{k^{*}-1};\tau_{k^{*}}]}|X_{t}^{\sigma}-Y_{t}^{\sigma}|>\alpha\text{ and }\tau_{k^{*}}\leq S_{\mathsf{st}}^{\sigma}(\kappa)\wedge\exp\biggl\{\frac{2}{\sigma^{2}}(H+\eta)\biggr\},\ \forall m\leq N^{*}:\tau_{m}-\theta_{m}\leq T_{1}\Bigr)\\ &\quad=:I_{1}+I_{2}+I_{3}.\end{split} \tag{5.16}\]
For the first probability:
\[I_{1}\leq\mathbb{P}\left(\tau_{N^{*}}\leq\tau_{\mathcal{D}_{R}^{\mathbf{e}}}^{Y,\sigma}\right)+\mathbb{P}\left(\tau_{\mathcal{D}_{R}^{\mathbf{e}}}^{Y,\sigma}<\tau_{N^{*}}\leq\exp\biggl\{\frac{2}{\sigma^{2}}(H+\eta)\biggr\}\right)\leq\mathbb{P}\left(\tau_{N^{*}}\leq\tau_{\mathcal{D}_{R}^{\mathbf{e}}}^{Y,\sigma}\right)+\mathbb{P}(|X_{T_{\mathsf{st}}^{\sigma}(\kappa)}-a|>\kappa)+\sup_{y\in B_{\kappa}(a)}\mathsf{P}_{y}\left(\tau_{\mathcal{D}_{R}^{\mathbf{e}}}^{Y,\sigma}<\exp\biggl\{\frac{2}{\sigma^{2}}(H+\eta)\biggr\}\right)\xrightarrow[\sigma\to 0]{}0,\]
by Lemmas 5.3 and 3.1 and Proposition 3.3, and since \(H+\eta<H_{R}^{\mathbf{e}}\). At the same time, by Lemma 5.3, the second expression also vanishes:
\[I_{2}\xrightarrow[\sigma\to 0]{}0.\]
It remains to bound the third expression. Note that, by (5.15), \(I_{3}\) is bounded by:
\[\sum_{k^{*}=1}^{N^{*}}\mathbb{P}(\Psi_{\theta_{k^{*}}-\tau_{k^{*}-1}}\circ\cdots\circ\Psi_{\theta_{2}-\tau_{1}}\circ\Psi_{\theta_{1}-T_{\mathsf{st}}^{\sigma}(\kappa)}\circ 0>\kappa).\]
We get that expression by observing that, if there exists \(k^{*}\) such that the inequality
\[\sup_{t\in[\tau_{k^{*}};\tau_{k^{*}+1}]}|X_{t}^{\sigma}-Y_{t}^{\sigma}|>\kappa\]
holds, then, given that this difference is smaller than \(\kappa\) for times smaller than \(\tau_{k^{*}}\), we can control this difference in terms of \(\Psi_{\theta_{k}-\tau_{k-1}}\) by (5.15). Let us study the sum above.
By definition of \(\Psi_{T}\), we have:
\[\sum_{k^{*}=1}^{N^{*}}\mathbb{P}(\Psi_{\theta_{k^{*}}-\tau_{k^{*}-1}}\circ\cdots\circ\Psi_{\theta_{2}-\tau_{1}}\circ\Psi_{\theta_{1}-T_{\mathsf{st}}^{\sigma}(\kappa)}\circ 0>\kappa)\leq\sum_{k^{*}=1}^{N^{*}}\mathbb{P}\left(\sum_{i=1}^{k^{*}-1}o_{\kappa}^{i}(1)\exp\Biggl\{\sum_{j=i}^{k^{*}}(LT_{1}-K(\theta_{j}-\tau_{j-1}))\Biggr\}+o_{\kappa}^{k^{*}}(1)>\kappa\right).\]
We can continue the calculations and get the following upper bound:
\[\sum_{k^{*}=1}^{N^{*}}\mathbb{P}\left(\sup_{1\leq i\leq k^{*}-1}\exp\left\{\sum_{j=i}^{k^{*}}(LT_{1}-K(\theta_{j}-\tau_{j-1}))\right\}>\frac{\kappa-o_{\kappa}^{k^{*}}(1)}{k^{*}-1}\right)\leq\sum_{k^{*}=1}^{N^{*}}\mathbb{P}\left(\sup_{1\leq i\leq k^{*}-1}\left\{\sum_{j=i}^{k^{*}}(LT_{1}-K(\theta_{j}-\tau_{j-1}))\right\}>-\log(k^{*}-1)\right).\]
Note that, if there are more than \((k^{*}-i+1)/2\) intervals of size \((\theta_{j}-\tau_{j-1})>\exp\Bigl\{\frac{2(Q^{\mathbf{c}}-\kappa)}{\sigma^{2}}\Bigr\}\), then necessarily
\[\sum_{j=i}^{k^{*}}(LT_{1}-K(\theta_{j}-\tau_{j-1}))\leq(k^{*}-i+1)LT_{1}-\left\lceil\frac{k^{*}-i+1}{2}\right\rceil K\mathrm{e}^{\frac{2(Q^{\mathbf{c}}-\kappa)}{\sigma^{2}}}\leq(k^{*}-i+1)\Bigl(LT_{1}-\frac{K}{2}\mathrm{e}^{\frac{2(Q^{\mathbf{c}}-\kappa)}{\sigma^{2}}}\Bigr)\leq k^{*}\Bigl(LT_{1}-\frac{K}{2}\mathrm{e}^{\frac{2(Q^{\mathbf{c}}-\kappa)}{\sigma^{2}}}\Bigr).\]
Since \(k^{*}>\log(k^{*}-1)\) for any \(k^{*}\geq 1\) and since \(\Bigl(LT_{1}-\frac{K}{2}\mathrm{e}^{\frac{2(Q^{\mathbf{c}}-\kappa)}{\sigma^{2}}}\Bigr)\) is negative for small enough \(\sigma\), we get
\[\sup_{1\leq i\leq k^{*}-1}\left\{\sum_{j=i}^{k^{*}}(LT_{1}-K(\theta_{j}-\tau_{j-1}))\right\}\leq-\log(k^{*}-1),\]
which means that, on the event in question, it is impossible to have more than \((k^{*}-i+1)/2\) intervals of size \(\theta_{j}-\tau_{j-1}\geq\exp\Bigl\{\frac{2(Q^{\mathbf{c}}-\kappa)}{\sigma^{2}}\Bigr\}\). Therefore, we have:
\[\mathbb{P}\left(\sup_{1\leq i\leq k^{*}-1}\Bigl\{\sum_{j=i}^{k^{*}}(LT_{1}-K(\theta_{j}-\tau_{j-1}))\Bigr\}>-\log(k^{*}-1)\right)\leq\mathbb{P}\Biggl(\forall i\leq k^{*}-1:\#\left\{j:i\leq j\leq k^{*},\ \theta_{j}-\tau_{j-1}\leq\exp\biggl\{\frac{2(Q^{\mathbf{c}}-\kappa)}{\sigma^{2}}\biggr\}\right\}\geq\frac{k^{*}-i+1}{2}\Biggr)\leq\min_{1\leq i\leq k^{*}-1}\sum_{n=\left\lfloor\frac{k^{*}-i+1}{2}\right\rfloor}^{k^{*}}\sum_{(j_{1},\ldots,j_{n})}\mathbb{P}\left(\bigcap_{l=1}^{n}\left\{\theta_{j_{l}}-\tau_{j_{l}-1}\leq\exp\biggl\{\frac{2(Q^{\mathbf{c}}-\kappa)}{\sigma^{2}}\biggr\}\right\}\right).\]
Since the number of combinations of the form \((j_{1},\ldots,j_{n})\) can be roughly bounded by \(2^{n}\), we can deduce
\[\min_{1\leq i\leq k^{*}-1}\sum_{n=\left\lfloor\frac{k^{*}-i+1}{2}\right\rfloor}^{k^{*}}\sum_{(j_{1},\ldots,j_{n})}\mathbb{P}\left(\bigcap_{l=1}^{n}\left\{\theta_{j_{l}}-\tau_{j_{l}-1}\leq\exp\biggl\{\frac{2(Q^{\mathbf{c}}-\kappa)}{\sigma^{2}}\biggr\}\right\}\right)\leq\sum_{n=\left\lfloor\frac{k^{*}}{2}\right\rfloor}^{k^{*}}2^{n}\left(\sup_{y\in B_{\rho/4}(a)}\mathsf{P}_{y}\biggl(\theta_{0}\leq\exp\biggl\{\frac{2(Q^{\mathbf{c}}-\kappa)}{\sigma^{2}}\biggr\}\biggr)\right)^{n}\leq o_{\sigma}(1)^{\left\lfloor\frac{k^{*}+1}{2}\right\rfloor}\,\frac{1+o_{\sigma}(1)^{\left\lfloor\frac{k^{*}+1}{2}\right\rfloor}}{1-o_{\sigma}(1)},\]
by Proposition 3.3.
Combining the inequalities above, we can come back to (5.16) and conclude that \(I_{3}\) also tends to \(0\) as \(\sigma\to 0\), for each \(\kappa>0\) small enough, which finalizes the proof.

### Control of \(Y^{\sigma}\): Proof of Lemma 3.5

We can show, using large deviations techniques, that there exists a uniform upper bound on the time of convergence of \(Y^{\sigma}\) into \(B_{\rho/4}(a)\). Namely, for any \(r>0\) small enough, there exists \(\overline{T}>0\) such that
\[\sup_{y\in\mathcal{D}_{r}^{\mathbf{e}}}\mathsf{P}_{y}(Y_{\overline{T}}^{\sigma}\notin B_{\rho/4}(a))\xrightarrow[\sigma\to 0]{}0.\]
The construction of such a \(\overline{T}\) can be found in [7, Proof of Lemma 5.7.19]. Therefore, depending only on \(r\) and \(\rho\), we can choose a continuous function \(\overline{o}(\sigma)\) with \(\overline{o}(\sigma)\xrightarrow[\sigma\to 0]{}0\) such that
\[\sup_{y\in\mathcal{D}_{r}^{\mathbf{e}}}\mathsf{P}_{y}(Y_{\overline{T}}^{\sigma}\notin B_{\rho/4}(a))\leq\overline{o}(\sigma), \tag{5.17}\]
for all \(\sigma>0\) small enough. Moreover, by Proposition 3.3, we know that for any \(\delta>0\) we have \(\mathbb{P}(\tau_{\mathcal{D}_{r}^{\mathbf{e}}}^{Y,\sigma}\leq\exp\Bigl\{\frac{2(H_{r}^{\mathbf{e}}-\delta)}{\sigma^{2}}\Bigr\})\xrightarrow[\sigma\to 0]{}0\). After fixing some positive \(r>0\) and choosing \(\delta\) to be small enough such that \(H<H_{r}^{\mathbf{e}}-\delta\), we can define \(\eta>0\) as a small enough number such that \(H+\eta<H_{r}^{\mathbf{e}}-\delta\). In the following, we can restrict ourselves to those trajectories that do not leave the domain \(\mathcal{D}_{r}^{\mathbf{e}}\) before time \(\exp\Bigl\{\frac{2(H+\eta)}{\sigma^{2}}\Bigr\}<\exp\Bigl\{\frac{2(H_{r}^{\mathbf{e}}-\delta)}{\sigma^{2}}\Bigr\}\). Define the event \(A:=\{\tau_{\mathcal{D}_{r}^{\mathbf{e}}}^{Y,\sigma}>\exp\Bigl\{\frac{2(H+\eta)}{\sigma^{2}}\Bigr\}\}\). Consider the following inequalities. By Lemma 3.1 and the definition of \(Y^{\sigma}\), for any \(\kappa>0\), we can introduce \(\overline{o}_{\kappa}(\sigma)\), a modification of the function \(\overline{o}(\sigma)\) such that (5.17) still holds and, in addition, we have
\[\mathbb{P}(Y_{T_{\mathsf{st}}^{\sigma}(\kappa)}^{\sigma}\notin B_{\rho/4}(a),A)\leq\overline{o}_{\kappa}(\sigma). \tag{5.18}\]
At the same time, using the Markov property of the diffusion \(Y^{\sigma}\), for small enough \(\sigma>0\), we have
\[\mathbb{P}(Y^{\sigma}_{T^{\sigma}_{\mathsf{st}}(\kappa)+\overline{T}}\notin B_{\rho/4}(a),A)\leq\sup_{y\in B_{\rho/4}(a)}\mathsf{P}_{y}(Y^{\sigma}_{\overline{T}}\notin B_{\rho/4}(a),A)\,\mathbb{P}(Y^{\sigma}_{T^{\sigma}_{\mathsf{st}}(\kappa)}\in B_{\rho/4}(a),A)+\sup_{y\in\mathcal{D}_{r}^{\mathbf{e}}\setminus B_{\rho/4}(a)}\mathsf{P}_{y}(Y^{\sigma}_{\overline{T}}\notin B_{\rho/4}(a),A)\,\mathbb{P}(Y^{\sigma}_{T^{\sigma}_{\mathsf{st}}(\kappa)}\notin B_{\rho/4}(a),A)\leq\overline{o}_{\kappa}(\sigma)+\overline{o}_{\kappa}^{2}(\sigma),\]
by Equations (5.17) and (5.18), while \(\mathbb{P}(Y^{\sigma}_{T^{\sigma}_{\mathsf{st}}(\kappa)}\in B_{\rho/4}(a),A)\) is bounded by \(1\).
For the next step, consider:
\[\mathbb{P}(Y^{\sigma}_{T^{\sigma}_{\mathsf{st}}(\kappa)+2\overline{T}}\notin B_{\rho/4}(a),A)\leq\sup_{y\in B_{\rho/4}(a)}\mathsf{P}_{y}(Y^{\sigma}_{\overline{T}}\notin B_{\rho/4}(a),A)\,\mathbb{P}(Y^{\sigma}_{T^{\sigma}_{\mathsf{st}}(\kappa)+\overline{T}}\in B_{\rho/4}(a),A)+\sup_{y\in\mathcal{D}_{r}^{\mathbf{e}}\setminus B_{\rho/4}(a)}\mathsf{P}_{y}(Y^{\sigma}_{\overline{T}}\notin B_{\rho/4}(a),A)\,\mathbb{P}(Y^{\sigma}_{T^{\sigma}_{\mathsf{st}}(\kappa)+\overline{T}}\notin B_{\rho/4}(a),A)\leq\overline{o}_{\kappa}(\sigma)\left[1+\overline{o}_{\kappa}(\sigma)+\overline{o}_{\kappa}^{2}(\sigma)\right],\]
similarly to the previous computations. For any fixed \(\kappa>0\) and \(\sigma>0\) small enough, we can repeat this procedure \(N(\sigma):=\left\lfloor\frac{1}{\overline{T}}\exp\Bigl\{\frac{2(H+\eta)}{\sigma^{2}}\Bigr\}\right\rfloor\) times, that is, throughout the time window on which \(A\) holds. We finally get the following upper bound:
\[\sup_{n\leq N(\sigma)}\mathbb{P}(Y^{\sigma}_{T^{\sigma}_{\mathsf{st}}(\kappa)+n\overline{T}}\notin B_{\rho/4}(a),A)\leq\overline{o}_{\kappa}(\sigma)\sum_{i=0}^{N(\sigma)}\overline{o}_{\kappa}^{i}(\sigma)\leq\frac{\overline{o}_{\kappa}(\sigma)}{1-\overline{o}_{\kappa}(\sigma)}. \tag{5.19}\]
This allows us to confine \(Y^{\sigma}\), with high probability, inside the ball \(B_{\rho/4}(a)\) at points of time of the form \(T^{\sigma}_{\mathsf{st}}(\kappa)+n\overline{T}\). The last steps needed to prove the lemma are, first, to control the probability \(\mathbb{P}(Y^{\sigma}_{t}\notin B_{\rho/2}(a))\) between points of time of the form \(T^{\sigma}_{\mathsf{st}}(\kappa)+k\overline{T}\) and \(T^{\sigma}_{\mathsf{st}}(\kappa)+(k+1)\overline{T}\) and, second, to remove the event \(A\). Note that
\[\sup_{t}\mathbb{P}(Y^{\sigma}_{t}\notin B_{\rho/2}(a))\leq\mathbb{P}(\overline{A})+\sup_{t}\mathbb{P}(Y^{\sigma}_{t}\notin B_{\rho/2}(a),A),\]
where the suprema are taken with respect to \(t\in\left[T^{\sigma}_{\mathsf{st}}(\kappa);\exp\Bigl\{\frac{2(H+\eta)}{\sigma^{2}}\Bigr\}\right]\). The first probability tends to zero by Lemma 3.1 and Proposition 3.3, since
\[\mathbb{P}(\overline{A})\leq\mathbb{P}(|X^{\sigma}_{T^{\sigma}_{\mathsf{st}}(\kappa)}-a|>\kappa)+\sup_{y\in B_{\kappa}(a)}\mathsf{P}_{y}\left(\tau^{Y,\sigma}_{\mathcal{D}_{r}^{\mathbf{e}}}<\exp\biggl\{\frac{2(H_{r}^{\mathbf{e}}-\delta)}{\sigma^{2}}\biggr\}\right)\xrightarrow[\sigma\to 0]{}0.\]
For the second probability, consider the following inequalities for any \(\kappa>0\) and for any \(\sigma>0\) small enough. Using the Markov property of \(Y^{\sigma}\), we have
\[\sup_{t}\mathbb{P}(Y^{\sigma}_{t}\notin B_{\rho/2}(a),A)\leq\sup_{n\leq N(\sigma)}\mathbb{P}(Y^{\sigma}_{T^{\sigma}_{\mathsf{st}}(\kappa)+n\overline{T}}\in B_{\rho/4}(a),A)\sup_{y\in B_{\rho/4}(a)}\sup_{t\leq\overline{T}}\mathsf{P}_{y}(Y^{\sigma}_{t}\notin B_{\rho/2}(a),A)+\sup_{n\leq N(\sigma)}\mathbb{P}(Y^{\sigma}_{T^{\sigma}_{\mathsf{st}}(\kappa)+n\overline{T}}\notin B_{\rho/4}(a),A)\sup_{y\in\mathcal{D}_{r}^{\mathbf{e}}}\sup_{t\leq\overline{T}}\mathsf{P}_{y}(Y^{\sigma}_{t}\notin B_{\rho/2}(a),A).\]
Let us use (5.19) and bound by \(1\) the probabilities that are not needed for our derivation.
Finally, we get for any \(\kappa>0\) and \(\sigma>0\) small enough:
\[\sup_{t}\mathbb{P}(Y^{\sigma}_{t}\notin B_{\rho/2}(a),A)\leq\sup_{y\in B_{\rho/4}(a)}\sup_{t\leq\overline{T}}\mathsf{P}_{y}(Y^{\sigma}_{t}\notin B_{\rho/2}(a))+\frac{\overline{o}_{\kappa}(\sigma)}{1-\overline{o}_{\kappa}(\sigma)}.\]
Note that \(\{Y^{\sigma}_{t}\notin B_{\rho/2}(a)\}\subseteq\{\tau^{Y,\sigma}_{B_{\rho/2}(a)}<t\}\). Therefore, we have
\[\sup_{y\in B_{\rho/4}(a)}\sup_{t\leq\overline{T}}\mathsf{P}_{y}(Y^{\sigma}_{t}\notin B_{\rho/2}(a))\leq\sup_{y\in B_{\rho/4}(a)}\mathsf{P}_{y}(\tau^{Y,\sigma}_{B_{\rho/2}(a)}<\overline{T})\xrightarrow[\sigma\to 0]{}0,\]
by Proposition 3.3. This finally shows that we can find \(\eta>0\) such that for any \(\kappa>0\) small enough, we have
\[\sup_{t\in\left[T^{\sigma}_{\mathsf{st}}(\kappa);\exp\left\{\frac{2(H+\eta)}{\sigma^{2}}\right\}\right]}\mathbb{P}(Y^{\sigma}_{t}\notin B_{\rho/2}(a))\xrightarrow[\sigma\to 0]{}0,\]
which proves the lemma.

### Control of the law: Proof of Lemma 3.6

In this paragraph, we prove Lemma 3.6. In order to do that, we first provide and prove Lemma 5.4 below, which is a modification of a technique introduced by J. Tugaut in [23]. Let \(\xi(t):=\mathbb{W}^{2}_{2}(\mu^{\sigma}_{t};\delta_{a})\). Consider the following lemma:

**Lemma 5.4**.: _Under Assumptions A-1-A-7, there exist \(K_{1},K_{2}>0\) such that for any \(t>0\):_
\[\xi^{\prime}(t)\leq-K_{1}\xi(t)+d\sigma^{2}+K_{2}\sqrt{\mathbb{P}(X^{\sigma}_{t}\notin B_{\rho}(a))}\,.\]

Proof.: The proof is inspired by the one of [23, Lemma 4.1], although it differs in essential ways.

**Step 1.** First of all, by Itô's formula, we have
\[|X^{\sigma}_{t}-a|^{2}=|X_{0}-a|^{2}+2\sigma\int_{0}^{t}\langle X^{\sigma}_{s}-a;\mathrm{d}B_{s}\rangle-2\int_{0}^{t}\langle X^{\sigma}_{s}-a;\nabla V(X^{\sigma}_{s})\rangle\,\mathrm{d}s-2\int_{0}^{t}\langle X^{\sigma}_{s}-a;\nabla F*\mu^{\sigma}_{s}(X^{\sigma}_{s})\rangle\,\mathrm{d}s+d\sigma^{2}t.\]
Next, we take the expectation and differentiate with respect to \(t\). We get:
\[\xi^{\prime}(t)=d\sigma^{2}-2\mathbb{E}[\langle X_{t}^{\sigma}-a;\nabla V(X_{t}^{\sigma})+\nabla F*\mu_{t}^{\sigma}(X_{t}^{\sigma})\rangle]\,.\]

**Step 2.** Let us introduce \(\tilde{F}\in\mathcal{C}^{2}(\mathbb{R}^{d})\), a modification of the function \(F\) such that \(\tilde{F}\) is "convex enough" around \(0\). Namely, if \(\nabla^{2}F(0)\succeq\frac{C_{W}}{2}\mathrm{Id}\), where \(C_{W}\) is the positive constant from Definition 1.2, then we simply let \(\tilde{F}=F\). If not, we introduce a matrix \(\mathcal{M}:=-\nabla^{2}F(0)+\frac{C_{W}}{2}\mathrm{Id}\) and define \(\tilde{F}(x):=F(x)+\frac{1}{2}\left\langle x;\mathcal{M}x\right\rangle\). In the following, without loss of generality, we consider the case \(\nabla^{2}F(0)\prec\frac{C_{W}}{2}\mathrm{Id}\). Moreover, without loss of generality, we assume that \(\tilde{F}\) is locally convex inside the ball \(B_{\rho}(0)\), where \(\rho\) is the radius of convexity of the effective potential introduced in Definition 1.2. Indeed, since \(\nabla^{2}F\) is continuous, we can always choose \(\rho\) in Definition 1.2 to be small enough such that \(\nabla^{2}F(x)-\nabla^{2}F(0)\succ-\frac{C_{W}}{2}\mathrm{Id}\) for any \(x\in B_{\rho}(0)\). Note that, under these assumptions, \(\mathcal{M}\) is a positive definite matrix.
**Step 3.** By definition of \(\tilde{F}\), we have:
\[\mathbb{E}[\langle X_{t}^{\sigma}-a;\nabla F*\mu_{t}^{\sigma}(X_{t}^{\sigma})\rangle]=\mathbb{E}[\langle X_{t}^{\sigma}-a;\nabla\tilde{F}*\mu_{t}^{\sigma}(X_{t}^{\sigma})\rangle]-\mathbb{E}[\langle X_{t}^{\sigma}-a;\mathcal{M}(X_{t}^{\sigma}-\mathbb{E}[X_{t}^{\sigma}])\rangle]=\mathbb{E}[\langle X_{t}^{\sigma}-a;\nabla\tilde{F}*\mu_{t}^{\sigma}(X_{t}^{\sigma})\rangle]-\mathbb{E}[\langle X_{t}^{\sigma}-a;\mathcal{M}(X_{t}^{\sigma}-a)\rangle]-\mathbb{E}[\langle X_{t}^{\sigma}-a;\mathcal{M}(a-\mathbb{E}[X_{t}^{\sigma}])\rangle].\]
Let \(Y_{t}^{\sigma}\) be an independent copy of \(X_{t}^{\sigma}\). Since \(\mathcal{M}\) is positive definite, this gives us the following lower bound:
\[\mathbb{E}[\langle X_{t}^{\sigma}-a;\nabla F*\mu_{t}^{\sigma}(X_{t}^{\sigma})\rangle]\geq\mathbb{E}\left[\langle X_{t}^{\sigma}-a;\nabla\tilde{F}(X_{t}^{\sigma}-Y_{t}^{\sigma})\rangle\right]-\mathbb{E}[\langle X_{t}^{\sigma}-a;\mathcal{M}(X_{t}^{\sigma}-a)\rangle]. \tag{5.20}\]

**Step 4.** We now focus on the first term of the inequality above. Let us consider separately the parts of the process \(X^{\sigma}\) lying outside and inside the ball \(B_{\rho/2}(a)\). Using the polynomial growth (Assumption A-2), for some generic constant \(\mathrm{C}\), we get:
\[\mathbb{E}\Big[\langle X_{t}^{\sigma}-a;\nabla\tilde{F}(X_{t}^{\sigma}-Y_{t}^{\sigma})\rangle\Big]\geq\mathbb{E}\left[\langle X_{t}^{\sigma}-a;\nabla\tilde{F}(X_{t}^{\sigma}-Y_{t}^{\sigma})\rangle\mathds{1}_{X_{t}^{\sigma}\in B_{\rho/2}(a)}\mathds{1}_{Y_{t}^{\sigma}\in B_{\rho/2}(a)}\right]-\mathrm{C}\,\mathbb{E}\left[(1+|X_{t}^{\sigma}|^{2r})\mathds{1}_{X_{t}^{\sigma}\notin B_{\rho/2}(a)}\right]\,.\]
Since \(\tilde{F}\) is convex inside \(B_{\rho}(0)\) and the moments are uniformly bounded (Proposition 1.6), by using the Cauchy-Schwarz inequality, for any \(0<\sigma<1\), we immediately obtain the existence of a positive constant \(K>0\) such that:
\[\mathbb{E}\Big[\langle X_{t}^{\sigma}-a;\nabla\tilde{F}(X_{t}^{\sigma}-Y_{t}^{\sigma})\rangle\Big]\geq-K\sqrt{\mathbb{P}\left(X_{t}^{\sigma}\notin B_{\rho/2}(a)\right)}.\]
We plug this inequality into (5.20) and get:
\[\mathbb{E}[\langle X_{t}^{\sigma}-a;\nabla F*\mu_{t}^{\sigma}(X_{t}^{\sigma})\rangle]\geq-\mathbb{E}[\langle X_{t}^{\sigma}-a;\mathcal{M}(X_{t}^{\sigma}-a)\rangle]-K\sqrt{\mathbb{P}\left(X_{t}^{\sigma}\notin B_{\rho/2}(a)\right)}. \tag{5.21}\]

**Step 5.** We now focus on the term involving \(\nabla V\). According to Definition 1.2, for any \(x\in B_{\rho}(a)\), we have:
\[\nabla^{2}V(x)\succeq C_{W}\mathrm{Id}-\nabla^{2}F(x-a).\]
At the same time, by the definition of \(\mathcal{M}\):
\[\nabla^{2}F(0)=-\mathcal{M}+\frac{C_{W}}{2}\mathrm{Id}.\]
Since \(\nabla^{2}F\) is continuous, we can, without loss of generality, decrease \(\rho\) if necessary so that \(-\nabla^{2}F(x)\succeq-\nabla^{2}F(0)-\frac{C_{W}}{4}\mathrm{Id}\) for any \(x\in B_{\rho}(0)\).
Therefore, for any \(x\in B_{\rho}(a)\), we have:
\[\nabla^{2}V(x)\succeq\frac{C_{W}}{4}\mathrm{Id}+\mathcal{M}.\]
Using the same logic as in **Step 4**, we get:
\[\mathbb{E}[\langle X_{t}^{\sigma}-a;\nabla V(X_{t}^{\sigma})\rangle]=\mathbb{E}[\langle X_{t}^{\sigma}-a;\nabla V(X_{t}^{\sigma})\rangle\mathds{1}_{X_{t}^{\sigma}\in B_{\rho/2}(a)}]+\mathbb{E}[\langle X_{t}^{\sigma}-a;\nabla V(X_{t}^{\sigma})\rangle\mathds{1}_{X_{t}^{\sigma}\notin B_{\rho/2}(a)}]\geq\mathbb{E}\left[\langle X_{t}^{\sigma}-a;\left(\frac{C_{W}}{4}\mathrm{Id}+\mathcal{M}\right)(X_{t}^{\sigma}-a)\rangle\right]-K\sqrt{\mathbb{P}\left(X_{t}^{\sigma}\notin B_{\rho/2}(a)\right)}\,. \tag{5.22}\]

**Final step.** As a consequence, plugging (5.21) and (5.22) into the identity of **Step 1**, we get:
\[\xi^{\prime}(t)\leq d\sigma^{2}-\frac{C_{W}}{2}\xi(t)+2K\sqrt{\mathbb{P}\left(X_{t}^{\sigma}\notin B_{\rho/2}(a)\right)}\,,\]
which concludes the proof.

Now we are ready to prove Lemma 3.6 itself.

Proof of Lemma 3.6.: In order to prove the lemma, we first use Lemma 5.4, that is, the inequality
\[\xi^{\prime}(t)\leq-2\rho^{\prime}\,\xi(t)+d\sigma^{2}+K\sqrt{\mathbb{P}(X_{t}^{\sigma}\notin B_{\rho}(a))}\,.\]
After that, we use Lemma 3.5 along with Proposition 3.4 in order to show that the term \(\mathbb{P}(X_{t}^{\sigma}\notin B_{\rho}(a))\) tends to \(0\) as \(\sigma\to 0\) for any \(T_{\mathsf{st}}^{\sigma}(\kappa)\leq t\leq S_{\mathsf{st}}^{\sigma}(\kappa)\wedge\exp\Bigl\{\frac{2(H+\eta)}{\sigma^{2}}\Bigr\}\), which, in turn, means that we can choose \(\sigma\) to be small enough such that \(\xi(t)\leq\kappa^{2}\) for all such \(t\). The final step is to show that \(S_{\mathsf{st}}^{\sigma}(\kappa)\) cannot be less than or equal to \(\exp\Bigl\{\frac{2(H+\eta)}{\sigma^{2}}\Bigr\}\), since otherwise we would get a contradiction between the fact that \(\xi(S_{\mathsf{st}}^{\sigma}(\kappa))\leq\kappa^{2}\) and the definition of \(S_{\mathsf{st}}^{\sigma}(\kappa)\).

Consider the following inequalities. For any \(T_{\mathsf{st}}^{\sigma}(\kappa)\leq t\leq S_{\mathsf{st}}^{\sigma}(\kappa)\wedge\exp\Bigl\{\frac{2(H+\eta)}{\sigma^{2}}\Bigr\}\):
\[\mathbb{P}(X_{t}^{\sigma}\notin B_{\rho}(a))\leq\mathbb{P}(Y_{t}^{\sigma}\notin B_{\rho/2}(a))+\mathbb{P}(|Y_{t}^{\sigma}-X_{t}^{\sigma}|>\rho/2)=o_{\sigma}(1),\]
by Lemma 3.5 and Proposition 3.4. Thus, by Lemma 5.4, \(\xi(t)=\mathbb{W}_{2}^{2}(\mu_{t};\delta_{a})\) is bounded for any \(t\) considered above in the following way:
\[\xi^{\prime}(t)\leq-2\rho^{\prime}\xi(t)+d\sigma^{2}+Ko_{\sigma}(1).\]
Therefore, we can decrease \(\kappa\) and then \(\sigma\) to be small enough such that \(\xi(t)\leq\kappa^{2}\) for any \(T^{\sigma}_{\mathsf{st}}(\kappa)\leq t\leq S^{\sigma}_{\mathsf{st}}(\kappa)\wedge\exp\Bigl\{\frac{2(H+\eta)}{\sigma^{2}}\Bigr\}\). The last step is to note that if \(S^{\sigma}_{\mathsf{st}}(\kappa)<\exp\Bigl\{\frac{2(H+\eta)}{\sigma^{2}}\Bigr\}\), then we get a contradiction between the definition of \(S^{\sigma}_{\mathsf{st}}(\kappa)\) and the fact that \(\xi(S^{\sigma}_{\mathsf{st}}(\kappa))\leq\kappa^{2}\), which proves the lemma.
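**Remark.** For the reader's convenience, we spell out the standard Grönwall-type step behind the last argument. This is a routine computation, written under the assumption that the differential inequality above holds on the whole time window under consideration and that \(\xi\bigl(T^{\sigma}_{\mathsf{st}}(\kappa)\bigr)\) is itself of order \(\kappa^{2}\). Writing \(t_{0}:=T^{\sigma}_{\mathsf{st}}(\kappa)\) and \(c:=d\sigma^{2}+Ko_{\sigma}(1)\), we have
\[\frac{\mathrm{d}}{\mathrm{d}t}\Bigl(\mathrm{e}^{2\rho^{\prime}t}\xi(t)\Bigr)=\mathrm{e}^{2\rho^{\prime}t}\bigl(\xi^{\prime}(t)+2\rho^{\prime}\xi(t)\bigr)\leq c\,\mathrm{e}^{2\rho^{\prime}t},\]
and, integrating from \(t_{0}\) to \(t\),
\[\xi(t)\leq\xi(t_{0})\,\mathrm{e}^{-2\rho^{\prime}(t-t_{0})}+\frac{c}{2\rho^{\prime}}\,.\]
Since \(c=d\sigma^{2}+Ko_{\sigma}(1)\) vanishes as \(\sigma\to 0\), the right-hand side stays below \(\kappa^{2}\) on the whole window once \(\sigma\) is small enough, which is exactly the bound used above.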
2308.16389
The Biased Journey of MSD_AUDIO.ZIP
The equitable distribution of academic data is crucial for ensuring equal research opportunities, and ultimately further progress. Yet, due to the complexity of using the API for audio data that corresponds to the Million Song Dataset along with its misreporting (before 2016) and the discontinuation of this API (after 2016), access to this data has become restricted to those within certain affiliations that are connected peer-to-peer. In this paper, we delve into this issue, drawing insights from the experiences of 22 individuals who either attempted to access the data or played a role in its creation. With this, we hope to initiate more critical dialogue and more thoughtful consideration with regard to access privilege in the MIR community.
Haven Kim, Keunwoo Choi, Mateusz Modrzejewski, Cynthia C. S. Liem
2023-08-31T01:42:31Z
http://arxiv.org/abs/2308.16389v3
# The Biased Journey of MSD_Audio.zip

###### Abstract

The equitable distribution of academic data is crucial for ensuring equal research opportunities, and ultimately further progress. Yet, due to the complexity of using the API for audio data that corresponds to the Million Song Dataset along with its misreporting (before 2016) and the discontinuation of this API (after 2016), access to this data has become restricted to those within certain affiliations that are connected peer-to-peer. In this paper, we delve into this issue, drawing insights from the experiences of 22 individuals who either attempted to access the data or played a role in its creation. With this, we hope to initiate more critical dialogue and more thoughtful consideration with regard to access privilege in the MIR community.

Haven Kim\({}^{1}\), Keunwoo Choi\({}^{2}\), Mateusz Modrzejewski\({}^{3}\), Cynthia C. S. Liem\({}^{4}\)
\({}^{1}\) Graduate School of Culture Technology, KAIST, South Korea \({}^{2}\) Gaudio Lab, Inc., Seoul, South Korea / Prescient Design, New York, USA \({}^{3}\) Institute of Computer Science, Warsaw University of Technology, Poland \({}^{4}\) Multimedia Computing Group, Delft University of Technology, The Netherlands
[email protected], [email protected], [email protected], [email protected]

## 1 Introduction

_"It was on the school server."_
_vs._
_"We did not know whom to ask."_

The Million Song Dataset (MSD) [1] has been a cornerstone for audio-centric music information retrieval (MIR) studies, such as music auto-tagging, with its significance widely acknowledged. However, access to audio data for this dataset (MSD Audio) has been limited to peer-to-peer sharing since 2016, making it difficult to regard it as publicly available. As we will show, this limitation has led to disparities disadvantaging those affiliated with institutions that are less personally connected within the MIR community, either geographically or academically. This has jeopardized the principle of equality within the community, as well as the reproducibility and advancement of previous research [2]. In this paper, we address this issue based on anecdotal comments from one individual who contributed to the creation of the dataset, as well as 21 individuals who have attempted to access the MSD audio. We collected these comments in two ways. First, we distributed a survey via the ISMIR mailing list, and those familiar with the dataset voluntarily participated. Second, after identifying approximately sixty papers that utilized the MSD audio for their experiments, we personally contacted the authors of each paper and invited them to either complete the survey or participate in an informal interview.

## 2 From Misreported Public Availability to Institutional Divide

Initially designed as a large publicly available dataset, the MSD contains metadata for a million contemporary popular music tracks [1]. Although the dataset does not directly include audio data, the contributors, based on feedback we have received from one of them, invested significantly in aligning its metadata with another music audio set and API to enable various research opportunities that exploit information provided by both datasets. One of the benefits reported by researchers is access to 30-second audio previews of music tracks, which were matched with the MSD data by leveraging the API from 7digital.com.
Based on anecdotes from researchers who attempted to access audio previews using the API, comprehensively scraping audio previews was nearly impossible without significant financial resources or technical expertise. We assume only two organizations succeeded in this before the deactivation of the 7digital.com API. In fact, most of the organizations that we identified as having access to this data (listed in Table 1) acquired it through direct and informal peer-to-peer sharing.

\begin{table} \begin{tabular}{c c c c} \hline \hline **Name** & **Country** & **When** & **How** \\ \hline TU Wien & Austria & 2011 & 7Digital API \\ Ghent University & Belgium & 2011-2013 & 7Digital API \\ PNV & United States & 2011-2013 & Peer-to-peer sharing \\ Columbia University & United States & 2014 & Peer-to-peer sharing \\ University of Oxford & United Kingdom & 2014-2015 & Peer-to-peer sharing \\ University of Edinburgh & United Kingdom & 2014-2015 & Peer-to-peer sharing \\ UPF & Spain & 2014-2015 & Peer-to-peer sharing \\ Academia Sinica & Taiwan & 2015 & Peer-to-peer sharing \\ QMUL & United Kingdom & 2015 & Peer-to-peer sharing \\ Johns Hopkins University & United States & 2016 & Peer-to-peer sharing \\ TU Delft & Netherlands & 2016 & Peer-to-peer sharing \\ KAIST & South Korea & 2016 & Peer-to-peer sharing \\ Deezer & France & 2011-2016 & U/I \\ JKU & Austria & U/I & U/I \\ \hline \hline \end{tabular} \end{table} Table 1: Organizations that have accessed MSD Audio by 2016, including the approximate years and methods of access. U/I indicates unidentified.

Those who obtained the data through direct sharing were, naturally, those connected peer-to-peer with data-owning organizations. MSD Audio is too large (about 700 GB) to share easily through online transfer. As a result, geographical proximity seems to have played a role. Yet, none of those who acquired the data through peer-to-peer sharing and utilized it publicly acknowledged the practical unavailability of this dataset. We received comments that those who acquired the dataset through peer-to-peer sharing reported in their papers that they had obtained the data via web scraping. This misinformation further confused researchers outside of major organizations within the MIR community, leading to numerous unsuccessful attempts to obtain the data by web scraping. This unequal accessibility to MSD Audio has widened the gap between majorities and minorities within the community.

## 3 Anecdotal Analysis: Unequal Accessibility

In this section, to address the issue of unequal accessibility, we analyze anecdotal comments from 21 individuals who have tried to access MSD Audio, collected through surveys or interviews. We asked about the methods, results, and approximate timings of their attempts, as well as their affiliations and professions at the time of their attempts. The peer-to-peer sharing instances that we identified primarily occurred between organizations that owned the data and those closely connected to them, either geographically or academically, with the most recent instance occurring in 2016. All individuals who attempted to obtain the dataset in 2017 or later and succeeded (5 individuals) were affiliated with organizations that owned the data. Conversely, all those from organizations without the data who tried in 2017 or later (6 individuals) experienced at least one unsuccessful attempt; two of these individuals ultimately abandoned the research project they had initially planned.
Four of those who were unsuccessful mentioned that they had even tried asking individuals outside their organizations, but to no avail. One of them specifically noted that they currently lack access to the dataset because they "do not know whom to ask". This institutional divide seems to lead to inequality among industrial organizations as well. One respondent evidenced this by reporting that they had no access to this data while employed at an organization with fewer than 50 employees, but immediately obtained access upon moving to an organization with more than 500 employees. Our further investigation into the unsuccessful attempts supports the existence of institutional divides along with geographical and prestige biases. Of the 7 unsuccessful peer-to-peer sharing attempts we identified from the anecdotes we collected, five came from relatively less active organizations within the MIR community, without first- or last-author papers at the ISMIR conference in the past three years. The remaining two were non-Western organizations. On the other hand, all of the organizations we identified as owning access to this data are notably either prominent within the MIR community, with at least five papers accepted at the ISMIR conference in the past three years, or Western-based, as shown in Table 1. In addition to organizational prestige, individual research experience also appears to influence the success of requests, favoring experienced researchers over novices. For instance, we observed two separate attempts made by those from the same organization, where an undergraduate student was unable to obtain the data despite the request, whereas a faculty member's request was successful.

## 4 Conclusions

In this paper, we delved into the unequal accessibility issue concerning MSD Audio, which divides the MIR community into those who can access the data and those who cannot. This problem arises not only from the complexity of using the API and its eventual discontinuation, but also from the misinformation provided by authors who claimed to have acquired the data through web scraping when, in fact, they obtained it via peer-to-peer sharing. This has disproportionately affected researchers who are not closely connected to the data-owning organizations and those with limited research experience, sidelining minorities within the MIR community. Our data on those who had access to MSD Audio are quite comprehensive, as there are few papers based on the dataset in its early years. However, the collected failure cases are far from comprehensive; our survey was distributed via the ISMIR mailing list or through personal contact with authors who utilized the data for their papers, which already sets a strong survivorship bias. One of the authors has received many inquiries about MSD Audio so far, but none of the inquirers participated in our survey. It is indeed not possible to discover all the hidden, doomed attempts at obtaining MSD Audio. This situation, ongoing since 2011, challenges us to imagine how much MIR research could have been done if MSD Audio had been available more widely and equally, and how many potential MIR researchers could have had a more active and successful research career. Yet we have been overlooking this problem, rather than facing it, because it is only a severe problem to those who do not have a voice in the research community.
By presenting this survey and these anecdotes, we advocate for more inclusive and transparent data accessibility and research opportunities, and hope to cultivate a more diverse, equitable, and productive MIR research landscape.
2309.11903
Full mesh networking technology with peer to peer grid topology based on variable parameter full dimensional space
The continuous development of computer network technology has accelerated the pace of informatization, and at the same time, network security issues are becoming increasingly prominent. Networking technology with different network topologies is one of the important means to solve network security problems. The security of VPN is based on the division of geographical boundaries, but the granularity is relatively coarse, making it difficult to cope with dynamic changes in the security situation. Zero trust networks solve the VPN problem through peer-to-peer authorization and continuous verification, but most of the solutions use a central proxy device, resulting in the central node becoming the bottleneck of the network. This paper puts forward a hard NAT traversal formula based on the birthday paradox, which solves the long-standing problem of hard NAT traversal. A full mesh networking mechanism with a variable parameter full-dimensional spatial peer-to-peer grid topology is proposed, which covers all types of networking schemes and achieves peer-to-peer resource interconnection at both the methodological and engineering levels.
Wenqiang Song, Chuan He, Zhaoyang Xie, Yuanyuan Chai
2023-09-21T09:14:31Z
http://arxiv.org/abs/2309.11903v1
Full Mesh Networking Technology with Peer to Peer Grid Topology Based on Variable Parameter Full Dimensional Space

###### Abstract

The continuous development of computer network technology has accelerated the pace of informatization, and at the same time, network security issues are becoming increasingly prominent. Networking technology with different network topologies is one of the important means to solve network security problems. The security of VPN is based on the division of geographical boundaries, but the granularity is relatively coarse, making it difficult to cope with dynamic changes in the security situation. Zero trust networks solve the VPN problem through peer-to-peer authorization and continuous verification, but most of the solutions use a central proxy device, resulting in the central node becoming the bottleneck of the network. This paper puts forward a hard NAT traversal formula based on the birthday paradox, which solves the long-standing problem of hard NAT traversal. A full mesh networking mechanism with a variable parameter full-dimensional spatial peer-to-peer grid topology is proposed, which covers all types of networking schemes and achieves peer-to-peer resource interconnection at both the methodological and engineering levels.

Zero trust, Birthday paradox, hard NAT, port scanning, NAT traversal, full mesh networking technology

## 1 Introduction

Network security is an important branch of the IT industry, with the goal of protecting network systems, data, and services from unauthorized access and attack[1, 2]. With the spread of the internet and the acceleration of digitalization, the importance of network security is becoming increasingly prominent. In the early days, network security mainly focused on preventing the intrusion of malicious software (such as viruses and worms[3]). However, as the means of network attack have become increasingly complex, the scope of network security has expanded to include preventing data breaches, protecting user privacy, and preventing identity theft, among other aspects.

A Virtual Private Network (VPN) is a network security technology that creates encrypted network connections, allowing users to securely access remote or public networks. The advent of VPNs can be traced back to the 1990s[4], when businesses began seeking a solution to connect remote offices and employees securely and economically. VPNs create a virtual private network on a public network (such as the internet), allowing data to be transmitted in an encrypted tunnel, thereby ensuring the security and privacy of the data.

Zero Trust is a network security model whose core concept is "never trust, always verify". The emergence of this model is a reflection on the traditional "firewall" security model. In the traditional model, companies usually set up firewalls at the boundaries of their networks, and once users pass the firewall, they can access all resources within the network. However, this model overlooks internal threats and attackers who have already compromised the network. The Zero Trust model proposes that every access request should be verified, regardless of whether users are inside or outside the network and regardless of their identity. The proposition and implementation of this model have significant implications for enhancing network security. Figure 1 shows the most typical Google BeyondCorp Zero Trust topology[5]. Currently, the vast majority of Zero Trust solutions in the industry adopt this architecture or its variants.
The advantage of this architecture is that it can realize all the concepts of Zero Trust. However, it depends heavily on the enormous capability of Google's cloud infrastructure and assumes that all services are web-based, and not all companies meet these conditions. Therefore, we propose a Full Mesh networking solution to implement the principles of Zero Trust without these constraints. Since the vast majority of network nodes, such as PCs and mobile devices, operate most of the time in an environment without a public IP, that is, behind NAT routers and firewalls, the NAT traversal problem must be solved first to achieve Full Mesh networking.

The goal of networking lies in connecting various resources (such as people, machines, and data), enabling them to communicate and share information with each other, thereby improving work efficiency and collaborative capabilities. Through networking, various functions such as remote access, remote control, file sharing, and video conferencing can be achieved, meeting communication needs in different scenarios. The purpose of network security is to improve the security and reliability of the network, protect internal resources from unauthorized access and malicious attacks, and ensure the stability and availability of the network. Therefore, networking and network security technologies are of great significance in the modern information society and are among the important means of promoting informatization and economic and social progress.

VPN and Zero Trust networking[6] are the two existing networking modes, each with its own characteristics. The security of VPN is based on geographical boundaries, but the granularity is relatively coarse, making it difficult to cope with dynamic changes in the security situation. Zero Trust networks solve the problems of VPN through end-to-end authorization and continuous verification, but most solutions adopt centralized proxy devices, making the central node a bottleneck and single point of failure in the network. Another possible implementation is peer-to-peer full mesh communication, but it requires solving the NAT traversal problem first.

Figure 1: Google BeyondCorp zero trust architecture

This paper aims to solve the core problem in Zero Trust networking. As a prerequisite for implementing full mesh networking, a hard NAT traversal formula based on the birthday paradox is put forward, which solves the long-standing hard NAT traversal problem[7]. In addition, the full mesh networking technology based on the variable parameter full dimensional spatial peer-to-peer grid topology proposed in this article can also solve the problems and drawbacks of Zero Trust networking, achieve peer-to-peer resource interconnection, and meet the network communication requirements of full mesh networking, covering all types of networking solutions such as site-to-site networking.

## 2 Networking Requirements

Network Address Translation (NAT) is an address translation technology that can modify the IP address in the header of an IP datagram to another IP address and achieve address reuse by using the translated port number. NAT is widely used as a transitional technology to alleviate the exhaustion of IPv4 public network addresses, due to its simple implementation. However, NAT also poses a potential security risk, as it can make it difficult to trace the origin of network traffic and can be used to hide malicious activities.
Therefore, it is important to implement appropriate security measures, such as firewalls and intrusion detection systems, to ensure the security of networks that use NAT.

### 2.1 VPN networking approach

A VPN (Virtual Private Network) is typically used to securely connect networks between different locations, allowing access to resources in a private network from a remote location. Common use cases include employees working from home, establishing secure connections between companies and partners to share resources, and protecting network traffic from eavesdropping when using public Wi-Fi networks. To establish a VPN connection, a VPN server, which is typically located within the private network, and a VPN client, which can run on personal computers, mobile phones, or tablets, are required.

### 2.2 Zero trust networking approach

Zero Trust Network (ZTN) is a network architecture designed to protect network resources by implementing strict identity verification and access control. In the ZTN architecture, any user or device must pass multiple layers of identity verification and security checks before accessing network resources. Compared to the traditional VPN network architecture[8], the advantage of ZTN is that it can effectively prevent malicious attackers from using authorized users or devices to launch attacks. For example, even if a malicious attacker obtains the account credentials of an authorized user, they still cannot access network resources, because they must pass other security checks to obtain access. The application scenarios of ZTN typically include enterprise internal networks that protect sensitive data, multi-tenant environments of public cloud service providers, and networks of government and military organizations.

### 2.3 The difference between zero trust networking and VPN networking

Zero Trust and VPN are both technologies used to establish secure connections between two computers. However, they have some significant differences:
1. Zero Trust is a cloud-based architecture that allows data exchange between different organizations without a common trust basis. In contrast, VPN is a technology used to establish a secure network connection between two organizations, usually assuming some form of trust relationship between them.
2. Zero Trust typically uses encryption to protect the privacy and integrity of data, and uses authentication and authorization techniques to ensure that only authorized users can access data. VPN also uses encryption to protect data, but it additionally uses VPN tunneling protocols to hide users' internet activity.
3. Zero Trust architecture is typically used to share data between different organizations, such as in healthcare, financial services, or government agencies. VPN is typically used to connect remote users to enterprise networks or to connect two enterprise networks together.
4. Zero Trust architecture typically requires specialized software or hardware to implement, while VPN can be implemented using software or hardware or through third-party services.

### 2.4 Issues with existing networking methods

The security of VPN is based on the division of geographical boundaries (intranet and internet), which has a relatively coarse granularity. Once inside the VPN boundary, access to the entire system is allowed. The security authentication of VPN is static and cannot respond well to dynamic changes in security situations[9]. Zero Trust solves the problems of VPN by implementing end-to-end authorization and continuous verification.
However, most Zero Trust solutions typically use a centralized proxy device to proxy traffic to the accessed services. Although this solves VPN's inherent problems with boundary division and continuous verification, the centralized topology of the proxy device causes it to become a bottleneck and a single point of failure in the network. Another possible implementation of Zero Trust[10] is for all communication nodes to implement point-to-point full mesh communication with each other, which can overcome the problems of VPN and avoid the typical issues of the centralized topology in Zero Trust solutions. However, due to the existence of a large number of NAT devices in current networks, the problem of NAT traversal needs to be solved first to achieve truly feasible full mesh communication.

**Contribution:**
1. NAT traversal has been a long-standing problem in the industry without a perfect solution. This article proposes a hard NAT traversal formula based on the birthday paradox.
2. This article proposes an implementation solution for SDP (Software Defined Perimeter) based on the birthday paradox theory, named full mesh networking based on the variable parameter full dimensional spatial peer-to-peer grid topology, which meets the requirements of full mesh networking.

In this section, we introduced the networking requirements and two different networking methods: VPN and Zero Trust networking. Zero Trust networking solves the problems of VPN by implementing end-to-end authorization and continuous verification, but it has the bottleneck and single-point-of-failure problem caused by centralized proxy devices. Therefore, we propose a hard NAT traversal formula based on the birthday paradox and a full mesh networking SDP solution to meet all the requirements of networking.

## 3 The Hard NAT Traversal Problem and Formula

### 3.1 The hard NAT problem

The traversal problem occurs when two private networks want to communicate over the Internet and the NAT devices are unable to properly route the packets to the correct destination because both sides are using private IP addresses. The most common scenario for this problem is when the two devices are on different private networks and cannot communicate directly because their private IP addresses cannot be properly forwarded to each other over the Internet. There are two types of traversal problems: soft NAT traversal and hard NAT traversal.

Soft NAT traversal problems are usually caused by a NAT device that is not properly configured or does not have UPnP turned on. UPnP (Universal Plug and Play) is a universal network protocol that allows devices to automatically configure port mapping rules so that ports can be opened and closed automatically when needed. If a NAT device does not have UPnP enabled or does not configure the port mapping rules correctly, this can lead to soft NAT traversal problems.

Hard NAT refers to a stricter form of Network Address Translation, also known as Symmetric NAT. In hard NAT, the NAT device assigns each connection a unique port number that can only be used for that connection and cannot be used by any other connection. This assignment results in external devices not being able to directly access devices in the private network, which leads to hard NAT traversal problems. By using asymmetric port mapping, hard NAT makes it impossible for external devices to directly access devices on the private network.
When a device on a private network wants to communicate with an external device, it usually needs to use special techniques and protocols, such as STUN, TURN, ICE, etc., to solve the hard NAT traversal problem. In easy NAT, the NAT device assigns each internal device a public IP address and port number that is unique to that device, and external devices can access that device through that address and port number. Compared to hard NAT, easy NAT uses a relatively loose port mapping method, which makes it easier for external devices to access devices on the private network. When a device initiates a connection to the outside, the NAT device uses the public IP address and port number of this device to map this connection, and when an external device initiates a connection to this device, the NAT device decides which device to forward this connection to based on the destination IP address and port number of the connection. Because of this loose mapping, easy NAT also has some security issues, and the appropriate security configuration should be considered.

We call hard NAT and its variants "Endpoint-Dependent Mapping" (EDM). Hard NAT is a big problem for us: as long as there is such a device in the path, the previous scheme will not work. In addition, certain networks block NAT traversal outright, which has a much greater impact than hard NAT itself. For example, we found that the UC Berkeley guest WiFi blocks all outgoing UDP traffic except DNS traffic. No matter what NAT hacks are used, there is no way to get around this block, so we need a reliable fallback mechanism.

This section discussed the NAT traversal problem, including soft NAT traversal and hard NAT traversal, and two types of NAT translation methods, easy NAT and hard NAT. For the hard NAT traversal problem, the use of techniques and protocols such as STUN, TURN, and ICE is proposed; however, networks that block NAT traversal altogether require a reliable fallback mechanism.

### 3.2 The hard NAT traversal formula based on the birthday paradox

The main problem is that the easy NAT side does not know which address (IP and port combination) to send data to on the hard NAT side, yet data must be sent to the hard NAT side to open the firewall on that side. As Figure 2 shows, a specific IP and port are already known on the easy NAT side, because they were discovered in advance via STUN. Assuming for a moment that the IP address is correct, it is the port that remains to be found. There are 65535 possible port numbers. We can scan them one by one and, scanning 100 ports per second, find the correct port number in roughly ten minutes at worst. This can solve the problem, but not very cleverly. And it looks so much like port scanning to IDS software (because that is what we are actually doing) that it is basically going to be blocked. Using the birthday paradox, we can do much better than port scanning! Instead of scanning 65535 possible ports one by one, we can open 256 ports at once on the hard NAT side by establishing 256 sockets that send data to the easy NAT side, and let the easy NAT side randomly probe the target ports. The birthday paradox refers to the fact that, in a group of only 23 people, the probability that at least two of them share a birthday is greater than 50%.
For example, in an elementary school class of 30 students, the probability that two students share a birthday is about 70%; in a large class of 60 students, it is greater than 99%. The birthday paradox is not a paradox in the sense of creating a logical contradiction; it is called one because the mathematical fact is so counterintuitive. The mathematics behind the birthday paradox has also been applied to the design of a cryptographic attack method, the birthday attack. In the hard NAT traversal problem, the A side is easy NAT and the B side is hard NAT. The port of A is fixed (a single, known port), while B opens 256 ports (whose port numbers are impossible to know), which we can probe a total of m (m = t·R) times, where t is the scan time and R is the scan frequency. For example, if side A opens port 18888 (fixed and known) and side B opens 256 ports (with unknown port numbers), then one probe (say of port 1025 on side B) can simply be viewed as an attempt to connect A:18888 -> B:1025 to see whether 1025 is among the 256 ports opened on the B side. If we consider the usable port numbers to run from 1025 to 65535, the problem can be simplified as follows: there are (65535-1024) balls in a pool, of which B are black, and we want the probability of drawing at least one black ball in A draws. Here B is the number of ports opened on the B side (for example 256), and A is the number of probes made by the A side. Based on the birthday paradox, the hard NAT traversal formula (1) is as follows: \[\boldsymbol{P}=1-\prod_{i=0}^{A-1}\frac{K-B-i}{K-i}\qquad(1)\] where P is the probability of a successful traversal, the constant K is the total number of available ports (from 1025 to 65535), A is the number of probes on the A side (i.e., scan time × scan frequency: t·R), and B is the number of open ports on the B side (e.g., 256).

Figure 2: easy NAT and hard NAT traversal

Table 1 shows the success probability based on the birthday paradox, assuming 256 ports opened on the hard NAT side and a probing rate of 100 times/s. According to formula (1) proposed in this paper, with the number of ports fixed at 256, the success probability exceeds 98% once the number of probes reaches 1000 or more. The same conclusion is observed in practical applications. This traversal formula therefore provides a theoretical basis for the proposed full mesh networking scheme. This section described how to use the birthday paradox to solve the difficult problem of determining the destination port when exchanging data between easy NAT and hard NAT. By opening multiple ports simultaneously and probing randomly, the birthday paradox narrows down the range of possible ports until the target port is hit. The traversal formula proposed in this article computes the probability of a successful traversal: when the number of random probes exceeds 1000, the success probability reaches over 98%, providing the theoretical basis for the networking scheme in the next section. 
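To make formula (1) concrete, the following is a minimal Python sketch (ours, not from the original text) that evaluates the traversal success probability; the defaults encode the assumptions used above, i.e., 256 open ports on the hard NAT side and usable ports 1025-65535:

```python
def traversal_success_probability(A: int, B: int = 256, K: int = 65535 - 1024) -> float:
    """Formula (1): probability that at least one of A random probes hits
    one of the B ports opened on the hard-NAT side, out of K usable ports."""
    p_all_miss = 1.0
    for i in range(A):
        p_all_miss *= (K - B - i) / (K - i)  # the i-th probe misses all B open ports
    return 1.0 - p_all_miss

# Reproduce Table 1 (256 open ports, probing at 100 times/s):
for t in (5, 10, 15, 20):
    print(f"{t:2d} s, {100 * t} probes: {traversal_success_probability(100 * t):.5%}")
```

Running this loop reproduces the success probabilities listed in Table 1 (e.g., about 86.41% for 500 probes and 98.18% for 1000 probes).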
## 4 Full Mesh Networking Technology Based on Variable Parameter Full Dimensional Space

\begin{table} \begin{tabular}{|p{85.4pt}|p{85.4pt}|p{85.4pt}|p{85.4pt}|} \hline **Cost time of probes** & **Probe times** & **Success probability** & **Failure probability** \\ \hline 5s & 500 & 86.41018\% & 13.58982\% \\ \hline 10s & 1000 & 98.18191\% & 1.81809\% \\ \hline 15s & 1500 & 99.76061\% & 0.23939\% \\ \hline 20s & 2000 & 99.96899\% & 0.03101\% \\ \hline \end{tabular} \end{table} Table 1: Traversal success probability versus number of probes (256 ports opened on the hard NAT side, probing at 100 times/s).

Figure 3: The probability of a successful connection as a function of the number of random probes (with 128, 256, and 512 ports opened on the hard NAT side, respectively).

This section discusses the full mesh networking schemes based on variable parameter full dimensional space that can be realized using the birthday-paradox-based NAT traversal technology, taking into account NAT traversal capability, gateway requirements, and encryption requirements. These networking schemes cover essentially all existing VPN application scenarios and offer high flexibility and scalability.

### Preliminaries

The NAT traversal formula based on the birthday paradox can be abbreviated as formula (2):

\[P=det(t,R,n)\qquad(2)\]

where P represents the total traversal rate, t the total scanning time, R the scanning rate (times/s), and n the total number of ports scanned; n is usually taken as 256. According to the previous conclusion, with R limited to 100 times per second, P reaches 50% within t = 2 seconds and exceeds 99.9% before t reaches 20 seconds. Based on the NAT traversal capability P, the full mesh networking scheme based on variable parameter full dimensional space can be summarized as formula (3):

\[T=hom(G,P,\theta)\qquad(3)\]

where G denotes the gateway (0 means a network without a gateway, 1 a network with a gateway), P is the NAT traversal outcome (0 means unsuccessful traversal, 1 successful traversal), and \(\theta\) indicates end-2-end encryption (0 means end-2-end encryption is not in place, 1 means it is in place).

### Full Mesh Networking Scheme

The full mesh networking technology based on variable parameter full dimensional space proposed in this article comprehensively covers the following four networking schemes at both the theoretical and engineering levels. The details are shown below.

#### 4.2.1 Point-2-Site

According to formula (3), when G=1, P=0, and \(\theta\)=1, a Point-2-Site networking scheme is formed.

Figure 4: Figure of Point-2-Site

As Figure 4 shows, on the basis of full mesh networking, if the nodes in a local area network cannot all access the trusted network through their own SDP Agents, access can still be achieved by using one SDP Agent as a subnet agent. The cost is that the communication between the LAN nodes and the SDP Agent is not protected by encryption.

#### 4.2.2 Site-2-Site

According to formula (3), when G=1, P=0, and \(\theta\)=0, a Site-2-Site scheme is formed. As Figure 5 shows, this topology is the typical VPN connection topology, and full mesh SDP can be achieved through the subnet forwarding capability. If multiple local area networks need to be connected, the site mesh scheme in the next section can serve as a reference.

#### 4.2.3 Site Mesh

According to formula (3), when G=1, P=1, and \(\theta\)=1, a Site Mesh scheme is formed. 
As Figure 6 shows, on the basis of the Site-2-Site scheme, the forwarding ability of the full mesh SDP agent can be utilized to achieve mesh interconnection between multiple sites. The site mesh capability surpasses ordinary VPN networking, allowing multiple sites to interconnect on their own.

#### 4.2.4 Full Mesh

According to formula (3), when G=0, P=1, and \(\theta\)=1, a Full Mesh scheme is formed.

Figure 5: Figure of Site-2-Site

Figure 6: Figure of Site Mesh

The full mesh scheme is the most ideal network form and meets all zero trust requirements, as shown in Figure 7. Each computing node (physical or virtual) joins a peer-to-peer fully connected network through an SDP agent; the connection between any two points is encrypted, and access permissions are managed individually for each connection. Full mesh networking has the following advantages:

1. High reliability: Since each device in the network is directly connected to every other device, there are multiple data transmission paths. If a device or link fails, other paths remain available for data transmission, making the network highly reliable.
2. High bandwidth: The network can offer high bandwidth because each device is directly connected to every other device, so data need not be routed through intermediate devices, which saves time and improves overall performance.
3. Easy setup: No complex routing protocols or network infrastructure are required; all that is needed is a direct connection to each device.
4. Scalability: It is easy to add new devices to the network; all that is needed is a direct connection between the new device and every other device in the network.
5. Flexibility: The network is very flexible because it can be easily modified to adapt to changes. For example, if a device needs to be removed from the network, it is simply disconnected, and the other devices continue to operate unaffected.

## 5 Conclusion

We first discussed networking requirements and two different types of network configurations: VPN and Zero Trust networking. Zero Trust networking solves the issues of VPN through end-to-end authorization and continuous verification, but it suffers from bottlenecks and single points of failure due to centralized proxy devices. Therefore, we have proposed a hard-NAT traversal formula based on the birthday paradox and Mirage SDP solutions that meet the requirements of full mesh networking. When discussing the NAT traversal issue, we introduced the concepts of soft-NAT and hard-NAT traversal and compared the two NAT translation methods, easy NAT and hard NAT. For the hard-NAT traversal problem, we suggested using technologies and protocols such as STUN, TURN, and ICE, and also proposed a fallback mechanism to cope with networks that block NAT traversal entirely. Next, we detailed the traversal technology based on the birthday paradox, which solves the problem of determining the target port for data communication between easy NAT and hard NAT. By opening multiple ports simultaneously, the birthday paradox is used to narrow down the range of possible ports, ultimately determining the target port. 
Figure 7: Figure of Full Mesh

The formula we proposed calculates the probability of a successful traversal, with a success rate of over 98% when the number of random probes exceeds 1000, providing a theoretical foundation for the proposed networking scheme. Finally, we discussed the variable parameter full-dimensional peer-to-peer networking schemes that can be achieved using the birthday-paradox-based traversal technology. These networking schemes cover essentially all existing VPN application scenarios and have high flexibility and scalability. Through this introduction, readers can better understand networking requirements, the NAT traversal issue, and the traversal technology based on the birthday paradox, and thus better address real network application scenarios.

## Acknowledgements

Thanks to the teachers and researchers of the Research on Satellite Communication Security System project team of the Jilin Science and Technology Office. Thanks to everyone!
2307.16518
Continuous-Time Channel Prediction Based on Tensor Neural Ordinary Differential Equation
Channel prediction is critical to address the channel aging issue in mobile scenarios. Existing channel prediction techniques are mainly designed for discrete channel prediction, which can only predict the future channel in a fixed time slot per frame, while the other intra-frame channels are usually recovered by interpolation. However, these approaches suffer from a serious interpolation loss, especially for mobile millimeter wave communications. To solve this challenging problem, we propose a tensor neural ordinary differential equation (TN-ODE) based continuous-time channel prediction scheme to realize the direct prediction of intra-frame channels. Specifically, inspired by the recently developed continuous mapping model named neural ODE in the field of machine learning, we first utilize the neural ODE model to predict future continuous-time channels. To improve the channel prediction accuracy and reduce computational complexity, we then propose the TN-ODE scheme to learn the structural characteristics of the high-dimensional channel by low dimensional learnable transform. Simulation results show that the proposed scheme is able to achieve higher intra-frame channel prediction accuracy than existing schemes.
Mingyao Cui, Hao Jiang, Yuhao Chen, Yang Du, Linglong Dai
2023-07-31T09:33:23Z
http://arxiv.org/abs/2307.16518v1
# Continuous-Time Channel Prediction Based on Tensor Neural Ordinary Differential Equation ###### Abstract Channel prediction is critical to address the channel aging issue in mobile scenarios. Existing channel prediction techniques are mainly designed for discrete channel prediction, which can only predict the future channel in a fixed time slot per frame, while the other intra-frame channels are usually recovered by interpolation. However, these approaches suffer from a serious interpolation loss, especially for mobile millimeter-wave communications. To solve this challenging problem, we propose a tensor neural ordinary differential equation (TN-ODE) based continuous-time channel prediction scheme to realize the direct prediction of intra-frame channels. Specifically, inspired by the recently developed continuous mapping model named neural ODE in the field of machine learning, we first utilize the neural ODE model to predict future continuous-time channels. To improve the channel prediction accuracy and reduce computational complexity, we then propose the TN-ODE scheme to learn the structural characteristics of the high-dimensional channel by low-dimensional learnable transform. Simulation results show that the proposed scheme is able to achieve higher intra-frame channel prediction accuracy than existing schemes. Channel prediction; millimeter-wave communications; massive multiple-input-multiple-output; ordinary differential equation. ## I Introduction Millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) has been a critical technology for boosting data transmission speed in 5G communication networks [1]. By deploying a large number of antennas at the base station (BS), massive MIMO can achieve several orders of magnitude improvements in beamforming gain [2]. To fully realize this potential, accurate channel state information (CSI) is required at the BS for the efficient design of precoding. According to the current 5G standard [3], each frame in 5G wireless communication systems contains multiple time slots, while only the first time slot of each frame is used to estimate the CSI through the predefined sounding reference signal (SRS). Then, the subsequent time slots within the same frame perform precoding design according to the CSI estimated in the first slot. However, since the channel is time varying in mobile scenarios, the CSI in the first time slot may significantly differ from the actual channels in the subsequent time slots. This is called channel aging in the literature [4]. Specifically, the channel coherence time is inversely proportional to the carrier frequency and user mobile speed, which could be shorter than the channel estimation period or SRS period in mobile scenarios. For example, for the case of 28 GHz carrier frequency and 60 km/h user mobile speed, the channel coherence time is about 0.32 ms, which is smaller than the shortest SRS period of 0.625 ms defined by the 5G standard [3]. In this case, the estimated CSI in the first time slot becomes outdated, which could cause a serious spectral efficiency loss of about 30% [5]. Therefore, the channel aging problem has to be carefully addressed to enable fast user mobility in mmWave massive MIMO systems. ### _Prior Works_ To address the channel aging problem, channel prediction techniques have been widely studied to predict the future channels by exploring the channel correlation in the time domain [5, 6, 7, 8, 9, 10, 11, 12]. 
There are two typical categories of channel prediction techniques, i.e., model-based and data-based channel prediction. For the first category [5, 6, 7], some classical models are utilized to characterize the time-varying channels, such as the linear extrapolation model [5], the auto-regressive (AR) model [6], and the spatio-temporal auto-regressive (ST-AR) model [7]. However, since actual mobile channels simultaneously suffer from the multi-path effect and the Doppler effect, the time-varying characteristics of actual channels are complicated. Thus, for this category of channel prediction techniques, the fixed models struggle to match the time-varying channels, resulting in unreliable performance in mobile scenarios. To deal with this problem, data-based channel prediction techniques have recently been proposed to match the time-varying channels in a data-driven way [8, 9, 10, 11, 12], since neural network models are able to learn intrinsic, complicated features from data, which can be exploited to improve the channel prediction accuracy. Specifically, in [8], a fully-connected (FC) network was utilized to predict future channels according to the input of high-dimensional historical channels. Then, to decrease the training complexity caused by high-dimensional historical inputs, recurrent neural network (RNN)-like architectures, such as RNN, gated recurrent unit (GRU), and long short-term memory (LSTM), were trained to iteratively process historical channels [9, 10, 11]. Furthermore, to avoid the error propagation problem of sequentially predicting future channels, the transformer model was used in [12] to predict future channels in parallel. However, the existing channel prediction techniques [5, 6, 7, 8, 9, 10, 11, 12] were designed for discrete channel prediction, and they fail to directly predict the channels in all time slots of each frame. To be more specific, as discussed before, the channels can only be estimated in the first time slot of each frame through the transmission of SRS. Based on these discretely estimated historical channels, future channels with the same time interval are predicted by existing channel prediction techniques. Then, the channels in the other time slots between two adjacent SRSs can be recovered by using interpolation methods. Unfortunately, these discrete channel prediction techniques suffer from a serious interpolation loss in mobile scenarios. One possible solution is continuous-time channel prediction for all time slots of each frame. Unfortunately, to the best of our knowledge, none of the existing methods can achieve continuous-time channel prediction. ### _Our Contributions_ To fill in this gap, we propose a tensor neural ordinary differential equation (TN-ODE) based continuous-time channel prediction scheme in this paper. Specifically, inspired by the recently developed continuous-time signal processing technology named neural ODE in the field of machine learning [13], we adopt the neural ODE architecture proposed in [13] to model the continuous-time channel prediction problem. In this architecture, a GRU-based encoder is used to preprocess the discretely sampled historical channels, and a neural ODE-based decoder is then used to predict future channels in consecutive time slots. 
Furthermore, to improve the channel prediction accuracy and reduce the computational complexity of the neural ODE, we propose the TN-ODE to exploit the structural characteristics of channels in multiple domains by a series of low-dimensional learnable transforms. To be more specific, in the antenna domain, the channel model is described by different angles of arrival (AoAs) and angles of departure (AoDs), while in the frequency domain, the channel model is mainly determined by multiple times of arrival (ToAs). Thanks to these structural characteristics, the proposed TN-ODE allows us to decouple the complicated high-dimensional channel prediction into efficient low-dimensional channel prediction in multiple domains. Simulation results show that the proposed TN-ODE based continuous-time channel prediction technique can effectively mitigate the interpolation loss and improve the channel prediction performance in all time slots of each frame. ### _Organization and notation_ The remainder of this paper is organized as follows. In Section II, the system model of the mmWave massive MIMO is introduced, and the continuous-time channel prediction problem in this system is then formulated. After that, we elaborate on the proposed TN-ODE based continuous-time channel prediction model in Section III. Section IV illustrates the simulation results. Finally, conclusions are drawn in Section V. _Notation_: We denote the column vector \(\mathbf{a}\) and matrix \(\mathbf{A}\) by boldface lower-case and upper-case letters, respectively; \(\mathbf{A}^{T}\), \(\mathbf{A}^{H}\), and \(\mathbf{A}^{-1}\) are the transpose, conjugate transpose, and inverse of the matrix \(\mathbf{A}\), respectively; \(\mathbf{A}\otimes\mathbf{B}\) is the Kronecker product of the matrix \(\mathbf{A}\) and matrix \(\mathbf{B}\); \(\mathbf{A}\circ\mathbf{B}\) is the Hadamard product of \(\mathbf{A}\) and \(\mathbf{B}\); \(\mathbf{I}_{N}\) denotes an \(N\times N\) identity matrix. \(\mathcal{CN}\left(\mu,\sigma^{2}\right)\) is the probability density function of the circularly symmetric complex Gaussian distribution with mean \(\mu\) and variance \(\sigma^{2}\). \(\mathbb{E}\left\{\cdot\right\}\) denotes the statistical expectation. We use \(\mathrm{vec}(\mathbf{A})\) to denote the vectorization of matrix \(\mathbf{A}\). \(\sigma(x)=\frac{1}{1+e^{-x}}\) and \(\tanh(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}\) represent the Sigmoid function and hyperbolic tangent function, respectively. We denote \(h[n],n\in\mathbb{Z}\) as a discrete-time sequence and \(h(t),t\in\mathbb{R}\) as a continuous-time sequence. ## II System model In this section, we will first introduce the signal model of the mmWave massive MIMO system. Then, the continuous-time channel prediction problem is formulated to avoid the interpolation loss problem in existing discrete channel prediction schemes. We adopt the block-fading assumption [16], where the channel remains time-invariant within each time slot and changes between different time slots. Let \(\mathbf{H}_{m}(t)\in\mathbb{C}^{N_{\mathrm{T}}\times N_{\mathrm{R}}}\) denote the channel at time \(t\) on the \(m\)-th subcarrier. Due to the limited number of scattering clusters in the mmWave propagation environment, we adopt the widely used geometric Saleh-Valenzuela multipath channel model [15] to characterize the mmWave channel. 
Under this model, \(\mathbf{H}_{m}(t)\) can be denoted as \[\mathbf{H}_{m}(t)=\sum_{l=1}^{L}\alpha_{l}e^{-j2\pi(\nu_{l}t+f_{m}\tau_{l})}\mathbf{a}_{\mathrm{T}}(\phi_{l,\mathrm{T}})\mathbf{a}_{\mathrm{R}}^{H}(\phi_{l,\mathrm{R}}), \tag{1}\] where \(L\) is the number of paths, and \(\alpha_{l}\), \(\nu_{l}\), \(\tau_{l}\), \(\phi_{l,\mathrm{T}}\), and \(\phi_{l,\mathrm{R}}\) are the complex path gain, Doppler shift, ToA, AoD, and AoA of the \(l\)-th path, respectively. For \(m\in\{1,2,\cdots,M\}\), \(f_{m}=f+\frac{B}{2}(m-\frac{M}{2})\) denotes the \(m\)-th subcarrier frequency, with \(f\), \(B\), and \(M\) being the carrier frequency, bandwidth, and number of subcarriers. Since a uniform linear array (ULA) is considered in this paper, the array steering vectors \(\mathbf{a}_{\mathrm{T}}(\phi_{l,\mathrm{T}})\) and \(\mathbf{a}_{\mathrm{R}}(\phi_{l,\mathrm{R}})\) can be represented by \[\mathbf{a}_{\mathrm{T}}(\phi_{l,\mathrm{T}})=\frac{1}{\sqrt{N_{\mathrm{T}}}}e^{-j\frac{2\pi}{\lambda}d\sin(\phi_{l,\mathrm{T}})\mathbf{n}_{\mathrm{T}}}, \tag{2}\] \[\mathbf{a}_{\mathrm{R}}(\phi_{l,\mathrm{R}})=\frac{1}{\sqrt{N_{\mathrm{R}}}}e^{-j\frac{2\pi}{\lambda}d\sin(\phi_{l,\mathrm{R}})\mathbf{n}_{\mathrm{R}}}, \tag{3}\] where \(\mathbf{n}_{\mathrm{T}}=[0,1,\cdots,N_{\mathrm{T}}-1]^{T}\) and \(\mathbf{n}_{\mathrm{R}}=[0,1,\cdots,N_{\mathrm{R}}-1]^{T}\), \(\lambda\) is the carrier wavelength, and \(d\) is the antenna spacing, usually set as \(d=\lambda/2\). We denote \(T_{f}\) and \(T_{s}\) as the durations of one frame and one time slot, where \(T_{f}=QT_{s}\). Accordingly, we use \(\mathbf{H}_{m}^{(p,q)}=\mathbf{H}_{m}(t_{p,q})\) to denote the channel at the \(q\)-th time slot of the \(p\)-th frame and the \(m\)-th subcarrier, where \(t_{p,q}=pT_{f}+qT_{s}\). Then, the received signal \(\mathbf{Y}_{m}^{(p,q)}\in\mathbb{C}^{N_{\mathrm{RF}}\times N_{q}}\) at the \(q\)-th time slot of the \(p\)-th frame and the \(m\)-th subcarrier at the BS can be expressed as \[\mathbf{Y}_{m}^{(p,q)}=\mathbf{A}^{(p,q)}\mathbf{H}_{m}^{(p,q)}\mathbf{S}_{m}^{(p,q)}+\mathbf{A}^{(p,q)}\mathbf{N}_{m}^{(p,q)}=\overline{\mathbf{H}}_{m}^{(p,q)}\mathbf{S}_{m}^{(p,q)}+\mathbf{A}^{(p,q)}\mathbf{N}_{m}^{(p,q)}, \tag{4}\] where \(\mathbf{A}^{(p,q)}\in\mathbb{C}^{N_{\mathrm{RF}}\times N_{\mathrm{T}}}\) is the frequency-independent combining matrix, \(\mathbf{S}_{m}^{(p,q)}\in\mathbb{C}^{N_{\mathrm{R}}\times N_{q}}\) denotes the transmitted signal, \(\mathbf{N}_{m}^{(p,q)}\in\mathbb{C}^{N_{\mathrm{T}}\times N_{q}}\) is the Gaussian noise with each element following the distribution \(\mathcal{CN}(0,\sigma^{2})\), \(\sigma^{2}\) being the noise power, and \(\overline{\mathbf{H}}_{m}^{(p,q)}=\mathbf{A}^{(p,q)}\mathbf{H}_{m}^{(p,q)}\in\mathbb{C}^{N_{\mathrm{RF}}\times N_{\mathrm{R}}}\) is the effective channel matrix in the \(q\)-th time slot of the \(p\)-th frame and the \(m\)-th subcarrier. We utilize the discrete Fourier transform (DFT) codebook to design the analog combining matrix \(\mathbf{A}^{(p,q)}\)[17]. In the DFT codebook, each codeword points to a specific azimuth AoA, and all codewords together cover the entire beam space. By traversing all codewords, the strongest \(N_{\mathrm{RF}}\) codewords can be selected to construct \(\mathbf{A}^{(p,q)}\). Benefiting from the fact that the time variation of the channel is mainly caused by the Doppler effect, while the AoAs and AoDs are time-invariant over several frames lasting tens of milliseconds [18], the optimal combining matrix stays unchanged over several frames. 
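To make the channel model tangible, the following is a minimal numpy sketch of ours (not code from the paper) that evaluates Eqs. (1)-(3); the antenna counts, angles, and path parameters below are arbitrary illustrative assumptions:

```python
import numpy as np

def steering(phi, n_ant, d_over_lambda=0.5):
    """ULA steering vector of Eqs. (2)-(3) for angle phi (radians)."""
    n = np.arange(n_ant)
    return np.exp(-1j * 2 * np.pi * d_over_lambda * np.sin(phi) * n) / np.sqrt(n_ant)

def channel(t, f_m, paths, n_t=64, n_r=4):
    """Saleh-Valenzuela channel of Eq. (1) at time t and subcarrier f_m.
    `paths` is a list of (alpha, doppler, toa, phi_T, phi_R) tuples."""
    H = np.zeros((n_t, n_r), dtype=complex)
    for alpha, nu, tau, phi_t, phi_r in paths:
        phase = np.exp(-1j * 2 * np.pi * (nu * t + f_m * tau))
        # outer(a_T, conj(a_R)) realizes a_T a_R^H
        H += alpha * phase * np.outer(steering(phi_t, n_t), steering(phi_r, n_r).conj())
    return H

# One-path toy example: 1.5 kHz Doppler, 100 ns delay, arbitrary angles.
H = channel(t=1e-3, f_m=28e9, paths=[(1.0, 1.5e3, 1e-7, 0.3, -0.2)])
```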
In this case, we suppose \(\mathbf{A}^{(p,q)}=\mathbf{A},\forall p\in\{0,1,\cdots,P-1\},\forall q\in\{0,1,\cdots,Q-1\}\), where \(P\) is the number of frames within the order of tens of milliseconds. In particular, when \(q=0\), the effective channel \(\overline{\mathbf{H}}_{m}^{(p,0)}\) of the first time slot in the \(p\)-th frame is estimated according to the predefined pilot sequence \(\mathbf{S}_{m}^{(p,0)}\) and the received signal \(\mathbf{Y}_{m}^{(p,0)}\). Generally, we use the least square (LS) channel estimation method to recover the effective channel, which can be represented by \[\mathrm{vec}(\mathbf{\hat{H}}_{m}^{(p,0)})=(\mathbf{S}_{m}^{(p,0)^{T}}\otimes\mathbf{I}_{N_{\mathrm{RF}}})^{-1}\mathrm{vec}(\mathbf{Y}_{m}^{(p,0)}), \tag{5}\] where \(\mathrm{vec}(\mathbf{\hat{H}}_{m}^{(p,0)})\) is the vectorization of the LS channel estimate \(\mathbf{\hat{H}}_{m}^{(p,0)}\). When \(q\neq 0\), \(\mathbf{S}_{m}^{(p,q)}\) is the transmitted signal, and the achievable average rate \(R\) can be written as \[R=\frac{1}{M}\sum_{m=1}^{M}\log_{2}\left|\mathbf{I}_{N_{\mathrm{R}}}+\frac{1}{N_{\mathrm{R}}\sigma^{2}}\mathbf{D}_{m}^{(p,q)}\overline{\mathbf{H}}_{m}^{(p,q)}\overline{\mathbf{H}}_{m}^{(p,q)^{H}}\mathbf{D}_{m}^{(p,q)^{H}}\right|. \tag{6}\] We utilize the classical zero-forcing method [2] to design the digital precoding \(\mathbf{D}_{m}^{(p,q)}\in\mathbb{C}^{N_{\mathrm{R}}\times N_{\mathrm{RF}}}\) in the \(q\)-th time slot of the \(p\)-th frame and the \(m\)-th subcarrier as: \[\mathbf{D}_{m}^{(p,q)}=(\mathbf{\hat{H}}_{m}^{(p,q)^{H}}\mathbf{\hat{H}}_{m}^{(p,q)})^{-1}\mathbf{\hat{H}}_{m}^{(p,q)^{H}}. \tag{7}\] ### _Problem formulation_ To calculate the digital precoding \(\mathbf{D}_{m}^{(p,q)}\), the estimated instantaneous \(\mathbf{\hat{H}}_{m}^{(p,q)}\) is required according to (7). However, since only the CSI at the first time slot of each frame, \(\mathbf{\hat{H}}_{m}^{(p,0)}\), is available, we usually use \(\mathbf{\hat{H}}_{m}^{(p,0)}\) to perform precoding for the subsequent time slots, i.e., \(\mathbf{D}_{m}^{(p,q)}=\mathbf{D}_{m}^{(p,0)}\). Unfortunately, due to the channel aging issue induced by mobility, the outdated CSI \(\mathbf{\hat{H}}_{m}^{(p,0)}\) may differ significantly from the actual effective channel \(\overline{\mathbf{H}}_{m}^{(p,q)}\), which results in a severe performance loss for mmWave MIMO in mobile scenarios. To mitigate the performance loss caused by channel aging, channel prediction techniques [5, 6, 7, 8, 9, 10, 11, 12] have been proposed to deal with the channel aging issue by exploring the temporal correlation of the time-varying channel. Specifically, the existing channel prediction schemes can predict future channels in discrete frames, i.e., \(\mathbf{\hat{H}}_{m}^{(p+1,0)},\cdots,\mathbf{\hat{H}}_{m}^{(p+K,0)}\), based on the historical channels \(\mathbf{\hat{H}}_{m}^{(p-J,0)},\cdots,\mathbf{\hat{H}}_{m}^{(p,0)}\) with the same time interval. Since these channel prediction methods are designed for discrete channel prediction and only predict the channel in the first time slot of each frame, they cannot realize the direct prediction of the channels for all time slots in future frames. Thus, an interpolation method has to be utilized to recover the channels \(\mathbf{\hat{H}}_{m}^{(p+k,q)}\) with \(q>0\) as \[\mathbf{\hat{H}}_{m}^{(p+k,q)}=(1-\frac{q}{Q})\mathbf{\hat{H}}_{m}^{(p+k,0)}+\frac{q}{Q}\mathbf{\hat{H}}_{m}^{(p+k+1,0)}, \tag{8}\] where \(k=0,1,\cdots,K-1\). 
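For reference, the interpolation baseline of (8), which the discussion below criticizes, can be sketched in a few lines (our illustration; the array shapes are arbitrary assumptions):

```python
import numpy as np

def interpolate_intra_frame(H_frame_k, H_frame_k1, Q):
    """Eq. (8): linearly interpolate the Q intra-frame slots between two
    adjacent per-frame channel predictions (arrays of identical shape)."""
    return [(1 - q / Q) * H_frame_k + (q / Q) * H_frame_k1 for q in range(Q)]

# Toy example with random 4x4 "channels" and Q = 5 slots per frame:
rng = np.random.default_rng(0)
H0 = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H1 = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
slots = interpolate_intra_frame(H0, H1, Q=5)
```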
However, due to the complicated variation of the channel, simple interpolation can hardly describe the actual change of the channel. Therefore, the existing discrete channel prediction schemes suffer from an interpolation loss. Unlike the existing discrete channel prediction schemes, we reformulate the channel prediction problem as a continuous-time channel mapping problem to avoid the interpolation loss. Specifically, we utilize the historical discrete channels from the past \(J\) frames to predict the future continuous-time channels in the next \(K\) frames, which can be formulated as \[\min_{\mathbf{\theta}}\sum_{k=0}^{K-1}\sum_{q=0}^{Q-1}\sum_{m=1}^{M}\mathbb{E}\bigg\{\frac{\|\widehat{\mathbf{H}}_{m}^{(p+k,q)}-\overline{\mathbf{H}}_{m}^{(p+k,q)}\|^{2}}{\|\overline{\mathbf{H}}_{m}^{(p+k,q)}\|^{2}}\bigg\}, \tag{9a}\] \[\mathrm{s.t.}\ (\widehat{\mathbf{H}}_{m}^{(p,1)},\widehat{\mathbf{H}}_{m}^{(p,2)},\cdots,\widehat{\mathbf{H}}_{m}^{(p+K-1,Q-1)})=f(\widehat{\mathbf{H}}_{m}^{(p-J,0)},\cdots,\widehat{\mathbf{H}}_{m}^{(p,0)};\mathbf{\theta}), \tag{9b}\] where \(f(\cdot)\) is the proposed continuous-time channel prediction model and \(\mathbf{\theta}\) collects the parameters of the model. Since the normalized mean square error (NMSE) is not affected by the amplitude of the channel, we adopt the NMSE as the minimization target to realize stable convergence. It is worth noting that the estimated historical channels are discretely sampled at the first time slot of each frame, while the predicted channels are continuously distributed over all time slots of each future frame. By contrast, the existing discrete channel prediction schemes only predict the channel at the first time slot of the future frames. Thus, the proposed continuous-time channel prediction scheme realizes the direct prediction of the future channel in any time slot, so that the interpolation loss can be avoided. ## III Proposed Method In this section, we first introduce the background of neural ODEs and elaborate on the framework of neural ODE based channel prediction. Then, we propose the TN-ODE to exploit the mmWave channel structure and improve the channel prediction performance. ### _Background of Neural ODE_ To achieve continuous-time channel prediction, it is crucial to find an appropriate technique to process continuous-time signals. Recently, with the rapid advancement in the field of dynamical systems, the neural ODE has become an attractive technology for modeling continuous-time sequences [13, 19, 20]. Neural ODEs use first-order differential equations to fit the hidden state of time sequences, so they are capable of handling continuous-time signals. To make this paper self-contained, we provide a brief background on neural ODEs. Specifically, classical RNN-like architectures, including RNN, GRU, and LSTM, build complicated networks to encode time sequences into a series of hidden states: \[\mathbf{h}[n]=\mathbf{h}[n-1]+g(\mathbf{h}[n-1],\mathbf{\theta}). \tag{10}\] Here, \(\mathbf{h}[n]\) represents the hidden state at the \(n\)-th discrete time, \(g(\cdot)\) denotes the state transition function realized by neural networks, and \(\mathbf{\theta}\) is the network parameters. The transition in (10) is built on a discrete difference equation, which is awkward for dealing with signals that do not fall on discrete time samples. 
On the contrary, neural ODEs define a continuous-time hidden state \(\mathbf{h}(t)\), which can be formulated as a time-invariant differential equation: \[\frac{\mathrm{d}\mathbf{h}(t)}{\mathrm{d}t}=f(\mathbf{h}(t),\mathbf{\theta}). \tag{11}\] Besides, (11) is equivalent to the following integral form: \[\mathbf{h}(t)=\int_{t_{0}}^{t}f(\mathbf{h}(\tau),\mathbf{\theta})\mathrm{d}\tau+\mathbf{h}(t_{0}). \tag{12}\] Here, \(\mathbf{h}(t_{0})\) is the initial hidden state, and the function \(f(\mathbf{h}(t),\mathbf{\theta})\) describes the dynamics of the hidden state \(\mathbf{h}(t)\). One can acquire the hidden state \(\mathbf{h}(t)\) at an arbitrary time \(t\) by solving problem (12) through an ODE solver: \[\mathbf{h}(t)=\text{ODESolver}(f(\cdot,\mathbf{\theta}),\mathbf{h}(t_{0}),t_{0},t). \tag{13}\] As indicated in [19], such an ODE solver can be implemented by various numerical schemes, including the forward and backward Euler methods, the Runge-Kutta method, and the linear multi-step method. As a consequence, applying the neural ODE model (12) and solver (13) allows us to deal with continuous-time sequences, so as to achieve continuous-time channel prediction. ### _Framework of Neural ODE Based Channel Prediction_ Based on the above background, the framework of neural ODE-based channel prediction is presented in this subsection. Our aim is to predict the channels for all time slots of the future \(K\) frames by processing the \(J\) historical channels. The Latent ODE architecture introduced in [20] is adopted to model this process. For clarity of exposition, we denote \(\hat{\mathbf{H}}[n]=[\text{vec}(\hat{\mathbf{H}}_{1}^{(n,0)}),\text{vec}(\hat{\mathbf{H}}_{2}^{(n,0)}),\cdots,\text{vec}(\hat{\mathbf{H}}_{M}^{(n,0)})]\) and \(\hat{\mathbf{H}}(t)=[\text{vec}(\hat{\mathbf{H}}_{1}(t)),\text{vec}(\hat{\mathbf{H}}_{2}(t)),\cdots,\text{vec}(\hat{\mathbf{H}}_{M}(t))]\). As shown in Figure 3, the neural ODE-based channel prediction is composed of two modules, i.e., an encoder and a decoder [20]. Generally speaking, the encoder is responsible for extracting features from the historical channels \(\hat{\mathbf{H}}[n]\) for \(n=\{0,-1,\cdots,-J+1\}\). The output of the encoder serves as the initial state of the decoder. Correspondingly, the decoder exploits a neural ODE to infer the future continuous-time channels \(\hat{\mathbf{H}}(t)\) for \(t>0\). Specifically, the encoder's role is to extract the features from the historical channels. Since the SRS signals are transmitted and received with an equally-sized time interval \(T_{f}\), RNN-like architectures suffice to deal with these sequences. We denote the hidden state of \(\hat{\mathbf{H}}[n]\) as \(\mathbf{R}[n]\). Then, based on the Markov property of RNN models, the map from \(\mathbf{R}[n-1]\) to \(\mathbf{R}[n]\) can be written as \[\mathbf{R}[n]=\text{EncoderCell}(\mathbf{R}[n-1],\hat{\mathbf{H}}[n],\mathbf{\theta}_{E}), \tag{14}\] where \(\text{EncoderCell}(\cdot)\) is the transition function of the RNN-like network with \(\mathbf{\theta}_{E}\) being the learnable parameters. For the decoder, a neural ODE model is deployed to specify the dynamics of the future channel's hidden state. This hidden state is defined as \(\mathbf{O}(t)\). Besides, the final output \(\mathbf{R}[0]\) of the encoder is regarded as the initial state \(\mathbf{O}(0)\) of the decoder. 
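As a concrete illustration of the ODESolver in (13), below is a minimal sketch of our own (not code from the paper) that propagates a hidden state with the forward Euler scheme mentioned above; the transition function, step count, and dimensions are illustrative assumptions:

```python
import numpy as np

def odesolve_euler(f, h0, t0, t1, num_steps=100):
    """Forward-Euler realization of the ODESolver in (13): integrate
    dh/dt = f(h) from t0 to t1 starting at h(t0) = h0."""
    h, dt = h0.copy(), (t1 - t0) / num_steps
    for _ in range(num_steps):
        h = h + dt * f(h)  # one Euler step of the integral in Eq. (12)
    return h

# Toy transition function standing in for a learned network f(h, theta):
W = np.random.default_rng(0).standard_normal((8, 8)) * 0.1
f = lambda h: np.tanh(W @ h)
h_t = odesolve_euler(f, h0=np.ones(8), t0=0.0, t1=1.0)
```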
Therefore, for any time \(t>0\), the hidden state \(\mathbf{O}(t)\) can be presented as \[\frac{\mathrm{d}\mathbf{O}(t)}{\mathrm{d}t}=\text{DecoderCell}(\mathbf{O}(t),\mathbf{\theta}_{D}), \tag{15}\] where \(\text{DecoderCell}(\cdot)\) denotes the transition function of the neural ODE network with \(\mathbf{\theta}_{D}\) being its learnable parameters. Note that (15) can be solved by the ODESolver in (13). After that, a one-layer neural network \(\mathsf{Pred}(\cdot)\) is built to output the predicted channel \(\hat{\mathbf{H}}(t)\) from the hidden state \(\mathbf{O}(t)\): \[\hat{\mathbf{H}}(t)=\mathsf{Pred}(\mathbf{O}(t),\boldsymbol{\theta}_{P}), \tag{16}\] where \(\boldsymbol{\theta}_{P}\) denotes its parameters. All in all, following this neural ODE framework, we are capable of extracting features from the previous channels and then predicting the future continuous-time channels for any \(t\). ### _TN-ODE based Channel Prediction_ In this subsection, we elaborate on the idea of the tensor neural ODE for designing the three crucial transition functions: \(\text{EncoderCell}(\cdot)\), \(\text{DecoderCell}(\cdot)\), and \(\text{Pred}(\cdot)\). We commence our discussion by briefly introducing the transition functions widely used in the classical neural ODE framework [20]. The authors in [20] deployed a GRU model as the encoder transition function and modified the GRU model to act as the decoder transition function. To fit our channel prediction framework, the inputs, hidden states, and outputs should first be vectorized as the following column vectors: \(\hat{\mathbf{h}}[n]=\text{vec}(\hat{\mathbf{H}}[n])\), \(\mathbf{r}[n]=\text{vec}(\mathbf{R}[n])\), \(\mathbf{o}(t)=\text{vec}(\mathbf{O}(t))\), and \(\hat{\mathbf{h}}(t)=\text{vec}(\hat{\mathbf{H}}(t))\). Then, according to the GRU architecture [20], \(\text{EncoderCell}(\cdot)\) consists of the following modules: \[\mathbf{z}=\sigma\left(\mathbf{U}^{z}\hat{\mathbf{h}}[n]+\mathbf{W}^{z}\mathbf{r}[n-1]\right), \tag{17a}\] \[\mathbf{x}=\sigma\left(\mathbf{U}^{x}\hat{\mathbf{h}}[n]+\mathbf{W}^{x}\mathbf{r}[n-1]\right), \tag{17b}\] \[\mathbf{u}=\tanh\left(\mathbf{U}^{u}\hat{\mathbf{h}}[n]+\mathbf{W}^{u}(\mathbf{r}[n-1]\circ\mathbf{x})\right), \tag{17c}\] \[\mathbf{r}[n]=(\mathbf{1}-\mathbf{z})\circ\mathbf{u}+\mathbf{z}\circ\mathbf{r}[n-1], \tag{17d}\] where the matrices \(\{\mathbf{U},\mathbf{W}\}\) are the learnable parameters. As for the decoder, it differs from the encoder, which can receive the external stimulus \(\hat{\mathbf{h}}[n]\) to update its states. The hidden state transition of \(\text{DecoderCell}(\cdot)\) is an auto-regressive process without external stimulus. Thereby, to fit the GRU model into this decoder, we can carry out the following steps to modify the GRU: remove \(\hat{\mathbf{h}}[n]\) from (17a)-(17d); replace \(\mathbf{r}[n-1]\) and \(\mathbf{r}[n]\) with \(\mathbf{o}(t)\) and \(\frac{\mathrm{d}\mathbf{o}(t)}{\mathrm{d}t}\), respectively. Finally, the function \(\text{Pred}(\cdot)\) can be realized by a fully connected layer, i.e., \(\hat{\mathbf{h}}(t)=\mathbf{W}^{h}\mathbf{o}(t)\). As a result, the entire neural ODE-based prediction is successfully established based on the classical GRU model. There is no denying that the above transition functions have the ability to process continuous-time sequences. However, they suffer from two serious problems when applied to channel prediction. First, these transition functions fail to exploit the underlying channel structure. 
As shown in (1), mmWave channels exhibit obvious correlations in multiple domains. For example, the antenna-domain channel is constructed by the superposition of multiple array steering vectors with different AoAs and AoDs. Besides, in the frequency domain, the channel structure can be captured by several ToAs. However, simply vectorizing channels to fit the GRU model will undermine such regular structures. Second, the computational complexity of these transition functions is also unaffordable. Taking the function \(\text{Pred}(\cdot)\) as an example, suppose \(\mathbf{W}^{h}\) is a square matrix. Since the dimension of \(\hat{\mathbf{h}}(t)\) is \(N_{\text{RF}}N_{\text{R}}M\times 1\), the matrix \(\mathbf{W}^{h}\) will contain \(2N_{\text{RF}}^{2}N_{\text{R}}^{2}M^{2}\) floating points. If \(N_{\text{RF}}=N_{\text{R}}=4\) and \(M=256\), then the number of floating points of \(\mathbf{W}^{h}\) is \(2N_{\text{RF}}^{2}N_{\text{R}}^{2}M^{2}=33,554,432\), which costs unacceptable computational resources. To address these two critical problems, we propose the TN-ODE by exploiting the channel correlation. Our scheme is inspired by tensor decomposition based signal processing algorithms [21], which extract the information of channels from different domains and process them separately. In our model, we preserve the matrix form of \(\hat{\mathbf{H}}[n]\), \(\mathbf{R}[n]\), \(\mathbf{O}(t)\), and \(\hat{\mathbf{H}}(t)\), and use different learnable transforms to independently extract the antenna-domain and frequency-domain information from the historical channels. We take the matrix product \(\mathbf{U}^{z}\hat{\mathbf{h}}[n]\) in (17a) as an example. The classical GRU model (17a) vectorizes \(\hat{\mathbf{H}}[n]\) as \(\hat{\mathbf{h}}[n]\in\mathbb{C}^{N_{\text{RF}}N_{\text{R}}M\times 1}\) and uses a high-dimensional matrix \(\mathbf{U}^{z}\) to process \(\hat{\mathbf{h}}[n]\). Instead, we keep the shape of \(\hat{\mathbf{H}}[n]\) as \(N_{\text{RF}}N_{\text{R}}\times M\) and use two independent low-dimensional matrices \(\mathbf{U}_{l}^{z}\) and \(\mathbf{U}_{r}^{z}\) to separately act on the antenna domain and frequency domain of \(\hat{\mathbf{H}}[n]\), which gives rise to \(\mathbf{U}_{l}^{z}\hat{\mathbf{H}}[n]\mathbf{U}_{r}^{z}\). Similarly, we modify all modules in (17a)-(17d) in the same way to construct the tensor-inspired \(\text{EncoderCell}(\cdot)\) as \[\mathbf{Z}=\sigma\left(\mathbf{U}_{l}^{z}\hat{\mathbf{H}}[n]\mathbf{U}_{r}^{z}+\mathbf{W}_{l}^{z}\mathbf{R}[n-1]\mathbf{W}_{r}^{z}\right), \tag{18a}\] \[\mathbf{X}=\sigma\left(\mathbf{U}_{l}^{x}\hat{\mathbf{H}}[n]\mathbf{U}_{r}^{x}+\mathbf{W}_{l}^{x}\mathbf{R}[n-1]\mathbf{W}_{r}^{x}\right), \tag{18b}\] \[\mathbf{U}=\tanh\left(\mathbf{U}_{l}^{u}\hat{\mathbf{H}}[n]\mathbf{U}_{r}^{u}+\mathbf{W}_{l}^{u}(\mathbf{R}[n-1]\circ\mathbf{X})\mathbf{W}_{r}^{u}\right), \tag{18c}\] \[\mathbf{R}[n]=(\mathbf{1}-\mathbf{Z})\circ\mathbf{U}+\mathbf{Z}\circ\mathbf{R}[n-1], \tag{18d}\] where the matrices \(\{\mathbf{U}_{l},\mathbf{U}_{r},\mathbf{W}_{l},\mathbf{W}_{r}\}\) are all learnable parameters. 
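To illustrate the bilinear (left/right) transform at the heart of (18a)-(18d) and the parameter saving it brings, here is a minimal numpy sketch of our own; the dimensions follow the running example \(N_{\text{RF}}=N_{\text{R}}=4\), \(M=256\), and square feature maps are assumed for simplicity (the actual feature dimensions \(F_{l}\), \(F_{r}\) are specified in the text below):

```python
import numpy as np

N_ant, M = 4 * 4, 256   # N_RF * N_R = 16 antenna dimension, M = 256 subcarriers
rng = np.random.default_rng(0)
H = rng.standard_normal((N_ant, M)) + 1j * rng.standard_normal((N_ant, M))

# Tensor (TN-ODE style) gate: small left/right matrices act on each domain,
# e.g. the term U_l^z H U_r^z in (18a); zeros stand in for learned weights.
U_l = np.zeros((N_ant, N_ant), dtype=complex)   # antenna-domain transform
U_r = np.zeros((M, M), dtype=complex)           # frequency-domain transform
out = U_l @ H @ U_r                             # never forms or touches vec(H)

# Real-valued parameter counts (x2 for complex entries), as in the text:
print(2 * (N_ant * M) ** 2)     # vectorized gate acting on vec(H): 33,554,432
print(2 * (N_ant**2 + M**2))    # bilinear left/right gate:            131,584
```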
Here, the matrices \(\{\mathbf{U}_{l}^{z},\mathbf{U}_{l}^{x},\mathbf{U}_{l}^{u}\}\) have a size of \(F_{l}\times N_{\text{RF}}N_{\text{R}}\), the matrices \(\{\mathbf{U}_{r}^{z},\mathbf{U}_{r}^{x},\mathbf{U}_{r}^{u}\}\) have a size of \(M\times F_{r}\), the matrices \(\{\mathbf{W}_{l}^{z},\mathbf{W}_{l}^{x},\mathbf{W}_{l}^{u}\}\) have a size of \(F_{l}\times F_{l}\), and the matrices \(\{\mathbf{W}_{r}^{z},\mathbf{W}_{r}^{x},\mathbf{W}_{r}^{u}\}\) have a size of \(F_{r}\times F_{r}\), where \(F_{l}\) and \(F_{r}\) denote the feature dimensions.

Fig. 3: The framework of neural ODE-based channel prediction.

Notice that the computations in (18a)-(18d) are all complex-valued multiplications, which are realized by the complex-valued neural network (CVNN) proposed in [7]. The Sigmoid and Tanh functions work on the real and imaginary parts, respectively. Moreover, we can transform the classical DecoderCell\((\cdot)\) and Pred\((\cdot)\) to their tensor forms in the same way. To be specific, the transition function of the ODE decoder can be written as \[\mathbf{Z}(t)=\sigma\left(\mathbf{V}_{l}^{z}\mathbf{O}(t)\mathbf{V}_{r}^{z}\right), \tag{19a}\] \[\mathbf{X}(t)=\sigma\left(\mathbf{V}_{l}^{x}\mathbf{O}(t)\mathbf{V}_{r}^{x}\right), \tag{19b}\] \[\mathbf{U}(t)=\tanh\left(\mathbf{V}_{l}^{u}(\mathbf{O}(t)\circ\mathbf{X}(t))\mathbf{V}_{r}^{u}\right), \tag{19c}\] \[\frac{\mathrm{d}\mathbf{O}(t)}{\mathrm{d}t}=(\mathbf{1}-\mathbf{Z}(t))\circ\mathbf{U}(t)+\mathbf{Z}(t)\circ\mathbf{O}(t), \tag{19d}\] where the size of the matrices \(\{\mathbf{V}_{l}^{z},\mathbf{V}_{l}^{x},\mathbf{V}_{l}^{u}\}\) is \(F_{l}\times F_{l}\) and the size of the matrices \(\{\mathbf{V}_{r}^{z},\mathbf{V}_{r}^{x},\mathbf{V}_{r}^{u}\}\) is \(F_{r}\times F_{r}\). Then, the function Pred\((\cdot)\) can be given by \[\mathbf{\hat{H}}(t)=\mathbf{W}_{l}^{h}\mathbf{O}(t)\mathbf{W}_{r}^{h}, \tag{20}\] with \(\mathbf{W}_{l}^{h}\in\mathbb{C}^{N_{\text{RF}}N_{\text{R}}\times F_{l}}\) and \(\mathbf{W}_{r}^{h}\in\mathbb{C}^{F_{r}\times M}\). Our proposed TN-ODE enjoys two crucial merits compared to the classical one [20]. To begin with, it preserves the structural features of multi-domain channels throughout the entire procedure, so our scheme is tailored to predicting wireless continuous-time channels. Moreover, its computational complexity is much lower than that of [20]. We again take the function Pred\((\cdot)\) as an example. As shown in (20), we suppose both \(\mathbf{W}_{l}^{h}\) and \(\mathbf{W}_{r}^{h}\) are square matrices. Since the shape of \(\mathbf{\hat{H}}(t)\) is \(N_{\text{RF}}N_{\text{R}}\times M\), the matrices \(\mathbf{W}_{l}^{h}\) and \(\mathbf{W}_{r}^{h}\) have sizes of \(N_{\text{RF}}N_{\text{R}}\times N_{\text{RF}}N_{\text{R}}\) and \(M\times M\), which gives rise to \(2(N_{\text{RF}}^{2}N_{\text{R}}^{2}+M^{2})\) floating points. Therefore, if \(N_{\text{RF}}=N_{\text{R}}=4\) and \(M=256\), the number of floating points decreases from \(33,554,432\) in \(\mathbf{W}^{h}\) to \(131,584\) in \(\mathbf{W}_{l}^{h}\) and \(\mathbf{W}_{r}^{h}\). The computational complexity is thus significantly reduced. As a consequence, our proposed TN-ODE takes advantage of the continuous-time signal processing capability of the ODE and the multi-domain structure of mmWave channels, so it is promising for achieving efficient continuous-time channel prediction, which will be demonstrated in the simulation section. ### _Training and Testing Details_ In this subsection, we supplement some training and testing details. To begin with, we adopt an offline training and online testing strategy. 
In the offline training stage, we use the clustered delay line (CDL) channel model to randomly generate \(N_{\text{train}}\) time-varying channel samples. We divide these samples into \(\frac{N_{\text{train}}}{BS}\) batches, with \(BS\) being the batch size. Consider the \(b\)-th batch. Each sample of this batch is a time-varying channel sequence, which is divided into two periods. The first period corresponds to the historical channels. In this period, we sample \(J\) time slots with an equal time interval of \(T_{f}\). The corresponding historical channels are \(\mathbf{H}^{\text{input}}=\{\mathbf{\hat{H}}[-J+1],\cdots,\mathbf{\hat{H}}[-1],\mathbf{\hat{H}}[0]\}\). The second period is regarded as the future time, where \(P\) time slots are randomly sampled from the time duration \([0,KT_{f}]\). We use \(t_{1}^{b}\leq t_{2}^{b}\leq\cdots\leq t_{P}^{b}\) to index these sampled times in the \(b\)-th batch. Therefore, the corresponding noise-free channels are \(\{\mathbf{\overline{H}}(t_{1}^{b}),\mathbf{\overline{H}}(t_{2}^{b}),\cdots,\mathbf{\overline{H}}(t_{P}^{b})\}\), which serve as the training labels. Then, we use the proposed TN-ODE model to process \(\mathbf{H}^{\text{input}}\) and predict \(\{\mathbf{\hat{H}}(t_{1}^{b}),\mathbf{\hat{H}}(t_{2}^{b}),\cdots,\mathbf{\hat{H}}(t_{P}^{b})\}\). Finally, the NMSE is used as the loss function1: Footnote 1: NMSE is adopted as the loss function instead of MSE because the NMSE loss speeds up model convergence and avoids the influence of the amplitude of the channel. \[\text{Loss}=\frac{1}{P}\sum_{i=1}^{P}\mathbb{E}\left\{\frac{\|\mathbf{\hat{H}}(t_{i}^{b})-\mathbf{\overline{H}}(t_{i}^{b})\|^{2}}{\|\mathbf{\overline{H}}(t_{i}^{b})\|^{2}}\right\}. \tag{21}\] Based on this loss function, the Adam optimizer is adopted to update the network parameters using their gradients. Notice that the adjoint sensitivity method proposed in [13] is used to efficiently compute the ODE's gradients. The above procedure is carried out batch by batch until convergence. The data size in the testing stage is \(N_{\text{test}}\), where each channel sample is still divided into two periods. The first period is the same as that in the training stage. Regarding the second period, our target is to predict the channels for the future \(KQ\) time slots (or \(K\) frames). Therefore, we sample \(KQ\) slots with an equal time interval of \(T_{s}\), denoted by \(t_{i}=iT_{s}\), \(i=1,\cdots,KQ\). Then, we use the well-trained TN-ODE model to predict \(\mathbf{\hat{H}}(t_{i})\), \(i=1,\cdots,KQ\). Finally, these predicted channels are used for precoding. ### _Computational Complexity_ In this subsection, we provide a detailed computational complexity analysis of the proposed scheme in the testing stage. Here, we mainly count the number of complex-valued multiplications. For a sequence of historical channels \(\{\mathbf{\hat{H}}[-J+1],\cdots,\mathbf{\hat{H}}[-1],\mathbf{\hat{H}}[0]\}\), the total \(J\) channels are processed by the EncoderCell (18a)-(18d) sequentially. Steps (18a)-(18c) have a complexity in the order of \(\mathcal{O}(F_{l}N_{\text{RF}}N_{\text{R}}M+F_{l}MF_{r})\), and step (18d) has a complexity of \(\mathcal{O}(F_{l}F_{r})\). Therefore, taking into account the \(J\) channels, the computational complexity of the encoder is \(\mathcal{O}(JF_{l}N_{\text{RF}}N_{\text{R}}M+JF_{l}MF_{r})+\mathcal{O}(JF_{l}F_{r})=\mathcal{O}(JF_{l}N_{\text{RF}}N_{\text{R}}M+JF_{l}MF_{r})\). 
As for the decoder, we can similarly derive that the computational complexities of calculating the functions DecoderCell\((\cdot)\) and Pred\((\cdot)\) are \(\mathcal{O}(F_{l}^{2}F_{r}+F_{l}F_{r}^{2})\) and \(\mathcal{O}(N_{\text{RF}}N_{\text{R}}F_{l}F_{r}+N_{\text{RF}}N_{\text{R}}F_{r}M)\), respectively. Moreover, the ODESolver\((\cdot)\) in (13) needs to calculate DecoderCell\((\cdot)\) \(G\) times, where \(G\) is proportional to \(KQ\). Therefore, the computational complexity of the decoder is \(\mathcal{O}(GF_{l}^{2}F_{r}+GF_{l}F_{r}^{2})\). Finally, as \(KQ\) future channels are predicted, the overall number of complex-valued multiplications of the function Pred\((\cdot)\) is \(\mathcal{O}(KQN_{\text{RF}}N_{\text{R}}F_{l}F_{r}+KQN_{\text{RF}}N_{\text{R}}F_{r}M)\). As a consequence, the computational complexity of the proposed TN-ODE model is \[\mathcal{O}(JF_{l}M(N_{\text{RF}}N_{\text{R}}+F_{r}))+\mathcal{O}(GF_{l}F_{r}(F_{l}+F_{r}))+\mathcal{O}(KQN_{\text{RF}}N_{\text{R}}F_{r}(F_{l}+M)). \tag{22}\] ## IV Simulation Results In this section, simulation results are provided to demonstrate the superiority of our scheme. The CDL-B channel model in the Matlab 5G toolbox [12] is utilized to generate the data set. For each channel sample, the user velocity is randomly generated from the uniform distribution \(\mathcal{U}(30\,\text{km}/\text{h},60\,\text{km}/\text{h})\) and the delay spread is randomly chosen from the uniform distribution \(\mathcal{U}(50\,\text{ns},200\,\text{ns})\). The simulation configurations are presented in Table I. The compared benchmarks are as follows: 1) the perfect CSI; 2) the classical AI-based algorithms, including the GRU-based channel prediction [10] and the FC network based algorithm [8]; 3) the classical model-based techniques, including the Prony-based angular-delay domain channel prediction (PAD) [5] and ST-AR [7] algorithms; 4) utilizing the outdated channels without prediction. In Figure 4, the average rate performance is evaluated. We follow the 5G standard and set \(T_{f}\) as 0.625 ms and \(T_{s}\) as 0.125 ms. Therefore, the classical GRU, FC, ST-AR, and PAD algorithms predict the channels at the 5-th and 10-th time slots and then recover the channels at the other time slots through linear interpolation. It is clear from Figure 4 that the average rate performance of the classical algorithms degrades at the interpolated channels. Fortunately, our proposed scheme is able to avoid the interpolation loss by predicting the future channels at all time slots with the assistance of the TN-ODE. Additionally, the proposed TN-ODE exploits the multi-domain channel structure, so it can even achieve a higher average rate than the classical algorithms at the 5-th and 10-th time slots. In Figure 5, the real part of the true future channels and the predicted channels for an arbitrary antenna index and subcarrier are presented. We can observe from this figure that the existing discrete-time channel prediction techniques can only accurately predict the future channels at the SRS positions, while the interpolated channels considerably deviate from the true channels. On the contrary, the proposed TN-ODE scheme captures the dynamics of the continuous-time channels well. The simulation result in Figure 6 further supports our discussion, where the NMSE performance against time slots is illustrated. It is obvious that the achieved NMSE of the classical algorithms fluctuates intensively with respect to time slots, which is induced by the interpolation error. 
On the contrary, the NMSE performance of our scheme deteriorates smoothly over time, and it is always lower than -10 dB. As a result, we can conclude that our TN-ODE based approach accomplishes accurate continuous-time channel prediction.

Fig. 4: Average rate performance against time slots.

Fig. 5: The real part of the true future channels and the predicted channels.

Fig. 6: NMSE performance against time slots.

## V Conclusions

In this paper, we have investigated the essential problem of continuous-time channel prediction in mobile mmWave massive MIMO systems. At first, we adopted the neural ODE to model the temporal correlation of mmWave channels, and then we introduced the neural ODE based channel prediction framework. This framework deployed a GRU-based encoder to extract features from historical channels and used a neural ODE based decoder to predict future continuous-time channels. After that, a TN-ODE model was proposed to improve this framework, which makes full use of the multi-domain channel structure. Simulations demonstrated that our scheme accomplishes accurate channel prediction in all time slots of several future frames. The proposed TN-ODE model can potentially be extended to various continuous-time channel prediction scenarios, such as cell-free communication scenarios and RIS-aided communication scenarios. In the future, we will investigate multi-user continuous-time channel prediction.
2309.09194
Understanding Representations by Exploring Galaxies in Chemical Space
We present a Monte Carlo approach for studying chemical feature distributions of molecules without training a machine learning model or performing exhaustive enumeration. The algorithm generates molecules with predefined similarity to a given one for any representation. It serves as a diagnostic tool to understand which molecules are grouped in feature space and to identify shortcomings of representations and embeddings from unsupervised learning. In this work, we first study clusters surrounding chosen molecules and demonstrate that common representations do not yield a constant density of molecules in feature space, with possible implications for learning behavior. Next, we observe a connection between representations and properties: a linear correlation between the property value of a central molecule and the average radial slope of that property in chemical space. Molecules with extremal property values have the largest property derivative values in chemical space, which provides a route to improve the data efficiency of a representation by tailoring it towards a given property. Finally, we demonstrate applications for sampling molecules with specified metric-dependent distributions to generate molecules biased toward graph spaces of interest.
Jan Weinreich, Konstantin Karandashev, Guido Falk von Rudorff
2023-09-17T07:37:19Z
http://arxiv.org/abs/2309.09194v1
# Understanding Representations by Exploring Galaxies in Chemical Space ###### Abstract We present a Monte Carlo approach for studying chemical feature distributions of molecules without training a machine learning model or performing exhaustive enumeration. The algorithm generates molecules with predefined similarity to a given one for any representation. It serves as a diagnostic tool to understand which molecules are grouped in feature space and to identify shortcomings of representations and embeddings from unsupervised learning. In this work, we first study clusters surrounding chosen molecules and demonstrate that common representations do not yield a constant density of molecules in feature space, with possible implications for learning behavior. Next, we observe a connection between representations and properties: a linear correlation between the property value of a central molecule and the average radial slope of that property in chemical space. Molecules with extremal property values have the largest property derivative values in chemical space, which provides a route to improve the data efficiency of a representation by tailoring it towards a given property. Finally, we demonstrate applications for sampling molecules with specified metric-dependent distributions to generate molecules biased toward graph spaces of interest. ## I Introduction Chemical compound space [1] (CCS) encompasses all valid molecular graphs, occurring naturally, synthesized, or existing only in theory. Investigating this space is essential for enhancing predictive models of molecular properties. Molecular representations, [2; 3] whether based on graphs or embedding vectors, turn CCS into a metric space. While the concept of CCS is well established, less attention has been paid to studying the topology [4] induced by the metric and representation - specifically, the set of molecules in the neighborhood of an arbitrarily chosen molecule. [5] Instead, typically a set of molecules is predefined through a database, and the diversity of molecules can be visualized using dimensionality reduction techniques. [6; 7; 8] This is not to be confused with the underlying topology of CCS, which is _independent_ of the specific set of selected compounds. Similarity-based learning methods assume that if two representation vectors are close, so are the corresponding quantity values. Since the grouping of molecules depends on the representation vector, we argue that understanding the emergent topology is key to improving machine learning models that employ said representations. In addition to uniqueness, [9] a representation ideally contains quantities that correlate with properties of interest. Furthermore, adapting representations to reflect the physics at hand was shown to be beneficial for model accuracy. [10] This raises the question of whether a _one fits all_ solution to representation design is fundamentally flawed. Indeed, many state-of-the-art neural networks [11; 12; 13; 14] use embeddings that are learned implicitly for each property. Except for a few examples, [15] this is not the case for physics-based representations defined by analytical expressions. Supervised molecular regression methods assume that a _good_ representation maps molecules with similar properties close to each other and _vice versa_. Our method explores the space of chemical graphs obeying valence rules with operations inspired by genetic algorithms, [16; 17; 18] while maintaining detailed balance to guarantee exact sampling of the distributions of interest. 
Specifically, the implementation is based on the Evolutionary Monte Carlo [19] algorithm for generating molecular graphs, **M**olecular **O**ptimization by **S**ampling **A**daptively **i**n **C**hemical **S**pace [20] (MOSAiCS). We use well-established molecular representation vectors [2] and generate molecules based on similarity in the original feature space, not in projected space. This allows sampling distributions of molecular distances to investigate the average properties of sets of compounds without the need for exhaustive enumeration. Systematic control of the similarity between a molecule of interest and generated molecules is achieved with potential functions dependent on interpretable distances. By identifying molecules with high similarity, the proposed method can help improve property-specific representations, which remains a challenge _e.g._ for learning HOMO-LUMO gap values [21; 22] or addressing activity cliffs. [25] Key distinctions from generative models are: (i) no projection into latent space or training is required. (ii) Unlike string-based methods such as EDBO [24] or REINVENT [25] derived from SMILES [26] or SELFIES, [27; 28] the approach is feature-vector agnostic. (iii) Physics-based distance metrics and geometrically invariant representations are used, offering a more thorough approach than string-based encoding: generation of new molecules can be based on the statistical ensemble of conformers by using an average representation.[29] Here our focus is the topology generated by a representation and, to a lesser degree, its correlation to properties. Nonetheless, such correlations of representations with properties of interest are important for improving representation efficiency for ML.[30] Prevalent methods for probing chemical space are generative models[31] based on unsupervised learning mapping molecules into a low-dimensional latent space, e.g. SELFIES using autoencoders,[32; 33] recurrent neural networks,[34; 35; 36] generative adversarial networks,[37; 38] guided diffusion models,[39] or stochastic processes on molecular graph space.[40] While deep learning methods will likely continue leading molecule generation,[41] they may lack chemical interpretability or be less accurate when applied far from the training set.[42] We emphasize that no alternative to generative models is provided, but rather an exact sampling of distributions of chemical space in terms of distances. At best, this contribution aims at bridging the gap between purely data-driven unsupervised generative models and physics-inspired representations by combining exact feature vectors and Monte Carlo (MC) on chemical graphs. However, conceptually the approach may have the following advantages: (i) no reconstruction problem,[42] _i.e._ finding a valid molecular graph that corresponds to the optimized point in latent space; (ii) introducing constraints, such as the number of heavy atoms or types of allowed bonds, is trivial; (iii) representation vectors with interpretable features can be used. First, we introduce potential functions defined in chemical space and present details of the algorithm (Sec. II.1). Second, we discuss results based on the sampling of local chemical environments of molecules (Sec. III). Finally, we analyze the relationship between property values and representation distances (Sec. III.4). 
## II Methods ### Potentials in Chemical Space We study the topology of chemical space by examining sets of molecules of varying "closeness" to each other, generating distributions of molecular graphs satisfying distance conditions. The method applies to any metric, including rather sophisticated ones.[43; 44] Here we select the distance between two molecular graphs \(A\) and \(B\) as \[d(A,B)=||\mathbf{X}(A)-\mathbf{X}(B)||_{2}, \tag{1}\] where \(\mathbf{X}\) is a representation vector and \(||\ldots||_{2}\) is the Euclidean norm. Smaller values of \(d(A,B)\) mean greater similarity between \(A\) and \(B\). Although kernel functions also capture non-linear patterns, using distances rather than kernel elements obviates the need for selecting kernel-width parameters and enables immediate use of any accessible representation. We will focus on sets of molecules drawn from a distribution that is uniform over all molecules \(B\) such that \(\gamma\leq d(A_{\mathrm{c}},B)\leq\sigma\), where \(A_{\mathrm{c}}\) is a _central molecule_. This is achieved by a potential function defined in terms of the distance to the central molecule (from now on \(d\) for short) \[V(d)=\begin{cases}V_{0}(d-\gamma)^{2}&\text{if }d<\gamma\\ 0&\text{if }\gamma\leq d\leq\sigma\\ V_{0}(d-\sigma)^{2}&\text{if }d>\sigma\.\end{cases} \tag{2}\] Similarly to parallel tempering,[45; 46; 47] each replica is assigned an inverse temperature parameter \(\beta_{i}\), generating a trajectory whose probability density converges to the Boltzmann distribution with unnormalized distribution function \(\rho_{i}\) \[\rho_{i}(d)=\exp[-\beta_{i}V(d)]. \tag{3}\] The potential \(V(d)\) has a flat plateau, depicted in Fig. 1, with the resulting \(\rho_{i}(d)\) distributions being biased towards \(\gamma\leq d\leq\sigma\). The \(\beta_{i}\) include infinitely large values (see _greedy replicas_ in Ref. [20]) that can only accept trial moves to graphs with potential values smaller than or equal to the current one. Therefore, as the simulation progresses, the potential of each greedy replica decreases until the replica reaches the plateau. Once it is inside the plateau it can never leave it by definition. The presence of finite values of \(\beta_{i}\) (see _exploration replicas_ in Ref. [20]) ensures our MC simulations do not get stuck when moving between subsets of chemical space separated by barriers of \(V\). The \(\beta_{i}\) and curvature values \(V_{0}\) of the potential are listed in the Supplementary Information (SI). Here we study the following choices for representations: Figure 1: Two quadratic potential functions defined by Eq. (2). The blue potential with \(\gamma=0\) and \(\sigma=2\) is defined to sample molecules up to a distance of \(\sigma=2\). The orange curve with \(\gamma=2\) and \(\sigma=4\) is adapted to probe molecules in the distance interval \(2\leq d\leq 4\). 1. ECFP4[48] fingerprint, a group-based representation with bit resolution \(n_{\text{bits}}\); unless specified otherwise, \(n_{\text{bits}}=2048\). The diameter parameter, here four, is the maximum distance of neighborhoods of each atom. 2. Smooth overlap of atomic positions[49] (SOAP) with averaging over atomic sites,[50] for the remainder of the text referred to as SOAP. 3D conformations are generated with the Morfeus[51] package using the MMFF94 force field,[51; 52] as implemented[53] in RdKit,[54] and the recipe from Ref. [55]. Any method that can generate conformers from graphs[56] can be used. 
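For illustration, Eqs. (1)-(3) reduce to a few lines of Python. The sketch below is ours (the helper names are not part of the released code); passing `beta = np.inf` reproduces the greedy-replica behavior, assigning zero weight to any graph outside the plateau.

```
import numpy as np

def distance(x_a, x_b):
    # Eq. (1): Euclidean distance between two representation vectors
    return np.linalg.norm(np.asarray(x_a) - np.asarray(x_b))

def potential(d, gamma, sigma, v0=1.0):
    # Eq. (2): quadratic walls outside the flat plateau gamma <= d <= sigma
    if d < gamma:
        return v0 * (d - gamma) ** 2
    if d > sigma:
        return v0 * (d - sigma) ** 2
    return 0.0

def replica_weight(d, beta, gamma, sigma, v0=1.0):
    # Eq. (3): unnormalized Boltzmann factor exp(-beta * V(d));
    # on the plateau V = 0, so the weight is 1 for any beta
    v = potential(d, gamma, sigma, v0)
    return 1.0 if v == 0.0 else np.exp(-beta * v)
```

A Metropolis acceptance test based on the ratio of such weights for the current and proposed graphs then biases the sampled distribution towards the plateau.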
Conformer-averaging of the representation accounts for the variance of the distance within the classical Boltzmann ensemble at a given temperature.[29] We focus on physics-based representations,[3] but representations from unsupervised learning, such as SchNet,[11] may also be used. It is important to identify relevant distances within the chemical space of interest, as the same metric evaluated on different representations can differ significantly. For example, one SOAP distance unit may correspond to a change of conformation but could represent a change of constitution for ECFP4. Since the potential is based on an absolute measure of similarity and predefined representation functions, the relevant range of distances must be tailored to the specific chemical space of interest or machine learning application. For instance, comparing molecules of different sizes may yield larger distances in representation vectors than comparing molecules of the same size. Another example is how minor changes of the molecular structure corresponding to small differences in the representation can result in considerable differences in the property of interest, _e.g._ for the HOMO-LUMO gap, necessitating application-dependent adjustments.[22] ### ChemSpaceSampler code The Python code for the ChemSpaceSampler is available at [https://github.com/chemspacelab/mosaics](https://github.com/chemspacelab/mosaics). Notebooks with examples can be found at [https://github.com/chemspacelab/mosaics/tree/main/examples/05_chemspacesampler](https://github.com/chemspacelab/mosaics/tree/main/examples/05_chemspacesampler). As illustrated in Fig. 2, the usage steps are as follows: (1) define the central molecule, (2) the representation, (3) the potential type, and (4) constraints for the chemical space, such as permitted bonds and the number of heavy atoms, along with simulation details such as the number of MC steps, moves, and the set of \(\beta_{i}\). In practice, using the code is as simple as defining a set of parameters and calling the main function as follows:

```
params = {
    'min_d': 2.0,
    'max_d': 5.0,
    'Nsteps': 500,
    'possible_elements': ["C", "O", "N", "F"],
    'forbidden_bonds': [(8, 9), (8, 8), (9, 9), (7, 7)],
    'nhatoms_range': [1, 9],
    'betas': [8, 4, 2],
    'nBits': 2048,
    'rep_name': 'ECFP'
}
MOLS, D = chemspace_potentials.chemspacesampler_ECFP(
    smiles="CC(=O)OC1=CC=CC=C1C(=O)O",  # aspirin
    params=params)
```

The parameter dictionary sets the desired minimal and maximal distance of the constant-potential plateau, the number of MC steps, and the possible elements. It includes the option of excluding certain bond combinations. The chemical spaces used in this work were inspired by the QM9 dataset[57], which consists of 134k molecules containing up to 9 heavy atoms (C, O, N, and F). Unless specified otherwise, we will be considering the "QM9sub" chemical space, which is the set of molecules that contains up to 9 heavy atoms of the same elements but with bond constraints forbidding the covalent bonds O-F, O-O, F-F, and N-N, as the latter are associated with increased chemical reactivity. In addition, the number of bits \(n_{\text{bits}}=\) nBits, the inverse temperatures, and the representation type are defined. Finally, the simulation is initialized with a SMILES string, returning the encountered molecules and their distances from the central molecule. ## III Results ### Diversity of Chemical Space We sample the CCS of five randomly selected central molecules (s. Fig. 
2(a)) of which four are from the QM9sub space, with ECFP4 at \(n_{\text{bits}}=2048\) for \(\gamma=0\), \(\sigma=6\), see Eq. (2). In addition, we study aspirin, from QM13sub with 13 heavy atoms. A comparison between the QM9[57] dataset (grey) and the ChemSpaceSampler QM9sub results is shown as a two-dimensional UMAP[58] visualization in Fig. 2(b). A diverse chemical space is explored, spanning regions not covered by QM9.[57] Sampling the chemical neighborhood of only five distinct molecules is sufficient to span a chemical space much larger than the original QM9 dataset - even though the sampled QM9sub space is more restricted due to additional bond constraints. This demonstrates that even the use of central potentials imposing locality allows the generation of chemically diverse sets of molecules, creating molecules that fall distinctly outside of the training set. ### Galaxies We examine the chemical spaces surrounding the molecules from the previous section and perform a principal component analysis of the ECFP4 fingerprints for \(32\leq n_{\text{bits}}\leq 8192\). The number of heavy atoms is fixed to exactly six for benzene and thirteen for aspirin. Histograms for the density of the first two principal components of the representation vectors for benzene, aspirin, and nonane at \(n_{\mathrm{bits}}=512\) and \(8192\) are shown in Fig. 4. After applying a \(k\)-means clustering algorithm, the optimal number of clusters is estimated using the elbow method by studying the explained variation as a function of the number of clusters. Afterward, the most representative molecule for each sub-cluster (s. SI) was identified as the compound with the smallest distance to the \(k\)-means centers, cf. Fig. 4. For \(n_{\mathrm{bits}}=8192\), the chemical space around benzene and nonane shows distinct clusters. For benzene, these clusters are even completely disconnected, suggesting that the convex hull of all clusters contains regions with vastly different densities of valid molecular graphs. No such seemingly disconnected clusters were found for the larger aspirin molecule, indicating that the clustering is partially due to the upper bound on the number of heavy atoms. Next, we study the capability of ECFP4 to distinguish chemical graphs as a function of \(n_{\mathrm{bits}}\). Decreasing the resolution from \(n_{\mathrm{bits}}=8192\) to \(n_{\mathrm{bits}}=512\) results in smaller distances between cluster centers and the merging of previously distinct clusters; thus \(n_{\mathrm{bits}}\) affects the separation of different chemical environments. The number of clusters depends on an arbitrary threshold parameter for the elbow method and also on the chemical space under consideration, _i.e._ diverse chemical spaces may result in a larger number of clusters. Thus a more robust measure for the resolution of CCS should be defined. To remove any dependence on the number of clusters, we evaluate, as a function of fingerprint length, the average distance between the molecules closest to the center of each cluster [Eq. (4) below]. Figure 3: UMAP projection of the QM9sub chemical space explored by the ChemSpaceSampler. Sampling the local chemical space of five central molecules results in a diverse range of molecular structures, as illustrated by the dispersion of points throughout the plot: for instance, purple dots correspond to molecules found by ChemSpaceSampler using benzene as a central molecule. Grey crosses represent molecules from the QM9 dataset. Figure 2: ChemSpaceSampler: (1) central molecule, (2) 2D or 3D representations, and (3) potential function, s. Eq. (2), are defined. 
The (4) MC simulation is initialized for a given number of steps, temperatures for the replica exchanges, as well as permitted elements and bonds, and the number of heavy atoms. \[\bar{d}(n_{\rm bits})=\frac{1}{M}\sum_{l=1}^{M}\frac{1}{C^{l}}\sum_{k\neq s}^{C^{l}}d(A_{k}^{l},A_{s}^{l}) \tag{4}\] where \(M\) is the number of considered central molecules and \(l=1,...,M\) is the index of a central molecule. \(C^{l}\) is the number of clusters of molecules observed after running an MC simulation, \(k\) and \(s\) are indices of such clusters, and \(A_{k}^{l}\) is the molecule closest to the center of the cluster with index \(k\). Each two-dimensional PCA plot contains up to \(C^{l}\) clusters. Finally, the distances are averaged over all \(M\) considered molecules to ensure \(\bar{d}\) does not depend on the individual chemical diversity around each of the central molecules. Intuitively, \(\bar{d}(n_{\rm bits})\to 0\) corresponds to the merging of previously separated clusters, whereas a higher resolution leads to larger average distances between clusters. At \(n_{\rm bits}\approx 1024\), the average distance appears to saturate (cf. Fig. 5b), supporting the choice of the default value \(n_{\rm bits}=1024\) appearing in RdKit. The closest and farthest molecules for benzene and aspirin are shown in Fig. 5a. Closest point sampling for aspirin results in small modifications of the ester group, while the rest of the molecule is unaffected. For benzene, the closest points are hydrocarbons obtained by adding extra bonds to the central molecule, whereas at larger distances linear molecules and different compositions are found. Note that ECFP4 may result in similar or even completely identical distance values even if the molecular graphs differ substantially. Thus sampling at a given distance can help identify molecules at comparable distances yet possessing vastly distinct chemistries. Such cases are relevant if properties of interest change strongly between any two similar molecules and typically result in particularly poor ML predictions.[30] ### Molecules with predefined similarity The core assumption of any similarity-based model is that systems with similar representations have similar properties \(P\). We exploit this observation to generate molecules with similar representation vectors and properties. First, we select 52 QM9sub molecules as central molecules and perform \(N=5000\) MC steps for ECFP4 and \(N=2000\) for SOAP (the smaller number of MC steps is due to the relatively high cost of generating conformers), permitting up to nine heavy atoms. Finally, we evaluate the free energy of solvation \(P=G\) and band gap \(P=\varepsilon\) of the sampled compounds based on GFN2-xTB[59] calculations and an analytical linearized Poisson-Boltzmann model[60] to simulate the presence of water, using the same procedure as described in Ref. [20]. Figure 4: Chemical spaces with a fixed number of heavy atoms: benzene six, nonane nine, and aspirin thirteen. We show density plots of the first two principal components (PCA1 and PCA2) obtained from the sampled chemical space. The color gradient denotes the distance \(d\) (normalized by the maximum distance for each \(n_{\rm bits}\)) to the initial molecule in each fingerprint (FP) dimension. Central molecules shown in the bottom panes are represented as white stars in the PCA plots. The upper panel is for bit-length \(n_{\rm bits}=512\), the lower panel for \(n_{\rm bits}=8192\). Crosses within the visualizations indicate the positions of molecules closest to the respective cluster centers, all listed in the SI. 
Next, we perform a principal component analysis of the initial molecules and their sampled neighbors, as depicted in Fig. 6. As expected from the superior performance of SOAP and geometry-based molecular representations in most regression tasks, better clustering of sampled molecules around the initial molecules is observed compared to the ECFP4 fingerprint. Conversely, ECFP4 does not require chemical graphs to be geometrically constrained. The number of encountered molecules surrounding the central molecule is very sensitive to its chemical graph. This supports the observation of regions with low and high molecular densities and indicates distinct topologies depending on the location in chemical space. For example, with ECFP4 and ethanol, we identify 574 unique molecules in \(2.75\leq d_{\text{ECFP4}}\leq 3.25\), but no other molecule is found for NC1=NC(=O)N=C(N)N1 in the same interval (cf. \(\mathbf{C}_{6}\) in Fig. 7). By contrast, for SOAP and the interval \(50\leq d_{\text{SOAP}}\leq 60\), no other molecule is found for ethanol, but for NC1=NC(=O)N=C(N)N1 we find 21 molecules, suggesting that the density of molecules around central molecules is highly dependent on the representation type. A more quantitative comparison of the density is challenging, as there seems to be no straightforward way to compare distances between representations. Systematic generation of molecules far from an initial molecule may find application for farthest point sampling, often used for active learning strategies.[61; 62] Conversely, generating molecules close to a query molecule may help identify the most relevant neighbors in chemical space for query-specific ML predictions.[63; 64] ### Property derivatives in chemical space In the following we use atomic units. Given a central molecule \(l\), one could consider the average free energy of solvation \(\bar{G}_{l}(d)\) and band gap \(\bar{\varepsilon}_{l}(d)\) of molecules encountered at a distance \(d\), _i.e._ average property values at \(d\). However, an average over an exact distance can be ill-defined due to the discreteness of chemical graph space and would be impractical to evaluate with MC methods even if it were not. Instead, we consider averages over shells because this allows numerical evaluation. Therefore we introduce binning and finite distance intervals with outer bin positions given by \[d^{b}_{\text{ECFP4}}=\{1.25+b\cdot 0.5\}\, \tag{5}\] where \(b\in\{0,\ldots,10\}\). This results in discrete _shell_ intervals of \[I_{\text{ECFP4}}=\{[0.75,1.25],\underbrace{(1.25,1.75]}_{b=1},...,(5.75,6.25]\} \tag{6}\] for ECFP4 and, for SOAP, \[d^{b}_{\text{SOAP}}=\{50+b\cdot 10\}, \tag{7}\] where \(b\in\{0,\ldots,10\}\). The shell intervals \(I_{\text{SOAP}}\) are defined analogously based on preliminary simulations. The bin centers are chosen according to meaningful distances for each representation. For instance, at a SOAP distance of \(d=50\), only a single bond may differ from the central molecule, whereas at \(d=150\) encountered molecules will hardly resemble the central molecule. Note that the distances do not start at zero as we could not find any molecules for smaller distances. 
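For concreteness, averaging a property over such shells amounts to simple binning of the sampled distances. The following is an illustrative sketch (the helper name and the NaN convention for empty shells are ours), anticipating the shell averages defined in Eq. (8) below and assuming uniformly spaced outer edges as in Eqs. (5) and (7).

```
import numpy as np

def shell_averages(distances, values, outer_edges):
    # Average property values over distance shells, cf. Eq. (8);
    # outer_edges follows Eq. (5), e.g. [1.25 + 0.5 * b for b in range(11)]
    distances = np.asarray(distances)
    values = np.asarray(values)
    averages = np.full(len(outer_edges), np.nan)
    lower = 2 * outer_edges[0] - outer_edges[1]  # inner edge of shell b = 0
    for b, upper in enumerate(outer_edges):
        in_shell = (distances > lower) & (distances <= upper)
        if in_shell.any():
            averages[b] = values[in_shell].mean()
        lower = upper
    return averages
```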
Subsequently, for all 52 central molecules \(l\) considered, the average of both properties within each finite shell interval is computed and denoted \(\bar{P}(d)\) (the property \(P\) can be \(G\) or \(\varepsilon\)); _e.g._, \(\bar{G}(d=1.25)\) is the average value of the free energy of solvation over all molecules in the interval \([0.75,1.25]\) for ECFP4. Figure 5: (a): closest and farthest molecules encountered for benzene and aspirin, respectively, using ECFP4 at \(n_{\text{bits}}=8192\). (b): mean distance \(\bar{d}\) of the clusters averaged over five different molecules, cf. Eq. (4). More generally, we define shell averages as \[\bar{P}_{l}(d^{b})=\frac{1}{M_{b}}\sum_{j=1}^{M_{b}}P_{j}, \tag{8}\] where \(M_{b}\) is the number of molecules in shell \(b\). These are molecules whose distance to the central molecule \(l\) is between \(d^{b-1}\) and \(d^{b}\). The indices \(j\in\{1,...,M_{b}\}\) enumerate these molecules. The property value of the central molecule \(l\) is \(\bar{P}_{l}(0)\). The definition of shell averages is also depicted in Fig. 8. For simplicity, the shell index \(b\) is dropped from here on and \(d\) always represents the upper shell boundary. Considering that the number of encountered molecules grows quickly with distance, we introduce a cutoff, evaluating only up to 300 randomly selected molecules per shell to limit computational costs. As shown in Fig. 9a, for small distances, \(d_{\rm SOAP}\leq 60\), property values of new molecules are similar to those of the two central molecules. Increasing the distance further leads to a larger drift in the property values - consistent with the assumption of regression models that small distances \(d\) between training and test molecules correlate to similar \(P\). Collecting the first two principal components of all encountered molecules resulting from SOAP simulations shows how the HOMO-LUMO gap \(\bar{\varepsilon}\) (Fig. 10a) changes between molecules. The corresponding figure for the free energy \(\bar{G}\) is shown in SI Fig. 2. Both \(\bar{G}(d)\) and \(\bar{\varepsilon}(d)\) vary substantially from molecule to molecule, as shown in Fig. 9b, making it challenging to extract general trends for \(\bar{P}(d)\). Still, we fit linear functions to \(\bar{P}_{l}(d)\) to approximate average trends of properties through chemical space for each molecule \(l\) as follows, \[\bar{P}_{l}(d)\approx m_{l}^{P}\cdot d+c_{l}^{P}, \tag{9}\] Figure 6: ChemSpaceSampler with ECFP4 (top) and SOAP [49] (bottom) representations: comparing the sampled chemical space projected onto principal components (PC) in two dimensions along the axes PC1 and PC2 with maximal variance. Large marker symbols represent central molecules used in the potential function, Eq. (2). Small opaque markers with matching colors and styles denote molecules obtained after sampling. (a) distance intervals at small representation distances: \(2.75\leq d_{\rm ECFP4}\leq 3.25\) for ECFP4 and \(50\leq d_{\rm SOAP}\leq 60\) for SOAP; (b) same at larger distances: \(4.00\leq d_{\rm ECFP4}\leq 4.25\) and \(100\leq d_{\rm SOAP}\leq 110\). Figure 7: Identified central molecules \({\bf C}_{l}\) with extremal radial average property slope values, cf. Tab. 1. where \(m_{l}^{P}\) and \(c_{l}^{P}\) are the fitted slope and intercept. After extracting \(\bar{\varepsilon}_{l}(d)\) and \(\bar{G}_{l}(d)\) for each molecule and each \(d\), we compute the slope values \(m_{l}^{G}\) and \(m_{l}^{\varepsilon}\) by regressing Eq. (9), using the outer shell boundaries as \(d\) and the 10 shell averages \(\bar{P}_{l}(d)\). 
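The fit of Eq. (9) is an ordinary least-squares problem; a minimal sketch (illustrative only, skipping empty shells) reads:

```
import numpy as np

def radial_slope(shell_d, shell_p):
    # Fit Eq. (9): P_bar_l(d) ~ m_l * d + c_l over the 10 shells;
    # returns the slope m_l and intercept c_l
    shell_d = np.asarray(shell_d, dtype=float)
    shell_p = np.asarray(shell_p, dtype=float)
    mask = ~np.isnan(shell_p)
    m, c = np.polyfit(shell_d[mask], shell_p[mask], 1)
    return m, c
```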
Note that \(c_{l}^{P}\) is determined by the value of the central molecule. Scatter plots displaying the slope values \(m_{l}^{P}\), normalized by the average pairwise distance of all central molecules \(\langle d_{ks}\rangle\), against the property values of the central molecules (\(d=0\)) are presented in Fig. 11a,b. A near linear correlation between the radially averaged property derivative \(m_{l}^{P}\) and the central property value is found, \(m_{l}^{P}\propto\bar{P}_{l}(0)\). For instance, the extracted slope values for the solvation free energies \(m_{l}^{G}\) correlate linearly with the initial value \(G_{l}(0)\) of the respective central molecule. Average property values increase (decrease) with increasing shell distance for molecules from the tails of the property distribution corresponding to smaller (larger) values. Radial average property values of sampled molecules from initial compounds with more negative solvation energies than most other molecules drift towards less soluble compounds, and _vice versa_. The same trend applies to the HOMO-LUMO gap \(\varepsilon\). We also find that central molecules with average property values have a near-zero slope, indicated by the vertical and horizontal lines in Fig. 11a,b respectively. To investigate whether the linear trend for both band gap and solvation energy is partially due to a correlation between the two quantities, we also show a scatter plot of both in Fig. 10b, revealing that the linear slope holds regardless, since the properties themselves do not correlate. Given \(m_{l}^{P}\propto\bar{P}_{l}(0)\), an upper bound of the average property slope \(\bar{P}^{\prime}=m^{P}\) in a chemical space can be estimated by the compound with the largest property value, \[\max\bar{P}^{\prime}(d)=\max m^{P}=\max_{l\in\mathrm{CCS}}\left[\tilde{m}^{P}\bar{P}_{l}(0)+\tilde{c}^{P}\right], \tag{10}\] where \(\tilde{m}^{P}\) and \(\tilde{c}^{P}\) are the linear slope and intercept extracted from the correlation plots in Fig. 11a,b. The most negative slope, on the other hand, is defined by the molecule with the minimal property value. The regression constants \(\tilde{m}^{P}\) and \(\tilde{c}^{P}\), as well as the average, minimal, and maximal slope values and Spearman's rank correlation coefficient \(r_{s}\), measuring the monotonic correlation between the slopes \(m_{l}^{P}\) and the central property values \(\bar{P}_{l}(0)\), are listed for both properties and representations in Tab. 1. The identified extremal molecules are shown in Fig. 7. As expected, \(\tilde{m}^{P}\) and \(\tilde{c}^{P}\) depend on the property, representation, and permitted CCS. For both representations, the magnitude of the maximal slope for \(\bar{G}\) corresponds to the same compound \(\mathbf{C}_{6}\) (s. Fig. 7), which is also the most soluble in QM9. The correlation coefficient \(r_{s}\) for the SOAP representation is smaller in magnitude than that of the ECFP4 fingerprint. This is evident from the values recorded for \(G\), which are \(-0.788\) and \(-0.976\) respectively, as shown in Tab. 1. We propose two potential explanations for the higher correlation coefficient of ECFP4. First, we managed to perform more MC steps for the ECFP4 simulations, potentially making them better converged. Second, the conformer sampling required for the SOAP workflow introduces statistical noise due to a degree of randomness in generating the conformers used in the SOAP representation. Since Eq. 
(10) describes a relation over all of the permitted chemical space, even approximate values for the maximal \(m^{P}\) may help design molecular representations or tune model flexibility by taking into account distances between data points and the expected variation in the target property. As an illustration, we show the absolute property deviation from the initial values in Fig. 9a,b: too small a slope of the curve under the 95\({}^{\mathrm{th}}\) percentile means weak correlation with the target property. Too large slopes indicate discontinuities of the representation with respect to the property, which are associated with larger errors.[22] A good representation should find a compromise between both. By calculating the derivative of the property, machine learning models could be fine-tuned for each property, enhancing a model's performance. Figure 8: Illustration of the local chemical environment of ethanol obtained from simulations. The first shell is defined up to a SOAP distance of \(d_{1}\), and a second shell contains molecules \(k\) in the interval \(d_{1}\leq d(0,k)\leq d_{2}\). Evaluating the encountered molecules within different shells allows extracting shell averages \(\bar{P}\), cf. Eq. (8). ## IV Conclusion An MC-based method was introduced to identify shortcomings of molecular representations designed for supervised learning tasks. It enables exact sampling of distributions of molecules defined in terms of distances within the original feature space with greater explainability than deep learning approaches. Access to molecules with given similarity may help design new feature-based representation vectors by tailoring the average property change with respect to representation distance (s. Fig. 9a,b). The sampling probabilities can be accessed by a simple comparison of molecular features. This could be particularly useful for interpretable descriptors commonly used in cheminformatics. A near linear trend between the initial value of the central molecule and the radial slope, \(m_{l}^{P}\propto\bar{P}_{l}(0)\), was found and allows estimating a lower bound for the maximal slope, given by the molecule with the largest property value [s. Eq. (10)]. We hypothesize this trend is due to the difference between the initial molecule's property value and the average property value, but this observation warrants further investigation. Figure 10: (a): first two principal components (PC) of central molecules, shown with markers, as well as sampled molecules visited in all shells \(d\in\left[50,150\right]\), represented as dots colored by individual HOMO-LUMO gap values \(\varepsilon\). (b): scatter plot of solvation energy \(G\) against band gap \(\varepsilon\) for all molecules obtained by sampling. In both cases SOAP was used. Figure 9: (a): radially averaged solvation energy \(\bar{G}(d)\) and HOMO-LUMO gap \(\bar{\varepsilon}(d)\) as a function of SOAP distance \(d\) for the molecules shown in the bottom panel. (b): absolute values of radially averaged property shifts \(|\bar{P}(d)-P(0)|\) compared to the values \(P(0)\) of several initial molecules (opaque) for increasing ECFP4 distances \(d\). Black dashed lines show the respective 95\({}^{\text{th}}\) percentiles. The blue line with error bars shows the average shift over all considered molecules at each \(d\). Top: \(P=G\), solvation energy; bottom: \(P=\varepsilon\), HOMO-LUMO gap. 
Several aspects could be investigated building on this work: (i) incorporating control for synthetic accessibility [65] into the generation of molecules will further enhance its utility. (ii) Identifying pairs of molecules that are close in representation but have significantly different properties, sometimes referred to as activity cliffs [30], presents a challenge for existing predictive models - in particular for ligand-based drug-design applications of ML. (iii) Investigating the effect of different choices of metrics on the structure of CCS, for instance, Tanimoto [66] or Wasserstein [67]. Finally, the method presented here may also serve as a tool to invert molecular representations, with applications to molecular property optimization. ## V Code availability The Python code and graphical user interface for the ChemSpaceSampler are available at [https://github.com/chemspacelab/mosaics](https://github.com/chemspacelab/mosaics). Notebooks with examples can be found at [https://github.com/chemspacelab/mosaics/tree/main/examples/05_chemspacesampler](https://github.com/chemspacelab/mosaics/tree/main/examples/05_chemspacesampler). ## VI Acknowledgments This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 957189. Obtaining the presented computational results has been facilitated using the queueing system implemented at [http://leruli.com](http://leruli.com). J.W. acknowledges support from the Faculty of Physics and supervision by Anatole v. Lilienfeld and C. Dellago at the University of Vienna. Figure 11: Slopes \(m_{l}^{P}/\langle d_{ks}\rangle\) of average changes of a property [(a) free energy of solvation \(G\) or (b) band gap \(\varepsilon\)] with respect to \(d\) for molecules surrounding a central molecule using ECFP4 (SOAP shown in SI Fig. 1, with values given in Tab. 1). The slope values are scaled by the average pairwise distance \(\langle d_{ks}\rangle\) of all 52 molecules. Each dot corresponds to a molecule \(l\). The y-axis displays the slope of the linear function of Eq. (9) fitted for each property; the x-axis represents the central molecule's property. Molecules with the largest and smallest slopes are shown in the inset. Top: solvation energy \(G\); bottom: HOMO-LUMO gap. The vertical black dashed line indicates the property average over all 52 central molecules and the horizontal line indicates zero slope. \begin{table} \begin{tabular}{l l l l l l l l l} & \(P\) & Min mol & Max mol & min \(m^{P}\) & max \(m^{P}\) & \(\tilde{m}^{P}\) & \(\tilde{c}^{P}\) & \(r_{s}\) \\ \hline ECFP4 & \(G\) & \(\mathbf{C}_{1}\) & \(\mathbf{C}_{6}\) & \(-2.02\times 10^{-3}\) & \(6.42\times 10^{-3}\) & \(-2.24\times 10^{-1}\) & \(1.94\times 10^{-3}\) & -0.976 \\ & \(\varepsilon\) & \(\mathbf{C}_{1}\) & \(\mathbf{C}_{5}\) & \(-8.04\times 10^{-2}\) & \(1.58\times 10^{-2}\) & \(2.02\times 10^{-1}\) & \(1.94\times 10^{-2}\) & -0.988 \\ \hline SOAP & \(G\) & \(\mathbf{C}_{2}\) & \(\mathbf{C}_{6}\) & \(-7.32\times 10^{-5}\) & \(8.74\times 10^{-5}\) & \(-5.24\times 10^{-3}\) & \(4.66\times 10^{-5}\) & -0.788 \\ & \(\varepsilon\) & \(\mathbf{C}_{3}\) & \(\mathbf{C}_{4}\) & \(-3.41\times 10^{-3}\) & \(4.99\times 10^{-4}\) & \(-5.46\times 10^{-3}\) & \(-7.47\times 10^{-4}\) & -0.874 \\ \end{tabular} \end{table} Table 1: Minimal and maximal slope \(m^{P}\) of the average radial property derivative in chemical space, the corresponding molecules \(\mathbf{C}_{i}\) (shown in Fig. 7) identified among 52 QM9sub molecules, as well as the Spearman correlation coefficient. 
Results are given for both molecular representations, ECFP4 [48] as well as SOAP [49]. The authors thank Michael Sahre and Alexandre Schoepfer for their helpful discussions and feedback. ## VII Supplementary Information The following Supplementary Information contains a list of the selected inverse temperature values \(\beta_{i}\), the SOAP representation figure corresponding to Fig. 11, as well as the PCA corresponding to the solvation free energies not shown in Fig. 10a.
2309.13872
Attention and Pooling based Sigmoid Colon Segmentation in 3D CT images
Segmentation of the sigmoid colon is a crucial aspect of treating diverticulitis. It enables accurate identification and localisation of inflammation, which in turn helps healthcare professionals make informed decisions about the most appropriate treatment options. This research presents a novel deep learning architecture for segmenting the sigmoid colon from Computed Tomography (CT) images using a modified 3D U-Net architecture. Several variations of the 3D U-Net model with modified hyper-parameters were examined in this study. Pyramid pooling (PyP) and channel-spatial Squeeze and Excitation (csSE) were also used to improve the model performance. The networks were trained using manually annotated sigmoid colon. A five-fold cross-validation procedure was used on a test dataset to evaluate the network's performance. As indicated by the maximum Dice similarity coefficient (DSC) of 56.92+/-1.42%, the application of PyP and csSE techniques improves segmentation precision. We explored ensemble methods including averaging, weighted averaging, majority voting, and max ensemble. The results show that average and majority voting approaches with a threshold value of 0.5 and consistent weight distribution among the top three models produced comparable and optimal results with DSC of 88.11+/-3.52%. The results indicate that the application of a modified 3D U-Net architecture is effective for segmenting the sigmoid colon in Computed Tomography (CT) images. In addition, the study highlights the potential benefits of integrating ensemble methods to improve segmentation precision.
Md Akizur Rahman, Sonit Singh, Kuruparan Shanmugalingam, Sankaran Iyer, Alan Blair, Praveen Ravindran, Arcot Sowmya
2023-09-25T04:52:46Z
http://arxiv.org/abs/2309.13872v1
# Attention and Pooling based Sigmoid Colon Segmentation in 3D CT images ###### Abstract Segmentation of the sigmoid colon is a crucial aspect of treating diverticulitis. It enables accurate identification and localisation of inflammation, which in turn helps healthcare professionals make informed decisions about the most appropriate treatment options. This research presents a novel deep learning architecture for segmenting the sigmoid colon from Computed Tomography (CT) images using a modified 3D U-Net architecture. Several variations of the 3D U-Net model with modified hyper-parameters were examined in this study. Pyramid pooling (PyP) and channel-spatial Squeeze and Excitation (csSE) were also used to improve the model performance. The networks were trained using manually annotated sigmoid colon data. A five-fold cross-validation procedure was used on a test dataset to evaluate the network's performance. As indicated by the maximum Dice similarity coefficient (DSC) of \(56.92\pm 1.42\)%, the application of PyP and csSE techniques improves segmentation precision. We explored ensemble methods including averaging, weighted averaging, majority voting, and max ensemble. The results show that average and majority voting approaches with a threshold value of 0.5 and consistent weight distribution among the top three models produced comparable and optimal results with DSC of \(88.11\pm 3.52\)%. The results indicate that the application of a modified 3D U-Net architecture is effective for segmenting the sigmoid colon in Computed Tomography (CT) images. In addition, the study highlights the potential benefits of integrating ensemble methods to improve segmentation precision. Image segmentation, deep learning, sigmoid colon segmentation, Computed Tomography, diverticulitis. ## I Introduction The _sigmoid colon_ is a segment of the large intestine that is situated in the lower left region of the abdomen. Abnormalities in the sigmoid colon, such as inflammation or tumours, can result in serious health complications, including diverticulitis and colon cancer. Timely identification and management of these ailments can potentially save lives. Diverticular diseases affect the colon and encompass a range of clinical conditions, from the mere presence of diverticula to complicated diverticulitis [1]. Diverticula are small pouches that protrude from the inner lining of the digestive system. Diverticulitis occurs when the pouches become infected. Developing diverticulitis is associated with certain risk factors such as a diet that is high in fat and low in fibre, as well as low levels of physical activity. It is important to note that although these factors are linked with diverticulitis, not everyone who has these risk factors will necessarily develop the condition. The presence of air bubbles and gas outside the sigmoid colon, along with diverticulitis and bleeding, is often an indication of a more complex and severe medical condition. In such cases, the diverticulitis has likely progressed to an advanced stage and may result in complications. Figure 1 shows the colon anatomy with diverticulitis and its features on the outside of the sigmoid colon. Medical imaging techniques, such as Computed Tomography (CT) scans, are frequently utilised to detect abnormalities in the colon [3, 4]. Manually segmenting the sigmoid colon in medical images is a tedious and time-consuming process that demands specialised expertise. 
The precise segmentation of the sigmoid colon can aid in the detection of diverticula by determining their existence, position, and extent. This aids in the evaluation of inflammation, distinguishing it from other medical conditions, devising a treatment plan, and tracking the advancement of the disease. Automated segmentation of the sigmoid colon can be utilised as an alternative to the arduous and time-consuming manual task. Although research on sigmoid colon segmentation is lacking [4], recent studies have exhibited encouraging outcomes through the utilisation of deep learning methodologies, specifically Convolutional Neural Networks (CNNs), for medical image segmentation [5, 6]. Fig. 1: Colon anatomy and diverticulitis. Figure taken from [2]. The field of computer-aided diagnosis is gaining popularity due to advancements in medical imaging and deep learning. The focus is on creating systems that can precisely segment various organs from medical images. According to recent studies [7, 8], automated segmentation systems possess the capability to improve diagnostic precision, decrease the workload on medical professionals, and ultimately enhance patient outcomes. The development of deep learning-based sigmoid colon segmentation has encountered challenges despite some preliminary efforts [4]. Managing variations in the shape, size, multi-curve structure, and location of the sigmoid colon between patients, as well as resolving noise and artefacts in medical images, are among these challenges. To address such barriers, developing sophisticated and robust deep-learning models capable of segmenting the sigmoid colon in medical images is necessary. This will result in improved patient diagnosis and treatment outcomes. This study introduces an innovative deep learning architecture to segment the sigmoid colon. The architecture was built using a 3D U-Net with Pyramid pooling (PyP) and channel-spatial Squeeze and Excitation (csSE), called PyP3DUcsSENet. An overview of the approach is shown in Figure 2. This research aims to evaluate and compare the segmentation performance of numerous 3D U-Net variants on 3D Computed Tomography (CT) images of the sigmoid colon. The proposed network was evaluated using a publicly available dataset of 3D CT images of colon cancer. A comparative analysis was conducted to assess the performance of the proposed network in comparison to several variants of 3D U-Net. The results suggest that the PyP3DUcsSENet network is an efficient approach for segmenting the sigmoid colon, demonstrating superior precision and efficiency in comparison to alternative variants. The ensemble results further indicate that ensembling is an effective approach for improving sigmoid colon segmentation performance. The rest of the paper is organised as follows. Section II provides a comprehensive overview of the literature on medical image segmentation and colon segmentation. Section III describes the employed materials and methods, including a thorough overview of the dataset, the proposed PyP3DUcsSENet architecture, and the ensemble methods. Section IV details the experimental setup. Section V presents the study's findings and evaluates the performance of various variants of the 3D U-Net model, PyP3DUcsSENet, and the ensemble methodologies. Section VI discusses the findings and their implications, followed by recommendations for future research. 
Finally, Section VII summarises the paper's essential contributions and potential applications of this strategy. ## II Related Work Medical image analysis requires accurate segmentation of various organs for diagnosis, treatment planning, and disease monitoring. Various techniques, such as patch-based approaches [9], multi-atlas techniques [10], and rule-based techniques [11, 12], have been proposed for organ segmentation. Although these methods have demonstrated satisfactory performance, segmenting extensive anatomical structures remains challenging. Deep learning techniques are gaining popularity for medical image segmentation as a result of their superior outputs and efficacy [13, 14, 15]. Using deep learning techniques, various organs including the liver [16, 17, 18, 19, 20, 21], lung [21, 22], pancreas [23], prostate [24, 25], and multiple organs [26, 27, 28, 29, 30, 31, 32, 33, 5, 34] have been successfully segmented. A comprehensive analysis of colon segmentation techniques was performed, encompassing approaches that are not specific to diverticular disease. A method for colon segmentation was proposed in [35]. The objective of the method is to identify polyps or growths and remove opacified fluid for virtual colonoscopy. The segmentation of air pockets was achieved through the utilisation of a 3D Seeded Region Growing methodology. The seed was positioned in the axial slice that encompasses the rectum. The region growing process was executed using 6-connected neighbourhood connectivity. The segmentation of opacified fluid portions was achieved through the utilisation of fuzzy connectedness. The 3D rendering of the segmented colon was achieved by concatenating the segmented colon and opacified fluid. The study incorporated a total of 15 datasets, consisting of 7 datasets obtained from The Cancer Imaging Archive (TCIA) [36] and 8 real-time datasets sourced from the Imaging Centre, Coimbatore. The method attained a 98.73% accuracy rate, indicating its efficacy for colon segmentation in virtual colonoscopy. In their study, Guachi et al. [3] presented an automated technique for colon tissue segmentation. The technique is based on binary classification and utilises the LeNet-5 network, which was initially developed for handwritten digit recognition. The authors utilised this technique to detect colon tissue at every individual pixel location. A CNN was trained using patches extracted from a dataset of CT images consisting of 100 slices. The proposed LeNet-5-based architecture comprises convolution layers, ReLU activation functions, pooling layers, and fully connected layers. The method attained a colon tissue segmentation accuracy of \(97.83\%\), indicating that the proposed approach is effective for colon tissue segmentation. Fig. 2: Block diagram showing sigmoid colon segmentation. In their study, Gonzalez et al. [4] presented an iterative 2.5D deep learning methodology for the segmentation of the sigmoid colon. The approach employed a U-Net-like network structure with 3D convolutional layers and down-sampling via max-pooling layers to process inputs slice by slice. Skip connections were utilised to facilitate the exchange of information between the descending and ascending branches of the network. 
The proposed methodology demonstrated encouraging results in both scenarios, i.e., with and without prior knowledge of other organs, for the automated segmentation of the sigmoid colon in CT images intended for radiotherapy treatment planning. However, the report lacks adequate information regarding the proposed architecture, impeding assessment of the approach's efficacy and replication of the outcomes. The reviewed studies indicate that deep learning, specifically 3D U-Net variants, has the capability to perform precise and effective segmentation of medical images. Although 3D U-Net has demonstrated exceptional performance in medical image segmentation, certain issues remain that necessitate further consideration. The complicated shapes of certain organs and their closeness to other organs continue to be challenging issues to solve. In addition, the presence of notable differences in the size and structure of organs may affect the precision of segmentation. Furthermore, the segmentation performance for small organs needs to be improved. It is recommended that future research endeavours focus on addressing the challenges associated with 3D U-Net in medical image segmentation and developing novel techniques to improve its overall performance. ## III Materials and Methods We describe the dataset and the different variants of the 3D U-Net model employed in this study. Specifically, we have made modifications to the network hyper-parameters and incorporated additional modules, including Pyramid pooling (PyP) [25] and channel-spatial Squeeze and Excitation (csSE) [37], to enhance the performance of 3D U-Net on sigmoid colon segmentation. Additionally, we provide comprehensive information on the ensemble techniques employed in this study. ### _Dataset_ For this study, we used a publicly available dataset of CT scans of colon cancer, collected from the Medical Segmentation Decathlon challenge [38]. At Memorial Sloan Kettering Cancer Centre in New York, \(95\) CT scans were collected from patients undergoing primary colon cancer resection. The scans were taken using an X-ray tube operating at \(100-140\) kVp for \(500-1782\) ms of exposure time at a current of \(100-752\) mA. The reconstructed scans have diameters of between \(274\) and \(5000\) millimetres and slice thicknesses of between \(1\) and \(7.5\) millimetres. Although most of the cases in the dataset are those of colon cancer, we consider it a good starting point for our research on sigmoid colon segmentation in diverticulitis, since the two diseases share many characteristics. Once the CT scans were taken, the sigmoid colon was annotated by hand in a 3D viewer. A colorectal expert from Liverpool Hospital checked the annotations for accuracy, and an example of this data is shown in Figure 3. ### _3D U-Net with Pyramid Pooling and Channel-Spatial Squeeze and Excitation (PyP3DUcsSENet)_ In the field of image segmentation, the 3D U-Net, a CNN architecture, has seen extensive application [3, 4]. It is an extension of the 2D U-Net framework, optimised for 3D volumes. The network consists of two parts: an encoder and a decoder. Multiple convolutional layers are used in the encoder, followed by a max pooling operation to decrease the input's spatial dimensions while simultaneously increasing the number of feature channels. To capture complex feature details, this process is repeated several times. 
In contrast, the decoder is built using up-convolutional layers whose outputs are concatenated with the corresponding feature maps from the encoder. The up-convolutional layers increase the spatial dimensions of the input while decreasing the number of feature channels. This allows the network to maintain the high-level characteristics learnt by the encoder while reconstructing the input image at its original resolution. As an efficient deep learning architecture for 3D medical image segmentation, the 3D U-Net model has shown state-of-the-art performance on several datasets. In this study, we started with the fundamental 3D U-Net and investigated various modifications to its hyperparameters, including the number of filters in each layer, kernel size, dropout rate, and batch normalisation. In addition, we implemented two techniques known as Pyramid pooling (PyP) and channel-spatial Squeeze and Excitation (csSE) to boost performance. PyP is a technique that captures features at varying resolutions by pooling at multiple scales, allowing the network to extract finer details from the input images. csSE, on the other hand, is a technique that improves the model's spatial and channel-wise attention. It enables the model to selectively emphasise the most pertinent features while suppressing the less significant ones, resulting in improved performance. Fig. 3: (a) Original abdominal CT slice. (b) Sigmoid colon annotation. (c) Ground-truth (annotated) mask. This study achieved enhanced segmentation results for the sigmoid colon data by implementing various improvements and strategies in the 3D U-Net model. The integration of the csSE and PyP techniques significantly contributed to the improved performance. Figure 4 shows the modified 3D U-Net architecture incorporating these enhancements. ### _Ensemble methods_ This investigation employed ensemble techniques, wherein the three highest-performing 3D U-Net variants were utilised to improve the precision of 3D CT image segmentation of the sigmoid colon. The objective was to combine the predictions of multiple models to improve upon the outcome of any individual model. This study conducted tests using the Average [39], Weighted Average [40], Majority Voting [41], and Max Ensemble [41] techniques, for a total of four ensemble techniques. In the Average Ensemble, the final segmentation was obtained by averaging the SoftMax values produced by each model on a given image. To create the Weighted Average Ensemble, we first assigned a weight to each model's SoftMax output and then averaged the results. We utilised model weights of \(0.33\), \(0.33\), and \(0.34\) for the top three models, respectively, based on their performance. In the Majority Voting Ensemble, we utilised the predicted binary masks from each model and obtained the final segmentation by assigning each voxel the label with the greatest number of votes. Finally, the Max Ensemble takes the maximum of the SoftMax output values for each voxel across all models to produce the segmentation. ## IV Experimental Setup In this work, we evaluated various 3D U-Net architectures and also introduced a novel architecture called PyP3DUcsSENet. In the latter, each layer of the analysis path includes consecutive \(3\times 3\times 3\) convolutions followed by a ReLU activation function and a \(2\times 2\times 2\) max pooling layer with strides of two in each dimension. The filters used range from \(32\) to \(512\). 
Similarly, in the synthesis path, each layer consists of a \(2\times 2\times 2\) up-convolution, followed by \(3\times 3\times 3\) convolutions and a ReLU activation function. For the purpose of transferring high-resolution features from the analysis path to the synthesis path, we utilised concatenation connections. In addition, to avoid overfitting, a dropout layer with a probability of \(0.4\) was included after each ReLU in every layer. In the last layer, we used a \(1\times 1\times 1\) convolution along with the sigmoid activation function to decrease the number of output channels to one. Fig. 4: Architecture of the 3D U-Net with Pyramid Pooling and Channel-Spatial Squeeze and Excitation (PyP3DUcsSENet). The number of channels is denoted in each box; the final number corresponds to the total number of labels. We utilised the Adam optimizer with a learning rate of \(1e-4\) for all of the 3D U-Net variants and \(1e-6\) for PyP3DUcsSENet. The momentum was set at \(0.90\), and we ran the algorithm for \(100\) iterations. During training, we used the Dice loss as our loss function, and the DSC metric was utilised to assess the final predicted segmentation outcomes. Each of the 3D U-Net architectures was implemented in Python using the Keras framework, a high-level API for TensorFlow 2.1.0. These studies were carried out on NCI Gadi, the most powerful supercomputer in Australia. This machine provided access to 24 high-performance CPUs and 2 GPUs (Tesla V100, each with 32 GB VRAM), together with 240 GB of memory. ## V Experimental Results A total of \(80\) annotated sigmoid colon data samples were utilised, which were obtained from the colon cancer dataset. Five-fold cross-validation was utilised for the segmentation procedure, using \(64\) training samples and \(16\) test samples in each fold. The experimental outcomes of the six different models trained for sigmoid colon segmentation in 3D CT images are presented in Table I, along with the average DSC values obtained from 5-fold cross-validation. With DSC scores of \(55.33\pm 1.79(\%)\) and \(55.61\pm 1.70(\%)\), respectively, the 3D Res-U-Net and 3D Dense U-Net models outperformed the other fundamental 3D U-Net versions. A DSC of \(56.92\pm 1.42(\%)\) was attained using the proposed PyP3DUcsSENet model, which significantly enhanced performance. Interestingly, adding the PyP and csSE modules did not improve the other architectures; only the fundamental 3D U-Net model showed a modest increase in DSC. The results of the ensemble methods are recorded in Table II. Based on the DSC scores, we can see that the Average Ensemble and Majority Voting Ensemble techniques performed best, with an average DSC score of \(88.11\pm 3.52(\%)\). The Weighted Average Ensemble technique demonstrated favourable performance, exhibiting an average DSC score of \(88.00\pm 3.61(\%)\). In contrast, the Max Ensemble technique exhibited notably inferior performance, as evidenced by an average DSC score of \(78.58\pm 4.22(\%)\). This outcome suggests that the Max Ensemble technique may not be an appropriate approach for this particular task. These results imply that utilising the top three performing models enhances the accuracy of sigmoid colon segmentation for the first three techniques. 
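To make the four fusion rules and the evaluation metric concrete, a minimal numpy sketch follows; the helper names and array shapes are our own illustration rather than the code used in this study, assuming each model outputs a per-voxel foreground probability volume.

```
import numpy as np

def fuse(probs, method="average", weights=None, threshold=0.5):
    # probs: (n_models, D, H, W) foreground probabilities, one volume per model
    probs = np.asarray(probs)
    if method == "average":
        fused = probs.mean(axis=0)
    elif method == "weighted":
        w = np.asarray(weights, dtype=float).reshape(-1, 1, 1, 1)
        fused = (w * probs).sum(axis=0) / w.sum()
    elif method == "majority":
        votes = (probs >= threshold).sum(axis=0)   # binarise each model first
        return votes > probs.shape[0] / 2          # voxel-wise majority vote
    elif method == "max":
        fused = probs.max(axis=0)
    else:
        raise ValueError(method)
    return fused >= threshold

def dsc(pred, truth):
    # Dice similarity coefficient between two binary masks
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())
```

With weights of 0.33, 0.33, and 0.34, the weighted rule reduces to nearly the plain average, consistent with the nearly identical DSC scores reported above.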
In summary, the findings indicate that ensemble techniques can serve as a valuable mechanism for enhancing the efficacy of segmentation models in medical imaging applications. ## VI Discussion The results from PyP3DUcsSENet show varying levels of accuracy in sigmoid colon segmentation, as demonstrated in Figure 5 for a selection of cases. In Figure 5, case a), the predicted segmentation achieved a DSC score of \(78.69\%\), indicating a high level of accuracy, with significant overlap observed between the predicted segmentation and the ground truth. In contrast, the predicted segmentation for case b) obtained a DSC score of \(65.99\%\), indicating a moderate level of inconsistency with the ground truth. The DSC scores obtained for the predicted segmentations in cases c) and d) were notably lower, measuring \(44.54\%\) and \(37.68\%\), respectively, a marked decline in accuracy compared to cases a) and b): the predicted segmentations in cases c) and d) differ significantly in shape and size from the actual ground truth. Figure 6 shows the average ensemble performance for both the best and worst cases, and demonstrates a noteworthy enhancement in sigmoid colon segmentation due to the ensemble model. In the best-case scenario (a), the ensemble model achieves an impressive DSC score of \(94.72\%\), indicating highly accurate segmentation. Even in the worst-case scenario, the ensemble model achieves a DSC score of \(54.11\%\). This observation implies that the network faces difficulties in learning the sigmoid colon efficiently because of the variation in its size and shape. The sigmoid colon is a relatively small and highly variable structure within the human body: its shape exhibits a high degree of complexity and may vary considerably between patients. Imprecise manual annotation can further introduce inaccuracies and inconsistencies into the ground truth data, exacerbating the difficulty of achieving accurate segmentation. The precision of sigmoid colon segmentation may therefore exhibit significant variability despite the utilisation of sophisticated deep-learning models; the results of this investigation demonstrate this variability. \begin{table} \begin{tabular}{l|c} \hline \hline **Ensemble Technique** & **Average DSC (\%) \(\pm\) SD (\%)** \\ \hline **Average Ensemble** & \(\mathbf{88.11\pm 3.52}\) \\ Weighted Average Ensemble & \(88.00\pm 3.61\) \\ **Majority Voting Ensemble** & \(\mathbf{88.11\pm 3.52}\) \\ Max Ensemble & \(78.58\pm 4.22\) \\ \hline \hline \end{tabular} \end{table} TABLE II: Performance of the ensemble of the three best models for boosting sigmoid colon segmentation DSC on 3D CT images, obtained from 5-fold cross-validation. \begin{table} \begin{tabular}{l|c} \hline \hline **Model Name** & **Average DSC (\%) \(\pm\) SD (\%)** \\ \hline 3D U-Net & \(45.60\pm 3.13\) \\ 3D U-Net++ & \(48.57\pm 4.11\) \\ 3D wU-Net & \(48.82\pm 4.43\) \\ 3D Res-U-Net & \(55.33\pm 1.79\) \\ 3D Dense U-Net & \(55.61\pm 1.70\) \\ **PyP3DUcsSENet** & \(\mathbf{56.92\pm 1.42}\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Performance of six different architectures for sigmoid colon segmentation on 3D CT images, with the average DSC obtained from 5-fold cross-validation.
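For reference, the DSC values quoted throughout can be reproduced from a predicted and a ground-truth mask with a minimal NumPy function; the small epsilon term is our addition to guard against empty masks.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary 3D masks,
    the evaluation metric reported throughout this study."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```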
To improve the applicability of our segmentation model, forthcoming research will focus on expanding the dataset to include a broader spectrum of patient cohorts. Future studies will also be directed towards improving the precision of manual annotation by involving multiple specialists in the annotation process. These enhancements have the potential to yield more precise and reliable segmentation outcomes, rendering them particularly advantageous in medical settings. ## VII Conclusion This study presents the successful segmentation of the sigmoid colon in CT images through the utilisation of a modified 3D U-Net architecture. The modifications incorporated in the architecture encompassed the integration of pyramid pooling and channel-spatial squeeze-and-excitation modules. The study demonstrates the potential benefits of employing ensemble methods to enhance precision, wherein the average ensemble and majority voting ensemble techniques yielded the most favourable results. The proposed method demonstrates the capacity to facilitate precise segmentation of the sigmoid colon. ## Acknowledgement This research was facilitated by National Computational Infrastructure (NCI), which is supported by the Australian Government. We are also grateful for the financial support provided by The Australian Government Research Training Program (RTP) Scholarship at the University of New South Wales (UNSW) Sydney. ## Declaration of competing interest The authors declare no conflicts of interest.
2309.07807
Optical momentum distributions in monochromatic, isotropic random vector fields
We investigate the decomposition of the electromagnetic Poynting momentum density in three-dimensional random monochromatic fields into orbital and spin parts, using analytical and numerical methods. In sharp contrast with the paraxial case, the orbital and spin momenta in isotropic random fields are found to be identically distributed in magnitude, increasing the discrepancy between the Poynting and orbital pictures of energy flow. Spatial correlation functions reveal differences in the generic organization of different optical momenta in complex natural light fields, with the orbital current typically forming broad channels of unidirectional flow, and the spin current manifesting larger vorticity and changing direction over subwavelength distances. These results are extended to random fields with pure helicity, in relation to the inclusion of electric-magnetic democracy in the definition of optical momenta.
Titouan Gadeyne, Mark R. Dennis
2023-09-14T15:51:09Z
http://arxiv.org/abs/2309.07807v3
# Optical momentum distributions in monochromatic, isotropic random vector fields ###### Abstract We investigate the decomposition of the electromagnetic Poynting momentum density in three-dimensional random monochromatic fields into orbital and spin parts, using analytical and numerical methods. In sharp contrast with the paraxial case, the orbital and spin momenta in isotropic random fields are found to be identically distributed in magnitude, increasing the discrepancy between the Poynting and orbital pictures of energy flow. Spatial correlation functions reveal differences in the generic organization of the optical momenta in complex natural light fields, with the orbital current typically forming broad channels of unidirectional flow, and the spin current manifesting larger vorticity and changing direction over subwavelength distances. These results are extended to random fields with pure helicity, in relation to the inclusion of electric-magnetic democracy in the definition of optical momenta. _Keywords:_ statistical optics, optical momentum, Poynting vector, spin momentum, orbital momentum ## I Introduction Conservation of electromagnetic (EM) energy is determined by the well-known theorem of Poynting [1]: in the absence of charges, the rate of change of EM energy density is equal to the divergence of the Poynting vector \(\mathcal{P}=\mathcal{E}\times\mathcal{H}\), the cross product of the electric and magnetic fields. By analogy with other continuity equations, it is customary to interpret \(\mathcal{P}\) as the direction and magnitude of EM energy flow [2; 3]. However, this choice often fails to produce an intuitive picture, even in seemingly elementary situations: for instance, the Poynting vector for two crossed plane waves [4] or in a single evanescent surface wave [5; 6] exhibits a counterintuitive component _perpendicular_ to the direction of propagation. Similarly, the (time-averaged) radiation pressure exerted by an optical field on a subwavelength probe particle is generally _not_ proportional to the Poynting vector [7; 8]. Divided by \(c^{2}\), the Poynting vector also defines the linear momentum density of the EM field. It is now well understood that in monochromatic fields, the time-averaged linear momentum \(\mathbf{P}\) is the sum of _orbital_ \(\mathbf{P}_{O}\) and _spin_ \(\mathbf{P}_{S}\) parts, respectively generating the orbital and spin angular momenta [9; 10; 11]. In [10], these vector fields were dubbed _optical currents_. This Poynting vector splitting has deep foundations, as the orbital momentum is in fact equal to the canonical momentum derived from application of Noether's theorem to translational invariance in the relativistic field theory formulation of electromagnetism [12; 13]. The orbital momentum correctly accounts for the radiation pressure on dipole particles, and can provide a more intuitive picture of energy flow than the Poynting vector in the situations mentioned above. In the field theory framework, the spin momentum corresponds to a term introduced by Belinfante [14] to restore symmetry and gauge-invariance to the EM stress-energy tensor, which when integrated over space does not contribute to the total linear momentum of the field. As such, the Belinfante _spin momentum_ is often described as a "virtual" quantity introduced for theoretical reasons. Nevertheless, this spin momentum has recently been evidenced experimentally, by measuring the extraordinary optical force it induced on a nano-cantilever [15].
Importantly, the couplings to the orbital and spin parts of the Poynting vector differ by orders of magnitude, highlighting their distinct physical nature. Recent experimental and theoretical studies have thus demonstrated striking differences between the Poynting, orbital, and spin momenta, and continue to redefine our views of EM energy flow and optical forces [8; 16; 11]. Still, they have so far been limited to rather elementary, highly symmetric fields, with geometries optimized to best showcase the differences between the three optical currents. In this work, we explore _generic_ features of these optical currents, to build insight into their organization in _natural light fields_: what are their properties when many independent waves interfere with no particular symmetries? To this end, we investigate their behaviour in monochromatic, isotropic random EM vector fields, a convenient statistical model of 3D EM fields specified by only one physical parameter, the wavelength \(\lambda\). Strikingly, we will see that in this model, the magnitudes of the spin and orbital currents have the same probability distribution, but that the two vector fields have different spatial correlations: the apparent weakness of the spin current is due to its failure to organise coherent correlated vector structures over large distances in space, unlike the orbital current (and Poynting vector itself). We demonstrate these facts using analytical statistics and numerical simulations for the vector random wave model. ## II Theoretical framework ### Poynting, orbital and spin momenta We work in units where \(\varepsilon_{0}=\mu_{0}=c=1\). In a monochromatic field with frequency \(\omega=ck=c(2\pi/\lambda)\), the electric and magnetic fields are represented by complex-valued vector fields, \(\mathcal{E}=\mathrm{Re}[\mathbf{E}e^{-i\omega t}]\) and \(\mathcal{H}=\mathrm{Re}[\mathbf{H}e^{-i\omega t}]\). The temporal cycle-averaged Poynting momentum can be written \[\mathbf{P}=\frac{1}{2}\,\mathrm{Re}\big{\{}\mathbf{E}^{\star}\times\mathbf{H}\big{\}}. \tag{1}\] Using Maxwell's equations and the vector identity \(\mathbf{A}\times(\boldsymbol{\nabla}\times\mathbf{B})=\mathbf{A}\cdot(\boldsymbol{\nabla})\mathbf{B}-(\mathbf{A}\cdot\boldsymbol{\nabla})\mathbf{B}\) (where we use the customary notation \([\mathbf{A}\cdot(\boldsymbol{\nabla})\mathbf{B}]_{i}=\sum_{j}A_{j}\partial_{i}B_{j}\)), the Poynting momentum can be split into a sum of orbital and spin momenta [10], \[\begin{split}\mathbf{P}&=\frac{1}{2\omega}\,\mathrm{Im}\big{\{}\mathbf{E}^{\star}\cdot(\boldsymbol{\nabla})\mathbf{E}\big{\}}-\frac{1}{2\omega}\,\mathrm{Im}\{(\mathbf{E}^{\star}\cdot\boldsymbol{\nabla})\mathbf{E}\}\\ &=\mathbf{P}_{O}^{\mathbf{E}}+\mathbf{P}_{S}^{\mathbf{E}}.\end{split} \tag{2}\] This splitting is not unique: expressing the Poynting momentum using the electric (resp. magnetic) field only, we obtain the _electric-biased_ (resp. _magnetic-biased_) momenta \(\mathbf{P}_{O,S}^{\mathbf{E}}\) (resp. \(\mathbf{P}_{O,S}^{\mathbf{H}}\)) as in (2).
But another option, retaining the electric-magnetic symmetry of Maxwell's equations for free fields, is to take the mean of the two representations, \[\begin{split}\mathbf{P}&=\frac{1}{4\omega}\, \mathrm{Im}\big{\{}\mathbf{E}^{\star}\cdot(\boldsymbol{\nabla})\mathbf{E}+ \mathbf{H}^{\star}\cdot(\boldsymbol{\nabla})\mathbf{H}\big{\}}\\ &-\frac{1}{4\omega}\,\mathrm{Im}\big{\{}(\mathbf{E}^{\star}\cdot \boldsymbol{\nabla})\mathbf{E}+(\mathbf{H}^{\star}\cdot\boldsymbol{\nabla}) \mathbf{H}\big{\}}\\ &=\mathbf{P}_{O}^{\mathbf{EH}}+\mathbf{P}_{S}^{\mathbf{EH}},\end{split} \tag{3}\] producing the so-called _democratic_ (or _dual_) momenta \(\mathbf{P}_{O}^{\mathbf{EH}},\mathbf{P}_{S}^{\mathbf{EH}}\)[10]. In general non-paraxial fields, these are all distinct quantities, and in monochromatic fields, their definition is unambiguous (otherwise the splitting is gauge-dependent). An interesting situation arises in fields with _pure helicity_, only containing circularly-polarized plane wave components of same handedness : such fields satisfy \(\mathbf{E}=\pm i\mathbf{H}\), such that all biased and democratic quantities become identical. The dual formulation of electromagnetism that treats electric and magnetic fields equally has many attractive features when working with free fields, in the absence of matter [13; 17] -- for instance, democratic momenta naturally split into two independent parts associated with components of opposite helicity [18]. However, experimental measurements require material probes, which typically do _not_ respond identically to electric and magnetic fields : a common example is the radiation pressure on a subwavelength particle responding in the electric dipole approximation, which is proportional to the _electric-biased_ orbital momentum \(\mathbf{P}_{O}^{\mathbf{E}}\) only. We therefore choose to center our discussion on electric-biased quantities, and devote section III.3 to observations on democratic momenta and pure helicity fields for which the distinction vanishes. For reference, we briefly recall the typical magnitudes of the three momenta in a paraxial beam [9; 11] propagating along \(z\), for which the field is approximately \(\mathbf{E}\approx e^{ikz}(E_{x},E_{y},0)\) with transverse amplitude and polarization profiles \(E_{x}(x,y)\) and \(E_{y}(x,y)\) varying over a lengthscale \(W\gg\lambda\), on the order of the beam waist. We find that \(\mathbf{P}\) and \(\mathbf{P}_{O}^{\mathbf{E}}\) are mostly longitudinal, that \(\mathbf{P}_{S}^{\mathbf{E}}\) is purely transverse, and the following orders of magnitude \[\begin{split}\left|\mathbf{P}\right|&\sim E^{2},\\ \left|\mathbf{P}_{O}^{\mathbf{E}}\right|&\sim\frac{1 }{\omega}\,\mathrm{Im}\{E^{\star}\partial_{z}E\}\sim\frac{k}{\omega}E^{2} \sim E^{2},\\ \left|\mathbf{P}_{S}^{\mathbf{E}}\right|&\sim\frac{1 }{\omega}\,\mathrm{Im}\{E^{\star}\partial_{x}E\}\sim\frac{\lambda}{W}E^{2} \ll E^{2}.\end{split} \tag{4}\] We conclude that in a regular optical beam, orbital and Poynting momenta are closely aligned, the spin momentum being small in comparison. ### Gaussian random optical fields We model generic, natural EM light fields as superpositions of \(N\to+\infty\) plane waves, with uniformly randomly sampled propagation directions, polarizations, and global phases. 
This construction aims to portray an unprepared field, akin to ambient light in a room, or thermal black-body radiation, composed of many waves emitted from independent points or having scattered off various surfaces, producing a field with no particular symmetries or preferred directions, but statistically homogeneous and isotropic. This approach builds on the long history of the study of speckle patterns [19; 20; 21] and statistical properties of random light fields [22; 23; 24; 25; 26], which revealed salient features underlying the organization of all EM fields. Physically and geometrically, these random vector fields are very different from paraxial optical beams. The complex electric and magnetic fields can be parameterized as follows, \[\begin{split}\mathbf{E}=\sqrt{\frac{2}{N}}\sum_{n=1}^{N}e^{i \mathbf{k}_{n}\cdot\mathbf{r}+i\psi_{n}}\Big{[}e^{i\alpha_{n}/2}\cos\frac{ \beta_{n}}{2}\mathbf{e}_{+}(\mathbf{k}_{n})\\ +e^{-i\alpha_{n}/2}\sin\frac{\beta_{n}}{2}\mathbf{e}_{-}( \mathbf{k}_{n})\Big{]},\end{split} \tag{5}\] \[\begin{split}\mathbf{H}=\sqrt{\frac{2}{N}}\sum_{n=1}^{N}e^{i \mathbf{k}_{n}\cdot\mathbf{r}+i\psi_{n}}\frac{1}{i}\Big{[}e^{i\alpha_{n}/2} \cos\frac{\beta_{n}}{2}\mathbf{e}_{+}(\mathbf{k}_{n})\\ -e^{-i\alpha_{n}/2}\sin\frac{\beta_{n}}{2}\mathbf{e}_{-}( \mathbf{k}_{n})\Big{]},\end{split} \tag{6}\] where the sum runs over the \(N\gg 1\) plane waves, with wavevectors \(\mathbf{k}_{n}\) sampled uniformly on the sphere of directions with spherical angles (\(\theta_{n}\), \(\phi_{n}\)) and identical magnitudes \(k\), and polarizations sampled uniformly on the Poincare sphere with angles (\(\beta_{n}\), \(\alpha_{n}\)), and uniformly sampled global phases \(\psi_{n}\). \(\mathbf{e}_{\pm}(\mathbf{k})=[\mathbf{e}_{1}\pm i\mathbf{e}_{2}]/\sqrt{2}\) are helicity basis vectors, with \(\{\mathbf{e}_{1},\mathbf{e}_{2}\}\) a basis of two real orthogonal unit vectors transverse to \(\mathbf{k}\) (see the SI for explicit expressions and alternative parameterizations). We introduce the following notation for the real and imaginary parts of the fields [2; 26] \[\begin{split}\mathbf{E}=\mathbf{p}^{\mathbf{E}}+i\mathbf{q}^{ \mathbf{E}},\quad\mathbf{H}=\mathbf{p}^{\mathbf{H}}+i\mathbf{q}^{\mathbf{H}}, \end{split}\] since statistics are convenient with real quantities only. Ensemble-averaging over many random fields is denoted by brackets and amounts to integrating over the five random angles \[\begin{split}\left\langle\bullet\right\rangle=\prod_{n=1}^{N}& \left[\frac{1}{32\pi^{3}}\int_{0}^{\pi}\sin\theta_{n}\mathrm{d}\theta_{n}\int_{0 }^{2\pi}\mathrm{d}\phi_{n}\right.\\ &\left.\times\int_{0}^{\pi}\sin\beta_{n}\mathrm{d}\beta_{n}\int_{0 }^{2\pi}\mathrm{d}\alpha_{n}\int_{0}^{2\pi}\mathrm{d}\psi_{n}\right]\bullet.\end{split} \tag{7}\] From the definitions above, it can be seen that any component of the real or imaginary part of a field is a sum of \(N\) real-valued, identically distributed random variables. The central limit theorem ensures that in the limit \(N\to+\infty\), each component is a real random variable obeying Gaussian statistics [19; 20]. The same reasoning holds for all derivatives of the components. In our case these variables are all centered, hence we only require their variances and correlations to fully describe the statistics. They are obtained by direct integration using (5), and are tabulated in the SI. 
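As a concrete illustration of this construction, the following NumPy sketch draws one realization of the field in (5) and evaluates \(\mathbf{E}\) at a set of points. The explicit transverse basis \(\{\mathbf{e}_{1},\mathbf{e}_{2}\}\) used here is one convenient convention (explicit expressions are given in the SI), and the code is our own illustration rather than part of the original numerical implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_E(N, k=2*np.pi, pts=None):
    """One realization of the random electric field of Eq. (5),
    evaluated at the rows of `pts` (shape (n_pts, 3))."""
    # Wavevector directions sampled uniformly on the sphere
    cos_t = rng.uniform(-1.0, 1.0, N)
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = rng.uniform(0.0, 2*np.pi, N)
    khat = np.stack([sin_t*np.cos(phi), sin_t*np.sin(phi), cos_t], axis=1)

    # A real orthonormal transverse basis and the helicity vectors e_+/-
    e1 = np.stack([-np.sin(phi), np.cos(phi), np.zeros(N)], axis=1)
    e2 = np.cross(khat, e1)
    ep = (e1 + 1j*e2) / np.sqrt(2.0)
    em = (e1 - 1j*e2) / np.sqrt(2.0)

    # Polarizations uniform on the Poincare sphere, uniform global phases
    beta = np.arccos(rng.uniform(-1.0, 1.0, N))
    alpha = rng.uniform(0.0, 2*np.pi, N)
    psi = rng.uniform(0.0, 2*np.pi, N)
    pol = (np.exp(1j*alpha/2)*np.cos(beta/2))[:, None]*ep \
        + (np.exp(-1j*alpha/2)*np.sin(beta/2))[:, None]*em

    pts = np.zeros((1, 3)) if pts is None else np.atleast_2d(pts)
    phase = np.exp(1j*(pts @ (k*khat).T + psi))   # (n_pts, N)
    return np.sqrt(2.0/N) * phase @ pol           # (n_pts, 3), complex

E = sample_E(1000)
print(np.mean(np.abs(E)**2))  # ~2 for this normalization
```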
With these provided, an ensemble average rewrites as an integral over a set of \(M\) Gaussian random variables \(\mathbf{u}=(p_{x}^{\mathbf{E}},p_{y}^{\mathbf{E}},\ldots)\) \[\langle\bullet\rangle=\sqrt{\frac{\det\{\mathbf{\Sigma}^{-1}\}}{(2\pi)^{M}}} \int\ldots\int\mathrm{d}^{M}\mathbf{u}\exp\Bigl{\{}-\frac{\mathbf{u}^{\intercal }\mathbf{\Sigma}^{-1}\mathbf{u}}{2}\Bigr{\}}\bullet, \tag{6}\] where \(\mathbf{\Sigma}\) is the covariance matrix, with \(\Sigma_{ij}=\langle u_{i}u_{j}\rangle\). Useful formulae and strategies for computing averages are further described in the SI, and can be found in references [22; 23; 25; 26; 27]. ### Spatial correlation functions To investigate local order in the spatial organization of the optical currents, we will average products of vector components at two different positions in space. The statistical, directional correlators in random Gaussian vector fields, here representing EM waves, are analogous to those used in the theory of isotropic turbulence in fluids [28]. For a homogeneous random vector field \(\mathbf{v}\) to be isotropic, the two-point correlation tensor must have the form \[\langle v_{i}(\mathbf{0})v_{j}(\mathbf{r})\rangle=[f(r)-g(r)]\frac{r_{i}r_{j}} {r^{2}}+g(r)\delta_{ij},\] where \(f\) and \(g\) are scalar functions depending only on the magnitude \(r=|\mathbf{r}|\) of the separation vector. They respectively describe _longitudinal_ and _lateral_ autocorrelations of a given vector component \[f(r)=\langle v_{i}(\mathbf{0})v_{i}(r\mathbf{e}_{i})\rangle\,,\ \ \ \ g(r)= \langle v_{i}(\mathbf{0})v_{i}(r\mathbf{e}_{j})\rangle\ (i\neq j),\] where the separation vector \(r\mathbf{e_{i}}\) is taken along some chosen direction \(i=x,y,z\). If, in addition, the field is solenoidal (\(\boldsymbol{\nabla}\cdot\mathbf{v}=0\)), \(f\) and \(g\) are related, such that the full correlation tensor can be determined from, for example, the longitudinal correlation function \(f\) only, \[\langle v_{i}(\mathbf{0})v_{j}(\mathbf{r})\rangle=-\frac{rf^{\prime}(r)}{2} \frac{r_{i}r_{j}}{r^{2}}+\delta_{ij}\left[f(r)+\frac{rf^{\prime}(r)}{2}\right]. \tag{7}\] Since there are no charges in the model field, cycle-averaging Poynting's theorem yields \(\boldsymbol{\nabla}\cdot\mathbf{P}=0\). As the spin momentum itself is the curl of a vector field [10], it is divergenceless \(\boldsymbol{\nabla}\cdot\mathbf{P}_{S}=0\), and consequently we also have \(\boldsymbol{\nabla}\cdot\mathbf{P}_{O}=0\). Hence all momenta are isotropic homogeneous solenoidal random fields, to which the above results apply. They also apply to the complex electric and magnetic fields themselves. Figure 1: **Magnitudes and relative orientations of the Poynting, orbital and spin momenta.** (a) Analytical (lines) and numerical (dots) probability distributions for the magnitudes of the Poynting, orbital and spin momenta. (b) Distribution of the angles between momenta, obtained numerically. (c) Illustration of one realisation of the random field, showing the three vector fields on the faces of a cubic region of side \(\lambda/2\), and a set of streamlines seeded near the center of the cube. Numerical data in this figure was obtained from \(10^{5}\) realizations of the random field, each containing \(N=10^{3}\) plane waves. In our calculations, we will be able to express all correlation functions using the longitudinal and lateral autocorrelation functions of the electric field [29], that we respectively denote \(L\)
and \(T\) \[L(r) =\left\langle p_{x}^{\mathbf{E}}(\mathbf{0})p_{x}^{\mathbf{E}}(r \mathbf{e}_{x})\right\rangle=\frac{\sin(R)-R\cos(R)}{R^{3}}\] \[T(r) =\left\langle p_{x}^{\mathbf{E}}(\mathbf{0})p_{x}^{\mathbf{E}}(r \mathbf{e}_{y})\right\rangle=\frac{R\cos(R)-(1-R^{2})\sin(R)}{2R^{3}},\] where \(R=kr\). Further useful strategies and elementary correlation functions are provided in the SI. ## III Results and discussion All analytical derivations can be found in great detail in the SI. We mostly state final results here, except when intermediate steps are useful for understanding how a result comes about. ### Magnitudes and relative directions of the optical momenta We begin by deriving the fundamental statistical distributions for the magnitudes of the Poynting, orbital and spin momenta. In terms of real and imaginary field components, the Poynting momentum writes \[\mathbf{P}=\frac{1}{2}\left[\mathbf{p}^{\mathbf{E}}\times\mathbf{p}^{\mathbf{ H}}+\mathbf{q}^{\mathbf{E}}\times\mathbf{q}^{\mathbf{H}}\right].\] Each component of \(\mathbf{P}\) is a sum of products of two Gaussian random variables. As detailed in the SI, isotropy allows us to retrieve the magnitude distribution \(D(P)\) from that of the \(x\)-component \(D_{x}(P_{x})\) only [26]. We briefly outline this first derivation, to see the main steps involved: \[D_{x}(P_{x}) =\left\langle\delta\Big{(}P_{x}-\sum_{j,k}\frac{\epsilon_{xjk}}{2 }\Big{[}p_{j}^{\mathbf{E}}p_{k}^{\mathbf{H}}+q_{j}^{\mathbf{E}}q_{k}^{ \mathbf{H}}\Big{]}\Big{)}\right\rangle\] \[=\int\frac{\mathrm{d}s}{2\pi}e^{-isP_{x}}\left\langle\exp\Bigl{\{} i\frac{is}{2}p_{y}^{\mathbf{E}}p_{z}^{\mathbf{H}}\Bigr{\}}\right\rangle^{4}\] \[=\int\frac{\mathrm{d}s}{2\pi}e^{-isP_{x}}\left[\frac{1}{1+s^{2} \sigma_{x}^{2}/4}\Bigr{]}^{2}\] \[=\frac{1+2|P_{x}|/\sigma_{x}^{2}}{2\sigma_{x}^{2}}\exp\Bigl{\{}- \frac{2|P_{x}|}{\sigma_{x}^{2}}\Bigr{\}},\] where \(\sigma_{x}^{2}=\left\langle(p_{x}^{\mathbf{E}})^{2}\right\rangle=1/3\) (see tabulated variances in the SI). The second step involves factorization of the average using the statistical independence of field components, the third step uses (6), and the last step is an integration in the complex plane. The distribution for the magnitude of the Poynting momentum is then \[D(P)=-2P\frac{\partial D_{x}(P_{x})}{\partial P_{x}}\Big{|}_{P_{x}=P}=108P^{2} \exp\{-6P\}.\] The electric-biased orbital and spin momenta read \[\mathbf{P}_{O}^{\mathbf{E}} =\frac{1}{2\omega}\mathbf{p}^{\mathbf{E}}\cdot(\boldsymbol{\nabla })\mathbf{q}^{\mathbf{E}}-\frac{1}{2\omega}\mathbf{q}^{\mathbf{E}}\cdot( \boldsymbol{\nabla})\mathbf{p}^{\mathbf{E}}\] \[\mathbf{P}_{S}^{\mathbf{E}} =-\frac{1}{2\omega}(\mathbf{p}^{\mathbf{E}}\cdot\boldsymbol{ \nabla})\mathbf{q}^{\mathbf{E}}+\frac{1}{2\omega}(\mathbf{q}^{\mathbf{E}} \cdot\boldsymbol{\nabla})\mathbf{p}^{\mathbf{E}}.\] Again, each component is a sum of products of two Gaussian random variables (one field component and one space derivative), and we only have to find the distribution for the \(x\)-component. 
For the orbital momentum this is \[D_{x}(P_{O,x}^{\mathbf{E}})=\left\langle\delta\Big{(}P_{O,x}^{ \mathbf{E}}-\sum_{j}\frac{1}{2}\left[p_{j}^{\mathbf{E}}\partial_{x}q_{j}^{ \mathbf{E}}-q_{j}^{\mathbf{E}}\partial_{x}p_{j}^{\mathbf{E}}\right]\Big{)}\right\rangle\] \[=\int\frac{\mathrm{d}s}{2\pi}e^{-isP_{x}^{\mathbf{E}}}\left\langle \exp\Bigl{\{}\frac{is}{2}p_{x}^{\mathbf{E}}\partial_{x}q_{x}^{\mathbf{E}} \Bigr{\}}\right\rangle^{2}\left\langle\exp\Bigl{\{}\frac{is}{2}p_{y}^{\mathbf{ E}}\partial_{x}q_{y}^{\mathbf{E}}\Bigr{\}}\right\rangle^{4}\] \[=\ldots\] and the distribution for the magnitude is \[D(P_{O}^{\mathbf{E}})=180P_{O}^{\mathbf{E}}\left[e^{-6\sqrt{5}P_{O}^{ \mathbf{E}}}-\left[1-\frac{3\sqrt{10}}{2}P_{O}^{\mathbf{E}}\right]e^{-3\sqrt{1 0}P_{O}^{\mathbf{E}}}\right].\] Surprisingly, we find that in repeating the calculation for the spin momentum, the result is the same. Indeed, the first steps of the derivation read \[D_{x}(P_{S,x}^{\mathbf{E}})=\left\langle\delta\Big{(}P_{S,x}^{ \mathbf{E}}-\sum_{j}\frac{1}{2}\left[p_{j}^{\mathbf{E}}\partial_{j}q_{x}^{ \mathbf{E}}-q_{j}^{\mathbf{E}}\partial_{j}p_{x}^{\mathbf{E}}\right]\Big{)}\right\rangle\] \[=\int\frac{\mathrm{d}s}{2\pi}e^{-isP_{x}^{\mathbf{E}}}\left\langle \exp\Bigl{\{}\frac{is}{2}p_{x}^{\mathbf{E}}\partial_{x}q_{x}^{\mathbf{E}} \Bigr{\}}\right\rangle^{2}\left\langle\exp\Bigl{\{}\frac{is}{2}p_{y}^{\mathbf{ E}}\partial_{y}q_{x}^{\mathbf{E}}\Bigr{\}}\right\rangle^{4}\] and since \(\partial_{x}q_{y}^{\mathbf{E}}\) and \(\partial_{y}q_{x}^{\mathbf{E}}\) are both uncorrelated to \(p_{y}^{\mathbf{E}}\) and have the same variance (see tables in the SI), the rest of the calculation is strictly identical to that for the orbital momentum, and we conclude that the orbital and spin momenta obey the exact same magnitude distribution. All these distributions are wavelength-independent, and only scale with the overall intensity in the field. They are plotted and checked against numerical estimates in Figure 1.a). It is interesting to observe that the spin momentum, usually negligibly small in paraxial beams, becomes here equivalent in magnitude to the orbital momentum, responsible for the actual energy flow. The intuitive reason for the different order of magnitude is that in the fully non-paraxial case, there are waves propagating in all directions, such that all space derivatives result in a factor \(\sim ik\), whereas transverse gradients are only of order \(\sim 1/W\) in the paraxial case. Moving away from paraxiality, part of the linear momentum converts from an orbital to a spin nature, in a manner strictly similar to how the _angular_ momentum does [30]. To complete the picture of the three momenta at a given point in space, we present in Figure 1.b) the distributions for the angles between each pair of momenta. They were obtained numerically, as attempting to compute analytical joint distributions of two momenta hardly leads to tractable expressions. We observe that the angle between \(\mathbf{P}_{O}^{\mathbf{E}}\) and \(\mathbf{P}_{S}^{\mathbf{E}}\) has a broad distribution roughly centered on \(\pi/2\) (with a slight skew towards larger angles), indicating that they tend to point in perpendicular directions. Since they have comparable magnitudes, the resulting Poynting momentum is generically not closely aligned with either of them. This implies that the streamlines for the three optical currents tend to diverge away from one another. 
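These analytical distributions are straightforward to verify by direct Monte Carlo sampling. The sketch below is our own illustrative code, following the conventions of (5)-(6): it draws single-point realizations of \((\mathbf{E},\mathbf{H})\), forms \(\mathbf{P}=\frac{1}{2}\,\mathrm{Re}\{\mathbf{E}^{\star}\times\mathbf{H}\}\), and compares the histogram of \(|\mathbf{P}|\) with \(D(P)=108P^{2}e^{-6P}\).

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_EH(N):
    """One realization of (E, H) at the origin, Eqs. (5)-(6)."""
    cos_t = rng.uniform(-1.0, 1.0, N)
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = rng.uniform(0.0, 2*np.pi, N)
    khat = np.stack([sin_t*np.cos(phi), sin_t*np.sin(phi), cos_t], axis=1)
    e1 = np.stack([-np.sin(phi), np.cos(phi), np.zeros(N)], axis=1)
    e2 = np.cross(khat, e1)
    ep, em = (e1 + 1j*e2)/np.sqrt(2.0), (e1 - 1j*e2)/np.sqrt(2.0)
    beta = np.arccos(rng.uniform(-1.0, 1.0, N))
    alpha = rng.uniform(0.0, 2*np.pi, N)
    psi = np.exp(1j*rng.uniform(0.0, 2*np.pi, N))
    a = (psi*np.exp(1j*alpha/2)*np.cos(beta/2))[:, None]
    b = (psi*np.exp(-1j*alpha/2)*np.sin(beta/2))[:, None]
    E = np.sqrt(2.0/N)*(a*ep + b*em).sum(axis=0)
    H = np.sqrt(2.0/N)*((a*ep - b*em)/1j).sum(axis=0)
    return E, H

# P = (1/2) Re{E* x H} for many independent realizations
P = np.array([0.5*np.real(np.cross(np.conj(E), H))
              for E, H in (sample_EH(1000) for _ in range(10000))])
mag = np.linalg.norm(P, axis=1)

# Compare the histogram of |P| with D(P) = 108 P^2 exp(-6P)
hist, edges = np.histogram(mag, bins=50, density=True)
centers = 0.5*(edges[:-1] + edges[1:])
print(np.max(np.abs(hist - 108*centers**2*np.exp(-6*centers))))
```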
In Figure 1.c), we illustrate one realisation of the random field, in a cubic region of side \(\lambda/2\). We plot the three momenta on the sides of the box, and a set of streamlines seeded near the center of the cube. The three vector fields are indeed observed to generically point in different directions, and the streamlines to follow seemingly unrelated paths in space, crossing with angles in agreement with the distributions of Figure 1.b). This reinforces the claim that the Poynting and orbital currents generally provide contrasting pictures of EM energy flow, both in terms of magnitude and direction. These observations could prove important for simulating optical forces in complex nanophotonics systems. ### Short-range organization of the currents #### ii.2.1 Spatial correlation tensors Going beyond their identical magnitude distribution, we find that the orbital and spin momentum vector fields are actually arranged very differently in space. To explore this, we compute two-point spatial correlation tensors for all pairs of components of a given momentum. Each tensor will be of the form in (7), given entirely by the longitudinal autocorrelation function \(f(r)\). For the Poynting momentum, this function writes \[f_{P}(r)=\langle P_{x}(\mathbf{0})P_{x}(r\mathbf{e_{x}})\rangle\] \[= \sum_{j,k,l,m}\left\langle\frac{1}{4}\epsilon_{xjk}\epsilon_{zlm }[p_{j}^{\mathbf{E},\mathbf{H}}+q_{j}^{\mathbf{E},\mathbf{H}}](\mathbf{0})[p_ {l}^{\mathbf{E},\mathbf{H}}+q_{l}^{\mathbf{E},\mathbf{H}}](r\mathbf{e_{x}})\right\rangle.\] To evaluate these averages we make use of Isserlis' theorem for moments of Gaussian variables [19]. \(f_{P}(r)\) is obtained as \[f_{P}(r)=T^{2}(r)+\frac{(kr)^{2}}{4}L^{2}(r),\] and the correlation tensor is \[\left\langle P_{i}(\mathbf{0})P_{j}(\mathbf{r})\right\rangle= \frac{r_{i}r_{j}}{r^{2}}\Big{[}2R\left(R^{2}-3\right)\sin(2R)\] \[+\Big{(}6R^{2}-3\Big{)}\cos(2R)+2R^{4}+3\Big{]}\Big{/}8R^{6}\] \[+\delta_{ij}\Big{[}R\left(2-R^{2}\right)\sin(2R)+\left(1-2R^{2} \right)\cos(2R)-1\Big{]}\Big{/}4R^{6}.\] Figure 2: **Local structure of the optical currents.** First three columns : optical currents of the vector EM field \(\mathbf{V}=\mathbf{P}_{O}^{\mathbf{E}},\ \mathbf{P}_{S}^{\mathbf{E}},\ \mathbf{P}\). Last column : current in the complex scalar field \(\mathbf{V}=\mathbf{J}\). First row : normalized analytical spatial autocorrelation functions \(\left\langle V_{x}(\mathbf{0})V_{x}(\mathbf{r})\right\rangle/\left\langle(V_{ x})^{2}\right\rangle\) of the \(x\)-component of each momentum, for separation vectors \(\mathbf{r}\) in the \(x-y\) plane. Second row : in-plane streamlines of each momentum in a slice through one realization of the random vector field (first three columns) and one realization of the random scalar field (last column), each containing \(N=10^{3}\) plane waves. Streamlines are colored according to the value of the \(x\)-component of the vector field, and zero-crossings are shown in black to better distinguish regions having a flow oppositely directed along \(x\). Third row : in-plane streamlines in the same slices as in the second row, after local averaging of the vector fields over a spherical volume of diameter \(\lambda\) (dashed circle). All plots in a given row share the same colorbar. The strategy is similar for the orbital and spin momenta. 
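Before turning to the orbital and spin analogues, we note that these closed forms are simple to evaluate numerically. The short sketch below is our own, with \(R=kr\); it normalizes by \(f_{P}(0)=1/9\), which follows from \(L(0)=T(0)=1/3\). The orbital and spin analogues follow the same pattern once the derivatives of \(L\) and \(T\) are supplied (finite differences suffice).

```python
import numpy as np

def L(R):
    """Longitudinal electric-field autocorrelation, R = kr."""
    return (np.sin(R) - R*np.cos(R)) / R**3

def T(R):
    """Lateral electric-field autocorrelation, R = kr."""
    return (R*np.cos(R) - (1 - R**2)*np.sin(R)) / (2*R**3)

def f_P(R):
    """Longitudinal autocorrelation of the Poynting momentum."""
    return T(R)**2 + (R**2/4)*L(R)**2

R = np.linspace(0.05, 15.0, 400)
curve = f_P(R) / (1/9.0)  # normalized profile, as plotted in Figure 2
```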
We find \[\omega^{2}f_{O}(r)=\frac{1}{2}\left[L^{\prime 2}(r)-L(r)L^{\prime\prime}(r) \right]+\left[T^{\prime 2}(r)-T(r)T^{\prime\prime}(r)\right],\] giving the correlation tensor \[\left\langle P^{\mathbf{E}}_{O,i}(\mathbf{0})P^{\mathbf{E}}_{O,j} (\mathbf{r})\right\rangle=\frac{r_{i}r_{j}}{r^{2}}\Big{[}\frac{1}{2}\left(R^{ 4}-24R^{2}+72\right)R\sin(2R)\] \[+3\left(R^{4}-10R^{2}+6\right)\cos(2R)+R^{6}-3R^{4}-6R^{2}-18 \Big{]}\Big{/}4R^{8}\] \[+\delta_{ij}\Big{[}-\left(R^{4}-20R^{2}+54\right)R\sin(2R)\] \[\quad+\left(-5R^{4}+46R^{2}-27\right)\cos(2R)+3R^{4}+8R^{2}+27 \Big{]}\Big{/}8R^{8}.\] And for the spin momentum, \[\omega^{2}f_{S}(r)=\frac{3}{4}L^{\prime 2}(r)-\frac{L(r)L^{\prime\prime}(r)}{2} -2L^{\prime}(r)\frac{T(r)}{r}\] with the correlation tensor \[\left\langle P^{\mathbf{E}}_{S,i}(\mathbf{0})P^{\mathbf{E}}_{S,j }(\mathbf{r})\right\rangle=\frac{r_{i}r_{j}}{r^{2}}\Big{[}3\left(R^{4}-14R^{2}+ 24\right)R\sin(2R)\] \[+\delta_{ij}\Big{[}\left(-3R^{4}+32R^{2}-54\right)R\sin(2R)\] \[+\left(-13R^{4}+52R^{2}-27\right)\cos(2R)-R^{4}+2R^{2}+27\Big{]} \Big{/}8R^{8}.\] For each momentum, the (normalized) autocorrelation function of the \(x\)-component for separation vectors \(\mathbf{r}\) in the \(xy\)-plane is plotted in the top row of Figure 2. For the orbital momentum, the degree of correlation is largely positive, and longer-ranged in the longitudinal direction. In sharp contrast, components of the spin momentum tend to change sign periodically, and more strongly so in the lateral directions. These findings hint at qualitatively distinct spatial organizations for the two currents. In the middle row of Figure 2, we show 2D streamlines for each momentum in a slice through one realisation of the random field, and colour them according to the value of the \(x\)-component. Zero-crossings of the \(x\)-component are shown in black to better distinguish regions of "upwards" and "downwards" flow along \(x\). We observe that the orbital current keeps the same direction across relatively broad channels, with a typical size in accordance with the correlation function given above. Such structures are channels of energy flow. Conversely, the spin current changes direction more frequently, particularly along the lateral (\(y\)) direction, forming narrow pockets of oppositely directed flow. These two contrasting behaviours can seemingly be traced back to the elementary building block of the non-paraxial field, consisting of two interfering plane waves and studied in [4], in which it was found that \(\mathbf{P}_{O}\) homogeneously points along the bisector, whereas \(\mathbf{P}_{S}\) oscillates in the transverse direction. Corresponding results for the Poynting current, shown in the third column of Figure 2, indicate a less clear-cut behaviour, close but not identical to that of the orbital current. At this point, it is enlightening to compare the currents of the vector EM field to the simpler case of a random complex scalar field \(\Psi=p^{\Psi}+iq^{\Psi}\), defined by dropping the polarization term in brackets in (4). \(\Psi\) obeys the Helmoltz equation with wavevector \(k\), and there is a single, divergenceless current \(\mathbf{J}=\frac{1}{2\pi}\operatorname{Im}\{\Psi^{*}\mathbf{\nabla}\Psi\}\). 
Its longitudinal autocorrelation function is given by \[\omega^{2}f_{J}(r)=\frac{1}{2}\left[C^{\prime 2}(r)-C(r)C^{\prime\prime}(r) \right],\] with \(C(r)=L(r)+2T(r)=\sin(kr)/kr\) (we remark the similarity of this expression to that for the orbital momentum), and the correlation tensor is \[\left\langle J_{i}(\mathbf{0})J_{j}(\mathbf{r})\right\rangle =\frac{r_{i}r_{j}}{r^{2}}\Bigg{[}\frac{\left(2R^{2}+R\sin(2R)+2\cos (2R)-2\right)}{4R^{4}}\Bigg{]}\] \[+\delta_{ij}\Bigg{[}-\frac{(R\sin(2R)+\cos(2R)-1)}{4R^{4}}\Bigg{]}.\] The correlation behaviour of the scalar current, shown in the rightmost column of Figure 2, appears to lie in between that of the orbital and Poynting currents, and is similar to both. This in turn emphasizes the "spin" nature of \(\mathbf{P}_{S}\), which possesses a behaviour unfound in the scalar case ; it raises the interesting question of how corresponding currents would behave for tensor waves describing other fundamental particles with different spin. Finally, we discuss the experimental observability of these optical currents. As mentioned in [11], a small probe particle can hardly image subwavelength structures, as its own presence will distort the field on a comparable lengthscale. With this in mind, it is tempting to only consider local spatial averages of the currents. Our correlation functions suggest that the orbital current will survive local averaging, as it is largely positively correlated to itself over a wavelength-sized volume. Conversely, neighbouring pockets of opposite spin flow will cancel each other out. In the bottom row of Figure 2, we plot the same streamlines again, but after having performed a local average of the field over a spherical volume of diameter \(\lambda\) (rendered by the dashed circle). The integrated spin current indeed quickly vanishes. As a result, orbital and Poynting currents will tend to reconcile, if probed by sufficiently large particles that effectively average over the generic subwavelength inhomogeneities of the spin momentum. Consequently, we expect the difference in the orbital and Poynting streamlines highlighted in Figure 1 to have its significant impact on the motion of very subwavelength objects, such as single atoms or atomic clusters. #### iii.1.2 Vorticity of the currents The tendency of the spin current to "turn" more can be further quantified by deriving statistical distributions for the _vorticities_ of the optical currents, that were discussed in previous studies [10; 11] \[\mathbf{\Omega_{P}}=\mathbf{\nabla}\times\mathbf{P}\quad\mathbf{\Omega_{O}^{E}}= \mathbf{\nabla}\times\mathbf{P_{O}^{E}}\quad\mathbf{\Omega_{S}^{E}}=\mathbf{\nabla} \times\mathbf{P_{S}^{E}}. \tag{8}\] The strategy for these calculations follows closely that for the magnitudes of the momenta themselves. We note that the additional space derivative involved now makes the distributions wavelength-dependent. 
The magnitude distributions for the three vorticities are \[D(X=\Omega_{P}/\omega)=\frac{9X}{78~{}886~{}240}\] \[\mathbf{\times}\left[\left(237\sqrt{5}X+172\right)819~{}200e^{-6\sqrt{5 }X}\right.\] \[+\left(939~{}752~{}400X-44~{}642~{}639\sqrt{10}\right)\sinh(20X)e^ {-8\sqrt{10}X}\] \[+\left(1~{}860~{}213\sqrt{10}X-880~{}640\right)160\cosh(20X)e^{-8 \sqrt{10}X}\right]\] \[D(X=\Omega_{O}^{\mathbf{E}}/\omega)=\frac{225}{77}X\] \[\mathbf{\times}\left[64e^{-(15/2)X}-99e^{-10X}+35e^{-6\sqrt{5}X}\right]\] \[D(X=\Omega_{S}^{\mathbf{E}}/\omega)=\frac{25X}{361~{}504}\] \[\mathbf{\times}\left[83~{}187e^{-20X}+9~{}628~{}125e^{-12X}\right.\] \[-5~{}824~{}512e^{-10X}+286~{}374\sqrt{10}\sinh(20X)e^{-8\sqrt{10}X}\] \[-4~{}792~{}320e^{-6\sqrt{5}X}+905~{}520\cosh(20X)e^{-8\sqrt{10}X}\] These distributions are shown in Figure 3. Despite being identically distributed in magnitude, orbital and spin momenta have different vorticities : in agreement with the observations of the previous section, that of the spin current is statistically larger. An interesting extension of this investigation could be to explore whether or not this relates to some difference in the density of singularities in the orbital and spin flows [11]. The geometry of these singularities, in the special case where all components of the complex electric field vanish, was recently studied in [31], where it was found that the orbital momentum always arranges in elongated "pseudo vortex lines" in the vicinity of such zeros. Visual exploration of the random fields (not shown) indicates that such a coiling structure seems to occur frequently near generic zeros of both the orbital and spin momenta. ### Democratic momenta and fields with pure helicity Throughout this work, we have focused on electric-biased momenta. Equivalent statistics would evidently hold for the magnetic-biased quantities, but not for democratic ones. Berry and Shukla recently investigated the difference between biased and democratic quantities in similar statistical calculations [26], and concluded that as a rule of thumb, democratic quantities tend to vary more smoothly and follow narrower distributions. Indeed, including contributions from both the electric and magnetic fields (which are uncorrelated to some extent) effectively suppresses regions of strong interference, similarly to the way vector quantities built from three field components also show less interference detail than corresponding scalar quantities. We derived the magnitude distributions for the democratic momenta (see SI) and present them in Figure 4.a). The distribution is still identical for the orbital and spin parts, but is indeed slightly narrower than for the biased momenta (dashed grey line). Interestingly, when computing the angle distributions numerically in Figure 4.b), we find that the angle between \(\mathbf{P}_{O}^{\mathbf{E}\mathbf{H}}\) and \(\mathbf{P}_{S}^{\mathbf{E}\mathbf{H}}\) is on average narrower than that between the corresponding biased quantities. As a result, democratic momenta are (slightly) more closely aligned with the Poynting vector than their biased counterparts. Our investigations in randomly polarized fields did not reveal more striking differences between biased and democratic momenta, and we believe all qualitative descriptions given in previous sections to hold for democratic currents as well. It is enlightening at this point to backtrack on our assumption of _randomly polarized_ plane wave components, to consider instead random fields with pure helicity \(\sigma=\pm 1\). 
This amounts to fixing \(\beta_{n}\) to \(0\) or \(\pi\) in (4), and enforces \(\mathbf{H}=-i\sigma\mathbf{E}\) such that biased and democratic quantities become equal (we denote them by a \(\sigma\) superscript). As detailed in the SI, this adds new non-zero correlations between variables in our statistics, though values of local averages that were already non-zero in the randomly polarized case are unaffected. Taking these new correlations into account, we can proceed through similar calculations. It is however easy to predict what the distributions will be, as democratic momenta always split into two independent terms originating from components of opposite helicity \(\mathbf{P}=[\mathbf{P}^{+}+\mathbf{P}^{-}]/2\)[18]. In a randomly polarized field, this becomes a sum of two independent identically distributed variables, whose distribution simply results from the self-convolution of the distribution for a pure helicity term. This easily appears considering the Fourier transform form of our calculations (see SI). Distributions in pure helicity fields are also shown and checked against numerics in Figure 4.a), and they are broader than all distributions in the randomly polarized case. This is likely explained by a weaker "suppression of interference" effect, since there is now even less independence between the different components of the EM field. Finally, it was recently shown by Aiello that for _instantaneous_ (that is, not time-averaged) democratic quantities, the fast-oscillating double-frequency terms Figure 3: **Vorticity distributions.** Analytical (lines) and numerical (dots) probability distributions for the magnitudes of the vorticities of the Poynting, orbital and spin currents. Numerical data was obtained from \(10^{5}\) independent realizations of the random field, each containing \(N=10^{5}\) plane waves. also happen to be the cross-helicity terms [32; 33]. Consequently, cycle-averaging becomes equivalent to ignoring cross-helicity terms, and has therefore no effect on democratic quantities in pure helicity fields. For this reason, the distributions derived here for pure helicity fields are also expected to be the magnitude distributions for _instantaneous_ democratic momenta (for which the nature of the polarization should be irrelevant). Extending our approach to general time-dependent polychromatic fields is beyond the scope of this article, but represents an intriguing avenue, that could highlight profound relations between electric-magnetic democracy, helicity and time-averaging. ## Concluding remarks We have investigated various statistical properties of the Poynting, orbital and spin optical momenta in generic isotropic random light fields. Non-paraxiality was found to increase the discrepancy between Poynting and orbital flows, as the spin momentum unexpectedly becomes equivalent in magnitude to the orbital one. Deriving correlation functions, we were able to describe the distinct spatial structures of the orbital and spin currents, the former arranging in broad channels of energy flow akin to those found in a scalar random field, when the latter has higher vorticity and changes direction on a subwavelength scale. Upon local averaging over a wavelength-sized volume, the spin current rapidly averages out, leading the orbital and Poynting currents to reconcile. Still, the very different behaviour of the orbital and spin currents interrogates what our approach would reveal in other types of waves. 
Indeed, the field-theoretic formalism decomposing the kinetic (Poynting) momentum into canonical (orbital) and Belinfante (spin) parts is of broader generality, and these investigations could be extended and compared to waves describing other particles, such as electrons described by the Dirac equation whose current decomposition into orbital and spin contributions is known as the Gordon decomposition [34; 35], but also to turbulence in acoustic [36] and gravity water waves [37], the latter extensions appearing very natural considering that results from fluid dynamics were used in the present study. The spin _angular_ momentum density of all types of waves could also be studied, as it is arguably the more relevant quantity from a field-theory perspective, the Belinfante momentum being constructed from it. Further investigations of the electromagnetic case could characterize the generic singularities of the optical currents (isolated points in 3D space) and the statistical geometry of the flows around them, something that has so far only been explored for non-generic zeros of the full complex electric field [31]. More advanced correlation functions (involving more than two positions, evaluated near extrema, etc...) could reveal finer features of the optical currents as well ; random fields generally offer endless possibilities of statistical investigation [21]. Finally, there appears to be profound links to uncover in relating electric-magnetic democracy, helicity and time-averaging. This prompts the extension of our approach to general time-dependent fields, which could require introducing the vector potentials for defining instantaneous momenta, and the weighing of plane wave components by a power spectrum [22; 25]. This could represent a step towards better understanding of the spin-orbit decomposition of optical momentum, which as of today remains largely confined to the monochromatic case. ## Data availability statement See the Supplementary Information for details on the parameterization of the random fields, tables of elementary averages and correlation functions, general strategies for our computations, and detailed derivations. ## Acknowledgements We are grateful to Luke Hands, Nikitas Papasimakis, Michael Berry and Konstantin Bliokh for helpful discussions. MRD acknowledges support from the EPSRC Centre for Doctoral Training in Topological Design (EP/S02297X/1). The Ecole Normale Superieure (Paris) is gratefully acknowledged for TG's fellowship. Figure 4: **Statistics of democratic momenta in randomly polarized and pure helicity fields.** (a) Analytical (lines) and numerical (dots) probability distributions for the magnitudes of the Poynting, orbital and spin (democratic) momenta in randomly polarized fields (the thin dashed curve shows the distribution for the biased momenta of Figure 1), and in pure helicity fields (\(\mathbf{P}^{\sigma}\)). (b) Distribution of the angles between democratic momenta in randomly polarized fields, obtained numerically. Numerical data in this figure was obtained from \(10^{5}\) realizations of the random field, each containing \(N=10^{3}\) plane waves.
2305.19734
Knowledge Base Question Answering for Space Debris Queries
Space agencies execute complex satellite operations that need to be supported by the technical knowledge contained in their extensive information systems. Knowledge bases (KB) are an effective way of storing and accessing such information at scale. In this work we present a system, developed for the European Space Agency (ESA), that can answer complex natural language queries, to support engineers in accessing the information contained in a KB that models the orbital space debris environment. Our system is based on a pipeline which first generates a sequence of basic database operations, called a program sketch, from a natural language question, then specializes the sketch into a concrete query program with mentions of entities, attributes and relations, and finally executes the program against the database. This pipeline decomposition approach enables us to train the system by leveraging out-of-domain data and semi-synthetic data generated by GPT-3, thus reducing overfitting and shortcut learning even with a limited amount of in-domain training data. Our code can be found at \url{https://github.com/PaulDrm/DISCOSQA}.
Paul Darm, Antonio Valerio Miceli-Barone, Shay B. Cohen, Annalisa Riccardi
2023-05-31T10:55:41Z
http://arxiv.org/abs/2305.19734v1
# Knowledge Base Question Answering for Space Debris Queries ###### Abstract Space agencies execute complex satellite operations that need to be supported by the technical knowledge contained in their extensive information systems. Knowledge bases (KB) are an effective way of storing and accessing such information at scale. In this work we present a system, developed for the European Space Agency (ESA), that can answer complex natural language queries, to support engineers in accessing the information contained in a KB that models the orbital space debris environment. Our system is based on a pipeline which first generates a sequence of basic database operations, called a program sketch, from a natural language question, then specializes the sketch into a concrete query program with mentions of entities, attributes and relations, and finally executes the program against the database. This pipeline decomposition approach enables us to train the system by leveraging out-of-domain data and semi-synthetic data generated by GPT-3, thus reducing overfitting and short-cut learning even with a limited amount of in-domain training data. Our code can be found at [https://github.com/PaulDrm/DISCOSQA](https://github.com/PaulDrm/DISCOSQA). ## 1 Introduction Space debris are uncontrolled artificial objects in space that are left in orbit either during normal operations or due to malfunctions. Collisions involving space debris can generate secondary debris which can cause more collisions, potentially leading to a runaway effect known as "Kessler Syndrome" (Kessler and Cour-Palais, 1978; Kessler et al., 2010), which in the worst-case scenario could make large ranges of orbits unusable for space operations for multiple generations. Therefore, space agencies have established departments responsible for cataloging the space debris environment, which can be used for space traffic management, collision avoidance, re-entry analysis, and raising public awareness of the problem.1 Footnote 1: [https://tinyurl.com/44tc2444](https://tinyurl.com/44tc2444) The European Space Agency (ESA) has catalogued over 40,000 trackable and unidentified objects in its DISCOS (Database and Information System Characterizing Objects in Space) Knowledge Base (KB) (Klinkrad, 1991; Flohrer et al., 2013). Accessing this information efficiently often requires technical expertise in query languages and familiarity with the specific schema of DISCOS, which may fall outside the skillset of the engineers searching for relevant information in the database. In this project, we developed a question answering system for the DISCOS KB. This deployed prototype enables ESA engineers to query the database with complex natural language (English) questions, improving their ability to make informed decisions regarding space debris. Recent breakthroughs in open question answering have been achieved using large language models that have been fine-tuned as dialog assistants, such as ChatGPT.2 These models, however, are black boxes that store knowledge implicitly in their parameters, which makes it hard to guarantee that their answers are supported by explicit evidence, to understand their failures, and to update them when the supporting facts change. In contrast, parsing a question into a query program and then executing it on an explicit KB is guaranteed to provide a factually correct answer, provided the KB and query program are correct. Figure 1: Two representative queries for DISCOS and their decomposition according to the Program Induction method.
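To make the pipeline concrete, the toy sketch below executes the KoPL-style program of Figure 1, Find('Saturn V') -> QueryAttr('mass'), against a dictionary-backed stand-in for the KB. The entity values, attribute names, and helper functions are invented for illustration and do not reflect the actual DISCOS schema or our implementation.

```python
# Toy KB: entity name -> attribute dictionary (values are illustrative)
TOY_KB = {
    "Saturn V": {"concept": "LaunchVehicle", "mass": 2_970_000},
    "Sputnik 1": {"concept": "Spacecraft", "mass": 83.6},
}

def Find(name):
    """Locate an entity in the KB by name."""
    return TOY_KB[name]

def QueryAttr(entity, attribute):
    """Read an attribute of a previously located entity."""
    return entity[attribute]

def execute(program):
    """Run a KoPL-style program: a list of (function, args) pairs,
    each function consuming the previous function's output."""
    result = None
    for func, args in program:
        result = func(result, *args) if result is not None else func(*args)
    return result

# Find('Saturn V') -> QueryAttr('mass'), as in Figure 1
print(execute([(Find, ("Saturn V",)), (QueryAttr, ("mass",))]))
```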
Our approach is particularly useful for applications such as satellite operations where accuracy and reliability are critical. The main challenge for this project was that no training set or example questions were available for the DISCOS KB. This issue, combined with the large number of unique and diverse objects in the database, precluded a straightforward application of common supervised learning techniques. Although possible strategies for solving this task, such as direct semantic parsing of the query with seq2seq models, were identified in the literature, they suffer from problems with compositional generalization (Herzig et al., 2021; Furrer et al., 2020). Furthermore, very little work has been done on generalizing to KB components that were never seen during training (Cao et al., 2022; Das et al., 2021; Huang et al., 2021). To overcome these challenges, we apply and adapt a methodology from the literature called Program Transfer (Cao et al., 2022) to significantly reduce the dataset required for adequate generalization over the complete DISCOS KB. This is a two-step approach. For each user query, a program sketch is first predicted, consisting of a sequence of query functions whose arguments are either variables or placeholders; then the representation of the query is compared to the representations of the KB entities in order to fill the placeholders with arguments relevant to the query text. The underlying query language of this approach is called Knowledge-oriented-Programming-Language (KoPL), for which two representative example questions are shown together with their decomposition into sketch and arguments in Figure 1. We also conduct a data collection study with domain experts, and we apply a data augmentation pipeline leveraging the underlying ontology of the KB and prompting a Large Language Model (LLM) to automatically generate more training examples. The architecture was retrained with different domain-specific LMs and baselines to determine the benefits of using a domain-specific pre-trained encoder. The main contributions of this paper are: * Applying and adapting a methodology described in the literature for complex knowledge base question answering (CKBQA) on a novel industry-relevant database, with a large and dynamic set of unique entities; * Collecting a new dataset on this database from domain experts and leveraging the in-context learning capability of LLMs for data augmentation on it; * Evaluating the use of domain-specific LMs as different encoders on our curated dataset; * Demonstrating the effectiveness of the approach by achieving comparable results to general-purpose LLMs. ## 2 Related Work **Low-resource CKBQA.** Pre-trained language models have demonstrated state-of-the-art performance in semantic parsing for complex question answering on KBs where the same logic compounds are contained in both the training and validation sets (Furrer et al., 2020). However, they struggle with compositional generalization, where the "compounds" (combinations) of components are diverse between training and validation, even if all components (entity, relation, program filters) have been seen during training (Herzig et al., 2021). Das et al. (2021) explored retrieval-based methods to pick the top \(n\) similar examples from the training set and use them as additional input for the prediction. In theory, this would make it possible to reason over changes on the KB by only adding new examples to the training set, without the need of retraining the whole model.
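A minimal sketch of this retrieval idea is given below; it is our own illustration, with TF-IDF similarity standing in for whatever encoder a production system would use, and all names our own.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def top_n_examples(query, train_questions, n=3):
    """Return the n training questions most similar to the query,
    to be prepended as additional context for the parser."""
    vec = TfidfVectorizer().fit(train_questions + [query])
    q = vec.transform([query])
    scores = cosine_similarity(q, vec.transform(train_questions))[0]
    return [train_questions[i] for i in scores.argsort()[::-1][:n]]
```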
Another approach is adapting the architecture of language models to incorporate the structure of a KB directly into the prediction. For example, Huang et al. (2021) ranked Freebase KB entities using an ElasticSearch API. When generating the query program, a special token is predicted instead of an entity, which in a post-processing step is replaced by the top-ranked entity identified by ElasticSearch. Although this achieves good results, it is unclear how it would translate to queries with multiple entities, and it inherits the typical limitations of ElasticSearch. Another method is the Program Induction and Program Transfer method, where a sequence of _functions_, or a _sketch_, is generated from the input query. A single function here stands for a basic logic operation on the KB. The premise is that the sketch depends mostly on the formulation of the input query and less on the KB; therefore, training on a source domain can transfer to inference on a target domain. In a second step, the particular inputs that each function in the sketch receives are identified from the elements of the KB by comparing their representations with the representation of the model at the specific function. During training, the goal is to create sophisticated representations for the components of the KB, as well as for the query, that can generalize to components which were not seen during training (Cao et al., 2022). **Domain-specific Language Models.** Using self-supervised pre-trained language models is the de-facto standard approach in modern natural language processing (NLP). These models are trained on large volumes of text, learning representations that can generalize over natural language variations and capture long-term dependencies between the input tokens. During fine-tuning, these learned features and representations commonly lead to improved results on the downstream task. While the interplay between the amount of in-domain data, model capacity and training regime is complex (Zhao et al., 2022), as a general rule, training these models on in-domain task-related text improves the performance on this task (Maheshwari et al., 2021; Berquand et al., 2021; Arnold et al., 2022; Joshi et al., 2023). An alternative approach involves modifying the pre-training objective according to the domain. In the context of question answering with tabular data, it was explored how a language model could function during pre-training as a SQL-query executor, predicting the results of an automatically created SQL query on the corresponding concatenated table, in order to elicit an understanding of the underlying dependencies in tables. This approach resulted in improved performance on related downstream tasks (Liu et al., 2022). For KBs, an architecture called Kepler was tested that, together with the standard masked-language-modelling loss, also minimizes a contrastive loss on related KB triples, where both the correct triples and randomly perturbed incorrect triples are scored, and the loss penalizes scoring the perturbed triples higher than the correct ones (Wang et al., 2021). ## 3 Dataset Collection and Use We used two sources of data for the study. The first one is the original dataset which introduced the KoPL format for general-domain knowledge base question answering (described in §3.1). The second one is for our particular domain, with information about objects in space (§3.2).
We also describe in §3.3 how we further collected training data for fine-tuning a language model, and how we augmented this dataset with new question-answer pairs (§3.4). ### The KQA Pro Dataset The KQA Pro dataset (Cao et al., 2022) is a large-scale dataset for complex question answering over a knowledge base. It contains over 120,000 diverse questions for a subset of entities from the Freebase KB and their associated relations, attributes and concepts. The reasoning process to arrive at a solution is provided in the form of a Knowledge-oriented Programming Language (KoPL), which was designed specifically for this dataset. The question-program pairs were automatically generated by randomly sampling the extracted KB and using novel compositional templates to create a canonical form of the question and associated answer. To increase linguistic diversity, these questions were then paraphrased and checked by Amazon Mechanical Turk workers (Cao et al., 2022). ### The DISCOS Database The ESA Database and Information System Characterising Objects in Space (DISCOS)3 is a regularly updated source of information about launches, launch vehicles, objects, spacecraft registration numbers, mission-specific information (mass, mission objective, operator), and, most importantly, orbital and fragmentation data histories for all trackable as well as unclassified objects. With currently over 40,000 objects being tracked, DISCOS provides rich information for ESA offices monitoring and managing space debris, collision avoidance, re-entry analyses, and contingency support. Other actors, such as research institutes, government organisations, or industrial companies from ESA Member States, can apply for an account to access the information provided by DISCOS free of charge. A comparison between the DISCOS KB and the KB used for the KQA Pro dataset can be seen in Table 1. ### Data Collection As there was no question-program-answer training set available on the DISCOS KB, potential queries had to be collected from domain experts via a simple user interface. The interface allowed domain experts to input queries of interest, along with their username and feedback (see Appendix E). Based on the domain experts' feedback, a manually labeled baseline dataset of around 102 question-KoPL-program pairs was created. ### Data Augmentation To extend the limited baseline dataset and increase the diversity of potential queries, we augmented the dataset by creating paraphrases of the questions, which has been shown to add robustness to question answering systems [16, 15]. For each unique program sketch, the schema of the ontology was used to alter the arguments of the single functions in that sketch. For example, for the program _Find('Saturn V')_\(\rightarrow\)_QueryAttr('mass')_, the concept of the _Find_ function argument (_Saturn V_) was identified as _LaunchVehicle_. Subsequently, the ontology was queried for other entities of the same concept, and also for associated attributes to substitute the argument of the _QueryAttr_ function. In order to generate appropriate questions for each "augmented" program, we used the few-shot and in-context learning capabilities of GPT-3 (Brown et al., 2020). A prompt was curated, consisting of question-program pairs from the manually labeled dataset, so that GPT-3 could generate a question for an unlabelled augmented program.
The sampling included all programs from the manual dataset with the same sketch as the augmented one, as well as examples with the same relation type, which proved effective for generating a correct question. Additionally, an instruction section was added to the prompt, consisting mainly of a list of expansions for acronyms commonly used in the KB's ontology. This ensured that acronyms were expanded correctly and reduced the likelihood of hallucinations by the LLM for these acronyms. Examples of generated questions with their corresponding programs can be found in Appendix D. The prompt schema can be found in Appendix A. The benefits of an automatically generated dataset include cost-effective data sample generation, while also ensuring a balanced distribution of complex and simple queries, as well as of common (e.g., Saturn V, Hubble Space Telescope) and uncommon arguments (e.g., L-186, PSLV-Q). With the use of LLMs, the generated questions also already exhibit slightly different semantic and syntactic structure, as the generation process is inherently statistical and can be adjusted with parameters such as the temperature. LLMs can also leverage their stored world knowledge: for example, we observed a Chinese-language entity name being automatically rendered as the English designation "Tianzhou 4". More examples can be seen in Appendix D. However, it is important to note that the question generation is not foolproof and could be further optimized, e.g., through additional prompt engineering or a subsequent data cleaning procedure. ## 4 Methodology and Model Architecture We describe in §4.1 the problem that our model aims to solve. In §4.2, we describe the modifications we applied to the methodology of Cao et al. (2022). ### Problem Definition The task is defined in the following way: given a natural language question \(Q\), we want to predict a program \(y\) that traverses the knowledge base \(K\) and produces an answer \(A\) for \(Q\). This means: \[A=y(Q,K),\quad K=\{E,R,C,A\},\] where \(E,R,C,A\) represent respectively the mutually disjoint sets of entities, relations, concepts and attributes in \(K\). More specifically, given the set of entities \(E_{t}\) seen in the training set, the task is to generalise to the set \(E_{v}\) of entities unseen during training, with \(E=E_{t}\cup E_{v}\), \(E_{t}\cap E_{v}=\emptyset\). Therefore, we apply a program induction and transfer methodology, predicting for \(Q\) a program, i.e., a tuple of actions, \[y(Q,K)=(o^{1}(arg^{1}),\dots,o^{t}(arg^{t})),\] \[o^{i}\in O,\forall i=1,\dots,t;\quad arg^{i}\in E\cup R\cup C\cup A,\] where \(O\) is a set of predefined basic functions executable on the KB, each of them taking as inputs arguments from one of the disjoint sets of the KB. \begin{table} \begin{tabular}{l r r r} \hline **Dataset** & **\#Entities** & **\#Relations** & **\#Concepts** \\ \hline KQA Pro & 16,960 & 363 & 794 \\ \hline DISCOS & 73,354 & 32 & 39 \\ \hline \end{tabular} \end{table} Table 1: Entity, relation, and concept counts for the KQA Pro and DISCOS KBs. In the first step, the sketch \([o^{1},\ldots,o^{t}]\in O^{t}\) is generated by encoding the question \(Q\) with a pre-trained language model and using its representation as the starting point for a GRU-based decoder with an attention mechanism (Bahdanau et al., 2015; Cao et al., 2022).
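To make this first, sketch-generation step concrete, the following is a minimal PyTorch-style sketch of such a decoder. The encoder, the size of the function inventory, and all names here are illustrative assumptions, not the project's actual code.

```python
import torch
import torch.nn as nn

NUM_FUNCTIONS = 27   # assumed size of the KoPL function inventory (illustrative)
HIDDEN = 768         # hidden size matching a BERT/RoBERTa-style encoder

class SketchDecoder(nn.Module):
    """Autoregressive decoder over KoPL function ids, conditioned on the question."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_FUNCTIONS, HIDDEN)
        self.gru = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.attn = nn.MultiheadAttention(HIDDEN, num_heads=1, batch_first=True)
        self.out = nn.Linear(HIDDEN, NUM_FUNCTIONS)

    def forward(self, question_states, prev_functions):
        # question_states: (B, T, H) token representations from the encoder
        # prev_functions:  (B, S) ids of the previously generated functions
        h0 = question_states[:, :1].transpose(0, 1).contiguous()   # [CLS]-like start state
        dec, _ = self.gru(self.embed(prev_functions), h0)
        ctx, _ = self.attn(dec, question_states, question_states)  # attend over the question
        return self.out(dec + ctx)   # (B, S, NUM_FUNCTIONS) logits for the next function

# e.g. logits = SketchDecoder()(torch.randn(2, 16, HIDDEN),
#                               torch.zeros(2, 4, dtype=torch.long))
```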
The input arguments for each function \(o^{1},\ldots,o^{t}\) are chosen in a second step, by scoring the encoded representation \(R_{i}^{t}\) of the \(i\)-th candidate in the KB at position \(t\) in the sequence against the representation \(h^{t}\) of the decoder at position \(t\). The probability is computed as \[p(arg^{t}=i\mid h^{t})=\text{softmax}_{i}(R_{i}^{t}\cdot h^{t}),\] and the most likely candidate is chosen as the input argument for the function. We refer the reader to Cao et al. (2022) for the details. In addition to the standard BERT model, we also train RoBERTa-based (Liu et al., 2019) domain-specific encoders (see Appendix B). ### Enhancements to Program Transfer At the beginning, the prediction for entities, relations, and operations (\(\{<,>,=\}\)) had to be re-implemented, as it was not available in the associated source code.4 Footnote 4: https://github.com/thu-keg/programtransfer The standard methodology of comparing the representations to all inputs during training posed another challenge for entities: during the training step, the gradients for all of them would need to be stored, which exceeded the available memory on the system. As a result, instead of comparing the representation at each sequence position to all entities in the KB, only the subset of entities in the current batch are compared with each other.5 In addition, random samples from the complete entity set were selected and added to the batch, to also provide a training signal for entities not occurring in the training set. Footnote 5: Note that these entities are part of the training set entities, hence are all known a priori. More formally, let \(X^{(j)}\) be the \(j\)-th batch from the training set, \(E\) be the overall set of entities, and \(X^{(j)}_{E}\) be the set of entities of the training batch. At each training batch \(j\), we use the set of entities \(e^{(j)}\) to compare the probability of their representations against the candidate input argument in batch \(j\), as defined by: \[e^{(j)}=X^{(j)}_{E}\cup f^{(j)}, \tag{1}\] where \(f^{(j)}=\{e_{1},e_{2},\ldots,e_{n}\}\) are \(n\) randomly sampled entities from \(E\) without replacement. Furthermore, for extracting time values from queries we use the SUTime library (Chang and Manning, 2012), whereas for extracting numerical patterns we use a simple regex schema. In addition, for parsing the predicted inputs into a form that can be read by the KoPL engine, a parser function had to be written, as well as a method for assigning the dependencies between the single basic functions. Other small adaptations of the original approach included the addition of a normalization layer between the linear layers for the prediction of the single arguments, as well as masking out padding tokens from the loss calculation of the function-generation component. A schematic overview of the pipeline can be seen in Appendix C. ## 5 Experiments We next describe our dataset (§5.1), the way we trained our models (§5.2) and our results (§5.3). ### The DISCOS-Questions-Programs Dataset The creation of the training and validation datasets involved the following steps: First, all the unique sequences of functions were extracted from the manually labeled dataset. Then, ten augmented programs were generated for each unique program in the manually labeled dataset by substituting the inputs and generating questions as described in the data augmentation methodology; a minimal sketch of this substitution step is shown below.
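The following self-contained Python sketch illustrates the idea of the ontology-driven argument substitution. The toy `ontology` dictionary and the function names are hypothetical stand-ins for the actual DISCOS ontology and pipeline code.

```python
import random

# Toy stand-ins for the DISCOS ontology and the KoPL program format;
# all names here are illustrative, not the project's actual API.
ontology = {
    "LaunchVehicle": ["Saturn V", "Ariane 5", "PSLV-Q", "L-186"],
    "attributes": {"LaunchVehicle": ["mass", "height", "thrustLevel"]},
}

def augment(program, concept="LaunchVehicle", n=10):
    """Keep the function sketch of `program`, substitute same-concept arguments."""
    variants = []
    for _ in range(n):
        substitutions = {
            "Find": random.choice(ontology[concept]),
            "QueryAttr": random.choice(ontology["attributes"][concept]),
        }
        variants.append([(fn, substitutions.get(fn, arg)) for fn, arg in program])
    return variants

seed = [("Find", "Saturn V"), ("QueryAttr", "mass")]
for prog in augment(seed, n=3):
    print(prog)  # each variant is then paired with a GPT-3-generated question
```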
It is noteworthy that the entities included in the manually labeled dataset were not considered as candidates for the data augmentation. The questions for the augmented examples were generated using the large language model _code-davinci-002_.6 Through trial and error, a temperature of 0.75 was chosen for generation, as it produced diverse examples while still capturing the meaning of the associated KoPL program. Finally, the augmented dataset was split into training and validation datasets: the original manually labeled dataset, together with 5% of randomly sampled examples from the augmented dataset, was used as the validation set, and the rest of the augmented dataset was used for training. In a subsequent filtering step, programs that appeared in the validation set were filtered out of the training dataset. The resulting DISCOS-Questions-Programs (DQP) dataset consists of 905 samples for training and 151 samples for validation. ### Model Training Preliminary results indicated that training directly on the DQP dataset did not lead to convergence in entity prediction. We therefore proceeded to first train on the out-of-domain original KQA Pro dataset and then further train on the DQP dataset. The hyperparameters for pretraining on the KQA Pro dataset were adopted from the original paper. For training on the DQP dataset, the hyperparameters were left unchanged, except for an adapted learning rate for the decoder, which was set to \(10^{-4}\) instead of \(10^{-3}\). For the experiments, different domain-adapted models were used as the encoder and then compared to each other as well as to baseline models. For more information about the domain-adapted models, see Appendix B. ### Results The analysis of the different models was challenging, as multiple components (functions, entities, relations, etc.) need to be predicted to arrive at the full program that can be run against the KB. Therefore, the analysis was divided into two parts. Firstly, the accuracy at the lowest validation loss for each component was compared separately between all the different trained models, providing an overview of each model's best predictive performance for each component. The results can be seen in Table 2. Although no model consistently outperformed the others on all components, CosmicRoBERTa (Berquand et al., 2021) achieved the highest performance on four out of six accuracies. The validation loss curves showed that, during training, the validation accuracy can drop for one component while it rises for another. The validation loss curves can be found in Appendix F. For deployment of a single model, it is necessary to identify a checkpoint where the model predicts accurately across all components. To obtain a more holistic view of the performance, the models were therefore also compared by summing up the normalized validation losses for each component. The results of summing up the normalized validation losses with equal weights are shown in Table 3. On average, the RoBERTa-base model has the lowest validation loss, followed by the Kepler model. However, no model consistently outperforms the others in terms of prediction accuracy. From an application perspective, the most important metrics are the accuracy in predicting functions and entities. To obtain the correct answer, the most crucial step is to predict the correct sequence of functions, and for multi-hop queries, it is essential to identify the correct starting entity.
In addition, the number of unique entities is orders of magnitude higher than the number of unique attributes or relations in the KB, which makes identifying the right entity more difficult. Among the models considered, CosmicRoBERTa stands out as the model that performs well both in predicting functions and entities. Unfortunately, the Kepler model only sporadically showed improvements over RoBERTa-base. The reason for this could be its very limited pre-training corpus, which was significantly smaller than the one used for CosmicRoBERTa. As a result, for the purpose of deploying a single model, the decision was made to choose CosmicRoBERTa. The results of the prediction are overall very impressive, with high accuracy scores over all components. It is especially worth highlighting that, of the over 40,000 entities in the database, only 400 appear in the training and validation set, and only 1 of the 71 entities in the validation set also appears in the training set. Despite this, some models achieve an accuracy of over 90% in predicting entities, demonstrating their strong ability to generalize to entities not seen during training, which was a critical user requirement for our system.

\begin{table} \begin{tabular}{l r r r r} \hline **Accuracy** & **BERT** & **Kepler** & **RoBERTa** & **CR** \\ \hline Function & 0.797 & _0.812_ & 0.796 & **0.826** \\ Entity & 0.874 & 0.887 & _0.907_ & **0.927** \\ Attribute & **0.955** & _0.948_ & _0.948_ & _0.948_ \\ Relation & 0.938 & _0.983_ & _0.983_ & **1** \\ Concept & 0.872 & _0.896_ & **0.92** & 0.89 \\ Operations & **1** & **1** & **1** & **1** \\ \hline \end{tabular} \end{table} Table 2: Accuracy at the lowest validation loss for each respective component. CR stands for CosmicRoBERTa. The best score for each component is highlighted in **bold**, second best in _italics_.

\begin{table} \begin{tabular}{l r r r r} \hline **Accuracy** & **BERT** & **Kepler** & **RoBERTa** & **CR** \\ \hline Min. sum valid loss & 0.21 & _0.171_ & **0.11** & 0.252 \\ \hline Function & **0.79** & 0.734 & 0.775 & _0.789_ \\ Entity & 0.874 & 0.887 & _0.894_ & **0.927** \\ Attribute & 0.935 & **0.948** & _0.941_ & 0.915 \\ Relation & **0.978** & 0.95 & _0.969_ & _0.969_ \\ Concept & 0.853 & 0.8841 & _0.908_ & **0.927** \\ Operations & **1** & _0.97_ & **1** & 0.94 \\ \hline \end{tabular} \end{table} Table 3: Accuracy at the checkpoint with the lowest summed validation loss over all components. The best score for each component is highlighted in **bold**, second best in _italics_. CR stands for CosmicRoBERTa.

In addition, we benchmarked our method against recently released general-purpose models such as ChatGPT7 and GPT-4 (OpenAI, 2023). For each model, training set examples were randomly added to the prompt until the respective model's context limit was reached. The models were then prompted to generate the right program for a question from the validation set. Our methodology predicts the complete program correctly with an overall accuracy of 48%, which is higher than ChatGPT's performance of around 25% and comparable with GPT-4's performance of around 50%. A detailed comparison can be found in Table 4. This further demonstrates the efficiency of our methodology, as it can be run at a fraction of the necessary compute, as well as locally on consumer-grade hardware. Footnote 7: https://openai.com/blog/chatgpt ## 6 Conclusion We developed a system for ESA to address the challenge of answering complex natural language questions on their DISCOS KB.
The main obstacles included a lack of training data, the diversity and regular updates of the database entries, and the need for an economically feasible solution. The Program Transfer methodology for complex KBQA was selected for its potential to reduce the number of required training samples through transfer learning, and for its capability to generalize to examples which were never seen during training. A data collection study was conducted with domain experts, and the collected data was then augmented by leveraging the underlying ontology of the KB and prompting a large language model to generate fitting questions. The architecture was retrained with different domain-specific models and baselines to determine the benefits of using a domain-specific pre-trained encoder. Although the results were mixed, the best performance was achieved by CosmicRoBERTa, a model pre-trained on a space-domain corpus. With an accuracy of over 90% in predicting the right entity on the validation set, out of the vast pool of candidate entities, the method demonstrates a strong ability to predict the correct input arguments for unseen examples. This is further demonstrated in the comparison with general-purpose models such as ChatGPT or GPT-4, where our method achieved competitive results. This approach therefore has the potential to be extended to other databases and query languages in the future, especially in scenarios where there are few to no training examples. ## Limitations The study has several clear limitations. Firstly, the training and validation datasets used in this study are still relatively small. A larger dataset would give more robust results for comparing different encoders. Additionally, the experiments were only conducted on the ability to generalize to unseen entities and not on the ability to generalize to unseen sketch types, which is also of key importance when addressing low-resource CKBQA. Moreover, the methodology used in this study relies on annotated question-program pairs, which are expensive to collect. Data consisting only of question-answer pairs, or even of questions with an indicated difficulty based on whether the model was able to answer them, would be easier to collect. While the models achieve high accuracy on most of the knowledge base components, overfitting can occur at different stages during training, leading to high accuracy for one component at one training step but poor accuracy for another component at another step. In the future, revising the training procedure or the model setup may help address this issue.

\begin{table} \begin{tabular}{l c c c} \hline \hline & **CosmicRoBERTa** & **GPT-4** & **ChatGPT-3.5** \\ \hline Functions & **0.79** & **0.79** & 0.516 \\ Entities & **0.93** & 0.86 & 0.41 \\ Relations & **0.97** & 0.87 & 0.32 \\ Concepts & **0.93** & 0.82 & 0.61 \\ Attributes & **0.92** & 0.84 & 0.72 \\ \hline Overall & 0.48 & **0.5** & 0.25 \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy of the deployed model (CosmicRoBERTa) versus the general-purpose models ChatGPT-3.5 and GPT-4. The best score for each component is highlighted in **bold**.

## Ethics Statement In any safety-critical context like spacecraft operations, there is an inherent risk associated with the use of automatic methods supporting human operators. The transparency of the predicted programs can mitigate this issue, as it allows even an engineer with limited knowledge of the underlying query method to interpret the program to some degree.
In any case, the developed systems might support human analysis and decision making by decreasing workload, but they cannot replace it. As mentioned before, the DISCOS KB can be accessed after creating a user account. We plan on publishing the created question-program pairs and trained models online in accordance with ESA's guidelines. ## Acknowledgments This study was funded by the European Space Agency (ESA) under the Intelligent Operational Assistant project with contract No. AO/1-10776/21/D/SR. This work was also supported by the UKRI Research Node on Trustworthy Autonomous Systems Governance and Regulation (grant EP/V026607/1), which provided additional funding for Antonio Valerio Miceli-Barone. We thank Evridiki Ntagiou (ESA) and Paulo Leitao (Vision Space) for their roles as project leader and industrial partner, respectively. We are grateful for Callum Wilson's (University of Strathclyde) help in creating the GraphDB KG from the original DISCOSweb API, which was used to create the DISCOS KB. We also thank Sean Memery and Maria Luque Anguita (University of Edinburgh) for their contribution to the KEPLER data collection, and everyone who took part in the data collection for the first iteration of the DISCOSWeb-questions dataset. We also thank Marcio Fonseca and the reviewers for useful feedback on the paper.
2309.03491
Holographic renormalized Entanglement and entropic $c-$function
We compute holographic entanglement entropy (EE) and the renormalized EE in AdS solitons with gauge potential for various dimensions. The renormalized EE is a cutoff-independent universal component of EE. Via Kaluza-Klein compactification of $S^1$ and considering the low-energy regime, we deduce the $(d-1)$-dimensional renormalized EE from the odd-dimensional counterpart. This corresponds to the shrinking circle of AdS solitons, probed at large $l$. The minimal surface transitions from disk to cylinder dominance as $l$ increases. The quantum phase transition occurs at a critical subregion size, with renormalized EE showing non-monotonic behavior around this size. Across dimensions, massive modes decouple at lower energy, while degrees of freedom with Wilson lines contribute at smaller energy scales.
Mitsutoshi Fujita, Song He, Yuan Sun, Jun Zhang
2023-09-07T06:01:38Z
http://arxiv.org/abs/2309.03491v2
# Holographic renormalized Entanglement and entropic \(c-\)function ###### Abstract We compute holographic entanglement entropy (EE) and the renormalized EE in AdS solitons with gauge potential for various dimensions. The renormalized EE is a cutoff-independent universal component of EE. Via Kaluza-Klein compactification of \(S^{1}\) and considering the low-energy regime, we deduce the \((d-1)\)-dimensional renormalized EE from the odd-dimensional counterpart. This corresponds to the shrinking circle of AdS solitons, probed at large \(l\). The minimal surface transitions from disk to cylinder dominance as \(l\) increases. The quantum phase transition occurs at a critical subregion size, with renormalized EE showing non-monotonic behavior around this size. Across dimensions, massive modes decouple at lower energy, while degrees of freedom with Wilson lines contribute at smaller energy scales. ## 1 Introduction Quantum entanglement entropy stands as a pivotal concept in quantum mechanics, offering insight into the level of entanglement among distinct segments of a quantum system. By quantifying the entanglement between different components, this entropy provides a metric for the extent of shared information. Its implications span various facets of quantum mechanics, encompassing quantum information theory, black hole physics, and condensed matter physics. Notably, the entanglement entropy of a subsystem A quantifies the entangled degrees of freedom within a given quantum field theory [1, 2]. Within the context of condensed matter physics, this entropy diverges at critical points of quantum phase transitions, assuming the role of an order parameter [3]. This phenomenon encapsulates the geometric essence of field theories, manifested in an area law that draws parallels between subregion entanglement entropy and black hole entropy. The Ryu-Takayanagi formula establishes a holographic counterpart for entanglement entropy [5, 6, 7], which has emerged as a robust tool for dissecting strongly coupled systems traditionally resistant to conventional analysis. In specific contexts, this formula has served as an order parameter, signaling the onset of confinement/deconfinement phase transitions within confining gauge theories [10, 11, 12, 13, 14, 15].1 The transitions emanate from the interplay between two minimal surfaces, resulting in the post-transition confinement-phase entanglement entropy becoming trivial in the infrared limit. Moreover, the holographic entanglement entropy (HEE) serves as a probe of phase transitions in holographic superconductors [17, 18, 19, 20, 21, 22, 23], as well as of topological phases of matter [24]. Footnote 1: On the other hand, the holographic quark anti-quark potential can distinguish confinement and topological phases [16]. The entropic \(c-\)function concept provides deeper insights into entanglement entropy in quantum systems [8, 9]. It is the logarithmic derivative of the entanglement entropy with respect to the subsystem size, revealing the intricate interplay between entanglement and subsystem dimensions. The general entropic \(c-\)function, proposed by [10], efficiently quantifies degrees of freedom in confining theories and yields the central charge of the corresponding conformal field theory (CFT) [25]. In the context of a quantum field theory dual to an AdS soliton, the behavior of the entropic \(c-\)function, which decreases with increasing length, effectively serves as a probe of the deconfinement phase transition.
Recently, a study [26] computed the entropic \(c-\)function for a striped entangling surface in the same background. Intriguingly, this function displays non-monotonic behavior as a function of the background gauge field strength. Importantly, the entropic \(c-\)function for the strip incorporates both A-type and B-type anomalies, due to the coexistence of these two anomaly types. [28] derived constraints on anisotropic RG flows from holographic entanglement entropy. Our focus lies in assessing the degrees of freedom through the entanglement entropy of a spherical entangling surface, with a specific emphasis on anomaly effects. The renormalized entanglement entropy, calculated from the entanglement entropy of this spherical surface, offers a solution to the issue of cutoff dependence [27]. This renormalized quantity, independent of the cutoff, measures the degrees of freedom of quantum entangled states at an energy scale of \(1/l\), and yields the central charge in conformal field theory (CFT). In the context of four-dimensional (4D) CFT, it manifests as the A-type anomaly, in alignment with the C-theorem: the renormalized entanglement entropy decreases in the infrared (IR) limit, as anticipated. [29] derived the renormalized entanglement entropy for a kink region, which reduces to a universal positive finite term in the UV limit. The computation of the renormalized entanglement entropy for a spherical entangling surface within the AdS soliton framework with a gauge potential remains unexplored. The gauge potential in this background is interpreted as a twisted boundary condition along a circle in the cigar direction. It contributes to the negative Casimir energy of the dual field theory, which can eventually become positive. An intriguing aspect emerges from the interplay of Wilson lines, capable of inducing mass shifts in charged particles [30]. It is desirable to capture such alterations through the renormalized entanglement entropy. Conversely, in contrast to the analysis presented in [26] for a striped entangling surface in the same background, the renormalized entanglement entropy in \(R^{1,2}\times S^{1}\) quantum field theory exhibits solely B-type anomaly characteristics. In general, this quantity does not adhere to the C-theorem. It is therefore an engaging pursuit to investigate this aspect, including scenarios in higher-dimensional cases. This study introduces a holographic renormalized entanglement entropy (HREE), which encompasses the finite portion of the entanglement entropy. We extensively investigate HREE's behavior across diverse scenarios to uncover the universal properties of quantum phase transitions. Our analysis reveals a phase transition between disk and cylinder geometries. The dominance of the disk type is evident for small \(l\), while the cylinder shape prevails for larger \(l\). This critical size marks the occurrence of a quantum phase transition. Notably, this transition is an outcome of the large \(N\) limit and is absent in free theory. The anticipated function of the renormalized entanglement entropy is to quantify the degrees of freedom in the dual quantum field theory (QFT). We probe HREE's response to changes in operator mass and gauge potential. Specifically, as the operator mass decreases, we expect HREE to increase significantly for larger \(l\), given the decoupling of massive degrees of freedom. The rest of this paper is organized as follows. In section II, we analyze the UV structure of the entanglement entropy.
We derive the renormalized entanglement entropy of QFT dual to AdS solitons with gauge potential. We discuss properties of the renormalized entanglement entropy in both odd and even dimensions. Section III mainly focuses on holographic stress-energy tensors in AdS solitons with the gauge potential. In sections IV and V, we analyze the quantum phase transition of HREE in higher dimensional backgrounds. We end with conclusions and prospects in section VI. Some calculation details are presented in the appendices. ## 2 The UV structure of the entanglement entropy The \(d+1\)-dimensional AdS solitons with gauge potential exhibit a geometry akin to a cigar. In this setup, a compact circle gradually contracts to zero size in the bulk, completing the geometry. This behavior is detailed in [26]. The dual theory on \(R^{1,d-2}\times S^{1}\) transforms into a confining theory with a discernible energy gap in this context. Notably, this theory incorporates Wilson lines along the \(S^{1}\) direction, which inherently alters the boundary conditions. We calculate the entanglement entropy in this theory. The spherical entangling surface is chosen to have the topology \(S^{d-3}\times S^{1}\), where the entangling surface wraps another circle \(S^{1}\)[31]. For \(d=4\), \(S^{1}\times S^{1}\) is a cylinder with one identified direction, i.e., a torus. Because a spherical entangling surface for QFT on \(R^{1,d-1}\) has a different topology \(S^{d-2}\), we find that the UV scaling structure is different from those of QFT on \(R^{1,d-1}\). According to [31], the UV divergent structure of entanglement entropy \(S_{UV}\) is of the form \[S_{UV}=\frac{L_{\phi}}{R}S_{UV,0}, \tag{1}\] where \(S_{UV,0}\) is the UV structure of entanglement entropy of QFT \(S_{EE}^{(0)}\) on \(R^{1,d-1}\) and \(L_{\phi}\) is the periodicity along a circle \(S^{1}\) of the cigar. Two UV scaling structures are related to each other. Using (1) and operating differentiation on \(S_{EE}\), the renormalized entanglement entropy (the UV-independent part of the entanglement entropy) then becomes [31] \[S_{ren}=\frac{1}{R}f_{d}(R\partial_{R})RS_{EE}=L_{d}(R\partial_{R})S_{EE}\] \[=\begin{cases}&\frac{1}{(d-2)!!}R\partial_{R}(R\partial_{R}-2)\ldots(R\partial_ {R}-(d-3))S_{EE},\quad d=\text{odd},\\ &\frac{1}{(d-2)!!}(R\partial_{R}+1)(R\partial_{R}-1)\ldots(R\partial_{R}-(d-3 ))S_{EE},\quad d=\text{even},\end{cases} \tag{2}\] where \(f_{d}\) defines the operation of the renormalized entanglement entropy for \(R^{1,d-1}\)[27]: \[f_{d}(R\partial_{R})S_{EE}^{(0)}=\begin{cases}&\frac{1}{(d-2)!!}(R \partial_{R}-1)(R\partial_{R}-3)\ldots(R\partial_{R}-(d-2))S_{EE}^{(0)},\quad d =\text{odd},\\ &\frac{1}{(d-2)!!}R\partial_{R}(R\partial_{R}-2)\ldots(R\partial_{R}-(d-2))S_ {EE}^{(0)},\quad d=\text{even}.\end{cases} \tag{3}\] Recall that the first line in (2) is equivalent to the \(d-1\) dimensional renormalized entanglement entropy on \(R^{1,d-2}\) (the second line in (3)) up to coefficients. Especially we obtain \[S_{ren} =R\partial_{R}S_{EE}\quad\text{for }d=3, \tag{4}\] \[S_{ren} =\frac{1}{3}R\partial_{R}(R\partial_{R}-2)S_{EE}\quad\text{for }d=5. \tag{5}\] The formula (4) for QFT on \(R^{1,1}\times S^{1}\) corresponds to the well-established expression of the entropic \(c\)-function on \(R^{1,1}\). Through Kaluza-Klein reduction along \(S^{1}\), the renormalized entanglement entropy effectively embodies the 2-dimensional entropic \(c-\)function in the low-energy limit. 
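As a quick consistency check of these differential operators, one can verify symbolically that they annihilate typical cutoff-dependent terms. Below is a minimal sympy sketch using, as an illustrative input, a \(d=5\) divergence structure of the form appearing later in eq. (41); the coefficients are placeholders, not results of this paper.

```python
import sympy as sp

l, eps, Lphi = sp.symbols('l epsilon L_phi', positive=True)

def l_dl(expr):
    return l * sp.diff(expr, l)   # the dilatation operator l d/dl

# placeholder d=5 divergence structure of the type ~ l^2/eps^3 + const/eps
S_div = Lphi * (l**2 / (3 * eps**3) - sp.Rational(4, 9) / eps)

# d=5 operator of eq. (5): (1/3) l d_l (l d_l - 2) annihilates both pieces
print(sp.simplify(l_dl(l_dl(S_div) - 2 * S_div) / 3))   # -> 0
```

For even dimensions the analogous operator does not return zero but an \(\epsilon\)-independent term, consistent with the anomaly contribution discussed below.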
In systems respecting Lorentz symmetry, the 2-dimensional entropic \(c\)-function is both non-negative and monotonically increasing. For \(R\ll L_{\phi}\) (in the UV limit), the renormalized entanglement entropy mirrors the behavior of a 3-dimensional system. The subregion's topology is not a disk \(D\) but rather \(L\times S^{1}\) with an interval \(L\), while the entangling surface forms \(S^{1}\). Moving to formula (5) for QFT on \(R^{1,3}\times S^{1}\), it captures one variant of the 4-dimensional renormalized entanglement entropy on \(R^{1,3}\). Kaluza-Klein reduction along \(S^{1}\) approximates the renormalized entanglement entropy on \(R^{1,3}\) in the low-energy regime. However, the 4-dimensional renormalized entanglement entropy can be either negative or positive, displaying non-monotonic tendencies. In this scenario, the subregion's topology does not correspond to a ball \(B^{4}\) but rather to \(B^{3}\times S^{1}\), while the entangling surface takes the form of \(S^{2}\times S^{1}\). ### Renormalized entanglement entropy of 4d QFT This section examines the entanglement entropy and the renormalized entanglement entropy for various cases: a free scalar, a Dirac fermion, and a 4-dimensional conformal field theory (CFT) on \(R^{1,2}\times S^{1}\). Additionally, we provide an overview of the trace anomaly in general CFT, which is intricately linked to the logarithmic term present in the entanglement entropy. The entanglement entropy can be derived from the effective action \(w=-\log Z\) on a \(d(=4)\)-dimensional manifold featuring conical singularities. By taking the limit \(n\to 1\), the entanglement entropy assumes an analytical expression involving \(w\) on a manifold with such singularities. Notably, the effective action \(w\) generally exhibits a logarithmic divergence, which is connected to the conformal anomaly. We consider the infinitesimal rescaling \(g^{\mu\nu}\rightarrow(1-2\delta\lambda)g^{\mu\nu}\). We then have \[\frac{dw}{d\lambda}=-2g^{\mu\nu}\frac{\delta w}{\delta g^{\mu\nu}}=-\int d^{4}x\sqrt{g}\langle T^{\mu}_{\ \ \mu}\rangle. \tag{6}\] This equation expresses the trace anomaly, quantifying the deviation of the QFT from the traceless CFT condition \(\langle T^{\mu}_{\ \ \mu}\rangle=0\). The trace anomaly is characterized by polynomials of the curvature tensor, a formulation contingent on the dimension \(d\). Notably, in odd dimensions, the trace anomaly must vanish. When we define the length scale \(R_{1}\) of the subregion, this scale is related to a rescaling of the metric (6). Thus, one obtains the following formula: \[\begin{split} R_{1}\partial_{R_{1}}S_{A}=&\lim_{n\to 1}\Big{(}-2\partial_{n}\int d^{d+1}x\,g_{\mu\nu}(x)\frac{\delta}{\delta g_{\mu\nu}(x)}[w-nw|_{n=1}]\Big{)}\\ =&\frac{1}{2\pi}\lim_{n\to 1}\partial_{n}\Big{(}\langle\int d^{d+1}x\sqrt{g}T_{\mu}^{\ \mu}(x)\rangle_{M_{n}}-n\langle\int d^{d+1}x\sqrt{g}T_{\mu}^{\ \mu}(x)\rangle_{M_{1}}\Big{)}.\end{split} \tag{7}\] Here, the entanglement entropy has been expressed through the partition function on the manifold with conical singularities. The above formula relates the entanglement entropy to the trace anomaly. To evaluate the entanglement entropy, we examine a subsystem \(\sigma\) with a cylindrical configuration where one direction is identified as \(\phi\sim\phi+L_{\phi}\). Interestingly, this subsystem aligns with the one in the QFT dual to \(d+1\)-dimensional AdS solitons.
Due to this, a conical singularity arises, characterized by a curvature tensor proportional to a delta function. The resulting logarithmic contribution to the entanglement entropy is expressed as \(S_{EE}=s\log(\epsilon/R_{1})+\dots\), where \(\epsilon\) represents the ultraviolet cut-off. Remarkably, this logarithmic term can also be derived by integrating the entanglement entropy of a \(3d\) free theory [32, 33]. \(s\) is expressed in terms of extrinsic curvatures [34]. According to [32, 33], \(s\) becomes \[s=\frac{a}{180}\int_{\partial\sigma}d^{2}x\sqrt{h}E_{2}+\frac{c}{120}\int_{ \partial\sigma}d^{2}x\sqrt{h}I_{2}, \tag{8}\] where \(E_{2}\) is the Euler density and \(I_{2}\) is a Weyl invariant. Compared with the normalization of [27], we have \(a=360a_{4}\) and \(c=120c_{4}\). \(c=1\) for a real scalar and \(c=6\) for Dirac fermion. Coefficients are consistent with trace anomalies. For CFT on a cylinder of length \(L_{\phi}\) and radius \(l\), \(s\) becomes \[s=\frac{c}{240}\frac{L_{\phi}}{l}. \tag{9}\] By using the renormalized entanglement entropy for cylinder type topology in (2), we obtain \[S_{ren}=\frac{1}{2}(l\partial_{l}+1)(l\partial_{l}-1)S_{EE}=s= \frac{cL_{\phi}}{240l}. \tag{10}\] This formula shows that the renormalized entanglement entropy agrees with the coefficient \(s\). Furthermore, according to [27], \(s\) agrees with the renormalized entanglement entropy for a spherical entangling surface \(S^{2}\) as follows: \[S_{ren}=\frac{1}{2}R\partial_{R}(R\partial_{R}-2)S_{EE}=s=\frac{ a}{90}. \tag{11}\] ## 3 The \(AdS\) soliton with the gauge potential The AdS soliton is achieved through a double Wick rotation of the AdS black hole, following the Einstein equation. It corresponds to the QFT system with anti-periodic boundary conditions [35]. In our investigation, we apply this approach to the Reissner Nordstrom \(AdS\) black hole [36], performing an analytical continuation of the metric in both temporal and spatial dimensions. The metric of the \(AdS\) soliton with the gauge potential becomes [26] \[\begin{split} ds_{d+1}^{2}=&\frac{L^{2}}{z^{2}} \Big{(}\frac{dz^{2}}{f_{d}(z)}+f_{d}(z)d\phi^{2}-dt^{2}+dR^{2}+R^{2}d\Omega_{ d-3}\Big{)},\\ A_{\phi}=& a_{\phi}^{(0)}\Big{(}1-\Big{(}\frac{z}{ z_{+}}\Big{)}^{d-2}\Big{)},\end{split} \tag{12}\] where \[f_{d}(z)=1-\Big{(}1+\tilde{\epsilon}z_{+}^{2}a_{\phi}^{2}\Big{)} \Big{(}\frac{z}{z_{+}}\Big{)}^{d}+\tilde{\epsilon}z_{+}^{2}a_{\phi}^{2}\Big{(} \frac{z}{z_{+}}\Big{)}^{2(d-1)}. \tag{13}\] Here, we set \(\tilde{\epsilon}=-1\)2, and define \(a_{\phi}^{2}=a_{\phi}^{(0)2}/\gamma^{2}\), where \(\gamma^{2}=\frac{(d-1)g_{\epsilon}^{2}L^{2}}{(d-2)\kappa^{2}}\) is a dimensionless parameter. The gauge field \(a_{\phi}^{(0)}\) acts as the source for the conserved current and induces a non-zero VEV for the current, \(\langle J_{\phi}\rangle\neq 0\). Alternatively, this gauge field can be interpreted as a Wilson line, altering the boundary condition (twisted boundary condition) due to a gauge transformation. As the Wilson line vanishes at the tip of the soliton (\(z=z_{+}\)), the gauge connection remains regular there. The radial coordinate \(z\) in (12) is confined to \(z\leq z_{+}\), while the \(\phi\) direction follows the periodicity \(\phi\rightarrow\phi+1/M_{0}\) to prevent conical singularities. The Kaluza-Klein mass \(M_{0}\) of the \(\phi\) direction is given by \[M_{0}=\frac{1}{4\pi z_{+}}\Big{(}d+(d-2)\frac{a_{\phi}^{2}}{z_{+}^{2}}\Big{)}. 
\tag{14}\] The formula (14) can also be rewritten in terms of \(M_{0}\) and \(a_{\phi}\) as follows: \[z_{+}=\frac{d}{2\pi M_{0}\pm\sqrt{4\pi^{2}M_{0}^{2}-d(d-2)a_{\phi}^{2}}}. \tag{15}\] There is also a minus branch. However, \(z_{+}\) is divergent at small \(a_{\phi}\) in that case, and the background does not approach the \(AdS_{d+1}\) soliton. It can be shown that the solution with the plus sign in (15) is always more stable than the one in the minus branch. The boundary stress tensor \(T_{\mu\nu}^{(0)}\) for the field theory dual to the above background was computed in our previous work [26]. Here, we quote the results for later use; for more details, please refer to [26]: \[T_{tt}^{(0)}=-T_{x^{i}x^{i}}^{(0)}=-\frac{L^{d-1}}{2\kappa^{2}}\frac{1}{z_{+}^{d}}\Big{(}1-z_{+}^{2}a_{\phi}^{2}\Big{)}=-\frac{L^{d-1}}{2\kappa^{2}}\frac{1}{z_{+}^{d}}\bar{a}_{\phi}, \tag{16}\] \[T_{\phi\phi}^{(0)}=\frac{dL^{d-1}}{2\kappa^{2}}\frac{1}{z_{+}^{d}}\Big{(}1-z_{+}^{2}a_{\phi}^{2}\Big{)}\Big{(}-1+\frac{1}{d}\Big{)}, \tag{17}\] with \[\bar{a}_{\phi}=1-\Big{(}z_{+}a_{\phi}\Big{)}^{2}. \tag{18}\] The boundary energy then is [37] \[M=-\frac{V_{d-2}}{M_{0}}\frac{L^{d-1}\bar{a}_{\phi}}{2\kappa^{2}z_{+}^{d}}. \tag{19}\] The boundary energy can change sign when we change the Wilson lines (gauge potential): \[\begin{cases}M<0&z_{+}a_{\phi}<1,\\ M>0&z_{+}a_{\phi}>1.\end{cases} \tag{20}\] In other words, \(M\) is negative for \(a_{\phi}<2\pi M_{0}/(d-1)\), while it can become positive when \(a_{\phi}>2\pi M_{0}/(d-1)\). For \(a_{\phi}=0\), it realizes the Casimir energy of \(4d\) SYM theory on \(R^{3}\times S^{1}\) [35]. This behavior is analogous to the Casimir energy of fields with twisted boundary conditions in \(2d\) CFT [30]: the Casimir energy differs between the periodic and anti-periodic boundary conditions. ## 4 The holographic entanglement entropy In this section, we compute the entanglement entropy [5, 6]. The entangling surface is specified by \(z=0\) at \(R=l\), and \(0\leq\phi\leq L_{\phi}\) at a constant time slice in the background (12). Its topology becomes \(S^{1}\times S^{d-3}\), where \(S^{1}\) and \(S^{d-3}\) are of radius \(L_{\phi}\) and \(l\), respectively. Note that the topology of the entangling surface differs from the theory without the compactification of the \(\phi\) direction. The surface action becomes \[A=\int d^{d-1}x\mathcal{L}=\Omega_{d-3}L_{\phi}L^{d-1}\int dz\frac{R^{d-3}}{z^{d-1}}\sqrt{1+f\dot{R}^{2}}, \tag{21}\] where \(\mathcal{L}=\sqrt{\det g_{ind}}\) and \(g_{ind}\) is the induced metric. The holographic entanglement entropy is given by \[S_{EE}=\frac{2\pi}{\kappa^{2}}A \tag{22}\] with \(A\) minimized. Recall that \(L^{d-1}/\kappa^{2}\) is dimensionless. We omit the \(AdS\) radius (\(L=1\)) for convenience. We solve the EOM derived from (21) to obtain the minimal surface. The EOM of \(R\) becomes3 \[2z\left(d-R(z)f^{\prime}(z)R^{\prime}(z)-3\right)+f(z)(2(d-3)zR^{\prime}(z)^{2}-R(z)(-2(d-1)R^{\prime}(z)+zf^{\prime}(z)R^{\prime}(z)^{3}+2zR^{\prime\prime}(z)))+2(d-1)f(z)^{2}R(z)R^{\prime}(z)^{3}=0. \tag{23}\] Footnote 3: The EOM in terms of \(z(R)\) is \[-f(z)\left((d-1)Rz^{\prime}(R)^{2}+z(R)\left((d-3)z^{\prime}(R)+Rz^{\prime\prime}(R)\right)\right)+(1-d)Rf(z)^{2}-(d-3)z(R)z^{\prime}(R)^{3}=0.\] One should specify the IR boundary condition. In fact, there are two kinds of RT surfaces, as drawn schematically in Fig. 1. The turning point of the disk-type RT surface is \(R(z_{t})=0\). Moreover, the disk-type surface is smooth in the bulk.
The embedding scalar must satisfy \(R^{\prime}(z_{t})=\infty\). For the cylinder case, the surface ends at the tip of the soliton, \(z_{t}=z_{+}\). Varying \(A\) with respect to \(l\) and fixing \(z=\epsilon\), the Hamilton-Jacobi method (please refer to appendix A for a brief review of this method) gives [27] \[\frac{dA}{dl}=-H(z_{t})\frac{dz_{t}}{dl}-\Pi(\epsilon)\frac{dR(\epsilon)}{dl}=-\Pi(\epsilon)\frac{dR(\epsilon)}{dl}, \tag{24}\] where \[\Pi=\frac{\partial\mathcal{L}}{\partial\dot{R}}=\Omega_{d-3}L^{d-1}L_{\phi}\frac{R^{d-3}f\dot{R}}{z^{d-1}\sqrt{1+f\dot{R}^{2}}},\quad H=\Pi\dot{R}-\mathcal{L}=-\frac{\Omega_{d-3}L_{\phi}R^{d-3}L^{d-1}}{z^{d-1}\sqrt{1+f\dot{R}^{2}}}. \tag{25}\] The first term of (24) drops out due to the following IR boundary conditions: \[R(z_{t})=0,\quad\dot{R}(z_{t})=\infty,\quad H(z_{t})=0,\quad\text{for a disk},\] \[\frac{dz_{t}}{dl}=\frac{d\epsilon}{dl}=0,\quad\text{for a cylinder}. \tag{26}\] The remaining term of (24) only depends on the solution near the \(AdS\) boundary, so an asymptotic expansion is useful. We compute the asymptotic expansion of the embedding scalar near \(z=0\). The UV behavior of \(R(z)\) has the following ansatz: \[R(z)=l+b_{0}\log\frac{z}{l}+\sum_{n=1}\Big{(}a_{n}+b_{n}\log\Big{(}\frac{z}{l}\Big{)}\Big{)}z^{n}, \tag{27}\] where the log terms in (27) arise similarly to the Fefferman-Graham expansion of fields in \(AdS\) spacetime [38, 39]. We can determine the coefficients \(a_{n}\) and \(b_{n}\) by substituting the above ansatz into (24). Below we analyze the cases \(d=4,5,3\) in detail. ### \(d=4\) Let us begin with the \(d=4\) case, where the boundary QFT lives on \(R^{1,2}\times S^{1}\), and the topology of the entangling surface becomes \(S^{1}\times S^{1}\). Substituting the expansion (27) of \(R(z)\) near the boundary \(z=0\) into the equation of motion, one obtains \[R(z)=l-\frac{z^{2}}{4l}+a_{4}(l)z^{4}+\frac{z^{4}}{32l^{3}}\log\frac{z}{l}+\dots \tag{28}\] Here, the higher-order terms are determined by the parameters \(l\) and \(a_{4}(l)\). The coefficient \(a_{4}(l)\) cannot be determined from the UV expansion of the EOM. Instead, \(a_{4}(l)\) carries information fixed by the IR boundary condition. Substituting (28) into (25), the \(l\) derivative of the surface becomes \[\frac{dA}{2\pi L_{\phi}dl}=-4la_{4}(l)-\frac{3}{32l^{2}}+\frac{1}{2\epsilon^{2}}-\frac{1}{8l^{2}}\log\Big{(}\frac{\epsilon}{l}\Big{)}+\dots \tag{29}\]

Figure 1: The RT surfaces corresponding to the small (left) and large (right) subsystem in the AdS-soliton background. If the subsystem is a spherical region, then the RT surface has a disk (cylinder) topology for the small (large) subsystem.
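For readers who want to reproduce such profiles numerically, the following is a hedged sketch (not the authors' code) of how one can obtain \(R(z)\) for \(d=4\): sympy derives the Euler-Lagrange equation of the reduced area functional of eq. (21), and a trial \(a_{4}\) is used to shoot from the UV data of eq. (28). The parameter values (\(z_{+}\), \(a_{\phi}\), the trial \(a_{4}\), and the integration range) are illustrative placeholders; in practice \(a_{4}\) is adjusted until the disk or cylinder IR boundary condition is met.

```python
import numpy as np
import sympy as sp
from sympy.calculus.euler import euler_equations
from scipy.integrate import solve_ivp

z = sp.symbols('z', positive=True)
R = sp.Function('R')
zp, aphi = 1.0, 0.0                                    # toy tip location and Wilson line
f = 1 - (1 - zp**2*aphi**2)*(z/zp)**4 - zp**2*aphi**2*(z/zp)**6   # eq. (13) for d=4
L = R(z)*sp.sqrt(1 + f*sp.Derivative(R(z), z)**2)/z**3            # reduced Lagrangian, eq. (21)

# Euler-Lagrange equation, solved for R''(z) and turned into a numeric function
eom = euler_equations(L, R(z), z)[0].lhs
Rpp = sp.solve(eom, sp.Derivative(R(z), z, 2))[0]
r, rp = sp.symbols('r rp')
rhs = sp.lambdify((z, r, rp),
                  Rpp.subs([(sp.Derivative(R(z), z), rp), (R(z), r)]), 'numpy')

def shoot(a4, l, eps=1e-3, z_end=0.95):
    """Integrate from the AdS boundary using the UV data of eq. (28)."""
    r0 = l - eps**2/(4*l) + a4*eps**4 + eps**4*np.log(eps/l)/(32*l**3)
    r1 = -eps/(2*l) + 4*a4*eps**3 + (4*np.log(eps/l) + 1)*eps**3/(32*l**3)
    return solve_ivp(lambda t, y: [y[1], rhs(t, y[0], y[1])],
                     (eps, z_end), [r0, r1], rtol=1e-8)

print(shoot(a4=-0.05, l=1.0).y[0, -1])   # IR value of R for the trial a4
```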
Recall that the central charge of \(d=4\)\(\mathcal{N}=4\) SYM is \(a=\pi^{5}L^{8}/\kappa_{10}^{2}=N^{2}/4\), where \(\kappa_{10}^{2}=\pi^{3}L^{5}\kappa^{2}\). The renormalized entanglement entropy depends on the entangling surface and the trace anomaly [27] as follows: \[S_{ren}^{d=4}=2a_{4}\int_{\partial A}d^{2}x\sqrt{h}E_{2}+c_{4} \int_{\partial A}d^{2}x\sqrt{h}I_{2}, \tag{32}\] where \(\partial A\) is the entangling surface (see also [6, 34]). In 4 dimensions, we have an A-type anomaly \(a_{4}\) and a B-type anomaly \(c_{4}\) on the entangling surface. \(E_{2}\) is the Euler density and \(I_{2}\) is a Weyl invariant. For the spherical entangling surface \(\partial A=S^{2}\), \(\int d^{2}x\sqrt{h}E_{2}=2\) and the Weyl invariant is zero. Thus, \(S_{ren}^{d=4}=4a_{4}\). The renormalized entanglement entropy will satisfy the C-theorem in that case. Because the entangling surface is \(S^{1}\times S^{1}\) for QFT on \(R^{1,2}\times S^{1}\), however, the Euler number is zero. Only the B-type anomaly remains. The renormalized entanglement entropy can be non-monotonic since there is no universal C-theorem for B-type anomalies. To compute the renormalized entanglement entropy, we need \(a_{4}(l)\), \(l\) appearing in (28) and \(S\) (not confused with anomaly \(a_{4}\) in (32)). A numerical result of \(a_{4}(l)\) is obtained in Fig. 2. For pure imaginary \(a_{\phi}\), it leads to results of geometric entropy [40]. The geometric entropy is related to entanglement entropy via the double Wick rotation [41, 42]. The disk shape is dominant for small \(l\), and the cylinder shape is dominant for large \(l\). \(a_{4}(l)\) becomes multi-valued near the phase transition at a critical length between disk and cylinder surfaces. Multi-valued behavior is also observed for other \(a_{\phi}\). Substituting the boundary expansion (28) into the action (21) and expanding at a small \(z\), we obtain the following divergent part of \(A=A_{fin}+A_{div}\): \[\frac{A_{div}}{2\pi L_{\phi}}=\frac{l}{2\epsilon^{2}}+\frac{1}{8l }\log\Big{(}\frac{\epsilon}{l}\Big{)}. \tag{33}\] Figure 3: Left: the finite part of on-shell action \(A_{fin}\) for \(a_{\phi}=\frac{i}{2},\ 0,\ \frac{2}{3},\ \frac{1}{\sqrt{2}}\). The figure shows that the entanglement entropy increases with the Wilson line \(a_{\phi}\) increase. The quantum phase transition happens when \(M_{0}l_{c}=0.21,\ 0.23,\ 0.32,\ 0.4\). Right: closed-up figure of \(A_{fin}\) for \(a_{\phi}=\frac{1}{\sqrt{2}}\). Figure 2: \(a_{4}\) as a function of \(l\). The disk shape dominates the behavior for small \(l\), while the cylinder shape dominates the behavior for large \(l\). Left: \(a_{\phi}=\frac{i}{2}\). The critical length of the phase transition is \(l_{c}=0.66\). Right: \(a_{\phi}=\frac{1}{\sqrt{2}}\). The critical length is \(l_{c}=1.26\). Figure 5: \(S_{ren}\) for \(d=4\). The renormalized entanglement entropy for several \(M_{0}\): \(M_{0}=1/\pi,\ 2/5,\ 3/5\) from the right to the left. The renormalized entanglement entropy non-monotonically behaves near critical lengths. The quantum phase transition occurs when \(M_{0}l_{c}=0.24,\ 0.27,\ 0.4\). The figure shows that massive modes \(Ml>1\) decouple others soon. The final states will be product states. Thus, the divergent structure of entanglement entropy \(A\) is \[A=2\pi L_{\phi}\Big{(}\frac{l}{2e^{2}}+\frac{\log\epsilon}{8l}\Big{)}+S_{fin}(l), \tag{34}\] where the log dependence is included in the finite part \(S_{fin}\). \(S_{fin}=A_{fin}-2\pi L_{\phi}\frac{1}{8l}\log l\). 
To find minimal surfaces, one must compute the on-shell action of (21). Minimal surfaces between the disk and the cylinder dominate the phase in Fig. 3. The quantum phase transition occurs at a critical length \(l_{c}\). Yellow and Blue curves show that the confinement occurs and decreases DOF [43, 10, 11]. Recall that \(a_{\phi}\) increases Casimir energy of dual QFT, and then the entanglement entropy increases with the increase of \(a_{\phi}\). \(S_{fin}\) is different from \(S_{ren}\) because the cut-off dependence is removed at \(S_{ren}\). Actually, \(S_{fin}\) is related to \(S_{ren}\) via \[S_{ren}=\frac{1}{2}(l^{2}S_{fin}^{\prime\prime}+lS_{fin}^{\prime}-S_{fin}). \tag{35}\] Considering (29), \(S_{fin}(l)\) satisfies the following relation: \[S_{fin}^{\prime}(l)=2\pi L_{\phi}\Big{(}-4la_{4}(l)-\frac{3}{32l^{2}}+\frac{1} {8l^{2}}\log(l)\Big{)}. \tag{36}\] Due to (36), \(S_{fin}^{\prime\prime}\) or \(A_{fin}^{\prime\prime}\) becomes \[\begin{split}\frac{S_{fin}^{\prime\prime}}{2\pi L_{\phi}}=& \frac{5}{16l^{3}}-\frac{\log(l)}{4l^{3}}-4la_{4}^{\prime}(l)-4a_{4 }(l),\\ \frac{A_{fin}^{\prime\prime}}{2\pi L_{\phi}}=&-\frac {1}{16l^{3}}-4la_{4}^{\prime}(l)-4a_{4}(l).\end{split} \tag{37}\] Recall that (37) is the finite part of the minimal surface \(A\). The renormalized entanglement entropy \(S_{ren}\) is finite. Substituting (36) and (37) into (35), \(S_{ren}\) is rewritten as follows: \[\frac{\kappa^{2}S_{ren}}{4\pi^{2}L_{\phi}}=-4l^{2}a_{4}(l)-2l^{3} a_{4}^{\prime}(l)+\frac{7}{64l}-\frac{A_{fin}}{2}\] \[=-4l^{2}a_{4}(l)-2l^{3}a_{4}^{\prime}(l)+2\int^{l}l^{\prime}a_{4} (l^{\prime})dl^{\prime}+\frac{1}{8l}+c_{\tau}, \tag{38}\] The coefficient of \(1/l\) comes from only the logarithmic term of \(S\), which is brought from the Weyl anomaly. The formula (38) also depends on a function of \(a_{4}(l)\) unlike the holographic entanglement entropy of the spherical entangling surface in 4 dimensions [6]. For the spherical entangling surface, here, HREE describes the A type anomaly in CFT: \(S_{ren}^{d=4}=4a_{4}\). Recall that QFT dual to the AdS soliton with gauge potential breaks conformal invariance. The first three terms in (38) represent terms breaking conformal invariance and the last term is to realize the small \(l\) limit of HREE (CFT behavior) [34]. Fig. 4 shows the renormalized entanglement entropy \(S_{ren}\) for several \(a_{\phi}\). It becomes non-monotonic behavior, which is similar to a behavior of GPPZ flow [27]. Intuitively, the renormalized entanglement entropy is also a detector of the effective DOF of entangling states at the energy scale \(El\sim 1\). For large energy \(E/M_{0}\sim 1/(M_{0}l)>1\), \(S_{ren}\) decreases as a function of \(lM_{0}\). Because the degrees of freedom with Wilson lines contribute to large \(a_{\phi}\) and energy, however, \(S_{ren}\) slowly decreases until the critical length (see green and red curves). For small \(E/M_{0}<1\), the renormalized entanglement entropy can not detect effective DOF and can almost become a constant. Even if the renormalized entanglement entropy increases after the quantum phase transition, it satisfies a kind of C-theorem: \(S_{ren}(l\to 0)>S_{ren}(l\rightarrow\infty)\). 
### \(d=5\)

We proceed with the \(d=5\) case, where the AdS boundary expansion takes the form \[R(z)=l-\frac{z^{2}}{3l}-\frac{5z^{4}}{54l^{3}}+z^{5}a_{5}(l)+\ldots. \tag{39}\] Similar to \(a_{4}(l)\) in the previous section, the parameter \(a_{5}(l)\) is not determined by the AdS boundary expansion but by the IR boundary condition. Numerically, \(a_{5}(l)\) is plotted in Fig. 6 for fixed values of \(M_{0}\) and \(a_{\phi}\).

Figure 6: \(a_{5}\) as a function of \(l\) with \(T=\frac{1}{\pi},a_{\phi}=\frac{1}{2}\) in the case of \(d=5\). The red line corresponds to the disk-shaped RT surface and the blue line to the cylinder shape. The critical length of the phase transition is \(l_{c}=1.57\).

Once we have the expansion (39), making use of (25) we obtain the following derivative \[\frac{1}{\Omega_{2}L_{\phi}}\frac{dA}{dl}=\frac{2l}{3\epsilon^{3}}-5l^{2}a_{5}(l), \tag{40}\] where \(\Omega_{2}=4\pi\). The first term is the cut-off dependent term, and there are no log divergences. The finite part of \(dA/dl\) is determined by the second term, as in the \(d=3\) case: no term corresponding to a trace anomaly is present in odd dimensions. Substituting the expansion (39) into the action (21) and expanding in small \(z\), we obtain the divergent structure of the entanglement entropy: \[A(l)=4\pi L_{\phi}\Big(\frac{l^{2}}{3\epsilon^{3}}-\frac{4}{9\epsilon}\Big)+A_{fin}(l). \tag{41}\] Since the second term in the parentheses does not depend on \(l\), the \(O(\epsilon^{-1})\) part is absent in (40). The \(l\) dependence of \(A(l)\) differs from the CFT one. We plot the finite part of the entanglement entropy in Fig. 7, which shows that the disk-shaped RT surface dominates the behavior for small \(l\), while the cylinder shape dominates for large \(l\).

Figure 7: Left: the finite part of \(A\) for \(a_{\phi}=i/2,\ 0,\ 1/2,\ 2/\sqrt{15}\) from left to right. Right: close-up of the \(a_{\phi}=1/2\) curve. There is a phase transition at the critical length.

The finite part \(A_{fin}\) undergoes the quantum phase transition at the critical length. Making use of (40) and (41), the finite part satisfies the following relation: \[\frac{A^{\prime}_{fin}(l)}{4\pi L_{\phi}}=-5l^{2}a_{5}(l). \tag{42}\] Next consider the 5-dimensional renormalized entanglement entropy defined in (2), which is \[S_{ren}=\frac{1}{3}l\partial_{l}(l\partial_{l}-2)S. \tag{43}\] Substituting (41), the renormalized entanglement entropy can be rewritten as \[S_{ren}(l)=\frac{l}{3}(-A^{\prime}_{fin}(l)+lA^{\prime\prime}_{fin}(l))=-\frac{5}{3}\cdot 4\pi L_{\phi}\cdot l^{3}(a_{5}(l)+la^{\prime}_{5}(l)). \tag{44}\] The renormalized entanglement entropy is plotted in Fig. 8 for different \(a_{\phi}\). It counts the DOF of the entangling states at an energy scale \(E\sim 1/l\) and behaves non-monotonically. The quantum phase transition happens at the critical lengths, and \(S_{ren}\) approaches zero at large distances. Fig. 8 (right) shows that massive DOF decouple from the other modes quickly at low energy. This behavior is most easily seen when \(a_{\phi}=0\), where the scaling symmetry \(M_{0}\rightarrow\lambda M_{0},\ l\rightarrow\lambda^{-1}l\) arises. This is a symmetry of the action and the EOM; it simultaneously rescales \(a_{\phi}\) when the latter is nonzero. The scaling symmetry implies that the critical length is inversely proportional to \(M_{0}\): \(l_{c}=c_{5}/M_{0}\).
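The passage from (41)-(43) to (44) can be verified symbolically. A minimal sympy sketch of our own, using (42) for \(A^{\prime}_{fin}\):

```python
import sympy as sp

l, eps, Lphi = sp.symbols('l epsilon L_phi', positive=True)
a5, A = sp.Function('a5'), sp.Function('A')

def ren_d5(S):
    # d = 5 operator of (43): (1/3) l d/dl (l d/dl - 2) S
    return sp.expand(l*sp.diff(l*sp.diff(S, l) - 2*S, l)/3)

# the divergent part of (41) is annihilated by the operator
print(ren_d5(4*sp.pi*Lphi*(l**2/(3*eps**3) - sp.Rational(4, 9)/eps)))  # -> 0

# the finite part reproduces (44), substituting A'_fin from (42)
Afin = A(l)
expr = (ren_d5(Afin)
        .subs(sp.Derivative(Afin, (l, 2)), sp.diff(-20*sp.pi*Lphi*l**2*a5(l), l))
        .subs(sp.Derivative(Afin, l), -20*sp.pi*Lphi*l**2*a5(l)))
print(sp.simplify(expr))  # -> -(20*pi/3)*L_phi*l**3*(a5(l) + l*a5'(l)), eq. (44)
```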
The quantum phase transition occurs quickly for large masses and slowly for small masses. After the phase transition, the renormalized entanglement entropy gradually becomes constant.

Figure 8: Left: The renormalized entanglement entropy for \(a_{\phi}=i/2,\ 0,\ 1/2,\ 2/\sqrt{15}\). The renormalized entanglement entropy behaves non-monotonically; for large \(a_{\phi}\), the DOF decrease slowly. Right: The renormalized EE for \(a_{\phi}=2/\sqrt{15}\), with the mass varied over \(M_{0}=1/2,\ 2/5,\ 1/\pi\). Massive modes decrease quickly.

## 5 \(d=3\) (a striped shape)

In this section, we analyze the holographic entanglement entropy for \(d=3\). The configuration for \(d=3\) corresponds to a striped boundary shape. We start with \(d\)-dimensional striped shapes, of which \(d=3\) is a special case. To consider a striped shape, we replace \(R^{2}d\Omega_{d-3}\) with \(\sum dx_{\perp}^{2}\) in the \(d+1\) dimensional AdS soliton with a gauge field (12) as follows: \[ds_{d+1}^{2}=\frac{L^{2}}{z^{2}}\Big(\frac{dz^{2}}{f_{d}(z)}+f_{d}(z)d\phi^{2}-dt^{2}+dR^{2}+\sum dx_{\perp}^{2}\Big),\] where \(R\) runs along \((-\infty,\ \infty)\), unlike the radial direction of a polar coordinate system. We consider a strip of length \(l\) along the \(R\) direction (\(-l/2\leq R\leq l/2\)) and choose \(R=R(z)\) as the embedding scalar. The surface action becomes \[A_{s}=\int d^{d-1}x\mathcal{L}=2V_{d-3}L_{\phi}L^{d-1}\int dz\frac{1}{z^{d-1}}\sqrt{1+f\dot{R}^{2}}, \tag{45}\] where \(V_{d-3}\) denotes the volume of the \(d-3\) dimensional space spanned by \(x_{\perp}\), and the factor of 2 accounts for the two branches of the minimal surface. Note that the Lagrangian density of (45) does not depend explicitly on \(R\), so the conjugate momentum is conserved for a striped boundary shape; this makes the analysis simpler for a rectangular region than for a circular one.

Figure 9: Left: \(s_{3}(l)=dA_{s}/(3L_{\phi}dl)\) as a function of \(lM_{0}\). The curves are for \(a_{\phi}=i/2,\ 0,\ 1,\ 2/\sqrt{3}\) from left to right. The phase transition occurs at critical lengths \(l_{c}=0.15,\ 0.19,\ 0.24,\ 0.34\) in units of \(1/M_{0}\), respectively. \(s_{3}(l)\) is two-valued; the physical (upper) branch is consistent with strong subadditivity. Right: \(S_{ren}\) for \(d=3\). The curves are for \(a_{\phi}=0,\ 1,\ 2/\sqrt{3}\) from left to right. \(S_{ren}\) decreases monotonically as a function of \(lM_{0}\) and becomes 0 at large distance, implying that the theory is in a product state there [44].

Solving the condition \(\Pi=\)const and imposing the IR boundary condition \(dz/dR|_{z=z_{t}}=0\), we have \[\dot{R}=\frac{1}{\sqrt{f_{d}(z)\Big(\frac{f_{d}(z)z_{t}^{2(d-1)}}{f_{d}(z_{t})z^{2(d-1)}}-1\Big)}}. \tag{46}\] This formula demonstrates that \(z=z_{t}\) is the turning point, \(dz/dR|_{z=z_{t}}=0\). Equation (46) gives the profile of the minimal surface satisfying \(R(\epsilon)=-l/2\) and \(R(z_{t})=0\). Using the AdS boundary expansion \(R^{\prime}(z)=ds_{d}(l)z^{d-1}+\dots\), \(R(z)\) is expanded as follows \[R(z)=-\frac{l}{2}+s_{d}(l)z^{d}+\dots, \tag{47}\] where \[s_{d}(l)=\frac{\sqrt{f_{d}(z_{t})}}{dz_{t}^{d-1}}. \tag{48}\] The Hamilton-Jacobi equation for a striped shape has the same form as (25), \[\frac{dA_{s}}{dl}=-\Pi(\epsilon)\frac{dR(\epsilon)}{dl}, \tag{49}\] where \[\Pi=\frac{\partial\mathcal{L}}{\partial\dot{R}}=2V_{d-3}L^{d-1}L_{\phi}\frac{f\dot{R}}{z^{d-1}\sqrt{1+f\dot{R}^{2}}} \tag{50}\] and \(H(z_{t})=0\) due to the IR boundary condition.
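The profile equation (46) also fixes the map between the turning point \(z_{t}\) and the strip width \(l\) by quadrature. Below is a sketch of our own; the blackening function \(f_{d}(z)\) of the gauged AdS soliton (12) is assumed to be supplied as a callable, and `f_example` is only a placeholder, not the metric function of this paper:

```python
import numpy as np
from scipy.integrate import quad

def strip_width(z_t, f, d=3):
    """l(z_t) = 2 * int_0^{z_t} (dR/dz) dz, with dR/dz from eq. (46)."""
    ft = f(z_t)
    def dRdz(z):
        bracket = f(z)*z_t**(2*(d - 1))/(ft*z**(2*(d - 1))) - 1.0
        return 1.0/np.sqrt(f(z)*bracket)
    # the (z_t - z)^(-1/2) endpoint singularity is integrable; QUADPACK handles it
    val, _ = quad(dRdz, 0.0, z_t, limit=200)
    return 2.0*val

f_example = lambda z: 1.0 - z**3      # placeholder profile with a tip at z = 1
print(strip_width(0.5, f_example))
```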
When (46) is substituted into (25), the \(l\) derivative of the surface becomes \[\frac{dA_{s}}{V_{d-3}L_{\phi}L^{d-1}dl}=ds_{d}(l)=\frac{\sqrt{f_{d}(z_{t})}}{z_{t}^{d-1}}. \tag{51}\] Note that the minimal surface must satisfy strong subadditivity, which requires the minimal surface to be concave [45]: \[A_{s}^{\prime\prime}=dV_{d-3}L_{\phi}L^{d-1}s_{d}^{\prime}(l)=V_{d-3}L_{\phi}L^{d-1}\frac{\partial z_{t}}{\partial l}\partial_{z_{t}}(ds_{d}(l))\leq 0. \tag{52}\] Indeed, one can show the following inequality: \[\partial_{z_{t}}(ds_{d}(l))\equiv\frac{h(z_{t})}{2\sqrt{f_{d}(z_{t})}}=\frac{(-2(d-1)z_{t}^{-d}+(d-2)(1-a_{\phi}^{2}z_{0}^{2})z_{0}^{-d})}{2\sqrt{f_{d}(z_{t})}}<0, \tag{53}\] because \(h(z)\) satisfies the following conditions:4 \[h(0)=-\infty,\quad h(z_{0})=(-d-(d-2)(a_{\phi}z_{0})^{2})z_{0}^{-d}<0,\quad h^{\prime}(z)=2d(d-1)z^{-d-1}>0. \tag{55}\]

Footnote 4: When \(a_{\phi}\) is pure imaginary, \(a_{\phi}=ia_{\phi}^{\prime}\), the second condition is replaced with \[h(z_{0})=-\frac{4\pi(\sqrt{4\pi^{2}+d(d-2)(a_{\phi}^{\prime}/M_{0})^{2}}-2\pi)}{(a_{\phi}^{\prime}/M_{0})^{2}(d-2)}<0. \tag{54}\]

In addition, according to [26], \(\partial z_{t}/\partial l\) takes two values: it is positive for a minimal surface and negative for an unphysical curve. Next, we consider the case in which the minimal surface is disconnected, focusing on \(d=3\). From \[\frac{\mathrm{d}}{\mathrm{d}z}\frac{\partial\mathcal{L}}{\partial\dot{R}}=\frac{\partial\mathcal{L}}{\partial R}=0, \tag{56}\] where \[\mathcal{L}\equiv\frac{1}{z^{2}}\sqrt{1+f\dot{R}^{2}}, \tag{57}\] it is easy to see that one of the solutions is \(R=\mathrm{const}\), which describes the disconnected minimal surface. The area of this surface is \[A_{s}=2L_{\phi}\int_{\epsilon}^{z_{t}}\mathrm{d}z\frac{1}{z^{2}}=2L_{\phi}\left(\frac{1}{\epsilon}-\frac{1}{z_{t}}\right), \tag{58}\] and the corresponding entanglement entropy is \[S_{EE}=\frac{4\pi L_{\phi}}{\kappa^{2}}\left(\frac{1}{\epsilon}-\frac{1}{z_{t}}\right), \tag{59}\] where \(z_{t}=z_{+}\). The finite part of \(S_{EE}\) is \(-4\pi L_{\phi}/(\kappa^{2}z_{+})\). For disconnected surfaces, \(s_{3}(l)=0\). One can compute the renormalized entanglement entropy from the entanglement entropy \(S_{EE}\) by applying the formula defined in (2) with \(d=3\): \(S_{ren}=l\partial_{l}S\). Because the dual 3-dimensional QFT is defined on \(R^{1,1}\times S^{1}\), the renormalized entanglement entropy reproduces the result of a \(2d\) QFT on \(R^{1,1}\) in the low energy limit. The renormalized entanglement entropy can be written in terms of \(l\) and \(s_{3}(l)\) as follows: \[\frac{\kappa^{2}}{2\pi L_{\phi}L^{2}}S_{ren}=\frac{ldA_{s}}{L_{\phi}L^{2}dl}=3ls_{3}(l)=\frac{l\sqrt{f(z_{t})}}{z_{t}^{2}}, \tag{60}\] where \(S=\frac{2\pi}{\kappa^{2}}A_{s}\). Note that the central charge of the \(3d\) dual CFT is \(32L^{2}/(\pi G_{4})=2^{6}\sqrt{2}N^{3/2}\sqrt{k}/(3\pi)\), where \(k\) is the Chern-Simons level.

Figure 10: \(S_{ren}\) for \(d=3\). The curves are for \(a_{\phi}=2/\sqrt{3}\), with \(M_{0}=3/5,\ 2/5,\ 1/\pi\) from left to right. \(S_{ren}\) decreases monotonically as a function of \(lM_{0}\). The phase transition occurs at critical lengths \(l_{c}M_{0}=0.19,\ 0.23,\ 0.34\), respectively, implying that massive modes with \(M>1/l\) decouple quickly, leaving product states.
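Combining (48), (51) and (60), the \(d=3\) curve \(S_{ren}(l)\) can be traced parametrically in \(z_{t}\). A sketch of our own, reusing `strip_width()` and the placeholder `f_example` from the snippet above:

```python
import numpy as np

def s_ren_d3(z_t_grid, f):
    """Return pairs (l, kappa^2 S_ren/(2 pi L_phi L^2)) on the connected branch."""
    out = []
    for zt in z_t_grid:
        l = strip_width(zt, f, d=3)              # l(z_t) via eq. (46)
        s3 = np.sqrt(f(zt))/(3.0*zt**2)          # eq. (48) with d = 3
        out.append((l, 3.0*l*s3))                # eq. (60)
    return np.array(out)

curve = s_ren_d3(np.linspace(0.05, 0.95, 40), f_example)
```

Sweeping \(z_{t}\) rather than \(l\) sidesteps the multi-valuedness of \(l\mapsto z_{t}\) near the phase transition; both branches emerge automatically.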
For the disconnected surface, we see that the entanglement entropy does not depend on the size \(l\) of the entangling surface at the boundary, which means \[S_{ren}=l\partial_{l}S_{EE}=0. \tag{61}\] We plot \(dA_{s}/(3L_{\phi}dl)=s_{3}(l)\) in Fig. 9 (left). Because the concavity of the entanglement entropy [46, 47] means \(A_{s}^{\prime\prime}\leq 0\), \(A_{s}^{\prime}\) decreases monotonically as a function of \(l\). We plot the renormalized entanglement entropy in Fig. 9 (right); it again detects the DOF of the entangling states at the energy scale \(El\sim 1\). Because massive degrees of freedom decouple in the low energy limit, \(S_{ren}\) decreases as a function of \(l\) (with energy \(E\sim 1/l\)).5 This monotonic behavior (\(S_{ren}^{\prime}(l)\leq 0\)) is also a consequence of Lorentz symmetry and strong subadditivity [45]. The quantum phase transition occurs at critical lengths \(l_{c}M_{0}=0.19,\ 0.24,\ 0.34\) for \(a_{\phi}=0,\ 1,\ 2/\sqrt{3}\), respectively, and \(S_{ren}\) becomes \(0\) at large distances. Fig. 10 varies \(M_{0}\) at fixed \(a_{\phi}\); it shows that massive modes with \(M_{0}>1/l\) decouple quickly at low energy, leaving product states. When \(a_{\phi}=0\), one recovers the scaling symmetry \(M_{0}\rightarrow\lambda M_{0}\), \(l\rightarrow\lambda^{-1}l\) (together with a rescaling of the bulk parameters). This is a symmetry of the action and the EOM; it simultaneously changes \(a_{\phi}\) if the latter is nonzero. Thus, the critical length for different \(M_{0}\) is given by \(M_{0}=c/l_{c}\), where \(c\) is a constant. The phase transition occurs quickly for large masses and slowly for small masses.

Footnote 5: Another entropic \(c\)-function behaves non-monotonically in [26] and does not satisfy a c-theorem; the non-monotonicity arises from the competition between a power of \(l\) and \(dA_{s}/dl\).

Our results agree with the fact that the renormalized entanglement entropy coincides with the entropic \(c\)-function formula \(l\partial_{l}S_{EE}\) on \(R^{1,1}\), which is an observable in a renormalizable theory. The entropic \(c\)-function on \(R^{1,1}\) counts the DOF and depends on the radius \(l\ (\sim 1/E)\), encoding the information in \(S_{EE}(l_{1})-S_{EE}(l_{2})\) for \(l_{1}>l_{2}\). It is known that the \(c\)-function for a massive scalar field decreases exponentially at large \(r\) [8]. Finally, no degrees of freedom remain (the entropic \(c\)-function vanishes) at large distance. Recall that the trace anomaly exists in 2 dimensions; the monotonic quantity there is the Euler term in the trace anomaly.
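The concavity argument (52)-(55) invoked above is purely algebraic and can be checked symbolically. A minimal sympy sketch of our own:

```python
import sympy as sp

z, z0, a, d = sp.symbols('z z_0 a_phi d', positive=True)
h = -2*(d - 1)*z**(-d) + (d - 2)*(1 - a**2*z0**2)*z0**(-d)     # numerator in (53)

# h(z_0) equals (-d - (d-2)*(a_phi*z_0)^2) * z_0^(-d), the second entry of (55)
print(sp.simplify(h.subs(z, z0) - (-d - (d - 2)*(a*z0)**2)*z0**(-d)))  # -> 0

# h'(z) = 2 d (d-1) z^(-d-1) > 0, the third entry of (55)
print(sp.simplify(sp.diff(h, z) - 2*d*(d - 1)*z**(-d - 1)))            # -> 0
```

Together with \(h(0)=-\infty\) and \(h'(z)>0\), these confirm \(h(z_{t})<0\) on \(0<z_{t}<z_{0}\), hence (53).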
### Small subregions

By employing our previous results [26], where \(z_{t}\) can be expressed in terms of \(l\) for small \(l\), \[\begin{split} z_{t}=&\frac{l}{2e_{-1}-2k_{-1}}+\frac{l^{4}\bar{a}_{\phi}\left(-4e_{-1}+4k_{-1}+\pi\right)}{256z_{+}^{3}\left(e_{-1}-k_{-1}\right)^{5}}+\frac{l^{5}\left(1-\bar{a}_{\phi}\right)}{160z_{+}^{4}\left(e_{-1}-k_{-1}\right)^{5}}\\ &+\frac{l^{7}\bar{a}_{\phi}^{2}\left(-302e_{-1}k_{-1}+126e_{-1}^{2}-84\pi e_{-1}+176k_{-1}^{2}+84\pi k_{-1}+21\pi^{2}\right)}{172032z_{+}^{6}\left(e_{-1}-k_{-1}\right)^{9}}+\ldots,\end{split} \tag{62}\] we obtain \[\begin{split}\frac{dA_{s}}{L_{\phi}L^{2}dl}=&\frac{4\left(e_{-1}-k_{-1}\right)^{2}}{l^{2}}-\frac{l\pi\bar{a}_{\phi}}{16z_{+}^{3}\left(e_{-1}-k_{-1}\right)^{2}}+\frac{9l^{2}\left(\bar{a}_{\phi}-1\right)}{40z_{+}^{4}\left(e_{-1}-k_{-1}\right)^{2}}\\ &+\frac{5l^{4}\bar{a}_{\phi}^{2}\left(80e_{-1}k_{-1}-80k_{-1}^{2}-21\pi^{2}\right)}{86016z_{+}^{6}\left(e_{-1}-k_{-1}\right)^{6}}+\ldots\\ =&\frac{8\pi^{3}}{l^{2}\Gamma\left(\frac{1}{4}\right)^{4}}-\frac{8l\bar{a}_{\phi}\Gamma\left(\frac{5}{4}\right)^{4}}{\pi^{2}z_{+}^{3}}+\frac{9(\bar{a}_{\phi}-1)l^{2}\Gamma\left(\frac{1}{4}\right)^{4}}{80\pi^{3}z_{+}^{4}}+\frac{5(20-21\pi)\bar{a}_{\phi}^{2}l^{4}\Gamma\left(\frac{1}{4}\right)^{12}}{688128\pi^{8}z_{+}^{6}}+\ldots,\end{split} \tag{63}\] where we have defined \(\bar{a}_{\phi}=1-(z_{+}a_{\phi})^{2}\); for the notation \(e_{-1},k_{-1}\), please refer to [26]. The expansion (63) agrees with the holographic entanglement entropy in [26], and the formula (51) remains valid in other dimensions, reproducing the holographic entanglement entropy there. The result is compared with our previous paper [26] in Appendix B.

## 6 Summary and discussion

We computed the holographic EE and the renormalized EE in the AdS soliton with a gauge potential for several dimensions. The disk-shaped minimal surface is dominant for small \(l\) and the cylinder shape for large \(l\), similar to [31]. The quantum phase transition occurs at a critical size of the subregion. The renormalized EE, a universal part of the EE independent of the cutoff, was computed by acting with differential operators on the EE [27]. By including modes with KK masses and taking the low energy limit, we continuously interpolated from the odd dimensional renormalized EE to the formula of the \(d-1\) dimensional renormalized EE.6 Indeed, the \(\phi\) circle shrinks to zero at the tip of the AdS soliton (\(z=z_{+}\)), which is probed at large \(l\). The logarithmic term is absent since there is no Weyl anomaly in odd dimensions. This is a sort of topology change in the entanglement entropy. In any dimension, massive modes with \(M_{0}l>1\) decouple as the energy decreases, as shown in Fig. 10, and product states are retained. In the high energy limit (\(l\ll L_{\phi}\)), the renormalized EE recovers the behavior of the original dimension, because the renormalized EE measures the degrees of freedom of a state with energy \(E\sim 1/l\). Because the degrees of freedom with Wilson lines contribute at large \(a_{\phi}\) and high energy, the renormalized EE changes slowly until the critical length (see Fig. 9). The paper [49] also tracked the entanglement entropy across dimensions and found transitions.

Footnote 6: Note that the topology of the subregion is not a single ball \(B^{d-1}\) but the product of a ball and a circle, \(B^{d-2}\times S^{1}\), with periodicity \(L_{\phi}\). It is convenient to interpret this \(S^{1}\) as one direction perpendicular to \(B^{d-2}\) with its endpoints identified.
In section 5, we analyzed striped surfaces for \(d=3\). Our results demonstrated that the renormalized EE is positive (non-negative) and satisfies the C-theorem. After dimensional reduction, the renormalized entanglement entropy of the 2d QFT with Kaluza-Klein modes is also consistent with the C-theorem of the 2-dimensional entropic \(c\)-function. We showed that for \(d=3\) the renormalized EE of a striped entangling boundary decreases monotonically and jumps to zero when the size of the entangling surface becomes large; it thus probes a first-order phase transition, and the corresponding boundary field theory runs into a product state at low energy scales. Unlike the generalized entropic \(c\)-function studied in [26], the renormalized EE here behaves monotonically even for large values of the gauge potential \(a_{\phi}\). Since \(a_{\phi}\) increases the degrees of freedom, and a larger \(a_{\phi}\) leads to a larger renormalized EE, the renormalized EE indeed counts the degrees of freedom (DOF) of the boundary field theory.

In our previous analysis [26], we studied the striped entangling surface and computed an entropic \(c\)-function. That entropic \(c\)-function is always positive and behaves non-monotonically, with the phase transition occurring at a critical length. The non-monotonic behavior is caused by the effective DOF of Wilson lines along the \(S^{1}\) direction; that is, the effective DOF increase the renormalized entanglement entropy. This implies that the Wilson line will decrease the mass of particles such as glueballs [48], because particles of small mass contribute to the entropic \(c\)-function at large \(l\). On the other hand, in higher dimensions the HREE can become negative near the phase transition point, unlike an entropic \(c\)-function; this can be regarded as an artifact of the large \(N\) and strongly coupled limit. The two quantities are similar in the presence of the quantum phase transition and in the effect of the Wilson lines: the gauge potential will decrease the mass of particles such as glueballs, so more DOF contribute to the HREE and cause it to increase at large distance.

## Acknowledgments

We would like to thank X. Chen and P. Zhang for helpful discussions. S.H. appreciates the financial support from the Fundamental Research Funds for the Central Universities, the Max Planck Partner Group, and the Natural Science Foundation of China (NSFC) Grants No. 12075101 and No. 12235016. This work is also supported by the National Natural Science Foundation of China (No. 12105113).

## Appendix A Hamilton-Jacobi equations

In this appendix, we give a brief review of the Hamilton-Jacobi method used to analyze the minimal surfaces in (25). We introduce the following action \[S=\int_{t_{1}}^{t_{2}}\mathcal{L}(q,\dot{q},t)dt. \tag{64}\] We allow the field \(q\) to vary at the boundary times \(t_{1}\) and \(t_{2}\), and we also allow shifts of the times \(t_{1}\) (\(t_{2}\)) to \(t_{1}^{\prime}\) (\(t_{2}^{\prime}\)), respectively.
The variation of the action becomes \[\begin{split}\delta S=&\int_{t_{1}^{\prime}}^{t_{2}^{\prime}}\mathcal{L}(q^{\prime},\dot{q}^{\prime},t)dt-\int_{t_{1}}^{t_{2}}\mathcal{L}(q,\dot{q},t)dt\\ =&\int_{t_{2}}^{t_{2}^{\prime}}\mathcal{L}(q^{\prime},\dot{q}^{\prime},t)dt+\int_{t_{1}}^{t_{2}}(\mathcal{L}(q^{\prime},\dot{q}^{\prime},t)-\mathcal{L}(q,\dot{q},t))dt+\int_{t_{1}^{\prime}}^{t_{1}}\mathcal{L}(q^{\prime},\dot{q}^{\prime},t)dt\\ =&\mathcal{L}(q^{\prime},\dot{q}^{\prime},t_{2})\delta t_{2}-\mathcal{L}(q^{\prime},\dot{q}^{\prime},t_{1})\delta t_{1}+\frac{\partial\mathcal{L}}{\partial\dot{q}}\delta q\Big|_{t_{2}}-\frac{\partial\mathcal{L}}{\partial\dot{q}}\delta q\Big|_{t_{1}},\end{split}\] where the EOM is used in the second line. We did not assume \(\delta q=0\) at the time boundaries \(t_{1}\) and \(t_{2}\). We use the following transformation: \[\frac{\partial\mathcal{L}}{\partial\dot{q}}\delta q\Big|_{t_{i}}=\frac{\partial\mathcal{L}}{\partial\dot{q}_{i}}\delta q_{i}-\frac{\partial\mathcal{L}}{\partial\dot{q}_{i}}\dot{q}_{i}\delta t_{i}, \tag{65}\] where \(\delta q_{i}=q^{\prime}(t_{i}^{\prime})-q(t_{i})\). The variation of the action then becomes a total variation of the boundary data: \[\delta S=-\mathcal{H}\delta t_{2}+\mathcal{H}\delta t_{1}+\frac{\partial\mathcal{L}}{\partial\dot{q}}\delta q_{2}-\frac{\partial\mathcal{L}}{\partial\dot{q}}\delta q_{1}. \tag{66}\] It shows that \[\frac{\partial S}{\partial t_{2}}=-\mathcal{H},\quad\frac{\partial S}{\partial t_{1}}=\mathcal{H},\quad\frac{\partial S}{\partial q_{2}}=\frac{\partial\mathcal{L}}{\partial\dot{q}_{2}}=p_{2},\quad\frac{\partial S}{\partial q_{1}}=-\frac{\partial\mathcal{L}}{\partial\dot{q}_{1}}=-p_{1}. \tag{67}\] Thus, along the classical motion the variation (66) reduces to a total variation of the boundary data, which justifies treating the on-shell action \(S\) as a function of \((q_{i},t_{i})\) satisfying (67).

## Appendix B Hamilton-Jacobi equations for small \(l\)

In this appendix, we derive the Hamilton-Jacobi equation for a striped shape for \(d=5,\;6\). When we restrict to small subregion size \(l\), analytic expressions are available. The Hamilton-Jacobi equation is given by eqs. (49) and (51). We consider \(d=5\) first.
The turning point, \(z_{t}\) is expanded in the small \(l\) limit (see [26]), \[\begin{split} z_{t}=&\frac{5l\Gamma\left(\frac{9}{ 8}\right)}{2\sqrt{\pi}\Gamma\left(\frac{13}{8}\right)}-\frac{15625l^{6}\bar{a }_{\phi}\left(3\Gamma\left(\frac{3}{4}\right)\Gamma\left(\frac{9}{8}\right)^{ 6}\Gamma\left(\frac{13}{8}\right)-5\Gamma\left(\frac{9}{8}\right)^{7}\Gamma \left(\frac{5}{4}\right)\right)}{1536\pi^{3}z_{h}^{5}\Gamma\left(\frac{3}{4} \right)\Gamma\left(\frac{13}{8}\right)^{7}}\\ &-\frac{1953125l^{9}(\bar{a}_{\phi}-1)\Gamma\left(\frac{9}{8} \right)^{9}}{2304\pi^{9/2}z_{h}^{8}\Gamma\left(\frac{13}{8}\right)^{9}}+O(l^ {11}).\end{split} \tag{68}\] The Hamilton-Jacobi equation becomes \[\begin{split}&\frac{dA_{s}}{V_{2}L_{\phi}L^{4}dl}=\frac{\sqrt{f(z_ {t})}}{z_{t}^{4}}\\ =&\frac{16\pi^{2}\Gamma\left(\frac{13}{8}\right)^{4 }}{625l^{4}\Gamma\left(\frac{9}{8}\right)^{4}}-\frac{25l\bar{a}_{\phi}\Gamma \left(\frac{9}{8}\right)^{2}\Gamma\left(\frac{5}{4}\right)}{12\sqrt{\pi}z_{h} ^{5}\Gamma\left(\frac{3}{4}\right)\Gamma\left(\frac{13}{8}\right)^{2}}+\frac{ 15625(\bar{a}_{\phi}-1)l^{4}\Gamma\left(\frac{9}{8}\right)^{4}}{288\pi^{2}z_{ h}^{8}\Gamma\left(\frac{13}{8}\right)^{4}}+O(l^{6}).\end{split} \tag{69}\] For \(d=6\), \(z_{t}\) is expanded in terms of small \(l\) as follows: \[\begin{split} z_{t}=&\frac{3l\Gamma\left(\frac{11}{ 10}\right)}{\sqrt{\pi}\Gamma\left(\frac{8}{5}\right)}-\frac{2187\bar{a}_{\phi}l ^{7}\left(7\Gamma\left(\frac{7}{10}\right)\Gamma\left(\frac{8}{5}\right)-12 \Gamma\left(\frac{11}{10}\right)\Gamma\left(\frac{6}{5}\right)\right)\Gamma \left(\frac{11}{10}\right)^{7}}{70\pi^{7/2}z_{h}^{6}\Gamma\left(\frac{7}{10} \right)\Gamma\left(\frac{8}{5}\right)^{8}}\\ &-\frac{885735(\bar{a}_{\phi}-1)l^{11}\Gamma\left(\frac{11}{10} \right)^{11}}{22\pi^{11/2}z_{h}^{10}\Gamma\left(\frac{8}{5}\right)^{11}}+O(l^ {13}).\end{split} \tag{70}\] The Hamilton-Jacobi equation becomes \[\frac{dA_{s}}{V_{3}L_{\phi}L^{5}dl}=\frac{\sqrt{f(z_{t})}}{z_{t}^{ 5}}=\frac{\pi^{5/2}\Gamma\left(\frac{8}{5}\right)^{5}}{243l^{5}\Gamma\left( \frac{11}{10}\right)^{5}}-\frac{18l\bar{a}_{\phi}\Gamma\left(\frac{11}{10} \right)^{2}\Gamma\left(\frac{6}{5}\right)}{7\sqrt{\pi}z_{h}^{6}\Gamma\left( \frac{7}{10}\right)\Gamma\left(\frac{8}{5}\right)^{2}}\] \[+l^{5}(\bar{a}_{\phi}-1)\frac{4374\Gamma\left(\frac{11}{10}\right) ^{5}}{11\pi^{5/2}z_{h}^{10}\Gamma\left(\frac{8}{5}\right)^{5}}+O(l^{7}). \tag{71}\]
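For reference, the leading small-\(l\) coefficients of (69) and (71) are straightforward to evaluate numerically. A small sketch of our own using `scipy.special.gamma`, in the units of (69) and (71):

```python
import numpy as np
from scipy.special import gamma as G

def dAs_leading(l, d):
    """Leading O(l^{1-d}) term of dA_s/dl per unit transverse volume."""
    if d == 5:   # first term of eq. (69)
        return 16*np.pi**2*G(13/8)**4/(625*l**4*G(9/8)**4)
    if d == 6:   # first term of eq. (71)
        return np.pi**2.5*G(8/5)**5/(243*l**5*G(11/10)**5)
    raise ValueError("only d = 5, 6 are tabulated here")

print(dAs_leading(0.1, 5), dAs_leading(0.1, 6))
```

These leading terms are \(a_{\phi}\)-independent, consistent with the expansions: the gauge potential first enters at the subleading orders through \(\bar{a}_{\phi}\).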
2309.09802
DFL-TORO: A One-Shot Demonstration Framework for Learning Time-Optimal Robotic Manufacturing Tasks
This paper presents DFL-TORO, a novel Demonstration Framework for Learning Time-Optimal Robotic tasks via One-shot kinesthetic demonstration. It aims at optimizing the process of Learning from Demonstration (LfD), applied in the manufacturing sector. As the effectiveness of LfD is challenged by the quality and efficiency of human demonstrations, our approach offers a streamlined method to intuitively capture task requirements from human teachers, by reducing the need for multiple demonstrations. Furthermore, we propose an optimization-based smoothing algorithm that ensures time-optimal and jerk-regulated demonstration trajectories, while also adhering to the robot's kinematic constraints. The result is a significant reduction in noise, thereby boosting the robot's operation efficiency. Evaluations using a Franka Emika Research 3 (FR3) robot for a variety of tasks further substantiate the efficacy of our framework, highlighting its potential to transform kinesthetic demonstrations in contemporary manufacturing environments. Moreover, we take our proposed framework into a real manufacturing setting operated by an ABB YuMi robot and showcase its positive impact on LfD outcomes by performing a case study via Dynamic Movement Primitives (DMPs).
Alireza Barekatain, Hamed Habibi, Holger Voos
2023-09-18T14:18:38Z
http://arxiv.org/abs/2309.09802v3
# DFL-TORO: A One-Shot Demonstration Framework for Learning Time-Optimal Robotic Manufacturing Tasks

###### Abstract

This paper presents DFL-TORO, a novel Demonstration Framework for Learning Time-Optimal Robotic tasks via One-shot kinesthetic demonstration. It aims at optimizing the process of Learning from Demonstration (LfD), applied in the manufacturing sector. As the effectiveness of LfD is challenged by the quality and efficiency of human demonstrations, our approach offers a streamlined method to intuitively capture task requirements from human teachers, by reducing the need for multiple demonstrations. Furthermore, we propose an optimization-based smoothing algorithm that ensures time-optimal and jerk-regulated demonstration trajectories, while also adhering to the robot's kinematic constraints. The result is a significant reduction in noise, thereby boosting the robot's operation efficiency. Evaluations using a Franka Emika Research 3 (FR3) robot for a reaching task further substantiate the efficacy of our framework, highlighting its potential to transform kinesthetic demonstrations in contemporary manufacturing environments.

## I Introduction

In the manufacturing industry, the transition from mass production to mass customization [1] necessitates flexible robot programming adaptable to frequently changing tasks [2]. Traditional programming, with its reliance on robot experts, increases costs and downtime [3]. As a solution, Learning from Demonstration (LfD) [4] allows robots to learn tasks flexibly from non-experts [5]. However, deploying LfD in practical manufacturing settings poses performance challenges [4, 6], which motivate the current work. In robotic manufacturing, performance can be evaluated based on "Implementation Efficiency" and "Operation Efficiency" metrics [4]. Implementation efficiency pertains to the resources required for teaching a task, while operation efficiency considers production speed against ongoing costs such as maintenance and robot wear and tear. Often, the quality of demonstrations and the procedure for acquiring them are sub-optimal with respect to these metrics [7]. Our focus in this paper is to address the challenges faced when attempting to acquire information-rich demonstrations in an efficient manner, in order to enable LfD algorithms to optimally learn and generalize tasks.

Human demonstrations inherently dictate the requirements of a "correct" execution of a task. While LfD's aim is not to mirror the demonstration but to grasp the core recipe, it is vital to assign significance to each segment of the demonstration. This highlights how much variation the robot can tolerate while still fulfilling the task reliably. For instance, in a Reaching task, high precision is required while approaching a target, but the initial motion can vary without undermining reliability. This variation can be encoded as a set of tolerance values, capturing the task's pivotal points and providing flexibility. However, providing such detailed demonstrations is challenging. Existing methods rely on statistics over multiple demonstrations to deduce these tolerances, which reduces implementation efficiency. Humans demonstrate tasks slowly due to the cognitive complexity of illustrating various aspects simultaneously [8]. As minimizing the execution time is a crucial factor, it is essential to obtain the optimal timing law. Most existing LfD methods address this by letting humans speed up initial demonstrations [9].
However, the robot's kinematic limits are often overlooked, with no guarantee of achieving the optimal time in practice. Complicating this, human demonstrations can be noisy from hand trembling or sensor errors, resulting in jerky trajectories. Speeding up such demonstrations can amplify the irregularities, causing erratic motions, raising maintenance costs, and wasting energy. Moreover, LfD algorithms are likely to overfit and mimic this noise. These issues collectively diminish operation efficiency, underscoring the importance of smooth and time-optimal demonstrations.

Given the outlined challenges, this paper introduces DFL-TORO, a novel **D**emonstration **F**ramework for **L**earning **T**ime-**O**ptimal **R**obotic tasks via **O**ne-shot kinesthetic demonstration. DFL-TORO intuitively captures human demonstrations and obtains task tolerances, yielding smooth, jerk-regulated, timely, and noise-free trajectories, as illustrated in Fig. 1.

Fig. 1: Advantages of DFL-TORO

Our main contributions are as follows:

* An optimization-based smoothing algorithm, considering the robot's kinematic limits and task tolerances, which delivers time-and-jerk optimal trajectories and filters out the noise, enhancing operation efficiency.
* A method for intuitive refinement of velocities and acquisition of tolerances, reducing the need for repetitive demonstrations and boosting operational efficiency.
* Evaluation of DFL-TORO for a Reaching task via Franka Emika Research 3 (FR3) robot, highlighting its superiority over conventional kinesthetic demonstrations, using Dynamic Movement Primitives (DMPs) [10].

## II Background and Related Work

Recent literature has shifted from focusing on robustness against sub-optimal demonstrations [11] towards assessing the information gain and quality of demonstrations [7, 12]. To do so, several studies have employed incremental learning to simplify the demonstration process, enabling humans to teach task components in phases. The authors in [13] let humans demonstrate path and timing separately, by allowing them to adjust the speed of their initial demonstration. The framework proposed in [9] enables path and speed refinement through kinesthetic guidance, while the one in [8] uses teleoperated feedback for efficient task execution. Building on these, our approach incorporates the robot's kinematic limits into an optimization problem, solving for the best feasible timing law for the demonstrated path. Then, instead of merely speeding up, we permit users to "slow down" demonstrations until execution is reliable, hence ensuring optimal timing.

Task variability tolerances are primarily used to adjust a robot's impedance, e.g., low impedance when there is high variability [14], but they are also pivotal for noise removal. Given that demonstrations do not have a set ground truth, distinguishing signal from noise is challenging. Using tolerances as trajectory guidelines ensures the trajectory meets the desired accuracy while preserving the demonstration's authenticity [15]. The study in [16] offers a detailed comparison of various smoothing algorithms and of the noise-removing effect of RBF-based methods such as DMPs. However, without careful hyperparameter tuning, such methods risk overfitting and noise replication. The study concludes that optimization-based smoothing is more effective in preserving demonstration features and improving information gain. Within this context, some works have omitted tolerances [17] or used fixed user-defined tolerances [18].
However, our approach extracts these tolerances directly from human demonstrations and incorporates them into the optimization. To deduce tolerances from human demonstrations, it is common to consider the variance of multiple demonstrations as an indicator of variability tolerance [15, 19, 20]. To enhance implementation efficiency, we propose a novel approach to discern tolerances from a single demonstration. Grounded in the psychological observation that humans naturally slow down when aiming for precision [21], our method captures tolerances during periods where the demonstrator slows down. Although demonstrations are presented in joint space, we extract tolerances in Cartesian space, emphasizing the end-effector pose as the task's crucial point.

It has remained a challenge to simultaneously optimize for time and jerk, as doing so might create an infeasible problem. This stems from the fact that applying path constraints requires knowing the relative time at which each part of the path is reached. On the other hand, such timings cannot be treated as decision variables, since the underlying trajectory representation is a piece-wise polynomial [22]. Also, the original timing of the demonstration is not optimal. We address these challenges by solving the problem in two steps, where first the timing is optimized and then the jerk profile is regulated by removing the noise.

## III DFL-TORO Methodology

The DFL-TORO workflow (Fig. 2) is detailed as follows:

**A)**: The human teacher provides an inherently noisy demonstration \(q_{o}(t)\) to an \(n\) DoF manipulator via kinesthetic guidance, where the underlying path is extracted and represented as \(m\) waypoints, denoted by \(w_{i}\) for \(\forall i=1,\cdots,m\). The waypoints encode the joint configuration \(Q_{w_{i}}\in\mathbb{R}^{n}\), as well as the end-effector's position \(p_{w_{i}}=[x_{w_{i}},y_{w_{i}},z_{w_{i}}]^{T}\in\mathbb{R}^{3}\) and orientation \(\theta_{w_{i}}\in\mathbb{S}^{3}\).

**B)**: The "Time Optimization" module computes the ideal timing \(\tau_{i}\in[0,1]\), \(\forall i=1,\cdots,m\), for each waypoint.

**C)**: The "Trajectory Generation" module solves a comprehensive optimization problem to minimize the jerk. Given that specific task tolerances are not extracted yet, default tolerance values \(\epsilon_{p}^{d}\in\mathbb{R}^{3}\) and \(\epsilon_{\theta}^{d}\in\mathbb{R}\) are incorporated to generate an initial trajectory. \(\epsilon_{p}^{d}\) represents the tolerance in Cartesian position, while \(\epsilon_{\theta}^{d}\) signifies the angular difference tolerance. Consequently, the outcome of this hierarchical optimization procedure is a minimum-time, smooth trajectory \(q_{f}(t)\).

**D)**: The "Refinement Phase" allows the teacher to interactively slow down and correct the timing law. The robot replays the trajectory, allowing the human teacher to visually assess the execution speed. The teacher can pinpoint areas where the speed is either unreliable or unsafe and determine which segments require slowing down. Since the current trajectory \(q_{f}(t)\) executes very rapidly, it leaves no time for humans to observe the motion and provide feedback. Therefore, the Refinement Phase operates at a reduced speed \(V_{0}^{r}\in\mathbb{R}_{+}\), giving humans enough reaction time to make changes.

Fig. 2: DFL-TORO workflow. Red arrows indicate interactive procedures with the human-robot in the loop.
**E)**: The Refinement Phase provides revised timings for the trajectory, denoted as \(\tau_{i}^{r}\in[0,1]\), as well as the task tolerances \(\epsilon_{p}^{i}\in\mathbb{R}^{3}\) and \(\epsilon_{\theta}^{i}\in\mathbb{R}\) extracted from human feedback. These updated values are subsequently fed to the Trajectory Generation module, which leads to a fine-tuned trajectory \(q_{f}^{r}(t)\) in accordance with the new timings and tolerances.

The steps of the proposed demonstration framework are provided in Algorithm 1 and elaborated in the following.

### _Optimization-based Smoothing_

We employ B-Splines to represent trajectories in our approach. B-Splines are ideal for robot manipulator trajectory optimization due to their smoothness, local control, and efficient computation [23]. Their piecewise polynomial nature ensures smooth motion paths, making them highly suitable for complex motion planning. A B-Spline curve is defined as a linear combination of control points \(P_{i^{*}},i^{*}=1,\cdots,\bar{n}\) and B-Spline basis functions \(N_{i^{*},k}(s)\). The control points \(P_{i^{*}}\) determine the shape of the curve. A B-Spline \(\xi(s)\) of \(k^{th}\) order with \(\bar{n}\) control points \(P_{1},P_{2},...,P_{\bar{n}}\) is expressed as: \[\xi(s)=\sum_{i^{*}=1}^{\bar{n}}N_{i^{*},k}(s)P_{i^{*}}, \tag{1}\] where \(s\in[0,1]\) is the normalized time. Let \(u_{1}\leq u_{2}\leq\ldots\leq u_{\bar{m}}\) be the knots with \(\bar{m}=\bar{n}+k\), and \[N_{i^{*},1}(s)=\begin{cases}1&\text{if }u_{i^{*}}\leq s<u_{i^{*}+1}\\ 0&\text{otherwise}\end{cases},\] \[N_{i^{*},k}(s)=\frac{s-u_{i^{*}}}{u_{i^{*}+k-1}-u_{i^{*}}}N_{i^{*},k-1}(s)+\frac{u_{i^{*}+k}-s}{u_{i^{*}+k}-u_{i^{*}+1}}N_{i^{*}+1,k-1}(s).\] \(\xi(s)\) is associated with a duration parameter \(T\), which maps the normalized time \(s\) into the actual time \(t\) [23]. \(P_{i^{*}}\) and \(T\) are the optimization variables. In the following, we describe the two modules of optimization-based smoothing.

#### III-A1 Time Optimization Module

The objective of this optimizer is to find the minimum-time trajectory \(q_{t}(t)\) that strictly passes through all the waypoints \(Q_{w_{i}}\). At this stage, we ignore constraints related to acceleration and jerk since the noise is still present in the path; adding such constraints would prevent the optimizer from finding the ideal timings. B-Spline sub-trajectories \(\xi_{j}\) for \(\forall j=1,\cdots,m-1\) are used to represent the trajectory between every two adjacent waypoints \(w_{j}\) and \(w_{j+1}\), with control points \(P_{i^{*},j}\) and durations \(T_{j}\). Then, the time optimization problem is formulated as: \[(P_{i^{*},j}^{*},T_{j}^{*})=\operatorname*{arg\,min}_{P_{i^{*},j},T_{j}}\sum_{j=1}^{m-1}T_{j}, \tag{2a}\] \[q_{j}(t_{j})=\xi_{j}\Big(\frac{t_{j}}{T_{j}}\Big), \tag{2b}\] \[q_{\text{min}}\leq q_{j}(t_{j})\leq q_{\text{max}}, \tag{2c}\] \[v_{\text{min}}\leq\dot{q}_{j}(t_{j})\leq v_{\text{max}}, \tag{2d}\] \[\xi_{j}(0)=Q_{w_{j}}, \tag{2e}\] \[\xi_{j}(1)=Q_{w_{j+1}}, \tag{2f}\] \[\dot{\xi}_{1}(0)=\dot{\xi}_{m-1}(1)=0, \tag{2g}\] for \(\forall t_{j}\in[0,T_{j}]\). (2b) relates the normalized and actual trajectories. We set bounds \(q_{\text{min}}\), \(q_{\text{max}}\), \(v_{\text{min}}\) and \(v_{\text{max}}\in\mathbb{R}^{n}\) for joint position and velocity in (2c) and (2d), respectively. Note that "\(\leq\)" represents element-wise inequality of vectors. Continuity constraints (2e) and (2f) are applied at the intersection of these sub-trajectories. (2g) enforces the trajectory to start and come to rest with zero velocity.
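For concreteness, the Cox-de Boor recursion defining \(N_{i^{*},k}(s)\) above can be transcribed directly. A short Python sketch of our own (0-indexed, with the clamped-uniform knot layout used later in Sec. IV-A; note the knot count matches \(\bar{m}=\bar{n}+k\)):

```python
import numpy as np

def bspline_basis(i, k, s, u):
    """N_{i,k}(s) for knot vector u (0-indexed i, order k as in the text)."""
    if k == 1:
        return 1.0 if u[i] <= s < u[i + 1] else 0.0
    left = right = 0.0
    if u[i + k - 1] > u[i]:       # guard repeated knots (0/0 -> 0 convention)
        left = (s - u[i])/(u[i + k - 1] - u[i])*bspline_basis(i, k - 1, s, u)
    if u[i + k] > u[i + 1]:
        right = (u[i + k] - s)/(u[i + k] - u[i + 1])*bspline_basis(i + 1, k - 1, s, u)
    return left + right

def bspline(s, P, k, u):
    """xi(s) = sum_i N_{i,k}(s) P_i, eq. (1)."""
    P = np.asarray(P)
    return sum(bspline_basis(i, k, s, u)*P[i] for i in range(len(P)))

n_bar, k = 6, 4   # toy sizes; clamped-uniform knots, len(u) = n_bar + k
u = np.r_[np.zeros(k - 1), np.linspace(0, 1, n_bar - k + 2), np.ones(k - 1)]
P = np.linspace(0.0, 1.0, n_bar)        # toy 1-DoF control points
print(bspline(0.37, P, k, u))
```

The recursion is quadratic in \(k\) per evaluation; production code would use a library spline, but this form mirrors eq. (1) one-to-one.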
The optimization yields an unrealistic trajectory due to the remaining noise, with high acceleration and jerk values. (2a) determines the ideal timings \(T_{j}^{*}\) to move through all waypoints with the highest velocity, which provides the normalized times \(\tau_{i}\) defined as \[\tau_{i}=\begin{cases}\frac{\sum_{j=1}^{i-1}T_{j}^{*}}{\sum_{j=1}^{m-1}T_{j}^{*}}&i\geq 2\\ 0&i=1\end{cases}. \tag{3}\] These \(\tau_{i}\) set the groundwork for the Trajectory Generation module, which addresses the full optimization problem.

#### III-A2 Trajectory Generation Module

Given \(\tau_{i}\), we fit one B-Spline across all the waypoints, denoted as \(\xi_{f}\) with control points \(P_{i,f}\) and duration variable \(T_{f}\). We match the number of control points to the number of waypoints, i.e., \(\bar{n}=m\), giving the optimizer flexibility to locally adjust the trajectory around each waypoint. The cost function is formulated as: \[J_{f}=\alpha T_{f}+\beta\int_{0}^{1}||\dddot{\xi}_{f}||^{2}ds+\gamma\sum_{i=1}^{m}||P_{i,f}-Q_{w_{i}}||^{2}, \tag{4}\] where \(\alpha,\beta\) and \(\gamma\) are positive weights. The term \(\int_{0}^{1}||\dddot{\xi}_{f}||^{2}ds\) minimizes the normalized jerk profile, exploiting the tolerances of the waypoints. This strategy is crucial for eliminating noise while respecting the tolerances. The term \(\sum_{i=1}^{m}||P_{i,f}-Q_{w_{i}}||^{2}\) ensures that control points align closely with the original joint configurations at the waypoints. Given the robot manipulator's kinematic redundancy, an end-effector pose can correspond to multiple configurations; this term limits the robot's configuration null space, prompting the optimizer to remain near the initial configuration. The trajectory generation problem is formulated as follows: \[(P_{i,f}^{*},T_{f}^{*})=\operatorname*{arg\,min}_{P_{i,f},T_{f}}J_{f}, \tag{5a}\] \[q_{f}(t)=\xi_{f}\Big(\frac{t}{T_{f}}\Big), \tag{5b}\] \[q_{\text{min}}\leq q_{f}(t)\leq q_{\text{max}}, \tag{5c}\] \[v_{\text{min}}\leq\dot{q}_{f}(t)\leq v_{\text{max}}, \tag{5d}\] \[a_{\text{min}}\leq\ddot{q}_{f}(t)\leq a_{\text{max}}, \tag{5e}\] \[J_{\text{min}}\leq\dddot{q}_{f}(t)\leq J_{\text{max}}, \tag{5f}\] \[\xi_{f}(0)=Q_{w_{1}}, \tag{5g}\] \[\xi_{f}(1)=Q_{w_{m}}, \tag{5h}\] \[\dot{\xi}_{f}(0)=\dot{\xi}_{f}(1)=0, \tag{5i}\] \[(p_{f}(t),\theta_{f}(t))=F_{K}(q_{f}(t)), \tag{5j}\] \[-\epsilon_{p}^{i}\leq p_{f}(\tau_{i}T_{f})-p_{w_{i}}\leq\epsilon_{p}^{i}, \tag{5k}\] \[\text{QuatDiff}(\theta_{f}(\tau_{i}T_{f}),\theta_{w_{i}})\in[0,\epsilon_{\theta}^{i}], \tag{5l}\] for \(\forall t\in[0,T_{f}]\). We have introduced acceleration and jerk bounds \(a_{\text{min}}\), \(a_{\text{max}}\), \(J_{\text{min}}\) and \(J_{\text{max}}\in\mathbb{R}^{n}\) in (5e) and (5f), respectively, to ensure the trajectory's feasibility. Constraints (5g)-(5i) enforce the trajectory to start at \(Q_{w_{1}}\) and come to rest at \(Q_{w_{m}}\) with zero velocity. (5j) represents the forward kinematics \(F_{K}\), which is used to express the tolerances in the end-effector's Cartesian space with \(p_{f}(t)=[x_{f}(t),y_{f}(t),z_{f}(t)]\in\mathbb{R}^{3}\) and \(\theta_{f}(t)\in\mathbb{S}^{3}\). The goal is to limit the position and angle deviations of the end-effector at each waypoint \(w_{i}\) to \(\epsilon_{p}^{i}\) and \(\epsilon_{\theta}^{i}\) at the normalized time \(\tau_{i}\), via (5k) and (5l), respectively. In (5k), "\(\leq\)" represents element-wise inequality of vectors. \(\text{QuatDiff}(.,.)\in[0,\pi]\) is a conventional function to compute the absolute angle difference of two quaternions [24].
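While the full constrained problem (5) is solved with an NLP solver (Drake, per Sec. IV-A), the cost (4) itself is easy to evaluate for a candidate spline. Below is a sketch of our own using scipy's B-Spline representation (scipy uses degree = order − 1, and its knot count \(\bar{n}+k\) matches \(\bar{m}=\bar{n}+k\)); it only scores a candidate, it is not the solver:

```python
import numpy as np
from scipy.interpolate import BSpline

def cost_Jf(P, T, Q_w, u, k=4, alpha=1.0, beta=0.04, gamma=1.0, n_quad=400):
    """J_f of eq. (4); P, Q_w: (n_bar, n_dof) arrays, u: knot vector."""
    P = np.asarray(P, dtype=float)
    spl = BSpline(u, P, k - 1)            # degree k - 1 for an order-k spline
    jerk = spl.derivative(3)              # d^3 xi / ds^3 in normalized time s
    s = np.linspace(0.0, 1.0, n_quad)
    jerk_term = np.trapz(np.sum(jerk(s)**2, axis=1), s)   # int ||xi'''||^2 ds
    prox_term = np.sum((P - np.asarray(Q_w))**2)
    return alpha*T + beta*jerk_term + gamma*prox_term
```

Because the jerk integral is taken in normalized time \(s\), shrinking \(T_{f}\) does not hide jerk; the \(\alpha T_{f}\) and \(\beta\)-terms trade off explicitly, which is the design intent of (4).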
Solving (5a) yields \(P_{i,f}^{*}\), with which \(q_{f}(t)\) is computed using (1) and (5b).

### _Refinement Phase_

During the Refinement Phase, the teacher can practically slow down the time progression and hence the trajectory velocity. To do so, the trajectory \(q_{f}(t)\) with duration \(T_{f}\) is replayed at a reduced speed \(V_{0}^{r}=\frac{1}{\eta T_{f}}\) for \(\eta>1\), letting the human teacher observe and provide interactive feedback in the loop. This is achieved through a teleoperated command, \(C(t)\in[-1,0]\), which functions like a brake pedal. It allows the teacher to safely and intuitively interact with the robot in real-time, controlling its velocity during execution. To attenuate noise or abrupt changes in \(C(t)\), we pass it as the attractor point of a virtual spring-damper system: \[\tau\ddot{R}(t)=K(C(t)-R(t))-D\dot{R}(t). \tag{6}\] By carefully setting the values of \(\tau\), \(K\), and \(D\), we can configure (6) to be critically damped, as well as adjust its settling time, which in turn influences the delay in responding to the human command. The output \(R(t)\in[-1,0]\) is used as a deceleration term to adjust the trajectory velocity, as follows.

#### III-B1 Velocity Adjustment

Since \(q_{f}(t)=\xi_{f}(s(t))\) where \(s(t)=V_{0}^{r}\cdot t\), by adjusting \(s(t)\) we can directly modify the timing law of the trajectory \(q_{f}(t)\). Slowing down specific portions of \(\xi_{f}(s(t))\) locally can be done by incorporating \(R(t)\) as a notion of time deceleration, which leads to a modified mapping \(s_{r}(t)\). We compute this new mapping throughout the refinement phase via a straightforward kinematic model: \[\begin{cases}\dot{v}(t)=R(t)\\ \dot{s}_{r}(t)=v(t)\end{cases},\quad\begin{aligned}&v(t)\in[V_{min}^{r},V_{0}^{r}],\\ &s_{r}(t)\in[0,1]\end{aligned} \tag{7}\] To prevent the robot from completely stopping, we bound \(v(t)\) to a minimum execution speed \(V_{min}^{r}\in\mathbb{R}_{+}\). \(s_{r}(t)\) and \(\tau_{i}\) are used to determine the updated timing \(\tau_{i}^{r}\) as \[\tau_{i}^{r}=\frac{s_{r}^{-1}(\tau_{i})}{s_{r}^{-1}(\tau_{m})}, \tag{8}\] where \(s_{r}^{-1}\) is the inverse of \(s_{r}\) obtained via interpolation. It is worth mentioning that refinement at the reduced speed \(V_{0}^{r}\) applies the same proportional adjustment to the real execution speed, since the \(\tau_{i}^{r}\) are in normalized time.

#### III-B2 Tolerance Extraction

To extract the tolerances, we posit a direct correlation between the value of \(R(t)\) and the desired level of accuracy at a given point on the trajectory. That is, the harder the brake pedal is pressed, the more critical that portion of the trajectory, necessitating increased precision. This is captured by introducing the functions \(\Gamma_{p}(t)\) and \(\Gamma_{\theta}(t)\), defined as: \[\Gamma_{\kappa}(t)=(\epsilon_{\kappa}^{max}-\epsilon_{\kappa}^{min})(1-(-R(t))^{n+\zeta})+\epsilon_{\kappa}^{min}, \tag{9}\] where \(\kappa\in\{p,\theta\}\). In (9), \(R(t)=0\) is associated with the maximum tolerances \(\epsilon_{p}^{max}\) and \(\epsilon_{\theta}^{max}\), whereas \(R(t)=-1\) corresponds to the minimum tolerances \(\epsilon_{p}^{min}\) and \(\epsilon_{\theta}^{min}\). The parameters \(\zeta\) and \(n\) determine the shape and curvature of \(\Gamma_{\kappa}\). By computing \(\Gamma_{\kappa}(\tau_{i})\) for each waypoint, we extract the requisite tolerances and pass them along with \(\tau_{i}^{r}\) for re-optimization.
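The refinement loop (6)-(9) is a small discrete-time system and is easy to prototype. Below is a sketch of our own with explicit-Euler updates; the gains follow Sec. IV-A, while the scalar tolerance bounds and the exponent \(\zeta=1\) are illustrative assumptions:

```python
import numpy as np

def refine(C, dt, V0, Vmin, tau=0.1, K=100.0, D=20.0,
           eps_max=0.05, eps_min=0.01, n=7, zeta=1.0):
    """Simulate (6)-(7) and the tolerance map (9) for a brake profile C[k]."""
    R, Rdot, v, s_r = 0.0, 0.0, V0, 0.0
    R_hist, s_hist, tol_hist = [], [], []
    for c in C:                                   # c in [-1, 0]
        Rddot = (K*(c - R) - D*Rdot)/tau          # eq. (6)
        Rdot += dt*Rddot
        R = float(np.clip(R + dt*Rdot, -1.0, 0.0))
        v = float(np.clip(v + dt*R, Vmin, V0))    # eq. (7): v-dot = R
        s_r = min(s_r + dt*v, 1.0)                # eq. (7): s_r-dot = v
        tol = (eps_max - eps_min)*(1.0 - (-R)**(n + zeta)) + eps_min  # eq. (9)
        R_hist.append(R); s_hist.append(s_r); tol_hist.append(tol)
    return np.array(R_hist), np.array(s_hist), np.array(tol_hist)

# a half-second brake pulse during a replay at reduced speed V0
C = np.zeros(1000); C[300:350] = -1.0
R, s_r, tol = refine(C, dt=0.01, V0=0.1, Vmin=0.02)
```

Inverting the resulting `s_r` samples by interpolation then yields the updated timings \(\tau_{i}^{r}\) of (8).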
## IV Validation Experiments

The effectiveness of DFL-TORO is assessed on a 7 DoF FR3 robot in two reaching task scenarios. The performance of Optimization-based Smoothing is analyzed in Sec. IV-B, covering stages (B) and (C) of Algorithm 1. In Sec. IV-C, we examine the Refinement Phase (stages (D) and (E)). Using DMP as the learning method, a baseline comparison to the state-of-the-art kinesthetic demonstration is conducted in Sec. IV-D; DMP is chosen because it requires only one demonstration for training. Nevertheless, our proposed method is generic and can be applied to any LfD approach. Note that in order to meaningfully evaluate trajectories with different durations, the Maximum Absolute Normalized Jerk (MANJ) in normalized time \(s\in[0,1]\) is used. Furthermore, we illustrate only one axis of the position trajectory for brevity.

### _Experimental Setup_

Demonstrations are recorded through kinesthetic guidance using a ROS interface at a frequency of 10 Hz. For teleoperation, we utilize a PS5 wireless controller, receiving continuous brake commands. Waypoints are automatically selected based on an end-effector movement threshold, requiring a shift of at least 1 cm or 0.1 radians for Reaching Task #1 (RT1) and Reaching Task #2 (RT2). The kinematic limits of FR3 are obtained from the robot's datasheet [25]. B-Splines of order \(k=4\) are selected for smoothness up to the velocity level. \(u_{1},\cdots,u_{\bar{m}}\) are distributed in a clamped uniform manner [23] to create the basis functions in (1). The Time Optimization and Trajectory Generation modules are implemented using the Drake toolbox [26]. The weights in (4) are set as \(\alpha=1\), \(\beta=0.04\), and \(\gamma=1\). Finally, default tolerance values are chosen as \(\epsilon_{p}^{d}=[2,2,2]\) cm and \(\epsilon_{\theta}^{d}=0.1\) radians. In the Refinement Phase, we select \(\eta=5\), \(V_{min}^{r}=0.2V_{0}^{r}\), \(\tau=0.1\), \(D=20\), and \(K=100\). Also, we choose \(\epsilon_{p}^{max}=[5,5,5]\) cm, \(\epsilon_{p}^{min}=[1,1,1]\) cm, \(\epsilon_{\theta}^{max}=0.3\) radians, and \(\epsilon_{\theta}^{min}=0.1\) radians. For illustration, \(p_{f}^{r}=[x_{f}^{r}(t),y_{f}^{r}(t),z_{f}^{r}(t)]\), representing the Cartesian position trajectory associated with \(q_{f}^{r}(t)\), is considered. Besides lowering tolerances in the precise portions, we assign the maximum tolerance for \(C(t)=0\), providing more freedom for the Trajectory Generation module to adapt the motion. This is best shown via RT2, where higher tolerances allow the optimizer to find better solutions. We implement DMP in the joint space via [27], to study the effectiveness of DFL-TORO on RT1 in terms of kinematic feasibility and jerk regulation.

Fig. 3: Evolution of RT1 from original to smooth time-optimal joint trajectory. Each color represents a distinct DoF.

Fig. 4: \(s(t)\) and \(s_{r}(t)\) during the Refinement Phase of RT1. \(t^{*}\) marks the moment the teacher starts issuing the command \(C(t)\).

Fig. 5: RT1 fine tuning. Upper: \(x_{f}^{r}(t)\), with the extracted tolerance ranges via \(R(t)\). Lower: waypoint timing law before and after refinement.

### _Performance of Optimization-based Smoothing_

The original demonstration's joint trajectory \(q_{o}(t)\) is depicted in Fig. 3(a). The original demonstration's velocity, acceleration, and jerk are obtained by non-causal differentiation [27]. Notably, the demonstration trajectory is slow, while the noise manifests itself in the jerk profile.
Fig. 3(b) shows the output of the Time Optimization module, stage (B), yielding \(q_{t}(t)\) with \(\tau_{i}\) extracted via (3). Using \(\tau_{i}\) along with \(\epsilon_{p}^{d}\) and \(\epsilon_{\theta}^{d}\) in stage (C) leads to \(q_{f}(t)\), a smooth, noise-free, and time-optimal trajectory shown in Fig. 3(c). \(q_{o}(t)\) and \(q_{f}(t)\) are compared in Table I considering the metrics of execution time and MANJ. Evidently, DFL-TORO has significantly improved the demonstration quality.

### _Demonstration Refinement_

Given \(C(t)\), the progression of \(s(t)\), \(s_{r}(t)\) and \(R(t)\) (using (6)) is depicted in Fig. 4 (stage (D)). Initially, without any command, \(s(t)\) and \(s_{r}(t)\) overlap, leaving the timing law unchanged. Upon receiving the command at \(t^{*}\), the progression of \(s_{r}(t)\) changes to slow down the motion and thus extend the execution time. This revised mapping leads to the derivation of \(\tau_{i}^{r}\) using (8). Also, we extract \(\epsilon_{p}^{i}\) and \(\epsilon_{\theta}^{i}\) via (9). Fig. 5 shows \(x_{f}^{r}(t)\in p_{f}^{r}(t)\), the fine-tuned trajectory (stage (E)). The tolerance range is adjusted to be more precise towards the end of the reach, where a slow speed was commanded, as shown in Fig. 4. Moreover, to showcase the effect of velocity adjustment on the trajectory's timing law, we present the distribution of \(y_{w_{i}}\) over the trajectory before and after refinement. Evidently, the waypoints are stretched towards the end of the task. It is also noticeable that while the timing law at the beginning of the trajectory is left unchanged during refinement, the waypoint distributions do not overlap. This is because the tolerance values are increased from \(\epsilon_{p}^{d}\) to \(\epsilon_{p}^{max}\), as in (9), so the optimizer manages to find a faster trajectory through these waypoints. Fig. 6 shows the effect of increasing tolerance values in RT2, which leads to a 15% reduction in cycle time and a 50% improvement in reducing the maximum jerk.

### _Baseline Comparison_

For the same start and goal configuration, we train \(\mathrm{DMP}_{o}\) via \(q_{o}(t)\), and \(\mathrm{DMP}_{f}\) via \(q_{f}(t)\), reproducing \(q_{o,dmp}(t)\) and \(q_{f,dmp}(t)\), respectively. Since \(q_{o}(t)\) is slower than \(q_{f}(t)\), we leverage the inherent scaling factor in \(\mathrm{DMP}_{o}\) to produce \(q_{s,dmp}(t)\) [10], to have the same duration as \(q_{f,dmp}(t)\) for a meaningful comparison. Notation-wise, \(p_{f,dmp}(t)\), \(p_{o,dmp}(t)\) and \(p_{s,dmp}(t)\) represent the associated Cartesian position trajectories. Fig. 7(a) shows the path of \(p_{o,dmp}(t)\) and \(p_{f,dmp}(t)\). As shown, \(p_{o,dmp}(t)\) has produced unnecessary curves at the beginning of the motion, replicating the existing noise. The noise causes \(q_{s,dmp}(t)\) to become an infeasible trajectory. The kinematic violations are highlighted in Fig. 7(b), illustrating the infeasibility of \(q_{s,dmp}(t)\) while \(q_{f,dmp}(t)\) remains feasible for the same execution time. Furthermore, a comparison of the jerk values for \(q_{s,dmp}(t)\) and \(q_{f,dmp}(t)\) is shown in Fig. 7(c), showcasing that \(q_{f,dmp}(t)\) produces significantly lower jerk values. Table I summarizes the time and jerk values for the different trajectories associated with RT1. Evidently, the execution time and MANJ metrics are significantly reduced by DFL-TORO. This is further verified by comparing \(\mathrm{DMP}_{o}\) and \(\mathrm{DMP}_{f}\).
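For reproducibility of Table I-style numbers, note that with \(s=t/T\) the normalized jerk is \(d^{3}q/ds^{3}=T^{3}\,\dddot{q}(t)\). A small utility of our own implementing MANJ on sampled joint trajectories:

```python
import numpy as np

def manj(q, t):
    """Maximum Absolute Normalized Jerk; q: (N, n_dof) samples at times t."""
    T = t[-1] - t[0]
    jerk = q.astype(float)
    for _ in range(3):                    # three numerical time derivatives
        jerk = np.gradient(jerk, t, axis=0)
    return float(np.max(np.abs(T**3*jerk)))
```

The \(T^{3}\) factor is what makes trajectories of different durations comparable: simply stretching a trajectory in time leaves its MANJ unchanged.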
Even though DMP naturally smooths the original demonstration, \(q_{f,dmp}(t)\) still outperforms \(q_{o,dmp}(t)\) by a significant margin. Furthermore, although \(q_{s,dmp}(t)\) is scaled to the optimal time, it yields an infeasible trajectory with high jerk values, as the comparison of \(q_{s,dmp}(t)\) and \(q_{f,dmp}(t)\) shows.

Fig. 7: Performance comparison of \(\mathrm{DMP}_{o}\) and \(\mathrm{DMP}_{f}\) with respect to kinematic feasibility and jerk profiles.

## V Conclusion

In this paper, we presented DFL-TORO, a novel demonstration framework within the LfD process, addressing the poor quality and efficiency of human demonstrations. The framework captures task requirements intuitively with no need for multiple demonstrations, while also allowing local adjustment of the velocity profile. The obtained trajectory is guaranteed to be time-optimal, noise-free, and jerk-regulated while satisfying the robot's kinematic constraints. The effectiveness of DFL-TORO was experimentally evaluated using an FR3 robot in a reaching task scenario. Future research can explore the richness of information in tolerance values and integrate the task's semantic nuances. Additionally, by tailoring the learning algorithm, we can improve DMP performance, ensuring optimal output trajectories and more effective motion generalization. Reinforcement Learning is a suitable option for improving DMP generalization in manufacturing contexts.
2309.07191
Information Flow as an Emergent Property of Divergence in Phase-Space
Recent developments have created the ability to quantify information flow among components that interact in a dynamical system, and have led to significant advances in characterizing the dependence between the variables involved. In particular, they have been used to characterize causal dependency and feedback using observations across diverse fields such as environment, climate, finance, and human health. What causes information flow among coupled components of a dynamical system? This fundamental question has remained unanswered so far. Here it is established that the information flow is an emergent response resulting from the divergence of trajectories in phase-space of a dynamical system. This finding shows that the dynamics encapsulated in the traditional expression of Liouville equation, which neglects this divergence, merely propagates the dependence encoded in the initial conditions. However, when this is not the case, the informational dependence between the components change creating an information flow. This finding has significant implications in a variety of fields, both for the interpretation of observational data for causal inference in natural dynamics, and design of systems with targeted informational dependency.
Praveen Kumar
2023-09-13T09:49:00Z
http://arxiv.org/abs/2309.07191v2
# Information Flow as an Emergent Property of Divergence in Phase-Space

###### Abstract

Recent developments have created the ability to quantify information flow among components that interact in a dynamical system, and have led to significant advances in characterizing the dependence between the variables involved. In particular, they have been used to characterize causal dependency and feedback using observations across diverse fields such as environment, climate, finance, and human health. What causes information flow among coupled components of a dynamical system? This fundamental question has remained unanswered so far. Here it is established that the information flow is an emergent response resulting from the divergence of trajectories in phase-space of a dynamical system. This finding shows that the dynamics encapsulated in the traditional expression of the Liouville equation, which neglects this divergence, merely propagates the dependence encoded in the initial conditions. However, when this is not the case, the informational dependence between the components changes, creating an information flow. This finding has significant implications in a variety of fields, both for the interpretation of observational data for causal inference in natural dynamics, and for the design of systems with targeted informational dependency.

## I Introduction

Dynamics in natural systems, such as those associated with the environment, climate, and the brain, exhibit a range of emergent responses arising as a result of interdependencies between interacting components. The dynamical representations of these systems often capture the coupling between components through force balance and/or conservation laws such as those for mass, momentum, and energy. However, the interdependencies also reflect information propagation between system components, as fluctuations in one component drive those in others. We characterize this exchange as a flow of information, since the pattern of variability, or uncertainty, in one variable shapes the variability in the coupled variable [1]. Thus, information flow, quantified as uncertainty-reducing (predictive) knowledge from one variable to another [2], serves as the currency of exchange between these interacting variables. Quantifying information flow provides a powerful approach for understanding and characterizing the dependence among components in a variety of physical systems [1; 3; 4]. Empirical characterizations of information flow using observed data, through measures based on transfer entropy [5] in a two-way dependence [6; 7; 8] or pairwise dependence in a network of interacting variables [9], have become a standard approach for Granger-causality-based inference [10] and offer significant possibilities for understanding the behavior of natural systems. More recently, partial information decomposition has offered a more refined way to characterize dependence in a network of interacting variables through a systemic view [11; 12] or through their temporal evolution represented using directed acyclic graphs [13; 14; 15; 7; 16]. However, a central question still remains unanswered: what causes information flow among coupled variables in a dynamical system? That is, what attributes of a dynamical system give rise to information flow among the set of variables involved? Answering these questions will provide a foundational perspective for understanding the behavior of natural systems. We address them by identifying the basis of information flow in dynamical systems.
We derive general results for continuous-time multivariate autonomous systems, and specific results associated with multivariate interactions in two- and three-variable systems. Our results below establish the important role played by the divergence of trajectories in phase-space [17] in shaping information flow among component variables. These formulations expand upon the Liouville representation of densities associated with divergence-less flows. They also augment the generalized Liouville representation [18; 19] that was aimed at overcoming these limitations, and the associated entropy dynamics [20; 21; 22]. In particular, they draw out the dependence structure through an explicit formulation of the dynamics of multivariate dependence, with that of bivariate mutual information as a special case. In the commonly used Liouville representation associated with dynamical systems [23; 24], which neglects the divergence in phase-space, we show that the entropic structure encapsulated in the initial conditions is merely advected and not altered through the dynamics. However, when the divergence of the flow field in the phase-space is non-zero, the entropic dependence changes and drives information flow among the system variables. Since we use variables in continuous time, entropy is interpreted as differential entropy, or may be considered in the context of quantization of the variable involved. However, this limitation is of no practical consequence when mutual information or other multivariate dependence is considered (see chapter 9 in [25]). As such, the results derived here are broadly applicable. ## II Probability density in phase-space To approach our key question, we first develop the equation governing the dynamics of the multivariate probability distribution of a system. This is then used to derive the dynamical equations for the joint and marginal entropies along with the mutual information between the variables. These equations then provide the insights regarding information flow among the system components. We consider a system consisting of \(N\) variables \(\underline{\mathrm{Z}}(t)\equiv[Z_{1}(t),Z_{2}(t),\ldots,Z_{N}(t)]\), with \(Z_{i}(t)\) defined on the support \(\Omega_{i}\). Consider its dynamics given as: \[\dot{\underline{\mathrm{Z}}}(t)\equiv\frac{d\underline{\mathrm{Z}}(t)}{dt}=\underline{\mathrm{F}}(\underline{\mathrm{Z}}(t)) \tag{1}\] where \(\underline{\mathrm{Z}}(t)\in\Omega\) with \(\Omega=\Omega_{1}\times\Omega_{2}\times\cdots\times\Omega_{N}\), and \(\underline{\mathrm{F}}(\underline{\mathrm{Z}})\equiv[F_{1}(\underline{\mathrm{Z}}),F_{2}(\underline{\mathrm{Z}}),\ldots,F_{N}(\underline{\mathrm{Z}})]\), where the function \(F_{i}(\underline{\mathrm{Z}})\) captures the dynamics of the individual components as a function of all variables, that is, \(dZ_{i}(t)/dt=F_{i}(\underline{\mathrm{Z}}(t))\). Let us consider the representation in the phase-space, that is, the space of coordinates introduced by the components \(Z_{i}\). We explore the probability of finding a trajectory in any differential volume \(d\Omega\) at time \(t\). A practical approach to obtain this probability is by considering a large number of trajectories, starting with random initial conditions. The fraction of these trajectories that pass through \(d\Omega\) at time \(t\) provides an estimate of the probability density function (_pdf_) \(p(\underline{\mathrm{Z}},t)\) with \(\int_{\Omega}p(\underline{\mathrm{Z}},t)\,d\underline{\mathrm{Z}}=1\).
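This ensemble construction is easy to realize numerically. The following minimal sketch (our own illustration, not code from the paper) integrates a large number of trajectories of the damped harmonic oscillator of equation (6) below, with illustrative parameter values \(b/m=1/2\) and \(k/m=1\), and estimates \(p(\underline{\mathrm{Z}},t)\) as the fraction of trajectories per phase-space cell:

```python
import numpy as np

# Damped harmonic oscillator (equation (6)):
#   dZ1/dt = Z2,   dZ2/dt = -(b/m) Z2 - (k/m) Z1.
# Parameter values here are illustrative, not prescribed by the paper.
b_m, k_m = 0.5, 1.0
dt, T, N = 1e-3, np.pi, 100_000

rng = np.random.default_rng(0)
Z = rng.normal(loc=[1.0, 0.0], scale=0.5, size=(N, 2))  # random initial conditions

for _ in range(int(T / dt)):        # forward-Euler integration of all trajectories
    dZ1 = Z[:, 1]
    dZ2 = -b_m * Z[:, 1] - k_m * Z[:, 0]
    Z += dt * np.column_stack([dZ1, dZ2])

# Fraction of trajectories per cell of a phase-space grid ~ estimate of p(Z, T)
pdf, xe, ye = np.histogram2d(Z[:, 0], Z[:, 1], bins=60, density=True)
print(pdf.sum() * np.diff(xe)[0] * np.diff(ye)[0])  # ~ 1, as required
```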
Equivalently, we may consider \(p(\underline{\mathrm{Z}},t)\) as a density field in phase-space through which the trajectories traverse. We assume trajectories are distinct and that they are neither created nor destroyed. We also assume that \(p(\underline{\mathrm{Z}},t)\) has a compact support over \(\Omega\) or decays exponentially fast. By considering the total derivative of \(p\) along a trajectory, \(\frac{dp}{dt}=\frac{\partial p}{\partial t}+\sum_{i}\frac{\partial p}{\partial Z_{i}}\frac{dZ_{i}}{dt}\), we get (see also Appendix A): \[\frac{dp}{dt}=\frac{\partial p}{\partial t}+\nabla p\cdot\dot{\underline{\mathrm{Z}}}(t)\equiv\frac{\partial p}{\partial t}+\nabla p\cdot\underline{\mathrm{F}}(\underline{\mathrm{Z}}). \tag{2}\] Since all trajectories remain confined within the support \(\Omega\) by definition, the total probability over \(\Omega\) remains unity at any time, resulting in \(\frac{d}{dt}\int_{\Omega}p(\underline{\mathrm{Z}},t)\,d\underline{\mathrm{Z}}=0\), and therefore we set \(\frac{dp}{dt}=0\) to arrive at \[\frac{\partial p}{\partial t}+\nabla p\cdot\underline{\mathrm{F}}=0, \tag{3}\] where the arguments for \(p\) and \(\underline{\mathrm{F}}\) have been dropped for brevity but will be expanded when there is a possibility of ambiguity, a practice we will follow throughout. Note that the second term in equation (3) captures the gradient of the probability density projected along the flow in phase-space. Using the identity \(\nabla\cdot(p\underline{\mathrm{F}})=\nabla p\cdot\underline{\mathrm{F}}+p\nabla\cdot\underline{\mathrm{F}}\), we equivalently obtain \[\frac{\partial p}{\partial t}+\nabla\cdot(p\underline{\mathrm{F}})-p\nabla\cdot\underline{\mathrm{F}}=0 \tag{4}\] which further illustrates the role of the divergence of the flow vector, \(\nabla\cdot\underline{\mathrm{F}}\). That is, the _pdf_ changes as a result of both the way in which the trajectories occupy the phase-space at any time, and the way in which the flow field is structured in phase-space. We note that when \(\nabla\cdot\underline{\mathrm{F}}=0\), we obtain the standard form of the Liouville equation: \[\frac{\partial p}{\partial t}+\nabla\cdot(p\underline{\mathrm{F}})=0, \tag{5}\] which expresses that the volume in phase-space is preserved in the absence of divergence, and we have a conservative system. To examine the important role of phase-space divergence in the dynamics of \(p(\underline{\mathrm{Z}},t)\), let us consider the prototypical example of a damped harmonic oscillator, given in the standard form as \(m\ddot{x}+b\dot{x}+kx=0\). Although this example is elementary, it serves to illustrate the role of the divergence in phase-space. Using \(Z_{1}\) and \(Z_{2}\) for position and velocity, we get the phase-space dynamics given as \[\dot{Z_{1}} = Z_{2}(t) \tag{6}\] \[\dot{Z_{2}} = -(b/m)\,Z_{2}(t)-(k/m)\,Z_{1}(t),\] resulting in \(\nabla\cdot\underline{\mathrm{F}}=-b/m<0\) for \(b,m>0\). For \(b=0\) the equation corresponds to the simple harmonic oscillator with \(\nabla\cdot\underline{\mathrm{F}}=0\), a prototypical example of a conservative system, but otherwise it corresponds to a dissipative dynamical system. In this particular situation the trajectories converge closer to each other with time (Fig. 1a,b,c). To illustrate the role of \(\nabla\cdot\underline{\mathrm{F}}\), we further show the evolution of the _pdf_ for two situations corresponding to \(b=0\) (Fig. 1e-g) and \(b/m=1/2\) (Fig. 1h-k), starting with the same initial _pdf_ (Fig. 1d). For the dissipative case, as the trajectories close in together (as illustrated in Fig.
1b), the structure of the _pdf_ is modified. This is in contrast to the conservative case, where the _pdf_ merely gets advected in phase-space. As a result, in the case of a conservative system, information is conserved over time [26]; that is, the dynamics doesn't create or destroy any information that is not already contained in the initial condition. However, for the dissipative system, the information content changes with time because the entropic behavior of the _pdf_ changes. The classic Lorenz equation for deterministic chaos, given as \[\dot{Z}_{1} = \sigma(Z_{2}(t)-Z_{1}(t)) \tag{7}\] \[\dot{Z_{2}} = Z_{1}(t)(\rho-Z_{3}(t))-Z_{2}(t)\] \[\dot{Z_{3}} = Z_{1}(t)Z_{2}(t)-\beta Z_{3}(t)\] results in \(\nabla\cdot\underline{\mathrm{F}}=-(\sigma+\beta+1)<0\) for the usual parameters \(\sigma,\beta,\rho>0\), and serves as another important example of a dissipative system. The phase-space changes its structure, and volumes in phase-space are not conserved with the evolution of the system, thereby making the use of the standard Liouville equation (5) inadmissible for the exploration of this or other such systems. To the best of our knowledge, the general form in equation (4) (or equation (3)) has not been previously considered in the characterization of information flow in dynamical systems. Indeed, the work presented in [23; 24] is based on the Liouville equation (5), which is formulated under the underlying assumption that \(\nabla\cdot\underline{\mathrm{F}}=0\), thereby excluding the impact of the divergence of the phase-space flow on the probability density \(p(\underline{\mathrm{Z}},t)\) (as illustrated in Fig. 1). This brings us to the key tenet of this work. From equation (4) we note that the change in \(p(\underline{\mathrm{Z}},t)\) is governed by the divergence of trajectories, which results from the divergence of the flow field in the phase-space. One way to interpret the initial _pdf_, \(p(\underline{\mathrm{Z}},0)\), is to think of it as representing the probability of a selection (or ensemble) of trajectories whose dynamics we wish to explore. As the system evolves, the phase-space volume occupied by the trajectories is preserved when \(\nabla\cdot\underline{\mathrm{F}}=0\), and as a result the _pdf_ is not entropically altered, merely advected in the phase-space. However, when \(\nabla\cdot\underline{\mathrm{F}}\neq 0\), the trajectories either diverge or are squeezed together. This is accomplished through the modification of the relationship that the components \(Z_{i}\) have with each other within a trajectory, as dictated by the structure embodied in the relationship \(\underline{\mathrm{F}}(\underline{\mathrm{Z}})\). So while the change in density is associated with the squeezing or expansion of nearby trajectories, this is a result of the interaction between the different components comprising the dimensions of the phase-space. Therefore the changing _pdf_ of the ensemble is a reflection of the changing relationship between the variables in the individual trajectories. In other words, the dynamical relation \(\underline{\mathrm{F}}\) induces an informational dependence between the system components \(Z_{i}\). This is akin to vehicles merging out of a closed lane on a multilane highway: the vehicles in the open lanes slow down to accommodate the changing pattern of traffic flow, drawing upon information about the changing traffic pattern.
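Since \(\nabla\cdot\underline{\mathrm{F}}\) is the central quantity in this argument, it is worth verifying mechanically. A minimal symbolic check (ours, using sympy; the variable names are chosen for illustration) confirms the constant divergence of the Lorenz flow in equation (7); repeating it for the Rössler flow of equation (14) below instead returns the state-dependent value \(a-c+Z_{1}\).

```python
import sympy as sp

z1, z2, z3, sigma, rho, beta = sp.symbols('z1 z2 z3 sigma rho beta')

# Flow field F of the Lorenz system, equation (7)
F = sp.Matrix([sigma * (z2 - z1),
               z1 * (rho - z3) - z2,
               z1 * z2 - beta * z3])

div_F = sum(F[i].diff(v) for i, v in enumerate((z1, z2, z3)))
print(sp.simplify(div_F))   # -> -beta - sigma - 1, independent of the state
```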
We can therefore use the dual view for the _pdf_, one associated with the ensemble and the other with the changing relation between components of the dynamics. So we interpret the change in the _pdf_ as a reflection of the changing relation between the components \(Z_{i}\) in the dynamics. That is, the dynamics of the _pdf_, and the informational attributes it encapsulates, is not merely a statistical characterization of the trajectories but a physical attribute of the system behavior itself. We can therefore use this _pdf_ to characterize the dynamics of entropy and multivariate mutual information among the components \(Z_{i}\). Figure 1: Illustration of the role of \(\nabla\cdot\underline{\mathrm{F}}\) in the dynamics of the _pdf_ for a damped harmonic oscillator (equation (6)) in comparison to an undamped case. (a) Phase-space plot of an undamped, and (b) damped harmonic oscillator (with two nearby trajectories). The dots indicate position at times in multiples of \(\pi/2\) after the initial time. Subplot (c) shows the time series of position (\(Z_{1}\)) [blue, grey] and velocity (\(Z_{2}\)) [red, orange] corresponding to the two trajectories in (b). Subplot (d) shows the initial condition for \(p\) (product of two independent univariate Gaussian distributions with equal variance (0.25)). Subplots (e), (f) and (g) show the evolution of \(p\), for the undamped case with \(b=0\), therefore corresponding to equation (5), at times \(\pi/2\), \(\pi\), and \(3\pi/2\) from the initial time. At \(t=2\pi\) the system returns to that in (d). Starting with the initial condition in (d), subplots (h), (i), (j), and (k) show the evolution of \(p\) for \(b/m=1/2\), thereby corresponding to equation (4) with \(\nabla\cdot\underline{\mathrm{F}}\neq 0\), at times \(\pi/2\), \(\pi\), \(3\pi/2\) and \(2\pi\) respectively. (color online) ## IV Dynamics of entropy We can now use equation (4) to determine the evolution of entropy and explore its dependence on \(\nabla\cdot\underline{\mathrm{F}}\). The dynamics of the system entropy, \(H_{\underline{\mathrm{Z}}}(t)\), associated with the joint distribution \(p(\underline{\mathrm{Z}},t)\) can be derived as (see Appendix B): \[\frac{dH_{\underline{\mathrm{Z}}}}{dt}-\int_{\Omega}(p\log\frac{1}{p})\nabla\cdot\underline{\mathrm{F}}\,d\underline{\mathrm{Z}}=0. \tag{8}\] Alternatively this may be written as \[\frac{dH_{\underline{\mathrm{Z}}}}{dt}-\mathcal{E}\left[\psi(\underline{\mathrm{Z}},t)\nabla\cdot\underline{\mathrm{F}}\right]=0 \tag{9}\] where \(\mathcal{E}\) is the expectation operator and \(\psi(\underline{\mathrm{Z}},t)=\log(1/p(\underline{\mathrm{Z}},t))\) is the pointwise information in the phase-space, such that \(H_{\underline{\mathrm{Z}}}(t)=\mathcal{E}[\psi(\underline{\mathrm{Z}},t)]\). This equation immediately draws out the crucial role of \(\nabla\cdot\underline{\mathrm{F}}\) in the evolution of the system entropy. For the situation when \(\nabla\cdot\underline{\mathrm{F}}\) is independent of \(\underline{\mathrm{Z}}\), i.e. it is invariant in the phase-space, for example as in the case of the damped harmonic oscillator or the Lorenz system, we have \[\mathcal{E}\left[\psi(\underline{\mathrm{Z}},t)\nabla\cdot\underline{\mathrm{F}}\right]=(\nabla\cdot\underline{\mathrm{F}})H_{\underline{\mathrm{Z}}} \tag{10}\] and equation (9) gives us the dynamics of the system entropy as \[\frac{dH_{\underline{\mathrm{Z}}}}{dt}-(\nabla\cdot\underline{\mathrm{F}})\,H_{\underline{\mathrm{Z}}}=0.
\tag{11}\] This equation admits a direct solution \[H_{\underline{\mathrm{Z}}}(t)=H_{\underline{\mathrm{Z}}}(t_{0})\exp\left\{(\nabla\cdot\underline{\mathrm{F}})\Delta t\right\} \tag{12}\] where \(\Delta t=t-t_{0}\), with \(H_{\underline{\mathrm{Z}}}(t_{0})\) being the entropy at the initial time \(t_{0}\). We see that the system is entropically altered by phase-space divergence during its evolution. We note that for a conservative system governed by the Liouville equation (5) we get \(H_{\underline{\mathrm{Z}}}(t)=H_{\underline{\mathrm{Z}}}(t_{0})\), reflecting that entropy is temporally invariant and no information is generated by the dynamics, consistent with known understanding. So while for a simple harmonic oscillator the entropy is constant, for a damped harmonic oscillator it decays as \(H(t)=H(t_{0})\exp\left\{-(b/m)\Delta t\right\}\), and for the Lorenz system it varies as \[H_{\underline{\mathrm{Z}}}(t)=H_{\underline{\mathrm{Z}}}(t_{0})\exp\left\{-(\sigma+\beta+1)\Delta t\right\}. \tag{13}\] There are situations when \(\nabla\cdot\underline{\mathrm{F}}\) is not invariant in the phase-space. In such cases \(\psi(\underline{\mathrm{Z}},t)\) plays an important role. An example is provided by the Rössler system, given as \[\dot{Z_{1}} = -Z_{2}(t)-Z_{3}(t) \tag{14}\] \[\dot{Z_{2}} = Z_{1}(t)+aZ_{2}(t)\] \[\dot{Z_{3}} = b+Z_{3}(t)(Z_{1}(t)-c)\] where \(a,b\) and \(c\) are parameters. It is easily seen that \(\nabla\cdot\underline{\mathrm{F}}=a-c+Z_{1}\) and \(\mathcal{E}\left[\psi(\underline{\mathrm{Z}},t)\nabla\cdot\underline{\mathrm{F}}\right]=(a-c)H_{\underline{\mathrm{Z}}}+\int_{\Omega}(p\log(1/p))Z_{1}\,d\underline{\mathrm{Z}}\), which appears more complex than that for the Lorenz system. Equation (9) (or equation (8)) is the key result that characterizes the evolution of the system entropy and shows that phase-space divergence is the primary determinant of this dynamics. We can now use equation (9) for the joint entropy to characterize multivariate interaction between the system components. ## V Dynamics of multivariate interaction To understand how mutual information and higher dimensional multivariate interactions evolve, we invoke the chain rule for entropy, i.e., \(H_{Z_{1},\cdots,Z_{N}}(t)=\sum_{i=1}^{N}H_{Z_{i}|Z_{i-1},\cdots,Z_{1}}(t)\), and by substituting in equation (9) we get \[\sum_{i=1}^{N}\frac{\partial}{\partial t}H_{Z_{i}|Z_{i-1},\cdots,Z_{1}}-\mathcal{E}\left[\psi(\underline{\mathrm{Z}},t)\nabla\cdot\underline{\mathrm{F}}\right]=0. \tag{15}\] For a 2-variable case, using equation (15) we can show that the mutual information evolves as a function of the marginal entropies as (see Appendix C): \[\frac{\partial I_{Z_{1};Z_{2}}}{\partial t}=\frac{\partial}{\partial t}\left(H_{Z_{1}}+H_{Z_{2}}\right)-\mathcal{E}\left[\psi(\underline{\mathrm{Z}},t)\nabla\cdot\underline{\mathrm{F}}\right]. \tag{16}\] This equation again encapsulates the contribution of phase-space divergence in the evolution of dependence in a bivariate system. For the special case when \(\nabla\cdot\underline{\mathrm{F}}\) is independent of \(\underline{\mathrm{Z}}\), i.e. equation (11) holds, from equation (16) we get \[\sum_{i=1}^{2}\bigg{(}\frac{\partial H_{Z_{i}}}{\partial t}-(\nabla\cdot\underline{\mathrm{F}})H_{Z_{i}}\bigg{)}-\bigg{(}\frac{\partial I_{Z_{1};Z_{2}}}{\partial t}-(\nabla\cdot\underline{\mathrm{F}})I_{Z_{1};Z_{2}}\bigg{)}=0. \tag{17}\] This equation links the dynamics of the marginal entropies of \(Z_{1}\) and \(Z_{2}\) to the dynamics of their mutual information.
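As a quick consistency check (ours, not part of the paper's development), the linear entropy equation (11) above can be solved symbolically, recovering the exponential form of equation (12); here \(d\) stands in for the constant \(\nabla\cdot\underline{\mathrm{F}}\).

```python
import sympy as sp

t, d = sp.symbols('t d')    # d plays the role of the constant divergence
H = sp.Function('H')

# equation (11): dH/dt - d*H = 0
sol = sp.dsolve(sp.Eq(H(t).diff(t) - d * H(t), 0), H(t))
print(sol)   # H(t) = C1*exp(d*t), i.e. equation (12) with C1 = H(t0)*exp(-d*t0)
```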
For conservative systems, using \(\nabla\cdot\underline{\mathrm{F}}=0\), we easily get \[\frac{\partial}{\partial t}\left(H_{Z_{1}}+H_{Z_{2}}-I_{Z_{1};Z_{2}}\right)=0 \tag{18}\] consistent with \(H_{Z_{1},Z_{2}}(t)\) remaining invariant with \(t\), although the balance between the marginal entropies and mutual information can change in time. We now consider dependence between three variables, for which we have \(H_{Z_{1},Z_{2},Z_{3}}(t)=H_{Z_{1}}(t)+H_{Z_{2}|Z_{1}}(t)+H_{Z_{3}|Z_{2},Z_{1}}(t)\) from the chain rule. This gives us, from equation (15), \[\frac{\partial}{\partial t}\left(H_{Z_{1}}+H_{Z_{2}|Z_{1}}+H_{Z_{3}|Z_{2},Z_{1}}\right)-\mathcal{E}\left[\psi(\underline{\mathrm{Z}},t)\nabla\cdot\underline{\mathrm{F}}\right]=0. \tag{19}\] By noting the identities \(H_{Z_{i}|Z_{j}}=H_{Z_{i}}-I_{Z_{i};Z_{j}}\) and \(H_{Z_{3}|Z_{2},Z_{1}}=H_{Z_{3}|Z_{2}}-I_{Z_{3};Z_{1}|Z_{2}}\), the above equation can be written as \[\frac{\partial}{\partial t}\left(H_{Z_{1}}+H_{Z_{2}}+H_{Z_{3}}\right)-\frac{\partial}{\partial t}\left(I_{Z_{1};Z_{2}}+I_{Z_{2};Z_{3}}+I_{Z_{3};Z_{1}|Z_{2}}\right)-\mathcal{E}\left[\psi(\underline{\mathrm{Z}},t)\nabla\cdot\underline{\mathrm{F}}\right]=0. \tag{20}\] Noting further that the interaction information is given as \(I_{Z_{1};Z_{2};Z_{3}}=I_{Z_{1};Z_{2}}-I_{Z_{1};Z_{2}|Z_{3}}=I_{Z_{2};Z_{3}}-I_{Z_{2};Z_{3}|Z_{1}}\), we get two equivalent forms of the multivariate information, \(M_{VI}(\underline{\mathrm{Z}})\), that capture the dependence among the variables: \[M_{VI}(\underline{\mathrm{Z}}) = I_{Z_{1};Z_{2}}+I_{Z_{2};Z_{3}}+I_{Z_{3};Z_{1}}-I_{Z_{1};Z_{2};Z_{3}}\] \[= I_{Z_{1};Z_{2}|Z_{3}}+I_{Z_{2};Z_{3}|Z_{1}}+I_{Z_{3};Z_{1}|Z_{2}}+2I_{Z_{1};Z_{2};Z_{3}} \tag{21}\] and \[\frac{\partial M_{VI}}{\partial t}=\left(\sum_{i=1}^{3}\frac{\partial}{\partial t}H_{Z_{i}}\right)-\mathcal{E}\left[\psi(\underline{\mathrm{Z}},t)\nabla\cdot\underline{\mathrm{F}}\right] \tag{22}\] where, akin to equation (16), the LHS characterizes the dynamics of the interaction between the variables. We again note that \(\nabla\cdot\underline{\mathrm{F}}\) plays an important role in the evolution of the multivariate interaction information. For the special case when \(\nabla\cdot\underline{\mathrm{F}}\) is independent of \(\underline{\mathrm{Z}}\), equation (22) reduces to \[\frac{\partial M_{VI}}{\partial t}-\left(\nabla\cdot\underline{\mathrm{F}}\right)\left(M_{VI}\right)=\sum_{i=1}^{3}\left(\frac{\partial H_{Z_{i}}}{\partial t}-\left(\nabla\cdot\underline{\mathrm{F}}\right)H_{Z_{i}}\right). \tag{23}\] Since the multivariate interaction information, \(M_{VI}\), is a function of the marginal entropies, it is possible to obtain explicit equations for the evolution of the marginal entropy \(H_{Z_{i}}(t)\) as (see Appendix D): \[\frac{\partial H_{Z_{i}}}{\partial t}=\int_{\Omega}\log p_{i}\frac{\partial(pF_{i})}{\partial Z_{i}}\,d\underline{\mathrm{Z}}-\mathcal{E}\left[(1-\psi(Z_{i},t))\nabla\cdot\underline{\mathrm{F}}\right], \tag{24}\] where \(\psi(Z_{i},t)=\log\frac{1}{p_{i}}\) with the property that \(\mathcal{E}\left[\psi(Z_{i},t)\right]=\int_{\Omega}p\log\frac{1}{p_{i}}\,d\underline{\mathrm{Z}}=\int_{\Omega_{i}}p_{i}\log\frac{1}{p_{i}}\,dZ_{i}=H_{Z_{i}}\). This formulation for the marginal entropy, together with that for the system (equation (8) or (9)), allows us to completely characterize the dynamics of the multivariate dependence.
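The two expressions in equation (21) are purely information-theoretic identities, so they can be sanity-checked on an arbitrary discrete joint distribution (the paper works with differential entropies, but the identity takes the same form; the script below is our own check, not part of the paper's development).

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.random((3, 4, 5)); p /= p.sum()   # random joint pmf of (Z1, Z2, Z3)

def H(pmf):                               # Shannon entropy of a pmf
    q = pmf[pmf > 0]
    return -(q * np.log(q)).sum()

H123 = H(p)
H12, H13, H23 = H(p.sum(2)), H(p.sum(1)), H(p.sum(0))
H1, H2, H3 = H(p.sum((1, 2))), H(p.sum((0, 2))), H(p.sum((0, 1)))

I12, I23, I31 = H1 + H2 - H12, H2 + H3 - H23, H3 + H1 - H13
I12_3 = H13 + H23 - H3 - H123             # I(Z1;Z2|Z3); the others analogously
I23_1 = H12 + H13 - H1 - H123
I31_2 = H12 + H23 - H2 - H123
II = I12 - I12_3                          # interaction information I(Z1;Z2;Z3)

lhs = I12 + I23 + I31 - II
rhs = I12_3 + I23_1 + I31_2 + 2 * II
assert np.isclose(lhs, rhs)               # the two forms in (21) agree
```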
For the two-variable case, inserting equation (24) into equation (16), we get \[\frac{\partial I_{Z_{1};Z_{2}}}{\partial t}=\sum_{i=1}^{2}\left[\int_{\Omega}\log p_{i}\frac{\partial(pF_{i})}{\partial Z_{i}}\,d\underline{\mathrm{Z}}\right]-\left(2+I_{Z_{1};Z_{2}}\right)\nabla\cdot\underline{\mathrm{F}}. \tag{25}\] Similarly, for a three-variable case, using equation (24) in equation (22), we get \[\frac{\partial M_{VI}}{\partial t}=\sum_{i=1}^{3}\left[\int_{\Omega}\log p_{i}\frac{\partial(pF_{i})}{\partial Z_{i}}\,d\underline{\mathrm{Z}}\right]-\left(3+M_{VI}\right)\nabla\cdot\underline{\mathrm{F}}. \tag{26}\] The above two equations serve to illustrate how multivariate interactions between variables are shaped both by the dynamical relations captured in \(\underline{\mathrm{F}}(\underline{\mathrm{Z}})\) and by its divergence in phase-space. In the form of equations (25) and (26), the dynamics of the interaction can be directly computed without the need to compute the marginal and joint entropies. ## VI Conclusion Our key results are encapsulated in the dynamics of the _pdf_ (equations (3) or (4)), joint entropy (equations (8) or (9)), marginal entropy (equation (24)), and multivariate interaction information (equation (15)). Based on these, specific results for the dynamics of the bivariate mutual information (equation (25)) and trivariate interaction information (equation (26)) are established. Based on the insights gained from these results, we conclude that the divergence of the flow field in the phase-space alters the entropic structure of the _pdf_; that is, it creates temporal information change during the evolution of a dynamical system. As a result, it induces information flow among the component variables involved. If this divergence is zero, as is the case with the traditional implementation associated with the Liouville equation, we simply propagate the dependence embodied in the initial conditions. These results provide a foundational basis for thinking about the evolving dynamics of information flow and have potential applications in many fields. In particular, in the study of natural phenomena, such as those associated with environmental and climatic systems, these results provide the potential to explore the basis of the evolution of dependence among interacting variables. While the results include expressions only for temporally synchronous dependence through information flow, we can easily envision that time-lagged dependence between the system components, such as that sought through transfer entropy [5], also changes as a result. These will be explored in future studies. ## Appendix A Here we provide an alternate derivation of equation (2) by considering the _pdf_ \(p(\underline{\mathrm{Z}},t)\) of a trajectory in phase-space (see Figure 2). Consider the Taylor series expansion about \((\underline{\mathrm{Z}},t)\): \[p(\underline{\mathrm{Z}}+\underline{\mathrm{F}}\Delta t,t+\Delta t) = p(\underline{\mathrm{Z}},t)+\frac{\partial p(\underline{\mathrm{Z}},t)}{\partial t}\Delta t+\nabla p(\underline{\mathrm{Z}},t)\cdot\Delta\underline{\mathrm{Z}}+\text{higher order terms}. \tag{27}\] Therefore, by noting \(\lim_{\Delta t\to 0}\Delta\underline{\mathrm{Z}}/\Delta t=\underline{\mathrm{F}}(\underline{\mathrm{Z}})\) and neglecting higher order terms, the Lagrangian derivative is given as \[\frac{dp}{dt} = \lim_{\Delta t\to 0}\frac{p(\underline{\mathrm{Z}}+\underline{\mathrm{F}}\Delta t,t+\Delta t)-p(\underline{\mathrm{Z}},t)}{\Delta t}=\frac{\partial p(\underline{\mathrm{Z}},t)}{\partial t}+\nabla p(\underline{\mathrm{Z}},t)\cdot\underline{\mathrm{F}}(\underline{\mathrm{Z}}) \tag{28}\] where the terms on the RHS comprise the Eulerian derivative. ## Appendix B Here we show the derivation of equation (8).
We multiply equation (3) by \(1+\log p\) to get \[(1+\log p)\frac{\partial p}{\partial t}+(1+\log p)\nabla p\cdot\underline{\mathrm{F}}=0, \tag{29}\] Noting that \((1+\log p)\frac{\partial p}{\partial t}=\frac{\partial(p\log p)}{\partial t}\), and further expanding \(\nabla p\cdot\underline{\mathrm{F}}\) and \(\log p\,\nabla p\cdot\underline{\mathrm{F}}\) and adding the individual terms, we get \((1+\log p)\nabla p\cdot\underline{\mathrm{F}}=\sum_{i}F_{i}(1+\log p)\frac{\partial p}{\partial Z_{i}}=\sum_{i}F_{i}\frac{\partial(p\log p)}{\partial Z_{i}}\), giving us \[\frac{\partial(p\log p)}{\partial t}+\nabla(p\log p)\cdot\underline{\mathrm{F}}=0 \tag{30}\] which can be written as \[\frac{\partial(p\log p)}{\partial t}+\nabla\cdot(p\log p\,\underline{\mathrm{F}})-(p\log p)\nabla\cdot\underline{\mathrm{F}}=0. \tag{31}\] Multiplying by \(-1\) and integrating over \(\Omega\) we get \[\int_{\Omega}\frac{\partial}{\partial t}(p\log\frac{1}{p})\,d\underline{\mathrm{Z}}+\int_{\Omega}\nabla\cdot(p\log\frac{1}{p}\,\underline{\mathrm{F}})\,d\underline{\mathrm{Z}}-\int_{\Omega}(p\log\frac{1}{p})\nabla\cdot\underline{\mathrm{F}}\,d\underline{\mathrm{Z}}=0. \tag{32}\] The first term is \(\frac{\partial H_{\underline{\mathrm{Z}}}}{\partial t}=\frac{\partial}{\partial t}\int_{\Omega}(p\log\frac{1}{p})\,d\underline{\mathrm{Z}}\), where \(H_{\underline{\mathrm{Z}}}(t)\) is the Shannon entropy associated with the joint distribution over the phase-space \(\underline{\mathrm{Z}}\) at time \(t\). To evaluate the second term, we invoke the divergence theorem to get \(\int_{\Omega}\nabla\cdot(p\log\frac{1}{p}\,\underline{\mathrm{F}})\,d\underline{\mathrm{Z}}=\int_{\delta\Omega}(p\log\frac{1}{p})\,\underline{\mathrm{F}}\cdot\vec{n}\,ds\), where \(\delta\Omega\) represents the surface of the domain \(\Omega\), \(ds\) is a differential element on this surface, and \(\vec{n}\) is the normal to this surface. Since the flux of probability through this surface is of measure zero, this term is zero and we get equation (8). ## Appendix C Here we show the derivation of equation (16). Equation (15) can be written in terms of marginal entropies and multivariate interaction. Consider a two-variable case, where \(H_{Z_{1},Z_{2}}(t)=H_{Z_{1}}(t)+H_{Z_{2}|Z_{1}}(t)\). Noting that \(H_{Z_{2}|Z_{1}}(t)=H_{Z_{2}}(t)-I_{Z_{1};Z_{2}}(t)\), where \(I_{Z_{1};Z_{2}}(t)\) is the mutual information between \(Z_{1}(t)\) and \(Z_{2}(t)\), we get \[\frac{\partial}{\partial t}\left(H_{Z_{1}}+H_{Z_{2}}-I_{Z_{1};Z_{2}}\right)-\mathcal{E}\left[\psi(\underline{\mathrm{Z}},t)\nabla\cdot\underline{\mathrm{F}}\right]=0. \tag{33}\] Alternatively, this can be written to show that the mutual information evolves as a function of the marginal entropies as: \[\frac{\partial I_{Z_{1};Z_{2}}}{\partial t}=\frac{\partial}{\partial t}\left(H_{Z_{1}}+H_{Z_{2}}\right)-\mathcal{E}\left[\psi(\underline{\mathrm{Z}},t)\nabla\cdot\underline{\mathrm{F}}\right]. \tag{34}\] This gives equation (16). ## Appendix D Here we show the derivation of equation (24). We note that the marginal entropy is \(H_{Z_{i}}(t)=\int_{\Omega_{i}}p_{i}\log(1/p_{i})\,dZ_{i}\), where the marginal _pdf_ \(p_{i}\) is obtained as \[p_{i}=\int_{\Omega\setminus\Omega_{i}}p(\underline{\mathrm{Z}},t)\,d\underline{\mathrm{Z}}_{i} \tag{35}\] and where \(\setminus\) is the exclusion operator and \(d\underline{\mathrm{Z}}_{i}\) is understood to exclude \(dZ_{i}\) in the context of an integral over \(\Omega\setminus\Omega_{i}\).
We first integrate equation (4) over the subspace \(\Omega\setminus\Omega_{i}\) as: \[\frac{\partial}{\partial t}\int_{\Omega\setminus\Omega_{i}}p(\underline{\mathrm{Z}},t)\,d\underline{\mathrm{Z}}_{i}+\int_{\Omega\setminus\Omega_{i}}\nabla\cdot(p\underline{\mathrm{F}})\,d\underline{\mathrm{Z}}_{i}-\int_{\Omega\setminus\Omega_{i}}p\nabla\cdot\underline{\mathrm{F}}\,d\underline{\mathrm{Z}}_{i}=0. \tag{36}\] For the second term we expand \(\nabla\cdot(p\underline{\mathrm{F}})\) and, for each \(j\neq i\), integrate the \(j\)-th term over \(\Omega_{j}\) as \(\int_{\Omega_{j}}\frac{\partial(pF_{j})}{\partial Z_{j}}\,dZ_{j}\). This evaluates to \(0\) since \(p\) has a compact support. As a result, \(\int_{\Omega\setminus\Omega_{i}}\nabla\cdot(p\underline{\mathrm{F}})\,d\underline{\mathrm{Z}}_{i}=\int_{\Omega\setminus\Omega_{i}}\frac{\partial(pF_{i})}{\partial Z_{i}}\,d\underline{\mathrm{Z}}_{i}\), as all other terms are \(0\). Further using equation (35), the above simplifies to \[\frac{\partial p_{i}}{\partial t}+\int_{\Omega\setminus\Omega_{i}}\frac{\partial(pF_{i})}{\partial Z_{i}}\,d\underline{\mathrm{Z}}_{i}-\int_{\Omega\setminus\Omega_{i}}p\nabla\cdot\underline{\mathrm{F}}\,d\underline{\mathrm{Z}}_{i}=0. \tag{37}\] Multiplying the above by \(-(1+\log p_{i})\), integrating with respect to \(Z_{i}\), and noting again that \(\int_{\Omega_{i}}\frac{\partial(pF_{i})}{\partial Z_{i}}\,dZ_{i}=0\), we get \[\frac{\partial H_{Z_{i}}}{\partial t}=\int_{\Omega}\log p_{i}\frac{\partial(pF_{i})}{\partial Z_{i}}\,d\underline{\mathrm{Z}}-\int_{\Omega}(1-\log\frac{1}{p_{i}})\,p\nabla\cdot\underline{\mathrm{F}}\,d\underline{\mathrm{Z}}. \tag{38}\] This reduces to equation (24). Funding support from ARPA-E grant DE-AR0001225 and NSF grants EAR 1331906, EAR 2012850, and OAC 1835834 is acknowledged. Special thanks to Peishi Jiang and Allison Goodwell for providing excellent insights on the derivations and their interpretation, and to Francina Dominguez and Hoshin Gupta for broader discussions.
2309.14853
Unipotent Representations of Complex Groups and Extended Sommers Duality
Let $G$ be a complex reductive algebraic group. In arXiv:2108.03453, we have defined a finite set of irreducible admissible representations of $G$ called `unipotent representations', generalizing the special unipotent representations of Arthur and Barbasch-Vogan. These representations are defined in terms of filtered quantizations of symplectic singularities and are expected to form the building blocks of the unitary dual of $G$. In this paper, we provide a description of these representations in terms of the Langlands dual group $G^{\vee}$. To this end, we construct a duality map $D$ from the set of pairs $(\mathbb{O}^{\vee},\bar{C})$ consisting of a nilpotent orbit $\mathbb{O}^{\vee} \subset \mathfrak{g}^{\vee}$ and a conjugacy class $\bar{C}$ in Lusztig's canonical quotient $\bar{A}(\mathbb{O}^{\vee})$ to the set of finite covers of nilpotent orbits in $\mathfrak{g}^*$.
Lucas Mason-Brown, Dmytro Matvieievskyi, Shilin Yu
2023-09-26T11:32:16Z
http://arxiv.org/abs/2309.14853v1
# Unipotent representations of complex groups and extended Sommers duality ###### Abstract. Let \(G\) be a complex reductive algebraic group. In [12], we have defined a finite set of irreducible admissible representations of \(G\) called _unipotent representations_, generalizing the _special unipotent representations_ of Arthur ([1]) and Barbasch-Vogan ([2]). These representations are defined in terms of filtered quantizations of symplectic singularities and are expected to form the building blocks of the unitary dual of \(G\). In this paper, we provide a description of these representations in terms of the Langlands dual group \(G^{\vee}\). To this end, we construct a duality map \(D\) from the set of pairs \((\mathbb{O}^{\vee},\bar{C})\) consisting of a nilpotent orbit \(\mathbb{O}^{\vee}\subset\mathfrak{g}^{\vee}\) and a conjugacy class \(\bar{C}\) in Lusztig's canonical quotient \(\bar{A}(\mathbb{O}^{\vee})\) to the set of finite covers of nilpotent orbits in \(\mathfrak{g}^{*}\). ## 1. Introduction Let \(G\) be a complex reductive algebraic group with Lie algebra \(\mathfrak{g}\). A _nilpotent cover_ is a finite connected \(G\)-equivariant cover of a nilpotent co-adjoint orbit \(\mathbb{O}\subset\mathfrak{g}^{*}\). Write \(\mathsf{Cov}(G)\) for the (finite) set of isomorphism classes of nilpotent covers for \(G\). In [12], we attach to each nilpotent cover \(\widehat{\mathbb{O}}\) a finite set \(\mathrm{Unip}_{\widehat{\mathbb{O}}}(G)\) of irreducible unitary representations of \(G\) called _unipotent representations_. Such representations possess a variety of special properties and are conjectured to form the building blocks of the unitary dual. They include, as a proper subset, all special unipotent representations in the sense of Arthur ([1]) and Barbasch-Vogan ([2]). The purpose of this article is to give a description of the sets \(\mathrm{Unip}_{\widehat{\mathbb{O}}}(G)\) in terms of the Langlands dual group \(G^{\vee}\). The dual-group parameters which appear in our description are called _Lusztig-Achar data_. A Lusztig-Achar datum for \(G^{\vee}\) is a pair \((\mathbb{O}^{\vee},\bar{C})\) consisting of a nilpotent orbit \(\mathbb{O}^{\vee}\subset\mathfrak{g}^{\vee}\) and a conjugacy class \(\bar{C}\) in Lusztig's canonical quotient \(\bar{A}(\mathbb{O}^{\vee})\) of the \(G^{\vee}\)-equivariant fundamental group of \(\mathbb{O}^{\vee}\), see Section 2.9. Such objects have previously appeared in the work of Achar [1]. Write \(\mathsf{LA}(G^{\vee})\) for the (finite) set of Lusztig-Achar data for \(G^{\vee}\). We will also consider the subset \(\mathsf{LA}^{*}(G^{\vee})\subset\mathsf{LA}(G^{\vee})\) of so-called _special Lusztig-Achar data_, defined in [1, Section 3]. This subset includes all pairs of the form \((\mathbb{O}^{\vee},1)\) (among others). Choose a Cartan subalgebra \(\mathfrak{h}\subset\mathfrak{g}\) and let \(W\) denote the Weyl group. Let \(\mathfrak{h}_{\mathbb{R}}^{*}\) denote the real form of the dual space \(\mathfrak{h}^{*}\) spanned by the roots of \(G\). To each Lusztig-Achar datum \((\mathbb{O}^{\vee},\bar{C})\), we attach a \(W\)-invariant subset \(S(\mathbb{O}^{\vee},\bar{C})\subset\mathfrak{h}_{\mathbb{R}}^{*}\). If \((\mathbb{O}^{\vee},\bar{C})\) is special, we show in Theorem 4.8 that there is a unique minimal-length \(W\)-orbit \(\gamma(\mathbb{O}^{\vee},\bar{C})\subset S(\mathbb{O}^{\vee},\bar{C})\).
This \(W\)-orbit determines an infinitesimal character for \(U(\mathfrak{g})\) by the Harish-Chandra isomorphism (which we also denote by \(\gamma(\mathbb{O}^{\vee},\bar{C})\)), and hence a maximal ideal \(I(\mathbb{O}^{\vee},\bar{C})\) in \(U(\mathfrak{g})\). We consider the finite set \[\mathrm{Unip}_{(\mathbb{O}^{\vee},\bar{C})}(G)=\{\text{irreducible $G$-equivariant Harish-Chandra $U(\mathfrak{g})$-bimodules}\] \[\text{which are annihilated on both sides by $I(\mathbb{O}^{\vee},\bar{C})$}\}.\] The notation suggests a relationship between \(\operatorname{Unip}_{(\mathbb{O}^{\vee},\bar{C})}(G)\) and \(\operatorname{Unip}_{\widehat{\mathbb{O}}}(G)\). To make this relationship precise, we define in Section 4 a natural duality map \[D:\mathsf{LA}^{*}(G^{\vee})\to\mathsf{Cov}(G)\] This map generalizes the duality maps of Barbasch-Vogan-Lusztig-Spaltenstein ([1]), Sommers ([16]), and Mason-Brown-Matvieievskyi-Losev ([15]), and is equivalent (under some nontrivial identifications) to the duality map of Achar ([1]). It enjoys various nice properties. For example, it is injective, maps distinguished Lusztig-Achar data to birationally rigid covers, and intertwines saturation of Lusztig-Achar data with birational induction of nilpotent covers. Our first main result is the following. **Theorem 1.1** (See Theorem 4.8 below).: _Let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\) and let \(\widetilde{\mathbb{O}}=D(\mathbb{O}^{\vee},\bar{C})\). Then there is an equality_ \[\operatorname{Unip}_{(\mathbb{O}^{\vee},\bar{C})}(G)=\operatorname{Unip}_{\widetilde{\mathbb{O}}}(G).\] Our second main result gives a parameterization of the set \(\operatorname{Unip}_{(\mathbb{O}^{\vee},\bar{C})}(G)\) in terms of the Langlands dual group. For each special Lusztig-Achar datum \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\), define \[\gamma=\gamma(\mathbb{O}^{\vee},\bar{C}),\quad s=\exp(2\pi i\gamma),\quad R^{\vee}=Z_{G^{\vee}}(s)^{\circ},\quad L^{\vee}=Z_{G^{\vee}}(\gamma).\] Note that \(R^{\vee}\) is a pseudo-Levi subgroup of \(G^{\vee}\) and \(L^{\vee}\) is a Levi subgroup of \(R^{\vee}\). Hence, we can consider the Richardson orbit \(\mathbb{O}_{R^{\vee}}\) for \(R^{\vee}\) corresponding to \(L^{\vee}\) (this is the nilpotent orbit for \(R^{\vee}\) obtained by Lusztig-Spaltenstein induction from the \(0\) orbit for \(L^{\vee}\)). **Theorem 1.2** (See Corollary 4.16 below).: _Assume \(G\) is adjoint and let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\). Then there is a natural bijection_ \[\operatorname{Unip}_{(\mathbb{O}^{\vee},\bar{C})}(G)\simeq\{\text{irreducible representations of }\bar{A}(\mathbb{O}_{R^{\vee}})\}.\] **Remark 1.3**.: _In fact, the representations in \(\operatorname{Unip}_{(\mathbb{O}^{\vee},\bar{C})}(G)\) are the irreducible objects in the monoidal category \(\operatorname{HC}^{G}(U(\mathfrak{g})/I(\mathbb{O}^{\vee},\bar{C}))\) of \(G\)-equivariant Harish-Chandra \(U(\mathfrak{g})\)-bimodules annihilated on both sides by \(I(\mathbb{O}^{\vee},\bar{C})\), see Section 3.13. The bijection in Theorem 1.2 comes from a monoidal equivalence of categories_ \[\operatorname{HC}^{G}(U(\mathfrak{g})/I(\mathbb{O}^{\vee},\bar{C}))\simeq\bar{A}(\mathbb{O}_{R^{\vee}})\operatorname{-mod}\] We note that the main definitions in this paper (the duality map \(D\), the set \(S(\mathbb{O}^{\vee},\bar{C})\subset\mathfrak{h}_{\mathbb{R}}^{*}\), and the pair \((R^{\vee},\mathbb{O}_{R^{\vee}})\)) are formulated in a canonical (i.e.
case-free) manner, although many of our proofs require case-by-case analysis. To deduce Theorem 1.2, we were forced to prove several new results on conical symplectic singularities and nilpotent covers, which may be of independent interest (see Section 3, in particular Sections 3.10-3.14). The main such result is Theorem 3.48, which states that birational induction of nilpotent covers takes maximal covers to maximal covers (in the sense of [15, Section 6.5]; see also Lemma 2.5). Now fix a nilpotent adjoint orbit \(\mathbb{O}^{\vee}\subset\mathfrak{g}^{\vee}\). Choose an \(\mathfrak{sl}(2)\)-triple \((e^{\vee},f^{\vee},h^{\vee})\) with \(e^{\vee}\in\mathbb{O}^{\vee}\). The corresponding homomorphism \(\psi_{\mathbb{O}^{\vee}}:SL(2)\to G^{\vee}\) is a unipotent Arthur parameter for the complex group \(G\) and the corresponding Arthur packet coincides with the set \(\operatorname{Unip}_{(\mathbb{O}^{\vee},1)}(G)\). So in fact Theorem 1.2 provides, as a special case, a parameterization of the elements of the unipotent Arthur packets for a complex reductive group (such a parameterization was previously obtained in [1] and [16] only in the special case when \(\mathbb{O}^{\vee}\) is special). We conclude with a few remarks regarding symplectic duality, which appears to be connected, in somewhat mysterious ways, to the main results of this paper. In [14, Section 9.3] it was conjectured that the nilpotent Slodowy slice \(S^{\vee}\) to the nilpotent orbit \(\mathbb{O}^{\vee}\subset\mathfrak{g}^{\vee}\) is symplectically dual to the affinization of the nilpotent cover \(D(\mathbb{O}^{\vee},1)\). In ongoing work, Finkelberg, Hanany, and Nakajima produce for each nilpotent orbit \(\mathbb{O}^{\vee}\) in \(\mathfrak{so}(2n)\) or \(\mathfrak{sp}(2n)\) an ortho-symplectic quiver gauge theory \(Q\) with Higgs branch \(S^{\vee}\) and Coulomb branch isomorphic to the affinization of a certain cover of the special nilpotent orbit \(d(\mathbb{O}^{\vee})\) with Galois group \(\bar{A}(\mathbb{O}^{\vee})\). It is reasonable to conjecture that this nilpotent cover is isomorphic to \(D(\mathbb{O}^{\vee},1)\). The results of this paper offer strong evidence for this. Indeed, it follows from Theorem 4.14 that \(D(\mathbb{O}^{\vee},1)\) is a Galois cover of \(d(\mathbb{O}^{\vee})\) with Galois group \(\bar{A}(\mathbb{O}^{\vee})\). To each conjugacy class \(\bar{C}\) in \(\bar{A}(\mathbb{O}^{\vee})\) we can associate a certain finite group \(\Pi\) depending on \(\bar{C}\) which acts on both \(S^{\vee}\) and \(\operatorname{Spec}(\mathbb{C}[D(\mathbb{O}^{\vee},1)])\) by graded Poisson automorphisms. Based on the observations in [10], we expect that the variety of fixed points \(\operatorname{Spec}(\mathbb{C}[D(\mathbb{O}^{\vee},1)])^{\Pi}\) is identified with \(\operatorname{Spec}(\mathbb{C}[D(\mathbb{O}^{\vee},\bar{C})])\). We believe that the assignment \[(S^{\vee},\Pi)\mapsto\operatorname{Spec}(\mathbb{C}[D(\mathbb{O}^{\vee},\bar{C})])\] should be regarded as a special case of a (still highly conjectural) _equivariant_ version of symplectic duality. This topic will be explored in a future paper. ### Structure of paper In Section 2, we recall some preliminaries related to nilpotent orbits and Lie theory, including (birational) induction, Lusztig's canonical quotient, primitive ideals in the universal enveloping algebra, and Sommers duality. In Section 3, we recall some preliminaries related to symplectic singularities and unipotent ideals. This section also includes several new results.
In Section 4 we state our main results on unipotent representations. The proofs of these results appear in Sections 5-8. The classical cases are proved in Sections 5 and 6; the exceptional cases are handled in Sections 7 and 8. ### Acknowledgments We would like to thank Jeffrey Adams, Dan Barbasch, Ivan Losev, Hiraku Nakajima, Alexander Premet, Eric Sommers, and David Vogan for helpful discussions. The third author is grateful to Chen Jiang and Yoshinori Namikawa for numerous discussions on birational geometry, and to Binyong Sun for his hospitality and inspiring discussions during his visits to the Institute for Advanced Study at Zhejiang University. The work of S. Yu was partially supported by China NSFC grants (Grants No. 12001453 and 12131018) and Fundamental Research Funds for the Central Universities (Grants No. 20720200067 and 20720200071). ###### Contents * 1 Introduction * 1.1 Structure of paper * 1.2 Acknowledgments * 2 Preliminaries * 2.1 Saturation * 2.2 Nilpotent covers * 2.3 Induction * 2.4 Birational induction and equivariant fundamental groups * 2.5 Primitive ideals * 2.6 BVLS duality * 2.7 Truncated induction * 2.8 Sommers duality * 2.9 Lusztig's canonical quotient * 2.10 Lusztig-Achar data * 2.11 Special Lusztig-Achar data * 3 Symplectic singularities and unipotent ideals * 3.1 Poisson deformations * 3.2 Filtered quantizations * 3.3 Symplectic singularities and \(\mathbb{Q}\)-factorial terminalizations * 3.4 The Namikawa space and Weyl group * 3.5 Filtered quantizations of conical symplectic singularities * 3.6 \(\mathbb{Q}\)-factorial terminalizations of nilpotent covers * 3.7 Universal Poisson deformations of conical symplectic singularities * 3.8 Universal Poisson deformations of nilpotent covers * 3.9 Graded Poisson automorphisms of conical symplectic singularities * 3.10 The extended Namikawa Weyl groups of nilpotent covers * 3.11 Extended Namikawa Weyl group vs parabolic induction * 3.12 Unipotent ideals * 3.13 Unipotent bimodules * 3.14 Birational induction preserves maximal covers * 4 Main results * 5 Combinatorics in classical types * 5.1 Nilpotent orbits * 5.2 The group \(A(\mathbb{O})\) * 5.3 Levi subalgebras * 5.4 Maximal pseudo-Levi subalgebras * 5.5 Saturation of nilpotent orbits * 5.6 (Birational) induction of nilpotent orbits * 5.7 Birationally rigid covers * 5.8 BVLS duality * 5.9 Lusztig-Achar data * 5.10 Special Lusztig-Achar data * 5.11 Saturation of conjugacy data * 5.12 Saturation of Lusztig-Achar data * 5.13 Sommers duality * 5.14 Block decompositions * 5.15 Unipotent infinitesimal characters * 6 Proofs of main results in classical types * 6.1 Proof of Proposition 4.3 * 6.2 Proof of Theorem 4.8 * 6.3 Proof of Proposition 4.11 * 6.4 Proof of Theorem 4.14 * 7 Proofs of main results in exceptional types * 7.1 Proof of Proposition 4.3 and Theorem 4.8 * 7.2 Proof of Proposition 4.11 and Theorem 4.14 * 8 Tables ## 2. Preliminaries Let \(G\) be a complex connected reductive algebraic group with Lie algebra \(\mathfrak{g}\). A _nilpotent orbit_ for \(G\) is a co-adjoint orbit \(\mathbb{O}\subset\mathfrak{g}^{*}\) which is stable under scaling. Let \[\mathsf{Orb}(G):=\{\text{nilpotent orbits }\mathbb{O}\subset\mathfrak{g}^{*}\}\] It is well-known that \(\mathsf{Orb}(G)\) is finite and independent of isogeny. Let \(\mathcal{N}\subset\mathfrak{g}^{*}\) denote the union of all nilpotent orbits in \(\mathfrak{g}^{*}\).
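For orientation, here is a standard example (ours, not spelled out at this point in the paper): when \(G=SL(n,\mathbb{C})\), nilpotent orbits are classified by Jordan type, i.e. by partitions \(\lambda\) of \(n\), with \(\dim\mathbb{O}_{\lambda}=n^{2}-\sum_{i}(\lambda^{t}_{i})^{2}\), where \(\lambda^{t}\) denotes the transpose partition. For \(n=3\), \[\mathsf{Orb}(SL(3))=\{\mathbb{O}_{[3]},\,\mathbb{O}_{[2,1]},\,\mathbb{O}_{[1,1,1]}\},\] with \(\dim\mathbb{O}_{[3]}=6\) (the regular orbit), \(\dim\mathbb{O}_{[2,1]}=4\) (the minimal orbit), and \(\mathbb{O}_{[1,1,1]}=\{0\}\).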
For each \(\mathbb{O}\in\mathsf{Orb}(G)\), choose an element \(e\in\mathbb{O}\), and let \(A(\mathbb{O})\) denote the (finite) component group of the centralizer of \(e\) in \(G\). Note that \(A(\mathbb{O})\) is independent (up to conjugacy) of the choice of \(e\) in \(\mathbb{O}\). A _conjugacy datum_ for \(G\) is a pair \((\mathbb{O},C)\) consisting of a nilpotent orbit \(\mathbb{O}\in\mathsf{Orb}(G)\) and a conjugacy class \(C\) in \(A(\mathbb{O})\). Let \[\mathsf{Conj}(G):=\{\text{conjugacy data }(\mathbb{O},C)\text{ for }G\}\] Since \(\mathsf{Orb}(G)\) is finite and \(A(\mathbb{O})\) is finite (for each \(\mathbb{O}\in\mathsf{Orb}(G)\)), \(\mathsf{Conj}(G)\) is finite. A _pseudo-Levi subgroup_ of \(G\) is the identity component of the centralizer of a semisimple element \(s\in G\). A _McNinch-Sommers datum_ for \(G\) is a triple \((M,tZ^{\circ},\mathbb{O}_{M})\) consisting of a pseudo-Levi subgroup \(M\subset G\), a coset \(tZ^{\circ}\) in the component group \(Z/Z^{\circ}\) of \(Z=Z(M)\), and a nilpotent orbit \(\mathbb{O}_{M}\in\mathsf{Orb}(M)\) such that \(M=Z_{G}(tZ^{\circ})^{\circ}\). Note that \(G\) acts by conjugation on the set of McNinch-Sommers data. Let \[\mathsf{MS}(G):=\{\text{McNinch-Sommers data }(M,tZ^{\circ},\mathbb{O}_{M})\text{ for }G\}/G.\] ### Saturation Suppose \(L\subset G\) is a Levi subgroup. If \(\mathbb{O}_{L}\in\mathsf{Orb}(L)\), then \(G\cdot\mathbb{O}_{L}\in\mathsf{Orb}(G)\). This defines a map \[\text{Sat}_{L}^{G}:\mathsf{Orb}(L)\to\mathsf{Orb}(G),\qquad\text{Sat}_{L}^{G}\mathbb{O}_{L}=G\cdot\mathbb{O}_{L}. \tag{2.1.1}\] Now choose \(e\in\mathbb{O}_{L}\). The inclusion \(Z_{L}(e)\subset Z_{G}(e)\) induces a group homomorphism \[\iota:A(\mathbb{O}_{L})\to A(\text{Sat}_{L}^{G}\mathbb{O}_{L}). \tag{2.1.2}\] (in fact, \(\iota\) is injective, but we will not use this fact). Thus we get a map \[\text{Sat}_{L}^{G}:\mathsf{Conj}(L)\to\mathsf{Conj}(G),\qquad\text{Sat}_{L}^{G}(\mathbb{O}_{L},C_{L})=(\text{Sat}_{L}^{G}\mathbb{O}_{L},\iota(C_{L})). \tag{2.1.3}\] Finally, suppose \((M,tZ^{\circ},\mathbb{O}_{M})\in\mathsf{MS}(L)\). Since \(Z_{G}(tZ^{\circ})\subseteq Z_{G}(Z^{\circ})\subseteq Z_{G}(Z(L)^{\circ})=L\), we have \(Z_{G}(tZ^{\circ})^{\circ}=Z_{L}(tZ^{\circ})^{\circ}=M\). So there is a tautological map \[\text{Sat}_{L}^{G}:\mathsf{MS}(L)\to\mathsf{MS}(G),\qquad\text{Sat}_{L}^{G}(M,tZ^{\circ},\mathbb{O}_{M})=(M,tZ^{\circ},\mathbb{O}_{M})\] A nilpotent orbit (resp. conjugacy datum, resp. McNinch-Sommers datum) is _distinguished_ if it cannot be obtained by saturation from a proper Levi subgroup. Let \(\mathfrak{z}(\mathfrak{a})\) denote the center of a Lie algebra \(\mathfrak{a}\). **Lemma 2.1**.: _Let \((M,tZ^{\circ},\mathbb{O}_{M})\in\mathsf{MS}(G)\). Then the following are equivalent:_ 1. \((M,tZ^{\circ},\mathbb{O}_{M})\) _is distinguished._ 2. \(M\) _is not contained in a proper Levi subgroup of_ \(G\)_._ 3. \(\mathfrak{z}(\mathfrak{g})=\mathfrak{z}(\mathfrak{m})\)_._ 4. \(M\) _is of maximal semisimple rank._ Proof.: Clearly (ii) implies (i). To see that (i) implies (ii), suppose that \(M\) is contained in a proper Levi subgroup \(L\) of \(G\). Then \(Z_{L}(tZ^{\circ})^{\circ}=Z_{G}(tZ^{\circ})^{\circ}\cap L=M\cap L=M\). So \((M,tZ^{\circ},\mathbb{O}_{M})\in\mathsf{MS}(L)\), i.e. \((M,tZ^{\circ},\mathbb{O}_{M})\in\mathsf{MS}(G)\) is not distinguished.
The equivalence of (ii) and (iii) is an immediate consequence of the following well-known facts: if \(L\) is a Levi subgroup of \(G\), then \(L=Z_{G}(\mathfrak{z}(\mathfrak{l}))\), and \(L\neq G\) if and only if \(\mathfrak{z}(\mathfrak{l})\neq\mathfrak{z}(\mathfrak{g})\). (iii) and (iv) are equivalent by definition. We say that a McNinch-Sommers datum \((M,tZ^{\circ},\mathbb{O}_{M})\) is _large_ if \(\mathbb{O}_{M}\) is distinguished. Denote the set of conjugacy classes of large McNinch-Sommers data by \(\mathsf{MS}^{large}(G)\). It is clear that largeness is preserved under saturation. Let \((M,tZ^{\circ},\mathbb{O}_{M})\in\mathsf{MS}(G)\). Choose \(e\in\mathbb{O}_{M}\). Then \(Z^{\circ}\subset Z_{G}(e)^{\circ}\) and \(t\in Z_{G}(e)\). Thus, we get a map \[\pi:\mathsf{MS}(G)\to\mathsf{Conj}(G),\qquad\pi(M,tZ^{\circ},\mathbb{O}_{M})=(G\cdot e,tZ_{G}(e)^{\circ}).\] We will sometimes write \(\pi^{G}\) instead of \(\pi\) when we wish to emphasize the dependence on \(G\). **Lemma 2.2**.: _The following are true:_ 1. _The restriction of_ \(\pi\) _to_ \(\mathsf{MS}^{large}(G)\) _is a bijection onto_ \(\mathsf{Conj}(G)\)_._ 2. _If_ \(L\subset G\) _is a Levi subgroup, then saturation commutes with_ \(\pi\)_, i.e._ \(\pi^{G}\circ\text{Sat}_{L}^{G}=\text{Sat}_{L}^{G}\circ\pi^{L}\) _as maps_ \(\mathsf{MS}(L)\to\mathsf{Conj}(G)\)_._ 3. _If_ \((\mathbb{O},C)\in\mathsf{Conj}(G)\) _is distinguished, then every McNinch-Sommers datum in_ \(\pi^{-1}(\mathbb{O},C)\) _is large._ Proof.: (i) is the content of [13, Theorem 1]. (ii) is immediate from the definitions. For (iii), suppose \((\mathbb{O},C)\in\mathsf{Conj}(G)\) is distinguished. It follows from (ii) that \(\pi^{-1}(\mathbb{O},C)\) consists of distinguished McNinch-Sommers data. Suppose \((M,tZ(M)^{\circ},\mathbb{O}_{M})\in\pi^{-1}(\mathbb{O},C)\) is not large, i.e. \(\mathbb{O}_{M}\) is not distinguished. Then there is a proper Levi subgroup \(L\subset M\) and a nilpotent orbit \(\mathbb{O}_{L}\in\mathsf{Orb}(L)\) such that \(\mathbb{O}_{M}=\operatorname{Sat}_{L}^{M}\mathbb{O}_{L}\). A Levi subgroup of a pseudo-Levi subgroup of \(G\) is a pseudo-Levi subgroup of \(G\). So \(L\) is a pseudo-Levi subgroup of \(G\). Since \(L\subset M\), there are inclusions \(Z(M)\subset Z(L)\) and \(Z(M)^{\circ}\subset Z(L)^{\circ}\). Hence, \(tZ(M)^{\circ}\) determines a coset \(tZ(L)^{\circ}\) in \(Z(L)/Z(L)^{\circ}\). Since \(tZ(M)^{\circ}\subseteq tZ(L)^{\circ}\), we have \[Z_{G}(tZ(L)^{\circ})^{\circ}\subseteq Z_{G}(tZ(M)^{\circ})^{\circ}=M\] Also \[Z_{G}(tZ(L)^{\circ})^{\circ}\subseteq Z_{G}(Z(L)^{\circ})\] Thus, \[Z_{G}(tZ(L)^{\circ})^{\circ}\subseteq Z_{M}(Z(L)^{\circ})=L\] On the other hand, clearly \(L\subseteq Z_{G}(tZ(L)^{\circ})^{\circ}\). So in fact, \(L=Z_{G}(tZ(L)^{\circ})^{\circ}\), i.e. \((L,tZ(L)^{\circ},\mathbb{O}_{L})\in\mathsf{MS}(G)\). By the definition of \(\pi\), we have \(\pi(L,tZ(L)^{\circ},\mathbb{O}_{L})=(\mathbb{O},C)\), so \((L,tZ(L)^{\circ},\mathbb{O}_{L})\) must be distinguished by the argument above. And yet, since \(L\subset M\) is a proper Levi subgroup, we have \(\mathfrak{z}(\mathfrak{g})\subseteq\mathfrak{z}(\mathfrak{m})\subsetneq\mathfrak{z}(\mathfrak{l})\), so \((L,tZ(L)^{\circ},\mathbb{O}_{L})\) is not distinguished by Lemma 2.1. This is a contradiction.
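In type \(A\), saturation is easy to make concrete: the Jordan type of a block-diagonal nilpotent element is the multiset union of the Jordan types of its blocks, so for \(L=GL(a)\times GL(b)\subset GL(a+b)\) the map \(\operatorname{Sat}_{L}^{G}\) amounts to taking the union of partitions (the classical-type combinatorics of saturation is developed in Section 5.5). The following small numerical sketch (ours; the helper functions are not from the paper) illustrates this:

```python
import numpy as np

def jordan_type(N, tol=1e-9):
    # Jordan type of a nilpotent matrix from the ranks of its powers:
    # (number of Jordan blocks of size >= k) = rank(N^{k-1}) - rank(N^k).
    n = N.shape[0]
    r = [n] + [np.linalg.matrix_rank(np.linalg.matrix_power(N, k), tol=tol)
               for k in range(1, n + 1)]
    ge = [r[k - 1] - r[k] for k in range(1, n + 1)]  # blocks of size >= k
    part = []
    for k in range(n, 0, -1):
        exactly = ge[k - 1] - (ge[k] if k < n else 0)
        part += [k] * exactly
    return part

def nilp(sizes):
    # block-diagonal nilpotent with Jordan blocks of the given sizes
    n = sum(sizes); N = np.zeros((n, n)); i = 0
    for k in sizes:
        N[i:i + k, i:i + k] = np.eye(k, k, 1)        # k x k Jordan block
        i += k
    return N

# Saturation from the Levi GL(3) x GL(2) inside GL(5): the orbit of
# (Jordan type (2,1)) + (Jordan type (2)) has Jordan type (2,2,1).
print(jordan_type(nilp([2, 1, 2])))                  # -> [2, 2, 1]
```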
Write \[\mathsf{Orb}_{0}(G) :=\{(L,\mathbb{O}_{L})\mid\mathbb{O}_{L}\in\mathsf{Orb}(L)\text{ distinguished}\}/G\] \[\mathsf{Conj}_{0}(G) :=\{(L,(\mathbb{O}_{L},C_{L}))\mid(\mathbb{O}_{L},C_{L})\in\mathsf{Conj}(L)\text{ distinguished}\}/G\] \[\mathsf{MS}_{0}(G) :=\{(L,(M,tZ^{\circ},\mathbb{O}_{M}))\mid(M,tZ^{\circ},\mathbb{O}_{M})\in\mathsf{MS}(L)\text{ distinguished}\}/G\] \[\mathsf{MS}_{0}^{large}(G) :=\{(L,(M,tZ^{\circ},\mathbb{O}_{M}))\mid(M,tZ^{\circ},\mathbb{O}_{M})\in\mathsf{MS}^{large}(L)\text{ distinguished}\}/G\] where \(L\) runs over all Levi subgroups of \(G\). Saturation gives rise to surjective maps \[\mathsf{Orb}_{0}(G)\to\mathsf{Orb}(G),\quad\mathsf{Conj}_{0}(G)\to\mathsf{Conj}(G),\quad\mathsf{MS}_{0}(G)\to\mathsf{MS}(G),\quad\mathsf{MS}_{0}^{large}(G)\to\mathsf{MS}^{large}(G)\] At the risk of abusing notation, we denote all four maps by 'Sat'. **Proposition 2.3**.: _The maps_ \[\mathsf{Orb}_{0}(G)\to\mathsf{Orb}(G),\quad\mathsf{Conj}_{0}(G)\to\mathsf{Conj}(G),\quad\mathsf{MS}_{0}(G)\to\mathsf{MS}(G),\quad\mathsf{MS}_{0}^{large}(G)\to\mathsf{MS}^{large}(G)\] _are bijections._ Proof.: The assertion for \(\mathsf{Orb}(G)\) is the classical Bala-Carter theorem, see [13, Theorem 8.2.12] for a proof. By Lemma 2.1, a McNinch-Sommers datum \((M,tZ^{\circ},\mathbb{O}_{M})\in\mathsf{MS}(G)\) is distinguished if and only if \(\mathfrak{z}(\mathfrak{g})=\mathfrak{z}(\mathfrak{m})\). On the other hand, there is a unique Levi subgroup \(L\subset G\) containing \(M\) such that \(\mathfrak{z}(\mathfrak{m})=\mathfrak{z}(\mathfrak{l})\), namely \(L:=Z_{G}(\mathfrak{z}(\mathfrak{m}))\). This proves the assertion for both \(\mathsf{MS}(G)\) and \(\mathsf{MS}^{large}(G)\). The assertion for \(\mathsf{Conj}(G)\) follows from the assertion for \(\mathsf{MS}^{large}(G)\) and Lemma 2.2. ### Nilpotent covers A _nilpotent cover_ for \(G\) is a finite étale \(G\)-equivariant cover of a nilpotent co-adjoint \(G\)-orbit. A _morphism of nilpotent covers_ is a finite étale map \(\widetilde{\mathbb{O}}\to\widehat{\mathbb{O}}\) which intertwines the covering maps \(\widetilde{\mathbb{O}}\to\mathbb{O}\) and \(\widehat{\mathbb{O}}\to\mathbb{O}\) (any such map is automatically \(G\)-equivariant, see Example 3.8). Let \(\operatorname{Aut}(\widetilde{\mathbb{O}},\mathbb{O})\) denote the set of invertible endomorphisms of \(\widetilde{\mathbb{O}}\to\mathbb{O}\). If we fix a morphism of covers \(\widetilde{\mathbb{O}}\to\widehat{\mathbb{O}}\), we can similarly define \(\operatorname{Aut}(\widetilde{\mathbb{O}},\widehat{\mathbb{O}})\). In this case, we call \(\operatorname{Aut}(\widetilde{\mathbb{O}},\widehat{\mathbb{O}})\) the _Galois group_ of the cover \(\widetilde{\mathbb{O}}\to\widehat{\mathbb{O}}\). We say that the morphism \(\widetilde{\mathbb{O}}\to\widehat{\mathbb{O}}\) is _Galois_ if it induces an isomorphism \(\widehat{\mathbb{O}}\simeq\widetilde{\mathbb{O}}/\operatorname{Aut}(\widetilde{\mathbb{O}},\widehat{\mathbb{O}})\). Let \[\mathsf{Cov}(G):=\{\text{isomorphism classes of nilpotent covers }\widetilde{\mathbb{O}}\text{ for }G\}\] If \(\mathbb{O}\in\mathsf{Orb}(G)\), then (isomorphism classes of) nilpotent covers of \(\mathbb{O}\) are in one-to-one correspondence with (conjugacy classes of) subgroups of \(A(\mathbb{O})\). In particular, \(\mathsf{Cov}(G)\) is finite. Occasionally, we will also consider the category \(\mathcal{C}ov(G)\) consisting of nilpotent covers for \(G\), equipped with morphisms of nilpotent covers.
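A standard example illustrating these notions (ours, to fix ideas): let \(G=Sp(2n,\mathbb{C})\) and let \(\mathbb{O}_{min}\subset\mathfrak{g}^{*}\) be the minimal nilpotent orbit, so that \(\mathbb{O}_{min}\simeq(\mathbb{C}^{2n}\setminus\{0\})/\{\pm 1\}\) and \(A(\mathbb{O}_{min})\simeq\mathbb{Z}/2\mathbb{Z}\). Accordingly \(\mathbb{O}_{min}\) admits exactly two covers: the trivial one and the double cover \(\widetilde{\mathbb{O}}_{min}=\mathbb{C}^{2n}\setminus\{0\}\), which is Galois with Galois group \(\mathbb{Z}/2\mathbb{Z}\) and satisfies \(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{min}])\simeq\mathbb{C}^{2n}\).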
Following [10], we will define an equivalence relation \(\sim\) on \(\mathsf{Cov}(G)\). Suppose \(\widetilde{\mathbb{O}}\to\widehat{\mathbb{O}}\) is a morphism of covers. Then there is an induced map of affine varieties \(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}])\to\operatorname{Spec}(\mathbb{C}[\widehat{\mathbb{O}}])\). We say this map is _almost étale_ if it is étale over all \(G\)-orbits in \(\operatorname{Spec}(\mathbb{C}[\widehat{\mathbb{O}}])\) of codimension \(2\) (it is automatically étale over the open \(G\)-orbit \(\widehat{\mathbb{O}}\)). We write \(\widetilde{\mathbb{O}}\geq\widehat{\mathbb{O}}\) if there exists a morphism \(\widetilde{\mathbb{O}}\to\widehat{\mathbb{O}}\) such that the induced map \(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}])\to\operatorname{Spec}(\mathbb{C}[\widehat{\mathbb{O}}])\) has this property. This defines a partial order on \(\mathsf{Cov}(G)\). **Definition 2.4** (Definition 6.5.1, [10]).: _Let \(\sim\) be the equivalence relation on \(\mathsf{Cov}(G)\) defined by taking the symmetric closure of \(\geq\). For \(\widetilde{\mathbb{O}}\in\mathsf{Cov}(G)\), let \([\widetilde{\mathbb{O}}]\) denote the equivalence class of \(\widetilde{\mathbb{O}}\)._ We will need some basic facts about equivalence classes in \(\mathsf{Cov}(G)\). **Lemma 2.5** (Lemma 6.5.3, [13]).: _Let \([\widehat{\mathbb{O}}]\subset\mathsf{Cov}(G)\) be an equivalence class. Then the following are true:_ 1. \([\widehat{\mathbb{O}}]\) _contains a unique maximal cover_ \(\widehat{\mathbb{O}}_{max}\)_._ 2. \(\widehat{\mathbb{O}}_{max}\) _is Galois over every cover in_ \([\widehat{\mathbb{O}}]\)_._ ### Induction Suppose \(L\subset G\) is a Levi subgroup and let \(\mathbb{O}_{L}\in\mathsf{Orb}(L)\). Fix a parabolic subgroup \(P\subset G\) with Levi decomposition \(P=LU\). The annihilator of \(\mathfrak{p}\) in \(\mathfrak{g}^{*}\) is a \(P\)-stable subspace \(\mathfrak{p}^{\perp}\subset\mathfrak{g}^{*}\). Form the \(G\)-equivariant fiber bundle \(G\times^{P}(\overline{\mathbb{O}}_{L}\times\mathfrak{p}^{\perp})\) over the partial flag variety \(G/P\). There is a proper \(G\)-equivariant map \[\mu:G\times^{P}(\overline{\mathbb{O}}_{L}\times\mathfrak{p}^{\perp})\to\mathfrak{g}^{*}\qquad\mu(g,\xi)=\operatorname{Ad}^{*}(g)\xi\] The image of \(\mu\) is a closed irreducible \(G\)-invariant subset of \(\mathcal{N}\), and hence the closure in \(\mathfrak{g}^{*}\) of a nilpotent \(G\)-orbit, denoted \(\operatorname{Ind}_{L}^{G}\mathbb{O}_{L}\in\mathsf{Orb}(G)\). It is a standard fact that \(\operatorname{Ind}_{L}^{G}\mathbb{O}_{L}\) is independent of the choice of parabolic \(P\). The correspondence \[\operatorname{Ind}_{L}^{G}:\mathsf{Orb}(L)\to\mathsf{Orb}(G)\] is called _Lusztig-Spaltenstein_ induction ([11]). Now let \(\widetilde{\mathbb{O}}_{L}\in\mathsf{Cov}(L)\) and form the affine variety \(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{L}])\). There is an \(L\)-action on \(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{L}])\) (induced from the \(L\)-action on \(\widetilde{\mathbb{O}}_{L}\)) and a finite surjective \(L\)-equivariant map \(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{L}])\to\overline{\mathbb{O}}_{L}\).
Let \(\widetilde{\mu}\) denote the composition \[G\times^{P}(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{L}])\times \mathfrak{p}^{\perp})\to G\times^{P}(\overline{\mathbb{O}}_{L}\times\mathfrak{ p}^{\perp})\stackrel{{\mu}}{{\to}}\mathfrak{g}^{*}.\] The image of \(\widetilde{\mu}\) is the closure of \(\operatorname{Ind}_{L}^{G}\mathbb{O}_{L}\) and the preimage of \(\operatorname{Ind}_{L}^{G}\mathbb{O}_{L}\) is a finite etale \(G\)-equivariant cover, denoted \(\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}\in\mathsf{Cov}(G)\). Again, \(\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}\) is independent of the choice of parabolic \(P\) (see [13, Proposition 2.4.1(i)]). The correspondence \[\operatorname{Bind}_{L}^{G}:\mathsf{Cov}(L)\to\mathsf{Cov}(G)\] is called _birational induction_. We will need several basic facts about birational induction. **Proposition 2.6**.: _Birational induction has the following properties:_ 1. _Suppose_ \(\widetilde{\mathbb{O}}_{L},\widehat{\mathbb{O}}_{L}\in\mathsf{Cov}(L)\) _and let_ \(\widetilde{\mathbb{O}}=\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}\)_,_ \(\widehat{\mathbb{O}}=\operatorname{Bind}_{L}^{G}\widehat{\mathbb{O}}_{L}\)_. Then any_ \(L\)_-equivariant morphism_ \(p_{L}:\widehat{\mathbb{O}}_{L}\to\widetilde{\mathbb{O}}_{L}\) _induces a canonically defined finite_ \(G\)_-equivariant morphism_ \(p=\operatorname{Bind}_{L}^{G}(p_{L}):\widehat{\mathbb{O}}\to\widetilde{\mathbb{O}}\) _with_ \(\deg p=\deg p_{L}\)_. Therefore birational induction induces a well-defined functor_ \(\mathcal{B}ind_{L}^{G}:\mathcal{C}ov(L)\to\mathcal{C}ov(G)\)_,_ \(\widetilde{\mathbb{O}}_{L}\mapsto\operatorname{Bind}_{L}^{G}\widetilde{ \mathbb{O}}_{L}\)_._ 2. _Suppose the morphism_ \(p_{L}:\widehat{\mathbb{O}}_{L}\to\widetilde{\mathbb{O}}_{L}\) _in (i) is a finite Galois_ \(L\)_-equivariant covering. Then the induced covering_ \(p=\operatorname{Bind}_{L}^{G}(p_{L}):\widehat{\mathbb{O}}\to\widetilde{\mathbb{O}}\) _is a finite Galois_ \(G\)_-equivariant covering. Moreover, the induced group homomorphism_ \[\operatorname{Aut}(\widehat{\mathbb{O}}_{L},\widetilde{\mathbb{O}}_{L})\xrightarrow{ \sim}\operatorname{Aut}(\widehat{\mathbb{O}},\widetilde{\mathbb{O}})\] _is an isomorphism._ 3. _Suppose_ \(M\subset G\) _is a Levi subgroup containing_ \(L\)_. Then there is a canonical natural equivalence of functors_ \[\mathcal{B}ind_{L}^{G}\simeq\mathcal{B}ind_{M}^{G}\circ\mathcal{B}ind_{L}^{M}.\] 4. _For any_ \(\widetilde{\mathbb{O}}_{L}\in\mathsf{Cov}(L)\)_,_ \(\deg(\widetilde{\mathbb{O}}_{L}\to\mathbb{O}_{L})\) _divides_ \(\deg(\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}\to\operatorname{ Ind}_{L}^{G}\mathbb{O}_{L})\)_._ Proof.: Let \(\widetilde{Y}=G\times^{P}(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{L}]) \times\mathfrak{p}^{\perp})\) and \(\widehat{Y}=G\times^{P}(\operatorname{Spec}(\mathbb{C}[\widehat{\mathbb{O}}_{L}] )\times\mathfrak{p}^{\perp})\). Extend the morphism \(p_{L}\) to \(p_{L}:\operatorname{Spec}(\mathbb{C}[\widehat{\mathbb{O}}_{L}])\to \operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{L}])\). Then \(p_{L}\) induces a natural finite morphism \(\widehat{Y}\to\widetilde{Y}\) which restricts to the morphism \(p=\operatorname{Bind}_{L}^{G}(p_{L}):\widehat{\mathbb{O}}\to\widetilde{\mathbb{O}}\) as claimed in (i). The equality \(\deg p=\deg p_{L}\) is clear by construction. 
The independence of \(p=\operatorname{Bind}_{L}^{G}(p_{L})\) on the parabolic \(P\) follows from the argument of [10, Lemma 4.1]. For (ii), we argue as follows. Every automorphism \(\gamma\) of \(\widehat{Y}\to\widetilde{Y}\) restricts to an automorphism \(\gamma|_{\widehat{\mathbb{O}}}\in\operatorname{Aut}(\widehat{\mathbb{O}}, \widetilde{\mathbb{O}})\). Thus, we obtain an injective homomorphism \(\operatorname{Aut}(\widehat{\mathbb{O}}_{L},\widetilde{\mathbb{O}}_{L})\to \operatorname{Aut}(\widehat{\mathbb{O}},\widetilde{\mathbb{O}})\). We wish to show that this homomorphism is surjective. Note that \[|\operatorname{Aut}(\widehat{\mathbb{O}},\widetilde{\mathbb{O}})|\leq\deg p=\deg p _{L}=|\operatorname{Aut}(\widehat{\mathbb{O}}_{L},\widetilde{\mathbb{O}}_{L})|\] and equality holds if and only if the cover is Galois. The statement follows. (iii) and (iv) are (ii) and (iv) of [12, Proposition 2.4.1]. A nilpotent orbit (resp. nilpotent cover) is _rigid_ (resp. _birationally rigid_) if it cannot be obtained by induction (resp. birational induction) from a proper Levi subgroup. Note that if \(\mathbb{O}\in\mathsf{Orb}(G)\) is rigid and \(\widehat{\mathbb{O}}\) covers \(\mathbb{O}\), then \(\widehat{\mathbb{O}}\) is birationally rigid. Let \[\mathsf{Cov}_{0}(G):=\{(L,\widehat{\mathbb{O}}_{L})\mid\widehat{ \mathbb{O}}_{L}\in\mathsf{Cov}(L)\text{ is birationally rigid}\}/G\] where \(L\) runs over all Levi subgroups of \(G\). **Proposition 2.7** (Proposition 2.4.1(iii), [12]).: _The map_ \[\operatorname{Bind}:\mathsf{Cov}_{0}(G)\to\mathsf{Cov}(G),\qquad \operatorname{Bind}(L,\widehat{\mathbb{O}}_{L})=\operatorname{Bind}_{L}^{G} \widehat{\mathbb{O}}_{L}\] _is a bijection._ We will see in Section 3.12 that birational induction preserves the equivalence relation on nilpotent covers and takes maximal covers to maximal covers. We conclude this subsection by describing a large class of orbits which are birationally induced from \(\{0\}\). Suppose \(\mathbb{O}\in\mathsf{Orb}(G)\). Using an \(\operatorname{Ad}(\mathfrak{g})\)-invariant identification \(\mathfrak{g}\simeq\mathfrak{g}^{*}\), we can regard \(\mathbb{O}\) as a nilpotent \(G\)-orbit in \(\mathfrak{g}\). Choose an element \(e\in\mathbb{O}\) and an \(\mathfrak{sl}(2)\)-triple \((e,f,h)\). The operator \(\operatorname{ad}(h)\) defines a \(\mathbb{Z}\)-grading on \(\mathfrak{g}\) \[\mathfrak{g}=\bigoplus_{i\in\mathbb{Z}}\mathfrak{g}_{i},\qquad\mathfrak{g}_{i }:=\{\xi\in\mathfrak{g}\mid\operatorname{ad}(h)(\xi)=i\xi\}.\] We say that \(\mathbb{O}\) is _even_ if \(\mathfrak{g}_{i}=0\) for every odd integer \(i\). In any case, we can define a parabolic subalgebra \[\mathfrak{p}_{\mathbb{O}}=\mathfrak{l}_{\mathbb{O}}\oplus\mathfrak{n}_{ \mathbb{O}},\qquad\mathfrak{l}_{\mathbb{O}}:=\mathfrak{g}_{0},\qquad\mathfrak{ n}_{\mathbb{O}}:=\bigoplus_{i\geqslant 1}\mathfrak{g}_{i}. \tag{2.3.1}\] We call \(\mathfrak{p}_{\mathbb{O}}\) (resp. \(\mathfrak{l}_{\mathbb{O}}\)) the _Jacobson-Morozov_ parabolic (resp. Levi) associated to \(\mathbb{O}\). Both \(\mathfrak{p}_{\mathbb{O}}\) and \(\mathfrak{l}_{\mathbb{O}}\) are well-defined up to conjugation by \(G\). The following result is well-known. The proof is contained in [11], see also [13, Thm 3.3.1]. **Proposition 2.8**.: _Suppose \(\mathbb{O}\) is an even nilpotent \(G\)-orbit. Then_ \[\mathbb{O}=\operatorname{Bind}_{L_{\mathbb{O}}}^{G}\{0\}.\] ### Birational induction and equivariant fundamental groups Fix the notation of Section 2.3, e.g. 
\(L\), \(P\), \(\widetilde{\mathbb{O}}_{L}\), \(\widetilde{\mathbb{O}}\), \(\widetilde{\mu}:G\times^{P}(\operatorname{Spec}(\mathbb{C}[\widetilde{ \mathbb{O}}_{L}])\times\mathfrak{p}^{\perp})\to\overline{\mathbb{O}}\) and so on. Let \(U\subset P\) denote the unipotent radical. In this section, we will construct a surjective homomorphism \(\phi_{L}^{G}(\widetilde{\mathbb{O}}_{L}):\pi_{1}^{G}(\widetilde{\mathbb{O}}) \to\pi_{1}^{L}(\widetilde{\mathbb{O}}_{L})\) between the equivariant fundamental groups of \(\widetilde{\mathbb{O}}\) and \(\widetilde{\mathbb{O}}_{L}\). Let \(\widetilde{Z}^{0}:=G\times^{P}(\widetilde{\mathbb{O}}_{L}\times\mathfrak{p}^{ \perp})\), a fiber bundle over the partial flag variety \(G/P\). The inclusion \(i:\widetilde{\mathbb{O}}=\widetilde{\mu}^{-1}(\mathbb{O})\subset\widetilde{Z}^ {0}\) induces a group homomorphism \[i_{*}:\pi_{1}(\widetilde{\mathbb{O}})\to\pi_{1}(\widetilde{Z}^{0}). \tag{2.4.1}\] Since \(i:\widetilde{\mathbb{O}}\hookrightarrow\widetilde{Z}^{0}\) is an open embedding of smooth complex manifolds, the complement \(\widetilde{Z}^{0}-i(\widetilde{\mathbb{O}})\) is of real codimension \(\geq 2\). Hence, the homomorphism (2.4.1) is surjective. Note that \(\widetilde{Z}^{0}\) is a vector bundle over the homogeneous space \(G\times^{P}\widetilde{\mathbb{O}}_{L}\) with fiber \(\mathfrak{p}^{\perp}\). So there is a natural isomorphism \[\pi_{1}(\widetilde{Z}^{0})\simeq\pi_{1}(G\times^{P}\widetilde{\mathbb{O}}_{L}).\] If we fix a base point \(x\in\widetilde{\mathbb{O}}\), we get a fibration \(G\to\widetilde{\mathbb{O}}\) with fiber \(G_{x}\). This fibration gives rise to an exact sequence of homotopy groups \[\pi_{1}(G)\to\pi_{1}(\widetilde{\mathbb{O}})\to\pi_{0}(G_{x})\to 1.\] The final (nontrivial) term is isomorphic to \(\pi_{1}^{G}(\widetilde{\mathbb{O}})\). Similarly, if we fix a base point \((1,y)\in G\times^{P}\widetilde{\mathbb{O}}_{L}\), we get a fibration \(G\to G\times^{P}\widetilde{\mathbb{O}}_{L}\) with fiber \(P_{y}\). This fibration gives rise to an exact sequence of homotopy groups \[\pi_{1}(G)\to\pi_{1}(G\times^{P}\widetilde{\mathbb{O}}_{L})\to\pi_{0}(P_{y}) \to 1.\] Since \(U\subset P_{y}\), we have \(P_{y}=L_{y}\ltimes U\). It follows that \(\pi_{0}(P_{y})\simeq\pi_{0}(L_{y})\simeq\pi_{1}^{L}(\widetilde{\mathbb{O}}_{L})\). Now choose \(x\in\widetilde{\mathbb{O}}\) and \(y\in\widetilde{\mathbb{O}}_{L}\) such that \(x=(1,\bar{x})\), where \(\bar{x}\in\widetilde{\mathbb{O}}_{L}\times\mathfrak{p}^{\perp}\), and \(\bar{x}\) is mapped to \(y\) under the projection \(\widetilde{\mathbb{O}}_{L}\times\mathfrak{p}^{\perp}\to\widetilde{\mathbb{O}} _{L}\). Then we have group isomorphisms \[\pi_{0}(L_{\bar{x}})\simeq\pi_{0}(P_{\bar{x}})\simeq\pi_{0}(G_{x})=\pi_{1}^{G} (\widetilde{\mathbb{O}})\] induced by the inclusions \(L\subset P\subset G\). There is a commutative diagram of groups \[\begin{array}{ccccc}\pi_{1}(G)&\longrightarrow&\pi_{1}(\widetilde{\mathbb{O}})&\longrightarrow&\pi_{1}^{G}(\widetilde{\mathbb{O}})\to 1\\ \|&&\downarrow{\scriptstyle i_{*}}&&\downarrow{\scriptstyle\phi}\\ \pi_{1}(G)&\longrightarrow&\pi_{1}(\widetilde{Z}^{0})&\longrightarrow&\pi_{1}^{L}(\widetilde{\mathbb{O}}_{L})\to 1\end{array}\] where the fundamental groups \(\pi_{1}(G)\), \(\pi_{1}(\widetilde{\mathbb{O}})\), \(\pi_{1}(\widetilde{Z}^{0})\), and \(\pi_{1}^{L}(\widetilde{\mathbb{O}}_{L})\) are defined with respect to the base points \(1\in G\), \(x\in\widetilde{\mathbb{O}}\), \(x\in\widetilde{Z}^{0}\), and \(y\in\widetilde{\mathbb{O}}_{L}\), respectively. We define \[\phi_{L}^{G}(\widetilde{\mathbb{O}}_{L}):\pi_{1}^{G}(\widetilde{\mathbb{O}}) \to\pi_{1}^{L}(\widetilde{\mathbb{O}}_{L})\] to be the unique homomorphism which makes the above diagram commute. Sometimes we simply write \(\phi\) for \(\phi_{L}^{G}(\widetilde{\mathbb{O}}_{L})\) when there is no ambiguity. 
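As a sanity check (a standard computation, with notation as above): take \(G=\mathrm{SL}_{2}(\mathbb{C})\), \(L=T\) the diagonal torus, \(\widetilde{\mathbb{O}}_{L}=\{0\}\), and \(P=B\) a Borel subgroup. Then \(\widetilde{\mu}\) is the Springer map \(T^{*}(G/B)\to\mathcal{N}\) and \(\widetilde{\mathbb{O}}=\operatorname{Bind}_{T}^{G}\{0\}=\mathbb{O}_{reg}\), consistent with Proposition 2.8, since \(\mathbb{O}_{reg}\) is even with \(\mathfrak{l}_{\mathbb{O}}=\mathfrak{t}\). For \(e=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}\) one computes \(G_{e}=\left\{\pm\begin{pmatrix}1&t\\ 0&1\end{pmatrix}\right\}\), so \[\phi_{T}^{G}(\{0\}):\pi_{1}^{G}(\mathbb{O}_{reg})\simeq\pi_{0}(G_{e})\simeq \mathbb{Z}/2\mathbb{Z}\twoheadrightarrow\pi_{1}^{T}(\{0\})=\pi_{0}(T)=1.\] In particular, \(\phi\) is surjective, as asserted, but far from injective in general.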
The following lemma about properties of \(\phi\) is standard and the proof is left to the reader. **Lemma 2.9**.: _With the notations above, the following are true:_ 1. _Let_ \(\widetilde{\mathbb{O}}_{L},\widehat{\mathbb{O}}_{L}\in\mathcal{C}ov(L)\) _and let_ \(\widetilde{\mathbb{O}}=\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}\)_,_ \(\widehat{\mathbb{O}}=\operatorname{Bind}_{L}^{G}\widehat{\mathbb{O}}_{L}\)_. Let_ \(p_{L}:\widehat{\mathbb{O}}_{L}\to\widetilde{\mathbb{O}}_{L}\) _be an_ \(L\)_-equivariant covering map, inducing a_ \(G\)_-equivariant covering map_ \(p=\operatorname{Bind}_{L}^{G}(p_{L}):\widehat{\mathbb{O}}\to\widetilde{\mathbb{O}}\) _as in Proposition_ 2.6_. Then we have the following commutative diagram_ \[\begin{array}{ccc}\pi_{1}^{G}(\widehat{\mathbb{O}})&\xrightarrow{\phi_{L}^{G}(\widehat{\mathbb{O}}_{L})}&\pi_{1}^{L}(\widehat{\mathbb{O}}_{L})\\ \downarrow&&\downarrow\\ \pi_{1}^{G}(\widetilde{\mathbb{O}})&\xrightarrow{\phi_{L}^{G}(\widetilde{\mathbb{O}}_{L})}&\pi_{1}^{L}(\widetilde{\mathbb{O}}_{L})\end{array}\] _where the vertical maps are the inclusions induced by_ \(p\) _and_ \(p_{L}\)_. Moreover, we have_ \(\phi^{-1}(\pi_{1}^{L}(\widehat{\mathbb{O}}_{L}))=\pi_{1}^{G}(\widehat{\mathbb{O}})\)_._ 2. _Under the group isomorphisms_ \(\pi_{0}(L_{\bar{x}})\simeq\pi_{1}^{G}(\widetilde{\mathbb{O}})\) _and_ \(\pi_{0}(L_{y})\simeq\pi_{1}^{L}(\widetilde{\mathbb{O}}_{L})\)_, the map_ \(\phi:\pi_{1}^{G}(\widetilde{\mathbb{O}})\to\pi_{1}^{L}(\widetilde{\mathbb{O}}_{L})\) _is identified with the map_ \(\pi_{0}(L_{\bar{x}})\to\pi_{0}(L_{y})\) _induced by the natural inclusion_ \(L_{\bar{x}}\subset L_{y}\)_._ 3. _Suppose_ \(M\subset G\) _is a Levi subgroup containing_ \(L\)_. Let_ \(\widetilde{\mathbb{O}}_{L}\in\mathsf{Cov}(L)\) _and_ \(\widetilde{\mathbb{O}}_{M}=\operatorname{Bind}_{L}^{M}\widetilde{\mathbb{O}}_{L}\) _and_ \(\widetilde{\mathbb{O}}=\operatorname{Bind}_{M}^{G}\widetilde{\mathbb{O}}_{M}= \operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}\)_. Then the map_ \(\phi_{L}^{G}(\widetilde{\mathbb{O}}_{L}):\pi_{1}^{G}(\widetilde{\mathbb{O}})\to\pi_{1}^{L}( \widetilde{\mathbb{O}}_{L})\) _is equal to the composition of_ \(\phi_{L}^{M}(\widetilde{\mathbb{O}}_{L}):\pi_{1}^{M}(\widetilde{\mathbb{O}}_{M}) \to\pi_{1}^{L}(\widetilde{\mathbb{O}}_{L})\) _and_ \(\phi_{M}^{G}(\widetilde{\mathbb{O}}_{M}):\pi_{1}^{G}(\widetilde{\mathbb{O}})\to\pi _{1}^{M}(\widetilde{\mathbb{O}}_{M})\)_._ ### Primitive ideals Let \(U(\mathfrak{g})\) be the universal enveloping algebra of \(\mathfrak{g}\). Recall that \(U(\mathfrak{g})\) has a natural filtration and there is a \(G\)-equivariant isomorphism \(\operatorname{gr}U(\mathfrak{g})\simeq S(\mathfrak{g})\). Now let \(I\subset U(\mathfrak{g})\) be a two-sided ideal. Then \(\operatorname{gr}(I)\) corresponds under the isomorphism \(\operatorname{gr}U(\mathfrak{g})\simeq S(\mathfrak{g})\) to a \(G\)-invariant ideal in \(S(\mathfrak{g})\). The _associated variety_ of \(I\) is defined to be the vanishing locus \(V(I)\subset\mathfrak{g}^{*}\) of this ideal. A two-sided ideal \(I\subset U(\mathfrak{g})\) is said to be _primitive_ if it is the annihilator of a simple left \(U(\mathfrak{g})\)-module. If \(I\) is primitive, then \(V(I)\) is irreducible (see [11]) and hence the closure of a (unique) nilpotent orbit. By Quillen's lemma, the intersection of \(I\) with the center \(\mathfrak{Z}(\mathfrak{g})\) of \(U(\mathfrak{g})\) is the kernel of an algebra homomorphism \(\mathfrak{Z}(\mathfrak{g})\to\mathbb{C}\), called the _infinitesimal character_ of \(I\). Such homomorphisms are identified via the Harish-Chandra isomorphism with \(W\)-orbits on \(\mathfrak{h}^{*}\). 
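For orientation (an elementary and standard example): if \(\mathfrak{g}=\mathfrak{sl}_{2}(\mathbb{C})\), then \(\mathfrak{Z}(\mathfrak{g})=\mathbb{C}[\Omega]\) for the Casimir element \(\Omega\), so an infinitesimal character is simply the scalar by which \(\Omega\) acts, and \[\operatorname{Hom}_{alg}(\mathfrak{Z}(\mathfrak{g}),\mathbb{C})\simeq \mathfrak{h}^{*}/W\simeq\mathbb{C}/(\lambda\sim-\lambda).\] For generic \(\lambda\), the Verma module of highest weight \(\lambda\) is simple and its annihilator \((\Omega-c_{\lambda})\) is a primitive ideal \(I\) with \(V(I)=\mathcal{N}=\overline{\mathbb{O}}_{reg}\); at the other extreme, the annihilator of the trivial module is a primitive ideal with associated variety \(\{0\}\).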
For each \(\gamma\in\mathfrak{h}^{*}/W\), there is a _unique_ maximal (primitive) ideal in \(U(\mathfrak{g})\) with infinitesimal character \(\gamma\), which we will denote by \(J(\gamma)\subset U(\mathfrak{g})\). ### BVLS duality Let \(G^{\vee}\) be the Langlands dual group of \(G\) and let \(\mathfrak{g}^{\vee}\) be its Lie algebra. If we fix a Cartan subalgebra \(\mathfrak{h}\subset\mathfrak{g}\), we get a Cartan subalgebra \(\mathfrak{h}^{\vee}\subset\mathfrak{g}^{\vee}\) and a canonical identification \(\mathfrak{h}^{\vee}\simeq\mathfrak{h}^{*}\). To each nilpotent \(G^{\vee}\)-orbit \(\mathbb{O}^{\vee}\in\mathsf{Orb}(G^{\vee})\) we can associate a maximal ideal in \(U(\mathfrak{g})\) in the following manner. First, using a \(G^{\vee}\)-invariant isomorphism \((\mathfrak{g}^{\vee})^{*}\simeq\mathfrak{g}^{\vee}\), we can identify \(\mathbb{O}^{\vee}\) with a nilpotent \(G^{\vee}\)-orbit in \(\mathfrak{g}^{\vee}\) (still denoted \(\mathbb{O}^{\vee}\)). Next, we choose an \(\mathfrak{sl}(2)\)-triple \((e^{\vee},f^{\vee},h^{\vee})\) with \(e^{\vee}\in\mathbb{O}^{\vee}\) and \(h^{\vee}\in\mathfrak{h}^{\vee}\simeq\mathfrak{h}^{*}\). The element \(\frac{1}{2}h^{\vee}\) is well-defined up to the \(W\)-action on \(\mathfrak{h}^{*}\), and hence determines an infinitesimal character for \(U(\mathfrak{g})\), which we will denote by \(\gamma_{\mathbb{O}^{\vee}}\). **Definition 2.10**.: _Let \(\mathbb{O}^{\vee}\in\mathsf{Orb}(G^{\vee})\). Then the special unipotent ideal attached to \(\mathbb{O}^{\vee}\) is the unique maximal ideal \(J(\gamma_{\mathbb{O}^{\vee}})\subset U(\mathfrak{g})\) with infinitesimal character \(\gamma_{\mathbb{O}^{\vee}}\)._ For each nilpotent orbit \(\mathbb{O}^{\vee}\in\mathsf{Orb}(G^{\vee})\), let \(d(\mathbb{O}^{\vee})\) denote the (unique) open \(G\)-orbit in \(V(J(\gamma_{\mathbb{O}^{\vee}}))\). This defines a map \[d:\mathsf{Orb}(G^{\vee})\to\mathsf{Orb}(G)\] called _Barbasch-Vogan-Lusztig-Spaltenstein (BVLS) duality_. We will sometimes write \(d^{G}\) instead of \(d\), when we wish to emphasize the dependence on \(G\). **Proposition 2.11** (Proposition A2, [10]).: _The map \(d\) enjoys the following properties:_ 1. \(d\) _is order-reversing (with respect to the closure orderings on_ \(\mathsf{Orb}(G)\) _and_ \(\mathsf{Orb}(G^{\vee})\)_)._ 2. \(d^{3}=d\)_._ 3. _If_ \(L\subset G\) _is a Levi subgroup, then_ \[d^{G}\circ\operatorname{Sat}_{L^{\vee}}^{G^{\vee}}=\operatorname{Ind}_{L}^{G} \circ d^{L}\] An orbit \(\mathbb{O}\in\mathsf{Orb}(G)\) is said to be _special_ if it lies in the image of \(d^{G}\). We write \(\mathsf{Orb}^{*}(G)\) for the set of special nilpotent orbits. ### Truncated induction Let \(M^{\vee}\subset G^{\vee}\) be a pseudo-Levi subgroup and let \(M\) be the Langlands dual group of \(M^{\vee}\). In [13, Chapter 13.3], Lusztig defines a map called _truncated induction_ \[j_{M}^{G}:\mathsf{Orb}^{*}(M)\to\mathsf{Orb}(G)\] This map generalizes Lusztig-Spaltenstein induction in the following sense: if \(M\) is a Levi subgroup of \(G\), then \(j_{M}^{G}\) coincides with the restriction of \(\operatorname{Ind}_{M}^{G}\) to \(\mathsf{Orb}^{*}(M)\). The definition of \(j_{M}^{G}\) is not particularly relevant for the purposes of this paper, so we will be brief in recalling it. Choose a Cartan subalgebra \(\mathfrak{h}\subset\mathfrak{m}\) and let \(W\) (resp. \(W_{M}\)) denote the Weyl group of \(G\) (resp. \(M\)). 
There is a surjective map \[\{(\mathbb{O},\psi)\mid\mathbb{O}\in\mathsf{Orb}(G),\ \psi\in\widehat{A( \mathbb{O})}\}\to\widehat{W}\sqcup\{0\},\qquad(\mathbb{O},\psi)\mapsto E_{( \mathbb{O},\psi)} \tag{2.7.1}\] called the _Springer correspondence_ (see [10]). We say that an irreducible representation \(\psi\) of \(A(\mathbb{O})\) is of _Springer type_ if \(E_{(\mathbb{O},\psi)}\neq 0\) (the trivial representation of \(A(\mathbb{O})\) is always of Springer type). For each irreducible representation \(E\) of \(W\), Lusztig defines a nonnegative integer \(b_{E}\) called the _b-value_ or _fake degree_ of \(E\). If \(\mathbb{O}\in\mathsf{Orb}^{*}(M)\), it is shown in [13, Chapter 13.3] that \(\operatorname{Ind}_{W_{M}}^{W}E_{(\mathbb{O},1)}\) contains a unique irreducible subrepresentation \(E^{\prime}\) such that \(b_{E^{\prime}}=b_{E_{(\mathbb{O},1)}}\) and \(E^{\prime}=E_{(\mathbb{O}^{\prime},1)}\) for some \(\mathbb{O}^{\prime}\in\mathsf{Orb}(G)\). Then \(j_{M}^{G}\) is defined by \(j_{M}^{G}\mathbb{O}=\mathbb{O}^{\prime}\). ### Sommers duality Consider the map \[\underline{d}_{S}:\mathsf{MS}(G^{\vee})\to\mathsf{Orb}(G),\qquad\underline{d }_{S}(M^{\vee},tZ^{\circ},\mathbb{O}_{M^{\vee}})=j_{M}^{G}d(\mathbb{O}_{M^{\vee}})\] _Sommers duality_ is defined to be the composition \[d_{S}:\mathsf{Conj}(G^{\vee})\stackrel{{\pi^{-1}}}{{\to}} \mathsf{MS}^{large}(G^{\vee})\stackrel{{\underline{d}_{S}}}{{\to}} \mathsf{Orb}(G)\] where \(\pi:\mathsf{MS}^{large}(G^{\vee})\to\mathsf{Conj}(G^{\vee})\) is the bijection of Lemma 2.2(i). We will sometimes write \(d_{S}^{G}\) instead of \(d_{S}\) when we wish to emphasize the dependence on \(G\). **Proposition 2.12** (Section 6, [11]).: _The map \(d_{S}:\mathsf{Conj}(G^{\vee})\to\mathsf{Orb}(G)\) has the following properties:_ 1. \(d_{S}\) _is surjective._ 2. _If_ \(L\subset G\) _is a Levi subgroup, then_ \[d_{S}^{G}\circ\operatorname{Sat}_{L^{\vee}}^{G^{\vee}}=\operatorname{Ind}_{L}^ {G}\circ d_{S}^{L}\] 3. \(d_{S}(\mathbb{O}^{\vee},1)=d(\mathbb{O}^{\vee})\)_, for every_ \(\mathbb{O}^{\vee}\in\mathsf{Orb}(G^{\vee})\)_._ ### Lusztig's canonical quotient For each \(\mathbb{O}\in\mathsf{Orb}(G)\), there is a canonically defined quotient group \(\bar{A}(\mathbb{O})\) of \(A(\mathbb{O})\), called _Lusztig's canonical quotient_. In [10], this quotient is defined in the case when \(\mathbb{O}\) is _special_, but in fact Lusztig's definition is valid for arbitrary \(\mathbb{O}\). An alternative description of \(\bar{A}(\mathbb{O})\) is provided in [11, Section 5], which we briefly recall here. For each conjugacy datum \((\mathbb{O},C)\in\mathsf{Conj}(G)\), Sommers defines a nonnegative integer \(\tilde{b}_{(\mathbb{O},C)}\) called the _\(\tilde{b}\)-value_ of \((\mathbb{O},C)\) as follows. If \((M,tZ^{\circ},\mathbb{O}_{M})\in\mathsf{MS}(G)\) is such that \(\pi(M,tZ^{\circ},\mathbb{O}_{M})=(\mathbb{O},C)\), then \[\tilde{b}_{(\mathbb{O},C)}:=\frac{1}{2}(\dim M-\dim(d^{M}(\mathbb{O}_{M}))- \operatorname{rank}G).\] Equivalently, \(\tilde{b}_{(\mathbb{O},C)}\) is the dimension of the Springer fiber attached to any element in \(d^{M}(\mathbb{O}_{M})\). Sommers shows that \(\tilde{b}_{(\mathbb{O},C)}\) only depends on \((\mathbb{O},C)\) (i.e. is independent of the lift \((M,tZ^{\circ},\mathbb{O}_{M})\)) [11, Proposition 1]. In fact, \(\tilde{b}_{(\mathbb{O},C)}\) coincides with the dimension of the Springer fibers of the orbit \(d_{S}(\mathbb{O},C)\) in \(\mathfrak{g}^{\vee}\) (see [11, Section 6]). 
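As a quick sanity check of this formula (a routine computation, not needed later): take \(M=G\), \(t=1\), and \(\mathbb{O}_{M}=\mathbb{O}=\mathbb{O}_{reg}\), so that \((\mathbb{O},C)=(\mathbb{O}_{reg},1)\). Since \(d^{G}(\mathbb{O}_{reg})=\{0\}\), \[\tilde{b}_{(\mathbb{O}_{reg},1)}=\frac{1}{2}(\dim G-0-\operatorname{rank}G)= \dim G/B,\] which is indeed the dimension of the Springer fiber over \(0\), namely the full flag variety.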
The \(\tilde{b}\)-values satisfy \(\tilde{b}_{(\mathbb{O},C)}\geq\tilde{b}_{(\mathbb{O},1)}\) for any conjugacy class \(C\) of \(A(\mathbb{O})\) ([11, Proposition 3]). Let \(N^{\prime}\) be the union of all conjugacy classes \(C\) in \(A(\mathbb{O})\) such that \(\tilde{b}_{(\mathbb{O},C)}=\tilde{b}_{(\mathbb{O},1)}\). A case-by-case calculation shows that \(N^{\prime}\) is a subgroup of \(A(\mathbb{O})\) and coincides with a subgroup \(N\) which Lusztig defines using Springer theory and generic degrees of irreducible Weyl group representations, so that \(\bar{A}(\mathbb{O})=A(\mathbb{O})/N=A(\mathbb{O})/N^{\prime}\) ([11, Theorem 6]). ### Lusztig-Achar data A _Lusztig-Achar datum_ for \(G\) is a pair \((\mathbb{O},\bar{C})\) consisting of a nilpotent orbit \(\mathbb{O}\in\mathsf{Orb}(G)\) and a conjugacy class \(\bar{C}\) in \(\bar{A}(\mathbb{O})\). Let \[\mathsf{LA}(G):=\{\text{Lusztig-Achar data }(\mathbb{O},\bar{C})\text{ for }G\}\] Since the groups \(\bar{A}(\mathbb{O})\) are independent of isogeny, so is the set \(\mathsf{LA}(G)\). Of course, there is a natural projection \(\mathsf{Conj}(G)\to\mathsf{LA}(G)\). The following is a consequence of [1, Theorems 5.1, 6.1]. **Lemma 2.13**.: _Let \(\mathbb{O}\in\mathsf{Orb}(G)\) and let \(C_{1}\) and \(C_{2}\) be conjugacy classes in \(A(\mathbb{O})\). Then the following are equivalent_ 1. \(d_{S}(\mathbb{O},C_{1})=d_{S}(\mathbb{O},C_{2})\)_._ 2. \(C_{1}\) _and_ \(C_{2}\) _have the same image in_ \(\bar{A}(\mathbb{O})\)_._ **Corollary 2.14**.: _The map \(d_{S}:\mathsf{Conj}(G^{\vee})\to\mathsf{Orb}(G)\) factors through the projection \(\mathsf{Conj}(G^{\vee})\to\mathsf{LA}(G^{\vee})\)._ We wish to define a notion of 'saturation' for Lusztig-Achar data analogous to (2.1.3). So let \(L\subset G\) be a Levi subgroup and let \(\mathbb{O}_{L}\in\mathsf{Orb}(L)\). **Proposition 2.15**.: _The homomorphism \(\iota:A(\mathbb{O}_{L})\to A(\operatorname{Sat}_{L}^{G}\mathbb{O}_{L})\) (cf. (2.1.2)) descends to a homomorphism \(\bar{\iota}:\bar{A}(\mathbb{O}_{L})\to\bar{A}(\operatorname{Sat}_{L}^{G} \mathbb{O}_{L})\)._ Proof.: Let \(\mathbb{O}=\operatorname{Sat}_{L}^{G}\mathbb{O}_{L}\). It follows from the definition of \(\tilde{b}\)-value and Proposition 2.12(ii) that \[\tilde{b}(\operatorname{Sat}_{L}^{G}(\mathbb{O}_{L},C_{L}))=\tilde{b}(\mathbb{ O}_{L},C_{L})\] for any conjugacy class \(C_{L}\) in \(A(\mathbb{O}_{L})\). Therefore \(\iota(N^{\prime}_{\mathbb{O}_{L}})\subset N^{\prime}_{\mathbb{O}}\) and the proposition holds. Using Proposition 2.15, we can define a map \[\operatorname{Sat}_{L}^{G}:\mathsf{LA}(L)\to\mathsf{LA}(G),\qquad\operatorname{Sat }_{L}^{G}(\mathbb{O}_{L},\bar{C}_{L})=(\operatorname{Sat}_{L}^{G}\mathbb{O}_{L},\bar{\iota}(\bar{C}_{L}))\] A Lusztig-Achar datum \((\mathbb{O},\bar{C})\) is _distinguished_ if it cannot be obtained by saturation from a proper Levi subgroup. Let \[\mathsf{LA}_{0}(G):=\{(L,(\mathbb{O}_{L},\bar{C}_{L}))\mid(\mathbb{O}_{L},\bar {C}_{L})\text{ is distinguished}\}/G\] where, as usual, \(L\) runs over all Levi subgroups of \(G\). **Lemma 2.16**.: _Suppose \(Z(G)\) is connected. Then for each \(\mathbb{O}\in\mathsf{Orb}(G)\), there is a subgroup \(K(\mathbb{O})\subset A(\mathbb{O})\) such that_ 1. _The quotient map_ \(A(\mathbb{O})\to\bar{A}(\mathbb{O})\) _restricts to an isomorphism_ \(K(\mathbb{O})\xrightarrow{\sim}\bar{A}(\mathbb{O})\)_._ 2. _The natural map from conjugacy classes in_ \(K(\mathbb{O})\) _to conjugacy classes in_ \(A(\mathbb{O})\) _is injective._ 3. 
_If_ \(L\subset G\) _is a Levi subgroup and_ \(\mathbb{O}_{L}\in\mathsf{Orb}(L)\) _such that_ \(\mathbb{O}=\operatorname{Sat}_{L}^{G}\mathbb{O}_{L}\)_, then the homomorphism_ \(\iota\) _defined in (_2.1.2_) maps_ \(K(\mathbb{O}_{L})\) _to_ \(K(\mathbb{O})\)_._ Proof.: For \(G\) a classical group, subgroups with these properties were exhibited in [1, Section 5] (see also Section 5.12 for the case of distinguished Lusztig-Achar data). For \(G\) an adjoint exceptional group, there are three possibilities: 1. \(A(\mathbb{O})\simeq\bar{A}(\mathbb{O})\). 2. \(A(\mathbb{O})\simeq S_{2}\), \(\bar{A}(\mathbb{O})\simeq 1\). 3. \(G\) is of type \(E_{8}\), \(\mathbb{O}\) is the distinguished orbit \(E_{8}(b_{6})\), \(A(\mathbb{O})\simeq S_{3}\), and \(\bar{A}(\mathbb{O})\simeq S_{2}\). In the first case, we take \(K(\mathbb{O})=A(\mathbb{O})\). In the second case, we take \(K(\mathbb{O})=\{1\}\). In the third case, we take \(K(\mathbb{O})\) to be any order \(2\) subgroup of \(A(\mathbb{O})\). It is an easy exercise to check that conditions (i), (ii), and (iii) are satisfied for these choices of \(K(\mathbb{O})\). **Proposition 2.17**.: _The map_ \[\operatorname{Sat}:\mathsf{LA}_{0}(G)\to\mathsf{LA}(G),\qquad\operatorname{Sat }(L,(\mathbb{O}_{L},\bar{C}_{L}))=\operatorname{Sat}_{L}^{G}(\mathbb{O}_{L}, \bar{C}_{L})\] _is a bijection._ Proof.: Since \(\mathsf{LA}(G)\) is independent of isogeny, we can assume in the proof that \(Z(G)\) is connected. Let \(\bar{C}\) be a conjugacy class in \(\bar{A}(\mathbb{O})\) and let \(C^{\prime}\) be the preimage of \(\bar{C}\) under the isomorphism \(K(\mathbb{O})\xrightarrow{\sim}\bar{A}(\mathbb{O})\) of Lemma 2.16(i). By Lemma 2.16(ii), there is a unique conjugacy class \(C\) in \(A(\mathbb{O})\) such that \(C\cap K(\mathbb{O})=C^{\prime}\). Taking \((\mathbb{O},\bar{C})\) to \((\mathbb{O},C)\) defines an injective map \(\eta:\mathsf{LA}(G)\to\mathsf{Conj}(G)\), right-inverse to the projection \(\mathsf{Conj}(G)\to\mathsf{LA}(G)\). Now suppose \(L\subset G\) is a Levi subgroup. Then \(Z(L)\) is connected (see e.g. [12, Lemma 9]) and the following diagrams commute \[\begin{array}{ccc}\mathsf{Conj}(L)&\longrightarrow&\mathsf{LA}(L)\\ \downarrow{\scriptstyle\operatorname{Sat}_{L}^{G}}&&\downarrow{\scriptstyle \operatorname{Sat}_{L}^{G}}\\ \mathsf{Conj}(G)&\longrightarrow&\mathsf{LA}(G)\end{array}\qquad\begin{array}{ccc} \mathsf{LA}(L)&\stackrel{{\eta}}{{\longrightarrow}}&\mathsf{Conj}(L)\\ \downarrow{\scriptstyle\operatorname{Sat}_{L}^{G}}&&\downarrow{\scriptstyle \operatorname{Sat}_{L}^{G}}\\ \mathsf{LA}(G)&\stackrel{{\eta}}{{\longrightarrow}}&\mathsf{Conj}(G)\end{array} \tag{2.10.1}\] (the first diagram commutes by definition, and the second by Lemma 2.16(iii)). Suppose \((L_{1},(\mathbb{O}_{L_{1}},\bar{C}_{L_{1}})),(L_{2},(\mathbb{O}_{L_{2}},\bar{C }_{L_{2}}))\in\mathsf{LA}_{0}(G)\) such that \(\operatorname{Sat}(L_{1},(\mathbb{O}_{L_{1}},\bar{C}_{L_{1}}))=\operatorname{Sat}(L_{2 },(\mathbb{O}_{L_{2}},\bar{C}_{L_{2}}))=(\mathbb{O},\bar{C})\). Let \((\mathbb{O}_{L_{1}},C_{L_{1}})=\eta(\mathbb{O}_{L_{1}},\bar{C}_{L_{1}})\), \((\mathbb{O}_{L_{2}},C_{L_{2}})=\eta(\mathbb{O}_{L_{2}},\bar{C}_{L_{2}})\), and \((\mathbb{O},C)=\eta(\mathbb{O},\bar{C})\). By the first commutative diagram, the conjugacy data \((\mathbb{O}_{L_{1}},C_{L_{1}})\) and \((\mathbb{O}_{L_{2}},C_{L_{2}})\) are distinguished. By the second commutative diagram, \(\operatorname{Sat}(L_{1},(\mathbb{O}_{L_{1}},C_{L_{1}}))=\operatorname{Sat}(L_{2 },(\mathbb{O}_{L_{2}},C_{L_{2}}))=(\mathbb{O},C)\). So by Proposition 2.3, \((L_{1},(\mathbb{O}_{L_{1}},C_{L_{1}}))\) and \((L_{2},(\mathbb{O}_{L_{2}},C_{L_{2}}))\) are conjugate. Hence, \((L_{1},(\mathbb{O}_{L_{1}},\bar{C}_{L_{1}}))\) and \((L_{2},(\mathbb{O}_{L_{2}},\bar{C}_{L_{2}}))\) are conjugate. This completes the proof. We conclude by describing a useful parameterization of \(\mathsf{LA}(G)\). A _Sommers datum_ for \(G\) is a pair \((M,\mathbb{O}_{M})\) consisting of a pseudo-Levi subgroup \(M\subset G\) and a distinguished nilpotent orbit \(\mathbb{O}_{M}\in\mathsf{Orb}(M)\). 
Note that \(G\) acts by conjugation on the set of Sommers data. Let \[\mathsf{Som}(G):=\{\text{Sommers data }(M,\mathbb{O}_{M})\text{ for }G\}/G\] If \(L\subset G\) is a Levi subgroup, there is a tautological map \[\operatorname{Sat}^{G}_{L}:\mathsf{Som}(L)\to\mathsf{Som}(G),\qquad \operatorname{Sat}^{G}_{L}(M,\mathbb{O}_{M})=(M,\mathbb{O}_{M})\] which is clearly compatible with \(\operatorname{Sat}^{G}_{L}:\mathsf{MS}(L)\to\mathsf{MS}(G)\) under the forgetful maps \(\mathsf{MS}(G)\to\mathsf{Som}(G)\) and \(\mathsf{MS}(L)\to\mathsf{Som}(L)\). We define an equivalence relation \(\sim\) on \(\mathsf{Som}(G)\) as follows: \((M_{1},\mathbb{O}_{M_{1}})\sim(M_{2},\mathbb{O}_{M_{2}})\) if and only if \(\operatorname{Sat}^{G}_{M_{1}}\mathbb{O}_{M_{1}}=\operatorname{Sat}^{G}_{M_{2 }}\mathbb{O}_{M_{2}}\) and \(j^{G^{\vee}}_{M^{\vee}_{1}}d(\mathbb{O}_{M^{\vee}_{1}})=j^{G^{\vee}}_{M^{ \vee}_{2}}d(\mathbb{O}_{M^{\vee}_{2}})\). **Lemma 2.18**.: _Let \(G_{ad}\) denote the adjoint form of \(G\). Then the following are true:_ 1. _The forgetful map_ \[\mathsf{MS}^{large}(G_{ad})\to\mathsf{Som}(G_{ad})=\mathsf{Som}(G)\] _is a bijection._ 2. _The fibers of the surjective map_ \[\mathsf{Som}(G)\xrightarrow{\sim}\mathsf{MS}^{large}(G_{ad})\xrightarrow{ \sim}\mathsf{Conj}(G_{ad})\twoheadrightarrow\mathsf{LA}(G_{ad})=\mathsf{LA}(G)\] _(here, the first map is the inverse of the forgetful map in (i), the second map is_ \(\pi^{G_{ad}}\)_, and the third map is the natural quotient map) are exactly the equivalence classes in_ \(\mathsf{Som}(G)\)_. In particular, there is a natural bijection_ \[\mathsf{Som}(G)/\sim\xrightarrow{\sim}\mathsf{LA}(G).\] 3. _Suppose_ \((\mathbb{O},\bar{C})\in\mathsf{LA}(G)\) _corresponds under the bijection in (ii) to an equivalence class_ \(\{(M_{1},\mathbb{O}_{M_{1}}),...,(M_{k},\mathbb{O}_{M_{k}})\}\subset\mathsf{ Som}(G)\)_. Then_ \((\mathbb{O},\bar{C})\) _is distinguished if and only if one (equivalently, all) of the pseudo-Levi subgroups_ \(M_{1},...,M_{k}\) _is of maximal semisimple rank. In this case, the elements in the equivalence class in_ \(\mathsf{Som}(G)\) _correspond bijectively via the bijection_ \(\mathsf{Som}(G)\xrightarrow{\sim}\mathsf{Conj}(G_{ad})\) _in (ii) to the elements in the preimage of_ \((\mathbb{O},\bar{C})\) _under the projection_ \(\mathsf{Conj}(G_{ad})\twoheadrightarrow\mathsf{LA}(G)\)_._ 4. _Suppose_ \((\mathbb{O},\bar{C})\in\mathsf{LA}(G)\) _is distinguished and_ \((M,tZ^{\circ},\mathbb{O}_{M})\in\mathsf{MS}(G)\) _maps to_ \((\mathbb{O},\bar{C})\) _under_ \(\mathsf{MS}(G)\stackrel{{\pi}}{{\to}}\mathsf{Conj}(G)\to \mathsf{LA}(G)\)_. Then_ \((M,\mathbb{O}_{M})\) _belongs to the equivalence class in_ \(\mathsf{Som}(G)\) _corresponding to_ \((\mathbb{O},\bar{C})\) _under the bijection in (ii)._ Proof.: (i) is [13, Proposition 35]. (ii) is immediate from Lemma 2.13. (iii) and (iv) follow from part (i) and (iii) of Lemma 2.2 and Lemma 2.1. Indeed, the first commutative diagram in (2.10.1) implies that any lift \((\mathbb{O},C)\) of a distinguished pair \((\mathbb{O},\bar{C})\) in \(\mathsf{Conj}(G_{ad})\) is also distinguished. Then Lemma 2.2(iii) implies that the preimage \(\pi^{-1}(\mathbb{O},C)\) of \((\mathbb{O},C)\) under the map \(\pi:\mathsf{MS}(G_{ad})\to\mathsf{Conj}(G_{ad})\) is included in \(\mathsf{MS}^{large}(G_{ad})\). Therefore by Lemma 2.2(i), \(\pi^{-1}(\mathbb{O},C)\) consists of only one element, which corresponds to a unique element \((M,\mathbb{O}_{M})\in\mathsf{Som}(G)\), where \(M\) is of maximal semisimple rank by Lemma 2.1. 
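Before turning to special data, it may help to record the simplest case (a standard observation, included only for orientation): for \(G=\mathrm{GL}_{n}(\mathbb{C})\), every \(A(\mathbb{O})\), and hence every \(\bar{A}(\mathbb{O})\), is trivial, and every pseudo-Levi subgroup is in fact a Levi subgroup. Consequently \[\mathsf{LA}(\mathrm{GL}_{n})\simeq\mathsf{Orb}(\mathrm{GL}_{n}),\qquad d_{S}( \mathbb{O}^{\vee},1)=d(\mathbb{O}^{\vee}),\] so in type \(A\) Sommers duality reduces to BVLS duality, which is given by transposition of partitions.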
### Special Lusztig-Achar data **Definition 2.19**.: _Let \((\mathbb{O},\bar{C})\in\mathsf{LA}(G)\) and let \(\mathbb{O}^{\vee}=d_{S}^{G^{\vee}}(\mathbb{O},\bar{C})\). Following Achar ([1]), we say that \((\mathbb{O},\bar{C})\) is special if there is a conjugacy class \(\bar{C}^{\prime}\subset\bar{A}(\mathbb{O}^{\vee})\) such that_ \[d_{S}^{G}(\mathbb{O}^{\vee},\bar{C}^{\prime})=\mathbb{O}.\] Let \[\mathsf{LA}^{*}(G):=\{\text{special Lusztig-Achar data }(\mathbb{O},\bar{C})\text{ for }G\}\] and \[\mathsf{LA}^{*}_{0}(G):=\{(L,(\mathbb{O}_{L},\bar{C}_{L}))\mid(\mathbb{O}_{L},\bar{C}_{L})\text{ is special and distinguished}\}/G\] where, as usual, \(L\) runs over all Levi subgroups of \(G\). We will need several basic facts about special Lusztig-Achar data. **Proposition 2.20**.: _The following are true:_ 1. _For any orbit_ \(\mathbb{O}\in\mathsf{Orb}(G)\)_, the Lusztig-Achar datum_ \((\mathbb{O},1)\) _is special._ 2. _The bijection of Proposition_ 2.17 _induces an injection_ \[\operatorname{Sat}^{-1}:\mathsf{LA}^{*}(G)\hookrightarrow\mathsf{LA}^{*}_{0}(G)\] Proof.: (i) is [1, Proposition 2.7]. We proceed to proving (ii). In classical types all distinguished Lusztig-Achar data are special, see [1, Section 5.2]. So (ii) is immediate. In exceptional types, a list of (special) Lusztig-Achar data is given in [1, Section 6]. There are only two non-special distinguished Lusztig-Achar data in exceptional types: the (unique) nontrivial conjugacy class for the nilpotent orbit \(A_{4}+A_{1}\) in \(E_{7}\) and the (unique) nontrivial conjugacy class for the nilpotent orbit \(E_{6}(a_{1})+A_{1}\) in \(E_{8}\). Saturation to \(E_{8}\) takes the former to the (unique) nontrivial conjugacy class for the nilpotent orbit \(A_{4}+A_{1}\) in \(E_{8}\), which is non-special by [1, Section 6]. This completes the proof of (ii). ## 3. Symplectic singularities and unipotent ideals In this section, we will review some preliminary facts about conical symplectic singularities and their Poisson deformations and filtered quantizations. We will also establish several new facts about conical symplectic singularities and nilpotent covers. The main new result is Theorem 3.48. It states that birational induction takes the maximal cover in an equivalence class to the maximal cover in an equivalence class. This result is essential for our parameterization of unipotent representations (Theorem 4.14). ### Poisson deformations In this subsection, we will recall the definitions of (graded, formal) Poisson deformations of (graded) Poisson varieties. A _Poisson scheme_ is a scheme \(X\) equipped with a Poisson bracket on its structure sheaf \(\mathcal{O}_{X}\). If \(S\) is an affine scheme, a _Poisson \(S\)-scheme_ is a scheme \(X\) equipped with a morphism \(f:X\to S\) and an \(f^{-1}\mathcal{O}_{S}\)-linear Poisson bracket on \(\mathcal{O}_{X}\). These definitions have obvious formal counterparts (a _formal Poisson scheme_ is a formal scheme with a Poisson bracket on its structure sheaf, and so on). **Definition 3.1**.: _Let \(X\) be a Poisson variety and let \(S\) be a (possibly formal) affine scheme with a distinguished closed point \(0\). Then a (formal) Poisson deformation of \(X\) over \(S\) is a flat (formal) Poisson \(S\)-scheme \(\mathcal{X}_{S}\) equipped with a Poisson isomorphism \(\iota:\mathcal{X}_{S}\times_{S}\{0\}\xrightarrow{\sim}X\). 
If \(S\) is a formal scheme complete at \(0\in S\), we say that \(\mathcal{X}_{S}\) is a formal Poisson deformation of \(X\)._ _An isomorphism between Poisson deformations \((\mathcal{X}_{S},\iota)\) and \((\mathcal{X}^{\prime}_{S},\iota^{\prime})\) over \(S\) is an isomorphism of Poisson schemes \(\mathcal{X}_{S}\to\mathcal{X}^{\prime}_{S}\) over \(S\) such that the induced isomorphism \(\mathcal{X}_{S}\times_{S}\{0\}\xrightarrow{\sim}\mathcal{X}^{\prime}_{S}\times_ {S}\{0\}\) intertwines \(\iota\) and \(\iota^{\prime}\)._ Let \(X\) be a Poisson variety. Let \(\mathcal{A}rt_{\mathbb{C}}\) denote the category of local Artinian \(\mathbb{C}\)-algebras with residue field \(\mathbb{C}\). For \(R\in\mathcal{A}rt_{\mathbb{C}}\), define \(\mathrm{PD}_{X}(R)\) to be the set of isomorphism classes of Poisson deformations over \(\mathrm{Spec}(R)\). This defines a functor \(\mathrm{PD}_{X}:\mathcal{A}rt_{\mathbb{C}}\to\mathcal{S}et\). Let \(\mathbb{C}\{\epsilon\}:=\mathbb{C}[\epsilon]/(\epsilon^{2})\) denote the ring of dual numbers over \(\mathbb{C}\). Then \(\mathrm{PD}_{X}(\mathbb{C}\{\epsilon\})\) is the tangent space of \(\mathrm{PD}_{X}\). Next, we recall the notion of a graded Poisson deformation. Let \(X\) be a graded normal Poisson variety. By this, we mean a normal variety equipped with two additional structures: * A rational \(\mathbb{C}^{\times}\)-action. Let \(\mathcal{O}_{X}\) denote the structure sheaf of \(X\) with respect to the conical topology on \(X\) (i.e. the topology with open subsets equal to \(\mathbb{C}^{\times}\)-invariant Zariski open subsets); * A Poisson bracket \(\{\cdot,\cdot\}:\mathcal{O}_{X}\otimes\mathcal{O}_{X}\to\mathcal{O}_{X}\) of degree \(-d\). **Definition 3.2**.: _Let \(X\) be a graded normal Poisson variety and let \(S\) be an affine scheme with a rational \(\mathbb{C}^{\times}\)-action contracting onto a distinguished closed point \(0\in S\). Then a graded Poisson deformation of \(X\) over \(S\) is a Poisson deformation \((\mathcal{X}_{S},\iota)\) of \(X\) over \(S\) (cf. Definition 3.1) equipped with a rational \(\mathbb{C}^{\times}\)-action on \(\mathcal{X}_{S}\) such that_ 1. _The action rescales the Poisson bracket by_ \(t\mapsto t^{-d}\)_._ 2. _The action is compatible with the action on_ \(S\)_._ 3. \(\iota\) _is_ \(\mathbb{C}^{\times}\)_-equivariant._ _An isomorphism between graded Poisson deformations \((\mathcal{X}_{S},\iota)\) and \((\mathcal{X}^{\prime}_{S},\iota^{\prime})\) over \(S\) is an isomorphism of Poisson deformations which is \(\mathbb{C}^{\times}\)-equivariant._ ### Filtered quantizations In this subsection, we will recall the definitions of filtered quantizations of graded Poisson algebras and graded Poisson varieties. Let \(A\) be a _graded Poisson algebra_ of degree \(-d\in\mathbb{Z}_{<0}\). By this, we mean a finitely-generated commutative associative algebra equipped with two additional structures: an algebra grading \[A=\bigoplus_{i=-\infty}^{\infty}A_{i};\] and a Poisson bracket \(\{\cdot,\cdot\}\) of degree \(-d\) \[\{A_{i},A_{j}\}\subset A_{i+j-d},\qquad i,j\in\mathbb{Z}.\] **Definition 3.3**.: _A filtered quantization of \(A\) is a pair \((\mathcal{A},\theta)\) consisting of_ 1. _an associative algebra_ \(\mathcal{A}\) _equipped with a complete and separated filtration by subspaces_ \[\mathcal{A}=\bigcup_{i=-\infty}^{\infty}\mathcal{A}_{\leqslant i},\qquad... 
\subseteq\mathcal{A}_{\leqslant-1}\subseteq\mathcal{A}_{\leqslant 0} \subseteq\mathcal{A}_{\leqslant 1}\subseteq...\] _such that_ \[[\mathcal{A}_{\leqslant i},\mathcal{A}_{\leqslant j}]\subseteq\mathcal{A}_{ \leqslant i+j-d}\qquad i,j\in\mathbb{Z},\] _and_ 2. _an isomorphism of graded Poisson algebras_ \[\theta:\operatorname{gr}(\mathcal{A})\xrightarrow{\sim}A,\] _where the Poisson bracket on_ \(\operatorname{gr}(\mathcal{A})\) _is defined by_ \[\{a+\mathcal{A}_{\leqslant i-1},b+\mathcal{A}_{\leqslant j-1}\}=[a,b]+ \mathcal{A}_{\leqslant i+j-d-1},\qquad a\in\mathcal{A}_{\leqslant i},\ b\in \mathcal{A}_{\leqslant j}.\] _An isomorphism of filtered quantizations \((\mathcal{A}_{1},\theta_{1})\xrightarrow{\sim}(\mathcal{A}_{2},\theta_{2})\) is an isomorphism of filtered algebras \(\phi:\mathcal{A}_{1}\xrightarrow{\sim}\mathcal{A}_{2}\) such that \(\theta_{1}=\theta_{2}\circ\operatorname{gr}(\phi)\). Denote the set of isomorphism classes of quantizations of \(A\) by \(\operatorname{Quant}(A)\)._ Often, the isomorphism \(\theta\) is clear from the context, and will be omitted from the notation. However, the reader should keep in mind that a filtered quantization \((\mathcal{A},\theta)\) is _not_ determined up to isomorphism by \(\mathcal{A}\) alone. Now let \(X\) be a graded normal Poisson variety. **Definition 3.4**.: _A filtered quantization of \(X\) is a pair \((\mathcal{D},\theta)\) consisting of_ 1. _a sheaf_ \(\mathcal{D}\) _of associative algebras in the conical topology on_ \(X\)_, equipped with a complete and separated filtration by subsheaves of vector spaces_ \[\mathcal{D}=\bigcup_{i=-\infty}^{\infty}\mathcal{D}_{\leqslant i},\qquad... \subseteq\mathcal{D}_{\leqslant-1}\subseteq\mathcal{D}_{\leqslant 0} \subseteq\mathcal{D}_{\leqslant 1}\subseteq...\] _such that_ \[[\mathcal{D}_{\leqslant i},\mathcal{D}_{\leqslant j}]\subseteq\mathcal{D}_{ \leqslant i+j-d}\qquad i,j\in\mathbb{Z},\] _and_ 2. _an isomorphism of sheaves of graded Poisson algebras_ \[\theta:\operatorname{gr}(\mathcal{D})\xrightarrow{\sim}\mathcal{O}_{X},\] _where the Poisson bracket on_ \(\operatorname{gr}(\mathcal{D})\) _is defined by_ \[\{a+\mathcal{D}_{\leqslant i-1},b+\mathcal{D}_{\leqslant j-1}\}=[a,b]+ \mathcal{D}_{\leqslant i+j-d-1},\qquad a\in\mathcal{D}_{\leqslant i},\ b\in \mathcal{D}_{\leqslant j}.\] _An isomorphism of filtered quantizations \((\mathcal{D}_{1},\theta_{1})\xrightarrow{\sim}(\mathcal{D}_{2},\theta_{2})\) is an isomorphism of sheaves of filtered algebras \(\phi:\mathcal{D}_{1}\xrightarrow{\sim}\mathcal{D}_{2}\) such that \(\theta_{1}=\theta_{2}\circ\operatorname{gr}(\phi)\). Denote the set of isomorphism classes of quantizations of \(X\) by \(\operatorname{Quant}(X)\)._ We conclude this subsection by recalling the definition of a _Hamiltonian quantization_ of a graded Poisson algebra with Hamiltonian \(G\)-action. Let \(A\) be a graded Poisson algebra of degree \(-d\). Suppose \(G\) is an algebraic group which acts rationally on \(A\) by graded Poisson automorphisms. Write \(\operatorname{Der}(A)\) for the Lie algebra of derivations of \(A\). 
The \(G\)-action on \(A\) induces by differentiation a Lie algebra homomorphism \[\mathfrak{g}\to\operatorname{Der}(A),\qquad\xi\mapsto\xi_{A}.\] We say that \(A\) (or the \(G\)-action on \(A\)) is _Hamiltonian_ if there is a \(G\)-equivariant map \(\varphi:\mathfrak{g}\to A_{d}\) (called a _classical co-moment map_) such that \[\{\varphi(\xi),a\}=\xi_{A}(a),\qquad\xi\in\mathfrak{g},\quad a\in A.\] A filtered quantization \((\mathcal{A},\theta)\) is \(G\)-_equivariant_ if \(G\) acts rationally on \(\mathcal{A}\) by filtered algebra automorphisms and the isomorphism \(\theta:\operatorname{gr}(\mathcal{A})\xrightarrow{\sim}A\) is \(G\)-equivariant. In this case, we get a Lie algebra homomorphism \[\mathfrak{g}\to\operatorname{Der}(\mathcal{A}),\qquad\xi\mapsto\xi_{ \mathcal{A}}.\] **Definition 3.5**.: _Suppose \(A\) is a graded Poisson algebra equipped with a Hamiltonian \(G\)-action. A Hamiltonian quantization of \(A\) is a triple \((\mathcal{A},\theta,\Phi)\) consisting of_ 1. _a_ \(G\)_-equivariant filtered quantization_ \((\mathcal{A},\theta)\) _of_ \(A\)_, and_ 2. _a_ \(G\)_-equivariant map_ \(\Phi:\mathfrak{g}\to\mathcal{A}_{\leqslant d}\) _(called a quantum co-moment map) such that_ \[[\Phi(\xi),a]=\xi_{\mathcal{A}}(a),\qquad\xi\in\mathfrak{g},\quad a\in \mathcal{A}.\] _An isomorphism \((\mathcal{A}_{1},\theta_{1},\Phi_{1})\xrightarrow{\sim}(\mathcal{A}_{2}, \theta_{2},\Phi_{2})\) of Hamiltonian quantizations of \(A\) is a \(G\)-equivariant isomorphism of filtered algebras \(\phi:\mathcal{A}_{1}\to\mathcal{A}_{2}\) such that \(\theta_{1}=\theta_{2}\circ\operatorname{gr}(\phi)\) and \(\Phi_{2}=\phi\circ\Phi_{1}\). Denote the set of isomorphism classes of Hamiltonian quantizations of \(A\) by \(\operatorname{Quant}^{G}(A)\)._ ### Symplectic singularities and \(\mathbb{Q}\)-factorial terminalizations Let \(X\) be a normal Poisson variety. **Definition 3.6** ([1], Definition 1.1).: _We say that \(X\) has symplectic singularities if_ 1. _The regular locus_ \(X^{reg}\subset X\) _is symplectic; denote the symplectic form by_ \(\omega\)_._ 2. _There is a resolution of singularities_ \(\rho:Y\to X\) _such that_ \(\rho^{*}\omega\) _extends to a regular (not necessarily symplectic)_ \(2\)_-form on_ \(Y\)_._ Now let \(X\) be a graded normal Poisson variety. **Definition 3.7**.: _We say that \(X\) is a conical symplectic singularity if \(X\) has symplectic singularities and the \(\mathbb{C}^{\times}\)-action on \(X\) is contracting onto a point._ We note that every conical symplectic singularity is automatically affine. **Example 3.8**.: _Let \(G\) be a complex connected reductive algebraic group and let \(\widehat{\mathbb{O}}\to\mathbb{O}\) be a finite etale \(G\)-equivariant cover of a nilpotent co-adjoint \(G\)-orbit \(\mathbb{O}\subset\mathfrak{g}^{*}\). There is a natural \(\mathbb{C}^{\times}\)-action on \(\widehat{\mathbb{O}}\) defined in the following manner (see [1, Section 1] for details and proofs). Consider the 'doubled' dilation action of \(\mathbb{C}^{\times}\) on \(\mathfrak{g}^{*}\), i.e._ \[z\cdot\xi=z^{2}\xi,\qquad z\in\mathbb{C}^{\times},\ \xi\in\mathfrak{g}^{*}.\] _This action preserves \(\mathbb{O}\), since \(\mathbb{O}\) is a nilpotent orbit, and lifts to a unique \(\mathbb{C}^{\times}\)-action on \(\widehat{\mathbb{O}}\), which commutes with the \(G\)-action on \(\widehat{\mathbb{O}}\). It follows that any morphism of covers \(\widetilde{\mathbb{O}}\to\widehat{\mathbb{O}}\) is automatically \(\mathbb{C}^{\times}\)-equivariant. 
The \(\mathbb{C}^{\times}\)-action on \(\widehat{\mathbb{O}}\) induces a non-negative grading on the ring of regular functions \(\mathbb{C}[\widehat{\mathbb{O}}]\)._ _There is also a natural \(G\)-equivariant symplectic form on \(\widehat{\mathbb{O}}\), obtained by pullback from the Kirillov-Kostant-Souriau form on \(\mathbb{O}\). This form induces a Poisson bracket on \(\mathbb{C}[\widehat{\mathbb{O}}]\), and this bracket is of degree \(-2\) with respect to the grading defined above (see again [1, Section 1]). It is known that \(X=\operatorname{Spec}(\mathbb{C}[\widehat{\mathbb{O}}])\) is a conical symplectic singularity (see [13, Lemma 2.5])._ _The natural map \(\widehat{\mathbb{O}}\to X\) is an open embedding, and the \(G\)-action on \(\widehat{\mathbb{O}}\) extends to a \(G\)-action on \(X\). The \(G\)-equivariant covering map \(\widehat{\mathbb{O}}\to\mathbb{O}\) extends to a finite \(G\)-equivariant surjection \(X\to\overline{\mathbb{O}}\). The \(G\)-action on \(\widehat{\mathbb{O}}\) (resp. \(X\)) is Hamiltonian; the moment map is the composition \(\widehat{\mathbb{O}}\to\mathbb{O}\subset\mathfrak{g}^{*}\) (resp. \(X\to\overline{\mathbb{O}}\subset\mathfrak{g}^{*}\)). Note that any morphism of covers \(\widetilde{\mathbb{O}}\to\widehat{\mathbb{O}}\) is symplectic and intertwines the moment maps, hence is automatically \(G\)-(hence \(G\times\mathbb{C}^{\times}\)-)equivariant, since \(G\) is connected._ Recall that a normal variety \(Y\) is \(\mathbb{Q}\)_-factorial_ if every Weil divisor has a nonzero integer multiple which is Cartier. **Proposition 3.9** (Proposition 2.1, [10]).: _Let \(X\) be a Poisson variety with symplectic singularities. Then there is a birational projective morphism \(\rho:Y\to X\) such that_ 1. \(Y\) _is an irreducible Poisson variety._ 2. \(Y\) _is_ \(\mathbb{Q}\)_-factorial._ 3. _The singular locus of_ \(Y\) _is of codimension_ \(\geqslant 4\)_._ The morphism \(\rho:Y\to X\) in the proposition above (or the variety \(Y\), if the morphism is understood) is called a \(\mathbb{Q}\)_-factorial terminalization_ of \(X\). If \(X\) is a conical symplectic singularity, then there is a \(\mathbb{C}^{\times}\)-action on \(Y\) such that \(\rho\) is \(\mathbb{C}^{\times}\)-equivariant, see [11, 10]. By [11], every conical symplectic singularity has only finitely-many isomorphism classes of \(\mathbb{Q}\)-factorial terminalizations. ### The Namikawa space and Weyl group Let \(X\) be a conical symplectic singularity. Associated to \(X\) are two important invariants: a finite-dimensional complex vector space \(\mathfrak{P}=\mathfrak{P}^{X}\) called the _Namikawa space_ and a finite Coxeter group \(W=W^{X}\) called the _Namikawa Weyl group_. In this section, we will recall several equivalent definitions of these objects. **Definition 3.10**.: _Let \(X\) be a conical symplectic singularity and let \(\rho:Y\to X\) be a \(\mathbb{Q}\)-factorial terminalization. The Namikawa space associated to \(X\) is the finite-dimensional complex vector space_ \[\mathfrak{P}=\mathfrak{P}^{X}:=H^{2}(Y^{reg},\mathbb{C})\] We will see in a moment that \(\mathfrak{P}\) depends only on \(X\) (and not on the choice of \(\rho:Y\to X\)). Since \(X\) is a symplectic singularity, it contains finitely-many symplectic leaves ([1, Theorem 2.3]). Let \(\mathfrak{L}_{k}\), \(k=1,2,\ldots,t\), denote the symplectic leaves of codimension \(2\). 
For each such leaf \(\mathfrak{L}_{k}\subset X\), the formal slice to \(\mathfrak{L}_{k}\subset X\) is identified with the formal neighborhood at \(0\) in a Kleinian singularity \(\Sigma_{k}=\mathbb{C}^{2}/\Gamma_{k}\). Under the McKay correspondence, \(\Gamma_{k}\) corresponds to a complex simple Lie algebra \(\mathfrak{g}_{k}\) of type A, D, or E. Fix a Cartan subalgebra \(\mathfrak{h}_{k}\subset\mathfrak{g}_{k}\). Write \(\Lambda_{k}\subset\mathfrak{h}_{k}^{*}\) for the weight lattice and \(W_{k}\) for the Weyl group. If we choose a point \(x\in\mathfrak{L}_{k}\), there is a natural identification \(H^{2}(\widehat{\mu}^{-1}(x),\mathbb{Z})\simeq\Lambda_{k}\), and \(\pi_{1}(\mathfrak{L}_{k})\) acts on \(\Lambda_{k}\) by diagram automorphisms. The _partial Namikawa space_ corresponding to \(\mathfrak{L}_{k}\) is the subspace \[\mathfrak{P}_{k}=\mathfrak{P}_{k}^{X}:=(\mathfrak{h}_{k}^{*})^{\pi_{1}( \mathfrak{L}_{k})}.\] Also define \(\mathfrak{P}_{0}=\mathfrak{P}_{0}^{X}:=H^{2}(X^{reg},\mathbb{C})\). **Proposition 3.11** ([10], Lem 2.8).: _There is a linear isomorphism_ \[\mathfrak{P}\xrightarrow{\sim}\bigoplus_{k=0}^{t}\mathfrak{P}_{k}. \tag{3.4.1}\] For each codimension \(2\) leaf \(\mathfrak{L}_{k}\subset X\), consider the subgroup of monodromy invariants \(W_{k}^{\pi_{1}(\mathfrak{L}_{k})}\subset W_{k}\). Note that there is a natural action of \(W_{k}^{\pi_{1}(\mathfrak{L}_{k})}\) on \(\mathfrak{P}_{k}=(\mathfrak{h}_{k}^{*})^{\pi_{1}(\mathfrak{L}_{k})}\). **Definition 3.12**.: _The Namikawa Weyl group associated to \(X\) is the finite group_ \[W=W^{X}:=\prod_{k=1}^{t}W_{k}^{\pi_{1}(\mathfrak{L}_{k})}.\] Note that \(W\) acts on \(\mathfrak{P}\) via the isomorphism (3.4.1) (the action on \(\mathfrak{P}_{0}\) is trivial). ### Filtered quantizations of conical symplectic singularities In this section, we will recall the classification of filtered quantizations of conical symplectic singularities. Let \(X\) be a conical symplectic singularity and let \(\rho:Y\to X\) be a graded \(\mathbb{Q}\)-factorial terminalization of \(X\). For any graded smooth symplectic variety \(V\), there is a (non-commutative) _period map_ \[\operatorname{Per}:\operatorname{Quant}(V)\to H^{2}(V,\mathbb{C}),\] see [1, Sec 4], [11, Sec 2.3]. **Proposition 3.13** (Prop 3.1(1), [11]).: _The maps_ \[\operatorname{Quant}(Y)\xrightarrow{|_{Y^{reg}}}\operatorname{Quant}(Y^{reg}) \xrightarrow{\operatorname{Per}}H^{2}(Y^{reg},\mathbb{C})=\mathfrak{P}^{X}\] _are bijections._ For \(\lambda\in\mathfrak{P}^{X}\), let \(\mathcal{D}_{\lambda}\) denote the corresponding filtered quantization of \(Y\) and let \(\mathcal{A}_{\lambda}:=\Gamma(Y,\mathcal{D}_{\lambda})\). **Theorem 3.14** (Prop 3.3, Thm 3.4, [11]).: _The following are true:_ 1. _For every_ \(\lambda\in\mathfrak{P}^{X}\)_, the algebra_ \(\mathcal{A}_{\lambda}\) _is a filtered quantization of_ \(\mathbb{C}[X]\)_._ 2. _Every filtered quantization of_ \(\mathbb{C}[X]\) _is isomorphic to_ \(\mathcal{A}_{\lambda}\) _for some_ \(\lambda\in\mathfrak{P}^{X}\)_._ 3. 
_For every_ \(\lambda,\lambda^{\prime}\in\mathfrak{P}^{X}\)_, we have_ \(\mathcal{A}_{\lambda}\simeq\mathcal{A}_{\lambda^{\prime}}\) _if and only if_ \(\lambda^{\prime}\in W^{X}\cdot\lambda\)_._ _Hence, the map \(\lambda\mapsto\mathcal{A}_{\lambda}\) induces a bijection_ \[\mathfrak{P}^{X}/W^{X}\simeq\operatorname{Quant}(\mathbb{C}[X]),\qquad W^{X} \cdot\lambda\mapsto\mathcal{A}_{\lambda}.\] We conclude by recalling an equivariant version of Theorem 3.14 from [13]. Let \(G\) be a connected reductive algebraic group and suppose \(A:=\mathbb{C}[X]\) admits a Hamiltonian \(G\)-action, see Section 3.2. Define the _extended Namikawa space_ \[\overline{\mathfrak{P}}^{X}:=\mathfrak{P}^{X}\oplus\mathfrak{z}(\mathfrak{g} )^{*}\] This space should be viewed as the equivariant counterpart of \(\mathfrak{P}^{X}\). Let \(W^{X}\) act on \(\overline{\mathfrak{P}}^{X}\) via the decomposition above (the \(W^{X}\)-action on the second factor is defined to be trivial). **Proposition 3.15** (Lem 4.11.2, [13]).: _Let \(G\) be a connected reductive algebraic group and suppose \(A:=\mathbb{C}[X]\) admits a Hamiltonian \(G\)-action. Then the following are true:_ 1. _There is a unique classical co-moment map_ \(\varphi:\mathfrak{g}\to A_{d}\)_._ 2. _Every filtered quantization_ \(\mathcal{A}\in\operatorname{Quant}(A)\) _has a unique_ \(G\)_-equivariant structure._ 3. _For every_ \(\mathcal{A}\in\operatorname{Quant}(A)\) _and_ \(\chi\in\mathfrak{z}(\mathfrak{g})^{*}\)_, there is a unique quantum co-moment map_ \(\Phi_{\chi}:\mathfrak{g}\to\mathcal{A}_{\leqslant d}\) _such that_ \(\Phi_{\chi}|_{\mathfrak{z}(\mathfrak{g})}=\chi\)_._ _In particular, there is a canonical bijection_ \[\overline{\mathfrak{P}}^{X}/W^{X}\xrightarrow{\sim}\operatorname{Quant}^{G}(A) \qquad W^{X}\cdot(\lambda,\chi)\mapsto(\mathcal{A}_{\lambda},\Phi_{\chi}).\] **Definition 3.16** (Def 5.0.1, [13]).: _The canonical quantization of \(\mathbb{C}[X]\) is the Hamiltonian quantization corresponding to the parameter \(0\in\overline{\mathfrak{P}}^{X}\)._ ### \(\mathbb{Q}\)-factorial terminalizations of nilpotent covers Let \(\widetilde{\mathbb{O}}\in\mathsf{Cov}(G)\) and let \(X=\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}])\). Then \(X\) is a conical symplectic singularity (cf. Example 3.8). In this section, we will give explicit (Lie-theoretic) descriptions of the \(\mathbb{Q}\)-factorial terminalizations of \(X\), the Namikawa space \(\mathfrak{P}(\widetilde{\mathbb{O}}):=\mathfrak{P}^{X}\), and the partial Namikawa spaces \(\mathfrak{P}_{k}(\widetilde{\mathbb{O}}):=\mathfrak{P}_{k}^{X}\). Choose \((L,\widetilde{\mathbb{O}}_{L})\in\mathsf{Cov}_{0}(G)\) such that \(\operatorname{Bind}(L,\widetilde{\mathbb{O}}_{L})=\widetilde{\mathbb{O}}\). By Proposition 2.7, \((L,\widetilde{\mathbb{O}}_{L})\) is unique (up to conjugation by \(G\)). Let \(X_{L}=\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{L}])\). Choose a parabolic subgroup \(P\subset G\) with Levi decomposition \(P=LU\). Form the variety \(Y=G\times^{P}(X_{L}\times\mathfrak{p}^{\perp})\) and consider the proper \(G\)-equivariant map \(\widetilde{\mu}:Y\to\overline{\mathbb{O}}\) defined in Section 2.3. Then \(\widetilde{\mu}\) admits a Stein factorization \(Y\xrightarrow{\rho}X\to\overline{\mathbb{O}}\). The first map is projective birational and the second is finite. **Theorem 3.17** ([13, Cor 4.3], [14, Lemma 7.2.4]).: _The following are true._ 1. _The morphism_ \(\rho:Y\to X\) _is a_ \(\mathbb{Q}\)_-factorial terminalization of_ \(X\)_._ 2. 
_Any_ \(\mathbb{Q}\)_-factorial terminalization_ \(Y\) _of_ \(X\) _is of this form._ If \(\mathfrak{a}\) is a Lie algebra, we write \(\mathfrak{X}(\mathfrak{a})\) for the vector space of Lie algebra homomorphisms \(\mathfrak{a}\to\mathbb{C}\). Let \(\pi:Y\to G/P\) be the natural projection map which makes \(Y\) into a fiber bundle over \(G/P\). Let \(\underline{\pi}\) denote the restriction of \(\pi\) to \(Y^{reg}\). Note that \(H^{2}(G/P,\mathbb{C})\) is identified with \(\mathfrak{X}(\mathfrak{l}\cap[\mathfrak{g},\mathfrak{g}])\). Consider the composite map \[\eta:\mathfrak{X}(\mathfrak{l}\cap[\mathfrak{g},\mathfrak{g}])\simeq H^{2}( G/P,\mathbb{C})\xrightarrow{\underline{\pi}^{*}}H^{2}(Y^{reg},\mathbb{C})=\mathfrak{P}( \widetilde{\mathbb{O}}), \tag{3.6.1}\] where \(\underline{\pi}^{*}:H^{2}(G/P,\mathbb{C})\to H^{2}(Y^{reg},\mathbb{C})= \mathfrak{P}(\widetilde{\mathbb{O}})\) is the pullback map on cohomology induced by \(\underline{\pi}\). **Proposition 3.18** (Proposition 7.2.2, [14]).: _The map \(\eta:\mathfrak{X}(\mathfrak{l}\cap[\mathfrak{g},\mathfrak{g}])\xrightarrow{ \sim}\mathfrak{P}(\widetilde{\mathbb{O}})\) is an isomorphism._ We can also describe the spaces \(H^{2}(\widetilde{\mathbb{O}},\mathbb{C})\) and \(\mathfrak{P}_{k}(\widetilde{\mathbb{O}}):=\mathfrak{P}_{k}^{X}\) in terms of Lie-theoretic data. We begin by describing \(H^{2}(\widetilde{\mathbb{O}},\mathbb{C})\). Assume for simplicity that \(G\) is simply connected and semisimple. Let \(R\) denote the reductive part of the stabilizer of \(e\in\mathbb{O}\) and let \(\mathfrak{r}\) be its Lie algebra. We note that \(\mathfrak{r}\) does not depend on the choice of \(e\) in \(\mathbb{O}\), and the adjoint action of \(R\) on \(\mathfrak{z}(\mathfrak{r})\) factors through \(R/R^{\circ}\simeq\pi_{1}(\mathbb{O})\). **Lemma 3.19** (Lemma 7.2.7, [14]).: _There is a natural identification_ \[H^{2}(\widetilde{\mathbb{O}},\mathbb{C})\simeq\mathfrak{z}(\mathfrak{r})^{\pi_ {1}(\widetilde{\mathbb{O}})}\] **Remark 3.20**.: _If \(\widetilde{\mathbb{O}}=\widehat{\mathbb{O}}\) is the universal cover of \(\mathbb{O}\), then \(H^{2}(\widehat{\mathbb{O}},\mathbb{C})\simeq\mathfrak{z}(\mathfrak{r})\) by Lemma 3.19. In particular, \(H^{2}(\widehat{\mathbb{O}},\mathbb{C})=0\) if and only if \(\mathfrak{r}\) is semisimple. On the other hand, if \(\widetilde{\mathbb{O}}=\mathbb{O}\), then \(H^{2}(\mathbb{O},\mathbb{C})\simeq\mathfrak{z}(\mathfrak{r})^{\pi_ {1}(\mathbb{O})}\) was computed in every case by Biswas and Chatterjee in [1]._ We proceed to describing the partial Namikawa spaces \(\mathfrak{P}_{k}(\widetilde{\mathbb{O}})\) for \(k\geqslant 1\). Assume that \(H^{2}(\widetilde{\mathbb{O}},\mathbb{C})=0\). Suppose \(Q\subset G\) is a parabolic subgroup with Levi factor \(M\) and \(\widetilde{\mathbb{O}}_{M}\in\mathsf{Cov}(M)\) satisfies \(\widetilde{\mathbb{O}}=\operatorname{Bind}_{M}^{G}\widetilde{\mathbb{O}}_{M}\). 
The triple \((Q,M,\widetilde{\mathbb{O}}_{M})\) gives rise to a projective birational morphism \[\rho_{!}\colon G\times^{Q}(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{M}])\times\mathfrak{q}^{\perp})\to\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}]). \tag{3.6.2}\] **Proposition 3.21** (Prop 7.5.6, [14]).: _For each codimension 2 leaf \(\mathfrak{L}_{k}\subset X\), there is a unique pair \((M_{k},\widetilde{\mathbb{O}}_{M_{k}})\) consisting of a Levi subgroup \(M_{k}\subset G\) and a nilpotent cover \(\widetilde{\mathbb{O}}_{M_{k}}\in\mathsf{Cov}(M_{k})\) such that_ 1. \(L\subset M_{k}\)_._ 2. \(\widetilde{\mathbb{O}}=\operatorname{Bind}_{M_{k}}^{G}\widetilde{\mathbb{O}}_{M_{k}}\)_._ 3.
_For any parabolic_ \(Q\subset G\) _with Levi factor_ \(M_{k}\)_, the morphism (_3.6.2_) resolves_ \(\Sigma_{k}\) _and preserves_ \(\Sigma_{j}\) _for_ \(j\neq k\)_._ The pair \((M_{k},\widetilde{\mathbb{O}}_{M_{k}})\) appearing in Proposition 3.21 is called the \(\mathfrak{L}_{k}\)-_adapted resolution datum_. **Proposition 3.22** (Proposition 3.6.4, [13]).: _Let \(\mathfrak{L}_{k}\subset\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}])\) be a codimension 2 leaf and let \((M_{k},\widetilde{\mathbb{O}}_{M_{k}})\) be the \(\mathfrak{L}_{k}\)-adapted resolution datum. Then the isomorphism \(\mathfrak{P}\simeq\mathfrak{X}(\mathfrak{l}\cap[\mathfrak{g},\mathfrak{g}])\) of Proposition 3.18 restricts to an isomorphism \(\mathfrak{P}_{k}\simeq\mathfrak{X}(\mathfrak{m}_{k}\cap[\mathfrak{g},\mathfrak{g}])\)._ Comparing Propositions 3.18 and 3.11, we arrive at the following (purely geometric) criterion for birational rigidity. **Proposition 3.23** (Proposition 3.7.1, [13]).: _Let \(\widetilde{\mathbb{O}}\in\mathsf{Cov}(G)\). Then \(\widetilde{\mathbb{O}}\) is birationally rigid if and only if the following conditions hold:_ 1. \(H^{2}(\widetilde{\mathbb{O}},\mathbb{C})=0\)_._ 2. \(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}])\) _has no codimension 2 leaves._ We conclude by sketching a simple criterion for birational induction. For any \(\mathbb{O}\in\mathsf{Orb}(G)\), let \(\mathcal{P}_{rig}(\mathbb{O})\) denote the set of \(G\)-conjugacy classes of pairs \((L,\mathbb{O}_{L})\) consisting of a Levi subgroup \(L\subset G\) and a rigid nilpotent orbit \(\mathbb{O}_{L}\in\mathsf{Orb}(L)\) such that \(\mathbb{O}=\operatorname{Ind}_{L}^{G}\mathbb{O}_{L}\). This set can be computed in classical types using the results of [11, Section 7] and in exceptional types using the tables in [11, Section 4]. Consider the integer \[m(\mathbb{O})=\max\{\dim\mathfrak{z}(\mathfrak{l})\mid(L,\mathbb{O}_{L})\in\mathcal{P}_{rig}(\mathbb{O})\}. \tag{3.6.3}\] **Lemma 3.24**.: _Suppose \(m(\mathbb{O})=\dim\mathfrak{P}\) and there is a unique pair \((L,\mathbb{O}_{L})\in\mathcal{P}_{rig}(\mathbb{O})\) such that \(\dim(\mathfrak{z}(\mathfrak{l}))=m(\mathbb{O})\). Then \(\mathbb{O}=\operatorname{Bind}_{L}^{G}\mathbb{O}_{L}\)._ Proof.: Suppose \(\mathbb{O}\) is birationally induced from a birationally rigid orbit \(\mathbb{O}_{M}\in\mathsf{Orb}(M)\) for a Levi subgroup \(M\subset G\). By Proposition 3.18, \(\dim(\mathfrak{z}(\mathfrak{m}))=m(\mathbb{O})\). Suppose \(\mathbb{O}_{M}\) is not rigid. Then there is a proper Levi subgroup \(K\subset M\) and a rigid orbit \(\mathbb{O}_{K}\in\mathsf{Orb}(K)\) such that \(\mathbb{O}=\operatorname{Ind}_{K}^{G}\mathbb{O}_{K}\). So \((K,\mathbb{O}_{K})\in\mathcal{P}_{rig}(\mathbb{O})\) and \(\dim(\mathfrak{z}(\mathfrak{k}))>\dim(\mathfrak{z}(\mathfrak{m}))=m(\mathbb{O})\), a contradiction. Hence \(\mathbb{O}_{M}\) is rigid, so \((M,\mathbb{O}_{M})\in\mathcal{P}_{rig}(\mathbb{O})\), and by the uniqueness assumption \((M,\mathbb{O}_{M})\) is \(G\)-conjugate to \((L,\mathbb{O}_{L})\). Therefore \(\mathbb{O}=\operatorname{Bind}_{L}^{G}\mathbb{O}_{L}\). ### Universal Poisson deformations of conical symplectic singularities Let \(X\) be an affine symplectic singularity. Let \(X^{\circ}\subset X\) denote the complement in \(X\) to the union of all symplectic leaves of codimension \(\geq 4\). Then \[X^{\circ}=X^{reg}\sqcup\bigsqcup_{k=1}^{t}\mathfrak{L}_{k},\] where \(\mathfrak{L}_{k}\) are the codimension 2 leaves in \(X\) as in Section 3.4. As observed by Namikawa [13, p.
52], the variety \(X^{\circ}\) admits a unique \(\mathbb{Q}\)-factorial terminalization \(\pi:\widetilde{X}^{\circ}\to X^{\circ}\) (up to isomorphism), and \(\widetilde{X}^{\circ}\) is smooth. Now we take a \(\mathbb{Q}\)-factorial terminalization \(\rho:Y\to X\) of \(X\). Set \(Y^{\circ}=\rho^{-1}(X^{\circ})\). Then \(Y^{\circ}\) lies in the regular locus \(Y^{reg}\) of \(Y\) and the restriction \(\rho|_{Y^{\circ}}:Y^{\circ}\to X^{\circ}\) is also a \(\mathbb{Q}\)-factorial terminalization of \(X^{\circ}\). Hence there is a unique isomorphism between \(Y^{\circ}\) and \(\widetilde{X}^{\circ}\) as varieties over \(X^{\circ}\). By (2.4) in the proof of [13, Proposition 2.14], we have \(\operatorname{codim}_{Y}Y\backslash Y^{\circ}\geq 2\) and so \(\operatorname{codim}_{Y^{reg}}Y^{reg}\backslash Y^{\circ}\geq 2\). This implies that the restriction map \(\mathfrak{P}=H^{2}(Y^{reg},\mathbb{C})\to H^{2}(Y^{\circ},\mathbb{C})\simeq H^{2}(\widetilde{X}^{\circ},\mathbb{C})\) is an isomorphism by a standard argument involving a long exact sequence of cohomology groups (see e.g. the proof of [12, Theorem 5.1(i)]). By the proof of [12, Theorems 5.1, 5.2], the inclusions \(X^{\circ}\subset X\) and \(Y^{\circ}\subset Y\) induce natural transformations \(\mathrm{PD}_{X}\to\mathrm{PD}_{X^{\circ}}\) and \(\mathrm{PD}_{Y}\to\mathrm{PD}_{Y^{\circ}}\). The birational maps \(\pi\), \(\rho\) and \(\rho|_{Y^{\circ}}\) induce natural transformations \[\pi_{*}:\mathrm{PD}_{\widetilde{X}^{\circ}}\to\mathrm{PD}_{X^{\circ}},\quad\rho_{*}:\mathrm{PD}_{Y}\to\mathrm{PD}_{X}\quad\text{and}\quad(\rho|_{Y^{\circ}})_{*}:\mathrm{PD}_{Y^{\circ}}\to\mathrm{PD}_{X^{\circ}}.\] These natural transformations fit into the following commutative diagram: (3.7.1) **Theorem 3.25** (Theorems 5.1, 5.2, [12]).: _The following are true:_ 1. _The horizontal transformations_ \(\mathrm{PD}_{X}\to\mathrm{PD}_{X^{\circ}}\) _and_ \(\mathrm{PD}_{Y}\to\mathrm{PD}_{Y^{\circ}}\) _in (_3.7.1_) are isomorphisms._ 2. _We have_ \(\mathrm{PD}_{Y}(\mathbb{C}\{\epsilon\})\simeq\mathrm{PD}_{Y^{\circ}}(\mathbb{C}\{\epsilon\})\simeq\mathrm{PD}_{\widetilde{X}^{\circ}}(\mathbb{C}\{\epsilon\})\simeq\mathfrak{P}\)_._ 3. _All deformation functors in (_3.7.1_) are unobstructed and pro-representable._ \(\mathrm{PD}_{Y}\simeq\mathrm{PD}_{Y^{\circ}}\simeq\mathrm{PD}_{\widetilde{X}^{\circ}}\) _are pro-represented by_ \(\mathbb{C}[\mathfrak{P}]^{\wedge}=\mathbb{C}[\mathfrak{P}^{\wedge}]\)_, the completion of_ \(\mathbb{C}[\mathfrak{P}]\) _at the maximal ideal corresponding to_ \(0\in\mathfrak{P}\)_._ By Theorem 3.25, each Poisson variety appearing in (3.7.1) admits a universal formal Poisson deformation. Now assume that \(X\) is a conical symplectic singularity. Then by [12, Lemma 20], the \(\mathbb{C}^{\times}\)-action on \(X\) induces \(\mathbb{C}^{\times}\)-actions on the universal formal Poisson deformations of the Poisson varieties in (3.7.1). Using these \(\mathbb{C}^{\times}\)-actions, Namikawa shows that the universal formal Poisson deformations of \(X\) and \(Y\) can be algebraized to graded Poisson deformations ([12, Lemma 22]). Losev then shows that these graded Poisson deformations are _universal_ in the following sense ([11, Sections 2.2-2.4]). **Definition 3.26**.: _Let \(X\) be a graded Poisson variety.
A universal graded Poisson deformation of \(X\) is a graded Poisson deformation \(\mathcal{X}_{univ}\) of \(X\) over some conical affine scheme \(S\) satisfying the following universal property: let \(\mathcal{X}_{B}\) be any graded Poisson deformation of \(X\) over a conical affine scheme \(B\). Then there is a unique \(\mathbb{C}^{\times}\)-equivariant morphism \(B\to S\) and a (not necessarily unique) isomorphism of graded Poisson deformations \(\mathcal{X}_{B}\xrightarrow{\sim}\mathcal{X}_{univ}\times_{S}B\) over \(B\)._ The following results are proved in [12] and [11]. We use the formulation in [12, Section 4.7]. **Theorem 3.27**.: _Let \(X\) be a conical symplectic singularity with Poisson bracket of degree \(d>0\). Then there exist universal graded Poisson deformations \(\mathcal{X}_{univ}\to\mathfrak{P}/W\) and \(\mathcal{Y}_{univ}\to\mathfrak{P}\) of \(X\) and \(Y\), respectively. There exists a surjective projective Poisson morphism \(\tilde{\rho}:\mathcal{Y}_{univ}\to\mathcal{X}_{univ}\) such that:_ 1. _The following diagram commutes and all maps are_ \(\mathbb{C}^{\times}\)_-equivariant:_ _Here,_ \(\mathfrak{P}\to\mathfrak{P}/W\) _is the quotient map and_ \(\mathbb{C}^{\times}\) _acts linearly on_ \(\mathfrak{P}\) _by_ \(t.v=t^{d}v\)_, i.e., all vectors in_ \(\mathfrak{P}\) _are eigenvectors of weight_ \(-d\)_. The induced sequence of morphisms_ \(\mathcal{Y}_{univ}\to\mathcal{X}_{univ}\times_{\mathfrak{P}/W}\mathfrak{P}\to\mathcal{X}_{univ}\) _is the Stein factorization of_ \(\tilde{\rho}\)_._ 2. _The algebra_ \(\mathbb{C}[\mathcal{Y}_{univ}]\) _carries an action of_ \(W\) _by graded Poisson algebra automorphisms, which makes the map_ \(\mathbb{C}[\mathfrak{P}]\to\mathbb{C}[\mathcal{Y}_{univ}]\)__\(W\)_-equivariant. The induced action on_ \(\mathbb{C}[Y]\) _is trivial._ 3. _The map_ \(\tilde{\rho}\) _induces an isomorphism_ \(\mathbb{C}[\mathcal{X}_{univ}]\xrightarrow{\sim}\mathbb{C}[\mathcal{Y}_{univ}]^{W}\)_._ Set \(\mathcal{X}_{\mathfrak{P}}:=\mathcal{X}_{univ}\times_{\mathfrak{P}/W}\mathfrak{P}\). Then \(\mathcal{X}_{\mathfrak{P}}\) is a Poisson scheme over \(\mathfrak{P}\) and (i) of Theorem 3.27 says that \(\tilde{\rho}\) induces a morphism \(\mathcal{Y}_{univ}\to\mathcal{X}_{\mathfrak{P}}\) of Poisson schemes over \(\mathfrak{P}\). For any \(\lambda\in\mathfrak{P}\), let \(X_{\lambda}\) (resp. \(Y_{\lambda}\)) denote the fiber of the universal deformation \(\mathcal{X}_{univ}\) (resp. \(\mathcal{Y}_{univ}\)) over \(W\lambda\) (resp. \(\lambda\)). By Theorem 3.27, the morphism \(\tilde{\rho}:\mathcal{Y}_{univ}\to\mathcal{X}_{univ}\) restricts to a projective birational surjective morphism \(Y_{\lambda}\to X_{\lambda}\) with connected fibers, which induces an isomorphism \(\mathbb{C}[X_{\lambda}]\xrightarrow{\sim}\mathbb{C}[Y_{\lambda}]\) of Poisson algebras. Let \(\mathfrak{P}^{reg}\subset\mathfrak{P}\) denote the subset consisting of all \(\lambda\in\mathfrak{P}\) for which the morphism \(Y_{\lambda}\to X_{\lambda}\) is an isomorphism (or equivalently, \(Y_{\lambda}\) is affine) and let \(\mathfrak{P}^{sing}:=\mathfrak{P}\backslash\mathfrak{P}^{reg}\). **Theorem 3.28** ([15, Main Theorem, (i)]).: _The subset \(\mathfrak{P}^{sing}\subset\mathfrak{P}\) is a \(W\)-stable union of finitely many rational hyperplanes in \(\mathfrak{P}\), including the walls corresponding to the \(W\)-action.
In particular, \(\mathfrak{P}^{reg}\) is a Zariski-dense open subset of \(\mathfrak{P}\)._ ### Universal Poisson deformations of nilpotent covers Continue with the notation of Section 3.7. In this subsection, we will give an explicit description of \(\mathcal{Y}_{univ}\) in the case of nilpotent covers. Let \(\widetilde{\mathbb{O}}\in\mathsf{Cov}(G)\). Choose \((L,\widetilde{\mathbb{O}}_{L})\in\mathsf{Cov}_{0}(G)\) such that \(\operatorname{Bind}(L,\widetilde{\mathbb{O}}_{L})=\widetilde{\mathbb{O}}\). Let \(X=\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}])\) and \(X_{L}=\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{L}])\). Recall that by Theorem 3.17, any \(\mathbb{Q}\)-factorial terminalization \(Y\) of \(X\) is of the form \(Y=G\times^{P}(X_{L}\times\mathfrak{p}^{\perp})\), where \(P\subset G\) is a parabolic subgroup with Levi decomposition \(P=LN\). Before we continue, let us introduce the general notion of an _induced variety_. Let \(Z\) be a Poisson variety equipped with a Hamiltonian \(L\)-action and a moment map \(\mu_{L}:Z\to\mathfrak{l}^{*}\). Let \(Z\times_{\mathfrak{l}^{*}}\mathfrak{n}^{\perp}\) be the pullback of \(\mu_{L}:Z\to\mathfrak{l}^{*}\) along the projection \(\mathfrak{n}^{\perp}=(\mathfrak{g}/\mathfrak{n})^{*}\twoheadrightarrow(\mathfrak{p}/\mathfrak{n})^{*}=\mathfrak{l}^{*}\). Let \(P\) act on \(Z\) via the quotient morphism \(P\twoheadrightarrow P/N=L\) and on \(\mathfrak{n}^{\perp}\) in the natural way. These actions induce a \(P\)-action on \(Z\times_{\mathfrak{l}^{*}}\mathfrak{n}^{\perp}\). We define \[\operatorname{Ind}_{P}^{G}Z:=G\times^{P}\left(Z\times_{\mathfrak{l}^{*}}\mathfrak{n}^{\perp}\right).\] Note that \(\operatorname{Ind}_{P}^{G}Z\simeq G\times^{P}(Z\times\mathfrak{p}^{\perp})\). In particular, if \(Z=X_{L}\), then \(\operatorname{Ind}_{P}^{G}Z\) coincides with the variety \(Y=G\times^{P}(X_{L}\times\mathfrak{p}^{\perp})\). Finally, we note that \(\operatorname{Ind}_{P}^{G}Z\) is naturally a graded Poisson variety with a Hamiltonian \(G\)-action, see [13, Section 7.3] for details. Set \[\mathfrak{z}:=\mathfrak{X}(\mathfrak{l}\cap[\mathfrak{g},\mathfrak{g}]),\quad\mathfrak{z}^{\circ}=\{\chi\in\mathfrak{z}\,|\,G_{\chi}=L\},\quad\text{and}\quad X_{L}^{e}:=X_{L}\times\mathfrak{z}.\] (To make sense of the stabilizer \(G_{\chi}\) in the definition of \(\mathfrak{z}^{\circ}\), we use the isomorphism \(\mathfrak{g}^{*}\simeq\mathfrak{g}\) induced by the Killing form to identify \(\mathfrak{z}\) with \(\mathfrak{z}(\mathfrak{l}\cap[\mathfrak{g},\mathfrak{g}])\)). The Poisson structure on \(X_{L}\) induces a Poisson structure on \(X_{L}^{e}\). The \(L\)-action on \(X_{L}\) and the natural (trivial) \(L\)-action on \(\mathfrak{z}\) induce a Hamiltonian \(L\)-action on \(X_{L}^{e}\), with moment map \(\mu_{L}^{e}:X_{L}^{e}\to\mathfrak{l}^{*}\) given by \(\mu_{L}^{e}(x,\chi)=\mu_{L}(x)+\chi\), where \(\mu_{L}:X_{L}\to\mathfrak{l}^{*}\) is the moment map for the \(L\)-action on \(X_{L}\) and we regard \(\chi\in\mathfrak{z}\) as an element of \(\mathfrak{l}^{*}\). Hence we can define \[\mathcal{Y}_{\mathfrak{z}}:=\operatorname{Ind}_{P}^{G}X_{L}^{e}=G\times^{P}\left(\mathfrak{z}\times X_{L}\times\mathfrak{p}^{\perp}\right).\] Then we have a natural \(L\)-equivariant projection map \(X_{L}^{e}\twoheadrightarrow\mathfrak{z}\), which induces a projection map \(\mathcal{Y}_{\mathfrak{z}}\twoheadrightarrow G\times^{P}\mathfrak{z}\twoheadrightarrow\mathfrak{z}\) so that \(\mathcal{Y}_{\mathfrak{z}}\) is a graded Poisson scheme over \(\mathfrak{z}\).
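To orient the reader, we record a standard special case and a dimension count; both are included here purely for illustration and are not needed for the proofs below. Suppose \(\widetilde{\mathbb{O}}_{L}=\{0\}\) is the zero orbit, so that \(X_{L}\) is a point with zero moment map. Then \[\operatorname{Ind}_{P}^{G}X_{L}=G\times^{P}\mathfrak{p}^{\perp}\simeq T^{*}(G/P),\] and \(\mathcal{Y}_{\mathfrak{z}}=G\times^{P}(\mathfrak{z}\times\mathfrak{p}^{\perp})\) is the family whose fiber over \(\chi\in\mathfrak{z}\) is the twisted cotangent bundle \(T^{*}_{\chi}(G/P)=G\times^{P}(\chi+\mathfrak{p}^{\perp})\). In general, for any \(Z\) as above we have \[\dim\operatorname{Ind}_{P}^{G}Z=\dim G/P+\dim Z+\dim\mathfrak{p}^{\perp}=\dim Z+2\dim\mathfrak{n},\] which for \(Z=X_{L}\) recovers the Lusztig-Spaltenstein dimension formula \(\dim Y=\dim\mathbb{O}_{L}+2\dim\mathfrak{n}=\dim\operatorname{Ind}_{L}^{G}\mathbb{O}_{L}\), consistent with the fact that the map \(\widetilde{\mu}:Y\to\overline{\mathbb{O}}\) of Section 2.3 is generically finite.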
Note that we have a Poisson isomorphism \[\mathcal{Y}_{\mathfrak{z}}\times_{\mathfrak{z}}\{0\}\simeq Y=G\times^{P}\left(X_{L}\times\mathfrak{p}^{\perp}\right)\] and hence \(\mathcal{Y}_{\mathfrak{z}}\) is a graded Poisson deformation of the \(\mathbb{Q}\)-factorial terminalization \(Y\) of \(X\). Set \(\mathcal{X}_{\mathfrak{z}}:=\operatorname{Spec}(\mathbb{C}[\mathcal{Y}_{\mathfrak{z}}])\). This is a graded affine Poisson scheme over \(\mathfrak{z}\). **Proposition 3.29** ([1, Proposition 7.2.2]).: _The following are true:_ 1. _The universal graded deformation_ \(\mathcal{Y}_{univ}\) _of_ \(Y\) _in Theorem_ 3.27 _can be identified with_ \(\mathcal{Y}_{\mathfrak{z}}\) _over_ \(\mathfrak{z}\) _as graded Poisson deformations of_ \(Y\) _under the isomorphism_ \(\eta:\mathfrak{z}\xrightarrow{\sim}\mathfrak{P}\) _of Proposition_ 3.18_._ 2. _The isomorphism_ \(\mathcal{Y}_{\mathfrak{z}}\simeq\mathcal{Y}_{univ}\) _in (i) induces a graded_ \(G\)_-equivariant isomorphism_ \(\mathcal{X}_{\mathfrak{z}}\simeq\mathcal{X}_{univ}\times_{\mathfrak{P}/W}\mathfrak{P}\) _of Poisson schemes under the isomorphism_ \(\eta:\mathfrak{z}\xrightarrow{\sim}\mathfrak{P}\) _(cf. Theorem_ 3.27_, (i))._ 3. _Under the isomorphism_ \(\eta:\mathfrak{z}\xrightarrow{\sim}\mathfrak{P}\)_, the open subset_ \(\mathfrak{P}^{reg}\subset\mathfrak{P}\) _in Theorem_ 3.28 _corresponds to the open subset_ \(\mathfrak{z}^{\circ}\subset\mathfrak{z}\)_._ Proof.: (i) is [1, Proposition 7.2.2, (iii)]; (ii) follows from Theorem 3.27; (iii) is [1, Lemma 7.2.4, (i)]. We can describe the base change of \(\mathcal{X}_{\mathfrak{z}}\) and \(\mathcal{Y}_{\mathfrak{z}}\) along the open inclusion \(\mathfrak{z}^{\circ}\hookrightarrow\mathfrak{z}\) explicitly as follows. Define open subschemes \[(X_{L}^{e})^{\circ}:=X_{L}\times\mathfrak{z}^{\circ}\subset X_{L}^{e},\quad\mathcal{X}_{\mathfrak{z}}^{\circ}:=\mathcal{X}_{\mathfrak{z}}\times_{\mathfrak{z}}\mathfrak{z}^{\circ}\subset\mathcal{X}_{\mathfrak{z}}\quad\text{and}\quad\mathcal{Y}_{\mathfrak{z}}^{\circ}:=\mathcal{Y}_{\mathfrak{z}}\times_{\mathfrak{z}}\mathfrak{z}^{\circ}\subset\mathcal{Y}_{\mathfrak{z}}.\] Note that \((X_{L}^{e})^{\circ}\) is stable under the \(L\)-action on \(X_{L}^{e}\). Then \[\mathcal{Y}_{\mathfrak{z}}^{\circ}=G\times^{P}\left((X_{L}^{e})^{\circ}\times_{\mathfrak{l}^{*}}\mathfrak{n}^{\perp}\right)=G\times^{P}\left(\mathfrak{z}^{\circ}\times X_{L}\times\mathfrak{p}^{\perp}\right).\] There is a canonical \(G\times\mathbb{C}^{\times}\)-equivariant isomorphism \[G\times^{L}(X_{L}^{e})^{\circ}=G\times^{P}(N\times(X_{L}^{e})^{\circ})\xrightarrow{\sim}\mathcal{Y}_{\mathfrak{z}}^{\circ}=G\times^{P}\left((X_{L}^{e})^{\circ}\times_{\mathfrak{l}^{*}}\mathfrak{n}^{\perp}\right) \tag{3.8.1}\] induced by the isomorphism \[N\times(X_{L}^{e})^{\circ} \xrightarrow{\sim}(X_{L}^{e})^{\circ}\times_{\mathfrak{l}^{*}}\mathfrak{n}^{\perp}\] \[(n,x^{e}) \longmapsto n.(x^{e},\mu_{L}^{e}(x^{e})) \tag{3.8.2}\] for any \(n\in N\) and \(x^{e}\in(X_{L}^{e})^{\circ}\). Here we regard \(\mu_{L}^{e}(x^{e})\) as a vector in \(\mathfrak{n}^{\perp}\) using the \(L\)-equivariant section \(\mathfrak{l}^{*}\hookrightarrow\mathfrak{n}^{\perp}\) of the natural projection \(\mathfrak{n}^{\perp}\twoheadrightarrow\mathfrak{l}^{*}\) induced by the triangular decomposition \(\mathfrak{g}=\mathfrak{n}\oplus\mathfrak{l}\oplus\mathfrak{n}^{-}\).
This means that, even though \(\mathcal{Y}_{\mathfrak{z}}\) depends on the choice of the parabolic \(P\), over \(\mathfrak{z}^{\circ}\) there are canonical isomorphisms between the various \(\mathcal{Y}_{\mathfrak{z}}^{\circ}\) for different \(P\) which preserve the Hamiltonian actions. Moreover, by Proposition 3.29, the map \(\mathcal{Y}_{\mathfrak{z}}^{\circ}\to\mathcal{X}_{\mathfrak{z}}^{\circ}\) is an isomorphism. ### Graded Poisson automorphisms of conical symplectic singularities Let \(X\) be an affine symplectic singularity and define the open subvariety \(X^{\circ}\subset X\) and its unique \(\mathbb{Q}\)-factorial terminalization \(\widetilde{X}^{\circ}\) as in Section 3.7. Let \(\theta\) be a Poisson automorphism of \(X\). Note that \(\theta\) preserves \(X^{\circ}\), hence \(\theta\) lifts uniquely to a Poisson automorphism \(\tilde{\theta}\) of \(\widetilde{X}^{\circ}\), so that the following diagram commutes. Correspondingly, we have a commutative diagram of functors. By the (formal) universality of \(\operatorname{PD}_{X^{\circ}}\) and \(\operatorname{PD}_{\widetilde{X}^{\circ}}\), this in turn corresponds to a commutative diagram (3.9.1) of completions of affine schemes at their distinguished points. **Proposition 3.30**.: _Assume that \(X\) is a conical symplectic singularity and \(\theta\) is a graded (i.e., \(\mathbb{C}^{\times}\)-equivariant) Poisson automorphism of \(X\). Then the commutative diagram (3.9.1) can be obtained by completion from a commutative diagram of linear spaces_ (3.9.2) _where the top horizontal map is linear and the vertical ones are quotient maps. The map \(\mathfrak{P}\to\mathfrak{P}\) is given by the pullback map \((\tilde{\theta}^{-1})^{*}:H^{2}(\widetilde{X}^{\circ},\mathbb{C})\to H^{2}(\widetilde{X}^{\circ},\mathbb{C})\) induced by \(\tilde{\theta}^{-1}\)._ Proof.: \(X^{\circ}\) is \(\mathbb{C}^{\times}\)-stable and the \(\mathbb{C}^{\times}\)-action lifts to a \(\mathbb{C}^{\times}\)-action on \(\widetilde{X}^{\circ}\). These \(\mathbb{C}^{\times}\)-actions induce actions on \(\mathfrak{P}\) and \(\mathfrak{P}/W\) and on their completions, so that the morphisms in (3.9.1) are \(\mathbb{C}^{\times}\)-equivariant. By taking the \(\mathbb{C}^{\times}\)-finite part of the coordinate algebras, we can algebraize the diagram (3.9.1) and get (3.9.2), whose morphisms are all \(\mathbb{C}^{\times}\)-equivariant. Let \(\gamma:\mathfrak{P}\to\mathfrak{P}\) denote the top horizontal map in (3.9.2). Since \(\mathfrak{P}\) is a vector space, the Zariski tangent space \(T_{0}\mathfrak{P}\) of \(\mathfrak{P}\) at \(0\) is canonically identified with \(\mathfrak{P}\) itself. Under this identification, the differential \(d\gamma:T_{0}\mathfrak{P}\to T_{0}\mathfrak{P}\) of \(\gamma\) coincides with \(\gamma\) itself. Indeed, by Theorem 3.27(i), \(\mathbb{C}^{\times}\) acts linearly on \(\mathfrak{P}\) by \(t.v=t^{d}v\), so \(\gamma\) being \(\mathbb{C}^{\times}\)-equivariant just means that \(\gamma\) commutes with scalar multiplication on \(\mathfrak{P}\), and hence \[d\gamma(v)=\frac{d}{dt}\Big|_{t=0}\gamma(tv)=\frac{d}{dt}\Big|_{t=0}[t\gamma(v)]=\gamma(v),\quad\forall\,v\in T_{0}\mathfrak{P}\simeq\mathfrak{P}.\] In particular, \(\gamma\) is a linear automorphism of \(\mathfrak{P}\).
On the other hand, \(T_{0}\mathfrak{P}=\operatorname{PD}_{\widetilde{X}^{\circ}}(\mathbb{C}\{\epsilon\})\simeq H^{2}(\widetilde{X}^{\circ},\mathbb{C})\) and \(d\gamma=(\tilde{\theta}^{-1})^{*}\). For any graded Poisson variety \(X\), let \(\operatorname{Aut}(X)\) denote the group of graded Poisson automorphisms of \(X\). Note that if \(X=\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}])\) for \(\widetilde{\mathbb{O}}\in\mathsf{Cov}(G)\), then \(\operatorname{Aut}(X)=\operatorname{Aut}(\widetilde{\mathbb{O}})\). For a general conical symplectic singularity \(X\), Proposition 3.30 yields an action of \(\operatorname{Aut}(X)\) on \(\mathfrak{P}=H^{2}(\widetilde{X}^{\circ},\mathbb{C})\). **Proposition 3.31**.: _Let \(X\) be a conical symplectic singularity. Then the action of \(\operatorname{Aut}(X)\) on \(\mathfrak{P}=\mathfrak{P}^{X}\) enjoys the following properties:_ 1. _It normalizes_ \(W=W^{X}\subset GL(\mathfrak{P})\)_, so that the quotient map_ \(\mathfrak{P}\twoheadrightarrow\mathfrak{P}/W\) _is_ \(\operatorname{Aut}(X)\)_-equivariant._ 2. _It lifts to an_ \(\operatorname{Aut}(X)\)_-action on_ \(\mathcal{X}_{\mathfrak{P}}\) _by graded Poisson automorphisms, which normalizes the_ \(W\)_-action. The induced action on_ \(X_{0}\simeq X\) _is just the defining action of_ \(\operatorname{Aut}(X)\) _on_ \(X\)_._ Proof.: (i) is an immediate consequence of Proposition 3.30. For (ii), note that by the universal property we have a natural \(\operatorname{Aut}(X)\)-action on the universal family \(\mathcal{X}_{univ}\) over \(\mathfrak{P}/W\) by graded Poisson automorphisms, and this action is compatible with the \(\operatorname{Aut}(X)\)-action on \(\mathfrak{P}/W\). Then by (i), the fiber product \(\mathcal{X}_{\mathfrak{P}}=\mathcal{X}_{univ}\times_{\mathfrak{P}/W}\mathfrak{P}\) admits an action of \(\operatorname{Aut}(X)\). Now \(W\) can be identified with the group of graded Poisson automorphisms of \(\mathcal{X}_{\mathfrak{P}}\) whose restriction to the central fiber \(\mathcal{X}_{\mathfrak{P}}\times_{\mathfrak{P}}\{0\}\simeq X\) is the identity map (this will appear in [1, Section 7], see also [11]). Clearly the \(\operatorname{Aut}(X)\)-action on \(\mathcal{X}_{\mathfrak{P}}\) normalizes \(W\). We conclude this section with an easy lemma. **Lemma 3.32**.: _Let \(X_{1}\) and \(X_{2}\) be two affine symplectic singularities and define open subvarieties \(X_{i}^{\circ}\subset X_{i}\), \(i=1,2\), as in Section 3.7. Suppose \(p:X_{1}\to X_{2}\) is a finite almost etale (Poisson) morphism, such that its restriction \(p|_{X_{1}^{\circ}}:X_{1}^{\circ}\to X_{2}^{\circ}\) is a Galois covering with Galois group \(\Gamma\). Then \(p\) induces a canonical injective linear map \(p^{*}:\mathfrak{P}^{X_{2}}\hookrightarrow\mathfrak{P}^{X_{1}}\) whose image is the space \((\mathfrak{P}^{X_{1}})^{\Gamma}\) of \(\Gamma\)-fixed vectors of \(\mathfrak{P}^{X_{1}}\)._ Proof.: Let \(\widetilde{X}_{2}^{\circ}\to X_{2}^{\circ}\) be the unique (smooth) \(\mathbb{Q}\)-factorial terminalization of \(X_{2}^{\circ}\) and set \(\widetilde{X}_{1}^{\circ}=\widetilde{X}_{2}^{\circ}\times_{X_{2}^{\circ}}X_{1}^{\circ}\to\widetilde{X}_{2}^{\circ}\) to be the base change of \(\widetilde{X}_{2}^{\circ}\to X_{2}^{\circ}\) along the morphism \(p|_{X_{1}^{\circ}}:X_{1}^{\circ}\to X_{2}^{\circ}\). Then \(\widetilde{X}_{1}^{\circ}\to\widetilde{X}_{2}^{\circ}\) is also a Galois covering with Galois group \(\Gamma\) and the canonical projection \(\widetilde{X}_{1}^{\circ}\to X_{1}^{\circ}\) is a (smooth) \(\mathbb{Q}\)-factorial terminalization of \(X_{1}^{\circ}\).
Therefore the pullback map \(p^{*}:H^{2}(\widetilde{X}_{2}^{\circ},\mathbb{C})\to H^{2}(\widetilde{X}_{1}^{\circ},\mathbb{C})\) satisfies the claim. ### The extended Namikawa Weyl groups of nilpotent covers Fix all of the notation of Section 3.8, e.g. \(\widetilde{\mathbb{O}}\), \(X\), \((L,\widetilde{\mathbb{O}}_{L})\), \(X_{L}\), \(\mathfrak{z}\), and so on. In this section, we will give an explicit description of the Namikawa Weyl group \(W(\widetilde{\mathbb{O}}):=W^{X}\) in terms of \((L,\widetilde{\mathbb{O}}_{L})\). This will require some additional notation. Let \(N_{G}(L)\) denote the normalizer of \(L\) in \(G\). Let \(\operatorname{Ad}^{*}\) denote the co-adjoint action of \(N_{G}(L)\) on \(\mathfrak{l}^{*}\) and let \(\mu_{L}:X_{L}\to\mathfrak{l}^{*}\) denote the moment map. Consider the group \[N_{G}(L,\widetilde{\mathbb{O}}_{L}):=\{(n,\zeta)\in N_{G}(L)\times\operatorname{Aut}(X_{L})\,|\,\operatorname{Ad}^{*}(n)\circ\mu_{L}=\mu_{L}\circ\zeta\}.\] We can regard \(L\) as a normal subgroup of \(N_{G}(L,\widetilde{\mathbb{O}}_{L})\) via the natural embedding \[L\hookrightarrow N_{G}(L,\widetilde{\mathbb{O}}_{L}),\qquad l\mapsto(l,l).\] **Definition 3.33**.: _The extended Namikawa-Weyl group associated to \(\widetilde{\mathbb{O}}\) is the finite group_ \[\widetilde{W}(\widetilde{\mathbb{O}}):=N_{G}(L,\widetilde{\mathbb{O}}_{L})/L.\] We note that \(\widetilde{W}(\widetilde{\mathbb{O}})\) acts on \(\mathfrak{z}=\mathfrak{X}(\mathfrak{l}\cap[\mathfrak{g},\mathfrak{g}])\) via the natural map \(\widetilde{W}(\widetilde{\mathbb{O}})\to N_{G}(L)/L\). Next, we will describe an action of \(\widetilde{W}(\widetilde{\mathbb{O}})\) on \(\mathcal{X}_{\mathfrak{z}}\). For a general graded Poisson variety \(X\) with a Hamiltonian \(G\)-action that commutes with the \(\mathbb{C}^{\times}\)-action, let \(\operatorname{Aut}^{G}(X)\subset\operatorname{Aut}(X)\) denote the subgroup consisting of \(G\times\mathbb{C}^{\times}\)-equivariant Poisson automorphisms of \(X\). Note that if \(X=\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}])\) for some \(\widetilde{\mathbb{O}}\in\mathsf{Cov}(G)\), we have \(\operatorname{Aut}^{G}(X)=\operatorname{Aut}^{G}(\widetilde{\mathbb{O}})=\operatorname{Aut}(\widetilde{\mathbb{O}},\mathbb{O})\) by Example 3.8. **Theorem 3.34**.: _With notations as in Section 3.8, the following are true:_ 1. _There is a natural action of_ \(\widetilde{W}(\widetilde{\mathbb{O}})\) _on_ \(\mathcal{X}_{\mathfrak{z}}\) _by_ \(G\times\mathbb{C}^{\times}\)_-equivariant Poisson automorphisms preserving the moment map and lifting the action on_ \(\mathfrak{z}\)_. Moreover, the action preserves_ \(\mathcal{X}_{\mathfrak{z}}^{\circ}\) _and its restriction to_ \(\mathcal{X}_{\mathfrak{z}}^{\circ}\) _induces an action on_ \(G\times^{L}(X_{L}^{e})^{\circ}\) _via the isomorphism (_3.8.1_), which descends from the_ \(N_{G}(L,\widetilde{\mathbb{O}}_{L})\)_-action on_ \(G\times^{L}(X_{L}^{e})^{\circ}\) _given explicitly by_ \[[g,x,\beta]\mapsto[gn^{-1},\zeta(x),n.\beta],\] _for any_ \(g\in G\)_,_ \((x,\beta)\in(X_{L}^{e})^{\circ}=X_{L}\times\mathfrak{z}^{\circ}\) _and_ \((n,\zeta)\in N_{G}(L,\widetilde{\mathbb{O}}_{L})\)_. The action of_ \(\widetilde{W}(\widetilde{\mathbb{O}})\) _on_ \(\mathcal{X}_{\mathfrak{z}}\) _is uniquely characterized by this property._ 2.
_Let_ \(\varphi:X_{\chi}\to X_{\chi^{\prime}}\)_,_ \(\chi,\chi^{\prime}\in\mathfrak{z}^{\circ}\)_, be a Hamiltonian isomorphism, i.e., a_ \(G\)_-equivariant Poisson isomorphism intertwining the moment maps to_ \(\mathfrak{g}^{*}\)_. Then_ \(\varphi\) _is induced by a unique element of_ \(\widetilde{W}(\widetilde{\mathbb{O}})\)_._ 3. _There is a short exact sequence of groups_ \[1\to W(\widetilde{\mathbb{O}})\to\widetilde{W}(\widetilde{\mathbb{O}})\to\operatorname{Aut}^{G}(X)\to 1, \tag{3.10.1}\] _where the surjective homomorphism_ \(\widetilde{W}(\widetilde{\mathbb{O}})\twoheadrightarrow\operatorname{Aut}^{G}(X)\) _is given by the restriction of the_ \(\widetilde{W}(\widetilde{\mathbb{O}})\)_-action on_ \(\mathcal{X}_{\mathfrak{z}}\) _described in (i) to the fiber_ \(\mathcal{X}_{\mathfrak{z}}\times_{\mathfrak{z}}\{0\}\simeq X\)_._ Proof.: The last statement of (i) follows from the fact that \(\mathcal{X}_{\mathfrak{z}}\) is reduced and separated. The rest will appear in [1, Section 7], see also [11]. In view of Proposition 3.31, we can form the semi-direct product \[\widetilde{W}_{G}^{X}:=W^{X}\rtimes\operatorname{Aut}^{G}(X).\] This is a finite group which naturally acts on \(\mathfrak{P}\), \(\mathcal{X}_{univ}\) and \(\mathcal{X}_{\mathfrak{P}}\). Note that the definitions of \(\widetilde{W}_{G}^{X}\) and its actions are independent of choices. The following will appear in [1, Section 7], see also [11]. **Proposition 3.35**.: _A choice of a parabolic subgroup \(P=LN\subset G\) determines an isomorphism \(\kappa_{P}:\widetilde{W}(\widetilde{\mathbb{O}})\xrightarrow{\sim}\widetilde{W}_{G}^{X}\), such that_ 1. \(\kappa_{P}\) _is compatible with the short exact sequence (_3.10.1_), i.e., it restricts to an isomorphism from_ \(W(\widetilde{\mathbb{O}})\subset\widetilde{W}(\widetilde{\mathbb{O}})\) _onto_ \(W^{X}\subset\widetilde{W}_{G}^{X}\) _and the induced map between the quotients is the identity map of_ \(\operatorname{Aut}^{G}(X)\)_. In particular, (_3.10.1_) splits._ 2. _The isomorphism_ \(\eta:\mathfrak{z}\xrightarrow{\sim}\mathfrak{P}\) _intertwines the action of_ \(\widetilde{W}(\widetilde{\mathbb{O}})\) _on_ \(\mathfrak{z}\) _and the action of_ \(\widetilde{W}_{G}^{X}\) _on_ \(\mathfrak{P}\) _under the identification_ \(\kappa_{P}:\widetilde{W}(\widetilde{\mathbb{O}})\xrightarrow{\sim}\widetilde{W}_{G}^{X}\)_._ ### Extended Namikawa Weyl group vs parabolic induction Fix all of the notation of Section 3.8, e.g. \(\widetilde{\mathbb{O}}\), \(X\), \((L,\widetilde{\mathbb{O}}_{L})\), \(X_{L}\), \(\mathfrak{z}\), and so on. Suppose \(M\) is a Levi subgroup of \(G\) containing \(L\). Set \(\widetilde{\mathbb{O}}_{M}=\operatorname{Bind}_{L}^{M}\widetilde{\mathbb{O}}_{L}\) and \(X_{M}=\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{M}])\). We want to compare the Namikawa spaces and (extended) Namikawa groups of \(\widetilde{\mathbb{O}}\) and \(\widetilde{\mathbb{O}}_{M}\). To simplify the statements, we assume in this section that \(G\) is semisimple. Choose parabolic subgroups \(P=LN\subset G\) and \(Q=MU\subset G\), such that \(L\subset M\) and \(P\subset Q\). Let \(P_{M}=P\cap M=LN_{M}\), a parabolic subgroup of \(M\) with Levi factor \(L\) and unipotent radical \(N_{M}=N\cap M\). Note that \(\mathfrak{n}=\mathfrak{n}_{M}\oplus\mathfrak{u}\).
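For concreteness, here is a small type \(A\) example of this setup (included only as an illustration). Let \(G=SL_{4}\), let \(M\) be the standard Levi subgroup with diagonal blocks of sizes \((2,2)\), and let \(L\subset M\) be the standard Levi subgroup with diagonal blocks of sizes \((1,1,2)\). Taking \(P\) and \(Q\) to be the corresponding block upper-triangular parabolic subgroups, we have \(P\subset Q\); the Lie algebra \(\mathfrak{n}_{M}\) consists of the strictly upper-triangular matrices supported in the first \(2\times 2\) diagonal block, \(\mathfrak{u}\) consists of the matrices supported in the upper-right \(2\times 2\) block, and indeed \(\mathfrak{n}=\mathfrak{n}_{M}\oplus\mathfrak{u}\).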
We have an inclusion \(\mathfrak{X}(\mathfrak{m})\hookrightarrow\mathfrak{X}(\mathfrak{l})\) and a projection \(\mathfrak{X}(\mathfrak{l})\twoheadrightarrow\mathfrak{X}(\mathfrak{l}\cap[\mathfrak{m},\mathfrak{m}])\) induced by restriction of characters. We write \[\mathfrak{z}_{\mathfrak{m}}^{\mathfrak{g}}:=\mathfrak{X}(\mathfrak{m}),\quad\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}}:=\mathfrak{X}(\mathfrak{l}\cap[\mathfrak{m},\mathfrak{m}]).\] Note that we have natural isomorphisms \(\mathfrak{z}\simeq\mathfrak{z}(\mathfrak{l})^{*}\), \(\mathfrak{z}_{\mathfrak{m}}^{\mathfrak{g}}\simeq\mathfrak{z}(\mathfrak{m})^{*}\) and \(\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}}\simeq\mathfrak{z}(\mathfrak{l}\cap[\mathfrak{m},\mathfrak{m}])^{*}\). Moreover, we have a direct sum decomposition \(\mathfrak{z}(\mathfrak{l})=\mathfrak{z}(\mathfrak{m})\oplus\mathfrak{z}(\mathfrak{l}\cap[\mathfrak{m},\mathfrak{m}])\). The latter induces a direct sum decomposition \(\mathfrak{z}=\mathfrak{z}_{\mathfrak{m}}^{\mathfrak{g}}\oplus\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}}\). Set \(\mathcal{Y}_{\mathfrak{z}}:=\operatorname{Ind}_{P}^{G}X_{L}^{e}\), \(\mathcal{Y}_{\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}}}^{M}:=\operatorname{Ind}_{P_{M}}^{M}(X_{L}\times\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}})\), \(\mathcal{Y}_{\mathfrak{z}}^{M}:=\operatorname{Ind}_{P_{M}}^{M}X_{L}^{e}\) and \(Y_{M}=\operatorname{Ind}_{P_{M}}^{M}X_{L}\). As before, set \(\mathcal{X}_{\mathfrak{z}}:=\operatorname{Spec}(\mathbb{C}[\mathcal{Y}_{\mathfrak{z}}])\) and similarly \(\mathcal{X}_{\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}}}^{M}:=\operatorname{Spec}(\mathbb{C}[\mathcal{Y}_{\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}}}^{M}])\), \(\mathcal{X}_{\mathfrak{z}}^{M}:=\operatorname{Spec}(\mathbb{C}[\mathcal{Y}_{\mathfrak{z}}^{M}])\). Then \(\mathcal{Y}_{\mathfrak{z}}^{M}=\mathcal{Y}_{\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}}}^{M}\times\mathfrak{z}_{\mathfrak{m}}^{\mathfrak{g}}\) and \(\mathcal{X}_{\mathfrak{z}}^{M}=\mathcal{X}_{\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}}}^{M}\times\mathfrak{z}_{\mathfrak{m}}^{\mathfrak{g}}\). There is a canonical \(G\times\mathbb{C}^{\times}\)-equivariant isomorphism \[\mathcal{Y}_{\mathfrak{z}}\xrightarrow{\sim}\operatorname{Ind}_{Q}^{G}\mathcal{Y}_{\mathfrak{z}}^{M} \tag{3.11.1}\] or \[\begin{array}{c}G\times^{P}\left(X_{L}^{e}\times_{\mathfrak{l}^{\ast}}\mathfrak{n}^{\perp}\right)\xrightarrow{\sim}G\times^{Q}\left(M\times^{P_{M}}\left(X_{L}^{e}\times_{\mathfrak{l}^{\ast}}(\mathfrak{m}/\mathfrak{n}_{M})^{\ast}\right)\times_{\mathfrak{m}^{\ast}}(\mathfrak{g}/\mathfrak{u})^{\ast}\right)\\ \left[g,x^{e},\alpha\right]\qquad\longmapsto\qquad\left[g,1_{M},x^{e},\pi_{\mathfrak{m}}(\alpha),\iota_{\mathfrak{u}}(\alpha)\right]\end{array} \tag{3.11.2}\] for any \(g\in G\), \(x^{e}\in X_{L}^{e}\) and \(\alpha\in\mathfrak{n}^{\perp}\), where \(1_{M}\) denotes the identity element of \(M\). Here we are using the canonical isomorphism \[\mathfrak{n}^{\perp}\xrightarrow{\sim}(\mathfrak{m}/\mathfrak{n}_{M})^{\ast}\times_{\mathfrak{m}^{\ast}}(\mathfrak{g}/\mathfrak{u})^{\ast},\quad\alpha\mapsto(\pi_{\mathfrak{m}}(\alpha),\iota_{\mathfrak{u}}(\alpha)),\] of \(L\)-representations, which is given by the pullback diagram, where \(\iota_{\mathfrak{u}}\) is the dual map of the projection \(\mathfrak{g}/\mathfrak{u}\twoheadrightarrow\mathfrak{g}/\mathfrak{n}\) and \(\pi_{\mathfrak{m}}\) is the dual map of the inclusion \(\mathfrak{m}/\mathfrak{n}_{M}\hookrightarrow\mathfrak{g}/\mathfrak{n}\).
It can be checked that (3.11.1) is well-defined and intertwines the maps to \(\mathfrak{g}^{\ast}\times\mathfrak{z}\). Recall that \(\mathcal{Y}_{\mathfrak{z}}\times_{\mathfrak{z}}\{0\}\) is isomorphic to \(Y=G\times^{P}(X_{L}\times\mathfrak{p}^{\perp})\). Therefore restricting the isomorphism (3.11.1) to the fibers over \(0\in\mathfrak{z}\) gives an isomorphism \[Y=G\times^{P}(X_{L}\times\mathfrak{p}^{\perp})\xrightarrow{\sim}\operatorname{Ind}_{Q}^{G}Y_{M}=G\times^{Q}(Y_{M}\times\mathfrak{q}^{\perp}) \tag{3.11.3}\] (cf. [13, Lemma 7.4.1]). Thus we can view \(Y^{reg}\) as a fiber bundle over \(G/Q\) with fiber \(Y_{M}^{reg}\times\mathfrak{q}^{\perp}\), so that there are pullback maps \[H^{2}(G/Q,\mathbb{C})\to H^{2}(Y^{reg},\mathbb{C}),\qquad\mathfrak{P}(\widetilde{\mathbb{O}})=H^{2}(Y^{reg},\mathbb{C})\to\mathfrak{P}(\widetilde{\mathbb{O}}_{M})=H^{2}(Y_{M}^{reg},\mathbb{C}). \tag{3.11.4}\] On the other hand, we have identifications \[H^{2}(G/Q,\mathbb{C})\simeq\mathfrak{X}(\mathfrak{m}),\qquad\mathfrak{P}(\widetilde{\mathbb{O}})\simeq\mathfrak{X}(\mathfrak{l}),\qquad\mathfrak{P}(\widetilde{\mathbb{O}}_{M})\simeq\mathfrak{X}(\mathfrak{l}\cap[\mathfrak{m},\mathfrak{m}]), \tag{3.11.5}\] where the latter two are from Proposition 3.18. The following result is Proposition 7.4.2 of [12]. **Proposition 3.36**.: _Under the isomorphisms in (3.11.5), the pullback maps in (3.11.4) correspond to the inclusion \(\mathfrak{X}(\mathfrak{m})\hookrightarrow\mathfrak{X}(\mathfrak{l})\) and the projection \(\mathfrak{X}(\mathfrak{l})\twoheadrightarrow\mathfrak{X}(\mathfrak{l}\cap[\mathfrak{m},\mathfrak{m}])\), respectively._ Next we compare the extended Namikawa groups \(\widetilde{W}(\widetilde{\mathbb{O}})\) and \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\). Note that the natural inclusion \(N_{M}(L)\subset N_{G}(L)\) induces an inclusion \(N_{M}(L,\widetilde{\mathbb{O}}_{L})\subset N_{G}(L,\widetilde{\mathbb{O}}_{L})\) and hence an injective group homomorphism \(\iota_{\widetilde{W}}:\widetilde{W}(\widetilde{\mathbb{O}}_{M})\hookrightarrow\widetilde{W}(\widetilde{\mathbb{O}})\). The restriction of the standard action of \(\widetilde{W}(\widetilde{\mathbb{O}})\) on \(\mathfrak{z}=\mathfrak{z}_{\mathfrak{m}}^{\mathfrak{g}}\oplus\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}}\) to \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\) is the product of the standard action of \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\) on \(\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}}\) and the trivial one on \(\mathfrak{z}_{\mathfrak{m}}^{\mathfrak{g}}\). Note that there is also a natural inclusion \(\iota_{\operatorname{Aut}}:\operatorname{Aut}^{M}(X_{M})\hookrightarrow\operatorname{Aut}^{G}(X)\) of groups induced by the natural action of \(\operatorname{Aut}^{M}(X_{M})\simeq\operatorname{Aut}^{M}(\widetilde{\mathbb{O}}_{M})\) on \(\operatorname{Bind}_{M}^{G}\widetilde{\mathbb{O}}_{M}=\operatorname{Bind}_{M}^{G}\operatorname{Bind}_{L}^{M}\widetilde{\mathbb{O}}_{L}=\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}=\widetilde{\mathbb{O}}\) (cf. Proposition 2.6). We will also need the following result, which is immediate from the definition of the \(\operatorname{Aut}(X)\)-action on the Namikawa space \(\mathfrak{P}\).
**Lemma 3.37**.: _The pullback map \(H^{2}(Y^{reg},\mathbb{C})\to H^{2}(Y^{reg}_{M},\mathbb{C})\) in (3.11.4) is \(\operatorname{Aut}^{M}(X_{M})\)-equivariant, where \(\operatorname{Aut}^{M}(X_{M})\) acts on \(H^{2}(Y^{reg},\mathbb{C})\) via the inclusion \(\iota_{\operatorname{Aut}}:\operatorname{Aut}^{M}(X_{M})=\operatorname{Aut}^{M}(\widetilde{\mathbb{O}}_{M})\hookrightarrow\operatorname{Aut}^{G}(\widetilde{\mathbb{O}})\)._ Let \((\mathfrak{z}_{\mathfrak{m}}^{\mathfrak{g}})^{\circ}\subset\mathfrak{z}_{\mathfrak{m}}^{\mathfrak{g}}\) and \((\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}})^{\circ}\subset\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}}\) be the open subsets defined in the same way as \(\mathfrak{z}^{\circ}\) (cf. Section 3.8), but for \(\mathfrak{m}\subset\mathfrak{g}\) and \(\mathfrak{l}\subset\mathfrak{m}\) respectively. Set \[\mathfrak{z}^{\bullet}:=\mathfrak{z}^{\circ}\cap[(\mathfrak{z}_{\mathfrak{m}}^{\mathfrak{g}})^{\circ}\times(\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}})^{\circ}]\subset\mathfrak{z}.\] This is the complement in \(\mathfrak{z}\) to a finite union of hyperplanes, and hence a dense open subset of \(\mathfrak{z}\). Moreover, it is stable under the action of \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\). Set \((X_{L}^{e})^{\bullet}:=X_{L}\times\mathfrak{z}^{\bullet}\), \((\mathcal{Y}_{\mathfrak{z}}^{M})^{\bullet}:=\mathcal{Y}_{\mathfrak{z}}^{M}\times_{\mathfrak{z}}\mathfrak{z}^{\bullet}\) and so on. Then \[(\mathcal{Y}_{\mathfrak{z}}^{M})^{\bullet}=M\times^{P_{M}}\left((X_{L}^{e})^{\bullet}\times_{\mathfrak{l}^{*}}(\mathfrak{m}/\mathfrak{n}_{M})^{*}\right).\] Similarly to the isomorphism (3.8.1), we have an \(M\)-equivariant isomorphism \(M\times^{L}(X_{L}^{e})^{\bullet}\xrightarrow{\sim}(\mathcal{Y}_{\mathfrak{z}}^{M})^{\bullet}\), which induces a \(G\)-equivariant isomorphism \[G\times^{M}\left(M\times^{L}(X_{L}^{e})^{\bullet}\right)\xrightarrow{\sim}G\times^{M}(\mathcal{Y}_{\mathfrak{z}}^{M})^{\bullet}. \tag{3.11.6}\] Again a similar construction gives a \(G\)-equivariant isomorphism \[G\times^{M}(\mathcal{Y}_{\mathfrak{z}}^{M})^{\bullet}\xrightarrow{\sim}\operatorname{Ind}_{Q}^{G}(\mathcal{Y}_{\mathfrak{z}}^{M})^{\bullet}. \tag{3.11.7}\] Composing the isomorphisms (3.11.6) and (3.11.7) gives an isomorphism \[G\times^{M}\left(M\times^{L}(X_{L}^{e})^{\bullet}\right)\xrightarrow{\sim}\operatorname{Ind}_{Q}^{G}(\mathcal{Y}_{\mathfrak{z}}^{M})^{\bullet}. \tag{3.11.8}\] Now we consider the following diagram of \(G\)-equivariant isomorphisms of varieties over \(\mathfrak{g}^{*}\times\mathfrak{z}^{\bullet}\). (3.11.9) The leftmost vertical isomorphism is the tautological one, the middle vertical isomorphism is the base change of the isomorphism \(\mathcal{Y}_{\mathfrak{z}}\xrightarrow{\sim}\operatorname{Ind}_{Q}^{G}\mathcal{Y}_{\mathfrak{z}}^{M}\) in (3.11.1) to \(\mathfrak{g}^{*}\times\mathfrak{z}^{\bullet}\), and the rightmost vertical isomorphism is given by Stein factorization. The top left isomorphism is (3.8.1) and the top right isomorphism is the base change of the natural morphism \(\mathcal{Y}_{\mathfrak{z}}\to\mathcal{X}_{\mathfrak{z}}\) to \(\mathfrak{g}^{*}\times\mathfrak{z}^{\bullet}\). The bottom left isomorphism is (3.11.8), and the bottom right isomorphism is induced by the isomorphism \((\mathcal{Y}_{\mathfrak{z}}^{M})^{\bullet}\xrightarrow{\sim}(\mathcal{X}_{\mathfrak{z}}^{M})^{\bullet}\). Therefore the right square sub-diagram is the base change to \(\mathfrak{g}^{*}\times\mathfrak{z}^{\bullet}\) of the corresponding diagram without the decoration \(\bullet\).
Each variety in (3.11.9) carries a natural action of \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\) which is the restriction of an action on the corresponding variety without the decoration \(\bullet\) and is compatible with the \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\)-action on \(\mathfrak{z}^{\bullet}\) via the projection to \(\mathfrak{z}^{\bullet}\). On the varieties in the top row, \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\) acts via the natural inclusion \(\iota_{\widetilde{W}}:\widetilde{W}(\widetilde{\mathbb{O}}_{M})\hookrightarrow\widetilde{W}(\widetilde{\mathbb{O}})\) and the \(\widetilde{W}(\widetilde{\mathbb{O}})\)-action in Theorem 3.34. On the varieties in the bottom row, the \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\)-actions are induced by the actions on \[M\times^{L}(X_{L}^{e})^{\bullet}\simeq(\mathcal{Y}_{\mathfrak{z}}^{M})^{\bullet}\simeq(\mathcal{X}_{\mathfrak{z}}^{M})^{\bullet}\] by applying Theorem 3.34 again to \(M\), \(L\) and \(\widetilde{\mathbb{O}}_{M}\). Note that \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\) acts trivially on the second factor of \(\mathcal{X}_{\mathfrak{z}}^{M}=\mathcal{X}_{\mathfrak{z}_{\mathfrak{l}}^{\mathfrak{m}}}^{M}\times\mathfrak{z}_{\mathfrak{m}}^{\mathfrak{g}}\). The discussion above and Theorem 3.34(i) prove the following lemma. **Lemma 3.38**.: _The diagram (3.11.9) commutes and all the isomorphisms there are \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\)-equivariant._ Now we are ready to examine the relationship between \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\) and \(\widetilde{W}(\widetilde{\mathbb{O}})\). **Proposition 3.39**.: _The homomorphisms \(\iota_{\widetilde{W}}:\widetilde{W}(\widetilde{\mathbb{O}}_{M})\hookrightarrow\widetilde{W}(\widetilde{\mathbb{O}})\) and \(\iota_{\operatorname{Aut}}:\operatorname{Aut}^{M}(\widetilde{\mathbb{O}}_{M})\hookrightarrow\operatorname{Aut}^{G}(\widetilde{\mathbb{O}})\) fit into the following commutative diagram_ _where the top and the bottom rows are the short exact sequences from (3.10.1)._ Proof.: Note that \(\operatorname{Ind}_{Q}^{G}(\mathcal{X}_{\mathfrak{z}}^{M})\) and \(\mathcal{X}_{\mathfrak{z}}\) are reduced and separated. Therefore by Lemma 3.38, the Stein factorization map \(\operatorname{Ind}_{Q}^{G}(\mathcal{X}_{\mathfrak{z}}^{M})\twoheadrightarrow\mathcal{X}_{\mathfrak{z}}\) is \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\)-equivariant. The commutativity of the diagram then follows immediately. ### Unipotent ideals Let \(G\) be a connected reductive algebraic group and let \(\widetilde{\mathbb{O}}\in\mathsf{Cov}(G)\). Recall the canonical quantization \((\mathcal{A}_{0},\Phi_{0})\) of \(\mathbb{C}[\widetilde{\mathbb{O}}]\), cf. Definition 3.16. **Definition 3.40** (Definition 6.0.1, [13]).: _The unipotent ideal attached to \(\widetilde{\mathbb{O}}\) is the primitive ideal_ \[I(\widetilde{\mathbb{O}}):=\ker\left(\Phi_{0}:U(\mathfrak{g})\to\mathcal{A}_{0}\right)\subset U(\mathfrak{g}).\] _We write \(\gamma(\widetilde{\mathbb{O}})\in\mathfrak{h}^{*}/W\) for its infinitesimal character._ We will need several basic facts about unipotent ideals and their infinitesimal characters. Recall the equivalence relation \(\sim\) on nilpotent covers defined in Section 2.2. **Proposition 3.41** (Propositions 6.5.4, 8.1.1, [13]).: _The following are true:_ 1. _For every_ \(\widetilde{\mathbb{O}}\in\mathsf{Cov}(G)\)_,_ \(I(\widetilde{\mathbb{O}})\subset U(\mathfrak{g})\) _is a completely prime maximal ideal with associated variety_ \(\overline{\mathbb{O}}\)_._ 2.
_For_ \(\widetilde{\mathbb{O}},\widehat{\mathbb{O}}\in\mathsf{Cov}(G)\)_, we have_ \[I(\widetilde{\mathbb{O}})=I(\widehat{\mathbb{O}})\iff\widetilde{\mathbb{O}}\sim\widehat{\mathbb{O}}.\] 3. _Suppose_ \(L\subset G\) _is a Levi subgroup. Let_ \(\widetilde{\mathbb{O}}_{L}\in\mathsf{Cov}(L)\) _and_ \(\widetilde{\mathbb{O}}=\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}\)_. Then_ \[\gamma(\widetilde{\mathbb{O}}_{L})=\gamma(\widetilde{\mathbb{O}})\] _in_ \(\mathfrak{h}^{*}/W\)_._ Proof.: The maximality in (i) is [13, Theorem 5.0.1]. The rest of (i) is [13, Proposition 6.1.2]. (ii) is [13, Proposition 6.5.4]. (iii) is [13, Proposition 8.1.1]. **Corollary 3.42**.: \(\operatorname{Bind}_{L}^{G}\) _descends to a map on equivalence classes_ \[\operatorname{Bind}_{L}^{G}:\mathsf{Cov}(L)/\sim\,\to\mathsf{Cov}(G)/\sim.\] Proof.: Suppose \(\widetilde{\mathbb{O}}_{L}\sim\widehat{\mathbb{O}}_{L}\) in \(\mathsf{Cov}(L)\) and let \(\widetilde{\mathbb{O}}=\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}\), \(\widehat{\mathbb{O}}=\operatorname{Bind}_{L}^{G}\widehat{\mathbb{O}}_{L}\). By (ii) and (iii) of Proposition 3.41, we have \[\gamma(\widetilde{\mathbb{O}})=\gamma(\widehat{\mathbb{O}}).\] By (i) of Proposition 3.41, \(I(\widetilde{\mathbb{O}})\) and \(I(\widehat{\mathbb{O}})\) are maximal ideals in \(U(\mathfrak{g})\), and hence uniquely determined by their infinitesimal characters. So in fact \[I(\widetilde{\mathbb{O}})=I(\widehat{\mathbb{O}}),\] and therefore \(\widetilde{\mathbb{O}}\sim\widehat{\mathbb{O}}\) by Proposition 3.41(ii). For the next proposition, choose a \(W\)-invariant inner product on \(\mathfrak{h}^{*}\) and write \(\|\cdot\|\) for the associated norm. **Proposition 3.43**.: _Suppose \(\widehat{\mathbb{O}}\to\widetilde{\mathbb{O}}\) is a finite \(G\)-equivariant Galois cover. Then_ 1. \(\|\gamma(\widehat{\mathbb{O}})\|\geqslant\|\gamma(\widetilde{\mathbb{O}})\|\)_._ 2. _There is equality in (i) if and only if the covering is almost etale (in which case,_ \(\gamma(\widehat{\mathbb{O}})\) _and_ \(\gamma(\widetilde{\mathbb{O}})\) _are conjugate under_ \(W\)_)._ Proof.: Let \(\mathcal{A}_{0}(\widetilde{\mathbb{O}})\) denote the canonical quantization of \(\mathbb{C}[\widetilde{\mathbb{O}}]\) and let \(\Gamma=\operatorname{Aut}(\widehat{\mathbb{O}},\widetilde{\mathbb{O}})\). Then \(\Gamma\) acts on \(\mathcal{A}_{0}(\widehat{\mathbb{O}})\) by filtered algebra automorphisms, and there is an isomorphism of filtered quantizations \(\mathcal{A}_{0}(\widehat{\mathbb{O}})^{\Gamma}\simeq\mathcal{A}_{\epsilon}(\widetilde{\mathbb{O}})\) for some element \(\epsilon\in\mathfrak{P}(\widetilde{\mathbb{O}})\), see [13, Proposition 5.3.1]. Choose \((L,\widetilde{\mathbb{O}}_{L})\in\mathsf{Cov}_{0}(G)\) such that \(\widetilde{\mathbb{O}}=\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}\). Let \(\delta\) denote the image of \(\epsilon\) under the isomorphism \(\mathfrak{P}(\widetilde{\mathbb{O}})\simeq\mathfrak{X}(\mathfrak{l}\cap[\mathfrak{g},\mathfrak{g}])\). Then by [1, Proposition 8.1.3] \[\gamma(\widetilde{\mathbb{O}})=\gamma(\widetilde{\mathbb{O}}_{L}),\qquad\gamma(\widehat{\mathbb{O}})=\delta+\gamma(\widetilde{\mathbb{O}}_{L}),\] where \(\gamma(\widetilde{\mathbb{O}}_{L})\in(\mathfrak{h}/\mathfrak{z}(\mathfrak{l}))^{*}\).
Since \((\mathfrak{h}/\mathfrak{z}(\mathfrak{l}))^{*}\) and \(\mathfrak{z}(\mathfrak{l})^{*}\) are orthogonal subspaces of \(\mathfrak{h}^{*}\), we have \[\|\gamma(\widehat{\mathbb{O}})\|^{2}=\|\gamma(\widetilde{\mathbb{O}}_{L})+\delta\|^{2}=\|\gamma(\widetilde{\mathbb{O}}_{L})\|^{2}+\|\delta\|^{2}\geq\|\gamma(\widetilde{\mathbb{O}})\|^{2}.\] This proves (i). If the covering \(\widehat{\mathbb{O}}\to\widetilde{\mathbb{O}}\) is almost etale, then \(\gamma(\widetilde{\mathbb{O}})=\gamma(\widehat{\mathbb{O}})\) by Proposition 3.41(ii). Otherwise, it is clear from the proof of [1, Proposition 5.3.1] that \(\delta\neq 0\), and therefore the inequality in (i) is strict. This proves (ii). **Corollary 3.44**.: _Let \(\widetilde{\mathbb{O}}_{L},\widehat{\mathbb{O}}_{L}\in\mathsf{Cov}(L)\) and suppose \(p:\widehat{\mathbb{O}}_{L}\to\widetilde{\mathbb{O}}_{L}\) is a Galois cover. Then \(p\) is almost etale if and only if \(\operatorname{Bind}_{L}^{G}\widehat{\mathbb{O}}_{L}\sim\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}\)._ Proof.: Let \(\widetilde{\mathbb{O}}=\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}\) and \(\widehat{\mathbb{O}}=\operatorname{Bind}_{L}^{G}\widehat{\mathbb{O}}_{L}\). If \(p\) is almost etale, then \(\widehat{\mathbb{O}}_{L}\sim\widetilde{\mathbb{O}}_{L}\), and so \(\widehat{\mathbb{O}}\sim\widetilde{\mathbb{O}}\) by Corollary 3.42. Conversely, suppose \(p\) is not almost etale. Then by Proposition 3.43, there is a strict inequality \[\|\gamma(\widehat{\mathbb{O}}_{L})\|>\|\gamma(\widetilde{\mathbb{O}}_{L})\|.\] Since the norm is \(W\)-invariant, this implies that the elements \(\gamma(\widehat{\mathbb{O}}_{L})\) and \(\gamma(\widetilde{\mathbb{O}}_{L})\) are not conjugate under \(W\). But by Proposition 3.41(iii), \(\gamma(\widehat{\mathbb{O}})=\gamma(\widehat{\mathbb{O}}_{L})\) and \(\gamma(\widetilde{\mathbb{O}})=\gamma(\widetilde{\mathbb{O}}_{L})\). So by Proposition 3.41(ii), \(\widehat{\mathbb{O}}\) and \(\widetilde{\mathbb{O}}\) are in different equivalence classes. To conclude this section, we note that some of the unipotent ideals of Definition 3.40 (particularly in classical types) have previously appeared in the work of various experts, see e.g. [10] and [1]. The original contribution in [1] is to provide a _uniform_ definition of unipotent ideals. ### Unipotent bimodules Let \(G\) be a connected reductive algebraic group. A _\(G\)-equivariant Harish-Chandra \(U(\mathfrak{g})\)-bimodule_ is a finitely generated \(U(\mathfrak{g})\)-bimodule \(X\) such that the adjoint action of \(\mathfrak{g}\) on \(X\) integrates to a rational action of \(G\). Let \(\operatorname{HC}^{G}(U(\mathfrak{g}))\) denote the category of \(G\)-equivariant Harish-Chandra \(U(\mathfrak{g})\)-bimodules (with \(U(\mathfrak{g})\)-bimodule homomorphisms). If \(I\subset U(\mathfrak{g})\) is a two-sided ideal, let \(\operatorname{HC}^{G}(U(\mathfrak{g})/I)\) denote the full subcategory of \(\operatorname{HC}^{G}(U(\mathfrak{g}))\) consisting of bimodules \(X\in\operatorname{HC}^{G}(U(\mathfrak{g}))\) such that \(IX=XI=0\). This is a monoidal category under \(\otimes_{U(\mathfrak{g})}\). **Definition 3.45** (Definition 6.0.2, [1]).: _Let \(\widetilde{\mathbb{O}}\in\mathsf{Cov}(G)\).
A unipotent bimodule attached to \(\widetilde{\mathbb{O}}\) is an irreducible object in \(\operatorname{HC}^{G}(U(\mathfrak{g})/I(\widetilde{\mathbb{O}}))\)._ We conclude this subsection by recalling a description of the category \(\operatorname{HC}^{G}(U(\mathfrak{g})/I(\widetilde{\mathbb{O}}))\) given in [1, Section 6]. Recall from Lemma 2.5 that for any nilpotent cover \(\widetilde{\mathbb{O}}\in\mathsf{Cov}(G)\), there is a unique maximal element \(\widetilde{\mathbb{O}}_{max}\) in the equivalence class of \(\widetilde{\mathbb{O}}\). Define \[\Gamma(\widetilde{\mathbb{O}}):=\operatorname{Aut}(\widetilde{\mathbb{O}}_{max},\mathbb{O}). \tag{3.13.1}\] Note that \(\Gamma(\widetilde{\mathbb{O}})\) is a finite group. **Theorem 3.46** (Theorem 6.6.2, [13]).: _Let \(\widetilde{\mathbb{O}}\in\mathsf{Cov}(G)\). Then there is an equivalence of monoidal categories_ \[\operatorname{HC}^{G}(U(\mathfrak{g})/I(\widetilde{\mathbb{O}}))\simeq\Gamma(\widetilde{\mathbb{O}})\operatorname{-mod}.\] ### Birational induction preserves maximal covers In this section, we will show that birational induction takes maximal covers to maximal covers. First we compare the preimages of covers in the same equivalence class under the map \(\operatorname{Bind}:\mathsf{Cov}_{0}(G)\to\mathsf{Cov}(G)\). **Proposition 3.47**.: _Suppose \(\widetilde{\mathbb{O}},\widehat{\mathbb{O}}\in\mathsf{Cov}(G)\) and \(p:\widetilde{\mathbb{O}}\to\widehat{\mathbb{O}}\) is a finite Galois \(G\)-equivariant almost etale covering map (cf. Section 2.2). Suppose \(\widetilde{\mathbb{O}}=\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}\), where \(L\) is a Levi subgroup of \(G\) and \(\widetilde{\mathbb{O}}_{L}\in\mathsf{Cov}(L)\) is birationally rigid. Then there exist a Levi subgroup \(M\subset G\) containing \(L\) and a birationally rigid cover \(\widehat{\mathbb{O}}_{M}\in\mathsf{Cov}(M)\), such that_ 1. \(\operatorname{Bind}_{M}^{G}\widehat{\mathbb{O}}_{M}\simeq\widehat{\mathbb{O}}\)_._ 2. _There is a finite Galois almost etale_ \(M\)_-equivariant covering map_ \[p_{M}:\widetilde{\mathbb{O}}_{M}:=\operatorname{Bind}_{L}^{M}\widetilde{\mathbb{O}}_{L}\to\widehat{\mathbb{O}}_{M},\] _such that the induced covering map_ \(\operatorname{Bind}_{M}^{G}(p_{M}):\widetilde{\mathbb{O}}=\operatorname{Bind}_{M}^{G}\widetilde{\mathbb{O}}_{M}\to\operatorname{Bind}_{M}^{G}\widehat{\mathbb{O}}_{M}\) _corresponds to_ \(p\) _under the isomorphism in (i)._ Proof.: Without loss of generality, we may assume \(G\) is semisimple. Set \(\Gamma=\operatorname{Aut}(\widetilde{\mathbb{O}},\widehat{\mathbb{O}})\). Then \(\Gamma\) is a subgroup of \(\operatorname{Aut}(\widetilde{\mathbb{O}},\mathbb{O})\) and hence, by Proposition 3.35(i), can be regarded as a subgroup of \(\widetilde{W}(\widetilde{\mathbb{O}})\) whose intersection with \(W(\widetilde{\mathbb{O}})\) is trivial. Therefore \(\Gamma\) acts on \[\mathfrak{P}(\widetilde{\mathbb{O}})\simeq\mathfrak{X}(\mathfrak{l})=(\mathfrak{l}/[\mathfrak{l},\mathfrak{l}])^{*}=\mathfrak{z}(\mathfrak{l})^{*}\simeq\mathfrak{z}(\mathfrak{l}),\] where the last identification is by the Killing form. Let \(M:=Z_{G}(\mathfrak{z}(\mathfrak{l})^{\Gamma})\) be the centralizer of \(\mathfrak{z}(\mathfrak{l})^{\Gamma}\) in \(G\). Then \(M\) is a Levi subgroup containing \(L\) with Lie algebra \(\mathfrak{m}\) satisfying \(\mathfrak{z}(\mathfrak{m})=\mathfrak{z}(\mathfrak{l})^{\Gamma}\). Note that \(N_{G}(L)\) acts on \(\mathfrak{z}(\mathfrak{l})\) by conjugation.
The subgroup of \(N_{G}(L)\) consisting of elements that fix every vector in \(\mathfrak{z}(\mathfrak{m})\subset\mathfrak{z}(\mathfrak{l})\) is exactly \(N_{M}(L)\). Therefore \(\Gamma\) in fact lies in \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\subset\widetilde{W}(\widetilde{\mathbb{O}})\). Furthermore, since \(\Gamma\) has trivial intersection with \(W(\widetilde{\mathbb{O}})\), it also has trivial intersection with \(W(\widetilde{\mathbb{O}}_{M})\) by Proposition 3.39. Hence \(\Gamma\) is mapped isomorphically onto its image in \(\mathrm{Aut}^{M}(\widetilde{\mathbb{O}}_{M})\), denoted \(\Gamma_{M}\), under the map \(\widetilde{W}(\widetilde{\mathbb{O}}_{M})\twoheadrightarrow\mathrm{Aut}^{M}(\widetilde{\mathbb{O}}_{M})\), and \(\Gamma_{M}\) in turn maps isomorphically onto \(\Gamma\) under the map \(\iota_{\mathrm{Aut}}:\mathrm{Aut}^{M}(\widetilde{\mathbb{O}}_{M})\hookrightarrow\mathrm{Aut}^{G}(\widetilde{\mathbb{O}})\). Therefore \(\Gamma\simeq\Gamma_{M}\) acts on \(\widetilde{\mathbb{O}}_{M}\) freely and \(M\)-equivariantly, and intertwines the moment map \(\widetilde{\mathbb{O}}_{M}\to\mathfrak{m}^{*}\). Thus we can form the quotient \(\widehat{\mathbb{O}}_{M}:=\widetilde{\mathbb{O}}_{M}/\Gamma_{M}\), which is an \(M\)-equivariant nilpotent cover of \(\mathbb{O}_{M}\); let \(p_{M}:\widetilde{\mathbb{O}}_{M}\to\widehat{\mathbb{O}}_{M}\) denote the quotient map. Then \(p_{M}\) is a finite Galois \(M\)-equivariant covering map such that \(\mathrm{Aut}(\widetilde{\mathbb{O}}_{M},\widehat{\mathbb{O}}_{M})=\Gamma_{M}\). By Proposition 2.6(ii), the induced covering map \(\mathrm{Bind}(p_{M}):\widetilde{\mathbb{O}}\to\mathrm{Bind}_{M}^{G}\widehat{\mathbb{O}}_{M}\) is Galois, and the composite map \(\mathrm{Aut}(\widetilde{\mathbb{O}}_{M},\widehat{\mathbb{O}}_{M})\xrightarrow{\sim}\mathrm{Aut}(\widetilde{\mathbb{O}},\mathrm{Bind}_{M}^{G}\widehat{\mathbb{O}}_{M})\hookrightarrow\mathrm{Aut}^{G}(\widetilde{\mathbb{O}})\) is nothing else but the isomorphism \(\Gamma_{M}\xrightarrow{\sim}\Gamma\). Hence \(\mathrm{Bind}_{M}^{G}\widehat{\mathbb{O}}_{M}\simeq\widetilde{\mathbb{O}}/\Gamma\simeq\widehat{\mathbb{O}}\). The claim that \(p_{M}\) is almost etale follows from Corollary 3.44. 

It remains to show that \(\widehat{\mathbb{O}}_{M}\) is birationally rigid. By Proposition 3.18, we have \(\mathfrak{P}(\widetilde{\mathbb{O}}_{M})\simeq(\mathfrak{z}(\mathfrak{l})\cap[\mathfrak{m},\mathfrak{m}])^{*}\). By Proposition 3.36, the map \(\mathfrak{P}(\widetilde{\mathbb{O}})\to\mathfrak{P}(\widetilde{\mathbb{O}}_{M})\) in (3.11.4) corresponds to the restriction map \(r:\mathfrak{z}(\mathfrak{l})^{*}\twoheadrightarrow(\mathfrak{z}(\mathfrak{l})\cap[\mathfrak{m},\mathfrak{m}])^{*}\), which is \(\Gamma\)-equivariant by Lemma 3.37. By the construction of \(M\), the kernel of \(r\) is \(\mathfrak{z}(\mathfrak{m})^{*}=(\mathfrak{z}(\mathfrak{l})^{\Gamma})^{*}=(\mathfrak{z}(\mathfrak{l})^{*})^{\Gamma}=\mathfrak{P}(\widetilde{\mathbb{O}})^{\Gamma}\), hence the action of \(\Gamma_{M}=\Gamma\) on \(\mathfrak{P}(\widetilde{\mathbb{O}}_{M})\) fixes only \(0\). Now by Lemma 3.32, \(\mathfrak{P}(\widehat{\mathbb{O}}_{M})=\mathfrak{P}(\widetilde{\mathbb{O}}_{M})^{\Gamma}=0\), and therefore Proposition 3.18 implies that \(\widehat{\mathbb{O}}_{M}\) is birationally rigid.
**Theorem 3.48**.: _If \(\widehat{\mathbb{O}}_{M}\in\mathsf{Cov}(M)\) is maximal in its equivalence class, then \(\operatorname{Bind}_{M}^{G}\widehat{\mathbb{O}}_{M}\in\mathsf{Cov}(G)\) is maximal in its equivalence class._ 

Proof.: Set \(\widehat{\mathbb{O}}=\operatorname{Bind}_{M}^{G}\widehat{\mathbb{O}}_{M}\). We first claim that the statement can be reduced to the case when \(\widehat{\mathbb{O}}_{M}\) is birationally rigid. Indeed, we can always choose \((L,\widehat{\mathbb{O}}_{L})\in\mathsf{Cov}_{0}(M)\) so that \(\operatorname{Bind}_{L}^{M}\widehat{\mathbb{O}}_{L}=\widehat{\mathbb{O}}_{M}\) and hence \(\operatorname{Bind}_{L}^{G}\widehat{\mathbb{O}}_{L}=\widehat{\mathbb{O}}\). Then \(\widehat{\mathbb{O}}_{L}\) must be maximal in its equivalence class. Otherwise, suppose \(\widehat{\mathbb{O}}_{L}^{max}\) is the maximal cover in the equivalence class \([\widehat{\mathbb{O}}_{L}]\) and there is a nontrivial finite Galois \(L\)-equivariant covering map \(\widehat{\mathbb{O}}_{L}^{max}\to\widehat{\mathbb{O}}_{L}\). By Proposition 2.6(ii), the induced covering map \(\operatorname{Bind}_{L}^{M}\widehat{\mathbb{O}}_{L}^{max}\to\widehat{\mathbb{O}}_{M}\) is nontrivial. But by Corollary 3.42, \(\operatorname{Bind}_{L}^{M}\widehat{\mathbb{O}}_{L}^{max}\) and \(\widehat{\mathbb{O}}_{M}\) are in the same equivalence class. This contradicts the maximality of \(\widehat{\mathbb{O}}_{M}\). Thus we can assume that \(\widehat{\mathbb{O}}_{M}\) is birationally rigid. 

Let \(\widetilde{\mathbb{O}}\) be the maximal cover in the equivalence class \([\widehat{\mathbb{O}}]\) and let \(p:\widetilde{\mathbb{O}}\to\widehat{\mathbb{O}}\) be a Galois covering map. By applying Proposition 3.47 to \(p\), we know that there exist a Levi subgroup \(M^{\prime}\), a birationally rigid cover \(\widehat{\mathbb{O}}_{M^{\prime}}\in\mathsf{Cov}(M^{\prime})\), and a cover \(\widetilde{\mathbb{O}}_{M^{\prime}}\in\mathsf{Cov}(M^{\prime})\), such that \(\operatorname{Bind}_{M^{\prime}}^{G}\widehat{\mathbb{O}}_{M^{\prime}}\simeq\widehat{\mathbb{O}}\) and \(\operatorname{Bind}_{M^{\prime}}^{G}\widetilde{\mathbb{O}}_{M^{\prime}}\simeq\widetilde{\mathbb{O}}\), together with an \(M^{\prime}\)-equivariant almost etale covering map \(p_{M^{\prime}}:\widetilde{\mathbb{O}}_{M^{\prime}}\to\widehat{\mathbb{O}}_{M^{\prime}}\) which induces \(p\). By Proposition 2.7, we can assume \(M^{\prime}=M\) and \(\widehat{\mathbb{O}}_{M^{\prime}}=\widehat{\mathbb{O}}_{M}\), possibly after \(G\)-conjugation. But \(\widehat{\mathbb{O}}_{M}\) is maximal, therefore \(p_{M^{\prime}}\) is an isomorphism and so is \(p\). 

## 4. Main results 

Let \(G\) be a complex connected reductive algebraic group with Langlands dual group \(G^{\vee}\). 

**Definition 4.1**.: _For any \(\mathbb{O}\in\mathsf{Orb}(G)\), the Lusztig cover of \(\mathbb{O}\) is the finite \(G\)-equivariant cover \(\widetilde{\mathbb{O}}\to\mathbb{O}\) corresponding to the kernel of the map \(A(\mathbb{O})\twoheadrightarrow\bar{A}(\mathbb{O})\)._ 

**Remark 4.2**.: _The Lusztig cover is independent of isogeny in the following sense. Suppose \(G^{\prime}\to G\) is a covering group and let \(\mathbb{O}\in\mathsf{Orb}(G)\). Write \(\widetilde{\mathbb{O}}_{\text{Lus}}\) (resp. \(\widetilde{\mathbb{O}}_{\text{Lus}}^{\prime}\)) for the Lusztig cover of \(\mathbb{O}\) associated to \(G\) (resp. \(G^{\prime}\)).
Since \(\bar{A}(\mathbb{O})\) is independent of isogeny, the action of \(G^{\prime}\) on \(\widetilde{\mathbb{O}}_{\text{Lus}}^{\prime}\) descends to an action of \(G\), and \(\widetilde{\mathbb{O}}_{\text{Lus}}^{\prime}\simeq\widetilde{\mathbb{O}}_{\text{Lus}}\) as \(G\)-equivariant covers of \(\mathbb{O}\)._ 

**Proposition 4.3**.: _Suppose \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\) is a special distinguished Lusztig-Achar datum and let \(\mathbb{O}=d_{S}(\mathbb{O}^{\vee},\bar{C})\). Then_ 

(i) \(\widetilde{\mathbb{O}}_{\text{Lus}}\) _is birationally rigid._ 

_Furthermore_ 

(ii) _The restriction of \(d_{S}\) to the set of special distinguished Lusztig-Achar data is injective._ 

Proof.: If \(G\) is a simple classical group, then (i) and (ii) are proved in Section 6.1. If \(G\) is a simple adjoint group of exceptional type, then (i) and (ii) are proved in Section 7.1. To reduce to these cases, it suffices to prove the following: 

(a) If the assertions hold for \(G_{1}\) and \(G_{2}\), they also hold for the product \(G_{1}\times G_{2}\). 

(b) Suppose \(G^{\prime}\to G\) is a covering group. Then the assertions hold for \(G\) if and only if they hold for \(G^{\prime}\). 

For assertion (ii), both (a) and (b) are obvious. Indeed, the sets \(\mathsf{LA}^{*}(G^{\vee})\), \(\mathsf{Orb}(G)\) and the map \(d_{S}:\mathsf{LA}^{*}(G^{\vee})\to\mathsf{Orb}(G)\) are independent of isogeny and \(d_{S}^{G}=d_{S}^{G_{1}}\times d_{S}^{G_{2}}\). For assertion (i), (a) is clear (for essentially the same reasons). For (b), we argue as follows. By Remark 4.2, there is an isomorphism of algebraic varieties \(\widetilde{\mathbb{O}}_{\text{Lus}}\simeq\widetilde{\mathbb{O}}_{\text{Lus}}^{\prime}\). Now (b) follows at once from Proposition 3.23. 

**Remark 4.4**.: _We note that Proposition 4.3 is not true if we drop the assumption that \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\) is special. For example, let \(G\) be the simple adjoint group of type \(E_{7}\) and let \(\mathbb{O}^{\vee}=A_{4}+A_{1}\). Then \(\bar{A}(\mathbb{O}^{\vee})=S_{2}\). Let \(\bar{C}\) denote the nontrivial conjugacy class in \(\bar{A}(\mathbb{O}^{\vee})\). Then \((\mathbb{O}^{\vee},\bar{C})\) is distinguished, but not special. Note that \(\mathbb{O}=d_{S}(\mathbb{O}^{\vee},\bar{C})=A_{3}+A_{2}+A_{1}\) and \(A(\mathbb{O})=1\). So \(\widetilde{\mathbb{O}}_{\text{univ}}=\mathbb{O}\). But \(\mathbb{O}\) is not birationally rigid, see [11, Proposition 3.8.3] (notably, if we replace \(G\) with its simply connected form, then \(A(\mathbb{O})=\mathbb{Z}_{2}\) and hence \(\widetilde{\mathbb{O}}_{\text{univ}}\) is a 2-fold cover of \(\mathbb{O}\). By [11, Proposition 3.9.5], this cover is birationally rigid)._ 

We are now prepared to define our duality map \(D:\mathsf{LA}^{*}(G^{\vee})\to\mathsf{Cov}(G)\). First, consider the map \[D_{0}:\mathsf{LA}^{*}_{0}(G^{\vee})\to\mathsf{Cov}_{0}(G),\qquad D_{0}(L^{\vee},(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}}))=(L,d_{S}^{L}(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})_{Lus})\] This is well-defined by Proposition 4.3(i). We define \(D\) to be the composition \[D:\mathsf{LA}^{*}(G^{\vee})\overset{\text{Sat}^{-1}}{\to}\mathsf{LA}^{*}_{0}(G^{\vee})\overset{D_{0}}{\to}\mathsf{Cov}_{0}(G)\overset{\text{Bind}}{\to}\mathsf{Cov}(G)\] where \(\text{Sat}^{-1}\) is the map of Proposition 2.20(ii). We will sometimes write \(D^{G}\) to indicate the dependence on \(G\). 

**Proposition 4.5**.: _The map_ \[D:\mathsf{LA}^{*}(G^{\vee})\to\mathsf{Cov}(G)\] _has the following properties:_ 

(i) \(D\) _is injective._
(ii) _If \(L\subset G\) is a Levi subgroup, then_ \[D^{G}\circ\text{Sat}_{L^{\vee}}^{G^{\vee}}=\text{Bind}_{L}^{G}\circ D^{L}\] 

(iii) \((\mathbb{O}^{\vee},\bar{C})\) _is distinguished if and only if \(D(\mathbb{O}^{\vee},\bar{C})\) is birationally rigid._ 

(iv) \(D\) _is independent of isogeny in the following sense: if \(\widetilde{G}\to G\) is a covering group, then the following diagram commutes_ \[\begin{CD}\mathsf{LA}^{*}(\widetilde{G}^{\vee})@>{D^{\widetilde{G}}}>{}>\mathsf{Cov}(\widetilde{G})\\ @V{}V{}V@V{}V{}V\\ \mathsf{LA}^{*}(G^{\vee})@>{D^{G}}>{}>\mathsf{Cov}(G)\end{CD}\] 

Proof.: For (i), we note that \(D\) is the composition of three injective maps: \(\text{Sat}^{-1}:\mathsf{LA}^{*}(G^{\vee})\to\mathsf{LA}^{*}_{0}(G^{\vee})\) is injective by Proposition 2.20(ii), \(D_{0}\) is injective by Proposition 4.3(ii), and \(\text{Bind}:\mathsf{Cov}_{0}(G)\to\mathsf{Cov}(G)\) is injective by Proposition 2.7. Hence, \(D\) is injective, proving (i). 

For (ii), let \(L\subset G\) be a Levi subgroup of \(G\) and let \((\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})\in\mathsf{LA}^{*}(L^{\vee})\). Suppose \(\text{Sat}(K^{\vee},(\mathbb{O}_{K^{\vee}},\bar{C}_{K^{\vee}}))=(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})\) for \((K^{\vee},(\mathbb{O}_{K^{\vee}},\bar{C}_{K^{\vee}}))\in\mathsf{LA}^{*}_{0}(L^{\vee})\) and let \(\widetilde{\mathbb{O}}_{K}=d_{S}^{K}(\mathbb{O}_{K^{\vee}},\bar{C}_{K^{\vee}})_{Lus}\). Then \[D^{G}(\text{Sat}_{L^{\vee}}^{G^{\vee}}(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}}))=\text{Bind}_{K}^{G}\widetilde{\mathbb{O}}_{K}\] whereas \[\operatorname{Bind}_{L}^{G}(D^{L}(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}}))=\operatorname{Bind}_{L}^{G}\operatorname{Bind}_{K}^{L}\widetilde{\mathbb{O}}_{K}.\] Now (ii) is immediate from the transitivity of birational induction, see Proposition 2.6(ii). (iii) follows from (ii) together with Proposition 4.3(i). (iv) is a consequence of Remark 4.2. 

**Remark 4.6**.: _We note that our map \(D\) is not in general surjective (nor is its composition with \(\mathsf{Cov}(G)\to\mathsf{Cov}(G)/\sim\)). Indeed, let \(G\) be the (unique) simple group of type \(F_{4}\), and let \(\mathbb{O}=A_{2}\). There is a \(G\)-equivariant double cover \(\widetilde{\mathbb{O}}\) of \(\mathbb{O}\), which is birationally rigid by [14, Proposition 3.9.5]. Since \(A(\mathbb{O})\simeq\mathbb{Z}_{2}\) and \(\mathbb{O}\) is birationally induced, it is clear that \(\widetilde{\mathbb{O}}\) is the unique element of its equivalence class. If \(\widetilde{\mathbb{O}}\) were to belong to the image of \(D\), then by Proposition 4.5(iii) it would have to be the case that \(\widetilde{\mathbb{O}}=D(\mathbb{O}^{\vee},\bar{C})\) for some distinguished special Lusztig-Achar datum \((\mathbb{O}^{\vee},\bar{C})\). This would imply that \(\mathbb{O}=d_{S}(\mathbb{O}^{\vee},\bar{C})\). Examining Table 3, we see that no such distinguished special \((\mathbb{O}^{\vee},\bar{C})\) exists._ 

**Remark 4.7**.: _We note that our duality map \(D\) generalizes the duality maps of Barbasch-Vogan-Lusztig-Spaltenstein (denoted \(d\)), Sommers (denoted \(d_{S}\)), Losev-Mason-Brown-Matvieievsky (denoted \(\tilde{d}\)), and Achar (denoted \(d_{A}\)) in the following sense: if \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\), then_ 

(i) \(D(\mathbb{O}^{\vee},\bar{C})\) _is a cover of \(d_{S}(\mathbb{O}^{\vee},\bar{C})\)._ 

(ii) _If \(\bar{C}=1\), then \(D(\mathbb{O}^{\vee},\bar{C})\) belongs to the equivalence class \(\tilde{d}(\mathbb{O}^{\vee})\)._
(iii) _If \(\bar{C}=1\), then \(D(\mathbb{O}^{\vee},\bar{C})\) is a cover of \(d(\mathbb{O}^{\vee})\)._ 

(iv) _Let \(d_{A}(\mathbb{O}^{\vee},\bar{C})=(\mathbb{O},\bar{C}^{\prime})\). By [10] and [1, Section 7.1], \(\bar{A}(\mathbb{O})\) is a Coxeter group and admits a Coxeter presentation unique up to conjugacy. We can then associate a parabolic subgroup \(H_{\bar{C}^{\prime}}\subset\bar{A}(\mathbb{O})\) to the conjugacy class \(\bar{C}^{\prime}\) up to conjugacy. Let \(H\subset A(\mathbb{O})\) be the preimage of \(H_{\bar{C}^{\prime}}\) under the quotient map \(A(\mathbb{O})\twoheadrightarrow\bar{A}(\mathbb{O})\). Then \(H\) determines a \(G\)-equivariant cover \(\widetilde{\mathbb{O}}\) of \(\mathbb{O}\). One can check that \(\widetilde{\mathbb{O}}\) is equivalent to \(D(\mathbb{O}^{\vee},\bar{C})\) in the sense of Definition 2.4. This will be proved in a future paper._ 

For \(\gamma\in\mathfrak{h}^{*}\), define \[s=\exp(2\pi i\gamma),\qquad L^{\vee}_{\gamma,0}=Z_{G^{\vee}}(\gamma),\qquad L^{\vee}_{\gamma}=Z_{G^{\vee}}(s)^{\circ}.\] Note that \(L^{\vee}_{\gamma}\) is a pseudo-Levi subgroup of \(G^{\vee}\) and \(L^{\vee}_{\gamma,0}\) is a Levi subgroup of \(L^{\vee}_{\gamma}\). Consider the McNinch-Sommers datum \[\mathsf{MS}(\gamma):=(L^{\vee}_{\gamma},sZ^{\circ},\operatorname{Ind}_{L^{\vee}_{\gamma,0}}^{L^{\vee}_{\gamma}}\{0\})\in\mathsf{MS}(G^{\vee}) \tag{4.0.1}\] This defines a map \[\mathsf{MS}:\mathfrak{h}^{*}\to\mathsf{MS}(G^{\vee})\] Composing with the projection \(\mathsf{MS}(G^{\vee})\stackrel{{\pi}}{{\to}}\mathsf{Conj}(G^{\vee})\to\mathsf{LA}(G^{\vee})\), we get a further map \[\mathsf{LA}:\mathfrak{h}^{*}\to\mathsf{LA}(G^{\vee}).\] We will sometimes write \(\mathsf{MS}^{G^{\vee}}:\mathfrak{h}^{*}\to\mathsf{MS}(G^{\vee})\) and \(\mathsf{LA}^{G^{\vee}}:\mathfrak{h}^{*}\to\mathsf{LA}(G^{\vee})\) to indicate the dependence on \(G^{\vee}\). 

Let \(\mathfrak{h}^{*}_{\mathbb{R}}\subset\mathfrak{h}^{*}\) denote the real form of \(\mathfrak{h}^{*}\) spanned by the roots in \(G\). To any Lusztig-Achar datum \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\), we attach a \(W\)-invariant subset \(S(\mathbb{O}^{\vee},\bar{C})\subset\mathfrak{h}^{*}_{\mathbb{R}}\) as follows. First, choose a Levi subgroup \(L^{\vee}\supset H^{\vee}\) and a distinguished Lusztig-Achar datum \((\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})\in\mathsf{LA}(L^{\vee})\) such that \((\mathbb{O}^{\vee},\bar{C})=\operatorname{Sat}_{L^{\vee}}^{G^{\vee}}(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})\). By Proposition 2.17, the pair \((\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})\) is unique up to conjugation by \(L^{\vee}\). Define \[S(\mathbb{O}^{\vee},\bar{C}):=W\cdot\left((\mathsf{LA}^{L^{\vee}})^{-1}(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})\cap\mathfrak{h}_{\mathbb{R}}^{*}\right)\] This is a \(W\)-invariant subset of \(\mathfrak{h}_{\mathbb{R}}^{*}\), independent of the choice of \((L^{\vee},(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}}))\). Since it is \(W\)-invariant, we can (and often will) regard \(S(\mathbb{O}^{\vee},\bar{C})\) as a subset of \(\mathfrak{h}^{*}/W\). Now choose a \(W\)-invariant non-degenerate symmetric form on \(\mathfrak{h}^{*}\) and write \(\|\cdot\|\) for the associated norm. 

**Theorem 4.8**.: _Let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\). Then there is a unique minimal-length \(W\)-orbit_ \[\gamma(\mathbb{O}^{\vee},\bar{C})\in S(\mathbb{O}^{\vee},\bar{C})\] _Furthermore,_ \[\gamma(\mathbb{O}^{\vee},\bar{C})=\gamma(D(\mathbb{O}^{\vee},\bar{C})).\] 

Proof.: Choose \(L^{\vee}\subset G^{\vee}\) and a distinguished Lusztig-Achar datum \((\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})\) such that \((\mathbb{O}^{\vee},\bar{C})=\operatorname{Sat}_{L^{\vee}}^{G^{\vee}}(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})\). Then by definition \(S(\mathbb{O}^{\vee},\bar{C})=W\cdot S(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})\). So if \(\gamma_{L}\) is a minimal-length \(W_{L}\)-orbit in \(S(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})\), then \(W\cdot\gamma_{L}\) is a minimal-length \(W\)-orbit in \(S(\mathbb{O}^{\vee},\bar{C})\).
On the other hand, Proposition 4.5(ii) implies \[D^{G}(\mathbb{O}^{\vee},\bar{C})=\operatorname{Bind}_{L}^{G}(D^{L}(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}}))\] So by Proposition 3.41(iii) \[\gamma(D^{G}(\mathbb{O}^{\vee},\bar{C}))=\gamma(D^{L}(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}}))\] as \(W\)-orbits in \(\mathfrak{h}^{*}\). So if \(\gamma_{L}=\gamma(D^{L}(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}}))\), then \(\gamma(\mathbb{O}^{\vee},\bar{C})=\gamma(D^{G}(\mathbb{O}^{\vee},\bar{C}))\). Thus, we can reduce to the case when \((\mathbb{O}^{\vee},\bar{C})\) is distinguished. Arguing as in the proof of Proposition 4.3, we can further reduce to the case when \(G\) is simple and adjoint. Now the classical cases are handled in Section 6.2. The exceptional cases are handled in Section 7.1. 

**Remark 4.9**.: _The statement of Theorem 4.8 is inspired by the discussion in [1, Section 11]. In loc. cit., Barbasch considers some infinitesimal characters in classical types which are defined by a minimality property similar to the one defining \(\gamma(\mathbb{O}^{\vee},\bar{C})\). The novel observation in Theorem 4.8 is that \(\gamma(\mathbb{O}^{\vee},\bar{C})\) coincides with the infinitesimal character associated to the canonical quantization of the dual cover \(D(\mathbb{O}^{\vee},\bar{C})\)._ 

_We also remark that our minimality result is similar in spirit to some conjectures posed by Sommers and Gunnells in [1, Section 5]._ 

**Remark 4.10**.: _Let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\). Even if \((\mathbb{O}^{\vee},\bar{C})\) fails to be special, the set \(S(\mathbb{O}^{\vee},\bar{C})\) may still contain a unique minimal-length \(W\)-orbit. However, this \(W\)-orbit will typically_ not _correspond to the infinitesimal character of a unipotent ideal. For example, take \(G\) and \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\) as in Remark 4.4. In this case, the set \(S(\mathbb{O}^{\vee},\bar{C})\) contains a unique minimal-length \(W\)-orbit, namely the \(W\)-orbit of the weight \(\gamma=\rho/4+\varpi_{8}/4\) (here \(\rho\) is the half-sum of the positive roots and \(\varpi_{8}\) is the fundamental weight corresponding to the extremal node on the longest leg of the Dynkin diagram, see Remark 7.1 for a description of the method used to compute this minimum). We note that \(\gamma\) is not the infinitesimal character of a unipotent ideal. Moreover, an atlas computation shows that the spherical irreducible Harish-Chandra bimodule with left and right infinitesimal character \(\gamma\) is not unitary._ 

Now let \(\widetilde{\mathbb{O}}=D(\mathbb{O}^{\vee},\bar{C})\) and consider the category \(\operatorname{HC}^{G}(U(\mathfrak{g})/I(\widetilde{\mathbb{O}}))\) of unipotent bimodules. Recall from Theorem 3.46 that this category is equivalent to the category of finite-dimensional representations of a certain finite group \(\Gamma(\widetilde{\mathbb{O}})\), see (3.13.1). Our next task is to describe this finite group in terms of \((\mathbb{O}^{\vee},\bar{C})\). We will need the following proposition. 

**Proposition 4.11**.: _Let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\).
Choose \((L^{\vee},(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}}))\in\mathsf{LA}_{0}^{*}(G^{\vee})\) such that \((\mathbb{O}^{\vee},\bar{C})=\operatorname{Sat}(L^{\vee},(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}}))\) and let_ \[\gamma=\gamma^{L^{\vee}}(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}}),\quad(R_{0}^{\vee},sZ(R_{0}^{\vee})^{\circ},\mathbb{O}_{R_{0}^{\vee}})=\mathsf{MS}^{L^{\vee}}(\gamma),\quad(R^{\vee},sZ(R^{\vee})^{\circ},\mathbb{O}_{R^{\vee}})=\mathsf{MS}^{G^{\vee}}(\gamma).\] _Then_ \[\mathbb{O}_{R^{\vee}}=\operatorname{Sat}_{R_{0}^{\vee}}^{R^{\vee}}\mathbb{O}_{R_{0}^{\vee}}.\] 

Proof.: Arguing as in the proof of Proposition 4.3, we can reduce to the case when \(G\) is a simple group of adjoint type. For \(G\) classical, see Section 6.3. For \(G\) exceptional, see Section 7.2. 

Now consider the composition \[\mathbb{L}:\mathsf{LA}^{*}(G^{\vee})\stackrel{{\gamma}}{{\rightarrow}}\mathfrak{h}^{*}/W\stackrel{{\mathsf{MS}}}{{\rightarrow}}\mathsf{MS}(G^{\vee}) \tag{4.0.2}\] The following is immediate from Proposition 4.11 and Theorem 4.8. 

**Lemma 4.12**.: _The map \(\mathbb{L}:\mathsf{LA}^{*}(G^{\vee})\rightarrow\mathsf{MS}(G^{\vee})\) is right-inverse to \(\pi:\mathsf{MS}(G^{\vee})\rightarrow\mathsf{LA}^{*}(G^{\vee})\)._ 

**Remark 4.13**.: _We note that Proposition 4.11 is false for non-special pairs \((\mathbb{O}^{\vee},\bar{C})\). Indeed, let \(G^{\vee}=SO(17)\), let \(\mathbb{O}^{\vee}=\mathbb{O}_{[5,4^{2},3,1]}\), and let \(\bar{C}\) be the unique non-trivial conjugacy class in \(\bar{A}(\mathbb{O}^{\vee})\simeq\mathbb{Z}_{2}\). Then \(\gamma(\mathbb{O}^{\vee},\bar{C})=(\frac{5}{2},\frac{3}{2},\frac{3}{2},\frac{3}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2})\). It follows that \(R^{\vee}=SO(16)\), and \(\mathbb{O}_{R^{\vee}}=\operatorname{Ind}_{GL(1)\times GL(3)\times GL(4)}^{SO(16)}\{0\}=\mathbb{O}_{[5,5,3,3]}\). On the other hand, \(R_{0}^{\vee}=GL(4)\times SO(8)\), and \(\mathbb{O}_{R_{0}^{\vee}}=\mathbb{O}_{[4]}\times\mathbb{O}_{[5,3]}\). Thus, \(\operatorname{Sat}_{R_{0}^{\vee}}^{R^{\vee}}\mathbb{O}_{R_{0}^{\vee}}=\mathbb{O}_{[5,4,4,3]}\). We remark that in this case \(\pi(R^{\vee},sZ(R^{\vee})^{\circ},\operatorname{Sat}_{R_{0}^{\vee}}^{R^{\vee}}\mathbb{O}_{R_{0}^{\vee}})\neq(\mathbb{O}^{\vee},\bar{C})\), so Lemma 4.12 is false as well._ 

**Theorem 4.14**.: _Assume \(G\) is adjoint. Let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\). Let \(\mathbb{L}(\mathbb{O}^{\vee},\bar{C})=(R^{\vee},sZ^{\circ},\mathbb{O}_{R^{\vee}})\) and \(\widetilde{\mathbb{O}}=D(\mathbb{O}^{\vee},\bar{C})\). Then there is a group isomorphism_ \[\bar{A}(\mathbb{O}_{R^{\vee}})\simeq\Gamma(\widetilde{\mathbb{O}}).\] 

Proof.: Arguing as in the proof of Proposition 4.3, we can reduce to the case when \(G\) is a simple group of adjoint type. For \(G\) classical, see Section 6.4. For \(G\) exceptional, see Section 7.2. 

**Remark 4.15**.: _We note that there is a case-free proof of Theorem 4.14 in the following special case: \(\mathbb{O}^{\vee}\) is even and \(\bar{C}=1\). Under these assumptions, \((R^{\vee},\mathbb{O}_{R^{\vee}})=(G^{\vee},\mathbb{O}^{\vee})\) and \(I(\widetilde{\mathbb{O}})=J(\gamma_{\mathbb{O}^{\vee}})\), cf. [1, Proposition 9.2.1]. So it suffices to show that \(\bar{A}(\mathbb{O}^{\vee})\simeq\Gamma(D(\mathbb{O}^{\vee},\bar{C}))\). Let \(\mathbb{O}=d(\mathbb{O}^{\vee})\).
By [11, Proposition 7.4 and Theorem 6.1], there is a monoidal equivalence_ \[\operatorname{HC}^{G}(U(\mathfrak{g})/J(\gamma_{\mathbb{O}^{\vee}}))\simeq\bar{A}(\mathbb{O})\operatorname{-mod}\] _and by [1, Theorem 6.6.2], there is a monoidal equivalence_ \[\operatorname{HC}^{G}(U(\mathfrak{g})/J(\gamma_{\mathbb{O}^{\vee}}))\simeq\Gamma(D(\mathbb{O}^{\vee},\bar{C}))\operatorname{-mod}\] _So by the Tannakian formalism, there is a group isomorphism \(\bar{A}(\mathbb{O})\simeq\Gamma(D(\mathbb{O}^{\vee},\bar{C}))\). Since \(\mathbb{O}^{\vee}\) is even, and hence special, there is a further isomorphism \(\bar{A}(\mathbb{O}^{\vee})\simeq\bar{A}(\mathbb{O})\), see [10]. Thus, \(\bar{A}(\mathbb{O}^{\vee})\simeq\Gamma(D(\mathbb{O}^{\vee},\bar{C}))\), as asserted._ 

Combining Theorem 4.14 with Theorem 3.46, we arrive at the following result. 

**Corollary 4.16**.: _Assume \(G\) is adjoint. Let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\). Let \(\mathbb{L}(\mathbb{O}^{\vee},\bar{C})=(R^{\vee},sZ^{\circ},\mathbb{O}_{R^{\vee}})\) and \(\widetilde{\mathbb{O}}=D(\mathbb{O}^{\vee},\bar{C})\). Then there is a monoidal equivalence of categories_ \[\operatorname{HC}^{G}(U(\mathfrak{g})/I(\widetilde{\mathbb{O}}))\simeq\bar{A}(\mathbb{O}_{R^{\vee}})\operatorname{-mod}\] 

**Remark 4.17**.: _Corollary 4.16 implies that the unipotent representations attached to \(\widetilde{\mathbb{O}}\) are in one-to-one correspondence with irreducible representations of \(\bar{A}(\mathbb{O}_{R^{\vee}})\). In the special case when \(\mathbb{O}^{\vee}\) is even and \(\bar{C}=1\), this (weaker) statement was proved in [11, Theorem III]. Their result was later extended to the case when \(\mathbb{O}^{\vee}\) is special in [12, Appendix A]._ 

## 5. Combinatorics in classical types 

A _partition_ of \(n\in\mathbb{Z}_{\geq 0}\) is a non-increasing sequence of positive integers \(\lambda=[\lambda_{1},\lambda_{2},...,\lambda_{k}]\) such that \(n=\sum_{i}\lambda_{i}\). Write \(\#\lambda\) for \(k\) and \(|\lambda|\) for \(n\). For a positive integer \(x\), we write \(m_{\lambda}(x)\) for the multiplicity of \(x\) in \(\lambda\) and \(\operatorname{ht}_{\lambda}(x)\) for the _height_ of \(x\) in \(\lambda\), i.e. \(\operatorname{ht}_{\lambda}(x)=\sum_{y\geq x}m_{\lambda}(y)\). Note that \(\operatorname{ht}_{\lambda}(x)\) makes sense even if \(x\) is not a part of \(\lambda\). For notational convenience, we will often abbreviate partitions by writing multiplicities as exponents, i.e. \([4^{2},3,1^{3}]\) denotes the partition \([4,4,3,1,1,1]\) of \(14\). If \(\lambda\in\mathcal{P}(m)\) and \(\mu\in\mathcal{P}(n)\), we write \(\lambda\cup\mu\in\mathcal{P}(m+n)\) for the partition obtained by adding multiplicities and \(\lambda\vee\mu\in\mathcal{P}(m+n)\) for the partition obtained by adding corresponding parts. For example, if \(\lambda=[4^{2},3,1^{3}]\) and \(\mu=[5,1]\), then \(\lambda\cup\mu=[5,4^{2},3,1^{4}]\) and \(\lambda\vee\mu=[9,5,3,1^{3}]\). The transpose of a partition \(\lambda\) is denoted by \(\lambda^{t}\). A partition of \(2n+1\) is of _type_ \(B\) if every even part occurs with even multiplicity. A partition of \(2n\) is of _type_ \(C\) if every odd part occurs with even multiplicity. A partition of \(2n\) is of _type_ \(D\) if every even part occurs with even multiplicity.
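For example, directly from these definitions: \([5,2^{2},1^{2}]\) is a partition of \(11\) of type \(B\) (its unique even part, \(2\), occurs with multiplicity \(2\)), \([4,3^{2},2]\) is a partition of \(12\) of type \(C\) (its unique odd part, \(3\), occurs with multiplicity \(2\)), and \([4^{2},3,1]\) is a partition of \(12\) of type \(D\). On the other hand, \([4,3]\) is not of type \(B\), since its even part \(4\) occurs with multiplicity \(1\).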
Write \(\mathcal{P}(n)\) for the set of partitions of \(n\), and write \(\mathcal{P}_{B}(2n+1)\subset\mathcal{P}(2n+1)\), \(\mathcal{P}_{C}(2n)\subset\mathcal{P}(2n)\) and \(\mathcal{P}_{D}(2n)\subset\mathcal{P}(2n)\) for the subsets of partitions of the corresponding types. We will sometimes write \(\mathcal{P}_{1}(m)\) for \(\mathcal{P}_{C}(m)\) and \(\mathcal{P}_{0}(m)\) for either \(\mathcal{P}_{B}(m)\) (when \(m\) is odd) or \(\mathcal{P}_{D}(m)\) (when \(m\) is even) in order to make concise statements about \(\mathcal{P}_{\epsilon}(m)\) for \(\epsilon\in\{0,1\}\). A partition of type \(D\) is _very even_ if all parts are even, occurring with even multiplicity. There is a partial order on \(\mathcal{P}(n)\) defined in the following way: \[\lambda\geq\mu\iff\sum_{i\leq j}\lambda_{i}\geq\sum_{i\leq j}\mu_{i},\qquad\forall j\] The _B-collapse_ of \(\lambda\in\mathcal{P}(2n+1)\) (resp. _C-collapse_ of \(\lambda\in\mathcal{P}(2n)\), _D-collapse_ of \(\lambda\in\mathcal{P}(2n)\)) is the (unique) largest partition \(\lambda_{B}\in\mathcal{P}_{B}(2n+1)\) (resp. \(\lambda_{C}\in\mathcal{P}_{C}(2n)\), \(\lambda_{D}\in\mathcal{P}_{D}(2n)\)) which is dominated by \(\lambda\). For a partition \(\lambda=[\lambda_{1},\lambda_{2},\ldots,\lambda_{k}]\), we will sometimes write \(\lambda=(c_{l}\geq c_{l-1}\geq\ldots\geq c_{1})\) where \(c_{1},\ldots,c_{l}\) are the column lengths of the corresponding Young diagram, i.e. the members of the transpose partition \(\lambda^{t}\) of \(\lambda\). It is possible to describe the sets \(\mathcal{P}_{\epsilon}(m)\) in terms of columns (see [12, Prop. 2.3]): 

* \(\mathcal{P}_{0}(m)\) consists of partitions \(\lambda=(a_{2k+1},a_{2k},\cdots,a_{0})\) of size \(m\) such that \(a_{2i}+a_{2i-1}\) is even for all \(i\) (we insist that there is an **even** number of columns, by taking \(a_{0}=0\) if necessary). 
* \(\mathcal{P}_{1}(m)\) consists of partitions \(\lambda=(a_{2k},a_{2k-1},\cdots,a_{0})\) of size \(m\) such that \(a_{2i}+a_{2i-1}\) is even for all \(i\) (we insist that there is an **odd** number of columns, by taking \(a_{0}=0\) if necessary). 

### Nilpotent orbits 

**Proposition 5.1** (Section 5.1, [13]).: _Suppose \(G\) is a simple group of classical type. Then the following are true:_ 

(a) _If \(\mathfrak{g}=\mathfrak{sl}(n)\), then there is a bijection_ \[\mathsf{Orb}(G)\xrightarrow{\sim}\mathcal{P}(n)\] 

(b) _If \(\mathfrak{g}=\mathfrak{so}(2n+1)\), then there is a bijection_ \[\mathsf{Orb}(G)\xrightarrow{\sim}\mathcal{P}_{B}(2n+1)\] 

(c) _If \(\mathfrak{g}=\mathfrak{sp}(2n)\), then there is a bijection_ \[\mathsf{Orb}(G)\xrightarrow{\sim}\mathcal{P}_{C}(2n)\] 

(d) _If \(\mathfrak{g}=\mathfrak{so}(2n)\), then there is a surjection_ \[\mathsf{Orb}(G)\twoheadrightarrow\mathcal{P}_{D}(2n)\] _Over very even partitions, this map is two-to-one, and over all other partitions, it is a bijection._ 

**Convention**.: _In the setting of the above proposition, we write \(\mathbb{O}_{\lambda}\in\mathsf{Orb}(G)\) for the orbit corresponding to a partition \(\lambda\), except in the case when \(\lambda\) is very even and \(\mathfrak{g}\) is of type \(D\). In such cases, there are two orbits corresponding to \(\lambda\), which we denote by \(\mathbb{O}_{\lambda}^{I}\) and \(\mathbb{O}_{\lambda}^{II}\). For convenience, we sometimes allow very even partitions to be decorated by Roman numerals \(I\) or \(II\) to redefine \(\mathcal{P}_{D}(2n)\), so that the map \(\mathsf{Orb}(G)\twoheadrightarrow\mathcal{P}_{D}(2n)\) in Proposition 5.1(d) becomes a bijection.
If there is a property \(P\) which \(\mathbb{O}_{\lambda}^{I}\) and \(\mathbb{O}_{\lambda}^{II}\) have in common, we will often say, to simplify the exposition, that \(\mathbb{O}_{\lambda}\) has property \(P\)._ 

### The group \(A(\mathbb{O})\) 

In this section, we will describe the component groups \(A(\mathbb{O})\) for nilpotent orbits of classical groups. We will write \(\mathfrak{g}_{0}(m)=\mathfrak{so}(m)\) (for arbitrary \(m\)) and \(\mathfrak{g}_{1}(m)=\mathfrak{sp}(m)\) (for even \(m\)) in order to make concise statements about \(\mathfrak{g}_{\epsilon}(m)\) for \(\epsilon\in\{0,1\}\). Let \(G_{\epsilon}(m)\) denote the simple classical group corresponding to \(\mathfrak{g}_{\epsilon}(m)\) and let \(G_{\epsilon}^{ad}(m)\) denote the adjoint quotient of \(G_{\epsilon}(m)\). For \(\mathbb{O}_{\lambda}\in\mathsf{Orb}(G_{\epsilon}(m))\), we write \(A(\mathbb{O}_{\lambda})\) (resp. \(A^{ad}(\mathbb{O}_{\lambda})\)) for the component group of \(\mathbb{O}_{\lambda}\) with respect to \(G_{\epsilon}(m)\) (resp. \(G_{\epsilon}^{ad}(m)\)). 

For \(\lambda=[\lambda_{1},...,\lambda_{k}]\in\mathcal{P}_{\epsilon}(m)\), let \(\lambda^{\epsilon}=[\lambda_{1}^{\epsilon}\geqslant\lambda_{2}^{\epsilon}\geqslant\cdots\geqslant\lambda_{r}^{\epsilon}>0]\) be the subpartition consisting of the parts \(\lambda_{j}^{\epsilon}\) of \(\lambda\) such that \(\lambda_{j}^{\epsilon}\not\equiv\epsilon\mod 2\). We introduce the natural convention that \(\lambda_{k}^{\epsilon}=0\) for any \(k>r\). To each member \(\lambda_{i}^{\epsilon}\) of \(\lambda^{\epsilon}\), we assign a symbol \(\upsilon_{\lambda_{i}^{\epsilon}}\) and write \(A\) for the elementary abelian \(2\)-group with basis \(\{\upsilon_{\lambda_{i}^{\epsilon}}\}\) (if \(\lambda_{i}^{\epsilon}=\lambda_{j}^{\epsilon}\), then \(\upsilon_{\lambda_{i}^{\epsilon}}=\upsilon_{\lambda_{j}^{\epsilon}}\)). Let \(A^{0}=A^{0}(\mathbb{O}_{\lambda})\) be the subgroup of \(A\) consisting of elements which are products of an even number of \(\upsilon_{\lambda_{i}^{\epsilon}}\)'s, and let \(A^{1}=A^{1}(\mathbb{O}_{\lambda}):=A\). As in [11], for \(\lambda\in\mathcal{P}_{\epsilon}(m)\) and \(\delta\in\{0,1\}\), we set \[\mathcal{S}_{\delta}(\lambda)=\{x\,|\,x\not\equiv\epsilon\ \text{mod}\ 2\ \text{and}\ m_{\lambda}(x)\equiv\delta\ \text{mod}\ 2\}.\] Define the element \(\tilde{v}:=\upsilon_{\lambda^{\epsilon}_{1}}\upsilon_{\lambda^{\epsilon}_{2}}\cdots\upsilon_{\lambda^{\epsilon}_{r}}\) in \(A\). Then \(\tilde{v}\) is of order \(2\) if \(\mathcal{S}_{1}\neq\emptyset\), and is the identity element otherwise. For \(\lambda=[\lambda_{1},...,\lambda_{k}]\in\mathcal{P}(m)\), write \(a\) (resp. \(b\)) for the number of distinct odd (resp. even) parts of \(\lambda\). The following can be deduced from the proof of Theorem 6.1.3 and Corollary 6.1.6 of [10]. 

**Proposition 5.2**.: _Let \(G=G_{\epsilon}(m)\) and let \(\mathbb{O}_{\lambda}\in\mathsf{Orb}(G)\). Then the following are true:_ 

(i) _If \(\mathfrak{g}=\mathfrak{sl}(n)\), then \(A(\mathbb{O}_{\lambda})\simeq\mathbb{Z}_{d}\), where \(d=\gcd\left\{\lambda_{i}\right\}\), and \(A^{ad}(\mathbb{O}_{\lambda})=1\)._ 

(ii) _If \(\mathfrak{g}=\mathfrak{so}(2n+1)\), then \(A(\mathbb{O}_{\lambda})\simeq A^{ad}(\mathbb{O}_{\lambda})\simeq A^{0}\simeq(\mathbb{Z}_{2})^{a-1}\)._
(iii) _If \(\mathfrak{g}=\mathfrak{sp}(2n)\), then \(A(\mathbb{O}_{\lambda})\simeq A^{1}\) and_ \[A^{ad}(\mathbb{O}_{\lambda})\simeq A^{1}/\{1,\tilde{v}\}=\begin{cases}A^{1}=A\simeq(\mathbb{Z}_{2})^{b}&\text{if }\mathcal{S}_{1}=\emptyset\\ A^{1}/\{1,\tilde{v}\}=A/\{1,\tilde{v}\}\simeq(\mathbb{Z}_{2})^{b-1}&\text{if }\mathcal{S}_{1}\neq\emptyset\end{cases}\] 

(iv) _If \(\mathfrak{g}=\mathfrak{so}(2n)\), then \(A(\mathbb{O}_{\lambda})\simeq A^{0}\), and_ \[A^{ad}(\mathbb{O}_{\lambda})\simeq A^{0}/\{1,\tilde{v}\}=\begin{cases}A^{0}\simeq(\mathbb{Z}_{2})^{\max(a-1,0)}&\text{if }\mathcal{S}_{1}=\emptyset\\ A^{0}/\{1,\tilde{v}\}\simeq(\mathbb{Z}_{2})^{\max(a-2,0)}&\text{if }\mathcal{S}_{1}\neq\emptyset\end{cases}\] 

_Parts (ii)-(iv) can be summarized as follows: \(A(\mathbb{O}_{\lambda})\simeq A^{\epsilon}\) and \(A^{ad}(\mathbb{O}_{\lambda})\simeq A^{\epsilon}/(A^{\epsilon}\cap\{1,\tilde{v}\})\)._ 

For any \(C\in A(\mathbb{O}_{\lambda})\), we can choose a (not necessarily unique) lift of \(C\) in \(A^{\epsilon}\subset A\). By Proposition 5.2, this lift admits a (unique) decomposition as a product of \(\upsilon_{\lambda^{\epsilon}_{i}}\)'s. Let \(\nu\) be the subpartition of \(\lambda^{\epsilon}\subset\lambda\) consisting of those \(\lambda^{\epsilon}_{i}\) such that \(\upsilon_{\lambda^{\epsilon}_{i}}\) appears in this decomposition. Then one can think of \(\nu\) as a 'marking' of \(\lambda\). This motivates the following definition introduced in [11]. 

**Definition 5.3**.: _Let \(X\in\{B,C,D\}\). Define \(\tilde{\mathcal{P}}_{X}(m)\) to be the set of pairs of partitions \((\nu,\eta)\) such that:_ 

(i) \(\nu\cup\eta\in\mathcal{P}_{X}(m)\)_._ 

(ii) _Each part of \(\nu\) is odd (resp. even) if \(X\in\{B,D\}\) (resp. \(X=C\)) and has multiplicity \(1\)._ 

(iii) _If \(X\in\{B,D\}\), we require \(\#\nu\) to be even. If \(X=C\), we can always assume \(\#\nu\) is even, by adding a zero if necessary._ 

_We will write any element \((\nu,\eta)\in\tilde{\mathcal{P}}_{X}(m)\) as \({}^{\langle\nu\rangle}\lambda\), where \(\lambda=\nu\cup\eta\), and we think of \({}^{\langle\nu\rangle}\lambda\) as a partition \(\lambda\in\mathcal{P}_{X}(m)\) with parts in the subpartition \(\nu\) "marked". When \(\lambda\in\mathcal{P}_{D}(2n)\) is a very even partition, \(\nu\) is automatically empty. We then follow the convention in Section 5.1 and allow decoration by a Roman numeral \(I\) or \(II\) on the marked partition \({}^{\langle\emptyset\rangle}\lambda\in\tilde{\mathcal{P}}_{D}(2n)\)._ 

_Similar to \(\mathcal{P}_{\epsilon}(m)\), we also sometimes write \(\tilde{\mathcal{P}}_{\epsilon}(m)\) for the sets defined above, where \(\epsilon\in\{0,1\}\)._ 

As discussed above, the marking \(\nu=[\nu_{1},\dots,\nu_{p}]\) of a marked partition \({}^{\langle\nu\rangle}\lambda\in\tilde{\mathcal{P}}_{\epsilon}(m)\) gives rise to an element \(\tilde{C}_{\nu}=\upsilon_{\nu_{1}}\upsilon_{\nu_{2}}\cdots\upsilon_{\nu_{p}}\) in \(A^{\epsilon}\) and so determines a conjugacy class \(C_{\nu}\) in the quotient group \(A(\mathbb{O}_{\lambda})\). Let \(C^{\prime}_{\nu}\) denote the image of \(C_{\nu}\) in \(A^{ad}(\mathbb{O}_{\lambda})\). The following corollaries are just reformulations of Proposition 5.2. 

**Corollary 5.4**.: _We have a bijection \(\tilde{\mathcal{P}}_{\epsilon}(m)\xrightarrow{\sim}\mathsf{Conj}(G_{\epsilon}(m))\) sending a marked partition \({}^{\langle\nu\rangle}\lambda\in\tilde{\mathcal{P}}_{\epsilon}(m)\) to the pair \((\mathbb{O}_{\lambda},C_{\nu})\in\mathsf{Conj}(G_{\epsilon}(m))\).
When \(G=SO(2n)\) and \(\mathbb{O}_{\lambda}\) is very even, this bijection preserves the Roman numerals._ 

**Corollary 5.5**.: _The surjective composite map \(\tilde{\mathcal{P}}_{\epsilon}(m)\xrightarrow{\sim}\mathsf{Conj}(G_{\epsilon}(m))\twoheadrightarrow\mathsf{Conj}(G_{\epsilon}^{ad}(m))\), \({}^{\langle\nu\rangle}\lambda\mapsto(\mathbb{O}_{\lambda},C_{\nu}^{\prime})\), is one-to-one over \((\mathbb{O}_{\lambda},C_{\nu}^{\prime})\) if \(\mathfrak{g}\) is of type \(B\), or \(\mathfrak{g}\) is of type \(C\) or \(D\) and \(\mathcal{S}_{1}(\lambda)=\emptyset\), and two-to-one otherwise. In the latter case, the preimage of \((\mathbb{O}_{\lambda},C_{\nu}^{\prime})\) consists of the two marked partitions whose markings correspond to the two lifts \(\tilde{C}_{\nu}\) and \(\tilde{C}_{\nu}\tilde{v}\) of \(C_{\nu}^{\prime}\) in \(A^{\epsilon}\). When the group is \(SO(2n)/\{\pm I_{2n}\}\), the map \(\tilde{\mathcal{P}}_{\epsilon}(2n)\twoheadrightarrow\mathsf{Conj}(SO(2n)/\{\pm I_{2n}\})\) sends any very even partition \(\lambda={}^{\langle\emptyset\rangle}\lambda\) decorated by a Roman numeral to \((\mathbb{O}_{\lambda},1)\) with the same numeral._ 

**Remark 5.6**.: _Following [20], we can also describe the group \(A(\mathbb{O}_{\lambda})\) in terms of the columns of the partition \(\lambda\) (see the paragraph above Section 5.1). Namely, for each \(\lambda_{i}^{\epsilon}\in\lambda^{\epsilon}\), let \(c_{i}^{\epsilon}:=\mathrm{ht}_{\lambda}(\lambda_{i}^{\epsilon})\) be the height of \(\lambda_{i}^{\epsilon}\) in \(\lambda\). The integers \(c_{i}^{\epsilon}\) are the lengths of certain columns of the Young diagram of \(\lambda\), namely the members of \(\lambda^{t}\) whose heights in \(\lambda^{t}\) are of different parity than \(\epsilon\). Note that we have a symmetry \(\lambda_{i}^{\epsilon}=\mathrm{ht}_{\lambda^{t}}(c_{i}^{\epsilon})\). We write the basis element \(\upsilon_{\lambda_{i}^{\epsilon}}\) of the group \(A\) associated to \(\lambda\) alternatively as \(\vartheta_{c_{i}^{\epsilon}}\) and again follow Proposition 5.2._ 

### Levi subalgebras 

Let \(\mathfrak{g}\) be a simple Lie algebra of classical type, and let \(\mathfrak{h}\subset\mathfrak{g}\) be a Cartan subalgebra. Choose standard coordinates on \(\mathfrak{h}\) as in [1, 2, 3] and denote the coordinate functions by \(\{e_{i}\}\). Write \(\Delta\subset\mathfrak{h}^{*}\) for the roots of \(\mathfrak{g}\). Then we can choose simple roots \(\Pi\subset\Delta\) as follows: 

* If \(\mathfrak{g}=\mathfrak{sl}(n)\), then \[\Delta=\{\pm(e_{i}-e_{j})\,|\,1\leq i,j\leq n,i\neq j\},\quad\Pi=\{e_{1}-e_{2},e_{2}-e_{3},\ldots,e_{n-1}-e_{n}\}.\] 
* If \(\mathfrak{g}=\mathfrak{so}(2n+1)\), then \[\Delta=\{\pm e_{i}\pm e_{j},\pm e_{i}\,|\,1\leq i,j\leq n,i\neq j\},\quad\Pi=\{e_{1}-e_{2},e_{2}-e_{3},\ldots,e_{n-1}-e_{n},e_{n}\}\] and the lowest root is \(-e_{1}-e_{2}\). 
* If \(\mathfrak{g}=\mathfrak{sp}(2n)\), then \[\Delta=\{\pm e_{i}\pm e_{j},\pm 2e_{i}\,|\,1\leq i,j\leq n,i\neq j\},\quad\Pi=\{e_{1}-e_{2},e_{2}-e_{3},\ldots,e_{n-1}-e_{n},2e_{n}\}\] and the lowest root is \(-2e_{1}\). 
* If \(\mathfrak{g}=\mathfrak{so}(2n)\), then \[\Delta=\{\pm e_{i}\pm e_{j}\,|\,1\leq i,j\leq n,i\neq j\},\quad\Pi=\{e_{1}-e_{2},e_{2}-e_{3},\ldots,e_{n-1}-e_{n},e_{n-1}+e_{n}\}\] and the lowest root is \(-e_{1}-e_{2}\).
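For instance, for \(\mathfrak{g}=\mathfrak{sp}(4)\) these conventions give \(\Delta=\{\pm e_{1}\pm e_{2},\pm 2e_{1},\pm 2e_{2}\}\) and \(\Pi=\{e_{1}-e_{2},2e_{2}\}\), with lowest root \(-2e_{1}\).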
If \(\mathfrak{g}=\mathfrak{sl}(n)\) and \(a=(a_{1},...,a_{t})\) is a partition of \(n\), there is a Levi subalgebra \[\mathfrak{s}(\mathfrak{gl}(a_{1})\times...\times\mathfrak{gl}(a_{t})):=\mathfrak{sl}(n)\cap(\mathfrak{gl}(a_{1})\times...\times\mathfrak{gl}(a_{t}))\subset\mathfrak{g}\] corresponding to the roots \[\{\pm(e_{i}-e_{j})\}_{1\leq i<j\leq a_{1}}\cup...\cup\{\pm(e_{i}-e_{j})\}_{n-a_{t}+1\leq i<j\leq n}\subset\Delta\] Every Levi subalgebra in \(\mathfrak{g}\) is conjugate to one of this form, and no two such are conjugate. If \(\mathfrak{g}=\mathfrak{so}(2n+1)\), \(\mathfrak{sp}(2n)\), or \(\mathfrak{so}(2n)\), \(0\leq m\leq n\), and \(a\) is a partition of \(n-m\), there is a Levi subalgebra \[\mathfrak{gl}(a_{1})\times...\times\mathfrak{gl}(a_{t})\times\mathfrak{g}(m)\subset\mathfrak{g} \tag{5.3.1}\] corresponding to the roots \[\{\pm(e_{i}-e_{j})\}_{1\leq i<j\leq a_{1}}\cup...\cup\{\pm(e_{i}-e_{j})\}_{n-m-a_{t}+1\leq i<j\leq n-m}\cup\Delta(m)\subset\Delta\] where \(\Delta(m)\subset\Delta\) has the obvious meaning. If \(\mathfrak{g}=\mathfrak{so}(2n+1)\) or \(\mathfrak{sp}(2n)\), then every Levi subalgebra in \(\mathfrak{g}\) is conjugate to one of this form, and no two such are conjugate. If \(\mathfrak{g}=\mathfrak{so}(2n)\), and \(a\) is a partition of \(n\) with only even parts, there is a Levi subalgebra \[\mathfrak{gl}(a_{1})\times...\times\mathfrak{gl}(a_{t})^{\prime}\subset\mathfrak{g} \tag{5.3.2}\] corresponding to the roots \[\{\alpha\in\Delta\mid\alpha(1,...,1,2,...,2,...,t,...,t,-t)=0\}\subset\Delta.\] The prime is included to distinguish this subalgebra from the subalgebra \(\mathfrak{gl}(a_{1})\times...\times\mathfrak{gl}(a_{t})\subset\mathfrak{g}\) defined above (to which it is \(O(2n)\)-, but not \(SO(2n)\)-conjugate). Every Levi subalgebra in \(\mathfrak{g}\) is conjugate to one of the form (5.3.1) or (5.3.2), and no two such are conjugate. The Levi subalgebras listed above are called _standard_ with respect to \((\mathfrak{h},\Delta,\Pi)\). 

### Maximal pseudo-Levi subalgebras 

Let \(\mathfrak{g}\) be a simple Lie algebra of classical type. Fix \(\mathfrak{h}\), \(\Delta\), \(\Pi\) and \(\{e_{i}\}\) as in Section 5.3. If \(\mathfrak{g}=\mathfrak{sl}(n)\), then there is a unique maximal pseudo-Levi, namely \(\mathfrak{g}\) itself. Now assume \(\mathfrak{g}\) is a simple Lie algebra of type \(B_{n}\), \(C_{n}\) or \(D_{n}\), and \(\mathfrak{l}\) is a maximal pseudo-Levi of \(\mathfrak{g}\). Conjugating if necessary, we can assume that \(\mathfrak{l}=\mathfrak{g}_{I}\) is a standard pseudo-Levi subalgebra associated to a proper subset \(I\) of the set \(\tilde{\Pi}=\Pi\cup\{\alpha_{0}\}\) of vertices of the extended Dynkin diagram of \(\mathfrak{g}\), where \(\alpha_{0}\) is the lowest root, i.e., the negative of the highest root. Moreover, there is a decomposition \(I=I_{1}\sqcup I_{2}\) and hence a Lie algebra decomposition \(\mathfrak{l}=\mathfrak{l}_{1}\oplus\mathfrak{l}_{2}\), where \(\mathfrak{l}_{i}\) is the simple Lie subalgebra generated by the root subspaces of the simple roots in \(I_{i}\), \(i=1,2\). In addition, we assume that the lowest root \(\alpha_{0}\) (the extra node in the extended Dynkin diagram) belongs to \(I_{1}\). In particular, \(\mathfrak{l}_{2}\) is a simple subalgebra of the same type as \(\mathfrak{g}\) and \(\mathfrak{l}_{1}\) is a simple Lie algebra (or possibly zero). Then \(\mathfrak{l}_{1}\), if non-zero, is of type \(D\) (resp. \(C\), \(D\)) when \(\mathfrak{g}\) is of type \(B\) (resp. \(C\), \(D\)).
More explicitly, write \(k\) for the semisimple rank of \(\mathfrak{l}_{1}\) (\(0\leq k\leq n-1\)), so that \(\mathfrak{l}_{2}\) is of semisimple rank \(n-k\) (when \(k=0\), \(\mathfrak{l}=\mathfrak{l}_{2}\) is nothing else but \(\mathfrak{g}\)). Then the Cartan subalgebra \(\mathfrak{h}_{1}\simeq\mathbb{C}^{k}\) of \(\mathfrak{l}_{1}\) has coordinates \(\{e_{1},e_{2},\ldots e_{k}\}\), and the Cartan subalgebra \(\mathfrak{h}_{2}\simeq\mathbb{C}^{n-k}\) of \(\mathfrak{l}_{2}\) has coordinates \(\{e_{k+1},e_{k+2},\ldots e_{n}\}\). The Lie subalgebra \(\mathfrak{l}_{i}\) is spanned by the Cartan subalgebra \(\mathfrak{h}_{i}=\mathfrak{h}\cap\mathfrak{l}_{i}\) and the root subspaces of \(\mathfrak{g}\) of the root subsystems \(\Delta_{i}\subset\Delta\), \(i=1,2\), which we describe below. 

* If \(\mathfrak{g}=\mathfrak{so}(2n+1)\), then \[\Delta_{1}=\{\pm e_{i}\pm e_{j}\,|\,1\leq i,j\leq k,i\neq j\},\quad\Delta_{2}=\{\pm e_{i}\pm e_{j},\pm e_{i}\,|\,k+1\leq i,j\leq n,i\neq j\},\] so that \(\mathfrak{l}_{1}\simeq\mathfrak{so}(2k)\) and \(\mathfrak{l}_{2}\simeq\mathfrak{so}(2n-2k+1)\). 
* If \(\mathfrak{g}=\mathfrak{sp}(2n)\), then \[\Delta_{1}=\{\pm e_{i}\pm e_{j},\pm 2e_{i}\,|\,1\leq i,j\leq k,i\neq j\},\quad\Delta_{2}=\{\pm e_{i}\pm e_{j},\pm 2e_{i}\,|\,k+1\leq i,j\leq n,i\neq j\},\] so that \(\mathfrak{l}_{1}\simeq\mathfrak{sp}(2k)\) and \(\mathfrak{l}_{2}\simeq\mathfrak{sp}(2n-2k)\). 
* If \(\mathfrak{g}=\mathfrak{so}(2n)\), then \[\Delta_{1}=\{\pm e_{i}\pm e_{j}\,|\,1\leq i,j\leq k,i\neq j\},\quad\Delta_{2}=\{\pm e_{i}\pm e_{j}\,|\,k+1\leq i,j\leq n,i\neq j\},\] so that \(\mathfrak{l}_{1}\simeq\mathfrak{so}(2k)\) and \(\mathfrak{l}_{2}\simeq\mathfrak{so}(2n-2k)\). 

### Saturation of nilpotent orbits 

The following proposition is standard. 

**Proposition 5.7**.: _The following are true:_ 

(i) _Suppose \(\mathfrak{g}=\mathfrak{sl}(n)\). Let \(\mathfrak{m}=\mathfrak{s}(\mathfrak{gl}(a_{1})\times...\times\mathfrak{gl}(a_{t}))\) and let_ \[\mathbb{O}_{M}=\mathbb{O}_{\lambda_{1}}\times...\times\mathbb{O}_{\lambda_{t}}\in\mathsf{Orb}(M)\] _Then_ \[\operatorname{Sat}^{G}_{M}\mathbb{O}_{M}=\mathbb{O}_{\lambda}\] _where \(\lambda=\bigcup_{j=1}^{t}\lambda_{j}\)._ 

(ii) _Suppose \(\mathfrak{g}=\mathfrak{so}(2n+1)\), \(\mathfrak{sp}(2n)\) or \(\mathfrak{so}(2n)\). Let \(\mathfrak{m}=\mathfrak{gl}(a_{1})\times\cdots\times\mathfrak{gl}(a_{t})\times\mathfrak{g}(m)\) or possibly (if \(\mathfrak{g}=\mathfrak{so}(2n)\)) \(\mathfrak{m}=\mathfrak{gl}(a_{1})\times...\times\mathfrak{gl}(a_{t})^{\prime}\) (with all \(a_{i}\) even) and let_ \[\mathbb{O}_{M}=\mathbb{O}_{\lambda_{1}}\times...\times\mathbb{O}_{\lambda_{t}}\times\mathbb{O}_{\lambda^{0}}\in\mathsf{Orb}(M)\] _Then_ \[\operatorname{Sat}^{G}_{M}\mathbb{O}_{M}=\mathbb{O}_{\lambda}\] _where_ \[\lambda=\lambda^{0}\cup\bigcup_{j=1}^{t}(\lambda_{j}\cup\lambda_{j}).\] _If \(\mathfrak{g}=\mathfrak{so}(2n)\) and \(\lambda\) is very even, then \(\lambda^{0}\) is very even. In this case, \(\lambda\) and \(\lambda^{0}\) have the same decoration._ 

**Corollary 5.8** ([10], Thm 8.2.14).: _If \(\mathfrak{g}=\mathfrak{sl}(n)\), then the only distinguished nilpotent orbit is the principal one. If \(\mathfrak{g}=\mathfrak{so}(2n+1)\), \(\mathfrak{sp}(2n)\), or \(\mathfrak{so}(2n)\), then \(\mathbb{O}\) is distinguished if and only if the corresponding partition has no repeated parts._ 

### (Birational) induction of nilpotent orbits 

**Proposition 5.9** (Theorem 7.3.3, [10]).: _The following are true:_ 

(i) _Suppose \(\mathfrak{g}=\mathfrak{sl}(n)\)._
_Let \(\mathfrak{m}=\mathfrak{s}(\mathfrak{gl}(a_{1})\times...\times\mathfrak{gl}(a_{t}))\) and let \(\lambda^{i}\in\mathcal{P}(a_{i})\). Let_ \[\mathbb{O}_{M}=\mathbb{O}_{\lambda^{1}}\times\cdots\times\mathbb{O}_{\lambda^{t}}\in\mathsf{Orb}(M)\] _Then_ \[\operatorname{Ind}^{G}_{M}\mathbb{O}_{M}=\mathbb{O}_{\lambda}\] _where \(\lambda=\bigvee_{j=1}^{t}\lambda^{j}\)._ 

(ii) _Suppose \(\mathfrak{g}=\mathfrak{so}(2n+1)\), \(\mathfrak{so}(2n)\), or \(\mathfrak{sp}(2n)\). Let \(\mathfrak{m}=\mathfrak{gl}(a_{1})\times...\times\mathfrak{gl}(a_{t})\times\mathfrak{g}(m)\) or possibly (if \(\mathfrak{g}=\mathfrak{so}(2n)\)) \(\mathfrak{m}=\mathfrak{gl}(a_{1})\times...\times\mathfrak{gl}(a_{t})^{\prime}\) (with all \(a_{i}\) even). Let \(\lambda^{0}\in\mathcal{P}_{B}(2m+1)\), \(\mathcal{P}_{D}(2m)\), or \(\mathcal{P}_{C}(2m)\), and \(\lambda^{i}\in\mathcal{P}(a_{i})\) for \(1\leqslant i\leqslant t\). Consider the orbit_ \[\mathbb{O}_{M}=\mathbb{O}_{\lambda^{1}}\times...\times\mathbb{O}_{\lambda^{t}}\times\mathbb{O}_{\lambda^{0}}\in\mathsf{Orb}(M)\] _Then_ \[\operatorname{Ind}^{G}_{M}\mathbb{O}_{M}=\mathbb{O}_{\lambda}\] _where_ \[\lambda=(\lambda^{0}\vee\bigvee_{j=1}^{t}(\lambda^{j}\vee\lambda^{j}))_{B/C/D}.\] _If \(\mathfrak{g}=\mathfrak{so}(2n)\) and \(\lambda\) is very even, then its decoration can be deduced from [10, Corollary 7.3.4]._ 

**Proposition 5.10** ([13], Sections 2 and 3).: _In the setting of Proposition 5.9, \(\operatorname{Bind}_{M}^{G}\mathbb{O}_{M}=\mathbb{O}_{\lambda}\) if and only if one of the following holds:_ 

(i) \(\mathfrak{g}=\mathfrak{sl}(n).\) 

(ii) \(\lambda=(\lambda^{0}\vee\bigvee_{j=1}^{t}(\lambda^{j}\vee\lambda^{j})).\) 

(iii) \(\mathfrak{g}=\mathfrak{so}(2n)\)_, the partition \(\beta=(\lambda^{0}\vee\bigvee_{j=1}^{t}(\lambda^{j}\vee\lambda^{j}))\) has only even members, and there is exactly one distinct member \(\beta_{k}\) with odd multiplicity. Then \(\beta_{k}\) is the smallest member of \(\beta\), and \(\lambda\) is obtained from \(\beta\) by replacing \((\beta_{k})\) in \(\beta\) with \((\beta_{k}-1,1)\)._ 

**Remark 5.11**.: _We note that condition (iii) of Proposition 5.10 can be reformulated in terms of columns. The reformulated condition is as follows: \(\mathfrak{g}=\mathfrak{so}(2n)\), the partition \(\beta=(\lambda^{0}\vee\bigvee_{j=1}^{t}(\lambda^{j}\vee\lambda^{j}))\) is the join of pairs of columns \((c_{i},c_{i})\) of equal length, for \(1\leq i\leq l\), and there is exactly one distinct \(c_{j}\) that is odd. In this case \(\lambda=\beta_{D}\)._ 

### Birationally rigid covers 

Let \(\lambda\in\mathcal{P}_{\epsilon}(n)\) and \(\mathbb{O}=\mathbb{O}_{\lambda}\) be the corresponding nilpotent orbit in \(\mathfrak{g}_{\epsilon}(n)\). Following [10], we say that an integer \(m\) is _\(\lambda\)-singular_ if \(\lambda_{m}-\lambda_{m+1}\geq 2\). Let \(\mathcal{S}(\lambda)\) denote the set of \(\lambda\)-singular numbers. Then \(\mathcal{S}(\lambda)\) is in bijection with the set of all codimension \(2\) orbits in \(\overline{\mathbb{O}}\), see [11]. For each \(m\in\mathcal{S}(\lambda)\), let \(\mathbb{O}_{m}\) denote the preimage of the corresponding codimension \(2\) orbit in \(\operatorname{Spec}(\mathbb{C}[\mathbb{O}])\). The singularity of \(\operatorname{Spec}(\mathbb{C}[\mathbb{O}])\) along \(\mathbb{O}_{m}\) is equivalent to a Kleinian singularity \(\Sigma_{m}=\mathbb{C}^{2}/\Gamma_{m}\) (see Section 3.4).
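For example, if \(\lambda=[4,2^{2}]\in\mathcal{P}_{1}(8)\), then \(\lambda_{1}-\lambda_{2}=\lambda_{3}-\lambda_{4}=2\), so \(\mathcal{S}(\lambda)=\{1,3\}\) and there are exactly two codimension \(2\) orbits in \(\overline{\mathbb{O}}_{\lambda}\). In the notation of the next paragraph, removing two columns of length \(1\) (resp. \(3\)) from \(\lambda\) produces \(\lambda^{0}=[2^{3}]\in\mathcal{P}_{1}(6)\) (resp. \(\lambda^{0}=[2]\in\mathcal{P}_{1}(2)\)).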
Given \(m\in\mathcal{S}(\lambda)\), let \(\lambda^{0}\in\mathcal{P}_{\epsilon}(n-2m)\) be the partition obtained from \(\lambda\) by removing two columns of length \(m\) and let \(\mathbb{O}_{0}=\mathbb{O}_{\lambda^{0}}\) be the corresponding nilpotent orbit in \(\mathfrak{g}_{\epsilon}(n-2m)\). Then \(\mathbb{O}=\mathbb{O}_{\lambda}\) is birationally induced from the nilpotent orbit \(\{0\}\times\mathbb{O}_{0}\) in the Levi subalgebra \(\mathfrak{m}=\mathfrak{gl}(m)\times\mathfrak{g}_{\epsilon}(n-2m)\subset\mathfrak{g}_{\epsilon}(n)\) by Proposition 5.10. Recall from Section 2.4 that there is a surjective homomorphism \(\phi:A(\mathbb{O})=A(\mathbb{O}_{\lambda})\to A(\mathbb{O}_{0})\). We will compute this homomorphism in Lemma 5.12 below. Recall that by Proposition 5.2 and Remark 5.6, both \(A(\mathbb{O})\) and \(A(\mathbb{O}_{0})\) can be regarded as subgroups of the elementary abelian \(2\)-group \(A\) associated to \(\lambda\) with basis vectors \(\vartheta_{c_{i}^{\epsilon}}\) associated to certain columns of length \(c_{i}^{\epsilon}\) of \(\lambda\). 

**Lemma 5.12**.: _With the notations above, the map \(\phi:A(\mathbb{O})=A(\mathbb{O}_{\lambda})\to A(\mathbb{O}_{0})\) can be computed in the following cases:_ 

(i) _Suppose \(\lambda_{m}\in\lambda^{\epsilon}\) and \(\lambda_{m}=\lambda_{m+1}+2\)._ 

* **Type** \(C_{n}\)**:** _Let \(\lambda^{0}=(a_{2q}\geq a_{2q-1}\geq\cdots\geq a_{0})\). Then \(\phi:A(\mathbb{O})\to A(\mathbb{O}_{0})\) is given by_ 
 (a) \(\phi(\vartheta_{a_{2k-1}})=\vartheta_{a_{2k-1}}\)_, \(\forall\,1\leq k\leq q\);_ 
 (b) \(\phi(\vartheta_{m})=\vartheta_{a_{2i+1}}\)_, if \(a_{2i+1}>m>a_{2i}\);_ 
 (c) \(\phi(\vartheta_{m})=1\)_, if \(m>a_{2q}\)._ 
 _In particular the kernel of \(\phi\) is generated by all \(\vartheta_{a_{2i+1}}\vartheta_{m}\) with \(a_{2i+1}>m>a_{2i}\) and by all \(\vartheta_{m}\) with \(m>a_{2q}\)._ 
* **Type** \(B_{n}\) **(or** \(D_{n}\)**):** _Let \(\lambda^{0}=(a_{2q+1}\geq a_{2q}\geq\cdots\geq a_{0})\). Then the kernel of \(\phi\) is generated by all \(\vartheta_{a_{2i+1}}\vartheta_{m}\) with \(a_{2i+1}>m>a_{2i}\)._ 

(ii) _Otherwise, both \(A(\mathbb{O})\) and \(A(\mathbb{O}_{0})\) are identified with \(A^{\epsilon}\) by Proposition 5.2 and \(\phi\) is the identity map._ 

Proof.: We only prove the lemma for \(\mathfrak{g}=\mathfrak{sp}(2n)\). The arguments for the other types are similar, and left to the reader. As in [11, Section 5.2], we may realize \(\mathfrak{g}=\mathfrak{sp}(2n)\) as the space of matrices \[\mathfrak{sp}(2n)=\left\{\begin{pmatrix}Z_{1}&Z_{2}\\ Z_{3}&-Z_{1}^{T}\end{pmatrix}\right|Z_{i}\in M_{n}(\mathbb{C})\text{ and }Z_{2},Z_{3}\,\text{are symmetric.}\right\}, \tag{5.7.1}\] where \(Z_{1}^{T}\) stands for the transpose of \(Z_{1}\). Let \(\mathfrak{h}\subset\mathfrak{g}\) be the Cartan subalgebra of diagonal matrices of the form \[H=\begin{pmatrix}D&0\\ 0&-D\end{pmatrix},\qquad D=\operatorname{diag}(h_{1},h_{2},\ldots,h_{n})\] Define linear functionals \(e_{i}\in\mathfrak{h}^{*}\) by \(e_{i}(H)=h_{i}\) for \(1\leq i\leq n\). As in Section 5.3, the root system of \(\mathfrak{g}\) is then \(\Delta=\{\pm e_{i}\pm e_{j},\pm 2e_{i}\,|\,1\leq i,j\leq n,i\neq j\}\), and we choose \(\Delta^{+}=\{e_{i}\pm e_{j},2e_{k}\,|\,1\leq i<j\leq n,1\leq k\leq n\}\subset\Delta\) to be the set of positive roots and \(\Pi=\{e_{1}-e_{2},e_{2}-e_{3},\ldots,e_{n-1}-e_{n},2e_{n}\}\) to be the set of simple roots.
The \(\alpha\)-root space of \(\mathfrak{g}\) is spanned by the root vector \(X_{\alpha}\) defined below, where \(E_{i,j}\) is the elementary matrix having a \(1\) as its \((i,j)\)-entry and zeros elsewhere. \[X_{e_{i}-e_{j}}=E_{i,j}-E_{j+n,i+n},\] \[X_{e_{i}+e_{j}}=E_{i,j+n}+E_{j,i+n},\] \[X_{-e_{i}-e_{j}}=E_{i+n,j}+E_{j+n,i},\] \[X_{2e_{i}}=E_{i,i+n},\] \[X_{-2e_{i}}=E_{i+n,i}.\] We now construct an explicit nilpotent element \(\tilde{x}\in\mathbb{O}=\mathbb{O}_{\lambda}\) and an explicit \(\mathfrak{sl}_{2}\)-triple \(\tilde{\psi}=(\tilde{x},\tilde{y},\tilde{h})\) in \(\mathfrak{sp}(2n)\), following [13, Section 5.2.2]. We break (the rows of) \(\lambda\) up into chunks of the following two types: pairs \(\{2p+1,2p+1\}\) of equal odd parts, and single even parts \(\{2q\}\). Next we attach a set of consecutive indices in \(\{1,2,\ldots,n\}\) and the associated set of positive (but not necessarily simple) roots to each chunk \(\mathcal{C}\) as follows. If \(\mathcal{C}=\{2q\}\), choose a block \(\mathcal{B}=\mathcal{B}(2q)=\{j+1,\ldots,j+q\}\) of consecutive indices and let \(\mathcal{C}^{+}=\mathcal{C}^{+}(2q)=\{e_{j+1}-e_{j+2},\ldots,e_{j+q-1}-e_{j+q},2e_{j+q}\}\). We set \(\mathcal{B}^{\prime}=\{j+1\}\) if \(2q\geq m\) and \(\mathcal{B}^{\prime}=\emptyset\) otherwise. If \(\mathcal{C}=\{2p+1,2p+1\}\), choose a block \(\mathcal{B}=\mathcal{B}(2p+1,2p+1)=\{l+1,\ldots,l+2p+1\}\) of consecutive indices and let \(\mathcal{C}^{+}=\mathcal{C}^{+}(2p+1,2p+1)=\{e_{l+1}-e_{l+2},\ldots,e_{l+2p}-e_{l+2p+1}\}\) (note that \(\mathcal{C}^{+}\) is empty if \(\mathcal{C}=\{1,1\}\)). We set \(\mathcal{B}^{\prime}=\{l+1,l+2p+1\}\) if \(2p+1\geq m\) and \(\mathcal{B}^{\prime}=\emptyset\) otherwise. We require the blocks of indices attached to distinct chunks to be disjoint. Let \(\tilde{x}\) (\(\tilde{y}\) resp.) be the sum of all \(\alpha\)-root vectors \(X_{\alpha}\) (\(X_{-\alpha}\) resp.) for \(\alpha\) appearing in some \(\mathcal{C}^{+}\). Let \(\tilde{h}=\sum_{\mathcal{C}}\tilde{h}_{\mathcal{C}}\), where \(\tilde{h}_{\mathcal{C}}\) is defined by \[\tilde{h}_{\mathcal{C}}=\sum_{k=1}^{q}(2q-2k+1)(E_{j+k,j+k}-E_{j+k+n,j+k+n})\] if \(\mathcal{C}^{+}=\{e_{j+1}-e_{j+2},\ldots,e_{j+q-1}-e_{j+q},2e_{j+q}\}\), and \[\tilde{h}_{\mathcal{C}}=\sum_{k=0}^{2p}(2p-2k)(E_{l+1+k,l+1+k}-E_{l+1+k+n,l+1+k+n})\] if \(\mathcal{C}^{+}=\{e_{l+1}-e_{l+2},\ldots,e_{l+2p}-e_{l+2p+1}\}\). Then \(\tilde{\psi}=(\tilde{x},\tilde{y},\tilde{h})\) defines an \(\mathfrak{sl}_{2}\)-triple in \(\mathfrak{sp}(2n)\) associated to \(\mathbb{O}\). Now take \(\mathcal{G}\) to be the union of all \(\mathcal{B}^{\prime}\) and \(\tilde{\mathcal{G}}:=\mathcal{G}\cup\{i+n\,|\,i\in\mathcal{G}\}\). Let \(\mathfrak{r}\) be the Lie subalgebra of \(\mathfrak{g}=\mathfrak{sp}(2n)\) consisting of block diagonal matrices \((Z,-Z^{T})\), where \(Z\in M_{n}(\mathbb{C})\) runs over all matrices with all entries equal to zero except for the \((i,j)\)-th entries with \(i,j\in\mathcal{G}\). Then \(\mathfrak{r}\) can be naturally identified with \(\mathfrak{gl}(m)\). Let \(\mathfrak{k}\) be the Lie subalgebra of \(\mathfrak{g}\) consisting of matrices of the form (5.7.1) whose \(i\)-th row and \(j\)-th column contain only zeros whenever \(i,j\in\tilde{\mathcal{G}}\).
Then \(\mathfrak{k}\) is naturally identified with \(\mathfrak{sp}(2n-2m)\) by deleting the rows and columns with indices in \(\tilde{\mathcal{G}}\). Let \(\mathfrak{m}\) be the (direct) sum of \(\mathfrak{r}\) and \(\mathfrak{k}\), so that \(\mathfrak{m}\simeq\mathfrak{gl}(m)\times\mathfrak{sp}(2n-2m)\) is a Levi subalgebra of \(\mathfrak{sp}(2n)\) associated to the roots \[\Delta_{\mathfrak{m}}=\{e_{i}-e_{j}\}_{i,j\in\mathcal{G},i\neq j}\cup\{\pm e_{i}\pm e_{j},\pm 2e_{i}\,|\,i,j\notin\mathcal{G},i\neq j\}.\] Note that \(\mathfrak{m}\) is in general not one of the standard Levi subalgebras considered in Section 5.3. Now we reset the rows and columns with indices in \(\tilde{\mathcal{G}}\) of the matrices \(\tilde{x},\tilde{y},\tilde{h}\) to be zero and obtain matrices \(x,y,h\) lying in \(\mathfrak{k}\simeq\mathfrak{sp}(2n-2m)\). It is easy to verify that \(\psi=(x,y,h)\) forms an \(\mathfrak{sl}_{2}\)-triple in \(\mathfrak{k}\) associated to the orbit \(\mathbb{O}_{0}\) corresponding to the partition \(\lambda^{0}\). Let \(\mathfrak{q}\) be the parabolic subalgebra spanned by \(\mathfrak{m}\) and all the positive root vectors \(X_{\alpha}\), \(\alpha\in\Delta^{+}\), and let \(\mathfrak{u}\subset\mathfrak{q}\) be the nilpotent radical corresponding to the roots in \(\Delta^{+}\backslash\Delta_{\mathfrak{m}}\). By construction we have \(\tilde{x}=x+x^{\prime}\), where \(x^{\prime}\in\mathfrak{u}\), hence \(\tilde{x}\in\mathbb{O}\cap(\mathbb{O}_{0}+\mathfrak{u})\). We regard \(x\in\mathbb{O}_{0}\) as an element in \(0\times\mathfrak{sp}(2n-2m)\subset\mathfrak{gl}(m)\times\mathfrak{sp}(2n-2m)=\mathfrak{m}\). Let \(M=R\times K\simeq GL(m)\times Sp(2n-2m)\) be the Levi subgroup of \(G=Sp(2n)\) corresponding to \(\mathfrak{m}\). Let \(M^{\psi}\) and \(M^{\tilde{\psi}}\) denote the \(M\)-centralizers of \(\psi\) and \(\tilde{\psi}\) respectively. Then one can check that \(M^{\tilde{\psi}}\subset M^{\psi}\). We will describe this inclusion explicitly below. By Lemma 2.9, the map \(\phi:A(\mathbb{O})\to A(\mathbb{O}_{0})\) can be computed as the induced natural map \(\pi_{0}(M^{\tilde{\psi}})\to\pi_{0}(M^{\psi})\).

First consider case (i) in the statement of the lemma. From the construction of \(\tilde{\psi}\) and \(\psi\), we have the following group isomorphisms (cf. [1, Theorem 6.1.3]): \[M^{\tilde{\psi}}\simeq\underbrace{Sp(a_{2q}-a_{2q-1})\times\cdots\times O(a_{2i+1}-m)}_{S}\times\underbrace{O(m-a_{2i})\times\cdots\times O(a_{1}-a_{0})\times Sp(a_{0})}_{T} \tag{5.7.2}\] and \[M^{\psi}\simeq GL(m)\times\underbrace{Sp(a_{2q}-a_{2q-1})\times\cdots\times O(a_{2i+1}-a_{2i})\times\cdots\times O(a_{1}-a_{0})\times Sp(a_{0})}_{W}. \tag{5.7.3}\] We can describe the inclusion \(M^{\tilde{\psi}}\subset M\simeq GL(m)\times Sp(2n-2m)\) in terms of the isomorphism (5.7.2) as follows: both subgroups \(S\) and \(T\) are included in \(Sp(2n-2m)\) in the standard way, but \(T\) is also included in \(GL(m)\). The inclusion \(S\hookrightarrow Sp(2n-2m)\) induces an injective homomorphism \(\iota_{S}:S\hookrightarrow\{1\}\times Sp(2n-2m)\subset M\). We also have the product map \(\iota_{T}:T\hookrightarrow GL(m)\times Sp(2n-2m)\) of the two inclusions of \(T\) into \(GL(m)\) and \(Sp(2n-2m)\). It is clear that the images of \(\iota_{S}\) and \(\iota_{T}\) commute with each other, so we have an induced injective homomorphism \((\iota_{S},\iota_{T}):M^{\tilde{\psi}}=S\times T\hookrightarrow M\), which is exactly the inclusion map \(M^{\tilde{\psi}}\hookrightarrow M\). We see that the image lies in \(M^{\psi}\).
Note that the \(GL(m)\)-factor is in the identity component of \(M^{\psi}\), so \[\pi_{0}(M^{\psi})\simeq\pi_{0}(M^{\psi}/GL(m))\simeq\pi_{0}(W).\] Therefore we only need to understand the composite map \(f_{W}:M^{\tilde{\psi}}\hookrightarrow M^{\psi}\twoheadrightarrow W\). Since the composite map \(M^{\tilde{\psi}}\hookrightarrow M\twoheadrightarrow M/GL(m)\simeq Sp(2n-2m)\) is also the composition of \(f_{W}\) and the inclusion \(W\hookrightarrow K\simeq Sp(2n-2m)\), we see that \(f_{W}\) is the block-diagonal embedding \(O(a_{2i+1}-m)\times O(m-a_{2i})\hookrightarrow O(a_{2i+1}-a_{2i})\) of the two factors \(O(a_{2i+1}-m)\) and \(O(m-a_{2i})\) in (5.7.2) into the factor \(O(a_{2i+1}-a_{2i})\) in (5.7.3), and is the identity map on the remaining simple factors which appear on the right-hand sides of both (5.7.3) and (5.7.2). Passing to the level of component groups \(\pi_{0}\), we get the conclusion in (i) easily. The proof of case (ii) is similar and easier, so we omit it.

For any \(m\in\mathcal{S}(\lambda)\), let \(d_{m}=\lfloor(\lambda_{m}-\lambda_{m+1})/2\rfloor\), let \(\lambda^{0}\in\mathcal{P}_{\epsilon}(n-2md_{m})\) be the partition obtained from \(\lambda\) by removing \(2d_{m}\) columns of length \(m\), and let \(\mathbb{O}_{0}=\mathbb{O}_{\lambda^{0}}\) be the corresponding nilpotent orbit in \(\mathfrak{g}_{\epsilon}(n-2md_{m})\). Then \(\mathbb{O}\) is birationally induced from the orbit \(\{0\}\times\mathbb{O}_{0}\) in the Levi subalgebra \(\mathfrak{m}=\mathfrak{gl}(m)^{d_{m}}\times\mathfrak{g}_{\epsilon}(n-2md_{m})\subset\mathfrak{g}_{\epsilon}(n)\) by Proposition 5.10. Let \(H_{m}\) be the kernel of the map \(\phi:A(\mathbb{O})\to A(\mathbb{O}_{0})\).

**Proposition 5.13**.: _Let \(\widetilde{\mathbb{O}}\) be a cover of \(\mathbb{O}=\mathbb{O}_{\lambda}\) corresponding to a subgroup \(H\subset A(\mathbb{O})\). For \(\lambda_{m}\in\lambda\) such that \(\lambda_{m}\geq\lambda_{m+1}+2\), let \(\widetilde{\mathbb{O}}_{m}\subset\widetilde{X}=\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}])\) be the corresponding cover of \(\mathbb{O}_{m}\). Then \(\widetilde{\mathbb{O}}_{m}\) is connected and the singularity of \(\widetilde{X}\) along \(\widetilde{\mathbb{O}}_{m}\) is equivalent to \(\mathbb{C}^{2}/\Gamma_{m}^{\prime}\), where \(\Gamma_{m}^{\prime}\) is a subgroup of \(\Gamma_{m}\) of index \(2\) if \(\lambda_{m}\in\lambda^{\epsilon}\) and \(H_{m}\not\subset H\), and \(\Gamma_{m}^{\prime}=\Gamma_{m}\) otherwise._

Proof.: This is essentially Theorem 2.6 of [10]. Note that [10] deals with the adjoint groups, but the arguments in [10, Section 5] work for classical groups as well.

Thanks to Lemma 2.9 (iii), we can apply Lemma 5.12 inductively and deduce that \(H_{m}=\{1,v_{\lambda_{m}}v_{\lambda_{m+1}}\}\). In terms of columns of \(\lambda\), \(m\in\mathcal{S}(\lambda)\) means that \(m=c_{j}=c_{i}^{\epsilon}\) is a column of \(\lambda=(c_{p}\geq c_{p-1}\geq\cdots\geq c_{1})\) with multiplicity \(m_{\lambda^{t}}(m)=\lambda_{m}-\lambda_{m+1}\geq 2\), and \(H_{m}=\{1,\vartheta_{c_{j}}\vartheta_{c_{j+1}}\}\). This together with Proposition 5.13 immediately implies the following.

**Proposition 5.14**.: _Suppose \(G=G_{\epsilon}(n)\). Let \(\mathbb{O}_{\lambda}\in\mathsf{Orb}(G)\) and let \(\widetilde{\mathbb{O}}_{\lambda}\) be a \(G\)-equivariant cover of \(\mathbb{O}_{\lambda}\) corresponding to a subgroup \(H\subset A(\mathbb{O}_{\lambda})=A^{\epsilon}\).
Then \(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{\lambda}])\) has no codimension \(2\) leaves if and only if \(\lambda\) and \(H\) satisfy the following conditions:_

1. _each column_ \(c_{j}\) _of_ \(\lambda\) _has multiplicity at most_ \(2\)_, and the multiplicity can equal_ \(2\) _only when_ \(c_{j}=c_{i}^{\epsilon}\) _for some_ \(i\)_, i.e.,_ \(\operatorname{ht}_{\lambda^{t}}(c_{j})\equiv 1-\epsilon\mod 2\)_;_
2. _for each_ \(c_{j}=c_{i}^{\epsilon}\) _such that_ \(m_{\lambda}(c_{i}^{\epsilon})=2\)_, the subgroup_ \(H\) _of_ \(A(\mathbb{O})\) _does not contain the element_ \(\vartheta_{c_{j}}\vartheta_{c_{j+1}}=\vartheta_{c_{i}^{\epsilon}}\vartheta_{c_{i+1}^{\epsilon}}\)_._

**Lemma 5.15**.: _Suppose \(G=G_{\epsilon}(n)\). Let \(\mathbb{O}_{\lambda}\in\mathsf{Orb}(G)\) and let \(\widetilde{\mathbb{O}}_{\lambda}\) be a \(G\)-equivariant cover of \(\mathbb{O}_{\lambda}\) corresponding to a subgroup \(H\subset A(\mathbb{O}_{\lambda})=A^{\epsilon}\). Then \(H^{2}(\widetilde{\mathbb{O}}_{\lambda},\mathbb{C})=0\) if and only if, for each column \(c_{k}^{\epsilon}\) appearing in \(\lambda\) with multiplicity \(2\), \(H\) contains an element \(\theta\) whose decomposition as a product of \(\vartheta_{c_{i}^{\epsilon}}\)'s contains \(\vartheta_{c_{k}^{\epsilon}}\)._

Proof.: This follows immediately from Lemma 3.19 and the explicit description (in terms of rows) of the reductive centralizer of \(\mathbb{O}_{\lambda}\) for classical groups, similar to [11, Theorem 6.1.3].

### BVLS duality

For a partition \(\lambda=[\lambda_{1},...,\lambda_{k}]\in\mathcal{P}(n)\), write \[l(\lambda)=[\lambda_{1},...,\lambda_{k-1},\lambda_{k}-1]\in\mathcal{P}(n-1)\] \[e(\lambda)=[\lambda_{1},...,\lambda_{k},1]\in\mathcal{P}(n+1)\] \[\lambda^{+}=[\lambda_{1}+1,\lambda_{2},...,\lambda_{k}]\in\mathcal{P}(n+1)\]

**Proposition 5.16** ([11, Thm 5.2]).: _Suppose \(G\) is a simple group of classical type. Let \(\mathbb{O}_{\lambda}\in\mathsf{Orb}(G^{\vee})\) and write \(d(\mathbb{O}_{\lambda})=\mathbb{O}_{d(\lambda)}\). Then \(d(\lambda)\) is given by the following formulas:_

1. _If_ \(\mathfrak{g}=\mathfrak{sl}(n)\)_, then_ \(\mathfrak{g}^{\vee}=\mathfrak{sl}(n)\) _and_ \[d(\lambda)=\lambda^{t}.\]
2. _If_ \(\mathfrak{g}=\mathfrak{so}(2n+1)\)_, then_ \(\mathfrak{g}^{\vee}=\mathfrak{sp}(2n)\) _and_ \[d(\lambda)=(e(\lambda)^{t})_{B}.\]
3. _If_ \(\mathfrak{g}=\mathfrak{sp}(2n)\)_, then_ \(\mathfrak{g}^{\vee}=\mathfrak{so}(2n+1)\) _and_ \[d(\lambda)=(l(\lambda^{t}))_{C}.\]
4. _If_ \(\mathfrak{g}=\mathfrak{so}(2n)\)_, then_ \(\mathfrak{g}^{\vee}=\mathfrak{so}(2n)\) _and_ \[d(\lambda)=(\lambda^{t})_{D}.\] _If_ \(\lambda\) _is very even, then_ \(d(\lambda)=\lambda^{t}\) _is very even. If_ \(n\) _is divisible by_ \(4\)_, then_ \(\lambda\) _and_ \(d(\lambda)\) _have the same decoration. Otherwise, they have opposite decorations._

### Lusztig-Achar data

Following [1], we introduce the notion of a _reduced marked partition_.

**Definition 5.17**.: _Let \(X\in\{B,C,D\}\) and let \(\lambda\in\mathcal{P}_{X}(m)\) (where \(m\) is either even or odd, depending on \(X\)). If \(X=B\) (resp. \(C\), \(D\)), a markable part (or row) for \(\lambda\) is an odd (resp. even, odd) positive integer \(x\) such that \(m_{\lambda}(x)\geq 1\) and \(\mathrm{ht}_{\lambda}(x)\) is odd (resp. even, even)._

_A reduced marked partition of type \(X\) is a marked partition \({}^{\langle\nu\rangle}\lambda\) of type \(X\) where \(\nu\) consists of only markable parts of \(\lambda\).
We denote the subset of all reduced marked partitions in \(\tilde{\mathcal{P}}_{X}(m)\) by \(\overline{\mathcal{P}}_{X}(m)\). Again we allow decorations by Roman numerals \(I\) and \(II\) for the very even partitions of type \(D\)._

_Similar to \(\mathcal{P}_{\epsilon}(m)\), we also sometimes write \(\overline{\mathcal{P}}_{\epsilon}(m)\) for the sets defined above, where \(\epsilon\in\{0,1\}\)._

Recall that in Section 5.2, we have defined a subpartition \(\lambda^{\epsilon}=[\lambda_{1}^{\epsilon},\lambda_{2}^{\epsilon},\cdots,\lambda_{r}^{\epsilon}]\) and an elementary abelian \(2\)-group \(A\simeq(\mathbb{Z}_{2})^{r}\) with basis \(\{v_{\lambda_{i}^{\epsilon}}\}\) and the subgroups \(A^{\epsilon}\) for \(\epsilon=0,1\). Let \(\lambda^{m}=[\lambda_{1}^{m}>\lambda_{2}^{m}>\cdots>\lambda_{l}^{m}>0]\) be the subpartition of \(\lambda^{\epsilon}\) consisting of all the markable parts of \(\lambda\) in the sense of Definition 5.17. We also make the convention that \(\lambda_{0}^{m}=\infty\) and \(\lambda_{l+1}^{m}=0\). Let \(N\) denote the kernel of the composition \(A^{\epsilon}\simeq A(\mathbb{O}_{\lambda})\twoheadrightarrow\bar{A}(\mathbb{O}_{\lambda})\) (see Proposition 5.2). We describe the group \(N\) below, which follows easily from the discussions in [12, Section 5].

**Proposition 5.18**.: _The subgroup \(N\) of \(A^{\epsilon}\) is generated by the elements of the form \(v_{\lambda_{i}^{\epsilon}}v_{\lambda_{j}^{m}}\) such that \(\lambda_{j}^{m}\leq\lambda_{i}^{\epsilon}<\lambda_{j-1}^{m}\) for all \(1\leq j\leq l+1\) (which implies that \(v_{\lambda_{i}^{\epsilon}}=v_{\lambda_{i}^{\epsilon}}v_{0}\) always lies in \(N\) for \(\lambda_{i}^{\epsilon}<\lambda_{l}^{m}\))._

**Corollary 5.19**.: _Suppose \(G\) is a simple classical group. Let \(\mathbb{O}_{\lambda}\in\mathsf{Orb}(G)\). If \(\mathfrak{g}=\mathfrak{sl}(n)\), then \(\bar{A}(\mathbb{O}_{\lambda})\simeq 1\). Otherwise, \(\bar{A}(\mathbb{O}_{\lambda})\simeq(\mathbb{Z}_{2})^{d}\), where_ \[d=\begin{cases}\#\text{ of markable parts}&\text{if }\mathfrak{g}=\mathfrak{sp}(2n)\\ \#\text{ of markable parts}-1&\text{if }\mathfrak{g}=\mathfrak{so}(2n+1)\text{ or }\mathfrak{so}(2n)\end{cases}\]

**Remark 5.20**.: _We can also describe \(\bar{A}(\mathbb{O})\) in terms of columns, as we did for \(A(\mathbb{O})\) in Remark 5.6, as follows. We can introduce the notion of a markable column of \(\lambda\) as the dual to a markable part/row, namely those \(c_{i}^{\epsilon}\) whose corresponding \(\lambda_{i}^{\epsilon}\) are markable in the sense of Definition 5.17. More concretely, if \(X=B\) (resp. \(C\), \(D\)), a markable column of \(\lambda\) is an odd (resp. even, even) positive integer \(y\) such that \(m_{\lambda^{t}}(y)\geq 1\) and \(\mathrm{ht}_{\lambda^{t}}(y)\) is odd (resp. even, odd). Let \(c^{m}=(c_{l}^{m}>c_{l-1}^{m}>\cdots>c_{1}^{m}>0)\) denote the markable columns of \(\lambda\). We also make the convention that \(c_{0}^{m}=0\), \(c_{l+1}^{m}=\infty\) and \(\vartheta_{\infty}=1\). Then the subgroup \(N\) of \(A^{\epsilon}\) is generated by the elements of the form \(\vartheta_{c_{i}^{\epsilon}}\vartheta_{c_{j}^{m}}\) such that \(c_{j-1}^{m}<c_{i}^{\epsilon}\leq c_{j}^{m}\) for all \(1\leq j\leq l+1\) (which implies that \(\vartheta_{c_{i}^{\epsilon}}=\vartheta_{c_{i}^{\epsilon}}\vartheta_{\infty}\) always lies in \(N\) for \(c_{i}^{\epsilon}>c_{l}^{m}\))._

For \(G\) simple classical, we have the following parametrizations of \(\mathsf{LA}(G)\) (see [1, Section 3.4]). Here we adopt the notations of Section 5.2.
**Proposition 5.21**.: _Suppose \(\mathfrak{g}\) is a simple Lie algebra of classical type. Then the following are true:_

1. _Suppose_ \(\mathfrak{g}=\mathfrak{sl}(n)\)_. Then_ \(\bar{A}(\mathbb{O})\simeq 1\) _for every_ \(\mathbb{O}\in\mathsf{Orb}(G)\)_. In particular, there is a bijection_ \(\mathcal{P}(n)\xrightarrow{\sim}\mathsf{LA}(G)\)_._
2. _Suppose_ \(\mathfrak{g}=\mathfrak{g}_{\epsilon}(m)\)_,_ \(\epsilon\in\{0,1\}\)_, is of type_ \(B\)_,_ \(C\) _or_ \(D\)_. Then the composite map_ \[\overline{\mathcal{P}}_{\epsilon}(m)\hookrightarrow\tilde{\mathcal{P}}_{\epsilon}(m)\xrightarrow{\sim}\mathsf{Conj}(G_{\epsilon}(m))\twoheadrightarrow\mathsf{LA}(G_{\epsilon}(m))\] _is a bijection, where_ \(\overline{\mathcal{P}}_{\epsilon}(m)\hookrightarrow\tilde{\mathcal{P}}_{\epsilon}(m)\) _is the natural inclusion map,_ \(\tilde{\mathcal{P}}_{\epsilon}(m)\xrightarrow{\sim}\mathsf{Conj}(G_{\epsilon}(m))\) _is the bijection in Corollary_ 5.4_, and_ \(\mathsf{Conj}(G_{\epsilon}(m))\twoheadrightarrow\mathsf{LA}(G_{\epsilon}(m))\) _is the natural projection._

### Special Lusztig-Achar data

Let \(G\) be a simple group of type \(X\in\{B,C,D\}\). By Proposition 5.21, \(\mathsf{LA}(G)\) is parameterized by \(\overline{\mathcal{P}}_{X}(m)\). Recall the subset \(\mathsf{LA}^{*}(G)\subset\mathsf{LA}(G)\) of special Lusztig-Achar data, cf. Section 2.11. Let \(\overline{\mathcal{P}}_{X}^{*}(m)\) denote the subset of \(\overline{\mathcal{P}}_{X}(m)\) corresponding to \(\mathsf{LA}^{*}(G)\). The elements of \(\overline{\mathcal{P}}_{X}^{*}(m)\) are called _special marked partitions_.

**Proposition 5.22** (Propositions 5.8 and 5.16, [1]).: _A reduced marked partition \({}^{\langle\nu\rangle}\lambda\) of type \(B\) (resp. \(C\), \(D\)) is special if and only if there are no even (resp. odd, even) parts \(x\) of \(\lambda\) such that \(\mathrm{ht}_{\nu}(x)\) is odd and \(\mathrm{ht}_{\lambda}(x)\) is odd (resp. even, even)._

### Saturation of conjugacy data

The following is elementary. We leave the straightforward verification to the reader.

**Proposition 5.23**.: _Let \(G=G_{\epsilon}(m)\). Let \(M=GL(a_{1})\times...\times GL(a_{t})\times G_{\epsilon}(n)\) or possibly (if \(G\) is even orthogonal) \(M=GL(a_{1})\times...\times GL(a_{t})^{\prime}\) (with all \(a_{i}\) even) and suppose \((\mathbb{O}_{M},C_{M})\in\mathsf{Conj}(M)\simeq\mathsf{Conj}(GL(a_{1}))\times...\times\mathsf{Conj}(GL(a_{t}))\times\mathsf{Conj}(G_{\epsilon}(n))\) corresponds, under the map of Corollary 5.4, to a tuple_ \[(\lambda^{1},...,\lambda^{t},{}^{\langle\nu^{0}\rangle}\lambda^{0})\in\mathcal{P}(a_{1})\times...\times\mathcal{P}(a_{t})\times\tilde{\mathcal{P}}_{\epsilon}(n)\] _Then \(\mathrm{Sat}_{M}^{G}(\mathbb{O}_{M},C_{M})\) corresponds to the marked partition \({}^{\langle\nu\rangle}\lambda\), where_ \[\lambda=\lambda^{0}\cup\bigcup_{j=1}^{t}(\lambda^{j}\cup\lambda^{j}),\qquad\nu=\nu^{0}\]

**Corollary 5.24**.: _Let \(G=G_{\epsilon}^{ad}(m)\). Suppose \((\mathbb{O},C)\in\mathsf{Conj}(G)\) corresponds under the map of Corollary 5.5 to a marked partition \({}^{\langle\nu\rangle}\lambda\in\tilde{\mathcal{P}}_{\epsilon}(m)\). Then \((\mathbb{O},C)\) is distinguished if and only if the following equivalent conditions are satisfied:_

1. \(\nu\) _and_ \(\eta=\lambda\backslash\nu\) _are distinguished partitions, i.e., all their members have multiplicity_ \(1\)_._
2. _For all_ \(j\)_,_ \(\lambda_{j}\not\equiv\epsilon\mod 2\)_, i.e.,_ \(\lambda=\lambda^{\epsilon}\)_.
Moreover,_ \(m_{\lambda}(x)\leq 2\) _for any_ \(x\)_, and if_ \(m_{\lambda}(x)=2\)_, then_ \(x\in\nu\)_._
3. _Under the isomorphism_ \(\mathsf{Som}(G)\xrightarrow{\sim}\mathsf{Conj}(G)\) _of Lemma_ 2.18_,_ \((\mathbb{O},C)\in\mathsf{Conj}(G)\) _corresponds to the pair_ \((\mathfrak{l}_{1}\times\mathfrak{l}_{2},\mathbb{O}_{\nu}\times\mathbb{O}_{\eta})\)_, where_ \(\mathfrak{l}_{1}\times\mathfrak{l}_{2}\) _is a maximal pseudo-Levi subalgebra of_ \(\mathfrak{g}\) _(see Section_ 5.4_) and_ \(\mathbb{O}_{\nu}\) _and_ \(\mathbb{O}_{\eta}\) _are distinguished nilpotent orbits in the simple classical Lie algebras_ \(\mathfrak{l}_{1}\) _and_ \(\mathfrak{l}_{2}\) _corresponding to the partitions_ \(\nu\) _and_ \(\eta\) _respectively._

**Remark 5.25**.: _Note that when \(\mathfrak{g}\) is of type \(C\) or \(D\), the simple factors \(\mathfrak{l}_{1}\) and \(\mathfrak{l}_{2}\) are of the same type, and \((\mathfrak{l}_{1}\times\mathfrak{l}_{2},\mathbb{O}_{\nu}\times\mathbb{O}_{\eta})\) is \(G_{\epsilon}^{ad}(m)\)-conjugate to \((\mathfrak{l}_{2}\times\mathfrak{l}_{1},\mathbb{O}_{\eta}\times\mathbb{O}_{\nu})\) in \(\mathsf{Som}(G_{\epsilon}^{ad}(m))\). This corresponds to the fact that both \({}^{\langle\nu\rangle}\lambda\) and \({}^{\langle\eta\rangle}\lambda\) correspond to the same pair \((\mathbb{O},C)\in\mathsf{Conj}(G_{\epsilon}^{ad}(m))\) as in Corollary 5.5._

### Saturation of Lusztig-Achar data

The following is elementary. We leave the straightforward verification to the reader.

**Proposition 5.26**.: _Let \(G=G_{\epsilon}(m)\). Let \(M=GL(a_{1})\times...\times GL(a_{t})\times G_{\epsilon}(n)\) or possibly (if \(G\) is even orthogonal) \(M=GL(a_{1})\times...\times GL(a_{t})^{\prime}\) (with all \(a_{i}\) even) and suppose \((\mathbb{O}_{M},\bar{C}_{M})\in\mathsf{LA}(M)\simeq\mathsf{LA}(GL(a_{1}))\times...\times\mathsf{LA}(GL(a_{t}))\times\mathsf{LA}(G)\) corresponds, under the bijections of Proposition 5.21, to a tuple_ \[(\lambda^{1},...,\lambda^{t},{}^{\langle\nu^{0}\rangle}\lambda^{0})\in\mathcal{P}(a_{1})\times...\times\mathcal{P}(a_{t})\times\overline{\mathcal{P}}_{\epsilon}(n)\] _Then \(\mathrm{Sat}_{M}^{G}(\mathbb{O}_{M},\bar{C}_{M})\) corresponds to the reduced marked partition \({}^{\langle\nu\rangle}\lambda\), where_ \[\lambda=\lambda^{0}\cup\bigcup_{j=1}^{t}(\lambda^{j}\cup\lambda^{j}),\qquad\nu=\nu^{0}\]

**Corollary 5.27**.: _Suppose \({}^{\langle\nu\rangle}\lambda\) is the reduced marked partition associated to a Lusztig-Achar datum \((\mathbb{O},\bar{C})\in\mathsf{LA}(G)\). Then \((\mathbb{O},\bar{C})\) is distinguished if and only if the following equivalent conditions are satisfied:_

1. \(\nu\) _and_ \(\eta=\lambda\backslash\nu\) _are distinguished partitions._
2. _For all_ \(j\)_,_ \(\lambda_{j}\not\equiv\epsilon\mod 2\)_, i.e.,_ \(\lambda=\lambda^{\epsilon}\)_. Moreover,_ \(m_{\lambda}(x)\leqslant 2\) _for any_ \(x\)_, and if_ \(m_{\lambda}(x)=2\)_, then_ \(x\in\nu\) _(which implies that_ \(x\) _is markable in the sense of Section_ 5.9_)._

Now we give an alternative description of \(\bar{A}(\mathbb{O})\) for \(\mathbb{O}=\mathbb{O}_{\lambda}\) in the special case when there is a distinguished Lusztig-Achar datum \((\mathbb{O},\bar{C})\in\mathsf{LA}(G)\). Recall that in Section 5.2, we have defined a subpartition \(\lambda^{\epsilon}=[\lambda_{1}^{\epsilon},\lambda_{2}^{\epsilon},\cdots,\lambda_{r}^{\epsilon}]\) and an elementary abelian \(2\)-group \(A\simeq(\mathbb{Z}_{2})^{r}\) with basis \(\{\upsilon_{\lambda_{i}^{\epsilon}}\}\) and the subgroups \(A^{\epsilon}\) for \(\epsilon=0,1\).
By Corollary 5.27, we have \(\lambda=\lambda^{\epsilon}\). Let \(N\) denote the kernel of the map \(A^{\epsilon}=A(\mathbb{O}_{\lambda})\twoheadrightarrow\bar{A}(\mathbb{O}_{\lambda})\) (see Proposition 5.2). The next proposition follows easily from the discussion in [1, Section 5].

**Proposition 5.28**.: _The following are true:_

1. _If_ \(\mathfrak{g}\) _is of type_ \(B\)_, then_ \(N\) _is the subgroup of_ \(A^{\epsilon}\) _generated by the elements_ \(\upsilon_{\lambda_{2i}}\upsilon_{\lambda_{2i+1}}\) _for all_ \(i\geqslant 1\)_._
2. _If_ \(\mathfrak{g}\) _is of type_ \(C\) _or_ \(D\)_, then_ \(N\) _is the subgroup of_ \(A^{\epsilon}\) _generated by the elements_ \(\upsilon_{\lambda_{2i-1}}\upsilon_{\lambda_{2i}}\) _for all_ \(i\geqslant 1\) _(note that when_ \(\mathfrak{g}\) _is of type_ \(C\) _and_ \(r\) _is odd, this implies that_ \(\upsilon_{\lambda_{r}}=\upsilon_{\lambda_{r}}\upsilon_{0}\) _lies in_ \(N\)_)._

Now we define a special basis of \(\bar{A}(\mathbb{O}_{\lambda})\).

**Definition 5.29**.: _With the notations above, define the following elements in \(A^{\epsilon}\):_

* \(\theta_{i}:=\upsilon_{\lambda_{2i-1}}\upsilon_{\lambda_{2i+1}}\) _for_ \(1\leqslant i\leqslant\frac{r-1}{2}\)_, if_ \(\mathfrak{g}\) _is of type_ \(B\)_;_
* \(\theta_{i}:=\upsilon_{\lambda_{2i}}\upsilon_{\lambda_{2i+2}}\) _for_ \(1\leqslant i\leqslant\lfloor\frac{r}{2}\rfloor\)_, if_ \(\mathfrak{g}\) _is of type_ \(C\)_;_
* \(\theta_{i}:=\upsilon_{\lambda_{2i}}\upsilon_{\lambda_{2i+2}}\) _for_ \(1\leqslant i\leqslant\frac{r}{2}-1\)_, if_ \(\mathfrak{g}\) _is of type_ \(D\)_._

_Note that when \(\mathfrak{g}\) is of type \(C\), by our convention \(\theta_{k}=\upsilon_{\lambda_{2k}}\upsilon_{0}=\upsilon_{\lambda_{2k}}\) if \(r=2k\) or \(2k+1\)._

The various \(\theta_{i}\)'s generate a subgroup \(K(\mathbb{O}_{\lambda})\) of \(A(\mathbb{O}_{\lambda})=A^{\epsilon}\) which maps isomorphically onto its image under the quotient map \(A(\mathbb{O}_{\lambda})\twoheadrightarrow\bar{A}(\mathbb{O}_{\lambda})\). In other words, we get a splitting \(s:\bar{A}(\mathbb{O}_{\lambda})\to A(\mathbb{O}_{\lambda})\) whose image is the subgroup \(K(\mathbb{O}_{\lambda})\). We write \(\bar{\theta}_{i}\) for the image of \(\theta_{i}\) in \(\bar{A}(\mathbb{O}_{\lambda})\), and so \(\{\bar{\theta}_{i}\}\) forms a basis of \(\bar{A}(\mathbb{O}_{\lambda})\). Note that the group \(K(\mathbb{O}_{\lambda})\) is mapped isomorphically to its image under the quotient map \(A(\mathbb{O}_{\lambda})\twoheadrightarrow A^{ad}(\mathbb{O}_{\lambda})\), so we can also regard \(K(\mathbb{O}_{\lambda})\) as a subgroup of \(A^{ad}(\mathbb{O}_{\lambda})\). Then \(K(\mathbb{O}_{\lambda})\subset A^{ad}(\mathbb{O}_{\lambda})\) is exactly the group in Lemma 2.16. Elements in \(A^{0}\), i.e., those that can be written as products of an even number of \(\upsilon_{\lambda_{i}}\), e.g., \(\theta_{i}\), will also be identified with their images in \(A^{ad}(\mathbb{O}_{\lambda})\) by abuse of notation.

### Sommers duality

**Proposition 5.30** ([5, Theorem 12]).: _Let \(G^{\vee}=G_{\epsilon}(m)\) and let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\) correspond to a reduced marked partition \({}^{\langle\nu\rangle}\lambda\in\overline{\mathcal{P}}_{X}(m)\) (see Proposition 5.21). Let \(\mathbb{O}_{d_{S}({}^{\langle\nu\rangle}\lambda)}=d_{S}(\mathbb{O}^{\vee},\bar{C})\). Then, writing \(\eta=\lambda\backslash\nu\), the partition \(d_{S}({}^{\langle\nu\rangle}\lambda)\) is given by the following formulas:_

1.
_If_ \(\mathfrak{g}^{\vee}=\mathfrak{sp}(2n)\)_, then_ \(\mathfrak{g}=\mathfrak{so}(2n+1)\) _and_ \[d_{S}({}^{\langle\nu\rangle}\lambda)=((\nu\cup((\eta)^{+})_{B})^{t})_{B}.\]
2. _If_ \(\mathfrak{g}^{\vee}=\mathfrak{so}(2n+1)\)_, then_ \(\mathfrak{g}=\mathfrak{sp}(2n)\) _and_ \[d_{S}({}^{\langle\nu\rangle}\lambda)=((\nu\cup l(\eta)_{C})^{t})_{C}.\]
3. _If_ \(\mathfrak{g}^{\vee}=\mathfrak{so}(2n)\)_, then_ \(\mathfrak{g}=\mathfrak{so}(2n)\) _and_ \[d_{S}({}^{\langle\nu\rangle}\lambda)=((\nu\cup((\eta^{t})_{D})^{t})^{t})_{D}.\]

Now assume that \((\mathbb{O}^{\vee},\bar{C})\) is a distinguished pair in \(\mathsf{LA}(G^{\vee})\). We will choose lifts \(\tilde{\theta}_{i}\in A(\mathbb{O}^{\vee})\) of \(\bar{\theta}_{i}\in\bar{A}(\mathbb{O}^{\vee})\) different from those of Definition 5.29 as follows.

**Definition 5.31**.: _Let \(G^{\vee}=G_{\epsilon}(m)\). Define the following elements in \(A(\mathbb{O}^{\vee})=A^{\epsilon}\):_

* \(\tilde{\theta}_{i}:=\upsilon_{\lambda_{2i-1}}\upsilon_{\lambda_{2i}}\) _for_ \(1\leq i\leq\frac{r-1}{2}\)_, if_ \(\mathfrak{g}^{\vee}\) _is of type_ \(B\)_;_
* \(\tilde{\theta}_{i}:=\upsilon_{\lambda_{2i}}\upsilon_{\lambda_{2i+1}}\) _for_ \(1\leq i\leq\lfloor\frac{r}{2}\rfloor\)_, if_ \(\mathfrak{g}^{\vee}\) _is of type_ \(C\)_;_
* \(\tilde{\theta}_{i}:=\upsilon_{\lambda_{2i}}\upsilon_{\lambda_{2i+1}}\) _for_ \(1\leq i\leq\frac{r}{2}-1\)_, if_ \(\mathfrak{g}^{\vee}\) _is of type_ \(D\)_._

_Note that when \(\mathfrak{g}^{\vee}\) is of type \(C\), by our convention \(\tilde{\theta}_{k}=\upsilon_{\lambda_{2k}}\upsilon_{0}=\upsilon_{\lambda_{2k}}\) if \(r=2k\) or \(2k+1\). This induces a splitting \(\tilde{s}:\bar{A}(\mathbb{O}^{\vee})\hookrightarrow A(\mathbb{O}^{\vee})\) defined by \(\tilde{s}(\bar{\theta}_{i})=\tilde{\theta}_{i}\)._

**Definition 5.32**.: _Suppose \((\mathbb{O}^{\vee},\bar{C})\) is a distinguished pair in \(\mathsf{LA}(G^{\vee})\). Set \(C_{0}:=\tilde{s}(\bar{C})\in A(\mathbb{O}^{\vee})=A^{\epsilon}\). The marked partition associated to \(C_{0}\) (via Corollary 5.4) is denoted by \({}^{\langle\nu_{0}\rangle}\lambda\) with \(\lambda=\nu_{0}\cup\eta_{0}\)._

We now simplify the formulas in Proposition 5.30 in the case when \((\mathbb{O}^{\vee},\bar{C})\) is distinguished. We first introduce a new operation on partitions.

**Definition 5.33**.: _For any distinguished partition \(p=[p_{1},p_{2},\ldots,p_{l}]\) such that all \(p_{i}\) have the same parity, define \(p^{\uparrow}:=[p_{1}^{\uparrow},p_{2}^{\uparrow},\ldots,p_{l}^{\uparrow}]\) to be the partition such that \(p_{i}^{\uparrow}=p_{i}+1\) for odd \(i\) and \(p_{i}^{\uparrow}=p_{i}-1\) for even \(i\). (In terms of Young diagrams, what we do here is to move one box from the second row to the first one, from the fourth row to the third one, and so on.)_

**Lemma 5.34**.: _Suppose \((\mathbb{O}^{\vee},\bar{C})\) is a distinguished pair in \(\mathsf{LA}(G^{\vee})\) and \({}^{\langle\nu\rangle}\lambda\) is the corresponding reduced marked partition. Then we have_

* \(d_{S}({}^{\langle\nu\rangle}\lambda)=(\nu_{0}\cup l(\eta_{0})_{C})^{t}\) _if_ \(\mathfrak{g}^{\vee}\) _is of type_ \(B\)_;_
* \(d_{S}({}^{\langle\nu\rangle}\lambda)=(\nu_{0}\cup\eta_{0}^{+})^{t}\) _if_ \(\mathfrak{g}^{\vee}\) _is of type_ \(C\)_;_
* \(d_{S}({}^{\langle\nu\rangle}\lambda)=(\nu_{0}\cup\eta_{0}^{\uparrow})^{t}\) _if_ \(\mathfrak{g}^{\vee}\) _is of type_ \(D\)_._

Proof.: One can argue by induction using the block decomposition of Section 5.14 and Proposition 5.38, then apply Proposition 5.30 and Corollary 5.27. We leave the details to the reader.
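To illustrate Definition 5.33: for \(p=[7,5,3,1]\) (a distinguished partition with all members odd), we get \(p^{\uparrow}=[8,4,4,0]=[8,4,4]\), i.e., one box moves from the second row to the first and one from the fourth row to the third. Note that \(|p^{\uparrow}|=|p|\) and all members of \(p^{\uparrow}\) are even. This operation enters the type \(D\) formula of Lemma 5.34 through \(\eta_{0}^{\uparrow}\), and the infinitesimal character formula of Lemma 5.44 below through \(\nu_{0}^{\uparrow}\).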
### Block decompositions

If \({}^{\langle\nu\rangle}\lambda\), \({}^{\langle\nu^{1}\rangle}\lambda^{1}\) and \({}^{\langle\nu^{2}\rangle}\lambda^{2}\) are marked partitions, we write \({}^{\langle\nu\rangle}\lambda={}^{\langle\nu^{1}\rangle}\lambda^{1}\cup{}^{\langle\nu^{2}\rangle}\lambda^{2}\) if \(\lambda=\lambda^{1}\cup\lambda^{2}\) and \(\nu=\nu^{1}\cup\nu^{2}\). We say that \(\lambda^{1}\) is _evenly_ (resp. _oddly_) _superior_ to \(\lambda^{2}\) if there is an even (resp. odd) integer \(m\) such that \(\lambda^{1}_{\#\lambda^{1}}\geqslant m\geqslant\lambda^{2}_{1}\).

**Definition 5.35**.: _Let \({}^{\langle\nu\rangle}\lambda\) be a marked partition of type \(B\) (resp. \(C\), \(D\)). A block decomposition of \({}^{\langle\nu\rangle}\lambda\) is a decomposition of \({}^{\langle\nu\rangle}\lambda\) into marked partitions_ \[{}^{\langle\nu\rangle}\lambda={}^{\langle\nu^{1}\rangle}\lambda^{1}\cup...\cup{}^{\langle\nu^{k}\rangle}\lambda^{k}\] _such that_

1. \({}^{\langle\nu^{i}\rangle}\lambda^{i}\) _is a marked partition of type_ \(D\) _(resp._ \(C\)_,_ \(D\)_) for_ \(i>1\) _and_ \({}^{\langle\nu^{1}\rangle}\lambda^{1}\) _is a marked partition of type_ \(B\) _(resp._ \(C\)_,_ \(D\)_)._
2. _If_ \(X=C\)_,_ \(\#\lambda^{i}\) _is even for_ \(1\leqslant i\leqslant k-1\) _and_ \(\#\nu^{i}\) _is even for all_ \(i\)_, where the last part of_ \(\nu^{k}\) _is allowed to be_ \(0\)_._
3. \(\lambda^{i}\) _is evenly (resp. oddly, evenly) superior to_ \(\lambda^{i+1}\) _for all_ \(i\)_._

_We say that a marked partition \({}^{\langle\nu\rangle}\lambda\) of type \(B\) (resp. \(C\), \(D\)) is a basic block if \(\nu\) has two elements, namely the smallest part of \(\lambda\) and the largest part of \(\lambda\) of odd (resp. even, even) height, or if \(X=C\) and \(\nu\) is a singleton consisting of the largest odd part of \(\lambda\) of even height._

**Proposition 5.36** ([1, Proposition 4.11]).: _Every reduced marked partition admits a block decomposition_ \[{}^{\langle\nu\rangle}\lambda={}^{\langle\nu^{1}\rangle}\lambda^{1}\cup...\cup{}^{\langle\nu^{k}\rangle}\lambda^{k}\] _such that for each \(i\), either \(\nu^{i}=\emptyset\) or \({}^{\langle\nu^{i}\rangle}\lambda^{i}\) is a basic block._

We call the block decomposition in Proposition 5.36 a _block decomposition into basic blocks_.

**Remark 5.37**.: _It is clear from the definitions that each block in a block decomposition of a special marked partition is special._

For a partition \(\lambda\), let \(\lambda_{-}=l(\lambda^{t})^{t}\). The following is immediate from [1, Proposition 4.9].

**Proposition 5.38**.: _Let \(X\in\{B,C,D\}\) and let \({}^{\langle\nu\rangle}\lambda\in\overline{\mathcal{P}}_{X}(m)\). Suppose that \({}^{\langle\nu\rangle}\lambda={}^{\langle\nu^{1}\rangle}\lambda^{1}\cup{}^{\langle\nu^{2}\rangle}\lambda^{2}\cup\ldots\cup{}^{\langle\nu^{k}\rangle}\lambda^{k}\) is a block decomposition. Then_

1. _If_ \(X=B\)_, then_ \[d_{S}({}^{\langle\nu\rangle}\lambda)=d_{S}({}^{\langle\nu^{1}\rangle}\lambda^{1})\vee d_{S}({}^{\langle\nu^{2}\rangle}\lambda^{2})\vee\ldots\vee d_{S}({}^{\langle\nu^{k}\rangle}\lambda^{k}).\]
2. _If_ \(X=C\)_, then_ \[d_{S}({}^{\langle\nu\rangle}\lambda)=d_{S}({}^{\langle\nu^{1}\rangle}\lambda^{1})\centerdot d_{S}({}^{\langle\nu^{2}\rangle}\lambda^{2})\centerdot\ldots\centerdot d_{S}({}^{\langle\nu^{k-1}\rangle}\lambda^{k-1})\centerdot d_{S}({}^{\langle\nu^{k}\rangle}\lambda^{k}).\]
3.
_If_ \(X=D\)_, then_ \[d_{S}({}^{\langle\nu\rangle}\lambda)=d_{S}({}^{\langle\nu^{1}\rangle}\lambda^{1})\vee d_{S}({}^{\langle\nu^{2}\rangle}\lambda^{2})\vee\ldots\vee d_{S}({}^{\langle\nu^{k}\rangle}\lambda^{k})\]

### Unipotent infinitesimal characters

Let \(G\) be a simple classical group of type \(B\), \(C\), or \(D\), and let \(\widehat{\mathbb{O}}\in\mathsf{Cov}(G)\) be a birationally rigid nilpotent cover. In this section, we will recall formulas from [1, Section 8.2] computing the unipotent infinitesimal character \(\gamma(\widehat{\mathbb{O}})\) attached to \(\widehat{\mathbb{O}}\) in terms of the partition for \(\mathbb{O}\).

**Definition 5.39** (Definition 8.2.1, [11]).: _Suppose \(q=[q_{1},q_{2},\ldots,q_{l}]\) is a partition of \(n\). Define \(\rho^{+}(q)\in\left(\frac{1}{2}\mathbb{Z}\right)^{\left\lfloor\frac{n}{2}\right\rfloor}\) by appending the positive elements of the sequence_ \[\left(\frac{q_{i}-1}{2},\frac{q_{i}-3}{2},\ldots,\frac{3-q_{i}}{2},\frac{1-q_{i}}{2}\right)\] _for each \(i\geq 1\), and then adding \(0\)'s if necessary so that the length of the sequence \(\rho^{+}(q)\) equals \(\left\lfloor\frac{n}{2}\right\rfloor\). For example, \(\rho^{+}([4,3])=(\frac{3}{2},\frac{1}{2},1)\)._

**Remark 5.40**.: _If \(p\) is a partition with all even members and \(q\) is a partition with all odd members, we have \(\rho^{+}(p\cup q)=\rho^{+}(p)\cup\rho^{+}(q)\)._

**Definition 5.41** (Definition 8.2.2, [11]).: _Let \(q\in\mathcal{P}(n)\) be a partition. Define \(f_{0}(q)\in\mathcal{P}(n)\) as follows: for every odd \(i\) with \(q_{i}\geq q_{i+1}+2\), replace \([q_{i},q_{i+1}]\) in \(q\) by \([q_{i}-1,q_{i+1}+1]\). Define \(f_{1}(q)\in\mathcal{P}(n+1)\) as follows: for every even \(i\) with \(q_{i}\geq q_{i+1}+2\), replace \([q_{i},q_{i+1}]\) in \(q\) by \([q_{i}-1,q_{i+1}+1]\) and finally replace \(q_{1}\) by \(q_{1}+1\). If \(q=\emptyset\) is the empty partition, define \(f_{1}(q)=(1)\). For example, \(f_{0}([3,1])=[2,2]\) and \(f_{1}([3,1])=[4,1]\)._

**Definition 5.42** (Definition 8.2.7, [11]).: _Let \(q\) be a partition. Let \(x(q)\) be the subpartition of \(q\) consisting of all multiplicity \(1\) parts and let \(y(q)\) be the subpartition of \(q\) consisting of all multiplicity \(2\) parts._

_Suppose \(y\) is a partition such that every part of it has multiplicity 2. Define a partition \(g(y)\) (of the same size as \(y\)) by replacing every pair \([y_{i},y_{i}]\) with \([y_{i}+1,y_{i}-1]\)._

**Proposition 5.43** (Proposition 8.2.8, [11]).: _Let \(p\) be the partition corresponding to \(\mathbb{O}\). Form the partitions \(x=x(p^{t})\) and \(y=y(p^{t})\). Then for \(G=G_{\epsilon}(m)\) the infinitesimal character \(\gamma(\widehat{\mathbb{O}})\) is given by the following formula_ \[\gamma(\widehat{\mathbb{O}})=\rho^{+}(g(y)\cup f_{\epsilon}(x)).\]

**Lemma 5.44**.: _Suppose \(\mathfrak{g}\) is of classical type and \((\mathbb{O}^{\vee},\bar{C})\) is a distinguished pair in \(\mathsf{LA}(G^{\vee})\) with the corresponding reduced marked partition \({}^{\langle\nu\rangle}\lambda\). Then for the cover \(\widehat{\mathbb{O}}=D(\mathbb{O}^{\vee},\bar{C})\), we have \(\gamma(\widehat{\mathbb{O}})=\rho^{+}(\nu_{0}^{\uparrow}\cup\eta_{0})\)._

Proof.: Suppose \(\mathfrak{g}^{\vee}\) is of type \(B\), so that \(\mathfrak{g}\) is of type \(C\). Let \(p\) be the transpose of the partition of \(d_{S}({}^{\langle\nu\rangle}\lambda)\). Then by Lemma 5.34, \(p=\nu_{0}\cup l(\eta_{0})_{C}\). A partition of this form has the property that any member of it has multiplicity at most \(2\), and furthermore, any multiplicity-\(2\) part (if it exists) is of the form \(p_{i}=p_{i+1}\), where \(i\) is odd.
Therefore the formula for \(\gamma(\widehat{\mathbb{O}})\) in Proposition 5.43 simplifies to \[\rho^{+}(f_{1}(p))=\rho^{+}(f_{1}(\nu_{0}\cup l(\eta_{0})_{C}))=\rho^{+}(\nu_{0}^{\uparrow}\cup\eta_{0}),\] where the last equality follows from Definition 5.31. The arguments for types \(B\) and \(D\) are similar.

## 6. Proofs of main results in classical types

### Proof of Proposition 4.3

**Proposition 6.1**.: _Suppose \(G\) is a simple adjoint group of classical type. Then the following are true:_

1. _Suppose_ \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\) _is a special distinguished Lusztig-Achar datum. Let_ \(\mathbb{O}=d_{S}(\mathbb{O}^{\vee},\bar{C})\) _and let_ \(\widetilde{\mathbb{O}}\) _denote the Lusztig cover of_ \(\mathbb{O}\) _(see Definition_ 4.1_). Then_ \(\widetilde{\mathbb{O}}\) _is birationally rigid._
2. _The map_ \(d_{S}:\mathsf{LA}^{*}(G^{\vee})\to\mathsf{Orb}(G)\) _is injective when restricted to the set of special distinguished Lusztig-Achar data._

Proof.: First suppose \(\mathfrak{g}^{\vee}=\mathfrak{sl}(n)\). By Proposition 5.21, there is a unique distinguished element in \(\mathsf{LA}^{*}(G^{\vee})\), namely \((\mathbb{O}_{prin},1)\), and \(d_{S}(\mathbb{O}_{prin},1)=d(\mathbb{O}_{prin})=\{0\}\). Now (i) and (ii) are immediate. Next, suppose \(\mathfrak{g}^{\vee}=\mathfrak{so}(2n+1)\) (the other cases are completely analogous and are left to the reader). We will first prove (ii). First, we observe that if \((\mathbb{O}^{\vee},\bar{C})\) is special and distinguished, then \(\mathbb{O}^{\vee}\) must be special. For this, it is sufficient to show that the partition of \(\mathbb{O}^{\vee}\) contains no even parts (any such orbit is even and thus special). Suppose, to the contrary, that the partition of \(\mathbb{O}^{\vee}\) contains an even part \(k\). Then by Proposition 5.7, \(\mathbb{O}^{\vee}\) is saturated from an orbit \(\mathbb{O}^{\vee}_{L^{\vee}}\) in the Levi subalgebra \(\mathfrak{l}^{\vee}=\mathfrak{gl}(k)\times\mathfrak{so}(2n+1-2k)\), and by Corollary 5.19, \(\bar{A}(\mathbb{O}^{\vee})\simeq\bar{A}(\mathbb{O}^{\vee}_{L^{\vee}})\). Thus, \((\mathbb{O}^{\vee},\bar{C})\) is not distinguished, a contradiction. We conclude that \(\mathbb{O}^{\vee}\) is special, as asserted. Now suppose \((\mathbb{O}^{\vee}_{1},\bar{C}_{1})\) and \((\mathbb{O}^{\vee}_{2},\bar{C}_{2})\) are special distinguished Lusztig-Achar data such that \(d_{S}(\mathbb{O}^{\vee}_{1},\bar{C}_{1})=d_{S}(\mathbb{O}^{\vee}_{2},\bar{C}_{2})=\mathbb{O}\). By [1, Remark 14], \(\mathbb{O}\) is in the special piece of both \(d(\mathbb{O}^{\vee}_{1})\) and \(d(\mathbb{O}^{\vee}_{2})\). Since both \(\mathbb{O}^{\vee}_{1}\) and \(\mathbb{O}^{\vee}_{2}\) are special (in the sense of Lusztig), it follows that \(\mathbb{O}^{\vee}_{1}=\mathbb{O}^{\vee}_{2}=d(\mathbb{O})\), and hence by Lemma 2.13 that \(\bar{C}_{1}=\bar{C}_{2}\). Therefore, \((\mathbb{O}^{\vee}_{1},\bar{C}_{1})=(\mathbb{O}^{\vee}_{2},\bar{C}_{2})\). This completes the proof of (ii). We now proceed to proving (i). Let \({}^{\langle\nu\rangle}\lambda\in\overline{\mathcal{P}}_{B}(2n+1)\) be the reduced marked partition corresponding to \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\), cf. Proposition 5.21, and let \(\pi=d_{S}({}^{\langle\nu\rangle}\lambda)\in\mathcal{P}_{C}(2n)\), the partition corresponding to \(\mathbb{O}\). We need to show that \(\widetilde{\mathbb{O}}\) is birationally rigid.
By Proposition 3.23, this is equivalent to showing that \(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}])\) has no codimension \(2\) leaves and \(H^{2}(\widetilde{\mathbb{O}},\mathbb{C})=0\). For the former, we need to check the two conditions in Proposition 5.14. By Lemma 5.34, \(\pi=(\nu_{0}\cup l(\eta_{0})_{C})^{t}\). Therefore columns \(c_{j}\) of \(\pi\) are just rows of the partition \(\pi^{t}=\nu_{0}\cup l(\eta_{0})_{C}\). Since \(\nu_{0}\) and \(\eta_{0}\) are distinguished, it is easy to verify that \(m_{\pi^{t}}(c_{j})\leq 2\) for all \(c_{j}\). Now if \(m_{\pi^{t}}(c_{j})=2\), then \(c_{j}\in l(\eta_{0})_{C}\). Since \(\eta_{0}\) has an odd number of members and all members of \(\eta_{0}\) are odd, we deduce that all members of \(l(\eta_{0})_{C}\) are even and so is \(c_{j}\). Moreover, the height of \(c_{j}\) in \(l(\eta_{0})_{C}\) is even. But by the definition of \(\nu_{0}\) and \(\eta_{0}\) (Definition 5.32), the height of any \(x\in l(\eta_{0})_{C}\) in \(l(\eta_{0})_{C}\) and the height of \(x\) in \(\pi^{t}\) have the same parity. This means that \(c_{j}\) is a markable column of \(\pi\) by Remark 5.20, and condition (i) of Proposition 5.14 is satisfied. Condition (ii) also follows immediately from Remark 5.20. Now we check that \(H^{2}(\widetilde{\mathbb{O}},\mathbb{C})=0\) using Lemma 5.15. Assume that \(c_{j}=c_{k}^{\epsilon}\) is a column of \(\pi\) with even height in \(\pi^{t}=\nu_{0}\cup l(\eta_{0})_{C}\), such that \(c_{j}=c_{j-1}+2\). Then any such part \(c_{j}=c_{k}^{\epsilon}\) must belong to \(\nu_{0}\), and hence is odd. This is because \(\eta_{0}\) is distinguished, so if \(c_{j}\in l(\eta_{0})_{C}\), then \(c_{j}=c_{j-1}+2\) would imply that \(c_{j}\) is of odd height in \(\pi^{t}\), which yields a contradiction. Therefore \(c_{k}^{\epsilon}\) is not a markable column for \(\pi\). As in Remark 5.20, assume that the markable columns of \(\pi\) are \((c_{l}^{m}>c_{l-1}^{m}>\cdots>c_{1}^{m}>0)\) and \(c_{q-1}^{m}<c_{j}=c_{k}^{\epsilon}<c_{q}^{m}\) for some \(1\leq q<l+1\). Then we can take \(\theta=\vartheta_{c_{k}^{\epsilon}}\vartheta_{c_{q}^{m}}\in N\), which fulfills the condition in Lemma 5.15. This finishes the proof that \(H^{2}(\widetilde{\mathbb{O}},\mathbb{C})=0\).

### Proof of Theorem 4.8

Since \((\mathbb{O}^{\vee},\bar{C})\) is distinguished, the set \(S(\mathbb{O}^{\vee},\bar{C})=\mathsf{LA}^{-1}(\mathbb{O}^{\vee},\bar{C})\subset\mathfrak{h}_{\mathbb{R}}^{*}\) is already \(W\)-invariant and therefore can be regarded as a subset of \(\mathfrak{h}^{*}/W\). Now Theorem 4.8 for classical types will follow from Lemma 5.44 and the following result (for the definition of \(\nu_{0}\) and \(\eta_{0}\), see Definition 5.32).

**Theorem 6.2**.: _Suppose \(G\) is a simple group of classical type. Let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\) be a distinguished Lusztig-Achar datum corresponding to a reduced marked partition \({}^{\langle\nu\rangle}\lambda\). Then \(\rho^{+}(\nu_{0}^{\uparrow}\cup\eta_{0})\) is the unique minimal-length \(W\)-orbit in \(S(\mathbb{O}^{\vee},\bar{C})\)._

The remainder of this section is dedicated to the proof of Theorem 6.2. First of all, recall that a Lusztig-Achar datum \((\mathbb{O}^{\vee},\bar{C})\) is specified by indicating the corresponding equivalence class \(\{(M_{1}^{\vee},\mathbb{O}_{M_{1}^{\vee}}),...,(M_{k}^{\vee},\mathbb{O}_{M_{k}^{\vee}})\}\) of Sommers data, see Lemma 2.18 and the discussion preceding it.
By Lemma 2.18(iii), \((\mathbb{O}^{\vee},\bar{C})\) is distinguished if and only if one (equivalently, all) of the pseudo-Levi subgroups \(M_{1}^{\vee},...,M_{k}^{\vee}\) is of maximal semisimple rank. In this case, \((M_{i}^{\vee},\mathbb{O}_{M_{i}^{\vee}})\) correspond via the bijection \(\mathsf{Som}(G^{\vee})\xrightarrow{\sim}\mathsf{Conj}(G^{\vee}_{ad})\) to the distinguished pairs \((\mathbb{O}^{\vee},C_{i})\) in the preimage of \((\mathbb{O}^{\vee},\bar{C})\) under the projection \(\mathsf{Conj}(G^{\vee}_{ad})\twoheadrightarrow\mathsf{LA}(G^{\vee})\). For any distinguished pair \((\mathbb{O}^{\vee},C)\in\mathsf{Conj}(G^{\vee}_{ad})\), set \(S(\mathbb{O}^{\vee},C)\subset\mathfrak{h}^{*}\) to be the preimage of \((\mathbb{O}^{\vee},C)\) under the composition \[\mathfrak{h}^{*}\xrightarrow{\mathsf{Som}}\mathsf{Som}(G^{\vee}_{ad})\xrightarrow{\sim}\mathsf{Conj}(G^{\vee}_{ad}),\] where the map \(\mathsf{Som}\) is given by \[\mathsf{Som}:\mathfrak{h}^{*}\twoheadrightarrow\mathsf{Som}(G^{\vee}_{ad}),\quad\gamma\mapsto(L^{\vee}_{\gamma},\operatorname{Ind}_{L^{\vee}_{\gamma,0}}^{L^{\vee}_{\gamma}}\{0\}),\] where \(L^{\vee}_{\gamma,0}=Z_{G^{\vee}}(\gamma)\) and \(L^{\vee}_{\gamma}=Z_{G^{\vee}}(\exp(2\pi i\gamma))^{\circ}\). Then \(S(\mathbb{O}^{\vee},\bar{C})\) is the union of all \(S(\mathbb{O}^{\vee},C_{i})\), where \((\mathbb{O}^{\vee},C_{i})\) are all the lifts of \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\) in \(\mathsf{Conj}(G^{\vee}_{ad})\).

Now fix a Cartan subalgebra \(\mathfrak{h}^{\vee}\simeq\mathfrak{h}^{*}\) of \(\mathfrak{g}^{\vee}\), the root system \(\Delta^{\vee}\), and the coordinates \(\{e_{i}\}\) as in Section 5.3. For a distinguished pair \((\mathbb{O}^{\vee},C)\in\mathsf{Conj}(G^{\vee}_{ad})\), let \((L^{\vee},\mathbb{O}^{\vee}_{\mathfrak{l}^{\vee}})\) denote the corresponding element in \(\mathsf{Som}(G^{\vee})\), where \(L^{\vee}\) has Lie algebra \(\mathfrak{l}^{\vee}\). Then by Corollary 5.24, \(L^{\vee}\) is of maximal semisimple rank and so we can assume that \(\mathfrak{l}^{\vee}\) is of the form \(\mathfrak{l}^{\vee}_{1}\times\mathfrak{l}^{\vee}_{2}\) as in Section 5.4, where the \(\mathfrak{l}^{\vee}_{i}\) are simple classical Lie algebras, \(i=1,2\), so that the Cartan subalgebra \(\mathfrak{h}_{1}\simeq\mathbb{C}^{k}\) of \(\mathfrak{l}^{\vee}_{1}\) has coordinates \(\{e_{1},e_{2},\ldots,e_{k}\}\), and the Cartan subalgebra \(\mathfrak{h}_{2}\simeq\mathbb{C}^{n-k}\) of \(\mathfrak{l}^{\vee}_{2}\) has coordinates \(\{e_{k+1},e_{k+2},\ldots,e_{n}\}\). Moreover, we can write \(\mathbb{O}^{\vee}_{\mathfrak{l}^{\vee}}=\mathbb{O}^{\vee}_{\lambda^{1}}\times\mathbb{O}^{\vee}_{\lambda^{2}}\), where \(\mathbb{O}^{\vee}_{\lambda^{i}}\) is a distinguished orbit in \(\mathfrak{l}^{\vee}_{i}\) whose corresponding partition is denoted by \(\lambda^{i}\), \(i=1,2\). We also write \(\nu=\lambda^{1}\), \(\eta=\lambda^{2}\), so that \((\mathbb{O}^{\vee},C)\) corresponds to the marked partition \({}^{\langle\nu\rangle}\lambda\). By the discussions above, for an element \(\gamma\in\mathfrak{h}^{*}\) to lie in \(S(\mathbb{O}^{\vee},C)\), we need \(\mathfrak{l}^{\vee}_{\gamma}=\mathfrak{l}^{\vee}\) possibly after some \(W\)-conjugation, and additionally \(\operatorname{Ind}_{L^{\vee}_{\gamma,0}}^{L^{\vee}_{\gamma}}\{0\}=\mathbb{O}^{\vee}_{\mathfrak{l}^{\vee}}\).
Set \[\gamma_{1}:=(e_{1}(\gamma),e_{2}(\gamma),\ldots,e_{k}(\gamma))\in\mathfrak{h}_{1}^{\vee}\quad\text{and}\quad\gamma_{2}:=(e_{k+1}(\gamma),e_{k+2}(\gamma),\ldots,e_{n}(\gamma))\in\mathfrak{h}_{2}^{\vee}.\] Recall that \(\gamma\) must additionally satisfy \(\operatorname{Ind}_{L^{\vee}_{\gamma,0}}^{L^{\vee}_{\gamma}}\{0\}=\mathbb{O}^{\vee}_{\mathfrak{l}^{\vee}}\). Let \(\mathfrak{l}^{\vee}_{\gamma_{i},0}\subset\mathfrak{l}^{\vee}_{i}\) be the Levi subalgebra of \(\mathfrak{l}^{\vee}_{i}\) determined by the singular datum of \(\gamma_{i}\), for \(i=1,2\). We have a decomposition \(\mathfrak{l}^{\vee}_{\gamma,0}=\mathfrak{l}^{\vee}_{\gamma_{1},0}\times\mathfrak{l}^{\vee}_{\gamma_{2},0}\). Now the condition \(\operatorname{Ind}_{L^{\vee}_{\gamma,0}}^{L^{\vee}_{\gamma}}\{0\}=\mathbb{O}^{\vee}_{\mathfrak{l}^{\vee}}\) amounts to \(\operatorname{Ind}_{L^{\vee}_{\gamma_{i},0}}^{L^{\vee}_{i}}\{0\}=\mathbb{O}^{\vee}_{\lambda^{i}}\) for \(i=1,2\). We now examine separately the cases of types \(B\), \(C\), and \(D\).

1. Type \(B_{n}\): \(\mathfrak{g}^{\vee}=\mathfrak{so}(2n+1)\) (\(n\geq 3\)). It is easy to see that the condition \(\mathfrak{l}^{\vee}_{\gamma}=\mathfrak{l}^{\vee}\) is equivalent to the conditions that \(e_{i}(\gamma)\in\frac{1}{2}+\mathbb{Z}\) for \(1\leq i\leq k\) and \(e_{i}(\gamma)\in\mathbb{Z}\) for \(k+1\leq i\leq n\). Therefore, \(S(\mathbb{O}^{\vee},C)\) coincides with the image of the subset \[S_{0}({}^{\langle\nu\rangle}\lambda):=W\cdot\left\{\gamma=(\gamma_{1},\gamma_{2})\in\mathfrak{h}^{\vee}\mid\mathrm{Ind}^{L^{\vee}_{i}}_{L^{\vee}_{\gamma_{i},0}}\{0\}=\mathbb{O}^{\vee}_{\lambda^{i}},\gamma_{1}\in\left(\frac{1}{2}+\mathbb{Z}\right)^{k},\gamma_{2}\in\mathbb{Z}^{n-k}\right\}, \tag{6.2.1}\] under \(\mathfrak{h}^{*}\to\mathfrak{h}^{*}/W\).
2. Type \(D_{n}\): \(\mathfrak{g}^{\vee}=\mathfrak{so}(2n)\) (\(n\geq 4\)). The condition that \(\mathfrak{l}^{\vee}_{\gamma}=\mathfrak{l}^{\vee}\) is equivalent to requiring either \(\gamma_{1}\in(\frac{1}{2}+\mathbb{Z})^{k}\) and \(\gamma_{2}\in\mathbb{Z}^{n-k}\), or \(\gamma_{1}\in\mathbb{Z}^{k}\) and \(\gamma_{2}\in(\frac{1}{2}+\mathbb{Z})^{n-k}\). Note that by switching the roles of \(\nu\) and \(\eta\), we can reduce the latter case to the former. If we still define \(S_{0}({}^{\langle\nu\rangle}\lambda)\) as in (6.2.1), then \(S(\mathbb{O}^{\vee},C)\) is the union of \(S_{0}({}^{\langle\nu\rangle}\lambda)\) and \(S_{0}({}^{\langle\eta\rangle}\lambda)\).
3. Type \(C_{n}\): \(\mathfrak{g}^{\vee}=\mathfrak{sp}(2n)\) (\(n\geq 2\)). This is similar to the case of type \(D\), but this time we only consider the condition that \(\gamma_{1}\in\mathbb{Z}^{k}\) and \(\gamma_{2}\in(\frac{1}{2}+\mathbb{Z})^{n-k}\), for notational convenience.

To treat all types uniformly, write \(k_{1}=k\), \(k_{2}=n-k\), and define the set \[S_{\epsilon}({}^{\langle\nu\rangle}\lambda):=W\cdot\left\{\gamma=(\gamma_{1},\gamma_{2})\in\mathfrak{h}^{\vee}\mid\mathrm{Ind}^{L^{\vee}_{i}}_{L^{\vee}_{\gamma_{i},0}}\{0\}=\mathbb{O}^{\vee}_{\lambda^{i}},\gamma_{i}\in\left(\frac{\epsilon+i}{2}+\mathbb{Z}\right)^{k_{i}},i=1,2\right\} \tag{6.2.2}\] for \(\epsilon\in\{0,1\}\), and for any marked partition \({}^{\langle\nu\rangle}\lambda\in\tilde{\mathcal{P}}_{\epsilon}(m)\). The discussion above amounts to a proof of the following lemma.
**Lemma 6.3**.: _The set \(S(\mathbb{O}^{\vee},\bar{C})\) is the union of the \(S_{\epsilon}({}^{\langle\nu\rangle}\lambda)\), where \({}^{\langle\nu\rangle}\lambda\) runs over all marked partitions in \(\tilde{\mathcal{P}}_{\epsilon}(m)\) which are mapped to \((\mathbb{O}^{\vee},\bar{C})\)._

Therefore we should first find elements of minimal length in each set \(S_{\epsilon}({}^{\langle\nu\rangle}\lambda)\). Before stating the result, we first introduce some notations.

**Definition 6.4**.: _For any partition \(p=[p_{1},p_{2},\ldots,p_{l}]\) and any \(0\leq k\leq l\), set \(p_{\leq k}:=[p_{1},\ldots,p_{k}]\) to be the subpartition of \(p\) consisting of the first \(k\) rows and set \(p_{>k}:=[p_{k+1},\ldots,p_{l}]\) to be the subpartition of \(p\) consisting of the last \(l-k\) rows. Define \(p^{-1}:=[p_{1}-1,p_{2}-1,\ldots,p_{l}-1]\), i.e., the partition obtained from \(p\) by subtracting \(1\) from each part._

**Proposition 6.5**.: _For any marked partition \({}^{\langle\nu\rangle}\lambda\) such that \(\lambda\in\mathcal{P}_{\epsilon}(m)\), the sequence \(\rho^{+}(\nu^{\uparrow}\cup\eta)\) gives an element of minimal length in the set \(S_{\epsilon}({}^{\langle\nu\rangle}\lambda)\). When \(\mathfrak{g}^{\vee}\) is of type \(B\) or \(C\), any element of minimal length in \(S_{\epsilon}({}^{\langle\nu\rangle}\lambda)\) is \(W\)-conjugate to \(\rho^{+}(\nu^{\uparrow}\cup\eta)\). When \(\mathfrak{g}^{\vee}\) is of type \(D\), this is also true except for the case when \(\eta=\emptyset\). In that case, there are exactly two \(W\)-orbits with minimal length._

Proof.: We treat each type separately.

**Types \(B\) and \(D\).** Suppose \(\mathfrak{g}^{\vee}=\mathfrak{so}(2n+\delta)\), where \(2n+\delta\geq 7\) and \(\delta=0\) or \(1\) depending on whether \(\mathfrak{g}^{\vee}\) is of type \(D\) or \(B\). We adopt the same notations introduced before Lemma 6.3. We can choose \(\gamma_{1}\) and \(\gamma_{2}\) independently to minimize the length of \(\gamma=(\gamma_{1},\gamma_{2})\). By \(L^{\vee}_{i}\)-conjugation we can assume that \(\mathfrak{l}^{\vee}_{\gamma_{i},0}\) is a standard Levi subalgebra in \(\mathfrak{l}^{\vee}_{i}\) with respect to the choice of simple roots mentioned above, for \(i=1,2\). The subalgebra \(\mathfrak{l}^{\vee}_{1}\) is of type \(D\). Assume \(\mathfrak{l}^{\vee}_{\gamma_{1},0}=(\mathfrak{l}^{\vee}_{1})_{J_{1}}\), where \(J_{1}\subset I_{1}\). If \(J_{1}\) contains both the positive root \(e_{1}-e_{2}\) and the lowest root \(-e_{1}-e_{2}\), then \(e_{1}(\gamma)=e_{2}(\gamma)=0\), which contradicts the condition \(e_{1}(\gamma)\in\frac{1}{2}+\mathbb{Z}\). Therefore \(J_{1}\) can contain at most one of the two roots \(e_{1}-e_{2}\) and \(-e_{1}-e_{2}\), and hence \(\mathfrak{l}^{\vee}_{\gamma_{1},0}\) is isomorphic to a product of factors of type \(A\). First consider the case when the lowest root \(-e_{1}-e_{2}\) is not contained in \(J_{1}\). In this case, \(\mathfrak{l}^{\vee}_{\gamma_{1},0}\) is uniquely determined by a partition \(q_{1},q_{2},\ldots,q_{l}\) of \(k\), such that \(J_{1}=I_{1}\backslash\bigcup_{i=1}^{l-1}\{e_{b_{i}}-e_{b_{i}+1}\}\), where \(b_{i}=\sum_{j=1}^{i}q_{j}\) (note \(b_{0}=0\)). Then \(\mathfrak{l}^{\vee}_{\gamma_{1},0}\simeq\mathfrak{gl}(q_{1})\times\mathfrak{gl}(q_{2})\times\cdots\times\mathfrak{gl}(q_{l})\). By permuting the coordinates \(e_{1},\ldots,e_{k}\) (this corresponds to conjugating \(\mathfrak{l}^{\vee}_{\gamma_{1},0}\) by \(L^{\vee}_{1}\)), we may assume that the sequence \(q_{1},\ldots,q_{l}\) is non-increasing.
Define the partition \(q=(q_{1},q_{1},q_{2},q_{2},\ldots,q_{l},q_{l})\) in terms of columns. Then the condition \(\mathbb{O}^{\vee}_{\lambda^{1}}=\mathbb{O}^{\vee}_{\nu}=\mathrm{Ind}^{L^{\vee}_{1}}_{L^{\vee}_{\gamma_{1},0}}\{0\}\) just means that \(\nu=q_{D}\), the \(D\)-collapse of \(q\). Since \(\nu\) is a distinguished partition with all odd members and \(\#\nu\) is even, it is not hard to see that \(q=\nu^{\uparrow}\). The constraint on \(\gamma_{1}\) determined by \(\mathfrak{l}^{\vee}_{\gamma_{1},0}\) consists of a family of equalities \[e_{b_{i}+1}(\gamma)=e_{b_{i}+2}(\gamma)=\cdots=e_{b_{i+1}}(\gamma),\quad 0\leq i\leq l-1.\] It is not hard to see that all \(\gamma_{1}\) of minimal length that satisfy these equalities are of the form \[\bigg(\underbrace{\frac{1}{2}\epsilon_{1},\ldots,\frac{1}{2}\epsilon_{1}}_{q_{1}},\underbrace{\frac{3}{2}\epsilon_{2},\ldots,\frac{3}{2}\epsilon_{2}}_{q_{2}},\ldots,\underbrace{\frac{2l-1}{2}\epsilon_{l},\ldots,\frac{2l-1}{2}\epsilon_{l}}_{q_{l}}\bigg), \tag{6.2.3}\] where \(\epsilon_{i}=\pm 1\) for \(1\leq i\leq l\). After signed permutations, they become \[\gamma_{1}^{+}=\bigg(\underbrace{\frac{1}{2},\ldots,\frac{1}{2}}_{q_{1}},\underbrace{\frac{3}{2},\ldots,\frac{3}{2}}_{q_{2}},\ldots,\underbrace{\frac{2l-1}{2},\ldots,\frac{2l-1}{2}}_{q_{l}}\bigg). \tag{6.2.4}\] It is straightforward to check that (6.2.4) coincides with \(\rho^{+}(q)=\rho^{+}(\nu^{\uparrow})\). When the lowest root \(-e_{1}-e_{2}\) is contained in \(J_{1}\), \(\gamma_{1}\) can be obtained from (6.2.3) by multiplying the first coordinate by \(-1\). The remaining discussions are the same as above, and again any allowed \(\gamma_{1}\) with minimal length can be changed to \(\rho^{+}(\nu^{\uparrow})\) by a signed permutation.

Next we determine \(\gamma_{2}\). Since \(\mathfrak{l}^{\vee}_{2}\simeq\mathfrak{so}(2n-2k+\delta)\), we may assume \(\mathfrak{l}^{\vee}_{\gamma_{2},0}\simeq\mathfrak{so}(2t+\delta)\times\mathfrak{gl}(a_{1})\times\mathfrak{gl}(a_{2})\times\cdots\times\mathfrak{gl}(a_{d})\), where \(0\leq t\leq n-k\) and \(a_{1},\ldots,a_{d}\) is a non-increasing sequence of positive integers. Define the partition \(q:=(a_{1},a_{1},a_{2},a_{2},\ldots,a_{d},a_{d})\) in terms of columns. Adding one column of length \(2t+\delta\) gives the partition \(\tilde{q}=q\vee(2t+\delta)\). Then the condition \(\mathbb{O}^{\vee}_{\lambda^{2}}=\mathbb{O}^{\vee}_{\eta}=\mathrm{Ind}^{L^{\vee}_{2}}_{L^{\vee}_{\gamma_{2},0}}\{0\}\) just means that \(\eta\) is the \(B\)- (or \(D\)-)collapse of \(\tilde{q}\) and \(2t+\delta\leq\#\eta\), when \(\mathfrak{g}^{\vee}\) is of type \(B\) (or \(D\)). Set \(\eta_{(2t+\delta)}:=(\eta_{\leq 2t+\delta})^{-1}\cup\eta_{>2t+\delta}^{\uparrow}\) (see Definition 6.4). Since \(\eta\) is distinguished with all odd members, \(\eta_{(2t+\delta)}\) has only even members, and it is easy to see that \(\tilde{q}_{B}=\eta\) (or \(\tilde{q}_{D}=\eta\)) implies that \(\eta_{(2t+\delta)}=q\); hence the \(a_{i}\)'s are uniquely determined by \(\eta\) and \(t\). Analogous to the case of \(\gamma_{1}\), for fixed \(t\), the \(\gamma_{2}\) that minimizes the length can be taken as \[\gamma_{2}^{+}=\big(\underbrace{d,\ldots,d}_{a_{d}},\ldots,\underbrace{2,\ldots,2}_{a_{2}},\underbrace{1,\ldots,1}_{a_{1}},0,\ldots,0\big)\] and any other choices can be obtained from \(\gamma_{2}^{+}\) by signed permutations. Clearly taking \(t\) to be the maximal value \(\frac{1}{2}(\#\eta-\delta)\) gives the minimal length, so that \(\gamma_{2}^{+}=\rho^{+}(\eta)\).
Any other choice of \(t\) would give strictly greater length. In this case \(q=\eta_{(\#\eta)}=\eta^{-1}\) and \(\tilde{q}=\eta\), hence the induction is birational, i.e., \(\mathbb{O}^{\vee}_{\lambda^{2}}=\mathbb{O}^{\vee}_{\eta}=\mathrm{Bind}^{L^{ \vee}_{2}}_{L^{\vee}_{\gamma_{2},0}}\{0\}\) by Proposition 5.9. We conclude that any \(\gamma\) of minimal length can be changed to \((\gamma_{1}^{+},\gamma_{2}^{+})\) by signed permutations, which in turn is equivalent to \(\rho^{+}(\nu^{\uparrow})\cup\rho^{+}(\eta)\) up to permutations. The latter equals \(\rho^{+}(\nu^{\uparrow}\cup\eta)\) by Remark 5.40. When \(\mathfrak{g}^{\vee}\) is of type \(B\), this implies that all minimal elements are \(W\)-conjugate to each other. When \(\mathfrak{g}^{\vee}\) is of type \(D\), the Weyl group consists of only signed permuations that have even number of sign changes. When \(\eta\neq\emptyset\), \(\gamma_{2}\) always have at least one coordinate equal to \(0\) and hence we have the same conclusion. When \(\eta=\emptyset\) so that \(\nu=\lambda\), there are exactly two \(W\)-conjugacy classes of elements with minimal length, one contains \(\rho^{+}(\lambda^{\uparrow})\) and the other one contains \(\rho^{+}(\lambda^{\uparrow})\) with the first (or any) coordinate changed to its negative1. Footnote 1: They become the same orbit if we consider the disconnected Weyl group of \(O(2n)\) **Type \(C\).** Let \(\mathfrak{g}^{\vee}=\mathfrak{sp}(2n)\). The condition that \(e_{n}(\gamma)\in\frac{1}{2}+\mathbb{Z}\) forces that \(\mathrm{I}^{\vee}_{\gamma_{2},0}\simeq\mathfrak{gl}(q_{1})\times\mathfrak{gl} (q_{2})\times\cdots\times\mathfrak{gl}(q_{l})\) without any factor of type \(C\), where the sequence \(q_{1},q_{2},\ldots,q_{l}\) is a partition of \(n-k\). As in the discussions about \(\gamma_{1}\) in type \(B\) and \(D\), we can assume \(q_{1},q_{2},\ldots,q_{l}\) is non-increasing after permutation. Define the partition \(q:=(q_{1},q_{1},q_{2},q_{2},\ldots,q_{l},q_{l})\) in terms of columns, then the condition \(\mathbb{O}^{\vee}_{\lambda^{2}}=\mathbb{O}^{\vee}_{\eta}=\mathrm{Ind}^{L^{ \vee}_{\gamma_{2},0}}_{L^{\vee}_{\gamma_{2},0}}\{0\}\) means that \(\eta=q\) and hence the induction is birational, i.e., \(\mathbb{O}^{\vee}_{\lambda^{2}}=\mathbb{O}^{\vee}_{\eta}=\mathrm{Bind}^{L^{ \vee}_{\gamma_{2},0}}_{L^{\vee}_{\gamma_{2},0}}\{0\}\) by Proposition 5.9. Therefore \[\gamma_{2}^{+}=\rho^{+}(\eta)=\bigg{(}\,\underbrace{1}_{q_{1}},\underbrace{3} _{q_{2}},\ldots,\underbrace{3}_{q_{2}},\ldots,\underbrace{\frac{2l-1}{2}, \ldots,\frac{2l-1}{2}}_{q_{l}}\,\bigg{)}\] achieve minimal length and all other choices differ from this one by a signed permutation. Next we determine \(\gamma_{1}\). Since \(\mathrm{I}^{\vee}_{1}\simeq\mathfrak{sp}(2k)\), we may assume \(\mathrm{I}^{\vee}_{\gamma_{2},0}\simeq\mathfrak{sp}(2t)\times\mathfrak{gl}(a_ {1})\times\mathfrak{gl}(a_{2})\times\cdots\times\mathfrak{gl}(a_{d})\), where \(0\leq t\leq n-k\) and \(a_{1},\ldots,a_{d}\) is a non-increasing sequence of positive integers. Define the partition \(q:=(a_{1},a_{1},a_{2},a_{2},\ldots,a_{d},a_{d})\) in terms of columns. Add one column of length \(2t\) gives the partition \(\tilde{q}=q\vee(2t)\). Then the condition \(\mathbb{O}^{\vee}_{\lambda^{1}}=\mathbb{O}^{\vee}_{\nu}=\mathrm{Ind}^{L^{ \vee}_{\gamma_{1},0}}_{L^{\vee}_{\gamma_{1},0}}\{0\}\) just means that \(\nu=\tilde{q}_{C}\) and \(2t\leq\#\nu\). Set \(\nu^{(2t)}:=[(\nu_{\leq 2t})^{-1}]^{\dagger}\cup\nu_{>2t}\). 
Since \(\nu\) is distinguished with all even members, \(\nu^{(2t)}\) has only even members and it is easy to see that \(\tilde{q}_{C}=\nu\) implies that \(\nu^{(2t)}=q\) and hence \(a_{i}\)'s are uniquely determined by \(\nu\) and \(t\). With fixed \(t\), the choice of \(\gamma_{1}\) which minimizes length is \[\gamma_{1}^{+}=\big{(}\underbrace{d,\ldots,d}_{a_{d}},\ldots,\underbrace{2, \ldots,2}_{a_{2}},1,\ldots,1,0,\ldots,0\,\big{)}\] Clearly taking \(t\) to be the maximal value \(\lfloor\frac{1}{2}\#\nu\rfloor\) gives the minimal length and \(\gamma_{1}^{+}=\rho^{+}(\nu^{\uparrow})\). Any other choice of \(t\) would give strictly greater length. Therefore \(\gamma=(\gamma_{1}^{+},\gamma_{2}^{+})\) achieves the minimal length, which is equivalent to \(\rho^{+}(\nu^{\uparrow}\cup\eta)\) up to permutations. Like in the case of type \(B\), any other choice with minimal length is \(W\)-conjugate to \(\rho^{+}(\nu^{\uparrow}\cup\eta)\). We introduce more notation before continuing with the proof of Theorem 6.2. **Definition 6.6**.: _Let \(q=[q_{1},q_{2}]\) be a partition where \(q_{1}\geq q_{2}\geq 0\) and \(q_{1}>0\) (but \(q_{2}\) can be zero). Define \(q^{\uparrow}:=[q_{1}+1,\max(q_{2}-1,0)]\)._ Proof of Theorem 6.2.: Recall that by Proposition 5.2, there is a surjective composite map \(A^{\epsilon}\simeq A(\mathbb{O}^{\vee})\twoheadrightarrow\bar{A}(\mathbb{O}^{ \vee})\) of maps. Let \(\tilde{C}\) be any lift of \(\tilde{C}\) in \(A^{\epsilon}\subset A\). As usual, by decomposing \(\tilde{C}\) uniquely as a product of \(\upsilon_{\lambda_{i}}\), we can associate to \(\tilde{C}\) a marked partition \({}^{\langle\nu\rangle}\lambda\in\tilde{\mathcal{P}}_{\epsilon}(m)\) with \(\lambda=\nu\cup\eta\). By Lemma 6.3 and Proposition 6.5, it suffices to show that, \(\|\rho^{+}(\nu^{\uparrow}\cup\eta)\|>\lambda=\nu\cup\eta\). \(\|\rho^{+}(\nu_{0}^{\uparrow}\cup\eta_{0})\|\) for any \(\tilde{C}\) different from \(C_{0}\) (see Definition 5.32 for the definition of \(C_{0}\), \(\nu_{0}\) and \(\eta_{0}\)). By the description of the kernel \(N\) of the quotient map \(A^{\epsilon}\twoheadrightarrow\bar{A}(\mathbb{O}^{\vee})\) in Proposition 5.28, we see that \(\tilde{C}\) can be written as the product of \(C_{0}\) and the element \(\prod_{i=1}^{k}\zeta_{i}^{\epsilon_{i}}\), where \(\epsilon_{i}=0\) or \(1\), and \(\zeta_{i}:=\upsilon_{\lambda_{2i}}\upsilon_{\lambda_{2i+1}}\) (resp. \(\zeta_{i}:=\upsilon_{\lambda_{2i-1}}\upsilon_{\lambda_{2i}}\)) depending on whether \(\mathfrak{g}^{\vee}\) is of type \(B\) (resp. \(C/D\)), and \(k=\frac{1}{2}(\#\lambda-1)\) (resp. \(\frac{1}{2}\#\lambda\), \([\frac{1}{2}\#\lambda]\)) depending on whether \(\mathfrak{g}^{\vee}\) is of type \(B\) (resp. \(C/D\)). Set \[C_{j}:=C_{0}\prod_{1\leq i\leq j}\zeta_{i}^{\epsilon_{i}},\quad 0\leq j\leq k,\] then \(C_{k}=\tilde{C}\) and \(C_{j}=C_{j-1}\zeta_{j}^{\epsilon_{i}}\). Let \(\nicefrac{{\langle\nu_{j}\rangle}}{{\lambda}}\) with \(\lambda=\nu_{j}\cup\eta_{j}\) denote the marked partition associated to \(C_{j}\). We can divide the change from \(C_{0}\) to \(C_{k}\) into several steps, each time going from \(C_{j-1}\) to \(C_{j}\) by multiplying \(C_{j-1}\) by \(\zeta_{j}^{\epsilon_{j}}\). If either \(\epsilon_{j}=0\) or \(\zeta_{j}=1\), which happens exactly when \(\lambda_{2j}=\lambda_{2j+1}\) (resp. \(\lambda_{2j-1}=\lambda_{2j}\)) if \(\mathfrak{g}^{\vee}\) is of type \(B\) (resp. \(C/D\)), then \(C_{j}=C_{j-1}\) and nothing is changed. 
Otherwise, \(C_{j}=C_{j-1}\zeta_{j}\) and \(\zeta_{j}\neq 1\), so we need to look at the effect on the partition \(\tau_{j}:=\nu_{j}^{\uparrow}\cup\eta_{j}\) when \(j\) goes to \(j+1\). There are two cases: _Case 1._ If \(\mathfrak{g}^{\vee}\) is of type \(C\), \(\#\lambda=2k+1\) is odd and \(\tilde{C}=C_{k}=C_{k-1}\zeta_{k}=C_{k-1}\upsilon_{\#\lambda}\), then \(\tau_{j+1}\) can be obtained from \(\tau_{j}\) by adding \(1\) to its last row, hence \(\|\tau_{j+1}\|>\|\tau_{j}\|\). _Case 2._ Otherwise, by the definitions of \(C_{0}\) and \(\zeta_{j}\), \(\tau_{j+1}\) can be obtained from \(\tau_{j}\) by replacing the adjacent two rows \(q=(q_{1},q_{2})\) in \(\tau_{j}\), which correspond to the adjacent rows of \(\lambda\) in the definition of \(\zeta_{j}\), with the two rows of \(q^{\uparrow}\). Now by the elementary Lemma 6.7 below, again we have \(\|\tau_{j+1}\|>\|\tau_{j}\|\). When \(\tilde{C}\neq C_{0}\), at least one multiplication by some nontrivial \(\zeta_{j}\) occurs, therefore \(\rho^{+}(\nu_{0}^{\uparrow}\cup\eta_{0})\) achieves the unique minimal length among all \(\rho^{+}(\nu_{j}^{\uparrow}\cup\eta_{j})\). Note that in the type \(D\) case, \(\eta_{0}\) is never empty by definition. Therefore the statement about uniqueness up to \(W\)-conjugation also follows from Proposition 6.5. **Lemma 6.7**.: _Let \(q=(q_{1},q_{2})\) be a partition with \(q_{1}\geq q_{2}>0\). Then \(\|\rho^{+}(q)\|<\|\rho^{+}(q^{\uparrow})\|\)._ Proof.: First consider the case when \(q_{1}\) and \(q_{2}\) are both even. Write squares of the norms as \[\|\rho^{+}(q)\|^{2}=\sum_{k=1}^{q_{2}/2}\left[\left(\frac{2k-1}{2}\right)^{2}+ \left(\frac{2k-1}{2}\right)^{2}\right]+\sum_{k=1}^{\frac{q_{1}-q_{2}}{2}}\left( \frac{q_{2}+2k-1}{2}\right)^{2}\] and \[\|\rho^{+}(q^{\uparrow})\|^{2}=\sum_{k=1}^{q_{2}/2}\left[(k-1)^{2}+k^{2} \right]+\sum_{k=1}^{\frac{q_{1}-q_{2}}{2}}\left(\frac{q_{2}+2k}{2}\right)^{2}.\] We have \((k-1)^{2}+k^{2}>\left(\frac{2k-1}{2}\right)^{2}+\left(\frac{2k-1}{2}\right)^{2}\) by the Cauchy-Schwartz inequality. Hence \(\|\rho^{+}(q)\|<\|\rho^{+}(q^{\uparrow})\|\). Next suppose \(q_{1}\) and \(q_{2}\) are both odd. Write squares of the norms as \[\|\rho^{+}(q)\|^{2}=\sum_{k=1}^{\frac{q_{2}-1}{2}}\left[k^{2}+k^{2}\right]+ \sum_{k=1}^{\frac{q_{1}-q_{2}}{2}}\left(\frac{q_{2}+2k-1}{2}\right)^{2}\] and \[\|\rho^{+}(q^{\uparrow})\|^{2}=\frac{1}{2}+\sum_{k=1}^{\frac{q_{2}-1}{2}}\left[ \left(\frac{2k-1}{2}\right)^{2}+\left(\frac{2k+1}{2}\right)^{2}\right]+\sum_{k= 1}^{\frac{q_{1}-q_{2}}{2}}\left(\frac{q_{2}+2k}{2}\right)^{2}.\] The same argument as before gives \(\|\rho^{+}(q)\|<\|\rho^{+}(q^{\uparrow})\|\). Finally consider the case when \(q_{1}\not\equiv q_{2}\mod 2\). Write squares of the norms as \[\|\rho^{+}(q)\|^{2}=\sum_{\begin{subarray}{c}1\leqslant j\leqslant q_{2}-2\\ j\equiv q_{2}\mod 2\end{subarray}}\left(\frac{j}{2}\right)^{2}+\sum_{ \begin{subarray}{c}1\leqslant j\leqslant q_{2}-1\\ j\equiv q_{2}-1\mod 2\end{subarray}}\left(\frac{j}{2}\right)^{2}+\sum_{k=1}^{ \frac{q_{1}-q_{2}-1}{2}}\left(\frac{q_{2}+2k}{2}\right)^{2}\] and \[\|\rho^{+}(q^{\uparrow})\|^{2}=\sum_{\begin{subarray}{c}1\leqslant j\leqslant q _{2}-2\\ j\equiv q_{2}\mod 2\end{subarray}}\left(\frac{j}{2}\right)^{2}+\sum_{ \begin{subarray}{c}1\leqslant j\leqslant q_{2}-1\\ j\equiv q_{2}-1\mod 2\end{subarray}}\left(\frac{j}{2}\right)^{2}+\sum_{k=1}^{ \frac{q_{1}-q_{2}-1}{2}}\left(\frac{q_{2}+2k+1}{2}\right)^{2}.\] It is clear that \(\|\rho^{+}(q)\|<\|\rho^{+}(q^{\uparrow})\|\). 
### Proof of Proposition 4.11 First, let \((\mathbb{O}^{\vee},\bar{C})\) be a distinguished pair in \(\mathsf{LA}(G^{\vee})\), and let \((\eta_{0},\nu_{0})\) be as in Definition 5.32. By Theorem 6.2, we have \(\gamma(\mathbb{O}^{\vee},\bar{C})=\rho^{+}(\nu_{0}^{\uparrow}\cup\eta_{0})\). We note that since \(\mathbb{O}^{\vee}\) is distinguished, all members of \(\eta_{0}\) and \(\nu_{0}\) are odd (resp. even, odd) if \(\mathfrak{g}^{\vee}\) is of type \(B\) (resp. \(C\), \(D\)). Thus, all members of \(\nu_{0}^{\uparrow}\) are even (resp. odd, even), and \(\mathfrak{r}^{\vee}=\mathfrak{so}(|\nu_{0}|)\times\mathfrak{so}(|\eta_{0}|)\) (resp. \(\mathfrak{r}^{\vee}=\mathfrak{sp}(|\nu_{0}|)\times\mathfrak{sp}(|\eta_{0}|)\), \(\mathfrak{r}^{\vee}=\mathfrak{so}(|\nu_{0}|)\times\mathfrak{so}(|\eta_{0}|)\)). A direct computation shows that the partitions corresponding to the orbits in the two factors are \((\nu_{0}^{\uparrow})_{D}=\nu_{0}\) (resp. \((\nu_{0}^{\uparrow})_{C}=\nu_{0}\), \((\nu_{0}^{\uparrow})_{D}=\nu_{0}\)) and \(\eta_{0}\) respectively. Now let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\), and let \((L^{\vee},\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})\in\mathsf{LA}_{0}(G^{\vee})\) be the triple corresponding to \((\mathbb{O}^{\vee},\bar{C})\) under the bijection of Proposition 2.17. Note that \(\mathfrak{l}^{\vee}=\prod_{i\in I}\mathfrak{gl}(a_{i})\times\mathfrak{so}(2k+1)\) for a set of integers \(\{a_{i}\in i\in I\}\), and \(\mathbb{O}_{L^{\vee}}=\prod_{i}\mathbb{O}_{[a_{i}]}\times\mathbb{O}_{0}^{\vee}\). Let \(\bar{C}_{0}\) be the image of \(\bar{C}_{L^{\vee}}\) under the isomorphism \(\bar{A}(\mathbb{O}_{L^{\vee}})\simeq\bar{A}(\mathbb{O}_{0}^{\vee})\), and let \((\eta_{0}^{\uparrow},\nu_{0}^{\uparrow})\) correspond to the distinguished pair \((\mathbb{O}_{0}^{\vee},\bar{C}_{0})\) as in Definition 5.32. Let \(I^{0}\) be the set of \(i\in I\) such that \(a_{i}\) is odd (resp. even, odd), and let \(I^{1}=I\backslash I^{0}\). Let \(\eta_{0}=\eta_{0}^{0}\cup\bigcup_{i\in I^{0}}[a_{i},a_{i}]\), and \(\nu_{0}=\nu_{0}^{0}\cup\bigcup_{i\in I^{1}}[a_{i},a_{i}]\). The statement below follows from the discussion above and the definition of \(R^{\vee}\). **Lemma 6.8**.: _In the setting above, the following are true:_ * _If_ \(\mathfrak{g}^{\vee}\) _is of type_ \(B\) _or_ \(D\) _(resp._ \(C\)_), then_ \(\mathfrak{r}^{\vee}=\mathfrak{so}(|\nu_{0}|)\times\mathfrak{so}(|\eta_{0}|)\) _(resp._ \(\mathfrak{r}^{\vee}=\mathfrak{sp}(|\nu_{0}|)\times\mathfrak{sp}(|\eta_{0}|)\)_)._ * \(\mathrm{Sat}_{R_{0}^{\vee}}^{R^{\vee}}\mathbb{O}_{R_{0}^{\vee}}=\mathbb{O}_{ \nu_{0}}\times\mathbb{O}_{\eta_{0}}\)_._ Recall from Definition 5.33 the partition \((\nu_{0}^{0})^{\uparrow}\), and let \(\nu_{0}^{\uparrow}=(\nu_{0}^{0})^{\uparrow}\cup\bigcup_{i\in I^{1}}[a_{i},a_{i}]\). Note that \(\gamma:=\gamma(\mathbb{O}^{\vee},\bar{C})=\gamma(\mathbb{O}_{0}^{\vee},\bar{C} _{0})\cup\rho(a)=\rho^{+}(\nu_{0}^{\uparrow}\cup\eta_{0})\), and \(Z_{\mathfrak{g}}(\gamma)=Z_{\mathfrak{so}(|\nu_{0}|)}(\rho^{+}(\nu_{0}^{\uparrow})) \times Z_{\mathfrak{so}(|\eta_{0}|)}(\rho^{+}(\eta_{0}))\). It remains to show that \(\mathbb{O}_{\nu_{0}}\) and \(\mathbb{O}_{\eta_{0}}\) are the Richardson orbits corresponding to the Levi subalgebras \(Z_{\mathfrak{so}(|\nu_{0}|)}(\rho^{+}(\nu_{0}^{\uparrow}))\) and \(Z_{\mathfrak{so}(|\eta_{0}|)}(\rho^{+}(\eta_{0}))\) respectively. Let \({}^{\langle\nu\rangle}\lambda\) be the reduced marked partition corresponding to \((\mathbb{O}^{\vee},\bar{C})\). 
We first make the following basic observations: assume \(\mathfrak{g}^{\vee}\) is of type \(B\) (resp. \(C\), \(D\)), then * If \(a\) is not a member of \(\nu\), then \(\mathrm{ht}_{\eta_{0}}(a)\) is odd (resp. even, even) if and only if \(\mathrm{ht}_{\nu}(a)\) is even and \(\mathrm{ht}_{\lambda}(a)\) is odd (resp. even, even); * \(\mathrm{ht}_{\nu_{0}}(a)\) is odd if and only if \(\mathrm{ht}_{\nu}(a)\) is odd and \(\mathrm{ht}_{\lambda}(a)\) is odd (resp. even, even). These observations can be deduced by inductive arguments using block decomposition (cf. Section 5.14). Assume that \(\mathfrak{g}^{\vee}\) is of type \(B\), the other types are completely analogous and are left to the reader. Since \((\mathbb{O}^{\vee},\bar{C})\) is special Lusztig-Achar datum, the observation above implies that \(\operatorname{ht}_{\nu_{0}}(a_{i})\) is even for all \(i\in I^{1}\), see Proposition 5.22. It follows that \(\nu_{0}=(\nu_{0}^{\uparrow})_{D}\). Note that \(\nu_{0}^{\uparrow}\) has only even members, and therefore \(Z_{\mathfrak{so}(|\nu_{0}|)}(\rho^{+}(\nu_{0}^{\uparrow}))=\prod\mathfrak{gl} (i)^{k_{i}}\), and \(k_{i}=\frac{1}{2}((\nu_{0}^{\uparrow})_{i}-(\nu_{0}^{\uparrow})_{i+1})\). It follows immediately that the partition of the corresponding Richardson orbit is \(\nu_{0}=(\nu_{0}^{\uparrow})_{D}\). For the second factor, all members of \(\eta_{0}\) are odd, and therefore \(Z_{\mathfrak{so}(|\eta_{0}|)}(\rho^{+}(\eta_{0}))=\prod\mathfrak{gl}(i)^{k_{i }}\times\mathfrak{so}(\#\eta_{0})\), where \(k_{i}=\frac{1}{2}((\eta_{0})_{i}-(\eta_{0})_{i+1})\). The partition of the corresponding Richardson orbit is \(\eta_{0}\). ### Proof of Theorem 4.14 Suppose \(G^{\vee}=G_{\epsilon}(m)\) is a simple classical group of type \(B\), \(C\), or \(D\). Let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\) and form the McNinch-Sommers datum \((R^{\vee},sZ^{\circ},\mathbb{O}_{R^{\vee}}):=\mathbb{L}(\mathbb{O}^{\vee}, \bar{C})\in\mathsf{MS}(G^{\vee})\) as in the paragraph preceding Theorem 4.14. Write \({}^{\langle\nu\rangle}\lambda\) for the reduced marked partition corresponding to \((\mathbb{O}^{\vee},\bar{C})\) and \(\pi\) for the partition corresponding to \(\mathbb{O}=d_{S}(\mathbb{O}^{\vee},\bar{C})\). **Lemma 6.9**.: _Suppose \(G^{\vee}=G_{\epsilon}(m)\) is a simple classical group of type \(B\), \(C\), or \(D\) and let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\). Assume \((\mathbb{O}^{\vee},\bar{C})\) is distinguished. Then there is a group isomorphism_ \[\bar{A}(\mathbb{O}_{R^{\vee}})\simeq A^{ad}(d_{S}(\mathbb{O}^{\vee},\bar{C})).\] Proof.: Recall from Definition 5.32 the subpartitions \(\nu_{0}\) and \(\eta_{0}\) of \(\lambda\) attached to the distinguished pair \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\) corresponding to the marked partition \({}^{\langle\nu\rangle}\lambda\), such that \(\nu_{0}\) is a type \(D\) (resp. \(C\), \(D\)) partition and \(\eta_{0}\) is a type \(B\) (resp. \(C\), \(D\)) partition when \(\mathfrak{g}^{\vee}\) is of type \(B\) (resp. \(C\), \(D\)). Both \(\nu_{0}\) and \(\eta_{0}\) are distinguished. By Lemma 5.44, \(\mathbb{O}_{R^{\vee}}=\mathbb{O}_{\nu_{0}}\times\mathbb{O}_{\eta_{0}}\). Then the claim follows from a straightforward computation using Lemma 5.34 and Corollary 5.19 (see also the paragraph following Definition 5.29). 
To prove Theorem 4.14 for a general \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\), assume that it is saturated from a distinguished pair \((\mathbb{O}^{\vee}_{L^{\vee}},\bar{C}_{L^{\vee}})\in\mathsf{LA}^{*}(L^{\vee})\) for a Levi subgroup \(L^{\vee}\subset G^{\vee}\) with Lie algebra \(\mathbb{I}^{\vee}=\mathfrak{gl}(a_{1})\times\cdots\times\mathfrak{gl}(a_{t}) \times\mathfrak{g}^{\vee}_{\epsilon}(n)\), such that \(\mathbb{O}^{\vee}_{L^{\vee}}=\mathbb{O}_{[a_{1}]}\times\cdots\times\mathbb{O }_{[a_{t}]}\times\mathbb{O}^{\vee}_{0}\). Here if \(\lambda\) and \(\lambda_{0}\) are the partitions corresponding to \(\mathbb{O}^{\vee}\) and \(\mathbb{O}^{\vee}_{0}\) respectively, \(\lambda=\lambda_{0}\cup\bigcup_{i=1}^{t}[a_{i},a_{i}]\). Again form the McNinch-Sommers datum \((R^{\vee}_{0},sZ^{\circ}_{\bar{R}^{\vee}_{0}},\mathbb{O}_{R^{\vee}_{0}}):= \mathbb{L}(\mathbb{O}_{\lambda_{0}},\bar{C}_{0})\in\mathsf{MS}(G_{\epsilon}(n))\) as in the paragraph preceding Theorem 4.14. We have \(\mathbb{O}_{R^{\vee}_{0}}=\mathbb{O}_{\nu^{\theta}_{0}}\times\mathbb{O}_{ \eta^{\theta}_{0}}\) for (distinguished) subpartitions \(\nu^{0}_{0}\) and \(\eta^{0}_{0}\) of \(\lambda_{0}\) as in Definition 5.32. Let \((\nu_{0},\eta_{0})\) be the subpartitions of \(\lambda\) defined as in Section 6.3, so that \(\mathbb{O}_{R^{\vee}}\simeq\mathbb{O}_{\nu_{0}}\times\mathbb{O}_{\eta_{0}}\). Let \(L_{R^{\vee}}\) be the Levi subgroup of \(R^{\vee}\) with the Lie algebra \(\mathsf{I}_{R^{\vee}}=\mathfrak{gl}(a_{1})\times\cdots\times\mathfrak{gl}(a_{t} )\times\mathfrak{r}^{\vee}_{0}\). By Proposition 4.11\(\mathbb{O}_{R^{\vee}}=\operatorname{Sat}_{L_{R^{\vee}}}^{R^{\vee}}\mathbb{O}_{[a_{1 }]}\times\cdots\times\mathbb{O}_{[a_{t}]}\times\mathbb{O}_{R^{\vee}_{0}}\). We will apply an inductive argument to reduce the general case to the distinguished case. For this purpose, it suffices to analyze what happens when we saturate from a maximal Levi subalgebra. Suppose that \((\mathbb{O}^{\vee},\bar{C})=\operatorname{Sat}_{M^{\vee}}^{G^{\vee}}(\mathbb{O} _{M^{\vee}},\bar{C}_{M^{\vee}})\) for a maximal Levi subalgebra \(\mathfrak{m}^{\vee}=\mathfrak{gl}(a)\times\mathfrak{g}^{\vee}_{\epsilon}(n) \subset\mathfrak{g}^{\vee}\). We can always assume that \(\mathbb{O}_{M^{\vee}}=\mathbb{O}_{prin}\times\mathbb{O}^{\vee}\). Let \(\bar{C}\in\bar{A}(\underline{\mathbb{O}}^{\vee})\simeq\bar{A}(\mathbb{O}_{M^{ \vee}})\) be the element corresponding to \(\bar{C}_{M^{\vee}}\in\bar{A}(\mathbb{O}_{M^{\vee}})\). Let \({}^{\langle\nu\rangle}\underline{\lambda}\) be the reduced marked partition corresponding to the pair \((\mathbb{O}^{\vee},\bar{C})\). Define subpartitions \(\underline{\nu}_{0}\) and \(\underline{\eta}_{0}\) of \(\underline{\lambda}\) as above. We note that \(\lambda=\underline{\lambda}\cup[a,a]\). Let \(\overline{\mathbb{O}}=d_{S}(\underline{\mathbb{O}}^{\vee},\bar{C})\), and \(\mathbb{O}=d_{S}(\mathbb{O}^{\vee},\bar{C})\). The following lemma is immediate from Corollary 5.19. **Lemma 6.10**.: _Suppose that \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\). Then \(\bar{A}(\mathbb{O}_{\mathbb{R}^{\vee}})\) is not isomorphic to \(\bar{A}(\mathbb{O}_{\underline{R}^{\vee}})\) if and only if the following conditions are satisfied_ _(1) \(a\) is not a member of \(\underline{\lambda}\);_ _(2) \(a\) is odd (resp. even, odd) if \(\mathfrak{g}^{\vee}\) is of type \(B\) (resp. \(C\), \(D\));_ _(3) \(\operatorname{ht}_{\eta_{0}}(a)\) is odd (resp. even, even) if \(\mathfrak{g}^{\vee}\) is of type \(B\) (resp. 
\(C\), \(D\));_ _(4) If \(\mathfrak{g}^{\vee}\) is of type \(D\), then \(\underline{\eta}_{0}\neq\emptyset\)._ _In this case, \(\bar{A}(\mathbb{O}_{\mathbb{R}^{\vee}})\simeq\bar{A}(\mathbb{O}_{\underline{ R}^{\vee}})\times\mathbb{Z}_{2}\)._ **Lemma 6.11**.: _Suppose that \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\). Then \(\mathbb{O}\) is not birationally induced from \(\{0\}\times\underline{\mathbb{O}}\) if and only if the following conditions are satisfied_ _(1) \(a\) is not a member of \(\underline{\lambda}\)._ _(2) When \(\mathfrak{g}^{\vee}\) is of type \(B\) (resp. \(C\), \(D\)),_ _(i) If \(a\) is odd (resp. even, odd), then \(\operatorname{ht}_{\eta_{0}}(a)\) is odd (resp. even, even); moreover, if \(\mathfrak{g}^{\vee}\) is of type \(D\), then \(\underline{\lambda}\) is not empty or very even;_ _(ii) If \(a\) is even (resp. odd, even), then \(\operatorname{ht}_{\nu_{0}}(a)\) is odd._ _In this case, \(\operatorname{Bind}^{G}_{M}(\{0\}\times\underline{\mathbb{O}})\) is a \(2\)-fold cover of \(\mathbb{O}\)._ Proof.: This follows from an inductive argument using block decompositions (cf. Section 5.14) and Proposition 5.38. We only point out that the condition that \(\underline{\lambda}\) is not empty or very even when \(\mathfrak{g}^{\vee}\) is of type \(D\) in (2.i) is due to Proposition 5.10(iii) and Remark 5.11. **Claim 6.12**.: _Assume that \(\mathfrak{g}^{\vee}\) is of type \(D\). Then \(\underline{\eta}_{0}=\emptyset\) if and only if \(\underline{\lambda}\) is empty or very even (cf. Lemma 6.10 (4))._ Proof.: The "if" part of the claim is clear. For the "only if" part, note that \(\underline{\eta}_{0}=\emptyset\) implies that \(\eta_{0}^{0}=\emptyset\), which forces \(\nu=\emptyset\), since otherwise the largest part of \(\nu\) is of even height in \(\lambda^{0}\) and hence \(\eta_{0}^{0}\) contains at least the largest part of \(\lambda^{0}\). Now \(\nu=\emptyset\) implies that \(\nu_{0}^{0}=\emptyset\), hence \(\underline{\nu}_{0}\) is either empty or consists of pairs of equal parts \([a_{j},a_{j}]\) with \(a_{j}\) even. Therefore \(\underline{\lambda}=\underline{\nu}_{0}\cup\underline{\eta}_{0}=\underline{ \nu}_{0}\) is either empty or very even. Proof of Theorem 4.14 for classical groups.: By observation (b) above and Proposition 5.22, condition (2.ii) in Lemma 6.11 is equivalent to that \((\mathbb{O}^{\vee},\bar{C})\) is not special. Therefore by Lemma 6.10, Lemma 6.11 and Claim 6.12, we can conclude that, for special \((\mathbb{O}^{\vee},\bar{C})\), \(\bar{A}(\mathbb{O}_{\mathbb{R}^{\vee}})\) is not isomorphic to \(\bar{A}(\mathbb{O}_{\underline{R}^{\vee}})\) if and only if \(\mathbb{O}\) is not birationally induced from \(\{0\}\times\mathbb{O}\). Moreover, in this case \(\operatorname{Bind}^{G}_{M}(\{0\}\times\underline{\mathbb{O}})\) is a \(2\)-fold cover of \(\mathbb{O}\), and \(\bar{A}(\mathbb{O}_{\mathbb{R}^{\vee}})\simeq\bar{A}(\mathbb{O}_{\underline{R} ^{\vee}})\times\mathbb{Z}_{2}\). By induction, we can reduce Theorem 4.14 to the case when \((\mathbb{O}^{\vee},\bar{C})\) is distinguished. This case was handled in Lemma 6.9. **Remark 6.13**.: _We note that Theorem 4.14 does not hold if we drop the assumption that \((\mathbb{O}^{\vee},\bar{C})\) is special. For example, let \((\mathbb{O}^{\vee},\bar{C})=^{\langle[5,1]\rangle}[5,4,4,3,1]\) in \(\mathfrak{so}(17)\). Then \((\mathbb{O}^{\vee},\bar{C})\) is saturated from the Lusztig-Achar datum corresponding to \({}^{\langle[5,1]}[5,3,1]\). 
In this case, \(\gamma(\mathbb{O}^{\vee},\bar{C})=(\frac{5}{2},\frac{3}{2},\frac{3}{2},\frac{ 3}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2})\) and \(\mathbb{L}(\mathbb{O}^{\vee},\bar{C})=(SO(16),sZ^{\circ},\mathbb{O}_{[7,4^{2},1]})\). Hence \(\bar{A}(\mathbb{O}_{\mathbb{R}^{\vee}})\simeq 1\). On the other hand, \(D(\mathbb{O}^{\vee},\bar{C})=\operatorname{Bind}^{Sp(16)}_{GL(4)\times Sp(8)} \mathbb{O}_{[2^{3},1^{2}]}\), which is the \(2\)-fold cover of \(\mathbb{O}_{[t^{3},2^{2}]}\). So \(\Gamma(D(\mathbb{O}^{\vee},\bar{C}))\simeq\mathbb{Z}_{2}\)._ ## 7. Proofs of main results in exceptional types In this section, we prove Proposition 4.3, Theorem 4.8, Proposition 4.11, and Theorem 4.14 for \(\mathfrak{g}\) a simple exceptional Lie algebra. We begin by establishing some notational conventions which will remain in place for the remainder of this section: * Nilpotent orbits are denoted using Bala-Carter notation, see [10]. * Infinitesimal characters are represented as dominant elements of \(\mathfrak{h}^{*}\), written in _fundamental weight coordinates_. For example, \(\rho\) is denoted by \((1,...,1)\). * Levi and pseudo-Levi subgroups are denoted by indicating their Lie types. The Lie type of a (pseudo)Levi subgroup \(L\subset G\) does not determine \(L\) uniquely up to conjugacy (or even up to isomorphism). This ambiguity will turn out not to matter. * Lusztig-Achar data are labeled according to the conventions of [11, Section 4] and [1, Section 3]. Namely, a Lusztig-Achar datum \((\mathbb{O}^{\vee},\bar{C})\) is specified by indicating the corresponding equivalence class \(\{(M_{1}^{\vee},\mathbb{O}_{M_{1}^{\vee}}),...,(M_{k}^{\vee},\mathbb{O}_{M_{k }^{\vee}})\}\) of Sommers data, see Lemma 2.18 and the discussion preceding it. By Lemma 2.18(iii), \((\mathbb{O}^{\vee},\bar{C})\) is distinguished if and only if one (equivalently, all) of the pseudo-Levi subgroups \(M_{1}^{\vee},...,M_{k}^{\vee}\) is of maximal semisimple rank. A list of Achar data and special Achar data in simple exceptional types can be found in [1, Section 6]. ### Proof of Proposition 4.3 and Theorem 4.8 In Tables 2-6 below, we list all distinguished special Lusztig-Achar data \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\) in simple exceptional types. As explained above, each such Lusztig-Achar datum is denoted by specifying the corresponding equivalence class of Sommers data. For each such Sommers datum \((M^{\vee},\mathbb{O}_{M^{\vee}})\) in this equivalence class, we record below * The nilpotent orbit \(\mathbb{O}=d_{S}(\mathbb{O}^{\vee},\bar{C})=j_{M}^{G}d(\mathbb{O}_{M^{\vee}} )\in\mathsf{Orb}(G)\). This orbit is computed using the atlas software. * The minimal-length \(W\)-orbit \(\gamma(M^{\vee},\mathbb{O}_{M^{\vee}})\) in the set \[\{\gamma\in\mathfrak{h}_{\mathbb{R}}^{*}/W\mid\mathsf{MS}(\gamma)=(M^{\vee}, tZ^{\circ},\mathbb{O}_{M^{\vee}})\}\] This is computed (and shown to exist) using the method described in Remark 7.1 below. * The minimal-length \(W\)-orbit \(\gamma(\mathbb{O}^{\vee},\bar{C})\) in \[S(\mathbb{O}^{\vee},\bar{C})=\{\gamma\in\mathfrak{h}_{\mathbb{R}}^{*}/W\mid \mathsf{LA}(\gamma)=(\mathbb{O}^{\vee},\bar{C})\}\] By Lemma 2.18(iv), this is simply the minimal-length element among the various \(\gamma(M^{\vee},\mathbb{O}_{M^{\vee}})\) corresponding to \((\mathbb{O}^{\vee},\bar{C})\). * The (Lie type of the) reductive part \(\mathfrak{r}(\mathbb{O})\) of the centralizer of an element in \(\mathbb{O}\). This can be read off of the tables in [10, Section 13.1]. 
* The unipotent infinitesimal character \(\gamma(D(\mathbb{O}^{\vee},\bar{C}))\) attached to the Lusztig cover of \(d_{S}(\mathbb{O}^{\vee},\bar{C})\). This can be read off of the tables in [13, Section 4.3]. By inspection of Tables 2-6, we see that the following are true: * If \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G)\) is special and distinguished, then \(d_{S}(\mathbb{O}^{\vee},\bar{C})\) admits a birationally rigid cover. This follows by comparison with [13, Proposition 3.9.5]. * If \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G)\) is special and distinguished, \(\mathfrak{r}(\mathbb{O})\) is semisimple. * If \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G)\) is special and distinguished and \(\bar{C}\neq 1\), \[\gamma(\mathbb{O}^{\vee},\bar{C})=\gamma(D(\mathbb{O}^{\vee},\bar{C})).\] * \(d_{S}\) is injective when restricted to the set of special and distinguished Lusztig-Achar data. Proof of Proposition 4.3 in exceptional types.: Proposition 4.3(ii) follows at once from observation (iv). We proceed to proving Proposition 4.3(i). Suppose \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}(G^{\vee})\) is special and distinguished. Let \(\mathbb{O}=d_{S}(\mathbb{O}^{\vee},\bar{C})\), and let \(\widehat{\mathbb{O}}_{univ}\) be the universal \(G\)-equivariant cover of \(\mathbb{O}\). In all cases, we have that \(A(\mathbb{O})\simeq\bar{A}(\mathbb{O})\), and therefore \(\widehat{\mathbb{O}}_{Lus}=\widehat{\mathbb{O}}_{univ}\). By observation (i), \(\mathbb{O}\) admits a birationally rigid cover, and by observation (ii) we have \(H^{2}(\widehat{\mathbb{O}}_{univ},\mathbb{C})=0\), see Remark 3.20. These two facts imply that \(\widehat{\mathbb{O}}_{univ}\) is birationally rigid, see [13, Proposition 3.7.1]. Proof of Theorem 4.8 in exceptional types.: This is an immediate consequence of observation (iii). **Remark 7.1**.: _Let \((M^{\vee},\mathbb{O}_{M^{\vee}})\) be a Sommers datum such that \(M^{\vee}\) is of maximal semisimple rank. Here, we describe a finite algorithm for computing the minimal-length \(W\)-orbit \(\gamma(M^{\vee},\mathbb{O}_{M^{\vee}})\) in the set_ \[\{\gamma\in\mathfrak{h}_{\mathbb{R}}^{*}/W\mid\mathsf{MS}(\gamma)=(M^{\vee}, tZ^{\circ},\mathbb{O}_{M^{\vee}})\}. \tag{7.1.1}\] _(the termination of this algorithm shows that a unique minimum exists). Choose a system of positive roots \(\Delta^{+}(\mathfrak{h}^{\vee},\mathfrak{m}^{\vee})\) and let \(F=\{\varpi_{1}^{\vee},...,\varpi_{k}^{\vee}\}\subset\mathfrak{h}^{\vee}\simeq \mathfrak{h}^{*}\) denote the fundamental coweights. Note that (7.1.1) is a subset of the lattice \(\mathbb{Z}F\) generated by \(F\). Consider the finite set_ \[\{\gamma\in\mathbb{Z}F\mid\mathsf{MS}(\gamma)=(M^{\vee},tZ^{\circ},\mathbb{O} _{M^{\vee}}),\|\gamma\|\leq\|\rho\|\}. \tag{7.1.2}\] _This set can be computed using the atlas software. For any \((M^{\vee},\mathbb{O}_{M^{\vee}})\), we can directly verify that it contains a unique minimal-length \(W\)-orbit. This \(W\)-orbit must be of minimal-length in (7.1.1)._ ### Proof of Proposition 4.11 and Theorem 4.14 In Tables 7-11 below, we list all special Lusztig-Achar data \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\), except those of the form \((\mathbb{O}^{\vee},1)\) for even \(\mathbb{O}^{\vee}\), in exceptional types. Each such Lusztig-Achar datum is indicated by specifying an equivalence class of Sommers data as explained in Section 7.1. Recall, cf. 
Proposition 2.17, that \((\mathbb{O}^{\vee},\bar{C})\) corresponds to a unique pair \((L^{\vee},(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}}))\in\mathsf{LA}_{0}(G^{ \vee})\). For each Lusztig-Achar datum, we compute the following: * The nilpotent orbit \(\mathbb{O}=d_{S}(\mathbb{O}^{\vee},\bar{C})\in\mathsf{Orb}(G)\). * The Levi subgroup \(L\subset G\). * The nilpotent orbit \(\mathbb{O}_{L}=d_{S}(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})\in\mathsf{ Orb}(L)\). * \(R^{\vee}\) and \(\mathbb{O}_{R^{\vee}}\) in \((R^{\vee},sZ^{\circ},\mathbb{O}_{R^{\vee}})=\mathbb{L}(\mathbb{O}^{\vee}, \bar{C})\in\mathsf{MS}(G^{\vee})\), see (4.0.2). * The groups \(\bar{A}_{R^{\vee}}=\bar{A}(\mathbb{O}_{R^{\vee}})\), \(A_{L}=A(\mathbb{O}_{L})\), and \(A=A(\mathbb{O})\) (taking \(G^{\vee}\) to be adjoint). * The group \(\Gamma(D(\mathbb{O}^{\vee},\bar{C}))\). This requires some case-by-case analysis, see subsection 7.2.1. By inspection of Tables 7-11, we see that the following is true * For all \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\) not of the form \((\mathbb{O}^{\vee},1)\) for even \(\mathbb{O}^{\vee}\), there is a group isomorphism \[\bar{A}(\mathbb{O}_{R^{\vee}})\simeq\Gamma(D(\mathbb{O}^{\vee},\bar{C}))\] * For all \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\) not of the form \((\mathbb{O}^{\vee},1)\) for even \(\mathbb{O}^{\vee}\), the statement of Proposition 4.11 is true. Proof of Proposition 4.11 in exceptional types.: Let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\). First suppose \(\mathbb{O}^{\vee}\) is even and \(\bar{C}=1\). Choose \((L^{\vee},\mathbb{O}_{L^{\vee}})\in\mathsf{Orb}_{0}(G^{\vee})\) such that \(\mathbb{O}^{\vee}=\operatorname{Sat}_{L^{\vee}}^{G^{\vee}}\mathbb{O}_{L^{ \vee}}\). Then \((\mathbb{O}^{\vee},1)=\operatorname{Sat}_{L^{\vee}}^{G^{\vee}}(\mathbb{O}_{L^ {\vee}},1)\). Since \(\mathbb{O}^{\vee}\) and \(\mathbb{O}_{L^{\vee}}\) are even, we have, in the notation of Proposition 4.11, \(R^{\vee}=G^{\vee}\) and \(R^{\vee}_{0}=L^{\vee}\). Moreover, by Proposition 2.8, we have \(\mathbb{O}_{R^{\vee}}=\mathbb{O}^{\vee}\) and \(\mathbb{O}_{R^{\vee}_{0}}=\mathbb{O}_{L^{\vee}}\). Now Proposition 4.11 is immediate. For all other cases, we use observation (vi). Proof of Theorem 4.14 in exceptional types.: Let \((\mathbb{O}^{\vee},\bar{C})\in\mathsf{LA}^{*}(G^{\vee})\). If \(\mathbb{O}^{\vee}\) is even and \(\bar{C}=1\), then \(\bar{A}(\mathbb{O}_{R^{\vee}})\simeq\Gamma(D(\mathbb{O}^{\vee},\bar{C}))\) by Remark 4.15. For all other cases, we use observation (v). #### 7.2.1. Computations of \(\Gamma\) Let \(\widetilde{\mathbb{O}}=D(\mathbb{O}^{\vee},\bar{C})=\operatorname{Bind}_{L}^{ G}\widetilde{\mathbb{O}}_{L,Lu_{S}}\). In this subsection, we explain how the group \(\Gamma=\Gamma(\widetilde{\mathbb{O}})=\operatorname{Aut}(\widetilde{\mathbb{O }}_{max},\mathbb{O})\) is computed in Tables 7-11. We begin with the following observation. Since \(\widetilde{\mathbb{O}}_{L,univ}\) is birationally rigid, we have that \(\widetilde{\mathbb{O}}_{L,univ}=\widetilde{\mathbb{O}}_{L,Lu_{S}}\). So by Theorem 3.48, \(\widetilde{\mathbb{O}}_{max}=\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O }}_{L,univ}\). In all but six cases, the automorphism group \(\Gamma=\operatorname{Aut}(\widetilde{\mathbb{O}}_{max},\mathbb{O})\) can be computed using one of the following methods (in all such cases, we indicate the method used in the column labeled '#'): 1. Suppose \(A(\mathbb{O})\simeq 1\). Then \(\widetilde{\mathbb{O}}=\mathbb{O}\), and therefore \(\Gamma\simeq 1\). 2. 
Suppose \(A(\mathbb{O})\simeq A(\mathbb{O}_{L})\). Since \(\widetilde{\mathbb{O}}_{max}\) is birationally induced from \(\widetilde{\mathbb{O}}_{L,univ}\), the degree of \(\widetilde{\mathbb{O}}\to\mathbb{O}\) must be divisible by the order of \(A(\mathbb{O}_{L})\), see [1, Proposition 2.4.1(iv)]. So our assumption implies that the \(\widetilde{\mathbb{O}}_{max}=\widetilde{\mathbb{O}}_{univ}\). It follows that \(\Gamma\simeq A(\mathbb{O})\). 3. Suppose \(\mathbb{O}\) is even and \(L\) contains (a \(G\)-conjugate of) the Jacobson-Morozov Levi \(L_{\mathbb{O}}\) associated to \(\mathbb{O}\) (cf. 2.3.1). Then \(\mathbb{O}=\operatorname{Bind}_{L}^{G}\mathbb{O}_{L}\) by Proposition 2.8. Hence, \(\Gamma\simeq A(\mathbb{O}_{L})\) by Proposition 2.6(ii). 4. Suppose \(\mathcal{P}_{rig}(\mathbb{O})\) contains a unique element \((K,\mathbb{O}_{K})\) such that \(\dim(\mathfrak{z}(\mathfrak{z}(\mathfrak{z}))=m(\mathbb{O}))\) (cf. 3.6.3) and \(L\) contains a \(G\)-conjugate of \(K\). Then by Lemma 3.24, \(\mathbb{O}=\operatorname{Bind}_{L}^{G}\mathbb{O}_{L}\). So by Proposition 2.6(ii), we have \(\Gamma\simeq A(\mathbb{O}_{L})\). 5. Suppose there exists a Levi subgroup \(K\subset G\) containing \(L\) and that \(A(\mathbb{O})\simeq A(\mathbb{O}_{K})\), where \(\widetilde{\mathbb{O}}_{K}=\operatorname{Bind}_{L}^{K}\widetilde{\mathbb{O}}_ {L,univ}\). Then by Lemma 2.6, \(\Gamma(\widetilde{\mathbb{O}}_{K})\simeq\Gamma\). So the computation of \(\Gamma\) can be reduced to the case of a smaller rank group. 6. In some cases, we can compute \(\widetilde{\mathbb{O}}_{max}\) using Springer theory. The general argument is as follows. Let \(L\subset G\) be a Levi subgroup, let \(\widetilde{\mathbb{O}}_{L}\in\mathsf{Cov}(L)\), and let \(\widetilde{\mathbb{O}}=\operatorname{Bind}_{L}^{G}\widetilde{\mathbb{O}}_{L}\). Recall that \(\widetilde{\mathbb{O}}\) is determined by the subgroup \(A(\widetilde{\mathbb{O}})\subset A(\mathbb{O})\) (up to conjugation). In all cases, we will impose the following additional assumption: 1. Every irreducible constituent of the induced representation \(\operatorname{Ind}_{A(\widetilde{\mathbb{O}}_{L})}^{A(\mathbb{O}_{L})}\mathbb{1}\) is of Springer type, cf. Section 2.7. Then we can define the character of \(W_{L}\), (7.2.1) \[E_{\widetilde{\mathbb{O}}_{L}}:=\sum_{\psi}[\operatorname{Ind}_{A(\widetilde{ \mathbb{O}}_{L})}^{A(\mathbb{O}_{L})}\mathbb{1}:\psi]E_{(\mathbb{O}_{L},\psi)},\] where the sum runs over all irreducible \(A(\mathbb{O}_{L})\)-representations. **Proposition 7.2**.: _Assume condition \((*)\). Then there is an equality in the representation ring of \(A(\mathbb{O})\)_ \[\operatorname{Ind}_{A(\mathbb{O})}^{A(\mathbb{O})}\mathbb{1}\,=\,\sum_{\psi}[ \operatorname{Ind}_{W_{L}}^{W}E_{\widehat{\mathbb{O}}_{L}}:E_{(\mathbb{O},\psi) }]\psi \tag{7.2.2}\] _where the sum runs over all irreducible \(A(\mathbb{O})\)-representations \(\psi\) of Springer type._ Proof.: The special case when \(\widetilde{\mathbb{O}}_{L}=\mathbb{O}_{L}\) was proven in [1, Corollary 5.6]. The proof for general \(\widehat{\mathbb{O}}_{L}\) satisfying the condition \((*)\) is essentially the same and is based on [1]. See especially Proposition 1.10, Theorem 3.3 and Corollary 3.9 in [1]. For any irreducible representation \(\rho\) of a Weyl group \(W\) of exceptional type and irreducible representation \(\rho_{0}\) of a maximal parabolic subgroup \(W_{0}\subset W\), the multiplicity \([\operatorname{Ind}_{W_{0}}^{W}\rho_{0}:\rho)]\) can be found in [1]. 
Using the transitivity of induction and the Littlewood-Richardson rule, we can compute \([\operatorname{Ind}_{W_{L}}^{W}\rho_{0}:\rho]\) in general. It remains to recover the subgroup \(A(\widehat{\mathbb{O}})\) of \(A(\mathbb{O})\) (up to conjugation) from the representation \(\operatorname{Ind}_{A(\mathbb{O})}^{A(\mathbb{O})}\mathbb{1}\). **Definition 7.3**.: _Let \(H\) be a finite group. We say that \(H\) has linearly distinguishable subgroups if for any pair of subgroups \(H_{1},H_{2}\subset H\),_ \[\operatorname{Ind}_{H_{1}}^{H}\mathbb{1}\,\simeq\,\operatorname{Ind}_{H_{2}}^ {H}\mathbb{1}\,\implies\,H_{1}\text{ and }H_{2}\text{ are $H$-conjugate}\] A standard case-by-case check shows that all symmetric groups \(S_{n}\) with \(n\leqslant 5\) have linearly distinguishable subgroups (cf. [1, Lemma 1.9]). 2 If \(G\) is a simple exceptional group, then \(A(\mathbb{O})\) is a product of symmetric groups \(S_{n}\) with \(n\leqslant 5\). So in the cases of interest, it is possible to deduce \(A(\widehat{\mathbb{O}})\) (up to conjugation) from the \(A(\mathbb{O})\)-representation \(\operatorname{Ind}_{A(\mathbb{O})}^{A(\mathbb{O})}\mathbb{1}\,\). Footnote 2: Notably \(S_{n}\) with \(n\geqslant 6\) does not have this property due to an example of Gassmann ([10]), see also ([11, Chapter 4]). 7. Consider the set \(\operatorname{Unip}(\widehat{\mathbb{O}})\) of irreducible objects in the category \(\operatorname{HC}^{G}(U(\mathfrak{g})/I(\widehat{\mathbb{O}}))\). This set can be enumerated using the atlas software. By Theorem 3.46, the cardinality of \(\operatorname{Unip}(\widehat{\mathbb{O}})\) is equal to the number of conjugacy classes in \(\Gamma\). In several cases, this information will determine \(\Gamma\) uniquely. 8. Covers of \(\mathbb{O}\) (up to isomorphism) are in one-to-one correspondence with conjugacy classes of subgroups of \(A(\mathbb{O})\). The birationally rigid covers are listed in [13, Propositions 3.8.3, 3.9.5]. Suppose that all covers of \(\mathbb{O}\), except for a single cover \(\breve{\mathbb{O}}\), are either birationally rigid or birationally induced from a birationally rigid cover for a Levi subgroup not conjugate to \(L\). Then \(\breve{\mathbb{O}}\simeq\breve{\mathbb{O}}\). Sample computations: 9. Let \((\mathbb{O}^{\vee},M^{\vee})=(C_{3},C_{3})\) in type \(F_{4}\). Then \((L,\mathbb{O}_{L})=(C_{3},\{0\})\) and \(\mathbb{O}=A_{2}\). The weighted Dynkin diagram for \(\mathbb{O}\) consists of \(0\)'s and \(2\)'s, with \(0\)'s on the subdiagram of type \(C_{3}\). In particular, \(\mathbb{O}\) is even and \(L_{\mathbb{O}}=L\). Hence, \(\Gamma\simeq 1\). 10. Let \((\mathbb{O}^{\vee},M^{\vee})=((A_{5})^{\prime},A_{5})\) in type \(E_{7}\). Then \((L,\mathbb{O}_{L})=(A_{5},\{0\})\) and \(\mathbb{O}=D_{4}(a_{1})+A_{1}\). By [1, Table 7], \(\mathcal{P}_{rig}(\mathbb{O})=\{(L,\mathbb{O}_{L})\}\). So \(m(\mathbb{O})=2\). From the diagrams in [1, Section 13] it is clear that \(\operatorname{Spec}(\mathbb{C}[\mathbb{O}])\) has two dimension \(2\) singularities, both of type \(A_{1}\). And by [1, Theorems 5.11, 5.12], \(H^{2}(\mathbb{O},\mathbb{C})=0\). Hence, \(\dim\mathfrak{P}(\mathbb{O})=2\). So Lemma 3.24 implies that \(\mathbb{O}=\operatorname{Bind}_{\mathbb{O}}^{G}\mathbb{O}_{L}\), and therefore \(\Gamma\simeq A(\mathbb{O}_{L})\simeq 1\). 5. Let \((\mathbb{O}^{\vee},M^{\vee})=((A_{3}+A_{1})^{\prime},A_{3}+A_{1})\) in type \(E_{7}\). Then \((L,\mathbb{O}_{L})=(A_{3}+A_{1},\{0\})\) and \(\mathbb{O}=E_{7}(a_{5})\). 
Let \(K\) be a Levi subgroup of type \(E_{6}\) containing \(L\). Then \(\mathbb{O}_{K}=D_{4}(a_{1})\). Note that \(A(\mathbb{O})\simeq S_{3}\simeq A(\mathbb{O}_{K})\). So \(\Gamma\simeq\Gamma(\widehat{\mathbb{O}}_{K})\), where \(\widehat{\mathbb{O}}_{K}=\operatorname{Bind}_{L}^{K}\mathbb{O}_{L,univ}\). But \(\widehat{\mathbb{O}}_{K}\) corresponds under \(D^{K}\) to the Lusztig-Achar datum \((A_{3}+A_{1},1)\in\mathsf{LA}(K^{\vee})\). Using argument (8), we find that \(\Gamma\simeq\Gamma(\widehat{\mathbb{O}}_{K})=\Gamma(D^{K}(A_{3}+A_{1},1))\simeq 1\). 6. Let \((\mathbb{O}^{\vee},M^{\vee})=(E_{7}(a_{5}),E_{7})\) in type \(E_{8}\), which corresponds to \((\mathbb{O}^{\vee},\bar{C})=(E_{7}(a_{5}),1)\). In this case \(L^{\vee}=M^{\vee}=E_{7}\), \((\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})=(E_{7}(a_{5}),1)\) and the Sommers dual of \((E_{7}(a_{5}),1)\) is \(\mathbb{O}_{L}=d_{S}(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})=D_{4}(a_{1})\) in \(L^{\vee}=E_{7}\). Then \(\pi_{1}(\mathbb{O}_{L})=A(\mathbb{O}_{L})\simeq S_{3}\) and \(\widehat{\mathbb{O}}_{L}=D^{L}(\mathbb{O}_{L^{\vee}},\bar{C}_{L^{\vee}})\) is the \(6\)-fold universal cover of the orbit \(\mathbb{O}_{L}\). There are three irreducible representations of \(S_{3}\), labelled as \(\psi_{3}\) (the trivial representation), \(\psi_{21}\) and \(\psi_{1^{3}}\) (the sign representation), corresponding to the partitions of \(3\) in the subscripts. Then the Springer representations associated to \(D_{4}(a_{1})\) in \(L=E_{7}\) are \[E_{(\mathbb{O}_{L},\psi_{3})}=\phi_{315,16}=315_{a},\,E_{(\mathbb{O}_{L},\psi_ {21})}=\phi_{280,18}=280_{a},\,E_{(\mathbb{O}_{L},\psi_{13})}=\phi_{35,22}=35 _{a}.\] Here we use the tables of [1, Section 13.3] for the Springer correspondence of exceptional groups. Note that [1] labels irreducible characters of exceptional Weyl groups by the notation \(\phi_{d,e}\), where \(d\) is the degree of the character and \(e\) is its fake degree ([1, Section 11.3]), while [1] follows the notations of [10] (for \(E_{6}\) and \(E_{7}\)) and [10] (for \(E_{8}\)). For instance, \(315_{a}\) is an irreducible character of \(W_{E_{8}}\) of dimension \(315\). The subscripts are used to distinguish characters with the same dimension. The two conventions of notations can be translated to each other, for instance, by the tables in [1, Appendix]. Now consider the Sommers dual \(\mathbb{O}=E_{8}(a_{7})\) of \((E_{7}(a_{5}),1)\) in \(E_{8}\). We have \(\pi_{1}(\mathbb{O})=A(\mathbb{O})\simeq S_{5}\). Again we label the irreducible representations of \(S_{5}\) by \(\psi_{\lambda}\) were \(\lambda\) is a partition of \(5\), such that \(\psi_{5}\) is the trivial representation and \(\psi_{1^{5}}\) is the sign representation. The Springer representations attached to \(E_{8}(a_{7})\) in \(E_{8}\) are given in the Table 7.2.1 together with their multiplicities in \(\operatorname{Ind}_{W_{L}}^{W}E_{(\mathbb{O}_{L},\psi_{\eta})}\), where \(W\) (resp. \(W_{L}\)) is the Weyl group of \(E_{8}\) (resp. \(E_{7}\)), \(\mathbb{O}_{L}=D_{4}(a_{1})\) in \(E_{7}\) and \(\eta\) runs over all partitions of \(3\). Note that the column for \(\psi_{1^{5}}\) is left blank since it does not appear in the Springer correspondence for \(E_{8}(a_{7})\). The induction \(\operatorname{Ind}_{\{1\}}^{S_{3}}\mathbb{1}\) of the trivial representation of the trivial subgroup to \(S_{3}\) decomposes as \(\psi_{3}+2\psi_{21}+\psi_{1^{3}}\). 
Replacing each \(\psi_{\eta}\) in this decomposition by the corresponding Springer representation \(E_{(\mathbb{O}_{L},\psi_{\eta})}\) of \(W_{E_{7}}\), we get the \(W_{E_{7}}\)-representation \(E_{\widehat{\mathbb{O}}_{L}}=315_{a}+2\cdot 280_{a}+35_{a}\) as defined in (7.2.1). According to Table 7.2.1, we have \[\operatorname{Ind}_{A(\widehat{\mathbb{O}})}^{S_{5}}\mathbb{1} =\sum_{\begin{subarray}{c}\lambda\in\mathcal{P}(5),\\ \lambda\neq 1^{5}\end{subarray}}[\operatorname{Ind}_{W_{E_{7}}}^{W_{E_{8}}}E_{ \widehat{\mathbb{O}}_{L}}:E_{(\mathbb{O},\psi_{\lambda})}]\psi_{\lambda}\] \[=1\cdot\psi_{5}+3\cdot\psi_{41}+3\cdot\psi_{32}+3\cdot\psi_{31^{ 2}}+2\cdot\psi_{2^{2}1}+1\cdot\psi_{21^{3}}+0\cdot\psi_{1^{5}},\] which is isomorphic to \(\operatorname{Ind}_{\langle\langle 12\rangle\rangle}^{S_{5}}\mathbb{1}\) by the Littlewood-Richardson rule, where \(\langle(12)\rangle\simeq S_{2}\) is the subgroup of \(S_{5}\) generated by the permutation (12). Therefore \(A(\widehat{\mathbb{O}})\) is conjugate to \(\langle(12)\rangle\) and its normalizer subgroup in \(A(\mathbb{O})\simeq S_{5}\) is conjugate to the subgroup generated by (12) and (345). So \(\Gamma\simeq\langle(12),(345)\rangle/\langle\langle 12\rangle\rangle\simeq S_{3}\). * There are three special Lusztig-Achar data in \(F_{4}\) which map under \(d_{S}\) to \(\mathbb{O}=F_{4}(a_{3})\), namely \((A_{2}+\widetilde{A}_{1},1)\), \((\widetilde{A}_{2}+A_{1},1)\), and \((C_{3}(a_{1}),1)\). The corresponding infinitesimal characters are \((0,1,0,0)/2\), \((1,0,1,0)/2\), and \((0,1,0,1)/2\). Using atlas, we determine that there are 1, 1, and 2 unipotent Harish-Chandra bimodules annihilated by the respective maximal ideals. Since \(A(\mathbb{O})=S_{4}\), this means that the groups \(\Gamma\) associated to the corresponding covers must be 1, 1, and \(\mathbb{Z}_{2}\), respectively. * There are three special Lusztig-Achar data in \(E_{6}\) which map under \(d_{S}\) to \(\mathbb{O}=D_{4}(a_{1})\), namely \((2A_{2}+A_{1},1)\), \((A_{3}+A_{1},1)\), and \((D_{4}(a_{1}),1)\) (the final one is even, and therefore does not appear in Table 9). Using argument (3), we see that \(D(2A_{2}+A_{1},1)=\mathbb{O}\). By inspection of the incidence diagrams in [1, Section 13], we see that \(\operatorname{Spec}(\mathbb{C}[\mathbb{O}])\) contains a unique dimension 2 singularity, of type \(A_{1}\), corresponding to codimension 2 orbit \(\mathbb{O}^{\prime}=A_{3}+A_{1}\subset\overline{\mathbb{O}}\). Furthermore, \(A(\mathbb{O})=S_{3}\) and \(A(\mathbb{O}^{\prime})=1\). In particular, there are three nontrivial covers of \(\mathbb{O}\) (up to isomorphism), of degrees 2, 3, and 6. Denote them by \(\widetilde{\mathbb{O}}_{2}\), \(\widetilde{\mathbb{O}}_{3}\), and \(\widetilde{\mathbb{O}}_{6}\). We first note that \(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{6}])\to \operatorname{Spec}(\mathbb{C}[\mathbb{O}])\) smoothens the dimension 2 singularity. Otherwise, \(\dim\mathfrak{P}(\widetilde{\mathbb{O}}_{6})\geqslant|A(\mathbb{O})|/|A( \mathbb{O}^{\prime})|=6\) by [1, Lemma 7.6.13], a contradiction. Since an \(A_{1}\) singularity cannot be smoothened by a 3-fold cover, it follows that \(\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{2}])\) has no dimension 2 singularities, and hence \(\widetilde{\mathbb{O}}_{2}\sim\widetilde{\mathbb{O}}_{6}\). 
By the transitivity of birational induction, we have \[D(A_{3}+A_{1},1)=\operatorname{Bind}_{A_{3}+A_{1}}^{E_{6}}\{0\}=\operatorname {Bind}_{D_{5}}^{E_{6}}\operatorname{Bind}_{A_{3}+A_{1}}^{D_{5}}\{0\}\] Note that \(\operatorname{Bind}_{A_{3}+A_{1}}^{D_{5}}\{0\}\) is the orbit \(\mathbb{O}_{[3^{2},1^{4}]}\) in \(D_{5}\) corresponding to the partition \([3^{2},1^{4}]\). Hence, \(D(A_{3}+A_{1},1)=\operatorname{Bind}_{D_{5}}^{E_{6}}\mathbb{O}_{[3^{2},1^{4}]}\). On the other hand, \(\operatorname{Spec}(\mathbb{C}[\mathbb{O}_{[3^{2},1^{4}]}])\) contains a dimension 2 singularity, of type \(A_{1}\), see [1, Section 3.4]. Thus, \(\operatorname{Spec}(\mathbb{C}[D(A_{3}+A_{1},1)])\) contains a dimension 2 singularity. So by the preceding paragraph, we must have \(D(A_{3}+A_{1},1)=\widetilde{\mathbb{O}}_{3}\). This leaves the case of \(D(D_{4}(a_{1}),1)\). Note that \(D_{4}(a_{1})\) is saturated from the orbit \(\mathbb{O}_{[5,3]}\) in the Levi of type \(D_{4}\). So \(D(D_{4}(a_{1}),1)\) is birationally induced from the orbit \(d(\mathbb{O}_{[5,3]})=\mathbb{O}_{[2^{2},1^{4}]}\) in the Levi of type \(D_{4}\). By the transitivity of birational induction \[D(D_{4}(a_{1}),1)=\operatorname{Bind}_{D_{4}}^{E_{6}}\mathbb{O}_{[2^{2},1^{4}] }=\operatorname{Bind}_{D_{5}}^{E_{6}}\operatorname{Bind}_{D_{4}}^{D_{5}} \mathbb{O}_{[2^{2},1^{4}]}\] Note that \(\operatorname{Bind}_{D_{4}}^{D_{5}}\mathbb{O}_{[2^{2},1^{4}]}\) is the double cover \(\widetilde{\mathbb{O}}_{[3^{2},1^{4}]}\). So \(D(D_{4}(a_{1}),1)=\operatorname{Bind}_{D_{4}}^{E_{6}}\widetilde{\mathbb{O}}_{[3 ^{2},1^{4}]}\). By the preceding paragraph, \(\operatorname{Bind}_{D_{4}}^{E_{6}}\widetilde{\mathbb{O}}_{[3^{2},1^{4}]}= \widetilde{\mathbb{O}}_{3}\), so \(D(D_{4}(a_{1}),1)\) must have degree divisible by 3. The only possibility is \(D(D_{4}(a_{1}),1)=\widetilde{\mathbb{O}}_{6}\). There are six cases in type \(E_{8}\) for which special arguments are required (these cases are marked with asterisks in Tables 7-11): \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(\lambda\) & [5] & \([4,1]\) & \([3,2]\) & \([3,1^{2}]\) & \([2^{2},1]\) & \([2,1^{3}]\) & \([1^{5}]\) \\ \hline \(E_{(\mathbb{O},\psi_{\lambda})}\) & \(\phi_{4480,16}\) & \(\phi_{5670,18}\) & \(\phi_{4536,18}\) & \(\phi_{1680,22}\) & \(\phi_{1400,20}\) & \(\phi_{70,32}\) \\ & \(4480_{y}\) & \(5670_{y}\) & \(4536_{y}\) & \(1680_{y}\) & \(1400_{y}\) & \(70_{y}\) \\ \hline \(\left[\operatorname{Ind}_{W_{L}}^{W}E_{(\mathbb{O}_{L},\psi_{3})}:E_{(\mathbb{O}, \psi_{\lambda})}\right]\) & 1 & 1 & 1 & 0 & 0 & 0 & \\ \hline \(\left[\operatorname{Ind}_{W_{I}}^{W}E_{(\mathbb{O}_{L},\psi_{21})}:E_{( \mathbb{O},\psi_{\lambda})}\right]\) & 0 & 1 & 1 & 1 & 1 & 0 & \\ \hline \(\left[\operatorname{Ind}_{W_{I}}^{W}E_{(\mathbb{O}_{L},\psi_{3})}:E_{( \mathbb{O},\psi_{\lambda})}\right]\) & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 \\ \hline \end{tabular} \end{table} Table 1. Springer representations attached to \(\mathbb{O}=E_{8}(a_{7})\) * \((A_{3}+2A_{1},1)\) and \((D_{4}(a_{1})+A_{1},1)\): In both cases, the Sommers dual is \(\mathbb{O}=E_{8}(a_{6})\). By [2], there is a unique codimension 2 symplectic leaf \(\mathfrak{L}\subset X=\operatorname{Spec}(\mathbb{C}[\mathbb{O}])\), corresponding to the nilpotent orbit \(\mathbb{O}^{\prime}=D_{7}(a_{1})\). The corresponding singularity is of type \(A_{3}\), with non-trivial monodromy. Since \(A(\mathbb{O})=S_{3}\), there are (up to isomorphism) three nontrivial covers of \(\mathbb{O}\), of degrees 2, 3, and 6. 
Denote them by \(\widetilde{\mathbb{O}}_{i}\) for \(i\in\{2,3,6\}\). For each \(i\), let \(X_{i}=\operatorname{Spec}(\mathbb{C}[\widetilde{\mathbb{O}}_{i}])\) and \(\mathfrak{L}_{i}\) be the preimage of \(\mathfrak{L}\) under the map \(X_{i}\to X\). We claim that \(X_{2}\) has a unique codimension 2 leaf, and that the corresponding singularity is of type \(A_{1}\). Indeed, if the singularity of (a connected component of) \(\mathfrak{L}_{2}\) is of type \(A_{3}\), then the same is true for \(\mathfrak{L}_{6}\). In this case, [1, Lemma 7.6.13] implies \(\dim\mathfrak{P}(\widetilde{\mathbb{O}}_{6})\geq 6\). But \(m(\mathbb{O})=3\), a contradiction. Similarly, if \(\mathfrak{L}_{2}\) has 2 connected components, then \(\mathfrak{L}_{6}\) has 6 connected components since \(A(\mathbb{O}^{\prime})\simeq\mathbb{Z}_{2}\) and therefore \(\dim\mathfrak{P}(\widetilde{\mathbb{O}}_{6})\geq 6\). It follows from the claim in the previous paragraph that \(X_{6}\) has 3 codimension 2 symplectic leaves, the connected components of \(\mathfrak{L}_{6}\), and the corresponding singularities are of type \(A_{1}\). The reductive part of the centralizer of an element in \(\mathbb{O}\) is trivial. So \(\dim\mathfrak{P}(\widetilde{\mathbb{O}}_{2})=1\), and \(\dim\mathfrak{P}(\widetilde{\mathbb{O}}_{6})=3\). It follows that \(\{D(A_{3}+2A_{1},1),D(D_{4}(a_{1})+A_{1},1)\}=\{\widetilde{\mathbb{O}}_{3}, \widetilde{\mathbb{O}}_{6}\}\). Let \(L\), \(M_{1}\), and \(M_{2}\) denote the Levi subgroups of type \(D_{7}\), \(A_{3}+2A_{1}\), and \(D_{4}+A_{1}\), respectively. Then by the transitivity of birational induction \[D(A_{3}+2A_{1},1) =\operatorname{Bind}_{M_{1}}^{G}\{0\}=\operatorname{Bind}_{L}^{G }(\operatorname{Bind}_{M_{1}}^{L}\{0\})\] \[D(D_{4}(a_{1})+A_{1},1) =\operatorname{Bind}_{M_{2}}^{G}\mathbb{O}_{[2^{2},1^{4}]}\times \{0\}=\operatorname{Bind}_{L}^{G}(\operatorname{Bind}_{M_{2}}^{L}\mathbb{O}_{ [2^{2},1^{4}]}\times\{0\})\] By Proposition 5.10, \(\operatorname{Bind}_{M_{1}}^{L}\{0\}\) is the orbit corresponding to the partition \([5^{2},1^{4}]\) and \(\operatorname{Bind}_{M_{2}}^{L}\mathbb{O}_{[2^{2},1^{4}]}\times\{0\}\) is a nontrivial cover of the same orbit. Thus, \(D(D_{4}(a_{1})+A_{1},1)\) is a nontrivial cover of \(D(A_{3}+2A_{1},1)\). Hence \(D(A_{3}+2A_{1},1)=\widetilde{\mathbb{O}}_{3}\) and \(D(D_{4}(a_{1})+A_{1},1)=\widetilde{\mathbb{O}}_{6}\). From the analysis of codimension 2 leaves, it is clear that both \(\widetilde{\mathbb{O}}_{3}\) and \(\widetilde{\mathbb{O}}_{6}\) are maximal in their equivalence classes. So the corresponding \(\Gamma\)s are 1 and \(S_{3}\) respectively. * \((A_{4}+A_{1},1)\): The Sommers dual is \(\mathbb{O}=E_{6}(a_{1})+A_{1}\). By [2], there is a unique codimension 2 leaf \(\mathfrak{L}\subset X=\operatorname{Spec}(\mathbb{C}[\mathbb{O}])\), corresponding to the orbit \(D_{7}(a_{2})\). The corresponding singularity is of type \(A_{2}\) with non-trivial monodromy. The reductive part of the centralizer of an element in \(\mathbb{O}\) is one-dimensional. Hence \(\dim\mathfrak{P}(\mathbb{O})\leq 2\). Hence \(D(A_{4}+A_{1},1)\neq\mathbb{O}\). But \(A(\mathbb{O})\simeq S_{2}\). So \(D(A_{4}+A_{1},1)\) is the unique nontrivial (2-fold) cover of \(\mathbb{O}\) and \(\Gamma\simeq S_{2}\). * \((2A_{3},1)\) and \((A_{4}+2A_{1},1)\): In both cases, the Sommers dual is \(\mathbb{O}=D_{7}(a_{2})\). Let \(L\) be the Levi subgroup of type \(D_{7}\) and let \(\mathbb{O}_{L}\) be the nilpotent orbit corresponding to the partition \([3^{4},1^{2}]\) of 14. 
Then \(A(\mathbb{O}_{L})\simeq\mathbb{Z}_{2}\) and \(\mathbb{O}_{L}\) is birationally (resp. non-birationally) induced from the 0 orbit in the Levi subgroup of \(L\) of type \(2A_{3}\) (resp. \(A_{4}+2A_{1}\)). Hence, \(D(A_{4}+2A_{1},1)\) is a nontrivial cover of \(D(2A_{3},1)\). Since \(A(\mathbb{O})\simeq S_{2}\), we must have that \(D(2A_{3},1)=\mathbb{O}\) and \(D(A_{4}+2A_{1},1)\) is the unique nontrivial (2-fold) cover of \(\mathbb{O}\). The corresponding \(\Gamma\)s are 1 and \(S_{2}\) respectively. * \((D_{5}(a_{1})+A_{1},1)\): The Sommers dual is \(\mathbb{O}=D_{7}(a_{4})\). Let \(L\subset G\) be the Levi subgroup of type \(E_{7}\) and let \(\mathbb{O}_{L}=A_{3}+A_{2}\). Note that \(A(\mathbb{O}_{L})\simeq\mathbb{Z}_{2}\). We claim that \(D(D_{5}(a_{1})+A_{1},1)=\mathbb{O}\). For this it is enough to show that \(\mathbb{O}_{L}\) is birationally induced from the orbit \(\mathbb{O}_{[2^{2},1^{6}]}\times\{0\}\) in the Levi subgroup of \(L\) of type \(D_{5}+A_{1}\). This was shown above. ## 8. Tables \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(\mathbb{O}^{\vee}\) & \(\mathbb{O}_{M^{\vee}}\) & \(d_{S}(\mathbb{O}^{\vee},C)\) & \(\gamma(M^{\vee},\mathbb{O}_{M^{\vee}})\) & \(\gamma(\mathbb{O}^{\vee},C)\) & \(\gamma(D(\mathbb{O}^{\vee},C)\) & \(\tau(0)\) \\ \hline \(E_{8}(a_{7})\) & \(E_{8}(a_{7})\) & \(E_{8}(a_{7})\) & \((0,0,0,0,1,0,0)\) & \((0,0,0,0,1,0,0)\) & \((0,0,0,0,1,0,0)\) & \(1\) \\ \hline \(E_{8}(a_{7})\) & \(E_{7}(a_{5})+A_{1}\) & \(E_{7}(a_{5})\) & \((0,0,1,0,1,0,0,1)/2\) & \((0,0,1,0,0,1)/2\) & \((0,0,1,0,1,0,0,1)/2\) & \(A_{1}\) \\ \hline \(E_{8}(a_{7})\) & \(D_{8}(a_{5})\) & \(D_{6}(a_{2})\) & \((1,0,0,1,0,0,1,0)/2\) & \((1,0,0,1,0,0,1,0)/2\) & \((1,0,0,1,0,0,1,0)/2\) & \(2A_{1}\) \\ \hline \(E_{8}(a_{7})\) & \(E_{6}(a_{3})+A_{2}\) & \(E_{6}(a_{3})+A_{1}\) & \((0,1,1,0,1,0,1,1)/3\) & \((0,1,1,0,1,0,1,1)/3\) & \((0,1,1,0,1,0,1,1)/3\) & \(A_{1}\) \\ \hline \(E_{8}(a_{7})\) & \(D_{5}(a_{1})+A_{3}\) & \(D_{5}(a_{1})+A_{2}\) & \((1,1,1,0,1,1,1,1)/4\) & \((1,1,1,0,1,1,1,1)/4\) & \((1,1,1,0,1,1,1,1)/4\) & \(A_{1}\) \\ \hline \(E_{8}(a_{7})\) & \(2A_{4}\) & \(A_{4}+A_{3}\) & \((1,1,1,1,1,1,1)/5\) & \((1,1,1,1,1,1,1,1)/5\) & \((1,1,1,1,1,1,1,1)/5\) & \(A_{1}\) \\ \hline \(E_{8}(a_{7})\) & \(A_{5}+A_{2}+A_{1}\) & \(A_{5}+A_{1}\) & \((2,2,1,1,1,1,1,1)/6\) & \((2,2,1,1,1,1,1)/6\) & \((2,2,1,1,1,1,1)/6\) & \(2A_{1}\) \\ \hline \(D_{7}(a_{2})\) & \(E_{7}+A_{1}\) & \(2A_{3}\) & \((1,1,1,1,1,1,1)/4\) & \((1,1,1,1,1,1,1)/4\) & \((1,1,1,1,1,1,1)/4\) & \(B_{2}\) \\ \hline \(E_{8}(b_{6})\) & \(E_{8}(b_{6})\) & \(D_{4}(a_{1})+A_{2}\) & \((0,0,1,0,0,0,1,0,0)\) & \((0,0,0,1,0,0,0,1)\) & \(A_{2}\) \\ \hline \(E_{8}+A_{2}\) & & \((1,1,1,0,1,1,1,3)/3\) & \((1,1,1,0,1,1,1)/2\) & \((1,0,0,1,0,1,1,1)/2\) & \((1,0,0,1,0,1,1)/2\) & \(2A_{1}\) \\ \hline \(E_{8}(b_{6})\) & \(D_{8}(a_{3})\) & \(A_{3}+A_{2}+A_{1}\) & \((1,0,0,1,0,1,1,1)/2\) & \((1,1,0,1,0,1,1,1)/2\) & \((1,0,0,1,0,1,1,1)/2\) & \(2A_{1}\) \\ \hline \(E_{8}(a_{6})\) & \(E_{8}(a_{6})\) & \(D_{4}(a_{1})+A_{1}\) & \((0,0,0,1,0,0,1,0)\) & \((0,0,0,1,0,0,1,0)\) & \((0,0,0,1,0,0,1,0)\) & \(3A_{1}\) \\ \hline \(E_{8}(a_{6})\) & \(D_{8}(a_{2})\) & \(A_{3}+2A_{1}\) & \((1,1,1,0,1,0,1,1)/2\) & \((1,1,1,0,1,0,1,1)/2\) & \((1,1,1,0,1,0,1,1)/2\) & \(B_{2}+A_{1}\) \\ \hline \(E_{8}(a_{6})\) & \(A_{8}\) & \(2A_{2}+2A_{1}\) & \((1,1,1,1,1,1,1,1)/3\) & \((1,1,1,1,1,1,1)/3\) & \((1,1,1,1,1,1,1,1)/3\) & \(B_{2}\) \\ \hline \(E_{8}(b_{5})\) & \(E_{8}(b_{5})\) & \(D_{4}(a_{1})\) & \((0,0,1,0,0,1,1)\) & \((0,0,0,1,0,0,1,1)\) & \((0,0,0,1,0,0,1,1)\) & 
\(D_{4}\) \\ \hline \(E_{8}(b_{5})\) & \(E_{7}(a_{2})+A_{1}\) & \(A_{3}+A_{1}\) & \((1,1,0,1,0,1,1,2)/2\) & \((1,1,0,1,0,1,1,2)/2\) & \((B_{3}+A_{1}\) \\ \hline \(E_{8}(b_{5})\) & \(E_{6}+A_{2}\) & \(2A_{2}+A_{1}\) & \((1,1,1,1,1,1,1,1,1,1)/3\) & \((1,1,1,1,1,1,1,1,1)/3\) & \((1,1,1,1,1,1,1,1,1)/3\) & \(G_{2}+A_{1}\) \\ \hline \(E_{8}(a_{5})\) & \(E_{8}(a_{5})\) & \(E_{8}(a_{5})\) & \(2A_{2}\) & \((1,0,0,1,0,0,1,0)\) & \((1,0,0,1,0,0,1,0)\) & \(2G_{2}\) \\ \hline \(E_{8}(a_{5})\) & \(D_{8}(a_{1})\) & \(A_{2}+3A_{1}\) & \((1,1,1,0,1,1,1,1)/2\) & \((1,1,1,0,1,1,1,1)/2\) & \((1,1,1,0,1,1,1,1)/2\) & \(G_{2}+A_{1}\) \\ \hline \(E_{8}(b_{4})\) & \(E_{8}(b_{4})\) & \(A_{2}+2A_{1}\) & \((1,0,0,1,0,0,1,1)\) & \((1,0,0,1,0,0,1,1)\) & \((1,0,0,1,0,0,1,1)\) & \(B_{3}+A_{1}\) \\ \hline & \(E_{7}+A_{1}\) & \((1,1,1,1,0,1,1,1)/2\) & \((1,1,1,1,1,1,1,1)/2\) & \((1,1,1,1,1,1,1,1)/2\) & \(G_{2}+A_{1}\) \\ \hline \(E_{8}(a_{4})\) & \(E_{8}(a_{4})\) & \(A_{2}+A_{1}\) & \((1,0,0,1,0,1,0,1)\) & \((1,0,0,1,0,1)\) & \((1,0,0,1,0,1,0,1,1)\) & \(A_{5}\) \\ \hline \(E_{8}(a_{4})\) & \(D_{8}\) & \(4A_{1}\) & \((1,1,1,1,1,1,1,1)/2\) & \((1,1,1,1,1,1,1,1)/2\) & \((1,1,1,1,1,1,1,1)/2\) & \(G_{4}\) \\ \hline \(E_{8}(a_{3})\) & \(E_{8}(a_{3})\) & \(A_{2}\) & \(( \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \((\mathbb{O}^{\vee},\mathbb{O}_{M^{\vee}})\) & \(\mathbb{O}\) & \((L,\mathbb{O}_{L})\) & \((R^{\vee},\mathbb{O}_{R^{\vee}})\) & \(A_{R^{\vee}},A_{L},A\) & \(\Gamma\) & \# \\ \hline \((F_{4}(a_{3}),B_{4}(a_{1}))\) & \(B_{2}\) & \((F_{4},B_{2})\) & \((B_{4},[5,3,1])\) & \(\mathbb{Z}_{2},S_{2},S_{2}\) & \(S_{2}\) & \((2)\) \\ \hline \((F_{4}(a_{3}),C_{3}(a_{1})+A_{1})\) & \(C_{3}(a_{1})\) & \((F_{4},C_{3}(a_{1}))\) & \((C_{3}+A_{1},[4,2]\times[2])\) & \(\mathbb{Z}_{2},S_{2},S_{2}\) & \(S_{2}\) & \((2)\) \\ \hline \(F_{4}(a_{3}),2A_{2})\) & \(\bar{A}_{2}+A_{1}\) & \((F_{4},\bar{A}_{2}+A_{1})\) & \((2A_{2},[3^{2}])\) & \(1,1,1\) & \(1\) & \((1)\) \\ \hline \((F_{4}(a_{3}),A_{3}+A_{1})\) & \(A_{2}+A_{1}\) & \((F_{4},A_{2}+A_{1})\) & \((A_{3}+A_{1},[4]\times[2])\) & \(1,1,1\) & \(1\) & \((1)\) \\ \hline \((F_{4}(a_{1}),B_{4})\) & \(A_{1}\) & \((F_{4},A_{1})\) & \((B_{4},[9])\) & \(1,1,1\) & \(1\) & \((1)\) \\ \hline \((C_{3}(a_{1}),A_{1}+B_{2})\) & \(C_{3}(a_{1})\) & \((B_{3},(2^{2},1^{3}))\) & \((B_{4},[5,2^{2}])\) & \(1,1,S_{2}\) & \(1\) & \((4)\) \\ \hline \((B_{2},A_{3})\) & \(B_{2}\) & \((C_{3},(2,1^{3}))\) & \((A_{3}+A_{1},[4]\times\{0\})\) & \(1,1,S_{2}\) & \(1\) & \((4)\) \\ \hline \((A_{1},A_{1})\) & \(F_{4}(a_{1})\) & \((A_{1},\{0\})\) & \((C_{3}+A_{1},\{0\}\times[2])\) & \(1,1,S_{2}\) & \(1\) & \((3)\) \\ \hline \((A_{1},A_{1})\) & \(F_{4}(a_{1})\) & \((A_{1},\{0\})\) & \((B_{4},[3,1^{6}])\) & \(\mathbb{Z}_{2},1,S_{2}\) & \(S_{2}\) & \((8)\) \\ \hline \((A_{1}+\bar{A}_{1},2A_{1})\) & \(F_{4}(a_{2})\) & \((2A_{1},\{0\})\) & \((C_{3}+A_{1},[2^{3}]\times\{0\})\) & \(1,1,S_{2}\) & \(1\) & \((3)\) \\ \hline \((A_{2}+A_{1},A_{2}+A_{1})\) & \(F_{4}(a_{3})\) & \((A_{2}+A_{1},\{0\})\) & \((B_{4},[3^{3}])\) & \(1,1,S_{4}\) & \(1\) & \((7)\) \\ \hline \((A_{2}+A_{1},A_{2}+A_{1})\) & \(A_{3}+A_{1}\) & \((D_{5},[3^{2},1^{3}])\) & \((C_{3}+A_{1},[3^{2}]\times[2])\) & \(1,1,S_{4}\) & \(1\) & \((7)\) \\ \hline \((A_{2},A_{1}),C_{3}(a_{1})\) & \(F_{4}(a_{3})\) & \((C_{3},(4,2))\) & \((C_{3}+A_{1},[4,2]\times\{0\})\) & \(\mathbb{Z}_{2},S_{2},S_{4}\) & \(S_{2}\) & \((7)\) \\ \hline \((C_{3},C_{3})\) & \(A_{2}\) & \((C_{3},\{0\})\) & \((C_{3}+A_{1},[6]\times\{0\})\) & \(1,1,S_{2}\) & \(1\) & \((3)\) \\ \hline \end{tabular} 
\end{table} Table 8. Special Lusztig-Achar data (not of the form \((\mathbb{O}^{\vee},1)\) with even \(\mathbb{O}^{\vee}\)) in type \(F_{4}\) \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \((\mathbb{O}^{\vee},\mathbb{O}_{M^{\vee}})\) & \(\mathbb{O}\) & \((L,\mathbb{O}_{L})\) & \((R^{\vee},\mathbb{O}_{R^{\vee}})\) & \(A_{R^{\vee}},A_{L},A\) & \(\Gamma\) & \# \\ \hline \((D_{4}(a_{1}),3A_{2})\) & \(2A_{2}+A_{1}\) & \((E_{6},2A_{2}+A_{1})\) & \((3A_{2},[3]\times[3]\times[3])\) & \(1,1,1\) & \(1\) & \((1)\) \\ \hline \((E_{6}(a_{3}),A_{5}+A_{1})\) & \(3A_{1}\) & \((E_{6},3A_{1})\) & \((A_{5}+A_{1},[6]\times[2])\) & \(1,1,1\) & \(1\) & \((1)\) \\ \hline \((D_{4}(a_{1}),A_{3}+2A_{1})\) & \(A_{3}+A_{1}\) & \((D_{5},[3^{2},1^{3}])\) & \((A_{5}+A_{1},[4,2]\times[2])\) & \(1,1,1\) & \(1\) & \((1)\) \\ \hline \((A_{2},A_{4})\) & \(A_{5}\) & \((D_{4},(3^{2},2^{2},1))\) & \((A_{5}+A_{1},[3^{3}]\times\{0\})\) & \(1,1,1\) & \(1\) & \((1)\) \\ \hline \((A_{1},A_{1})\) & \(E_{6}(a_{1})\) & \((A_{1},\{0\})\) & \((A_{5}+A_{1},\{0\}\times[2])\) & \(1,1,1\) & \(1\) & \((1)\) \\ \hline \((2A_{1},2A_{1})\) & \(D_{5}\) & \((2A_{1},\{0\})\) & \((D_{5},[3,1^{7}])\) & \(1,1,1\) & \(1\) & \((1)\) \\ \hline \((3A_{1},3A_{1})\) & \(E_{6}(a_{3})\) & \((3A_{1},\{0\})\) & \((A_{5}+A_{1},[2^{3}]\times\{0\})\) & \(1,1,S_{2}\) & \(1\) & \((3)\) \\ \hline \((A_{2}+A_{1},A_{2}+A_{1})\) & \(D_{5}(a_{1})\) & \((A_{2}+A_{1},\{0\})\) & \((A_{5}+A_{1},[3^{4}]\times[2])\) & \(1,1,1\) & \(1\) & \((1)\) \\ \hline \((A_{2}+2A_{1},A_{2}+2A_{1})\) & \(A_{4}+A_{1}\) & \((A_{2}+2A_{1},\{0\})\) & \((D_{5},[3^{3},1])\) & \(1,1,1\) & \(1\) & \((1)\) \\ \hline \((A_{3},A_{3})\) & \(A_{4}\) & \((A_{3},\{0\})\) \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \((\mathbb{O}^{\vee},\mathbb{O}_{M^{\vee}})\) & \(\mathbb{O}\) & \((L,\mathbb{O}_{L})\) & \((R^{\vee},\mathbb{O}_{R^{\vee}})\) & \(A_{R^{\vee}},A_{L},A\) & \(\Gamma\) & \# \\ \hline \((E_{7}(a_{5}),D_{6}(a_{2})+A_{1})\) & \((A_{3}+A_{1})^{\prime}\) & \((E_{7},(A_{3}+A_{1})^{\prime})\) & \((D_{6}+A_{1},[7,5]\times[2])\) & \(1,1,1\) & \(1\) & (1) \\ \hline \((E_{7}(a_{5}),A_{5}+A_{2})\) & \(2A_{2}+A_{1}\) & \((E_{7},2A_{2}+A_{1})\) & \((A_{5}+A_{2},[6]\times[3])\) & \(1,1,1\) & \(1\) & (1) \\ \hline \((E_{6}(a_{1}),A_{7})\) & \(4A_{1}\) & \((E_{7},4A_{1})\) & \((A_{7},[8])\) & \(1,1,1\) & \(1\) & (1) \\ \hline \((E_{6}(a_{3}),A_{5}+A_{1})\) & \(D_{5}+A_{1}\) & \((E_{6},3A_{1})\) & \((D_{6}+A_{1},[6]^{2}\times[2])\) & \(1,1,1\) & \(1\) & (1) \\ \hline \((A_{4},2A_{3})\) & \(D_{4}+A_{1}\) & \((E_{6},3A_{1})\) & \((2A_{3}+A_{1},[4]^{2}\times[0])\) & \(1,1,1\) & \(1\) & (1) \\ \hline \((D_{4}(a_{1})+A_{1},A_{3}+3A_{1})\) & \((A_{5})^{\prime}\) & \((D_{5}+A_{1},[3,2^{2}]\times\{0\})\) & \((D_{6}+A_{1},[5,3,2^{2}]\times\{0\})\) & \(1,1,1\) & \(1\) & (1) \\ \hline \((D_{4}(a_{1}),A_{3}+2A_{1})\) & \(D_{6}(a_{2})\) & \((D_{5},(3,2^{2},1^{3}))\) & \((D_{6}+A_{1},[4^{2},2^{2}]^{T}\times\{0\})\) & \(1,1,1\) & \(1\) & (1) \\ \hline \((A_{2},4A_{1})\) & \(D_{6}\) & \((D_{4},(3,2^{2},1))\) & \((D_{6}+A_{1},[2^{6}]^{T}\times[2])\) & \(1,1,1\) & \(1\) & (1) \\ \hline \((A_{1},A_{1})\) & \(E_{7}(a_{1})\) & \((A_{1},\{0\})\) & \((D_{6}+A_{1},\{0\})\) & \((D_{1}+A_{1},[3^{2}]\times\{0\})\) & \(1,1,1\) & \(1\) & (1) \\ \hline \((A_{2},A_{1})\) & \(E_{7}(a_{2})\) & \((2A_{1},\{0\})\) & \((D_{6}+A_{1},[3,1^{3}]\times\{0\})\) & \(1,1,1\) & \(1\) & (1) \\ \hline \((A_{3},A_{3})\) & \(D_{6}(a_{1})\) & \((A_{3},\{0\})\) & \((D_{6}+A_{1},[5,1^{7}]\times\{0\})\) & \(1,1,1\) & \(1\) & (1) \\ \hline 
\((A_{2}+A_{1},A_{2}+A_{1})\) & \(E_{7}(a_{5})\) & \((2A_{2}+A_{1},\{0\})\) & \((D_{6}+A_{1},[3^{2}]\times[2])\) & \(1,1,S_{3}\) & \(1\) & (3) \\ \hline \((A_{3}+A_{1}),(A_{3}+A_{1})\) & \(E_{7}(a_{5})\) & \((A_{3}+A_{1},\{0\})\) & \((D_{6}+A_{1},[2^{6}]^{T}\times[2])\) & \(1,1,S_{3}\) & \(1\) & (5) \\ \hline \((A_{3}+A_{1},A_{3}+2A_{1})\) & \(E_{6}(a_{3})\) & \((A_{3}+2A_{1},\{0\})\) & \((D_{6}+A_{1},[4^{2},2^{2}]^{T}\times[2])\) & \(1,1,S_{2}\) & \(1\) & (3) \\ \hline \((D_{4}(a_{1})+A_{1},D_{4}(a_{1})+A_{1})\) & \((A_{5})^{\prime}\) & \((D_{4}+A_{1},[2^{2},1^{4}]\times\{0\})\) & \((D_{6}+A_{1},[4^{2},2^{2}]^{T}\times\{0\})\) & \(1,1,S_{3}\) & \(1\) & (5) \\ \hline \((A_{3}+A_{2},A_{3}+2A_{1})\) & \(E_{6}(a_{3})\) & \((A_{3}+2A_{1},\{0\})\) & \((D_{6}+A_{1},[4^{2},2^{2}]^{T}\times[2])\) & \(1,1,S_{2}\) & \(1\) & (3) \\ \hline \((D_{4}(a_{1})+A_{1},D_{4}(a_{1})+A_{1})\) & \((A_{5})^{\prime\prime}\) & \((D_{4}+A_{1},[2^{2},1^{4}]\times\{0\})\) & \((D_{6}+A_{1},[4^{2},2^{2}]^{T}\times[2])\) & \(1,1,1\) & \(1\) & (1) \\ \hline \((A_{3}+A_{2},A_{3}+A_{2})\) & \(D_{5}(a_{1})+A_{1}\) & \((A_{3}+A_{2},\{0\})\) & \((D_{6}+A_{1},[5,3^{2}]\times\{0\})\) & \(1,1,1\) & \(1\) & (1) \\ \hline \((D_{4}(a_{1})+A_{1},D_{4}(a_{1})+A_{1})\) & \(A_{4}\) & \((D_{4}+A_{1},\{0\})\) & \((D_{6}+A_{1},[7,1^{3}]\times[2])\) & \(2_{2},1,S_{2}\) & \(S_{2}\) & (8) \\ \hline \((D_{5}(a_{1}),D_{5}(a_{1}))\) & \(A_{4}\) & \((D_{5}(2^{2},1^{6}))\) & \((D_{6}+A_{1},[7,1^{3}]\times\{0\})\) & \(2_{2},1,S_{2}\) & \(S_{2}\) & (8) \\ \hline \(((A_{5})^{\prime},(A_{5})^{\prime})\) & \(D_{4}(a_{1})+A_{1}\) & \((A_{5},\{0\})\) & \((D_{6}+A_{1},[6^{2}]^{T}\times[0])\) & \(1,1,S_{2}\) & \(1\) & (4) \\ \hline \((A_{5}+A_{1},A_{5}+A_{1})\) & \(D_{4}(a_{1})\) & \((A_{5}+A_{1},\{0\})\) & \((D_{6}+A_{1},[6^{2}]^{T}\times[2])\) & \(1,1,S_{3}\) & \(1\) & (3) \\ \hline \((D_{6}(a_{2}),D_{6}(a_{2}))\) & \(D_{4}(a_{1})\) & \((D_{6}(2^{4},1^{4}))\) & \((D_{6}+A_{1},[7,5]\times\{0\})\) & \(1,1,S_{3}\) & \(1\) & (8) \\ \hline \((D_{5}+A_{1},D_{5}+A_{1})\) & \(2A
2309.15581
Positron Beams At Ce$^+$BAF
We present a scheme for the generation of a high polarization positron beam with continuous wave (CW) bunch structure for the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Laboratory (JLab). The positrons are created in a high average power conversion target and collected by a CW capture linac and DC solenoid.
J. Grames, J. Benesch, M. Bruker, L. Cardman, S. Covrig, P. Ghoshal, S. Gopinath, J. Gubeli, S. Habet, C. Hernandez-Garcia, A. Hofler, R. Kazimi, F. Lin, S. Nagaitsev, M. Poelker, B. Rimmer, Y. Roblin, V. Lizarraga-Rubio, A. Seryi, M. Spata, A. Sy, D. Turner, A. Ushakov, C. A. Valerio-Lizarraga, E. Voutier
2023-09-27T11:29:12Z
http://arxiv.org/abs/2309.15581v1
# Positron Beams at Ce\({}^{+}\)BAF\({}^{*}\) ###### Abstract We present a scheme for the generation of a high polarization positron beam with continuous wave (CW) bunch structure for the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Laboratory (JLab). The positrons are created in a high average power conversion target and collected by a CW capture linac and DC solenoid. ## 1 Introduction The CEBAF accelerator has provided high energy spin polarized electron beams for almost 30 years. Today, JLab is exploring an upgrade which would provide high energy spin polarized positron beams to address new physics [1, 2]. A relatively new technique referred to as PEPPo (Polarized Electrons for Polarized Positrons) has been adopted [3] to generate the positrons. Here the spin polarization of an electron beam is transferred by polarized bremsstrahlung and polarized e+/e- pair creation within a high-power rotating tungsten target. In this scheme two accelerators are used (see Fig. 1). First, the Jefferson Lab Low Energy Recirculator Facility (LERF) building (see Fig. 2) is repurposed to take advantage of existing electrical, cryogenic, and shielding facilities. A high current (>1 mA) spin polarized CW electron beam is produced, accelerated to an energy of 120 MeV, and transported to the high-power target to generate the spin polarized positrons. Afterwards, the positrons are collected to maximize intensity or polarization, bunched, and re-accelerated to 123 MeV. Finally, their spin direction may be adjusted in a novel spin rotator. Once the positron beam exits the LERF, it is transported from ground level through a new beam line to the CEBAF accelerator tunnel underground. There it is transported half-way around the accelerator and injected as a usual electron beam would be from the existing CEBAF electron injector. The positrons are then accelerated to 12 GeV and may be extracted at any pass (intermediate energies) to any of the four halls. The Ce\({}^{+}\)BAF design is optimized to provide users with spin polarization >60 % at intensities >100 nA, and with higher intensities when polarization is not needed. ## 2 LERF ### Polarized Electron Injector The existing LERF injector provides the baseline layout with the superconducting quarter cryomodule (Capture Linac: SRF 10 MV) capable of accelerating up to 10 mA CW beams to 9 MeV/\(c\) [4]. Upstream of the Capture Linac, the layout will resemble that in CEBAF, albeit more compact, starting with the polarized electron source, followed by a Wien spin rotator and a buncher cavity for longitudinal matching to the SRF 10 MV. Downstream of the SRF 10 MV, a three dipole magnet chicane injects the electron beam into the first of two full-length accelerating SRF cryomodules (60 MV each). The LERF electron gun will be a scaled-up version of the 130 keV inverted geometry gun used at CEBAF for many years [5]. The CEBAF gun reliably provides highly spin polarized (~90%) electron beams at an average current of 200 \(\mu\)A with 0.4 pC CW bunch trains (250/499 MHz). Due to excellent dynamic vacuum conditions and a biased anode limiting ionized residual gas from reaching the photocathode, charge lifetimes >400 C with strained-superlattice GaAs/GaAsP are achieved [6, 7]. However, because the bremsstrahlung yield of positrons from electrons will be low (\(<10^{-4}\)), a much higher beam current (>1 mA) is required, with correspondingly higher bunch charge (>2 pC). We expect to operate the gun in a range of 300-350 kV to manage the higher bunch charge and allow direct injection to the SRF 10 MV.

Figure 1: CEBAF and LERF accelerators. The green line shows the new 123 MeV transport beam line connecting LERF to CEBAF for high energy acceleration of positron beams.

To meet the anticipated demands we expect to re-design the gun cathode electrode in two ways: (a) to have a larger spherical radius to achieve and safely maintain higher gradient, and (b) to accommodate a larger laser beam spot size to extend the charge lifetime. Additionally, the cathode must be free of field emission, so we plan to include the capability of applying 50 kV beyond the required beam voltage for high voltage gas conditioning. The higher bunch charges also pose challenges for the initial bunching and acceleration of the beam. Space charge forces will repel the electrons and reverse the bunching as the beam drifts, and space charge effects typically degrade beam quality. To prevent this, we have kept the distance between the gun and the first accelerating element as short as possible and plan to compress the electron bunch from 40 ps (determined by the optical pulse length) to about 2 ps within a few meters prior to the SRF QCM. The final section of the electron injector shapes the transverse emittance to match the acceptance for two CEBAF style CMs which accelerate the beam energy to about 120 MeV. A separate contribution to this conference [8] describes the electron injector in detail. ### High Power Target for Positron Production A conceptual design of the high power positron target has been developed. Tungsten has been chosen as the preferred target material. GEANT4 [9] simulations have been used to determine that a tungsten target thickness of 4 mm is optimal for maximizing the Figure-of-Merit [10] (FOM, defined as the product of the positron current and the square of their longitudinal polarization). The thermal power deposited by a 1 mA electron beam of 120 MeV energy into 4 mm of tungsten has been estimated with FLUKA [11, 12] to be on the order of 17 kW. A typical target employed at JLab has less than 1 kW of electron beam power deposited into the target material. The only feasible cooling agent for the 17 kW target at Jefferson Lab would be water, as the maximum cryogenic capacity for target cooling is less than 6 kW. Notably, the only JLab target that surpassed the 1 kW mark to date was the 2.5 kW liquid hydrogen target [13] for the Qweak experiment. The design of the target has been done with ANSYS-Fluent [14] thermal simulations. Fluent calculations have shown that a static 4 mm thick tungsten target in a copper frame cooled by an internal water channel could safely sustain about 1 kW of electron beam power. A 35 cm diameter and 4 mm thick tungsten target rotating at 2 Hz could safely dissipate the 17 kW beam power deposited in it while maintaining a maximum temperature below 1000 K. ### CW Positron Beam Formation The generation of positrons in a thick target creates an exceptionally broad distribution in transverse and longitudinal phase space. A high field (\(B_{1}\)) quarter-wave transformer (QWT) located after the target decreases the transverse angular divergence of the positron distribution while also defining the central momentum of the positron polarization distribution to be collected. Following the high field region, a low field (\(B_{2}\)) solenoid is used to manage the positron beam through an RF capture section.
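To make the polarization-intensity trade-off behind this FOM concrete, the short sketch below (our illustration, not code from the paper; both operating points are assumed for exposition, the first loosely inspired by the goals in Table 1) evaluates FOM = \(I\,P^{2}\):

```python
# Illustrative sketch of the Figure-of-Merit (FOM) defined above:
# FOM = I * P^2, the positron current times the squared longitudinal
# polarization. Operating points are hypothetical, chosen only to show
# how the quadratic weighting favors polarization over raw intensity.

def fom(current_nA: float, polarization: float) -> float:
    """Return the FOM in nA; polarization is a fraction in [0, 1]."""
    return current_nA * polarization ** 2

operating_points = {
    "polarization-optimized (assumed)": (170.0, 0.62),
    "intensity-optimized (assumed)": (1000.0, 0.25),
}

for label, (i_nA, p) in operating_points.items():
    print(f"{label}: FOM = {fom(i_nA, p):.1f} nA")
# Despite ~6x less current, the high-polarization point has the larger
# FOM: 170 * 0.62**2 = 65.3 nA versus 1000 * 0.25**2 = 62.5 nA.
```

This quadratic dependence on \(P\) is what makes a modest-current, high-polarization collection competitive with a much more intense but weakly polarized one.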
The solenoid fields \(B_{1}\) and \(B_{2}\) are optimized to maintain the large 4D transverse phase space of the beam [16] through this region. A positron momentum spread \(\delta p/p_{0}<1\%\) was chosen early on in the design in order to mitigate aperture losses in regions of large dispersion in the transport lines connecting the LERF to CEBAF. Although we have not settled on a final momentum spread, this issue motivated us to include an RF capture section right after the QWT in our design studies in order to decrease the longitudinal energy spread as well as improve the transverse beam emittance. Following the RF capture region, the positron momentum is defined by a chicane beamline composed of quadrupoles and dipoles to create a correlation between positron energy and transverse position at its midpoint. After the chicane the positron beam is accelerated in a SRF CM to 123 MeV (an injection energy requirement for CEBAF 12 GeV) and transported through a bunch compression chicane to achieve a bunch length of a few picoseconds.

Figure 2: LERF layout of polarized e\({}^{-}\) and e\({}^{+}\) injectors.

Figure 3: Side-view concept design of the rotating target.

In this contribution the chicane was optimized for 60 MeV/\(c\) positrons (maximum polarized FOM) while passing a 1% energy spread. Results of recent CW polarized positron beam simulations are shown in Table 1 relative to our present design goals. We have met or exceeded all goals except for the normalized emittance, which can be met by reducing the acceptance and reducing the positron beam current or increasing the drive beam power. A reference to earlier work on this topic was reported in Ref. [17]. ### Positron Spin Rotator The precession of the electron beam polarization when accelerated at CEBAF to 12 GeV is more than 60 full revolutions. Experiments, however, most often require longitudinal or sometimes transverse spin polarization at their target. At CEBAF a 4\(\pi\) spin rotator consisting of two Wien filters with intervening solenoid magnets [18] is used to orient the spin at the injector to control the final spin polarization at the experiment. This is convenient when the beam energy is 100 keV and the required Wien filter field strengths are modest (e.g., E \(\sim\) 1 MV/m and B \(\sim\) 100 G). However, the positron beam production energies at the LERF are tens of MeV and the final beam energy is >100 MeV, making a Wien filter impractical. For Ce\({}^{+}\)BAF a higher energy spin rotator concept has been imagined. The proposed spin rotator scheme is shown in Fig. 4. Composed of interleaved dipole and solenoid fields, the rotator exploits the small anomalous gyromagnetic factor of positrons (or even electrons), which makes the spin rotation in the solenoids more effective than that in the dipoles at lower energies [19]. However, the dipole magnetic field is necessary to provide the desired spin rotation axis. Rotating the spin around the longitudinal solenoid and radial dipole fields, this spin rotator can provide a desired net spin rotation around the vertical axis in the horizontal plane. Notably, in this design the dipole fields are arranged with net zero bending angle, leaving the beam trajectory intact and transparent to beam orbit perturbations. Further details of this design will be presented in a future publication once ongoing simulations are completed. ## 3 12 GeV Ce\({}^{+}\)BAF Once the CW positron beam has been formed and the spin oriented, it is ready for acceleration to higher energies.
The positron beam is transported in a new tunnel connecting the east side of LERF to the south east corner of CEBAF near the entrance of the South Linac (Fig. 1). This new beamline features a double-bend achromat (DBA) to maintain small dispersion and a vertical achromatic translator to bring the beam to the elevation of the CEBAF South Linac tunnel near the ceiling. At this point a long FODO channel attached to the ceiling of the South Linac transports the beam to the west side of CEBAF, where it is bent 180 degrees via a DBA-like lattice that has low dispersion and is also isochronous. At the end of this long transport line a vertical achromat translator and horizontal bending magnets bring the beam to the start of the North Linac where it is injected. Additionally, each beamline also has a betatron matching section. While this long beamline from LERF to CEBAF is designed for the 123 MeV/\(c\) positron beam, it should also be suitable for an electron beam with energy up to 650 MeV/\(c\) to be compatible with a future upgrade of CEBAF to 22 GeV. The CEBAF accelerator limits the maximum transverse emittance that one can transport because of the reduced acceptance at the extraction corners. We estimate that one can inject between 40 and 120 mm mrad of normalized emittance at the front of the north linac. In terms of longitudinal acceptance, we are planning to change the optics configuration for the first two recirculation arcs (east and west sides) in order to have smaller dispersion functions and an easily tunable momentum compaction. With these new optics we should expect to inject up to a percent of energy spread in the front of the north linac and transport a beam that has a longitudinal bunch length around 1 mm. A separate contribution to this conference [20] is exploring the admittance of the electron injector and first recirculation pass of CEBAF. ## 4 Outlook The Ce\({}^{+}\)BAF working group has developed a scheme to provide CEBAF with polarized positron beams with CW time structure. Early designs and simulated parameters combined with constraints are approaching the anticipated goals. Our focus in the coming months is to develop a white paper documenting in greater detail the technical approach and additional issues being addressed but not reported within the length of these proceedings. \begin{table} \begin{tabular}{l r r} \hline \hline **Ce\({}^{+}\)BAF Parameter** & **Status** & **Goal** \\ \hline \(p_{0}\) [MeV/\(c\)] & 60 & 60 \\ \(\sigma_{\delta p/p_{0}}\) [\%] & 0.68 & \(\pm 1\) \\ \(\sigma_{z}\) [ps] & 3 & \(\leq 4\) \\ Normalized \(\epsilon_{n}\) [mm mrad] & 140 & \(\leq 40\) \\ \(p_{f}\) [MeV/\(c\)] & 123 & 123 \\ \(I_{e^{+}}\)(\(P>60\%\)) [nA] & 170 & \(>50\) \\ \hline \hline \end{tabular} \end{table} Table 1: Simulated parameters of the Ce\({}^{+}\)BAF injector. Figure 4: Spin rotator concept. \(\phi_{x}\): spin rotation around the radial axis, \(\phi_{z}\): spin rotation around the longitudinal axis.
2309.05792
Seamless Integration of Tactile Sensors for Cobots
The development of tactile sensing is expected to enhance robotic systems in handling complex objects like deformables or reflective materials. However, readily available industrial grippers generally lack tactile feedback, which has led researchers to develop their own tactile sensors, resulting in a wide range of sensor hardware. Reading data from these sensors poses an integration challenge: either external wires must be routed along the robotic arm, or a wireless processing unit has to be fixed to the robot, increasing its size. We have developed a microcontroller-based sensor readout solution that seamlessly integrates with Robotiq grippers. Our Arduino compatible design takes away a major part of the integration complexity of tactile sensors and can serve as a valuable accelerator of research in the field. Design files and installation instructions can be found at https://github.com/RemkoPr/airo-halberd.
Remko Proesmans, Francis wyffels
2023-09-11T19:49:57Z
http://arxiv.org/abs/2309.05792v1
# Seamless Integration of Tactile Sensors for Cobots ###### Abstract The development of tactile sensing is expected to enhance robotic systems in handling complex objects like deformables or reflective materials. However, readily available industrial grippers generally lack tactile feedback, which has led researchers to develop their own tactile sensors, resulting in a wide range of sensor hardware. Reading data from these sensors poses an integration challenge: either external wires must be routed along the robotic arm, or a wireless processing unit has to be fixed to the robot, increasing its size. We have developed a microcontroller-based sensor readout solution that seamlessly integrates with Robotiq grippers. Our Arduino compatible design takes away a major part of the integration complexity of tactile sensors and can serve as a valuable accelerator of research in the field. Design files and installation instructions can be found at [https://github.com/RemkoPr/airo-halberd](https://github.com/RemkoPr/airo-halberd). Tactile sensing, System integration, Open-source ## I Introduction Tactile sensing is essential for robotic systems in dealing with complex objects in dynamic environments [1]. Clothing, for example, is difficult to handle using solely computer vision due to its infinitely large configuration space [2] and self-occlusions. In particular, different research groups have recently attempted a tactile approach [3, 4, 5] to unfold textile pieces. Furthermore, reflective, transparent and textureless objects are a challenge for industrial robots [6], caused by improper image segmentation and distorted stereoscopic depth images for vision-only systems. However, commercially available industrial grippers typically do not feature tactile feedback [7, 8]. The prevailing trend is that these grippers offer binary feedback indicating whether they are gripping an object, obtained by monitoring their current consumption. This feedback, combined with force-torque sensing in the joints of a robotic arm, is the current extent of readily available tactile sensing. Hence, researchers are required to acquire or develop their own tactile sensors and integrate them into the robot arm. However, integrating tactile sensors requires careful consideration of how to power the sensors and how to communicate the data. An evident choice is to run external wiring, which severely limits the movements of the robot. In contrast, adding external batteries, power splitters or wireless readout electronics to the robot can cause self-collisions and further restrict robot movement when deployed in a constrained environment. We present a seamless solution for the integration of tactile sensors, specifically for Robotiq grippers and Universal Robot (UR) arms, both brands that are commonly employed in the literature [9, 10, 11, 12]. By lowering the entry barrier to integrating tactile sensors into prevalent robotic grippers, we believe our design is an accelerator for many researchers in the field of robotic tactile sensing. ## II Design To mount a Robotiq gripper to a cobot arm, Robotiq provides an I/O Coupling. This coupling is bolted to the arm, and the gripper is bolted to the coupling. In [13], we presented an intermediate flange, to be placed between the Robotiq I/O Coupling and the gripper, which provided a breakout interface of the power and signal pins of the gripper. We have now realised a coupling which completely replaces the Robotiq I/O Coupling and provides extra functionality, shown in Fig. 1.

Fig. 1: System integration.
2304.00170
Fixation probability in evolutionary dynamics on switching temporal networks
Population structure has been known to substantially affect evolutionary dynamics. Networks that promote the spreading of fitter mutants are called amplifiers of natural selection, and those that suppress the spreading of fitter mutants are called suppressors. Research in the past two decades has found various families of amplifiers, while suppressors still remain somewhat elusive. It has also been discovered that most networks are amplifiers under the birth-death updating combined with uniform initialization, which is a standard condition assumed widely in the literature. In the present study, we extend the birth-death processes to temporal (i.e., time-varying) networks. For the sake of tractability, we restrict ourselves to switching temporal networks, in which the network structure alternates between two static networks at constant time intervals. We show that, in a majority of cases, switching networks are less amplifying than both of the two static networks constituting the switching networks. Furthermore, most small switching networks are suppressors, which contrasts with the case of static networks.
Jnanajyoti Bhaumik, Naoki Masuda
2023-03-31T23:30:21Z
http://arxiv.org/abs/2304.00170v2
# Fixation probability in evolutionary dynamics on switching temporal networks ###### Abstract Population structure has been known to substantially affect evolutionary dynamics. Networks that promote the spreading of fitter mutants are called amplifiers of natural selection, and those that suppress the spreading of fitter mutants are called suppressors. Research in the past two decades has found various families of amplifiers, while suppressors still remain somewhat elusive. It has also been discovered that most networks are amplifiers under the birth-death updating combined with uniform initialization, which is a standard condition assumed widely in the literature. In the present study, we extend the birth-death processes to temporal (i.e., time-varying) networks. For the sake of tractability, we restrict ourselves to switching temporal networks, in which the network structure alternates between two static networks at constant time intervals. We show that, in a majority of cases, switching networks are less amplifying than both of the two static networks constituting the switching networks. Furthermore, most small switching networks are suppressors, which contrasts with the case of static networks. ## 1 Introduction Evolutionary dynamics models enable us to study how populations change over time under natural selection and neutral random drift, among other factors. Over the past two decades, population structure, particularly that represented by networks (i.e., graphs), has been shown to significantly alter the spread of mutant types [1, 2, 3, 4, 5]. Mutants may have a fitness that is different from the fitness of a resident type, which makes the mutants either more or less likely to produce offspring. The fitness of each type may vary depending on the types of the neighboring individuals, as in the case of evolutionary games on networks. On the other hand, the simplest assumption is that the fitness of each type is constant over time. This latter case, which we refer to as constant selection, has also been studied in the form of biased voter models, modeling stochastic opinion formation in networks (and well-mixed populations) [6, 7, 8, 9]. Networks on which real-world dynamical processes approximated by evolutionary dynamics occur may be time-varying. Temporal (i.e., time-varying) networks and dynamical processes on them have been extensively studied [10, 11, 12, 13, 14, 15, 16]. Evolutionary game dynamics on time-varying networks are no exception. It has been shown that temporal networks enhance the evolution of cooperation as compared to static networks [17, 18, 19, 20]. It has also been known for a longer time that coevolutionary dynamics of a social dilemma game and network structure, in which the dynamics of the network structure depend on the state of the nodes (e.g., cooperator or defector), enhance overall cooperation if players tend to avoid creating or maintaining edges connecting to defectors [5, 21, 22, 23]. In this study, we investigate constant-selection evolutionary dynamics on temporal networks to clarify how the time dependence of the network structure impacts evolutionary processes. In particular, a key question in studies of constant-selection evolutionary dynamics on networks is the fixation probability, defined as the probability that a single mutant type introduced to a node in the network eventually fixates, i.e., occupies all the nodes of the network.
The fixation probability depends on the fitness of the mutant type relative to the fitness of the resident type, denoted by \(r\). A network is called an amplifier of natural selection if it has a higher fixation probability than the complete graph, which corresponds to the Moran process, when \(r>1\) and a lower fixation probability when \(r<1\); conversely, a network is called a suppressor if the fixation probability is smaller than for the Moran process for \(r>1\) and larger for \(r<1\) [1, 24]. In Fig. 1, we show hypothetical examples of the fixation probability as a function of \(r\) for three networks: the complete graph (i.e., Moran process), an amplifier, and a suppressor. Under the so-called birth-death updating rule and uniform initialization, most static networks are amplifiers [25, 26]. In fact, there is only one suppressing static network with six nodes among the 112 connected six-node networks [27]. Furthermore, various families of amplifiers have been found [28, 29, 30, 31, 32], whereas suppressors still remain elusive [27, 33].

Figure 1: Concept of amplifier and suppressor of natural selection. The fixation probability of a single mutant type for an amplifier is smaller than that for the Moran process when \(r<1\) and larger than that for the Moran process when \(r>1\). Conversely, the fixation probability for a suppressor is larger than that for the Moran process when \(r<1\) and smaller than that for the Moran process when \(r>1\). The Moran process, amplifier, and suppressor have the same fixation probability at \(r=1\), which is equal to \(1/N\). In the figure, the fixation probabilities for the Moran process are given by Eq. (9) with \(N=5\), and those for the amplifier and suppressor are hypothetical ones for expository purposes.

On these grounds, we ask the following two main questions in the present study. First, as in the case of static networks, are a vast majority of temporal networks amplifiers of natural selection under the same condition (i.e., birth-death updating rule and uniform initialization)? Second, if we combine amplifying static networks, \(G_{1}\) and \(G_{2}\), into a temporal network, can the obtained temporal network be a suppressor or a less amplifying temporal network than both \(G_{1}\) and \(G_{2}\)? ## 2 Model Let \(G\) be a static weighted network with \(N\) nodes. We assume undirected networks for simplicity, although extending the following evolutionary dynamics to the case of directed networks is straightforward. We assume that each node takes either the resident or mutant type at any discrete time. The resident and mutant have fitness \(1\) and \(r\), respectively. The fitness represents the propensity with which each type is selected for reproduction in each time step. The mutant type initially occupies just one node, which is selected uniformly at random among the \(N\) nodes. The other \(N-1\) nodes are occupied by the resident type. We then run the birth-death process, which is a generalization of the Moran process to networks [1, 3, 4, 5, 34, 35]. Specifically, in every discrete time step, we select a node \(v\) to reproduce with the probability proportional to its fitness value. Next, we select a neighbor of \(v\), denoted by \(v^{\prime}\), with the probability proportional to the weight of the undirected edge (\(v\), \(v^{\prime}\)). Then, the type at \(v\) (i.e., either resident or mutant) replaces that at \(v^{\prime}\).
We repeat this process until the entire population is of a single type, either resident or mutant, which we call the fixation. In this study, we extend this birth-death process to temporal networks in which two static networks \(G_{1}\) and \(G_{2}\), both having \(N\) nodes, alternate with constant intervals \(\tau\). We call this temporal network model the switching network and denote it by \((G_{1},G_{2},\tau)\). Switching networks have been used for studying various dynamics on temporal networks including synchronization [35, 36, 37, 38, 39, 40, 41], random walk [42, 43, 44], epidemic processes [45, 46, 47, 48], network control [49], and reaction-diffusion systems [50]. Specifically, we first run the birth-death process on \(G_{1}\) for \(\tau\) time steps. Then, we switch to \(G_{2}\) and run the same birth-death process on \(G_{2}\) for \(\tau\) time steps. Then, we switch back to \(G_{1}\). We keep flipping between \(G_{1}\) and \(G_{2}\) every \(\tau\) time steps until the fixation of either type occurs. ## 3 Computation and theoretical properties of the fixation probability in switching networks In this section, we describe the methods for calculating the fixation probability of a single mutant, i.e., the probability that the mutant type of fitness \(r\) fixates when there is initially just one node of the mutant type that is selected uniformly at random. We extend the methods for static networks [51] to our model. We also state some mathematical properties of the fixation probability in switching networks. ### Fixation probability in static networks We first explain the known procedure for calculating the fixation probability of the mutant type, which we simply refer to as the fixation probability in the following text, in any static weighted network using Markov chains [1, 51]. We describe the state of the evolutionary dynamics by an \(N\)-dimensional binary vector \(\mathbf{s}=(s_{1},\ldots,s_{N})\), where \(s_{i}\in\{0,1\},\forall i\in\{1,\ldots,N\}\). For each \(i\), let \(s_{i}=0\) or \(s_{i}=1\) indicate that node \(i\) is occupied by a resident or a mutant, respectively. Let \(S\) be the set of all states. Note that \(S\) has cardinality \(2^{N}\), that is, there are \(2^{N}\) states and that there are \({N\choose m}\) states with \(m\) mutants. We label the states by a bijective map, denoted by \(f\), from \(S\) to \(\{1,\ldots,2^{N}\}\). The transition probability matrix of the Markov chain, denoted by \(T=(T_{ij})\), is a \(2^{N}\times 2^{N}\) matrix. Its entry \(T_{f(\mathbf{s}),f(\mathbf{s^{\prime}})}\) represents the probability that the state changes from \(\mathbf{s}\) to \(\mathbf{s^{\prime}}\) in one time step. It should be noted that \(T_{f(\mathbf{s}),f(\mathbf{s^{\prime}})}\) can be non-zero if and only if vectors \(\mathbf{s}\) and \(\mathbf{s^{\prime}}\) differ in at most one entry. Therefore, each row of \(T\) has at most \(N+1\) non-zero entries. Let \(\mathbf{s}\) be a state with \(m\) mutants, \(s_{i}=1\) for \(i\in\{g(1),\ldots,g(m)\}\), and \(s_{i}=0\) for \(i\in\{g(m+1),\ldots,g(N)\}\), where \(g\) is a permutation on \(\{1,\ldots,N\}\). Let \(\mathbf{s^{\prime}}\) be the state with \(m+1\) mutants in which \(s^{\prime}_{i}=1\) for \(i\in\{g(1),\ldots,g(m),g(m+1)\}\) and \(s^{\prime}_{i}=0\) for \(i\in\{g(m+2),\ldots,g(N)\}\). Note that \(\mathbf{s}\) and \(\mathbf{s^{\prime}}\) differ only at the \(g(m+1)\)th node, where \(\mathbf{s}\) has a resident and \(\mathbf{s^{\prime}}\) has a mutant.
We obtain \[T_{f(\mathbf{s}),f(\mathbf{s^{\prime}})}=\frac{r}{rm+N-m}\sum_{m^{\prime}=1}^{m}\frac {A_{g(m^{\prime}),g(m+1)}}{w(g(m^{\prime}))}, \tag{1}\] where \(A\) denotes the weighted adjacency matrix of the network, i.e., \(A_{ij}\) is the weight of edge \((i,j)\), and \(w(i)\equiv\sum_{j=1}^{N}A_{ij}\) represents the weighted degree of the \(i\)th node, also called the strength of the node. Next, consider a state \(\mathbf{s^{\prime\prime}}\) with \(m-1\) mutants such that \(s^{\prime\prime}_{i}=1\) for \(i\in\{g(1),\ldots,g(\tilde{m}-1),g(\tilde{m}+1),\ldots,g(m)\}\) and \(s^{\prime\prime}_{i}=0\) for \(i\in\{g(\tilde{m}),g(m+1),g(m+2),\ldots,g(N)\}\). We obtain \[T_{f(\mathbf{s}),f(\mathbf{s^{\prime\prime}})}=\frac{1}{rm+N-m}\sum_{m^{\prime}=m+1}^ {N}\frac{A_{g(m^{\prime}),g(\tilde{m})}}{w(g(m^{\prime}))}. \tag{2}\] The probability that the state does not change after one time step is given by \[T_{f(\mathbf{s}),f(\mathbf{s})}=1-\frac{r}{rm+N-m}\sum_{\ell=m+1}^{N}\sum_{m^{\prime }=1}^{m}\frac{A_{g(m^{\prime}),g(\ell)}}{w(g(m^{\prime}))}-\frac{1}{rm+N-m} \sum_{\tilde{m}=1}^{m}\sum_{m^{\prime}=m+1}^{N}\frac{A_{g(m^{\prime}),g( \tilde{m})}}{w(g(m^{\prime}))}. \tag{3}\] Let \(x_{f(\mathbf{s})}\) denote the probability that the mutant fixates when the evolutionary dynamics start from state \(\mathbf{s}\). Because \[x_{f(\mathbf{s})}=\sum_{\mathbf{s^{\prime}}\in S}T_{f(\mathbf{s}),f(\mathbf{s^{\prime}})}x_{f( \mathbf{s^{\prime}})}, \tag{4}\] we obtain \(T\mathbf{x}=\mathbf{x}\), where \(\mathbf{x}=(x_{1},\ldots,x_{2^{N}})^{\top}\), and \({}^{\top}\) represents the transposition. Because \(x_{f((0,\ldots,0))}=0\) and \(x_{f((1,\ldots,1))}=1\), we need to solve the set of \(2^{N}-2\) linear equations to obtain the fixation probabilities starting from an arbitrary initial state. ### Fixation probability in switching networks We now consider the same birth-death process on switching network \((G_{1},G_{2},\tau)\). To calculate the fixation probability in \((G_{1},G_{2},\tau)\), we denote by \(T^{(1)}\) and \(T^{(2)}\) the transition probability matrices for the birth-death process on static network \(G_{1}\) and \(G_{2}\), respectively. Let \(x_{i}(t)\) be the fixation probability when the evolutionary dynamics start from the \(i\)th state (with \(i\in\{1,\ldots,2^{N}\}\)) at time \(t\). We obtain \[\mathbf{x}(t)=\begin{cases}T^{(1)}\mathbf{x}(t+1)&\text{ if }2n\tau\leq t<(2n+1)\, \tau,\\ T^{(2)}\mathbf{x}(t+1)&\text{ if }(2n+1)\tau\leq t<(2n+2)\,\tau,\end{cases} \tag{5}\] where \(\mathbf{x}(t)=\left(x_{1}(t),\ldots,x_{2^{N}}(t)\right)^{\top}\) and \(n\in\left\{0,1,\ldots\right\}\). We recursively use Eq. (5) to obtain \[\mathbf{x}\left(0\right)= T^{(1)}\mathbf{x}\left(1\right)=\cdots=\left(T^{(1)}\right)^{\tau} \mathbf{x}\left(\tau\right)=\left(T^{(1)}\right)^{\tau}\left(T^{(2)}\right)\mathbf{x }\left(\tau+1\right)=\cdots\] \[= \left(T^{(1)}\right)^{\tau}\left(T^{(2)}\right)^{\tau}\mathbf{x} \left(2\tau\right). \tag{6}\] Because of the periodicity of the switching network, we obtain \(\mathbf{x}\left(0\right)=\mathbf{x}\left(2\tau\right)\). Therefore, the fixation probability is given as the solution of \[\mathbf{x}^{*}=\left(T^{(1)}\right)^{\tau}\left(T^{(2)}\right)^{\tau}\mathbf{x}^{*}. \tag{7}\] Let \(\tilde{S}^{(1)}\) be the set of the \(N\) states with just one mutant. Then, the fixation probability when there is initially a single mutant located on a node that is selected uniformly at random is given by \[\rho\equiv\frac{1}{N}\sum_{\mathbf{s}\in\tilde{S}^{(1)}}x_{f(\mathbf{s})}^{*}. 
\tag{8}\] Note that \(\rho\) is a function of \(r\) and depends on the network structure. Because \(\left(T^{(1)}\right)^{\tau}\left(T^{(2)}\right)^{\tau}\) is a stochastic matrix with two absorbing states, Eq. (7) has a unique solution [52, 53]. The birth-death process on switching networks has the following property. **Theorem 1**.: _(Neutral drift) If \(r=1\), then \(\rho=\frac{1}{N}\) for arbitrary \(G_{1}\), \(G_{2}\), and \(\tau\in\mathbb{N}\)._ Proof.: We imitate the proof given in [54]. Assume a switching network \((G_{1},G_{2},\tau)\) on \(N\) nodes and that each node is initially occupied by a mutant of a distinct type, i.e., node \(i\) is occupied by a mutant of type \(A_{i}\). We also assume that each mutant has fitness \(1\). We denote the probability that mutant \(A_{i}\) fixates by \(q_{i}\). Note that \(\sum_{i=1}^{N}q_{i}=1\). Now we reconsider our original evolutionary dynamics with \(r=1\), in which there are only two equally strong types, i.e., the resident type and the mutant type, with the initial condition in which the mutant type occupies the \(i\)th node and the resident type occupies all the other \(N-1\) nodes. Then, the fixation probability of the mutant is equal to \(q_{i}\) because this model is equivalent to the previous model if we identify \(A_{i}\) with the mutant type and the other \(N-1\) types with the resident type. Therefore, the fixation probability for the original model with \(r=1\) and the uniform initialization is given by \(\sum_{i=1}^{N}q_{i}/N=1/N\). _Remark_.: The theorem holds true even if we switch among more than two static networks or if the switching intervals, \(\tau\), deterministically change from one switching interval to another. The proof remains unchanged. ### Identifying amplifiers and suppressors We operationally define amplifiers and suppressors as follows; similar definitions were used in the literature [1, 55]. For a given switching or static network, we computed the fixation probability for several values of \(r\). We say that the network is an amplifier if the fixation probability is larger than that for the complete graph with the same number of nodes, or equivalently, the Moran process, at six values of \(r>1\), i.e., \(r\in\{1.1,1.2,1.3,1.4,1.6,1.8\}\), and smaller than that for the Moran process at three values of \(r<1\), i.e., \(r\in\{0.7,0.8,0.9\}\). Note that the fixation probability for the Moran process with \(N\) individuals is given by (see e.g. [2]) \[\rho=\frac{1-\frac{1}{r}}{1-\frac{1}{r^{N}}}. \tag{9}\] Similarly, we say that a network is a suppressor if the fixation probability is smaller than that for the Moran process at the same six values of \(r\) larger than \(1\) and larger than that for the Moran process at the three values of \(r\) smaller than \(1\). It is known that some static networks are neither amplifier nor suppressor [33]. ### Isothermal theorem A network is called isothermal if its fixation probability is the same as that for the Moran process, i.e., if Eq. (9) holds true [1]. A static undirected network, which may be weighted, is isothermal if and only if all the nodes have the same (weighted) degree [56, 57, 1]. One can easily construct isothermal switching networks as follows. **Theorem 2**.: _If \(G_{1}\) and \(G_{2}\) are isothermal networks, then the switching network \((G_{1},G_{2},\tau)\) is an isothermal network._ Proof.: The proof is exactly the same as in the static network case as shown in [1, 2].
We denote by \(p_{m,m-1}\) the probability that the state of the network moves from a state with \(m\) mutants to a state with \(m-1\) mutants in one time step. Similarly, we denote by \(p_{m,m+1}\) the probability that the state moves from one with \(m\) mutants to one with \(m+1\) mutants in one time step. We observe that \(p_{m,m-1}/p_{m,m+1}=1/r\) at every time step \(t\) because the static network at any \(t\), which is either \(G_{1}\) or \(G_{2}\), is isothermal. Therefore, the fixation probability for \((G_{1},G_{2},\tau)\) is given by Eq. (9). ## 4 Fixation probability in various switching networks In this section, we analyze the fixation probability in three types of switching networks, i.e., networks with six nodes, larger switching networks in which \(G_{1}\) and \(G_{2}\) have symmetry (i.e., complete graph, star graph, and bipartite networks), and empirical networks. ### Six-node networks We first analyzed the fixation probability in switching networks that are composed of two undirected and unweighted connected networks with \(6\) nodes. There are \(112\) non-isomorphic undirected connected networks on \(6\) nodes. We switched between any ordered pair of different networks, giving us a total of \(112\times 111=12432\) switching networks. It should be noted that swapping the order of \(G_{1}\) and \(G_{2}\) generally yields different fixation probabilities. We randomly permuted the node labels in \(G_{2}\). We did not consider all possible labelings of nodes because there would be at most \(112\cdot 111\cdot 6!=8951040\) switching networks on \(6\) nodes if we allow shuffling of node labeling, although the symmetry reduces this number. In Fig. 2(a), we show two arbitrarily chosen static networks on six nodes, \(G_{1}\) and \(G_{2}\), which are amplifiers as static networks. In Fig. 2(b), we plot the fixation probability as a function of the fitness of the mutant, \(r\), for the switching network (\(G_{1},G_{2},\tau=1\)), the static networks \(G_{1}\) and \(G_{2}\), the aggregate weighted static network generated from \(G_{1}\) and \(G_{2}\), and the Moran process (i.e., complete graph on six nodes). The aggregate weighted static network is the superposition of \(G_{1}\) and \(G_{2}\) such that the weight of the edge is either \(1\) or \(2\). It is equivalent to the average of \(G_{1}\) and \(G_{2}\) over time. All these static and switching networks yield \(\rho=1/N=1/6\) at \(r=1\), as expected (see Theorem 1). In addition, there exist differences in \(\rho\) between the different networks and the Moran process, although the difference is small. In fact, \(G_{1}\) and \(G_{2}\) are amplifiers, with their fixation probability being larger than that for the Moran process when \(r>1\) and vice versa when \(r<1\), confirming the known result [25, 27]. Figure 2(b) also indicates that the aggregate network is an amplifier. However, the switching network is a suppressor. We reconfirm these results in Fig. 2(c), in which we show the difference in the fixation probability between a given static or switching network and the Moran process. If the difference is negative for \(r<1\) and positive for \(r>1\), then the network is an amplifier. If the difference is positive for \(r<1\) and negative for \(r>1\), then the network is a suppressor. Figure 2(c) shows that \(G_{1}\) is a stronger amplifier than \(G_{2}\), which is a stronger amplifier than the aggregate network.
In contrast, the switching network \((G_{1},G_{2},1)\) is a suppressor, while \((G_{1},G_{2},10)\) and \((G_{1},G_{2},50)\) are amplifiers. The result for \((G_{1},G_{2},50)\) is close to that for static network \(G_{1}\), which is because the evolutionary dynamics on \((G_{1},G_{2},\tau)\) is equivalent to that on \(G_{1}\) in the limit \(\tau\to\infty\). We conclude that a switching network composed of two amplifiers can be a suppressor, in particular when \(\tau\) is small. We emphasize that this counterintuitive result is not due to the property of the aggregate network because the aggregate network, which is the time average of \(G_{1}\) and \(G_{2}\), is also an amplifier. To investigate the generality of this finding for other six-node networks, we calculated the fixation probability for the switching networks derived from all possible pairs of six-node networks. Table 1 shows the number of switching networks on six nodes that are amplifiers, that of suppressors, and that of networks that are neither amplifier nor suppressor, for four values of \(\tau\). The table indicates that a majority of the six-node switching networks investigated are suppressors when \(\tau=1\) and \(\tau=3\). This result is in stark contrast to the fact that there is only \(1\) suppressor among \(112\) six-node static unweighted networks under the birth-death process [25, 27]. Out of the \(111\) static networks that are not suppressors, \(100\) networks are amplifiers, five are isothermal, and the other six networks are neither amplifier, suppressor, nor isothermal [33, 58]. Most switching networks are amplifiers when \(\tau=50\), which is presumably because most static networks are amplifiers and the birth-death process on \((G_{1},G_{2},\tau)\) converges to that on \(G_{1}\) in the limit \(\tau\to\infty\), as we discussed above.

Figure 2: A suppressing switching network composed of two amplifying static networks on six nodes. (a) A switching network composed of six nodes. Both \(G_{1}\) and \(G_{2}\) are amplifiers. (b) Fixation probability in the static and switching networks as a function of \(r\). Moran refers to the Moran process. Note that \(G_{1}\), \(G_{2}\), the aggregate network, and the Moran process represent static networks. (c) Difference between the fixation probability for the given network and that for the Moran process.

### Larger symmetric switching networks In this section, we assume symmetry in \(G_{1}\) and \(G_{2}\) to calculate the fixation probability for larger switching networks. Specifically, we set \(G_{1}\) to be the star graph and \(G_{2}\) to be either the complete graph or complete bipartite graph. #### 4.2.1 Combination of the star graph and the complete graph Consider switching networks in which \(G_{1}\) is the star graph and \(G_{2}\) is the complete graph. For this switching network, we can reduce the dimension of the transition probability matrix from \(2^{N}\times 2^{N}\) to \(2N\times 2N\) by exploiting the symmetry in \(G_{1}\) and \(G_{2}\). Therefore, one can reduce the number of equations from \(2^{N}-2\) to \(2N-2\). Specifically, one can uniquely describe the state of the network by \((i,j)\), where \(i\in\{0,1\}\) and \(j\in\{0,\ldots,N-1\}\). We set \(i=0\) and \(i=1\) when the hub node of \(G_{1}\) is occupied by a resident or mutant, respectively. We set \(j\in\{0,1,\ldots,N-1\}\) to the number of mutants in the other \(N-1\) nodes, which we refer to as the leaves.
Tuple \((i,j)\) is a valid expression of the state of the network because the \(N-1\) leaves are structurally equivalent to each other in both \(G_{1}\) and \(G_{2}\). Tuples \((0,0)\) and \((1,N-1)\) correspond to the fixation of the resident and mutant type, respectively. The transition probability from state \((i,j)\) to state \((i^{\prime},j^{\prime})\) in a single time step of the birth-death process is nonzero if and only if \((i^{\prime},j^{\prime})=(i,j+1)\) and \(i=1\), \((i^{\prime},j^{\prime})=(i,j-1)\) and \(i=0\), \((i^{\prime},j^{\prime})=(1-i,j)\), or \((i^{\prime},j^{\prime})=(i,j)\). Let \(T^{(1)}\) denote the transition probability matrix for the star graph. We obtain \[T^{(1)}_{(i,j)\rightarrow(i^{\prime},j^{\prime})}=\begin{cases}\frac{rj}{C_{1} }&\text{if $i=0$ and $i^{\prime}=1$,}\\ \frac{N-1-j}{C_{2}}&\text{if $i=1$ and $i^{\prime}=0$,}\\ \frac{1}{C_{1}}\cdot\frac{j}{N-1}&\text{if $i^{\prime}=i=0$ and $j^{\prime}=j-1$,}\\ \frac{r}{C_{2}}\cdot\frac{N-1-j}{N-1}&\text{if $i^{\prime}=i=1$ and $j^{\prime}=j+1$,}\\ 1-\sum\limits_{\begin{subarray}{c}(i^{\prime\prime},j^{\prime\prime})\neq\\ (i,j)\end{subarray}}\hskip-14.226378ptT^{(1)}_{(i,j)\rightarrow(i^{\prime \prime},j^{\prime\prime})}&\text{if $(i^{\prime},j^{\prime})=(i,j)$,}\\ 0&\text{otherwise,}\end{cases} \tag{10}\] where \(C_{1}=rj+N-j\) and \(C_{2}=r(j+1)+N-(j+1)\)[1]. The first line of Eq. (10) represents the probability that the type of the hub changes from the resident to mutant. For this event to occur, one of the \(j\) leaf nodes occupied by the mutant must be chosen as parent, which occurs with probability \(rj/\left(rj+N-j\right)\). Because every leaf node is only adjacent to the hub node, the hub node is always selected for death if a leaf node is selected as parent. Therefore, the probability of \(i\) changing from \(0\) to \(1\) is equal to \(rj/\left(rj+N-j\right)\), which is shown in the first line of Eq. (10). As another example, consider state \((1,j)\), in which the hub has a mutant, \(j\) leaf nodes have mutants, and the other \(N-1-j\) leaf nodes have residents. For the state to change from \((1,j)\) to \((1,j+1)\), the hub node must be selected as parent with probability \(r/\left[r\left(j+1\right)+N-(j+1)\right]\), and a leaf node of the resident type must be selected for death, which occurs with probability \((N-1-j)/(N-1)\). The fourth line of Eq. (10) is equal to the product of these two probabilities. One can similarly derive the other lines of Eq. (10). \begin{table} \begin{tabular}{c c c c} \hline \(\tau\) & Amplifier & Suppressor & Neither \\ \hline 1 & 3636 & 8177 & 619 \\ 3 & 5190 & 6347 & 895 \\ 10 & 11102 & 629 & 701 \\ 50 & 12038 & 262 & 132 \\ \hline \hline \end{tabular} \end{table} Table 1: Number of amplifiers and suppressors among \(112\cdot 111=12432\) switching networks on six nodes. 
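The six-node census in Table 1 can be reproduced directly from the general construction of Secs. 3.1 and 3.2. The following sketch (our illustration, not the authors' code) builds the \(2^{N}\times 2^{N}\) matrices of Eqs. (1)-(3) from two adjacency matrices and evaluates Eqs. (7) and (8); the cost is exponential in \(N\), which is adequate for \(N=6\):

```python
import itertools
import numpy as np

def transition_matrix(A, r):
    """Eqs. (1)-(3): birth-death transition matrix on a static network with
    weighted adjacency matrix A (numpy array) and mutant fitness r."""
    N = len(A)
    w = A.sum(axis=1)                        # weighted node degrees
    T = np.zeros((2 ** N, 2 ** N))
    for s in itertools.product((0, 1), repeat=N):
        fs = sum(b << v for v, b in enumerate(s))   # state label f(s)
        m = sum(s)
        F = r * m + (N - m)                  # total fitness of the population
        for v in range(N):                   # parent v ...
            for u in range(N):               # ... replaces neighbor u
                if A[v, u] == 0 or s[u] == s[v]:
                    continue
                fsp = fs ^ (1 << u)          # flip the type of node u
                fit = r if s[v] == 1 else 1.0
                T[fs, fsp] += (fit / F) * (A[v, u] / w[v])
        T[fs, fs] = 1.0 - T[fs].sum()        # Eq. (3): no-change probability
    return T

def rho_switching(A1, A2, r, tau):
    """Eqs. (7)-(8): fixation probability of a uniformly placed single mutant
    on the switching network (G1, G2, tau)."""
    N = len(A1)
    M = np.linalg.matrix_power(transition_matrix(A1, r), tau) @ \
        np.linalg.matrix_power(transition_matrix(A2, r), tau)
    full = 2 ** N - 1                        # all-mutant (absorbing) state
    trans = [k for k in range(2 ** N) if k not in (0, full)]
    x = np.linalg.solve(np.eye(len(trans)) - M[np.ix_(trans, trans)],
                        M[np.ix_(trans, [full])].ravel())
    fix = dict(zip(trans, x))
    return sum(fix[1 << v] for v in range(N)) / N   # Eq. (8)

# Example: a six-node star versus a six-node cycle, for two values of tau.
star = np.zeros((6, 6)); star[0, 1:] = star[1:, 0] = 1.0
cycle = np.zeros((6, 6))
for v in range(6):
    cycle[v, (v + 1) % 6] = cycle[(v + 1) % 6, v] = 1.0
for tau in (1, 10):
    print(tau, rho_switching(star, cycle, r=1.1, tau=tau))
```

At \(r=1\), the same routine returns \(1/N\) for any pair of connected networks, which provides a convenient numerical check of Theorem 1.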
The transition probability matrix for \(G_{2}\), which is the complete graph, is given by \[T_{(i,j)\rightarrow(i^{\prime},j^{\prime})}^{(2)}=\begin{cases}\frac{rj}{C_{1}}\cdot\frac{1}{N-1}&\text{if $i=0$ and $i^{\prime}=1$},\\ \frac{N-1-j}{C_{2}}\cdot\frac{1}{N-1}&\text{if $i=1$ and $i^{\prime}=0$},\\ \frac{N-j}{C_{1}}\cdot\frac{j}{N-1}&\text{if $i^{\prime}=i=0$ and $j^{\prime}=j-1$},\\ \frac{rj}{C_{1}}\cdot\frac{N-1-j}{N-1}&\text{if $i^{\prime}=i=0$ and $j^{\prime}=j+1$},\\ \frac{N-1-j}{C_{2}}\cdot\frac{j}{N-1}&\text{if $i^{\prime}=i=1$ and $j^{\prime}=j-1$},\\ \frac{r(j+1)}{C_{2}}\cdot\frac{N-1-j}{N-1}&\text{if $i^{\prime}=i=1$ and $j^{\prime}=j+1$},\\ 1-\sum\limits_{\begin{subarray}{c}(i^{\prime\prime},j^{\prime\prime})\neq\\ (i,j)\end{subarray}}T^{(2)}_{(i,j)\rightarrow(i^{\prime\prime},j^{\prime\prime})}&\text{if $(i^{\prime},j^{\prime})=(i,j)$},\\ 0&\text{otherwise}.\end{cases} \tag{11}\] For example, for the transition from state \((0,j)\) to \((1,j)\) to occur, one of the \(j\) mutant leaf nodes must be first selected as parent, which occurs with probability \(rj/\left(rj+N-j\right)\). Then, the hub node must be selected for death, which occurs with probability \(1/\left(N-1\right)\). The first line of Eq. (11) is equal to the product of these two probabilities. As another example, for the state to change from \((1,j)\) to \((1,j+1)\), one of the mutant nodes, which may be the hub or a leaf, must be first selected as parent, which occurs with probability \(r\left(j+1\right)/\left[r\left(j+1\right)+N-(j+1)\right]\). Then, a leaf node of the resident type must be selected for death, which occurs with probability \(\left(N-1-j\right)/\left(N-1\right)\). The right-hand side on the sixth line of Eq. (11) is equal to the product of these two probabilities. One can similarly derive the other lines of Eq. (11). It should be noted that single-step moves from \((1,j)\) to \((1,j-1)\) and those from \((0,j)\) to \((0,j+1)\) are possible in \(G_{2}\), whereas they do not occur in \(G_{1}\). In Fig. 3(a), we plot the fixation probability as a function of \(r\) for switching network \((G_{1},G_{2},\tau)\) in which \(G_{1}\) is the star graph and \(G_{2}\) is the complete graph on four nodes. In this figure, we compare \((G_{1},G_{2},\tau)\) with \(\tau=1\), \(10\), and \(50\), the static star graph, the aggregate network, and the Moran process. Figure 3(a) indicates that \((G_{1},G_{2},10)\) and \((G_{1},G_{2},50)\) are amplifiers and that \((G_{1},G_{2},1)\) is a suppressor. We plot the difference in the fixation probability between the switching networks and the Moran process in Fig. 3(b). When \(\tau=1\), the difference is positive for \(r<1\) and negative for \(r>1\), which verifies that \((G_{1},G_{2},1)\) is a suppressor. This result is surprising because \(G_{1}\) is an amplifier and \(G_{2}\) is equivalent to the Moran process and therefore not a suppressor. In contrast, when \(\tau=10\) and \(\tau=50\), the difference from the Moran process is negative for \(r<1\) and positive for \(r>1\), which verifies that \((G_{1},G_{2},10)\) and \((G_{1},G_{2},50)\) are amplifiers. The result for \(\tau=50\) is close to that for the star graph. This is presumably because the first \(\tau=50\) steps with \(G_{1}\) are sufficient to induce fixation with a high probability given the small network size (i.e., \(N=4\)). Figures 3(a) and 3(b) also indicate that the aggregate network is a weak suppressor. However, the aggregate network is a considerably weaker suppressor than \((G_{1},G_{2},1)\).
We show in Figs. 3(c) and 3(d) the fixation probability and its difference from the case of the Moran process, respectively, as a function of \(r\) for \(N=50\). We observe that the switching network is an amplifier for all the values of \(\tau\) that we considered, i.e., \(\tau=1\), \(10\), and \(50\). In contrast, the aggregate network is a suppressor, albeit an extremely weak one. The amplifying effect of the switching network is stronger for a larger value of \(\tau\). Unlike in the case of four nodes (see Figs. 3(a) and 3(b)), the switching networks with 50 nodes are far less amplifying than the star graph even with \(\tau=50\). This phenomenon is expected because fixation in a static network with 50 nodes usually needs much more than 50 steps. These results for the switching networks with \(N=4\) and \(N=50\) nodes remain similar for \((G_{2},G_{1},\tau)\), i.e., when we swap the order of \(G_{1}\) and \(G_{2}\) (see Figs. A1(a) and A1(b)). The present switching network is a suppressor when \(N=4\) and \(\tau=1\) and an amplifier when \(N=50\) or \(\tau\in\{10,50\}\). To examine the generality of these results with respect to the number of nodes, \(N\), we show in Figs. 3(e) and 3(f) the fixation probability relative to that for the Moran process at \(\tau=1\) and \(\tau=50\), respectively, as a function of \(N\). In both figures, we show the fixation probabilities at \(r=0.9\) and \(r=1.1\). Figure 3(e) indicates that the switching network is a suppressor for \(N\leq 4\) and an amplifier for \(N\geq 5\) when \(\tau=1\). We have confirmed that this switching network with \(N=3\) nodes is a suppressor by calculating the fixation probability across a range of \(r\) values (see Fig. A2(a)). Figure 3(f) indicates that \((G_{1},G_{2},50)\) is an amplifier for any \(N\).

#### 4.2.2 Combination of the star graph and the complete bipartite graph

In this section, we analyze the switching network in which \(G_{1}\) is the star graph and \(G_{2}\) is the complete bipartite graph \(K_{N_{1},N_{2}}\). By definition, \(K_{N_{1},N_{2}}\) has two disjoint subsets of nodes, \(V_{1}\) and \(V_{2}\), containing \(N_{1}\) and \(N_{2}\) nodes, respectively. Every node in \(V_{1}\) is adjacent to every node in \(V_{2}\) by an edge, and vice versa. Without loss of generality, we assume that the hub node in \(G_{1}\) is one of the \(N_{1}\) nodes in \(V_{1}\). Because of the symmetry, we do not need to distinguish among the \(N_{1}-1\) nodes that are leaf nodes in \(G_{1}\) and belong to \(V_{1}\) in \(G_{2}\), or among the \(N_{2}\) nodes that belong to \(V_{2}\) in \(G_{2}\). Therefore, one can specify the state of this switching network by a tuple \((i,j,k)\), where \(i\in\{0,1\}\) represents whether the hub is occupied by a resident (\(i=0\)) or a mutant (\(i=1\)); variable \(j\in\{0,\ldots,N_{1}-1\}\) represents the number of mutants among the \(N_{1}-1\) nodes that are leaves in \(G_{1}\) and belong to \(V_{1}\) in \(G_{2}\); variable \(k\in\{0,\ldots,N_{2}\}\) represents the number of mutants among the \(N_{2}\) nodes in \(V_{2}\). Tuples \((0,0,0)\) and \((1,N_{1}-1,N_{2})\) correspond to the fixation of the resident and mutant type, respectively.
Figure 3: Fixation probability for switching networks in which \(G_{1}\) is the star graph and \(G_{2}\) is the complete graph. (a) Fixation probability for \(N=4\). (b) Difference in the fixation probability from the case of the Moran process for \(N=4\). (c) Fixation probability for \(N=50\). (d) Difference in the fixation probability from the case of the Moran process for \(N=50\). In (a)–(d), we also show the results for \(G_{1}\) (i.e., the star graph) and the aggregate network, and the vertical lines at \(r=1\) are a guide to the eyes. The insets magnify selected ranges of \(r<1\). (e) and (f): Difference in the fixation probability for the switching network relative to the Moran process as a function of \(N\) at \(r=0.9\) and \(1.1\). We set \(\tau=1\) in (e) and \(\tau=50\) in (f). In (e) and (f), the smallest value of \(N\) is three.

Using this representation of the states, we reduce the \(2^{N}\times 2^{N}\) transition probability matrix to a \(2N_{1}\left(N_{2}+1\right)\times 2N_{1}\left(N_{2}+1\right)\) transition probability matrix. The transition probability matrix for the star graph is given by

\[T^{(1)}_{(i,j,k)\rightarrow(i^{\prime},j^{\prime},k^{\prime})}=\begin{cases}\frac{r(j+k)}{C_{3}}&\text{if $i=0$ and $i^{\prime}=1$},\\ \frac{N-1-j-k}{C_{4}}&\text{if $i=1$ and $i^{\prime}=0$},\\ \frac{1}{C_{3}}\cdot\frac{j}{N-1}&\text{if $i^{\prime}=i=0$ and $j^{\prime}=j-1$},\\ \frac{1}{C_{3}}\cdot\frac{k}{N-1}&\text{if $i^{\prime}=i=0$ and $k^{\prime}=k-1$},\\ \frac{r}{C_{4}}\cdot\frac{N_{1}-1-j}{N-1}&\text{if $i^{\prime}=i=1$ and $j^{\prime}=j+1$},\\ \frac{r}{C_{4}}\cdot\frac{N_{2}-k}{N-1}&\text{if $i^{\prime}=i=1$ and $k^{\prime}=k+1$},\\ 1-\sum\limits_{\begin{subarray}{c}(i^{\prime\prime},j^{\prime\prime},k^{\prime\prime})\neq\\ (i,j,k)\end{subarray}}T^{(1)}_{(i,j,k)\rightarrow(i^{\prime\prime},j^{\prime\prime},k^{\prime\prime})}&\text{if $(i^{\prime},j^{\prime},k^{\prime})=(i,j,k)$},\\ 0&\text{otherwise},\end{cases} \tag{12}\]

where \(C_{3}=r(j+k)+(N-j-k)\) and \(C_{4}=r\left(j+k+1\right)+(N-j-k-1)\). The first line of Eq. (12) represents the probability that the type of the hub changes from the resident to mutant. For this event to occur, one of the \(j+k\) leaf nodes occupied by the mutant must be chosen as parent, which occurs with probability \(r(j+k)/C_{3}\). Then, because any leaf node is only adjacent to the hub node, the hub node is always selected for death. Therefore, the probability of \(i\) changing from \(0\) to \(1\) is equal to \(r(j+k)/C_{3}\). As another example, consider state \((1,j,k)\). For the state to change from \((1,j,k)\) to \((1,j+1,k)\), the hub node, which the mutant type currently inhabits, must be selected as parent with probability \(r/C_{4}\). Then, one of the \(N_{1}-1-j\) leaf nodes of the resident type in \(V_{1}\) must be selected for death, which occurs with probability \(\left(N_{1}-1-j\right)/\left(N-1\right)\). The fifth line of Eq. (12) is equal to the product of these two probabilities. One can similarly derive the other lines of Eq. (12).
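As a sanity check on Eq. (12), the following short sketch (the function names are ours) enumerates the reduced \((i,j,k)\) state space and verifies that each row of \(T^{(1)}\) is a probability distribution:

```python
import numpy as np

def state_index(i, j, k, N1, N2):
    """Bijection (i, j, k) -> {0, ..., 2 * N1 * (N2 + 1) - 1}:
    i in {0, 1}, j in {0, ..., N1 - 1}, k in {0, ..., N2}."""
    return (i * N1 + j) * (N2 + 1) + k

def star_T_ijk(N1, N2, r):
    """T^(1) of Eq. (12): star-graph birth-death dynamics on the reduced states."""
    N, S = N1 + N2, 2 * N1 * (N2 + 1)
    T = np.zeros((S, S))
    for i in (0, 1):
        for j in range(N1):
            for k in range(N2 + 1):
                s = state_index(i, j, k, N1, N2)
                C3 = r * (j + k) + N - j - k
                C4 = r * (j + k + 1) + N - j - k - 1
                if i == 0:
                    T[s, state_index(1, j, k, N1, N2)] = r * (j + k) / C3
                    if j > 0:
                        T[s, state_index(0, j - 1, k, N1, N2)] = (1.0 / C3) * j / (N - 1)
                    if k > 0:
                        T[s, state_index(0, j, k - 1, N1, N2)] = (1.0 / C3) * k / (N - 1)
                else:
                    T[s, state_index(0, j, k, N1, N2)] = (N - 1 - j - k) / C4
                    if j < N1 - 1:
                        T[s, state_index(1, j + 1, k, N1, N2)] = (r / C4) * (N1 - 1 - j) / (N - 1)
                    if k < N2:
                        T[s, state_index(1, j, k + 1, N1, N2)] = (r / C4) * (N2 - k) / (N - 1)
    np.fill_diagonal(T, 1.0 - T.sum(axis=1))   # self-loops absorb the rest
    return T

T1 = star_T_ijk(N1=2, N2=2, r=1.1)
assert np.allclose(T1.sum(axis=1), 1.0)   # every row sums to one
```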
The transition probability matrix for the complete bipartite graph is given by

\[T^{(2)}_{(i,j,k)\rightarrow(i^{\prime},j^{\prime},k^{\prime})}=\begin{cases}\frac{rk}{C_{3}}\cdot\frac{1}{N_{1}}&\text{if $i=0$ and $i^{\prime}=1$},\\ \frac{N_{2}-k}{C_{4}}\cdot\frac{1}{N_{1}}&\text{if $i=1$ and $i^{\prime}=0$},\\ \frac{N_{2}-k}{C_{3}}\cdot\frac{j}{N_{1}}&\text{if $i^{\prime}=i=0$ and $j^{\prime}=j-1$},\\ \frac{rk}{C_{3}}\cdot\frac{N_{1}-1-j}{N_{1}}&\text{if $i^{\prime}=i=0$ and $j^{\prime}=j+1$},\\ \frac{N_{1}-j}{C_{3}}\cdot\frac{k}{N_{2}}&\text{if $i^{\prime}=i=0$ and $k^{\prime}=k-1$},\\ \frac{rj}{C_{3}}\cdot\frac{N_{2}-k}{N_{2}}&\text{if $i^{\prime}=i=0$ and $k^{\prime}=k+1$},\\ \frac{N_{2}-k}{C_{4}}\cdot\frac{j}{N_{1}}&\text{if $i^{\prime}=i=1$ and $j^{\prime}=j-1$},\\ \frac{rk}{C_{4}}\cdot\frac{N_{1}-1-j}{N_{1}}&\text{if $i^{\prime}=i=1$ and $j^{\prime}=j+1$},\\ \frac{N_{1}-1-j}{C_{4}}\cdot\frac{k}{N_{2}}&\text{if $i^{\prime}=i=1$ and $k^{\prime}=k-1$},\\ \frac{r(j+1)}{C_{4}}\cdot\frac{N_{2}-k}{N_{2}}&\text{if $i^{\prime}=i=1$ and $k^{\prime}=k+1$},\\ 1-\sum\limits_{\begin{subarray}{c}(i^{\prime\prime},j^{\prime\prime},k^{\prime\prime})\neq\\ (i,j,k)\end{subarray}}T^{(2)}_{(i,j,k)\rightarrow(i^{\prime\prime},j^{\prime\prime},k^{\prime\prime})}&\text{if $(i^{\prime},j^{\prime},k^{\prime})=(i,j,k)$},\\ 0&\text{otherwise}.\end{cases} \tag{13}\]

The first line of Eq. (13) represents the probability that the type of the hub changes from the resident to mutant. For this event to occur, one of the \(k\) mutant nodes in \(V_{2}\) must be selected as parent with probability \(rk/C_{3}\). Then, the hub node must be selected for death with probability \(1/N_{1}\) because each node in \(V_{2}\) is only adjacent to all the \(N_{1}\) nodes in \(V_{1}\). Therefore, the probability of \(i\) changing from \(0\) to \(1\) is equal to \((rk/C_{3})\cdot(1/N_{1})\). As another example, consider state \((1,j,k)\), in which there are \(j+k+1\) mutants in total. For the state to change from \((1,j,k)\) to \((1,j+1,k)\), one of the \(k\) mutant nodes in \(V_{2}\) must first be selected as parent with probability \(rk/C_{4}\). Then, one of the \(N_{1}-1-j\) leaf nodes in \(V_{1}\) of the resident type must be selected for death, which occurs with probability \((N_{1}-1-j)/N_{1}\). The eighth line of Eq. (13) is equal to the product of these two probabilities. One can similarly derive the other lines of Eq. (13).

In Figs. 4(a) and 4(b), we show the fixation probability and its difference from the case of the Moran process, respectively, for the switching network in which \(G_{1}\) is the star on \(N=4\) nodes and \(G_{2}\) is the complete bipartite graph \(K_{N_{1},N_{2}}\) with \(N_{1}=N_{2}=2\). We set \(\tau=1,10\), and \(50\), and vary \(r\). We also show the results for \(G_{1}\), \(G_{2}\), and the aggregate network in these figures for comparison. We find that \((G_{1},G_{2},1)\) is a suppressor. In contrast, \(G_{1}\) is an amplifier, and \(G_{2}\) is neutral (i.e., equivalent to the Moran process). In fact, no static unweighted network with five nodes or fewer is a suppressor [27]. Because the aggregate network is an amplifier, albeit a weak one, the suppressing effect of \((G_{1},G_{2},1)\) owes to the time-varying nature of the switching network. Similar to the case in which \(G_{2}\) is the complete graph shown in Fig. 3, \((G_{1},G_{2},10)\) and \((G_{1},G_{2},50)\) are amplifiers, and the behavior of \((G_{1},G_{2},50)\) is close to that for \(G_{1}\), i.e., the star graph. In Figs. 4(c) and 4(d), we show the fixation probability and its difference from the case of the Moran process, respectively, for \(N_{1}=N_{2}=20\).
In contrast to the case of \(N_{1}=N_{2}=2\), the switching network with \(N_{1}=N_{2}=20\) is an amplifier for the three values of \(\tau\). Furthermore, in contrast to when \(N_{1}=N_{2}=2\), the fixation probabilities for the switching networks are closer to those for the Moran process than to those for the star graph. These results for the switching networks with \(N=4\) and \(N=40\) nodes remain similar for switching networks \((G_{2},G_{1},\tau)\), as we show in Figs. A1(c) and A1(d). To examine the dependence of the fixation probability on the number of nodes, we show in Fig. 4(e) the difference between the fixation probability for the present switching network and that for the Moran process as we vary \(N\). We set \(\tau=1\) and \(N_{1}=N_{2}=N/2\geq 2\), and compute the fixation probability at \(r=0.9\) and \(r=1.1\). Figure 4(e) indicates that the switching network is a suppressor only when \(N_{1}=N_{2}=2\) (i.e., \(N=4\)) and an amplifier for any larger \(N\). When we allow \(N_{1}\neq N_{2}\), we found just one additional suppressor apart from \((N_{1},N_{2})=(2,2)\) under the constraints \(\tau=1\) and \(2\leq N_{1},N_{2}\leq 10\), which is \((N_{1},N_{2})=(3,2)\) (see Fig. A2(b)). With \(\tau=50\), this switching network is an amplifier for any \(N\) (see Fig. 4(f)).

### Empirical temporal networks

#### 4.3.1 Construction of switching networks

Finally, we numerically simulate the birth-death process on four switching networks informed by empirical temporal network data. We split each of the temporal network data sets into two static networks \((V_{1},E_{1})\) and \((V_{2},E_{2})\), where \((V_{1},E_{1})\) contains the first half of the time-stamped edges in terms of the time, \((V_{2},E_{2})\) contains the second half of the time-stamped edges, \(V_{1}\) and \(V_{2}\) are sets of nodes, and \(E_{1}\) and \(E_{2}\) are sets of edges. For simplicity, we regard \((V_{1},E_{1})\) and \((V_{2},E_{2})\) as unweighted networks. For two of the four empirical switching networks, both \(V_{1}\) and \(V_{2}\) contain all nodes. In this case, we switch between \(G_{1}\equiv(V_{1},E_{1})\) and \(G_{2}\equiv(V_{2},E_{2})\). For the other two empirical switching networks, either \(V_{1}\) or \(V_{2}\) misses some nodes in the original temporal network. In this case, we construct switching networks in the following two manners (see the sketch below). With the first method, we only use the nodes in \(V_{1}\cap V_{2}\) and the edges that exist between pairs of nodes in \(V_{1}\cap V_{2}\) as \(G_{1}\) and \(G_{2}\). For each of the two empirical data sets for which \(V_{1}\) or \(V_{2}\) misses some nodes, we have confirmed that the first and second halves of the static networks induced on \(V_{1}\cap V_{2}\) created with this method are connected networks. With the second method, we use all nodes for both \(G_{1}\) and \(G_{2}\). In other words, we set \(G_{1}=(V_{1}\cup V_{2},E_{1})\) and \(G_{2}=(V_{1}\cup V_{2},E_{2})\). Therefore, if \(v\in V_{1}\) and \(v\notin V_{2}\), for example, then \(v\) is an isolated node in \(G_{2}\). Except with special initial conditions, the fixation of either type never occurs in a static network with isolated nodes. However, fixation does occur in the switching network if the aggregate network is connected, which we have confirmed to be the case for all our empirical data sets.

Figure 4: Fixation probability for switching networks in which \(G_{1}\) is the star graph and \(G_{2}\) is the complete bipartite graph. (a) Fixation probability for \(N_{1}=N_{2}=2\). (b) Difference in the fixation probability from the case of the Moran process for \(N_{1}=N_{2}=2\). (c) Fixation probability for \(N_{1}=N_{2}=20\). (d) Difference in the fixation probability from the case of the Moran process for \(N_{1}=N_{2}=20\). (e) and (f): Difference in the fixation probability for the switching network relative to the Moran process as a function of \(N\) at \(r=0.9\) and \(1.1\). We set \(\tau=1\) in (e) and \(\tau=50\) in (f). In (e) and (f), the smallest value of \(N\) is four.
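A minimal sketch of the two construction methods and of one birth-death run on the resulting switching network (plain Python; the function names are ours, and skipping the replacement step when the selected parent is isolated in the current graph is our assumption, since the text does not specify this case):

```python
import random

def split_temporal_network(events, method="first"):
    """Split time-stamped edges (t, u, v) into two static graphs.
    method='first': keep only nodes in V1 & V2 and edges within it;
    method='second': keep V1 | V2, so nodes absent from one half are isolated."""
    events = sorted(events)
    half = len(events) // 2
    E1 = {frozenset((u, v)) for _, u, v in events[:half]}
    E2 = {frozenset((u, v)) for _, u, v in events[half:]}
    V1 = {n for e in E1 for n in e}
    V2 = {n for e in E2 for n in e}
    if method == "first":
        keep = V1 & V2
        return (keep, {e for e in E1 if e <= keep}), (keep, {e for e in E2 if e <= keep})
    return (V1 | V2, E1), (V1 | V2, E2)

def run_birth_death(adj1, adj2, tau, r, rng=random):
    """One uniformly initialized run of the birth-death process on the
    switching network; adj1/adj2 map each node to its neighbor list on a
    common node set. Returns True iff the mutant fixates."""
    nodes = list(adj1)
    mutants = {rng.choice(nodes)}          # one initial mutant, uniformly random
    step = 0
    while 0 < len(mutants) < len(nodes):
        adj = adj1 if (step // tau) % 2 == 0 else adj2
        weights = [r if v in mutants else 1.0 for v in nodes]
        parent = rng.choices(nodes, weights=weights)[0]
        if adj[parent]:                    # a parent isolated in the current
            child = rng.choice(adj[parent])  # graph replaces no neighbor
            (mutants.add if parent in mutants else mutants.discard)(child)
        step += 1
    return len(mutants) == len(nodes)

# fixation probability ~ fraction of fixating runs, e.g., 2e5 runs per (network, r)
```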
#### 4.3.2 Simulation procedure

As the initial condition, we place a mutant on one node selected uniformly at random, and all the other \(N-1\) nodes are of the resident type. Then, we run the birth-death process until all nodes are of the same type. We carried out \(2\times 10^{5}\) such runs in parallel on \(56\) cores, giving us a total of \(112\times 10^{5}\) runs, for each network and each value of \(r\). We numerically calculated the fixation probability as the fraction of runs in which the mutant fixates. We simulated the switching networks with \(\tau\in\{1,10,50\}\) and \(r\in\{0.7,0.8,0.9,1,1.1,1.2,1.3,1.4,1.5,1.6,1.7\}\) for all the networks except the hospital network of \(75\) nodes. For the hospital network, we omitted \(r=1.6\) and \(1.7\) due to the high computational cost.

#### 4.3.3 Data

The ant colony data, which we abbreviate as ant [59], has \(39\) nodes and \(330\) time-stamped edges. Each node represents an ant in a colony. An edge represents a trophallaxis event, which was recorded when two ants were engaged in mandible-to-mandible contact for longer than one second. The first and second halves of the data have \(34\) nodes each.

The second data set comprises contacts between members of five households in the Matsangoni sub-location within the Kilifi Health and Demographic Surveillance Site (KHDSS) in coastal Kenya [60]. A household was defined as the group of individuals who ate from the same kitchen [60]. Each participant in the study had a wearable sensor that detected the presence of another sensor within approximately \(1.5\) meters. Each node is an individual in a household. An edge represents a time-stamped contact between two individuals. There were \(47\) nodes, with \(219\) time-stamped edges representing contacts between pairs of individuals in different households and \(32,426\) time-stamped edges between individuals of the same household. Both the first and second halves contain all the \(47\) nodes and are connected as static networks owing to the relatively large number of time-stamped edges.

The third data set is a mammalian temporal network based on interactions between raccoons [61]. A node represents a wild raccoon. The time-stamped events were recorded whenever two raccoons came within approximately \(1\) to \(1.5\) meters for more than one second, using proximity logging collars that were placed on the raccoons. The recording was made in Ned Brown Forest Preserve in suburban Cook County, Illinois, USA, from July 2004 to July 2005. There are \(24\) nodes and \(2,000\) time-stamped edges. Both the first and second halves of the data contain all the \(24\) nodes and are connected as static networks.

The fourth data set is a contact network in a hospital [62]. The data were recorded in a geriatric unit of a university hospital in Lyon, France, from December 6, 2010 at \(1\) pm to December 10, 2010 at \(2\) pm. The unit contained \(19\) of the \(1,000\) beds in the hospital.
During the recording period, \(50\) professionals worked in the unit, and \(31\) patients were admitted. Forty-six of the \(50\) professionals and \(29\) of the \(31\) patients participated in the study. Therefore, the network had \(75\) nodes in total. The professionals comprised \(27\) nurses or nurses' aides, \(11\) medical doctors, and \(8\) administrative staff members. An edge represents a time-stamped contact between two individuals; there are \(32,424\) time-stamped edges. The first and second halves of the data contain \(50\) nodes each. We obtained the ant, raccoon, and hospital data from [https://networkrepository.com/](https://networkrepository.com/) [63]. We obtained the Kilifi data from [https://www.sociopatterns.org/](https://www.sociopatterns.org/).

#### 4.3.4 Numerical results

We investigate the fixation probability on the switching networks with \(\tau=1\), \(10\), and \(50\), on the static networks \(G_{1}\) and \(G_{2}\), and on the aggregate network. We recall that the aggregate network is a static weighted network, whereas \(G_{1}\) and \(G_{2}\) are unweighted networks. For the ant and hospital data, the switching networks constructed with the second method are different from those constructed with the first method. For these two data sets, fixation does not occur on \(G_{1}\) and \(G_{2}\) because they miss some nodes. Therefore, we do not analyze the fixation probability on \(G_{1}\) and \(G_{2}\) for these data sets.

We show in Figs. 5(a) and 5(b) the fixation probability on the ant switching networks constructed with the first and second methods, respectively. Because we are interested in whether the switching networks are amplifiers or suppressors, we only show the difference between the fixation probability on the given network and that for the Moran process in Fig. 5. Figure 5(a) indicates that the switching networks are amplifiers but less amplifying than each of their constituent static networks, \(G_{1}\) and \(G_{2}\). Another observation is that the fixation probability on the static aggregate network is close to that on the switching networks. In this sense, the switching networks do not yield surprising results. The switching networks are more strongly amplifying when \(\tau\) is larger. Moreover, the fixation probability on the switching network is closer to that on \(G_{1}\) when \(\tau\) is larger. This result is expected because the evolutionary dynamics are the same on the switching network and on \(G_{1}\) in the first \(\tau\) time steps. For the switching networks constructed with the second method, Fig. 5(b) shows that the switching networks are amplifiers and more amplifying than the static aggregate network. This result is qualitatively different from that for the switching networks constructed with the first method shown in Fig. 5(a).

We show the results for the Kilifi networks in Fig. 5(c). Because the first and second methods yield the same \(G_{1}\) and \(G_{2}\) for the Kilifi data, we only present the results for the first method for this data set and also for the next one (i.e., the raccoon networks). The figure indicates that the switching networks are amplifiers but less amplifying than \(G_{1}\) and \(G_{2}\), and similarly amplifying compared to the aggregate network. These results are similar to those for the ant networks shown in Fig. 5(a). We show the results for the raccoon networks in Fig. 5(d).
We find that the switching networks are amplifiers but less amplifying than \(G_{1}\) and \(G_{2}\), similar to the case of the ant and Kilifi networks. We also find that the switching networks are more amplifying than the aggregate network. We show the results for the hospital switching networks in Figs. 5(e) and 5(f). The results for the switching networks constructed with the first method (see Fig. 5(e)) are similar to those for the raccoon networks shown in Fig. 5(d). The switching networks constructed with the second method (see Fig. 5(f)) are more amplifying than the aggregate network, similar to the case of the ant networks generated by the same method (see Fig. 5(b)). In sum, for these empirical temporal networks, we did not observe the surprising behavior in which the fixation probability for a switching network fails to interpolate between those for its two constituent static networks \(G_{1}\) and \(G_{2}\). However, the fixation probability for the empirical switching networks depends on the \(\tau\) value and deviates from the prediction from the aggregate network in multiple ways.

Figure 5: Fixation probability on empirical switching networks. In each panel, we show the difference in the fixation probability from the case of the Moran process as a function of \(r\). (a) Ant networks constructed with the first method. (b) Ant networks constructed with the second method. (c) Kilifi switching networks. (d) Raccoon networks. (e) Hospital networks constructed with the first method. (f) Hospital networks constructed with the second method. We compare the fixation probability on switching networks with \(\tau\in\{1,10,50\}\), \(G_{1}\), \(G_{2}\), and the aggregate network in each panel.

## 5 Discussion

We have shown that, under the birth-death updating rule and uniform initialization, a majority of the switching networks on six nodes are suppressors of natural selection. This result contrasts with the case of static networks, for which there exists only one suppressor on six nodes [27]. We also found that switching networks alternating between the star graph and the complete graph and those alternating between the star graph and the complete bipartite graph are suppressors when the number of nodes, \(N\), is small. When \(N\) is larger, the same switching networks are amplifiers but less amplifying than the star graph. Among the empirical networks that we analyzed, we did not find any suppressors. However, these switching networks were notably less amplifying than the constituent static networks \(G_{1}\) and \(G_{2}\). In fact, the less amplifying nature of switching networks is largely explained by the aggregate weighted network, i.e., the static network obtained by the superposition of \(G_{1}\) and \(G_{2}\). Therefore, our results for the empirical switching networks are not surprising.

The result that a switching network composed of two amplifying static networks can be a suppressor is our main finding. Because all the instances that we have found are small networks, searching for suppressing switching networks with larger \(N\), including systematically constructing such instances, remains future work. We considered exogenous changes of the network over time in this study. Another research direction is to assume that the change of the network structure over time is driven by the state of the system, which is referred to as adaptive networks [64, 65].
The recent modeling framework inspired by biological examples in which the residents and mutants use different static networks defined on the same node set [66, 67] can be interpreted as an example of fixation dynamics on adaptive networks. Allowing nodes to stochastically sever and create their edges as their type flips between resident and mutant may lead to new phenomena in fixation dynamics. Such models have been extensively studied for evolutionary games on dynamic networks [17, 18, 19, 20, 21, 22, 23]. We recently found that most hypergraphs are suppressors under the combination of a birth-death process and uniform initialization, which are the conditions under which most conventional networks are amplifiers [54]. It has been known for longer that most undirected networks are suppressors under the death-birth process [25] and that most directed networks are suppressors under various imitation rules, including birth-death processes [68]. The degree of amplification and suppression also depends on the initialization [24, 31]. For example, non-uniform initializations can make the star, which is a strong amplifier under the birth-death process and uniform initialization, a suppressor [24]. Furthermore, it has been shown that the amplifiers are transient and bounded [69]. Our results suggest that small temporal networks are another major case in which suppressors are common. These results altogether encourage us to explore different variants of network models and evolutionary processes to clarify how common amplifiers are. This task warrants future research.

## Funding

N.M. acknowledges support from AFOSR European Office (under Grant No. FA9550-19-1-7024), the Japan Science and Technology Agency (JST) Moonshot R&D (under Grant No. JPMJMS2021), and the National Science Foundation (under Grant Nos. 2052720 and 2204936).

## Appendices

### A1. Switching networks in which the second network is the star graph

In this section, we consider switching networks \((G_{1},G_{2},\tau)\) in which \(G_{1}\) is the complete graph and \(G_{2}\) is the star graph. We show the difference in the fixation probability from the case of the Moran process for the switching networks with \(N=4\) and \(N=50\) in Figs. A1(a) and A1(b), respectively. With \(N=4\), we find that \((G_{1},G_{2},10)\) and \((G_{1},G_{2},50)\) are amplifiers and that \((G_{1},G_{2},1)\) is a suppressor (see Fig. A1(a)). The aggregate network is a weak suppressor. With \(N=50\), we find that \((G_{1},G_{2},\tau)\) for all the three \(\tau\) values (i.e., \(\tau\in\{1,10,50\}\)) are amplifiers and that the aggregate network is a weak suppressor (see Fig. A1(b)). These results are qualitatively the same as those for the switching networks in which the order of \(G_{1}\) and \(G_{2}\) is the opposite, shown in Fig. 3. A main difference is that, when \(\tau=50\), the fixation probability is reasonably close to that for the Moran process in the case of the present switching network because \(G_{1}\) is a regular graph and therefore equivalent to the Moran process. In contrast, in Fig. 3, the switching network is much more amplifying because \(G_{1}\) is the star graph, which is a strong amplifier. We show in Figs. A1(c) and A1(d) the results for the switching networks with \(N=4\) and \(N=40\), respectively, in which \(G_{1}\) is the complete bipartite graph and \(G_{2}\) is the star graph.
With \(N=4\), we find that \((G_{1},G_{2},1)\) is a suppressor, \((G_{1},G_{2},10)\) and \((G_{1},G_{2},50)\) are amplifiers, and the aggregate network is a weak amplifier (see Fig. A1(c)). With \(N=40\), we find that \((G_{1},G_{2},\tau)\) with \(\tau\in\{1,10,50\}\) is an amplifier and that the aggregate network is a weak amplifier (see Fig. A1(d)). These results are similar to those for the switching networks in which the order of \(G_{1}\) and \(G_{2}\) is the opposite, shown in Figs. 4(a) and 4(b). Similar to Figs. A1(a) and A1(b), with \(\tau=50\), the present switching networks are close in behavior to the Moran process because \(G_{1}\) is a regular network. This result contrasts with the corresponding result for the order-swapped switching network with \(\tau=50\), which is a relatively strong amplifier because \(G_{1}\) is the star graph (see Figs. 4(a) and 4(b)).

### A2. Further examples of small suppressing switching networks in which \(G_{1}\) is the star graph

In Fig. A2(a), we show the difference in the fixation probability from the case of the Moran process for the switching networks in which \(G_{1}\) is the star graph and \(G_{2}\) is the complete graph on \(N=3\) nodes. We also plot the results for \(G_{1}\), \(G_{2}\), and the aggregate network. It is known that \(G_{1}\) is an amplifier [1] and that \(G_{2}\) is equivalent to the Moran process. In contrast, the switching network with \(\tau=1\) and the aggregate network are suppressors. The aggregate network is much less suppressing than the switching network. The switching networks with \(\tau\in\{10,50\}\) are amplifiers. In Fig. A2(b), we show the results for the switching networks in which \(G_{1}\) is the star graph and \(G_{2}\) is the complete bipartite graph \(K_{3,2}\) on \(N=5\) nodes. Note that both \(G_{1}\) (i.e., the star) [1] and \(G_{2}\) (i.e., the complete bipartite graph \(K_{3,2}\)) [70] are amplifiers. In contrast, as in Fig. A2(a), the switching network with \(\tau=1\) (but not with \(\tau\in\{10,50\}\)) and the aggregate network are suppressors, and the aggregate network is only weakly suppressing.
2309.11081
Dense 2D-3D Indoor Prediction with Sound via Aligned Cross-Modal Distillation
Sound can convey significant information for spatial reasoning in our daily lives. To endow deep networks with such ability, we address the challenge of dense indoor prediction with sound in both 2D and 3D via cross-modal knowledge distillation. In this work, we propose a Spatial Alignment via Matching (SAM) distillation framework that elicits local correspondence between the two modalities in vision-to-audio knowledge transfer. SAM integrates audio features with visually coherent learnable spatial embeddings to resolve inconsistencies in multiple layers of a student model. Our approach does not rely on a specific input representation, allowing for flexibility in the input shapes or dimensions without performance degradation. With a newly curated benchmark named Dense Auditory Prediction of Surroundings (DAPS), we are the first to tackle dense indoor prediction of omnidirectional surroundings in both 2D and 3D with audio observations. Specifically, for audio-based depth estimation, semantic segmentation, and challenging 3D scene reconstruction, the proposed distillation framework consistently achieves state-of-the-art performance across various metrics and backbone architectures.
Heeseung Yun, Joonil Na, Gunhee Kim
2023-09-20T06:07:04Z
http://arxiv.org/abs/2309.11081v1
# Dense 2D-3D Indoor Prediction with Sound via Aligned Cross-Modal Distillation

###### Abstract

Sound can convey significant information for spatial reasoning in our daily lives. To endow deep networks with such ability, we address the challenge of dense indoor prediction with sound in both 2D and 3D via cross-modal knowledge distillation. In this work, we propose a Spatial Alignment via Matching (SAM) distillation framework that elicits local correspondence between the two modalities in vision-to-audio knowledge transfer. SAM integrates audio features with visually coherent learnable spatial embeddings to resolve inconsistencies in multiple layers of a student model. Our approach does not rely on a specific input representation, allowing for flexibility in the input shapes or dimensions without performance degradation. With a newly curated benchmark named Dense Auditory Prediction of Surroundings (DAPS), we are the first to tackle dense indoor prediction of omnidirectional surroundings in both 2D and 3D with audio observations. Specifically, for audio-based depth estimation, semantic segmentation, and challenging 3D scene reconstruction, the proposed distillation framework consistently achieves state-of-the-art performance across various metrics and backbone architectures.

## 1 Introduction

Humans can get a good grasp of various information about surroundings with hearing without seeing, like the size of a room or the location of an active alarm. A long line of research has analyzed such intriguing abilities of humans based on interaural differences [1, 2] or brain activation with respect to spatially aligned audio-visual inputs [3, 4], to list a few. Accordingly, there is an emerging interest in teaching neural network models for spatial reasoning without seeing. Such models that spatially perceive the surroundings from sound can be utilized in various environments that are critical for privacy preservation or visually ill-posed (_e.g_., low illumination or occlusion) [5, 6, 7, 8].

Since predicting visual properties directly from audio is challenging, cross-modal knowledge distillation [9] is often utilized, _i.e_., teaching audio models with the guidance of visual models. Visual models can make precise predictions about the image of the surroundings, like the location of objects or the depth of a scene. Thus, using visual models as the teacher, audio models can learn how to predict visual properties in a scene from sound inputs. This cross-modal knowledge distillation has been successfully applied to make audio models predict _sparse_ attributes, _e.g_., vehicle tracking [5] or indoor navigation [7]. However, it remains challenging to make _dense_ visual predictions about the surroundings from audio.

Figure 1: Key idea of our approach. (a) For vision-to-audio cross-modal distillation, instead of direct distillation between geometrically inconsistent modalities, we spatially align the latent feature maps of students with those of teachers. (b) Using auditory input only, we perform three dense predictions of surroundings: depth estimation, semantic segmentation, and 3D scene reconstruction.

One of the core challenges behind dense prediction with audio is to identify fine-grained attributions of the output. In other words, humans can intuitively make sense of the room layout by hearing, but have difficulty in explaining which bandwidths or timeframes are responsible for their perception.
Unlike distilling an RGB image teacher for a thermal image student that is geometrically consistent up to the pixel level, there is no obvious one-to-one alignment between image and audio. Hence, it is not feasible to determine which part of the audio spectrogram corresponds to which region of the surroundings. While using multiple intermediate features of a teacher model as a guide can still be beneficial [5, 8], it may not be possible to solve the underlying local correspondence problem between the two heterogeneous modalities. In this work, we are the first to address the dense indoor prediction of omnidirectional surroundings in both 2D and 3D with audio observations. To resolve the inconsistency problem, we propose a novel Spatial Alignment via Matching (SAM) distillation framework. SAM matches local correspondences between the two heterogeneous features by making use of learnable spatial embeddings in several layers of the audio student model, combined with loose triplet-based learning objectives. We retain a set of learnable spatial embeddings to capture spatially varying information of each layer, which are pooled and integrated with initial audio features for alignment. This allows us to resolve inconsistencies even when the shape of the audio input does not match that of the desired output, making it trivially extendable to a challenging scenario like audio-to-3D distillation. To comprehensively evaluate the performance of our method, we curate a new benchmark for audio-based dense prediction of surroundings based on Matterport3D [10] and SoundSpaces [7]. We collect 15.8K indoor scene multimodal observations with task-specific annotations for audio-based depth estimation, semantic segmentation, and 3D scene reconstruction. In dense auditory prediction tasks spanning from 2D to 3D, our framework consistently improves the performance by a wide margin, which is validated on multiple architectures like U-Net [11], DPT [12], and ConvONet [13]. Also, qualitative results demonstrate that our approach can precisely predict the structure of the indoor environment with hearing without seeing.

## 2 Related Works

**Indoor Multimodal Scene Analysis.** Extensive research has been conducted to understand indoor surroundings given various inputs. Using monocular images as input, many visual scene understanding tasks like depth estimation, semantic segmentation, and surface normal estimation have been studied [14, 10, 15]. In addition, 3D-based methods for semantic segmentation, object recognition, and floorplan reconstruction have been proposed with voxel or mesh-based representations [10, 15, 16, 17]. When performing such tasks, combining different modalities as inputs has proven effective, such as RGB with depth information for semantic segmentation [18] or voxels with point clouds for 3D segmentation [19]. Recently, 2D vision-language models have been successfully employed for open-vocabulary 3D scene understanding [20, 21]. There has been a surge of interest in combining audio and visual signals to tackle visual or acoustic tasks in indoor environments. Some prior works generate binaural audio [22] or scene-aware auditory responses [23, 24] by utilizing visual surroundings as a reference. Binaural audio is simulated from a 3D scene for audio-visual embodied navigation [7, 25]. Audio signals can help improve performance in visual tasks like floorplan reconstruction [26] and depth estimation with a normal field of view [27, 28].
**Cross-modal Knowledge Distillation.** Knowledge distillation [29] aims at transferring knowledge from a teacher model to a student model by minimizing the distances between the two logit distributions. Cross-modal distillation [9] enhances this transfer by ensuring that the intermediate features of the student model align with those of the teacher model when their input modalities are different. Distillation between different modalities can improve the robustness of prediction under diverse conditions, such as utilizing depth sensors in student models by distilling object detection, action recognition, or semantic segmentation models [30, 31, 32]. Likewise, Vobecky _et al_. [33] leverage LiDAR and image inputs to generate spatially consistent object proposals for semantic segmentation. Cross-modal distillation can be applied to scenarios where no explicit correspondence exists between the two modalities. Zhao _et al_. [34] use a student model with radio signals for human pose estimation via distillation. Roheda _et al_. [35] conditionally utilize noisy observations of available sensors like seismic sensors to enhance image quality. Also, audio-only and image-only teachers can teach a video-only student model via shared latent embedding [36] or long short-term memory networks [37] for better classification. Other examples include knowledge transfer of speech models for visual lip reading [38, 39] or visual captioning models for audio captioning [40].

**Spatial Reasoning with Sound.** Sound contains valuable information for spatial reasoning. Embodied agents can navigate indoor environments by relying solely on auditory input [7], and their exploration behavior can be promoted by referring to auditory feedback [41]. Other prior works focus on the spatial localization of audio sources [42], 3D face synthesis from speech [43], and depth estimation on a robot [44, 45, 46]. Sound-only models can benefit from the cross-modal distillation of visual teacher models for fine-grained spatial understanding. Vision-to-audio knowledge distillation has shown compelling performance in vehicle localization [5, 8], obstacle detection [47], and collision probability estimation [48]. However, prior works are limited to the sparse prediction of the surrounding environment (_e.g_., bounding boxes), while the dense prediction remains challenging. Closest to our approach is Binaural SoundNet [6, 49], as it improves outdoor dense prediction performance through the cross-modal distillation of multiple tasks. However, our work has three significant differences. First, we perform indoor semantic segmentation and 3D scene reconstruction from audio as new dense prediction tasks. Second, SoundNet does not consider feature-level alignment, while our method hierarchically leverages spatial alignment via matching for fine-grained vision-to-audio distillation. Finally, instead of designing a new architecture for modeling audio inputs [5, 49] or forcing specific input representations [6, 8], we take the audio input as is and adapt off-the-shelf vision models for audio-based dense prediction.

## 3 Approach

Our goal is to predict various dense properties of indoor surroundings without visual input by leveraging binaural audio, _e.g_., depth, semantic labels, and 3D structures. To this end, we present a framework for vision-to-audio knowledge distillation that does not rely on a specific architecture and entails the alignment of heterogeneous features, as shown in Fig. 2.
Given a pre-trained visual teacher, we aim to train an audio student model using paired audio-visual observations as training data. We start by reviewing the basics of vision-to-audio knowledge distillation and the challenges in adapting such methods for dense auditory prediction of surroundings (Sec. 3.1). Next, we explain the proposed spatial alignment via matching distillation (Sec. 3.2). Finally, we outline training and inference procedures shared among different tasks (Sec. 3.3). Commonly used variables are defined as follows: \(v_{i}\) and \(a_{i}\) denote the \(i\)-th layer feature maps of the visual teacher and the audio student, with \(V_{i}\) and \(A_{i}\) spatial elements and \(C\) channels, respectively, and \(v_{\text{out}}\) and \(a_{\text{out}}\) denote the final predictions of the two models.

### Vision-to-Audio Knowledge Distillation

Cross-modal distillation from a visual teacher model to an audio model has two significant advantages: (i) training without labeled data by turning to the teacher model's prediction (pseudo-GT) and (ii) teaching fine-grained knowledge to the student model via feature distillation. In general, cross-modal distillation for spatial reasoning leverages both pseudo-GT and feature outputs from one or more layers for fine-grained knowledge transfer [9]:

\[\mathcal{L}_{\text{crossKD}}=d(v_{\text{out}},a_{\text{out}})+\lambda\sum_{i}\sum_{j}d(v_{i}(j),a_{i}(j)), \tag{1}\]

where \(d(\cdot,\cdot)\) is a distance function. This objective is well-defined for two modalities that are consistent up to the pixel level (_e.g_., distilling an RGB teacher to a depth student). On the other hand, it is less plausible to use the same method for vision-to-audio knowledge distillation. The main difficulty that hinders knowledge transfer is the semantic and shape inconsistencies of the two heterogeneous modalities. First, the semantics of audio and visual features are not coherent with each other. For example, in the second term of Eq. (1), the \(j\)-th feature of an audio-only model at layer \(i\) may not always match the corresponding feature of a vision-only model. This lack of correspondence between the features of the two modalities makes direct distillation depicted in Fig. 1-(a) less effective, which is empirically in line with previous research on vehicle tracking [5, 8].

Figure 2: Overview of our spatial alignment via matching distillation framework.

Second, the shape of the audio input is usually not identical to the visual input, and simple interpolation of an audio input often deteriorates the performance. Moreover, it is even more challenging when the dimensions of the two modalities do not match, _e.g_., predicting 3D surroundings from audio. Hence, it is necessary to establish a method that can effectively align with visual features regardless of the specific input shape, without resorting to naive resizing or cropping.
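Before turning to SAM's components, the baseline objective of Eq. (1) can be sketched as follows (a minimal PyTorch sketch of ours, assuming L1 for the output distance and MSE for the feature distance, since the paper leaves \(d(\cdot,\cdot)\) generic):

```python
import torch.nn.functional as F

def cross_kd_loss(v_out, a_out, v_feats, a_feats, lam=1.0):
    """Eq. (1): pseudo-GT term plus layer-wise feature-distance terms.
    v_feats / a_feats: per-layer lists of (B, n_locations, C) tensors that
    are assumed to match in shape for every layer."""
    loss = F.l1_loss(a_out, v_out)
    for v_i, a_i in zip(v_feats, a_feats):
        loss = loss + lam * F.mse_loss(a_i, v_i)
    return loss
```

Note that the second term is only computable when teacher and student feature maps share a shape, which is exactly the assumption that fails for vision-to-audio pairs and that SAM removes.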
### Spatial Alignment via Matching

To resolve the challenges mentioned above, we introduce a novel method for cross-modal knowledge distillation of two heterogeneous modalities without semantic and shape consistency. We coin this method Spatial Alignment via Matching (SAM), which comprises three major components: input representation, learnable spatial embeddings, and feature refinement. To obtain the spatially aligned features for the \(i\)-th layer of the audio encoder, we can allocate a SAM block that accounts for both feature alignment and shape discrepancy, _i.e_., \(\text{SAM}_{i}:\mathbb{R}^{A_{i}\times C}\rightarrow\mathbb{R}^{V_{i}\times C}\).

**Input Representation.** Using Short-Time Fourier Transform (STFT) spectrograms of raw binaural audio, we can exploit any 2D deep networks as commonly done in audio representation learning [50, 51]. However, unlike previous works that rely on pseudo-GT [6, 49] or require identical shapes for feature-level distillation [5, 8], our method can be trivially applied where \((w_{i}^{a},h_{i}^{a})\neq(w_{i}^{v},h_{i}^{v})\), _i.e_., where the audio and visual feature maps differ in size. In addition, SAM can handle more challenging scenarios like 1D encoders, _i.e_., \(w_{i}^{a}=1\) or \(h_{i}^{a}=1\), by regarding the input spectrogram as a set of 1D patches. Decomposing the spectrogram into time bands (\(W^{\prime}\times 1\)) or frequency bands (\(1\times H^{\prime}\)) can effectively reduce the feature shape and replace 2D with 1D operations. This allows for a more efficient encoder implementation in terms of memory and time, making it applicable to memory-intensive scenarios.

**Learnable Spatial Embeddings.** It is essential to retain features that are spatially well-aligned with the dense prediction output, especially when the input is not aligned with the output modality. In this regard, we design learnable spatial embeddings as a container to capture spatially varying information in paired audio-visual observations. We maintain a set of embeddings \(p_{i}^{0},...,p_{i}^{K-1}\) identical in shape to the visual features for each SAM and transform the shape of student features before the decoder. The number of learnable embeddings \(K\) may vary across layers, where more slots can be assigned to reconstruct high-level features. For \(K\) learnable embeddings, we first derive a similarity matrix \(T_{i}\in\mathbb{R}^{K\times V_{i}}\), which represents the proximity between the provided audio feature \(a_{i}\) and the \(k\)-th spatial embedding. We compute the pairwise similarity between the \(j\)-th audio feature and the \(l\)-th feature in a spatial embedding, _i.e_., \(a_{i}(j),p_{i}^{k}(l)\in\mathbb{R}^{C}\), and select the maximum value along the \(j\) dimension:

\[T_{i}=\big{\|}_{k=0}^{K-1}T_{i}^{k}=\big{\|}_{k=0}^{K-1}\max_{j}p_{i}^{k}W_{i}a_{i}(j), \tag{2}\]

where \(W_{i}\in\mathbb{R}^{C\times C}\) is a linear projection and \(\|\) is a concatenation operator. That is, higher similarity implies more coherency between the audio features and spatial embeddings at each region, allowing us to obtain features that are spatially aligned with the visual features. By applying softmax along the \(K\) dimension of the similarity matrix \(T_{i}\), we then obtain a pooled embedding \(\hat{p}_{i}\in\mathbb{R}^{V_{i}\times C}\) as a linear combination of embeddings:

\[\hat{p}_{i}=\big{\|}_{l=0}^{V_{i}-1}\sum_{k=0}^{K-1}\frac{e^{T_{i}^{k}(l)}}{\sum_{k^{\prime}=0}^{K-1}e^{T_{i}^{k^{\prime}}(l)}}p_{i}^{k}(l). \tag{3}\]

The softmax term can be interpreted as a probability distribution of selecting the \(k\)-th embedding for high audio-visual correspondence, making \(\hat{p}_{i}\) coherent with audio features while maintaining the spatial structure of visual features.

**Refinement with Student Features.** For better coherence with audio features, we refine the pooled embedding \(\hat{p}_{i}\) using the audio feature \(a_{i}\) as keys and values by leveraging a multi-head attention mechanism (MultiHead) [52]:

\[\bar{p}_{i}=\text{MultiHead}(\hat{p}_{i},a_{i},a_{i})+\hat{p}_{i}. \tag{4}\]

As a result, we obtain the aligned feature \(\bar{p}_{i}\) from the SAM block at layer \(i\). SAM can facilitate the spatial alignment between features at one (_i.e_., a bottleneck between the encoder and decoder) or more layers. For instance, it can be applied to the global residual connection in pyramid-like architectures [11, 53, 54] to ensure shape consistency, as depicted in Fig. 2-(a-b).
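A compact PyTorch sketch of a single SAM block implementing Eqs. (2)-(4) follows; the number of attention heads and the embedding initialization scale are our assumptions, as the text does not specify them:

```python
import torch
import torch.nn as nn

class SAMBlock(nn.Module):
    """Spatial Alignment via Matching for one layer: maps audio features
    (B, A_i, C) to visually aligned features (B, V_i, C)."""
    def __init__(self, C, V, K, heads=8):
        super().__init__()
        self.p = nn.Parameter(0.02 * torch.randn(K, V, C))  # K spatial embeddings
        self.W = nn.Linear(C, C, bias=False)                # projection W_i
        self.attn = nn.MultiheadAttention(C, heads, batch_first=True)

    def forward(self, a):                                   # a: (B, A, C)
        Wa = self.W(a)
        # Eq. (2): T_i[k, l] = max_j <p_i^k(l), W_i a_i(j)>
        sim = torch.einsum('kvc,bac->bkva', self.p, Wa).amax(dim=-1)  # (B, K, V)
        # Eq. (3): softmax over the K embeddings, pooled per visual location l
        p_hat = torch.einsum('bkv,kvc->bvc', sim.softmax(dim=1), self.p)
        # Eq. (4): refinement with audio features as keys and values
        out, _ = self.attn(p_hat, a, a)
        return out + p_hat

sam = SAMBlock(C=256, V=16 * 16, K=64)
aligned = sam(torch.randn(2, 100, 256))   # -> shape (2, 256, 256)
```

The sketch keeps Eq. (2) in matrix form, so \(p_{i}^{k}W_{i}a_{i}(j)\in\mathbb{R}^{V_{i}}\) and the max over \(j\) is taken elementwise per location.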
### Training and Inference

**Network Architecture.** For teacher models in each task, we follow the training procedure established in previous literature [12, 54, 13]. For simplicity, we train the teacher models using ground truth labels in the training split, while we also report the cross-modal distillation performance of non-iid settings in the Appendix. We use ImageNet [55] pretrained weights for training teacher models in 2D tasks. Trained teacher models are only utilized during the training of a student model, with parameters fixed. Our approach can be applied to a wide range of architectures for dense auditory prediction. We demonstrate this by using U-Net [11] with a ResNet-50 [56] backbone and Dense Prediction Transformers (DPT) [12] with a ViT-B/16 [57] backbone as representative examples of convolutional networks and vision transformer variants, respectively. We exploit Convolutional Occupancy Networks (ConvONet) [13] as a base architecture for 3D reconstruction. Using paired audio-visual observations, student models are trained to mimic the output of the teacher model.

**Learning Objective.** We minimize the task-specific distance metric between the student and teacher model's prediction (pseudo-GT), _i.e_., \(\mathcal{L}_{p}=d(v_{\text{out}},a_{\text{out}})\). To facilitate the cross-modal distillation, we integrate an auxiliary feature loss that promotes local coherence between \(a_{i}\) and \(v_{i}\) by optimizing the distance among triplets \((v_{i}(j),a_{i}(k),a_{i}(k^{\prime}))\):

\[\mathcal{L}_{f}^{i}=\frac{1}{V_{i}}\sum_{j}\sum_{k^{\prime}\in\mathcal{N}_{k}}\max(0,m-v_{i}(j)*a_{i}(k)+v_{i}(j)*a_{i}(k^{\prime})), \tag{5}\]

where \(m=0.3\) is a margin, \(\mathcal{N}_{k}\) is a set of negative samples regarding \(a_{i}(k)\), and \(*\) indicates cosine similarity. Since there are no ground truth positive pairs for local correspondence, we use \(a_{i}(k)\) with \(k=\operatorname*{arg\,max}_{k}a_{i}(k)*v_{i}(j)\) as a loosely defined positive pair. For \(\mathcal{N}_{k}\), we either deem all the other features in \(a_{i}\) as negative or randomly select one among adjacent features, depending on the convergence of the feature loss. In summary, our learning objective is as follows:

\[\mathcal{L}_{\text{Ours}}=\mathcal{L}_{p}+\lambda\sum_{i}\mathcal{L}_{f}^{i}, \tag{6}\]

where \(\lambda\) is a task-specific hyperparameter to balance the scale between the pseudo-GT loss and the feature loss. We use up to four SAM blocks for all experiments, where we set \(K=64\) for the last SAM (SAM\({}_{4}\)) and reduce the number by a factor of four for each preceding block. We train the student model from scratch, and during inference, we do not use any input, feature maps, or modules related to the visual modality; only the audio input and the trained audio-only student model are utilized. Further details are deferred to the Appendix.
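As a reference, Eq. (5) with the "all other audio features are negatives" choice of \(\mathcal{N}_{k}\) can be sketched for a single sample as follows (our simplification; batching and the adjacent-negative variant are omitted):

```python
import torch
import torch.nn.functional as F

def sam_feature_loss(v, a, m=0.3):
    """Eq. (5) for one sample: v (V, C) teacher features, a (A, C) student
    features; the positive for each v(j) is its most similar audio feature."""
    v = F.normalize(v, dim=-1)
    a = F.normalize(a, dim=-1)
    sim = v @ a.t()                                # (V, A) cosine similarities
    pos, idx = sim.max(dim=1)                      # loosely defined positives
    hinge = (m - pos.unsqueeze(1) + sim).clamp(min=0)
    mask = torch.ones_like(sim)                    # exclude each positive itself
    mask[torch.arange(v.size(0)), idx] = 0.0
    return (hinge * mask).sum() / v.size(0)
```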
## 4 Experiments

We first discuss a new benchmark for three audio-based dense prediction tasks of scene understanding (Sec. 4.1). We then present the results of our approach for audio-based depth estimation, semantic segmentation, and 3D scene reconstruction tasks (Secs. 4.2-4.4).

### The DAPS Benchmark

To evaluate the 2D and 3D dense prediction performance with audio, both the audio signal and the information regarding its surrounding space are required. Since none of the existing works benchmark multifaceted aspects of the omnidirectional surroundings as a whole, we organize a new benchmark upon existing simulators and datasets. We coin this benchmark Dense Auditory Prediction of Surroundings (DAPS). DAPS comprises 15.8K indoor scene observations with labels, where each sample consists of binaural audio, RGB panorama, and 3D voxel triples as observation and dense labels for three different tasks, as illustrated in Fig. 1-(b). SoundSpaces [7] can simulate sound in indoor environments; for example, it supports Matterport3D [10] scenes while accounting for the material properties and layouts of a scene. Once we set the position and orientation of the recording agent in SoundSpaces, we obtain the recordings with respect to a set of emitter and receiver coordinate pairs. For simplicity, we report the results when the coordinates of the emitter and the receiver are identical. After sampling the coordinate information, we employ the Habitat simulator [58] to extract multimodal observations of a scene. We obtain RGB, depth, and semantic labels in equirectangular format from each location. To further collect 3D information of a scene, we extract the meshes surrounding the specified coordinate by truncating them, _i.e_., to 2.5m\(\times\)2.5m\(\times\)2m. Then, we use clustering-based filtering to remove noisy groups of meshes and keep only the most salient components. Finally, we generate 3D voxels from meshes for 3D reconstruction. We carefully exclude the samples with weak auditory signals, such as outdoor scenes with high levels of noise, to maintain the quality of the benchmark. Specifically, for 2D dense prediction tasks, we eliminate samples whose labels have more than 10% missing pixels or noisy annotations. For 3D dense prediction, we exclude the samples with corrupted voxels by thresholding at the 95% lower confidence bound of the number of occupied voxels. We use 11.6K samples for training, 1.6K samples for validation, and 2.6K samples for testing in all experiments.

### Results of Depth Estimation

#### 4.2.1 Experiment Settings

Following previous works on depth estimation [12, 59], we predict the depth of the whole surroundings given binaural audio from the scene. We follow the decoder design of [59] to train the model with the inverse Huber loss. We report the results of sinusoidal sweep-convolved binaural inputs following the convention of [27, 28, 44, 46]. We also report the results of natural audio inputs [7] in Fig. 3-(b).

**Evaluation Metrics.** We report the mean absolute error (MAE), root mean squared error (RMSE), and delta accuracy (\(\delta_{1},\delta_{2},\delta_{3}\)) for evaluation. MAE and RMSE reflect the error rate of our prediction, while the delta accuracy indicates the relative correctness of our prediction, _i.e_., \(\max(\hat{y}/y,\,y/\hat{y})<1.25^{i}\) for predicted depth \(\hat{y}\) and ground-truth depth \(y\). To demonstrate the efficiency of our approach, we also report the memory allocation on GPU and latency during training.

**Baselines.** We include some state-of-the-art audio-only and distillation models as baselines [44, 8, 46], which are originally designed to predict bounding boxes or depth maps from a normal field-of-view with multi-channel audio. We also report the performance of losses proposed in [6, 5, 8] combined with U-Net or DPT for fair comparison.
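The metrics above follow their standard definitions; a NumPy sketch of ours (masking of invalid pixels is assumed to be handled upstream):

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-8):
    """MAE, RMSE, and delta accuracies (delta_1..3) for a pair of depth maps."""
    pred = np.asarray(pred, dtype=float).ravel()
    gt = np.asarray(gt, dtype=float).ravel()
    mae = np.mean(np.abs(pred - gt))
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / (gt + eps), gt / (pred + eps))
    deltas = tuple(float((ratio < 1.25 ** i).mean()) for i in (1, 2, 3))
    return mae, rmse, deltas
```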
#### 4.2.2 Results and Analyses

**Comparison with Prior Arts.** Table 1 summarizes the accuracy on the DAPS-Depth test split. Compared to previous works on audio-only and distillation-based auditory depth estimation, our method achieves significant performance boosts across all metrics. For both U-Net and DPT, directly minimizing the feature distance between the teacher and the student (_i.e._, +Rank/MTA) contributes marginally to the performance. Instead, adopting the proposed spatial alignment via matching improves the performance substantially, up to 10% (MAE) for U-Net. It is also worth noting that U-Net with SAM displays comparable performance with DPT variants. One of the important aspects of our approach is its efficiency, as illustrated in Fig. 3-(a). Compared to previous distillation methods, DPT+SAM improves both time and space efficiency by 27%, where the gap becomes wider for the other two tasks.

**Ablation Studies.** In Table 1, replacing full SAM blocks with multi-head attention (SAM\({}_{\text{MultiHead}}\)) or learnable spatial embeddings (SAM\({}_{\text{SpatialEmbeddings}}\)) deteriorates the absolute error by 1.5-1.8%. Reducing the number of spatial embeddings per layer to one (SAM\({}_{3,4\,(K=1)}\)) is also harmful to performance. Increasing the number of SAM blocks for alignment can be beneficial, but forcefully matching the low-level vision features with audio features (_i.e._, SAM\({}_{1,2}\)) does not improve the prediction accuracy. Table 2 analyzes the influence of different patch designs and spatial embeddings. Both frequency and time patches are more efficient than the regular patch, but only the time patch introduces a significant performance gain. This implies that aggregating all frequency responses per short time span is a preferred input representation for dense auditory prediction. Also, the degraded performance of \(\mathbb{R}^{K\times 1\times C}\) spatial embeddings instead of \(\mathbb{R}^{K\times V_{i}\times C}\) (_i.e._, non-spatial embeddings) stresses the importance of securing spatially varying information for matching. Finally, using actual visual features instead of learnable embeddings (_i.e._, oracle embeddings) displays on-par performance with the teacher model.

**Generalization to Natural Audio Inputs.** Fig. 3-(b) reports the distillation performance of U-Net trained with diverse audio samples randomly selected from [7]. Not only does our approach consistently achieve better performance, but the variance among different audio samples is also smaller than with previous distillation methods.

**Qualitative Results.** Fig. 4 displays the depth estimation results from binaural audio. Our approach can precisely measure the depth or structure of the room compared to prior arts. In some cases, it can even capture smaller objects like a billiards table in a scene from the audio.
\begin{table} \begin{tabular}{l l|l l l l l} \hline \hline & & MAE\({}_{\downarrow}\) & RMSE\({}_{\downarrow}\) & \(\delta_{1\uparrow}\) & \(\delta_{2\uparrow}\) & \(\delta_{3\uparrow}\) \\ \hline \multicolumn{1}{c|}{} & Teacher [11] & 0.6524 & 1.1296 & 0.7633 & 0.8966 & 0.9328 \\ \hline \multirow{3}{*}{Baselines} & BilinearCoAtt [46] & 1.2101 & 1.8366 & 0.5128 & 0.7009 & 0.8139 \\ & BatVision [44] & 0.9345 & 1.5740 & 0.6284 & 0.7975 & 0.8806 \\ & MM-DistillNet [8] & 0.8995 & 1.5812 & 0.6633 & 0.8178 & 0.8902 \\ \hline \multirow{7}{*}{U-Net} & Pseudo-GT (\(\mathcal{L}_{p}\)) [6] & 0.9572 & 1.6436 & 0.6258 & 0.7971 & 0.8771 \\ & + Rank [5] & 0.9524 & 1.6350 & 0.6279 & 0.7986 & 0.8786 \\ & + MTA [8] & 0.9572 & 1.6392 & 0.6243 & 0.7956 & 0.8782 \\ & + SAM\({}_{\text{MultiHead}}\) & 0.8789 & 1.5604 & 0.6774 & 0.8256 & 0.8955 \\ & + SAM\({}_{\text{SpatialEmbeddings}}\) & 0.8760 & 1.5468 & 0.6787 & 0.8267 & 0.8965 \\ & + SAM\({}_{3,4}\) (\(K=1\)) & 0.8704 & 1.5467 & 0.6857 & 0.8302 & 0.8978 \\ & **+ SAM\({}_{3,4}\)** & **0.8633** & **1.5397** & **0.6869** & **0.8308** & **0.8982** \\ \hline \multirow{6}{*}{DPT} & Pseudo-GT (\(\mathcal{L}_{p}\)) [6] & 0.8926 & 1.5851 & 0.6684 & 0.8243 & 0.8943 \\ & + Rank [5] & 0.9130 & 1.6017 & 0.6607 & 0.8159 & 0.8869 \\ & + MTA [8] & 0.8913 & 1.5819 & 0.6694 & 0.8263 & 0.8953 \\ & **+ SAM\({}_{4}\)** & 0.8517 & **1.5276** & 0.6971 & 0.8344 & 0.8986 \\ & **+ SAM\({}_{3,4}\)** & **0.8443** & 1.5351 & **0.7019** & **0.8392** & 0.9000 \\ & **+ SAM\({}_{1,2,3,4}\)** & 0.8497 & 1.5346 & 0.6992 & 0.8380 & **0.9002** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of depth estimation accuracy on the DAPS-Depth test split. \begin{table} \begin{tabular}{l|l l l} \hline \hline & MAE\({}_{\downarrow}\) & RMSE\({}_{\downarrow}\) & \(\delta_{1\uparrow}\) \\ \hline Mono & 1.0783 & 1.7543 & 0.5829 \\ \hline \(16\times 16\) Patch & 0.8903 & 1.5786 & 0.6753 \\ \(1\times H^{\prime}\) Patch (freq.) & 0.8902 & 1.5607 & 0.6629 \\ \(W^{\prime}\times 1\) Patch (time) & **0.8497** & **1.5346** & **0.6992** \\ \hline \(\text{Embeddings}_{\text{NonSpatial}}\) & 0.8777 & 1.5334 & 0.6757 \\ \(\text{Embeddings}_{\text{Oracle}}\) & 0.5622 & 1.0308 & 0.8156 \\ \hline \hline \end{tabular} \end{table} Table 2: Influence of the input representation and learnable spatial embeddings in DPT+SAM on the DAPS-Depth test split. ### Results of Semantic Segmentation #### 4.3.1 Experiment Settings We train the audio student model to predict the pixel-wise categories of the scene. Except for the pseudo-GT learning objective \(\mathcal{L}_{p}\), we follow the training recipe explained in Sec. 4.2. As an auxiliary task, we predict the pseudo-GT segmentation with the penultimate-layer feature for better performance, as proposed by Zhao _et al_. [54]. We train the model with the cross-entropy loss, where the primary and auxiliary loss ratio is 1:0.2. Since it is virtually intractable to classify 40+ semantic categories merely from audio, we exclude classes for tiny objects (_e.g_., towels) and merge similar classes to establish nine classes for audio-based semantic segmentation. We report the performance of feature-level distillation methods with U-Net as a backbone. **Evaluation Metrics.** We report the pixel-wise accuracy (pAcc), class-wise mean accuracy (mAcc), and class-wise mean IoU (mIoU) over all pixels with valid labels.
Since it is challenging to precisely label small objects in a scene from audio, we also introduce the mean IoU of ceiling, wall, and floor (3IoU), which constitutes a coarse layout of the scene. #### 4.3.2 Results and Analyses Table 3 summarizes the semantic segmentation accuracy on the DAPS-Semantic test split. Although predicting material properties or a semantic structure from auditory input is challenging, the results suggest that the overall output is acceptably plausible, achieving 87% of the teacher model's performance on the pAcc metric. Compared to depth estimation, the ranking-based objective contributes fairly to the distillation performance, which could be related to the classification error ensuring tighter bounds for ranking measures [60]. Still, SAM achieves better performance in all metrics, especially in predicting layout-relevant categories, _i.e_., +4% compared to Pseudo-GT. **Qualitative Examples.** The last two rows of Fig. 4 illustrate the semantic segmentation results. Our approach better predicts the categories of smaller objects and the layout of the indoor surroundings, even under visually ill-posed scenarios like the windows in the third row. ### Results of 3D Scene Reconstruction #### 4.4.1 Experiment Settings We reconstruct a 3D scene from audio by means of voxel super-resolution. Voxel super-resolution aims to reconstruct high-resolution 3D objects using low-resolution voxelized meshes as input [61]. We use a teacher model that maps low-resolution (\(16^{3}\)) to high-resolution (\(32^{3}\)) voxel grids by capturing structural details of 3D shapes for reconstruction. Despite the difference in dimensions and shapes, the feature maps of the 3D teacher U-Net are utilized to learn spatial alignment with auditory features, owing to the SAM blocks. **Evaluation Metrics.** Following Peng _et al_. [13], we report IoU, Chamfer-L\({}_{1}\) distance, normal consistency (NC), and F1-score. We use IoU and F1 to measure the intersection between ground truths and predictions. Also, we evaluate the Chamfer-L\({}_{1}\) distance and NC as similarity metrics based on multidimensional point sets and normal displacement vectors, respectively. \begin{table} \begin{tabular}{l|c c c c} \hline & pAcc\({}_{\uparrow}\) & mAcc\({}_{\uparrow}\) & mIoU\({}_{\uparrow}\) & 3IoU\({}_{\uparrow}\) \\ \hline Teacher & 0.737 & 0.708 & 0.409 & 0.705 \\ \hline BilinearCoAttn [46] & 0.605 & 0.493 & 0.340 & 0.538 \\ MM-DistillNet [8] & 0.629 & 0.515 & 0.311 & 0.581 \\ \hline Pseudo-GT (\(\mathcal{L}_{p}\)) [6] & 0.628 & 0.513 & 0.320 & 0.576 \\ + MTA [8] & 0.629 & 0.514 & 0.316 & 0.576 \\ + Rank [5] & 0.642 & 0.520 & 0.359 & 0.587 \\ + \(\mathbf{SAM_{Full}}\) & **0.644** & **0.526** & **0.363** & **0.600** \\ \hline \end{tabular} \end{table} Table 3: Comparison of semantic segmentation accuracy on the DAPS-Semantic test split. Figure 4: Qualitative examples of audio-based depth estimation (upper) and semantic segmentation (lower). **Baselines.** Due to the lack of prior research on generating 3D objects from audio, we set up several conceivable baselines for comparison. First, we interpolate the 2D audio input to 3D to use ConvONet as a backbone. We report the performance of audio-only models and their variants with feature distillation. Second, as in the spatial alignment via matching framework, we use the 2D audio input as is and convert intermediate feature maps to match the shape of 3D features. We use U-Net [11] and ViT [57] as backbones to show that our approach can be applied to various encoder structures, and we include the ranking [5] or MTA [8] objectives for cross-modal distillation as baselines.
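As a companion to the 3D metric definitions above, here is a minimal NumPy sketch of voxel IoU and a brute-force symmetric Chamfer distance (illustrative only; the function names and the nearest-neighbor strategy are our own, not the paper's implementation):

```python
import numpy as np

def voxel_iou(pred, gt):
    """IoU between two boolean occupancy grids of the same shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / max(union, 1)

def chamfer_l1(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3).

    Brute-force nearest neighbors via broadcasting; fine for small sets.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)   # (N, M)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```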
#### 4.4.2 Results and Analyses Table 4 reports the 3D scene reconstruction performance on the DAPS-3D test split. Due to the task difficulty, the performance gap between the teacher and the student is wider than in the 2D dense prediction tasks. Still, our approach improves the IoU score by 40% compared to audio-only models. Rather than forcefully converting the audio input representation, reducing the feature distance while keeping the audio input intact generally performs better. The lower Chamfer-L\({}_{1}\) scores of our approach, _i.e_., an 18% reduction for the U-Net backbone, suggest that SAM facilitates the generation of points that are significantly closer to the ground truth. **Qualitative Examples.** Fig. 5 visualizes our audio-based 3D scene reconstruction results. In the absence of visual cues, our approach accurately predicts the closed walls in a scene, even capturing details like holes (_e.g_., doors or windows) and furniture. The substantial quality gap between ours and prior art in an open space (the last row of Fig. 5) stresses the importance of our distillation framework for dense prediction of 3D surroundings. ## 5 Conclusion We addressed the audio-based dense prediction of indoor surroundings in 2D and 3D for the first time, tackling the central challenge in vision-to-audio knowledge distillation: the discrepancy between the two modalities. To this end, we presented a novel spatial alignment via matching (SAM) distillation framework, accounting for local correspondence of multi-scale features under input shape inconsistency. In experiments on the newly collected DAPS dataset, our distillation framework consistently improves performance across multiple tasks ranging from 2D to 3D with various architectures as backbones. Qualitative results indicate that our approach captures finer-grained information about the scene from auditory input than prior art. \begin{table} \begin{tabular}{l l|c c c c} \hline \hline & & IoU\({}_{\uparrow}\) & Chamfer\({}_{\downarrow}\) & NC\({}_{\uparrow}\) & F1\({}_{\uparrow}\) \\ \hline & Teacher [13] & 0.548 & 0.0137 & 0.882 & 0.560 \\ \hline & Audio-only\({}_{\text{Mono}}\) & 0.126 & 0.0698 & 0.625 & 0.189 \\ & Audio-only\({}_{\text{Stereo}}\) & 0.136 & 0.0643 & 0.639 & 0.196 \\ \hline \multirow{3}{*}{ConvONet} & MSE & 0.137 & 0.0630 & 0.639 & **0.203** \\ & Rank [5] & 0.138 & 0.0636 & 0.640 & 0.200 \\ & MTA [8] & 0.149 & 0.0656 & 0.631 & 0.174 \\ \hline \multirow{4}{*}{U-Net} & MSE & 0.150 & 0.0676 & 0.626 & 0.177 \\ & Rank [5] & 0.153 & 0.0663 & 0.631 & 0.174 \\ & MTA [8] & 0.159 & 0.0660 & 0.645 & 0.170 \\ & \(\mathbf{SAM}_{\text{Full}}\) & **0.178** & **0.0555** & **0.679** & **0.203** \\ \hline \multirow{4}{*}{ViT} & MSE & 0.154 & 0.0626 & 0.656 & 0.183 \\ & Rank [5] & 0.147 & 0.0698 & 0.671 & 0.177 \\ & MTA [8] & 0.154 & 0.0650 & 0.646 & 0.187 \\ & \(\mathbf{SAM}_{\text{Full}}\) & **0.178** & **0.0587** & **0.682** & **0.204** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of 3D scene reconstruction accuracy on the DAPS-3D test split. Figure 5: Qualitative examples of audio-based 3D scene reconstruction.
**Acknowledgement.** This work was supported by LG AI Research, National Research Foundation of Korea (NRF) grant (No.2023R1A2C2005573) and Institute of Information & Communications Technology Planning & Evaluation (IITP) grant (No.2022-0-00156, 2019-0-01082, 2021-0-01343) funded by the Korea government (MSIT). Gunhee Kim is the corresponding author.
2305.19708
Parameter Estimation Methods of Required Rate of Return
In this study, we introduce new estimation methods for the required rate of returns on equity and liabilities of private and public companies using the stochastic dividend discount model (DDM). To estimate the required rate of return on equity, we use the maximum likelihood method, the Bayesian method, and the Kalman filtering. We also provide a method that evaluates the market values of liabilities. We apply the model to a set of firms from the S\&P 500 index using historical dividend and price data over a 32-year period. Overall, the suggested methods can be used to estimate the required rate of returns.
Battulga Gankhuu
2023-05-31T10:03:52Z
http://arxiv.org/abs/2305.19708v3
# Parameter Estimation Methods of Required Rate of Return ###### Abstract In this study, we introduce new estimation methods for the required rate of returns on equity and liabilities of private and public companies using the stochastic dividend discount model (DDM). To estimate the required rate of return on equity, we use the maximum likelihood method, the Bayesian method, and the Kalman filtering. We also provide a method that evaluates the market values of liabilities. We apply the model to a set of firms from the S&P 500 index using historical dividend and price data over a 32-year period. Overall, the suggested methods can be used to estimate the required rate of returns. ## 1 Introduction Dividend discount models (DDMs), first introduced by Williams (1938), are a popular tool for stock valuation. If we assume that a firm will not default in the future, then the basic idea of all DDMs is that the market price of a stock equals the sum of the stock's next-period price and dividend, discounted at a risk-adjusted rate known as the required rate of return; see, e.g., Brealey, Myers, and Marcus (2020). By their very nature, DDM approaches are best applicable to companies paying regular cash dividends. For a DDM with default risk, we refer to Battulga, Jacob, Altangerel, and Horsch (2022). As the outcome of DDMs depends crucially on dividend forecasts, most research in the last few decades has centered on the proper estimation of dividend development. An interesting review of some existing deterministic and stochastic DDMs, which model future dividends, can be found in D'Amico and De Blasis (2020b). Besides dividend forecast models, the required rate of return is clearly the other main input parameter of DDMs. In addition to its use in stock valuation, it is an ingredient of the weighted average cost of capital (WACC), which is used to value businesses and projects; see Brealey et al. (2020). The most common model for estimating the required rate of return is the capital asset pricing model (CAPM). Using the CAPM is common in practice, but it is a one-factor model (\(\beta\) only) for which criticism applies; see, e.g., Nagorniak (1985). Multi-factor models (e.g., Fama and French (1993)) are therefore often preferred. Another multi-factor model used to estimate the required rate of return is Ross's (1976) arbitrage pricing theory (APT). However, since every analyst can develop his or her own APT model, there is no universally accepted APT specification among practitioners. Sudden and dramatic changes in the financial market and economy are caused by events such as wars, market panics, or significant changes in government policies. To model such events, some authors have used regime-switching models. The regime-switching model was introduced in the seminal works of Hamilton (1989, 1990) (see also the book of Hamilton (1994)); it is a hidden Markov model with dependencies, see Zucchini, MacDonald, and Langrock (2016). The regime-switching model assumes that a discrete unobservable Markov process switches randomly among a finite set of regimes and that each regime is defined by a particular parameter set. The model fits some financial data well and has become popular in financial modeling, including equity options, bond prices, and others. Kalman filtering, introduced by Kalman (1960), is an algorithm that provides estimates of observed and unobserved (state) processes.
Kalman filtering has demonstrated its usefulness in various applications. It has been used extensively in economics, system theory, the physical sciences, and engineering. In econometrics, the state-space model is usually defined by (i) an observed vector that is expressed in terms of the state vector in linear form (measurement equation), and (ii) a state vector that is governed by a VAR(1) process (transition equation). To estimate the parameters of the state-space model and to make inferences about it (smoothing and forecasting), the Kalman filter can be used; see Hamilton (1994) and Lutkepohl (2005). Under the CAPM, the required rate of return is modeled through the risk-free rate, beta, and the market return. However, the CAPM is sensitive to its inputs. Recently, Battulga et al. (2022) introduced a stochastic DDM that models dividends by a compound non-homogeneous Poisson process and obtained ML estimators and confidence bands of the model's parameters, including the required rate of return. In this paper, instead of the traditional CAPM and its descendant versions, we introduce new estimation methods, covering the ML method with regime switching, the Bayesian method, and the Kalman filtering, to estimate the required rate of return on equity. The rest of the paper is organized as follows: In Section 2, to estimate the required rates of return on equity for public companies, we introduce the ML method with regime switching and the Bayesian method. Also, we provide a simple method that evaluates market values of liabilities, and we discuss portfolio choice. Section 3 is devoted to parameter estimation methods for private companies, where we consider the ML method with regime switching, the Bayesian method, and the Kalman filtering. In Section 4, for selected public companies, we provide numerical results based on our methods. Finally, Section 5 concludes the study. ## 2 Parameter Estimation of Public Company In this paper, we assume that there are \(n\) companies and that the companies will not default in the future. As mentioned before, the basic idea of all DDMs is that the market price of a stock equals the sum of the stock's next-period price and dividend discounted at the required rate of return. Therefore, for successive prices of the \(i\)-th company, the following relation holds \[P_{i,t}=(1+k^{e}_{i,t})P_{i,t-1}-d_{i,t},\quad i=1,\ldots,n\text{ and }t=1,2,\ldots, \tag{2.1}\] where \(k^{e}_{i,t}\) is the required rate of return on equity, \(P_{i,t}\) is the equity price, and \(d_{i,t}\) is the dividend, respectively, at time \(t\) of the \(i\)-th company. In this paper, we suppose that the required rates of return are random variables. For the above DDM equation, if the required rate of return is less than \(-1\), namely, \(k^{e}_{i,t}<-1\), then the sum of the price and dividend at time \(t\) of the \(i\)-th company takes a negative value, which is an undesirable result. For this reason, we write the above DDM equation in the following form \[P_{i,t}=\exp\{\tilde{k}^{e}_{i,t}\}P_{i,t-1}-d_{i,t},\quad i=1,\ldots,n\text{ and }t=1,2,\ldots, \tag{2.2}\] where \(\tilde{k}^{e}_{i,t}:=\ln(1+k^{e}_{i,t})\) is the log required rate of return on equity at time \(t\) of the \(i\)-th company. To keep notation simple, let \(\tilde{k}^{e}_{t}:=(\tilde{k}^{e}_{1,t},\ldots,\tilde{k}^{e}_{n,t})^{\prime}\) be an \((n\times 1)\) log required rate of return vector on equity at time \(t\), \(P_{t}:=(P_{1,t},\ldots,P_{n,t})^{\prime}\) be an \((n\times 1)\) price vector at time \(t\), and \(d_{t}:=(d_{1,t},\ldots,d_{n,t})^{\prime}\) be an \((n\times 1)\) dividend vector at time \(t\) of the companies. Then, equation (2.2) can be written in vector form \[P_{t}=\exp\{\tilde{k}^{e}_{t}\}\odot P_{t-1}-d_{t},\quad t=1,2,\ldots, \tag{2.3}\] where \(\odot\) denotes the Hadamard (element-wise) product of two vectors. It follows from equation (2.3) that the log required rate of return vector at time \(t\) is represented by \[\tilde{k}^{e}_{t}=\ln\big{(}(P_{t}+d_{t})\oslash P_{t-1}\big{)},\quad t=1,2,\ldots, \tag{2.4}\] where \(\oslash\) denotes the element-wise division of two vectors. It is worth mentioning that because the price vector and dividend vector are known at time \(t\), the value of the log required rate of return vector on equity \(\tilde{k}^{e}_{t}\) is known at time \(t\).
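As a small illustration, equation (2.4) can be evaluated directly from observed prices and dividends; the following NumPy sketch uses made-up numbers for a single company:

```python
import numpy as np

# Toy price and dividend histories for one company (all numbers made up).
P = np.array([100.0, 104.0, 101.0, 108.0])   # P_0, P_1, P_2, P_3
d = np.array([1.0, 1.0, 1.2])                # d_1, d_2, d_3

# Equation (2.4), scalar case: k_tilde_t = ln((P_t + d_t) / P_{t-1}).
k_tilde = np.log((P[1:] + d) / P[:-1])
k = np.exp(k_tilde) - 1                      # net required rates of return
```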
We assume that each of the \(n\) companies is financed by the same \(m\) types of liabilities. Let \(L_{i,j,t}\) and \(r_{i,j,t}\) be the principal outstanding and the payment (including interest) of the \(j\)-th type liability at time \(t\) of the \(i\)-th company. The principal outstanding \(L_{i,j,t}\) represents the remaining liability immediately after \(r_{i,j,t}\) has been paid. It equals the previous period's principal outstanding of the liability, accumulated for one period, minus \(r_{i,j,t}\). Therefore, we have \[L_{i,j,t}=(1+\bar{k}_{j,t-1})L_{i,j,t-1}-r_{i,j,t},\quad i=1,\ldots,n,\ j=1,\ldots,m,\ t=1,2,\ldots, \tag{2.5}\] where \(\bar{k}_{j,t-1}\) is the interest rate of the \(j\)-th type liability. It should be noted that the interest rate is known at time \(t-1\). Consequently, the sum \(L_{i,j,t}+r_{i,j,t}\) is also known at time \(t-1\). If we sum equation (2.5) over all values of \(j\), then we obtain \[L_{i,t}:=\sum_{j=1}^{m}L_{i,j,t}=\sum_{j=1}^{m}(1+\bar{k}_{j,t-1})L_{i,j,t-1}-\sum_{j=1}^{m}r_{i,j,t}=(1+k_{i,t-1})L_{i,t-1}-r_{i,t}, \tag{2.6}\] where \(L_{i,t}\) is the total liability (book value) at time \(t\), \(r_{i,t}\) is the total interest payment minus net new borrowing (\(L_{i,t}-L_{i,t-1}\)) at time \(t\), and \[k_{i,t-1}=\sum_{j=1}^{m}\bar{k}_{j,t-1}w_{i,j,t-1}, \tag{2.7}\] with \(w_{i,j,t-1}:=\frac{L_{i,j,t-1}}{L_{i,t-1}}\), is the weighted interest rate at time \(t-1\) of the \(i\)-th company. From equation (2.6), one finds that \[L_{i,t}=\frac{L_{i,t+1}+r_{i,t+1}}{1+k_{i,t}}. \tag{2.8}\] As a result, if we replace the weighted interest rate \(k_{i,t}\) in the above equation (2.8) by a weighted market interest rate, then the market value at time \(t\) of the \(i\)-th company's liabilities is obtained by \[L^{m}_{i,t}=\frac{L_{i,t+1}+r_{i,t+1}}{1+k^{\ell}_{i,t+1}}=\frac{I_{i,t}+L_{i,t}}{1+k^{\ell}_{i,t+1}}, \tag{2.9}\] where \(k^{\ell}_{i,t+1}\) is the weighted market interest rate (required rate of return of debtholders) at time \(t+1\) of the liabilities and \(I_{i,t}:=k_{i,t}L_{i,t}\) is the total interest payment at time \(t\) of the \(i\)-th company. The weighted market interest rate at time \(t+1\) of the liabilities of the \(i\)-th company is calculated by \[k^{\ell}_{i,t+1}=\sum_{j=1}^{m}\bar{k}^{\ell}_{j,t+1}w_{i,j,t}, \tag{2.10}\] where \(\bar{k}^{\ell}_{j,t+1}\) is the market interest rate at time \(t+1\) of the \(j\)-th type liability. The formula for the market value of the liabilities, given in equation (2.9), also holds for the individual liabilities, namely, \[L^{m}_{i,j,t}=\frac{I_{i,j,t}+L_{i,j,t}}{1+\bar{k}^{\ell}_{j,t+1}},\quad j=1,\ldots,m, \tag{2.11}\] where \(I_{i,j,t}:=\bar{k}_{j,t}L_{i,j,t}\) is the interest payment at time \(t\) for the \(j\)-th type liability of the \(i\)-th company. It can be shown that, similarly to equation (2.1), for successive market values of the \(i\)-th company's liabilities, we have \[L^{m}_{i,t}=(1+k^{\ell}_{i,t})L^{m}_{i,t-1}-r_{i,t},\quad t=1,2,\ldots. \tag{2.12}\]
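A short numerical sketch of equations (2.9)-(2.11) may help fix ideas (all figures below are made up for illustration):

```python
import numpy as np

# One company with two liability types; all figures are made up.
L = np.array([50.0, 30.0])        # principals outstanding L_{i,j,t}
k_bar = np.array([0.06, 0.04])    # contractual rates k_bar_{j,t}
k_mkt = np.array([0.05, 0.045])   # market rates k_bar^l_{j,t+1}

intr = k_bar * L                  # interest payments I_{i,j,t}
L_m = (intr + L) / (1 + k_mkt)    # market values per type, equation (2.11)
w = L / L.sum()                   # weights w_{i,j,t}
k_ell = w @ k_mkt                 # weighted market rate, equation (2.10)
L_m_total = (intr.sum() + L.sum()) / (1 + k_ell)   # equation (2.9)
```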
Consequently, if a company is financed by liabilities that are publicly traded on exchanges, then one can estimate the required rate of return of its debtholders using the methods that appear below in this Section. We assume that the log required rate of return vector on equities at time \(t\), \(\tilde{k}^{e}_{t}\), occupies the first \(n\) components of an \((n+\ell)\)-dimensional Markov-switching vector autoregressive (MS-VAR\((p)\)) process with order \(p\) and \(N\) regimes. Let us denote the dimension of the MS-VAR\((p)\) process by \(\tilde{n}\), i.e., \(\tilde{n}:=n+\ell\). As the log required rates of return on stocks depend on macroeconomic variables and firm-specific variables, such as GDP, inflation, key financial ratios of the companies, and so on, the last \(\ell\) components of the MS-VAR\((p)\) process \(y_{t}\) correspond to the economic variables that affect the log required rates of return on equities of the companies. The economic variables may or may not contain dividends. The MS-VAR\((p)\) process \(y_{t}\) is given by the following equation \[y_{t}=A_{0}(s_{t})\psi_{t}+A_{1}(s_{t})y_{t-1}+\cdots+A_{p}(s_{t})y_{t-p}+\xi_{t}, \tag{2.13}\] where \(y_{t}=(y_{1,t},\ldots,y_{\tilde{n},t})^{\prime}\) is an \((\tilde{n}\times 1)\) random vector, \(\psi_{t}=(\psi_{1,t},\ldots,\psi_{l,t})^{\prime}\) is an \((l\times 1)\) random vector of exogenous variables, \(\xi_{t}=(\xi_{1,t},\ldots,\xi_{\tilde{n},t})^{\prime}\) is an \((\tilde{n}\times 1)\) residual process, \(s_{t}\) is an unobserved regime at time \(t\), which is governed by a Markov chain with \(N\) states, \(A_{0}(s_{t})\) is an \((\tilde{n}\times l)\) coefficient matrix at regime \(s_{t}\) that corresponds to the vector of exogenous variables, and for \(i=1,\ldots,p\), \(A_{i}(s_{t})\) are \((\tilde{n}\times\tilde{n})\) coefficient matrices at regime \(s_{t}\) that correspond to \(y_{t-1},\ldots,y_{t-p}\). For the residual process \(\xi_{t}\), we assume that it admits the representation \(\xi_{t}:=\Sigma_{t}^{1/2}(\tilde{s}_{t})\varepsilon_{t}\), see Lutkepohl (2005) and McNeil, Frey, and Embrechts (2005), where \(\tilde{s}_{t}=(s_{1},\ldots,s_{t})^{\prime}\) is a \((t\times 1)\) vector of regimes up to and including time \(t\), and \(\Sigma_{t}^{1/2}(\tilde{s}_{t})\) is the Cholesky factor of an \((\tilde{n}\times\tilde{n})\) positive definite matrix \(\Sigma_{t}(\tilde{s}_{t})\), which is measurable with respect to the \(\sigma\)-field generated by \(\{\mathcal{F}_{t-1},\tilde{s}_{t}\}\) and depends on the coefficient matrix \(\Gamma(s_{t}):=[B_{0}(s_{t}):B_{1}(s_{t}):\cdots:B_{p_{*}+q_{*}}(s_{t})]\).
Here \(\mathcal{F}_{t}\) is a \(\sigma\)-field, defined below; \(B_{0}(s_{t})\) is an \((n_{*}\times l_{*})\) matrix; for \(i=1,\ldots,p_{*}+q_{*}\), \(B_{i}(s_{t})\) are \((n_{*}\times n_{*})\) matrices; and \(\varepsilon_{1},\ldots,\varepsilon_{T}\) is a sequence of independent, identically distributed multivariate normal random vectors with mean \(0\) and covariance matrix equal to the identity matrix. Then, in particular, for a multivariate GARCH process of order \((p_{*},q_{*})\), the dependence of \(\Sigma_{t}^{1/2}\) on \(\Gamma(s_{t})\) is given by \[\mathrm{vech}\big{(}\Sigma_{t}(\tilde{s}_{t})\big{)}=B_{0}(s_{t})+\sum_{i=1}^{p_{*}}B_{i}(s_{t})\mathrm{vech}\big{(}\xi_{t-i}\xi_{t-i}^{\prime}\big{)}+\sum_{j=1}^{q_{*}}B_{p_{*}+j}(s_{t})\mathrm{vech}\big{(}\Sigma_{t-j}(\tilde{s}_{t-j})\big{)}, \tag{2.14}\] where \(B_{0}(s_{t})\in\mathbb{R}^{n(n+1)/2}\) and \(B_{i}(s_{t})\in\mathbb{R}^{[n(n+1)/2]\times[n(n+1)/2]}\) for \(i=1,\ldots,p_{*}+q_{*}\) are suitable random vector and matrices, and \(\mathrm{vech}\) is the operator that stacks the elements on and below the main diagonal of a square matrix. If we assume that, in addition to the initial information \(\mathcal{F}_{0}:=\{y_{1-p},\ldots,y_{0},\psi_{1},\ldots,\psi_{T},\Sigma_{1-q_{*}},\ldots,\Sigma_{0}\}\), there are \(T\) observations of the MS-VAR\((p)\) process \(y_{t}\), then equation (2.13) can be compactly written as \[y_{t}=\Pi(s_{t})\mathsf{Y}_{t-1}+\xi_{t},\quad t=1,\ldots,T, \tag{2.15}\] where \(\Pi(s_{t}):=[A_{0}(s_{t}):A_{1}(s_{t}):\cdots:A_{p}(s_{t})]\) is an \((\tilde{n}\times[l+\tilde{n}p])\) coefficient matrix at regime \(s_{t}\), which consists of all the coefficient matrices, and \(\mathsf{Y}_{t-1}:=(\psi_{t}^{\prime},y_{t-1}^{\prime},\ldots,y_{t-p}^{\prime})^{\prime}\) is an \(([l+\tilde{n}p]\times 1)\) vector, which consists of the exogenous variable \(\psi_{t}\) and the last \(p\) lagged values of the process \(y_{t}\). For each regime \(j=1,\ldots,N\), let \(\pi(j):=\mathrm{vec}(\Pi(j))\) be an \(\big{(}\tilde{n}(l+\tilde{n}p)\times 1\big{)}\) vector, corresponding to the matrix \(\Pi(j)\), and \(\gamma(j):=\mathrm{vec}(\Gamma(j))\) be an \(\big{(}[n_{*}(l_{*}+n_{*}(p_{*}+q_{*}))]\times 1\big{)}\) vector, corresponding to the matrix \(\Gamma(j)\), where for a generic \((n\times m)\) matrix \(A\), \(\mathrm{vec}(A)\) is the operator that transforms \(A\) into an \((nm\times 1)\) vector by stacking its columns. For our model, the coefficient vector is \(\big{(}\pi(1)^{\prime},\gamma(1)^{\prime}\big{)}^{\prime}\) when the process is in regime 1, \(\big{(}\pi(2)^{\prime},\gamma(2)^{\prime}\big{)}^{\prime}\) when the process is in regime 2, and so on. Since we assume that the regime-switching process \(s_{t}\) is governed by a first-order homogeneous Markov chain, the conditional probability that the regime at time \(t\), \(s_{t}\), equals some particular value, conditional on the past regimes \(s_{t-1},s_{t-2},\ldots,s_{1}\), depends only on the most recent regime \(s_{t-1}\) and does not depend on time, that is, \[p_{ij}:=\mathbb{P}(s_{t}=j|s_{t-1}=i)=\mathbb{P}(s_{t}=j|s_{t-1}=i,s_{t-2}=s_{t-2},\ldots,s_{1}=s_{1}),\quad i,j=1,\ldots,N. \tag{2.16}\] If we collect all the conditional probabilities \(p_{ij}\) into a matrix \(\mathsf{P}\), then we obtain the transition probability matrix of the regime-switching process \(s_{t}\) \[\mathsf{P}=\begin{bmatrix}p_{11}&p_{12}&\ldots&p_{1N}\\ p_{21}&p_{22}&\ldots&p_{2N}\\ \vdots&\vdots&\ddots&\vdots\\ p_{N1}&p_{N2}&\ldots&p_{NN}\end{bmatrix}. \tag{2.17}\] Observe that the sums of all rows of the transition probability matrix \(\mathsf{P}\) equal 1, that is, for all \(i=1,\ldots,N\), \(p_{i1}+\cdots+p_{iN}=1\).
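To make the model concrete, the following sketch simulates a toy two-regime MS-VAR(1) process under equation (2.13) with regime-wise homoscedastic covariances (all parameter values are invented for illustration; the exogenous term is reduced to an intercept):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 2-regime, 2-dimensional MS-VAR(1); all numbers are made up.
A0 = {1: np.array([0.01, 0.02]), 2: np.array([-0.03, 0.00])}
A1 = {1: 0.3 * np.eye(2),        2: 0.7 * np.eye(2)}
Sig = {1: 0.02 * np.eye(2),      2: 0.08 * np.eye(2)}
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])      # transition matrix; rows sum to 1

T, y, s = 200, [np.zeros(2)], 1
for t in range(T):
    s = rng.choice([1, 2], p=P[s - 1])        # draw the next regime
    xi = rng.multivariate_normal(np.zeros(2), Sig[s])
    y.append(A0[s] + A1[s] @ y[-1] + xi)      # equation (2.13) with p = 1
y = np.array(y[1:])
```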
### Regime Switching Estimation This Subsection is devoted to regime-switching estimators of the parameters of the required rate of return on equity and is based on the book of Hamilton (1994). For \(t=0,\ldots,T\), let us denote the available information at time \(t\) by \(\mathcal{F}_{t}\), which consists of the required rates of return on equities, the economic variables, and the exogenous variables: \(\mathcal{F}_{t}:=(\mathcal{F}_{0},y_{1},\ldots,y_{t})^{\prime}.\) Then, it is clear that the log-likelihood function of our model is given by the following equation \[\mathcal{L}(\theta)=\sum_{t=1}^{T}\ln\big{(}f(y_{t}|\mathcal{F}_{t-1};\theta)\big{)}, \tag{2.18}\] where \(\theta:=\big{(}\pi(1)^{\prime},\ldots,\pi(N)^{\prime},\gamma(1)^{\prime},\ldots,\gamma(N)^{\prime},\rho^{\prime},\text{vec}(\mathsf{P})^{\prime}\big{)}^{\prime}\) is a vector, which consists of all population parameters of the model, and \(f(y_{t}|\mathcal{F}_{t-1};\theta)\) is the conditional density function of the random vector \(y_{t}\) given the information \(\mathcal{F}_{t-1}\). Here \(\rho:=(\mathbb{P}(s_{1}=1|\mathcal{F}_{0}),\ldots,\mathbb{P}(s_{1}=N|\mathcal{F}_{0}))^{\prime}\) is an \((N\times 1)\) initial probability vector. The log-likelihood function is used to obtain the maximum likelihood estimator of the parameter vector \(\theta\). Note that the log-likelihood function depends on all the observations, which are collected in \(\mathcal{F}_{T}\), but does not depend on the regime-switching process \(s_{t}\), whose values are unobserved. If we assume that the regime-switching process is in regime \(j\) at time \(t\), then because, conditional on the information \(\mathcal{F}_{t-1}\), \(\xi_{t}\) follows a multivariate normal distribution with mean zero and covariance matrix \(\Sigma_{t}(j)\), the conditional density function of the random vector \(y_{t}\) is given by the following equation \[\eta_{tj}:=f(y_{t}|s_{t}=j,\mathcal{F}_{t-1};\alpha)=\frac{1}{(2\pi)^{\tilde{n}/2}|\Sigma_{t}(j)|^{1/2}}\exp\bigg{\{}-\frac{1}{2}\Big{(}y_{t}-\Pi(j)\mathsf{Y}_{t-1}\Big{)}^{\prime}\Sigma_{t}^{-1}(j)\Big{(}y_{t}-\Pi(j)\mathsf{Y}_{t-1}\Big{)}\bigg{\}} \tag{2.19}\] for \(t=1,\ldots,T\) and \(j=1,\ldots,N\), where \(\alpha:=\big{(}\pi(1)^{\prime},\ldots,\pi(N)^{\prime},\gamma(1)^{\prime},\ldots,\gamma(N)^{\prime}\big{)}^{\prime}\) is a parameter vector, which differs from the vector of all parameters \(\theta\) by the initial probability vector \(\rho\) and the transition probability matrix \(\mathsf{P}\). As a result, since \(\Pi(j)\mathsf{Y}_{t-1}=\big{(}\mathsf{Y}_{t-1}^{\prime}\otimes I_{\tilde{n}}\big{)}\pi(j)\), the log of the conditional density function \(\eta_{tj}\) is represented by \[\ln(\eta_{tj})=-\frac{\tilde{n}}{2}\ln(2\pi)-\frac{1}{2}\ln(|\Sigma_{t}(j)|)-\frac{1}{2}\Big{(}y_{t}-\big{(}\mathsf{Y}_{t-1}^{\prime}\otimes I_{\tilde{n}}\big{)}\pi(j)\Big{)}^{\prime}\Sigma_{t}^{-1}(j)\Big{(}y_{t}-\big{(}\mathsf{Y}_{t-1}^{\prime}\otimes I_{\tilde{n}}\big{)}\pi(j)\Big{)}, \tag{2.20}\] where \(\otimes\) is the Kronecker product of two matrices. For all \(t=1,\ldots,T\), we collect the conditional density functions of the observation at time \(t\) into an \((N\times 1)\) vector \(\eta_{t}\), that is, \(\eta_{t}:=(\eta_{t1},\ldots,\eta_{tN})^{\prime}\).
Let us denote the probabilistic inference about the value of the regime-switching process \(s_{t}\) being equal to \(j\), based on the information \(\mathcal{F}_{t}\) and the parameter vector \(\theta\), by \(\mathbb{P}(s_{t}=j|\mathcal{F}_{t};\theta)\). Collect these conditional probabilities \(\mathbb{P}(s_{t}=j|\mathcal{F}_{t};\theta)\) for \(j=1,\ldots,N\) into an \((N\times 1)\) vector \(z_{t|t}\), that is, \(z_{t|t}:=\big{(}\mathbb{P}(s_{t}=1|\mathcal{F}_{t};\theta),\ldots,\mathbb{P}(s_{t}=N|\mathcal{F}_{t};\theta)\big{)}^{\prime}\). Also, we need a probabilistic forecast about the value of the regime-switching process at time \(t+1\) being equal to \(j\), conditional on data up to and including time \(t\). Collect these forecasts into an \((N\times 1)\) vector \(z_{t+1|t}\), that is, \(z_{t+1|t}:=\big{(}\mathbb{P}(s_{t+1}=1|\mathcal{F}_{t};\theta),\ldots,\mathbb{P}(s_{t+1}=N|\mathcal{F}_{t};\theta)\big{)}^{\prime}\). The probabilistic inference and forecast for each time \(t=1,\ldots,T\) can be found by iterating on the following pair of equations: \[z_{t|t}=\frac{(z_{t|t-1}\odot\eta_{t})}{i_{N}^{\prime}(z_{t|t-1}\odot\eta_{t})}\quad\text{and}\quad z_{t+1|t}=\mathsf{P}^{\prime}z_{t|t},\quad t=1,\ldots,T, \tag{2.21}\] where \(\eta_{t}\) is the \((N\times 1)\) vector whose \(j\)-th element is given by equation (2.19), \(\mathsf{P}\) is the \((N\times N)\) transition probability matrix, which is given by equation (2.17), and \(i_{N}\) is an \((N\times 1)\) vector whose elements equal \(1\). Given a starting value \(\rho=z_{1|0}\) and an assumed value of the population parameter vector \(\theta\), one can iterate on (2.21) for \(t=1,\ldots,T\) to calculate the values of \(z_{t|t}\) and \(z_{t+1|t}\). To obtain the MLE of the population parameters, in addition to the inferences and forecasts, we need a smoothed inference about the regime the process was in at time \(t\), based on the full information \(\mathcal{F}_{T}\). Collect these smoothed inferences into an \((N\times 1)\) vector \(z_{t|T}\), that is, \(z_{t|T}:=\big{(}\mathbb{P}(s_{t}=1|\mathcal{F}_{T};\theta),\ldots,\mathbb{P}(s_{t}=N|\mathcal{F}_{T};\theta)\big{)}^{\prime}\). The smoothed inferences can be obtained by using Kim's (1994) smoothing algorithm: \[z_{t|T}=z_{t|t}\odot\big{\{}\mathsf{P}(z_{t+1|T}\oslash z_{t+1|t})\big{\}},\quad t=T-1,\ldots,1, \tag{2.22}\] where \(\oslash\) is the element-wise division of two vectors; note that with the row-stochastic convention (2.17) for \(\mathsf{P}\), no transpose appears in (2.22). The smoothed probabilities \(z_{t|T}\) are found by iterating on (2.22) backward for \(t=T-1,\ldots,1\). This iteration is started with \(z_{T|T}\), which is obtained from (2.21) for \(t=T\).
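The filtering and smoothing recursions (2.21) and (2.22) translate directly into code. Below is a minimal NumPy sketch, assuming the densities \(\eta_{tj}\) of equation (2.19) have already been evaluated and stored row-wise in `eta` (the function name and array layout are our own):

```python
import numpy as np

def hamilton_filter_smoother(eta, P, rho):
    """Filtered, predicted, and smoothed regime probabilities.

    eta: (T, N) conditional densities eta_{tj} from equation (2.19);
    P:   (N, N) row-stochastic transition matrix; rho: (N,) initial probs.
    Implements the recursions (2.21) and Kim's smoother (2.22).
    """
    T, N = eta.shape
    z_tt = np.zeros((T, N))          # z_{t|t}
    z_tp = np.zeros((T, N))          # z_{t|t-1}
    z_tp[0] = rho
    for t in range(T):
        num = z_tp[t] * eta[t]
        z_tt[t] = num / num.sum()                    # first part of (2.21)
        if t + 1 < T:
            z_tp[t + 1] = P.T @ z_tt[t]              # second part of (2.21)
    z_tT = np.zeros((T, N))          # z_{t|T}
    z_tT[-1] = z_tt[-1]
    for t in range(T - 2, -1, -1):
        z_tT[t] = z_tt[t] * (P @ (z_tT[t + 1] / z_tp[t + 1]))   # (2.22)
    return z_tt, z_tp, z_tT
```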
If the initial probability \(\rho\) does not depend on the other parameters, then, according to Hamilton (1990), the maximum likelihood estimators of the \((i,j)\)-th element of the transition probability matrix \(\mathsf{P}\), the parameter vector \(\alpha\) that governs the conditional density functions (2.19), and the initial probability \(\rho\) are obtained from the following system of equations \[\hat{p}_{ij}=\frac{\sum_{t=2}^{T}\mathbb{P}\big{(}s_{t-1}=i,s_{t}=j|\mathcal{F}_{T};\hat{\theta}\big{)}}{\sum_{t=2}^{T}(z_{t-1|T})_{i}}, \tag{2.23}\] \[0=\sum_{t=1}^{T}\bigg{(}\frac{\partial\ln(\eta_{t})}{\partial\alpha^{\prime}}\bigg{)}^{\prime}z_{t|T}, \tag{2.24}\] \[\hat{\rho}=z_{1|T}, \tag{2.25}\] where \(\partial\ln(\eta_{t})/\partial\alpha^{\prime}\) is an \(\big{(}N\times[\tilde{n}(l+\tilde{n}p)+n_{*}(l_{*}+n_{*}(p_{*}+q_{*}))]\big{)}\) matrix of derivatives of the logs of the conditional densities, and, due to Kim's smoothing algorithm, the numerator of equation (2.23) can be calculated by \[\mathbb{P}\big{(}s_{t-1}=i,s_{t}=j|\mathcal{F}_{T};\theta\big{)}=p_{ij}(z_{t|T})_{j}(z_{t-1|t-1})_{i}/(z_{t|t-1})_{j}. \tag{2.26}\] To simplify the notation for the MLEs that correspond to the parameter vector \(\alpha\), for each regime \(j=1,\ldots,N\), let \(\bar{\mathsf{Y}}_{j}:=\big{[}\bar{\mathsf{Y}}_{0,j}:\cdots:\bar{\mathsf{Y}}_{T-1,j}\big{]}\) be an \(\big{(}[l+\tilde{n}p]\times T\big{)}\) matrix, which is adjusted by the regime \(j\) and whose \(t\)-th column is given by the \(\big{(}[l+\tilde{n}p]\times 1\big{)}\) vector \(\bar{\mathsf{Y}}_{t-1,j}:=\mathsf{Y}_{t-1}\sqrt{(z_{t|T})_{j}}\), and let \(\bar{y}_{j}:=\big{[}\bar{y}_{1,j}:\cdots:\bar{y}_{T,j}\big{]}\) be an \(\big{(}\tilde{n}\times T\big{)}\) matrix, which is adjusted by the regime \(j\) and whose \(t\)-th column is given by the \(\big{(}\tilde{n}\times 1\big{)}\) vector \(\bar{y}_{t,j}:=y_{t}\sqrt{(z_{t|T})_{j}}\). Firstly, let us assume that, for each \(j=1,\ldots,N\), the covariance matrix at regime \(j\) is homoscedastic, that is, \(\Sigma_{t}(j)=\Sigma(j)\). Then, according to equation (2.20), the partial derivatives of the log conditional density function \(\ln(\eta_{tj})\) with respect to the vectors \(\pi(m)\), \(m=1,\ldots,N\), are given by \[\frac{\partial\ln(\eta_{tj})}{\partial\pi(m)^{\prime}}=\begin{cases}\big{(}y_{t}-\big{(}\mathsf{Y}^{\prime}_{t-1}\otimes I_{\tilde{n}}\big{)}\pi(j)\big{)}^{\prime}\Sigma^{-1}(j)\big{(}\mathsf{Y}^{\prime}_{t-1}\otimes I_{\tilde{n}}\big{)}&\text{for}\quad j=m,\\ 0&\text{for}\quad j\neq m.\end{cases} \tag{2.27}\] Thus, due to equation (2.24), one gets that \[\sum_{t=1}^{T}\Big{(}\bar{y}_{t,j}-\big{(}\bar{\mathsf{Y}}^{\prime}_{t-1,j}\otimes I_{\tilde{n}}\big{)}\pi(j)\Big{)}^{\prime}\Sigma^{-1}(j)\big{(}\bar{\mathsf{Y}}^{\prime}_{t-1,j}\otimes I_{\tilde{n}}\big{)}=0 \tag{2.28}\] for \(j=1,\ldots,N\). Consequently, for each regime \(j=1,\ldots,N\), the ML estimator of the parameter vector \(\pi(j)\) is obtained by \[\hat{\pi}(j):=\Bigg{(}\sum_{t=1}^{T}\big{(}\bar{\mathsf{Y}}_{t-1,j}\otimes I_{\tilde{n}}\big{)}\Sigma^{-1}(j)\big{(}\bar{\mathsf{Y}}^{\prime}_{t-1,j}\otimes I_{\tilde{n}}\big{)}\Bigg{)}^{-1}\sum_{t=1}^{T}\big{(}\bar{\mathsf{Y}}_{t-1,j}\otimes I_{\tilde{n}}\big{)}\Sigma^{-1}(j)\bar{y}_{t,j}. \tag{2.29}\]
Since \(\big{(}\bar{\mathsf{Y}}_{t-1,j}\otimes I_{\tilde{n}}\big{)}\Sigma^{-1}(j)=\big{(}\bar{\mathsf{Y}}_{t-1,j}\otimes\Sigma^{-1}(j)\big{)}\), we find that \[\sum_{t=1}^{T}\big{(}\bar{\mathsf{Y}}_{t-1,j}\otimes I_{\tilde{n}}\big{)}\Sigma^{-1}(j)\big{(}\bar{\mathsf{Y}}^{\prime}_{t-1,j}\otimes I_{\tilde{n}}\big{)}=\Big{(}\bar{\mathsf{Y}}_{j}\bar{\mathsf{Y}}^{\prime}_{j}\otimes\Sigma^{-1}(j)\Big{)} \tag{2.30}\] and \[\sum_{t=1}^{T}\big{(}\bar{\mathsf{Y}}_{t-1,j}\otimes I_{\tilde{n}}\big{)}\Sigma^{-1}(j)\bar{y}_{t,j}=\Big{(}\bar{\mathsf{Y}}_{j}\otimes\Sigma^{-1}(j)\Big{)}\text{vec}(\bar{y}_{j}). \tag{2.31}\] Therefore, the ML estimator \(\hat{\pi}(j)\) is represented by \[\hat{\pi}(j)=\text{vec}\big{(}\hat{\Pi}(j)\big{)}=\Big{(}\big{(}\bar{\mathsf{Y}}_{j}\bar{\mathsf{Y}}^{\prime}_{j}\big{)}^{-1}\bar{\mathsf{Y}}_{j}\otimes I_{\tilde{n}}\Big{)}\text{vec}(\bar{y}_{j})=\text{vec}\Big{(}\bar{y}_{j}\bar{\mathsf{Y}}^{\prime}_{j}\big{(}\bar{\mathsf{Y}}_{j}\bar{\mathsf{Y}}^{\prime}_{j}\big{)}^{-1}\Big{)}. \tag{2.32}\] As a result, for each regime \(j=1,\ldots,N\), the ML estimator of the parameter \(\Pi(j)\) is given by the following equation \[\hat{\Pi}(j)=\bar{y}_{j}\bar{\mathsf{Y}}^{\prime}_{j}\big{(}\bar{\mathsf{Y}}_{j}\bar{\mathsf{Y}}^{\prime}_{j}\big{)}^{-1},\quad j=1,\ldots,N. \tag{2.33}\] On the other hand, due to equation (2.20), we have \[\frac{\partial\ln(\eta_{tj})}{\partial\Sigma(m)}=\begin{cases}-\frac{1}{2}\Sigma^{-1}(j)+\frac{1}{2}\Sigma^{-1}(j)\Big{(}y_{t}-\big{(}\mathsf{Y}^{\prime}_{t-1}\otimes I_{\tilde{n}}\big{)}\pi(j)\Big{)}\Big{(}y_{t}-\big{(}\mathsf{Y}^{\prime}_{t-1}\otimes I_{\tilde{n}}\big{)}\pi(j)\Big{)}^{\prime}\Sigma^{-1}(j)&\text{for}\quad j=m,\\ 0&\text{for}\quad j\neq m.\end{cases} \tag{2.34}\] Consequently, by equation (2.24), the ML estimator of the parameter \(\Sigma(j)\) is obtained by \[\hat{\Sigma}(j)=\frac{1}{\sum_{t=1}^{T}(z_{t|T})_{j}}\sum_{t=1}^{T}\Big{(}\bar{y}_{t,j}-\hat{\Pi}(j)\bar{\mathsf{Y}}_{t-1,j}\Big{)}\Big{(}\bar{y}_{t,j}-\hat{\Pi}(j)\bar{\mathsf{Y}}_{t-1,j}\Big{)}^{\prime} \tag{2.35}\] for \(j=1,\ldots,N\). Secondly, we suppose that, for each \(j=1,\ldots,N\), the covariance matrix is homoscedastic and does not depend on the regimes, \(\Sigma_{t}(j)=\Sigma\). Then, similarly to before, it can be shown that the maximum likelihood estimators of the parameters \(\Pi(j)\) and \(\Sigma\) are obtained by \[\hat{\Pi}(j)=\bar{y}_{j}\bar{\mathsf{Y}}^{\prime}_{j}(\bar{\mathsf{Y}}_{j}\bar{\mathsf{Y}}^{\prime}_{j})^{-1} \tag{2.36}\] for \(j=1,\ldots,N\) and \[\hat{\Sigma}=\frac{1}{T}\sum_{t=1}^{T}\sum_{j=1}^{N}\Big{(}\bar{y}_{t,j}-\hat{\Pi}(j)\bar{\mathsf{Y}}_{t-1,j}\Big{)}\Big{(}\bar{y}_{t,j}-\hat{\Pi}(j)\bar{\mathsf{Y}}_{t-1,j}\Big{)}^{\prime}. \tag{2.37}\] Thirdly, we assume that there is one regime (\(N=1\)) and the covariance matrix is homoscedastic, \(\Sigma_{t}(j)=\Sigma\) and \(\Pi(j)=\Pi\). Then, as before, the maximum likelihood estimators of the parameters \(\Pi\) and \(\Sigma\) are found by \[\hat{\Pi}=\bar{y}\bar{\mathsf{Y}}^{\prime}(\bar{\mathsf{Y}}\bar{\mathsf{Y}}^{\prime})^{-1} \tag{2.38}\] and \[\hat{\Sigma}=\frac{1}{T}\sum_{t=1}^{T}\Big{(}y_{t}-\hat{\Pi}\mathsf{Y}_{t-1}\Big{)}\Big{(}y_{t}-\hat{\Pi}\mathsf{Y}_{t-1}\Big{)}^{\prime}, \tag{2.39}\] where \(\bar{\mathsf{Y}}:=\big{[}\mathsf{Y}_{0}:\cdots:\mathsf{Y}_{T-1}\big{]}\) and \(\bar{y}:=\big{[}y_{1}:\cdots:y_{T}\big{]}\).
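The regime-weighted estimators (2.33) and (2.35) amount to weighted least squares per regime. A minimal NumPy sketch, assuming the smoothed probabilities have been obtained from the Kim smoother above (function and variable names are ours):

```python
import numpy as np

def ml_estimates(y, Y, z_tT):
    """Regime-wise ML estimators (2.33) and (2.35).

    y:    (n_tilde, T) observations y_1, ..., y_T;
    Y:    (l + n_tilde * p, T) stacked regressors Y_0, ..., Y_{T-1};
    z_tT: (T, N) smoothed regime probabilities.
    """
    N = z_tT.shape[1]
    Pi, Sigma = [], []
    for j in range(N):
        w = np.sqrt(z_tT[:, j])          # sqrt-weights per observation
        yb, Yb = y * w, Y * w            # regime-adjusted data matrices
        Pi_j = yb @ Yb.T @ np.linalg.inv(Yb @ Yb.T)    # equation (2.33)
        E = yb - Pi_j @ Yb               # weighted residuals
        Sigma_j = (E @ E.T) / z_tT[:, j].sum()         # equation (2.35)
        Pi.append(Pi_j)
        Sigma.append(Sigma_j)
    return Pi, Sigma
```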
Fourthly, we assume that there is one regime (\(N=1\)), one company (\(n=1\)), no exogenous variables except the constant \(1\), no economic variables, that the order of the AR process equals \(0\), and that the variance of the white noise process \(\xi_{t}\) is homoscedastic, \(\operatorname{Var}(\xi_{t})=\sigma^{2}\). Under these assumptions, equation (2.13) becomes the \(\operatorname{AR}(0)\) process \[\tilde{k}_{t}=a_{0}+\xi_{t}, \tag{2.40}\] where \(\tilde{k}_{t}\) is the log required rate of return on equity of the company. Then, it follows from equations (2.38) and (2.39) that the maximum likelihood estimators of the parameters \(a_{0}\) and \(\sigma^{2}\) are obtained by \[\hat{a}_{0}=\frac{1}{T}\sum_{t=1}^{T}\tilde{k}_{t}\quad\text{and}\quad\hat{\sigma}^{2}=\frac{1}{T}\sum_{t=1}^{T}(\tilde{k}_{t}-\hat{a}_{0})^{2}. \tag{2.41}\] Consequently, the exponential of the maximum likelihood estimator of the parameter \(a_{0}\) equals the geometric average of the gross required rates of return, \[\exp\{\hat{a}_{0}\}=\sqrt[T]{(1+k_{1})\ldots(1+k_{T})}, \tag{2.42}\] and \((1-\alpha)100\%\) confidence intervals of the parameters \(a_{0}\) and \(\sigma^{2}\) are \[\hat{a}_{0}-t_{1-\alpha/2}(T-1)\frac{\hat{\sigma}}{\sqrt{T-1}}\leq a_{0}\leq\hat{a}_{0}+t_{1-\alpha/2}(T-1)\frac{\hat{\sigma}}{\sqrt{T-1}} \tag{2.43}\] and \[\frac{T\hat{\sigma}^{2}}{\chi_{1-\alpha/2}^{2}(T-1)}\leq\sigma^{2}\leq\frac{T\hat{\sigma}^{2}}{\chi_{\alpha/2}^{2}(T-1)}, \tag{2.44}\] where \(t_{1-\alpha/2}(T-1)\) is the \((1-\alpha/2)\) quantile of the Student \(t\) distribution with \((T-1)\) degrees of freedom and \(\chi_{\alpha/2}^{2}(T-1)\) is the \(\alpha/2\) quantile of the chi-square distribution with \((T-1)\) degrees of freedom. From equation (2.40), a point prediction of the log required rate of return on equity equals \(\tilde{k}=\hat{a}_{0}\). Let us assume that the true value of the prediction is \(\tilde{k}_{0}=a_{0}+\xi_{0}\). Then, the prediction error equals \(e_{0}:=\tilde{k}_{0}-\tilde{k}=\xi_{0}\), and it is clear that \(e_{0}/\sigma\sim\mathcal{N}(0,1)\). The ML estimator of the parameter \(\sigma^{2}\) can be written as \[\hat{\sigma}^{2}=\frac{1}{T}\sum_{t=1}^{T}(\tilde{k}_{t}-\hat{a}_{0})^{2}=\frac{1}{T}\sum_{t=1}^{T}(\xi_{t}-\bar{\xi})^{2}=\frac{1}{T}\xi^{\prime}A\xi, \tag{2.45}\] where \(\xi:=(\xi_{1},\ldots,\xi_{T})^{\prime}\) is a \((T\times 1)\) vector, \(\bar{\xi}=\frac{1}{T}\sum_{t=1}^{T}\xi_{t}\) is the mean of the vector \(\xi\), and \(A:=I_{T}-\frac{1}{T}i_{T}i_{T}^{\prime}\) is a \((T\times T)\) symmetric idempotent matrix with rank \(T-1\). Since \(\xi\sim\mathcal{N}(0,\sigma^{2}I_{T})\), it holds that \(\frac{1}{\sigma^{2}}\xi^{\prime}A\xi\sim\chi^{2}(T-1)\), see, e.g., Johnston and DiNardo (1997). Because \(\xi_{0}\) is independent of \(\xi\), one finds that \[\frac{e_{0}}{\sigma}\bigg{/}\sqrt{\frac{\frac{1}{\sigma^{2}}\xi^{\prime}A\xi}{T-1}}=\frac{\tilde{k}_{0}-\tilde{k}}{\hat{\sigma}}\sqrt{\frac{T-1}{T}}\sim t(T-1). \tag{2.46}\] Consequently, a \((1-\alpha)100\%\) confidence interval for the log required rate of return on equity is given by the following equation \[\hat{a}_{0}-t_{1-\alpha/2}(T-1)\sqrt{\frac{T}{T-1}}\hat{\sigma}\leq\tilde{k}_{0}\leq\hat{a}_{0}+t_{1-\alpha/2}(T-1)\sqrt{\frac{T}{T-1}}\hat{\sigma}. \tag{2.47}\] As a result, a \((1-\alpha)100\%\) confidence interval for the required rate of return on equity is \[\exp\left\{\hat{a}_{0}-t_{1-\alpha/2}(T-1)\sqrt{\frac{T}{T-1}}\hat{\sigma}\right\}-1\leq k_{0}\leq\exp\left\{\hat{a}_{0}+t_{1-\alpha/2}(T-1)\sqrt{\frac{T}{T-1}}\hat{\sigma}\right\}-1. \tag{2.48}\] These confidence bands will be used in Section 4.
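A short sketch of the AR(0) estimators (2.41) and the confidence band (2.48), using made-up return data (illustrative only):

```python
import numpy as np
from scipy import stats

# Annual net required rates of return for one firm (made-up numbers).
k = np.array([0.12, -0.05, 0.08, 0.15, 0.03, 0.10])
T = len(k)

k_tilde = np.log1p(k)                  # log required rates of return
a0_hat = k_tilde.mean()                # ML estimator in (2.41)
sigma_hat = np.sqrt(((k_tilde - a0_hat) ** 2).mean())

# 95% confidence band for the required rate of return, equation (2.48).
q = stats.t.ppf(0.975, T - 1)
half = q * np.sqrt(T / (T - 1)) * sigma_hat
lo, hi = np.expm1(a0_hat - half), np.expm1(a0_hat + half)
```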
The maximum likelihood estimator of the parameter vector \(\theta\) is obtained by the zig-zag iteration method using equations (2.21)-(2.25), (2.33), (2.35), and (2.37). ### The Bayesian Estimation The VAR(\(p\)) process is the workhorse model of empirical macroeconomics. However, if the number of variables in the system increases or the lag order is chosen large, then too many parameters need to be estimated. This reduces the degrees of freedom of the model and entails a risk of overparametrization. For this reason, in this Subsection, we consider the Bayesian analysis of the VAR(\(p\)) process \(y_{t}\). In order to simplify calculations, we assume that our model has one regime, that is, \(N=1\). Under this assumption, our model (2.15) is given by \[y_{t}=\Pi\mathsf{Y}_{t-1}+\xi_{t},\quad t=1,\ldots,T, \tag{2.49}\] where \(y_{t}\) is the \((\tilde{n}\times 1)\) vector, which includes the log required rate of return vector on equity \(\tilde{k}^{e}_{t}\), \(\Pi\) is the \((\tilde{n}\times[l+\tilde{n}p])\) random matrix, \(\mathsf{Y}_{t-1}:=\big{(}\psi_{t}^{\prime},y_{t-1}^{\prime},\ldots,y_{t-p}^{\prime}\big{)}^{\prime}\) is the \(([l+\tilde{n}p]\times 1)\) vector, and, conditional on \(\Sigma\), \(\xi_{t}\) is the \((\tilde{n}\times 1)\) white noise process with a random covariance matrix \(\Sigma=\text{Var}(\xi_{t})\). To obtain the Bayesian estimator of the model, we need two representations of the VAR(\(p\)) process \(y_{t}\), namely * the first one is \[\tilde{y}_{T}=\tilde{\mathsf{Y}}_{T}\pi+\tilde{\xi}_{T}, \tag{2.50}\] where for an integer \(j\in\mathbb{Z}\), \(\tilde{y}_{T+j}=(y_{1+j}^{\prime},\ldots,y_{T+j}^{\prime})^{\prime}\) is an \(([\tilde{n}T]\times 1)\) random vector, \(\tilde{\mathsf{Y}}_{T+j}:=\big{[}\mathsf{Y}_{j}\otimes I_{\tilde{n}}:\cdots:\mathsf{Y}_{T+j-1}\otimes I_{\tilde{n}}\big{]}^{\prime}\) is an \(\big{(}[\tilde{n}T]\times[\tilde{n}(l+\tilde{n}p)]\big{)}\) matrix obtained by stacking the blocks \(\mathsf{Y}_{t-1}^{\prime}\otimes I_{\tilde{n}}\), \(\pi:=\text{vec}(\Pi)\) is an \(\big{(}[\tilde{n}(l+\tilde{n}p)]\times 1\big{)}\) vector, which is a vectorization of the random matrix \(\Pi\), and, conditional on \(\Sigma\), \(\tilde{\xi}_{T+j}:=(\xi_{1+j}^{\prime},\ldots,\xi_{T+j}^{\prime})^{\prime}\) is an \((\tilde{n}T\times 1)\) white noise vector and its distribution is \(\tilde{\xi}_{T+j}|\Sigma\sim\mathcal{N}(0,I_{T}\otimes\Sigma)\). From this representation, the likelihood function is obtained by \[f(\tilde{y}_{T}|\pi,\Sigma,\tilde{\mathsf{Y}}_{T})=\frac{1}{(2\pi)^{\tilde{n}T/2}|\Sigma|^{T/2}}\exp\bigg{\{}-\frac{1}{2}\Big{(}\tilde{y}_{T}-\tilde{\mathsf{Y}}_{T}\pi\Big{)}^{\prime}\big{(}I_{T}\otimes\Sigma^{-1}\big{)}\Big{(}\tilde{y}_{T}-\tilde{\mathsf{Y}}_{T}\pi\Big{)}\bigg{\}} \tag{2.51}\] * and the second one is \[\bar{y}_{T}=\Pi\bar{\mathsf{Y}}_{T}+\bar{\xi}_{T}, \tag{2.52}\] where for an integer \(j\in\mathbb{Z}\), \(\bar{y}_{T+j}:=[y_{1+j}:\cdots:y_{T+j}]\) is an \((\tilde{n}\times T)\) matrix, \(\bar{\mathsf{Y}}_{T+j}:=[\mathsf{Y}_{j}:\cdots:\mathsf{Y}_{T+j-1}]\) is an \(([l+\tilde{n}p]\times T)\) matrix, and \(\bar{\xi}_{T+j}:=[\xi_{1+j}:\cdots:\xi_{T+j}]\) is an \((\tilde{n}\times T)\) white noise matrix. It is a well-known fact that for suitable matrices \(A,B,C,D\), \[{\rm vec}(A)^{\prime}(B\otimes C){\rm vec}(D)={\rm tr}(DB^{\prime}A^{\prime}C). \tag{2.53}\] As a result, the likelihood function can be written as \[f(\bar{y}_{T}|\Pi,\Sigma,\bar{\mathsf{Y}}_{T})=\frac{1}{(2\pi)^{\tilde{n}T/2}|\Sigma|^{T/2}}\exp\bigg{\{}-\frac{1}{2}{\rm tr}\Big{(}\big{(}\bar{y}_{T}-\Pi\bar{\mathsf{Y}}_{T}\big{)}\big{(}\bar{y}_{T}-\Pi\bar{\mathsf{Y}}_{T}\big{)}^{\prime}\Sigma^{-1}\Big{)}\bigg{\}}. \tag{2.54}\]
In the Bayesian analysis, one assumes that an analyst has a prior probability belief \(f(\theta)\) about the unknown parameter \(\theta:=(\Pi,\Sigma)\), where \(f(\theta)\) is a prior density function of the parameter \(\theta\). Let us assume that the prior density functions of the parameters \(\pi\) and \(\Sigma\) are multivariate normal with mean \(\pi_{0}\) and covariance matrix \((\Sigma\otimes\Lambda_{0})\) conditional on \(\Sigma\), and inverse-Wishart with shape parameter \(\nu_{0}\) and scale matrix \(V_{0}\), respectively, where \(\Lambda_{0}\) is an \(([l+\tilde{n}p]\times[l+\tilde{n}p])\) matrix, \(\nu_{0}\) is a real number such that \(\nu_{0}>\tilde{n}-1\), and \(V_{0}\) is an \((\tilde{n}\times\tilde{n})\) positive definite matrix. Thus, due to equation (2.53), the prior density functions are proportional to \[f(\Sigma|\nu_{0},V_{0})\propto|\Sigma|^{-(\nu_{0}+\tilde{n}+1)/2}\exp\bigg{\{}-\frac{1}{2}{\rm tr}\Big{(}V_{0}\Sigma^{-1}\Big{)}\bigg{\}} \tag{2.55}\] and \[f(\pi|\Sigma,\pi_{0},\Lambda_{0})\propto|\Sigma|^{-(l+\tilde{n}p)/2}\exp\bigg{\{}-\frac{1}{2}\Big{(}\big{(}\pi-\pi_{0}\big{)}^{\prime}\big{(}\Sigma^{-1}\otimes\Lambda_{0}^{-1}\big{)}\big{(}\pi-\pi_{0}\big{)}\Big{)}\bigg{\}}=|\Sigma|^{-(l+\tilde{n}p)/2}\exp\bigg{\{}-\frac{1}{2}{\rm tr}\Big{(}\big{(}\Pi-\Pi_{0}\big{)}\Lambda_{0}^{-1}\big{(}\Pi-\Pi_{0}\big{)}^{\prime}\Sigma^{-1}\Big{)}\bigg{\}}, \tag{2.56}\] where \(\propto\) is the notation of proportionality and \(\Pi_{0}\) is an \((\tilde{n}\times[l+\tilde{n}p])\) known matrix, which satisfies \(\pi_{0}={\rm vec}(\Pi_{0})\). From the conditional density function in equation (2.56), one can deduce that the analyst's best guess of the parameter \(\pi\) is the vector \(\pi_{0}\), and the confidence in this guess is summarized by the matrix \((\Sigma\otimes\Lambda_{0})\); less confidence is represented by larger diagonal elements of \(\Lambda_{0}\). After the values of \(\bar{y}_{T}\) and \(\bar{\mathsf{Y}}_{T}\) are observed, the likelihood function \(f(\bar{y}_{T}|\Pi,\Sigma,\bar{\mathsf{Y}}_{T})\) will update our beliefs about the parameter \((\Pi,\Sigma)\). This leads to a posterior density function \(f(\Pi,\Sigma|\bar{y}_{T},\bar{\mathsf{Y}}_{T})\). For each numerical value of the parameter \((\Pi,\Sigma)\), the posterior density \(f(\Pi,\Sigma|\bar{y}_{T},\bar{\mathsf{Y}}_{T})\) describes our belief that \((\Pi,\Sigma)\) is the true value, having observed the values of \(\bar{y}_{T}\) and \(\bar{\mathsf{Y}}_{T}\). It follows from equations (2.54)-(2.56) that the posterior density of the parameter \((\Pi,\Sigma)\) is given by \[f(\Pi,\Sigma|\bar{y}_{T},\bar{\mathsf{Y}}_{T})\propto f(\Pi|\Sigma,\pi_{0},\Lambda_{0})f(\Sigma|\nu_{0},V_{0})f(\bar{y}_{T}|\Pi,\Sigma,\bar{\mathsf{Y}}_{T})\propto|\Sigma|^{-(\nu_{0}+l+\tilde{n}p+T+\tilde{n}+1)/2}\exp\bigg{\{}-\frac{1}{2}{\rm tr}\Big{(}\Big{(}V_{0}+\big{(}\Pi-\Pi_{0}\big{)}\Lambda_{0}^{-1}\big{(}\Pi-\Pi_{0}\big{)}^{\prime}+\big{(}\bar{y}_{T}-\Pi\bar{\mathsf{Y}}_{T}\big{)}\big{(}\bar{y}_{T}-\Pi\bar{\mathsf{Y}}_{T}\big{)}^{\prime}\Big{)}\Sigma^{-1}\Big{)}\bigg{\}}. \tag{2.57}\] Let us consider the sum of the terms corresponding to the prior density of the parameter \(\Pi\) and the likelihood function in the last line of the above equation.
Then, it can be shown that \[(\Pi-\Pi_{0})\Lambda_{0}^{-1}(\Pi-\Pi_{0})^{\prime}+(\bar{y}_{T}-\Pi\bar{\mathsf{Y}}_{T})(\bar{y}_{T}-\Pi\bar{\mathsf{Y}}_{T})^{\prime}=(\Pi-\Pi_{*|T})(\Lambda_{0}^{-1}+\bar{\mathsf{Y}}_{T}\bar{\mathsf{Y}}_{T}^{\prime})(\Pi-\Pi_{*|T})^{\prime}-\Pi_{*|T}(\Lambda_{0}^{-1}+\bar{\mathsf{Y}}_{T}\bar{\mathsf{Y}}_{T}^{\prime})\Pi_{*|T}^{\prime}+\Pi_{0}\Lambda_{0}^{-1}\Pi_{0}^{\prime}+\bar{y}_{T}\bar{y}_{T}^{\prime}, \tag{2.58}\] where \(\Pi_{*|T}=(\Pi_{0}\Lambda_{0}^{-1}+\bar{y}_{T}\bar{\mathsf{Y}}_{T}^{\prime})(\Lambda_{0}^{-1}+\bar{\mathsf{Y}}_{T}\bar{\mathsf{Y}}_{T}^{\prime})^{-1}\). Consequently, according to equation (2.58), the posterior density of the parameter \((\Pi,\Sigma)\) takes the form of a multivariate normal density times an inverse-Wishart density \[f(\Pi,\Sigma|\bar{y}_{T},\bar{\mathsf{Y}}_{T})=f\big{(}\pi\big{|}\Sigma,\pi_{*|T},\Lambda_{*|T},\bar{y}_{T},\bar{\mathsf{Y}}_{T}\big{)}f\big{(}\Sigma\big{|}\nu_{*},V_{*|T},\bar{y}_{T},\bar{\mathsf{Y}}_{T}\big{)}, \tag{2.59}\] where \[f\big{(}\pi\big{|}\Sigma,\pi_{*|T},\Lambda_{*|T},\bar{y}_{T},\bar{\mathsf{Y}}_{T}\big{)}=\frac{1}{(2\pi)^{[\tilde{n}(l+\tilde{n}p)]/2}|\Lambda_{*|T}|^{\tilde{n}/2}|\Sigma|^{(l+\tilde{n}p)/2}}\exp\bigg{\{}-\frac{1}{2}\Big{(}\pi-\pi_{*|T}\Big{)}^{\prime}\big{(}\Sigma^{-1}\otimes\Lambda_{*|T}^{-1}\big{)}\Big{(}\pi-\pi_{*|T}\Big{)}\bigg{\}} \tag{2.60}\] with \[\pi_{*|T}:=\mathrm{vec}(\Pi_{*|T})\quad\text{and}\quad\Lambda_{*|T}^{-1}:=\Lambda_{0}^{-1}+\bar{\mathsf{Y}}_{T}\bar{\mathsf{Y}}_{T}^{\prime} \tag{2.61}\] and \[f\big{(}\Sigma\big{|}\nu_{*},V_{*|T},\bar{y}_{T},\bar{\mathsf{Y}}_{T}\big{)}=\frac{|V_{*|T}|^{\nu_{*}/2}}{2^{\tilde{n}\nu_{*}/2}\Gamma_{\tilde{n}}(\nu_{*}/2)}|\Sigma|^{-(\nu_{*}+\tilde{n}+1)/2}\exp\bigg{\{}-\frac{1}{2}\mathrm{tr}\Big{(}V_{*|T}\Sigma^{-1}\Big{)}\bigg{\}} \tag{2.62}\] with \[\nu_{*}:=\nu_{0}+l+\tilde{n}p+T \tag{2.63}\] and \[V_{*|T}:=V_{0}-\Pi_{*|T}(\Lambda_{0}^{-1}+\bar{\mathsf{Y}}_{T}\bar{\mathsf{Y}}_{T}^{\prime})\Pi_{*|T}^{\prime}+\Pi_{0}\Lambda_{0}^{-1}\Pi_{0}^{\prime}+\bar{y}_{T}\bar{y}_{T}^{\prime}. \tag{2.64}\] Note that if \(\Lambda_{0}^{-1}\to 0\), which corresponds to an uninformative diffuse prior, then the posterior mean (2.65) converges to the maximum likelihood estimator \(\hat{\Pi}=\bar{y}_{T}\bar{\mathsf{Y}}_{T}^{\prime}(\bar{\mathsf{Y}}_{T}\bar{\mathsf{Y}}_{T}^{\prime})^{-1}\). By the tower property of conditional expectation, the Bayesian estimator of the parameter matrix \(\Pi\) is obtained by \[\Pi_{*|T}=\mathbb{E}(\Pi|\bar{y}_{T},\bar{\mathsf{Y}}_{T})=(\Pi_{0}\Lambda_{0}^{-1}+\bar{y}_{T}\bar{\mathsf{Y}}_{T}^{\prime})(\Lambda_{0}^{-1}+\bar{\mathsf{Y}}_{T}\bar{\mathsf{Y}}_{T}^{\prime})^{-1}. \tag{2.65}\] Due to the expectation formula of the inverse-Wishart distribution, the Bayesian estimator of the parameter \(\Sigma\) is given by \[\Sigma_{*|T}:=\mathbb{E}(\Sigma|\bar{y}_{T},\bar{\mathsf{Y}}_{T})=\frac{1}{\nu_{*}-\tilde{n}-1}V_{*|T}. \tag{2.66}\] To make statistical inferences about the parameter vector \(\theta=(\Pi,\Sigma)\) conditional on the information \(\bar{y}_{T}\) and \(\bar{\mathsf{Y}}_{T}\), one may use the Gibbs sampling method, which generates a dependent sequence of draws of our parameters.
In Bayesian statistics, Gibbs sampling is often used when the joint distribution is not known explicitly or is difficult to sample from directly, but the conditional distribution of each variable is known and is easy to sample from. Constructing the Gibbs sampler to approximate the joint posterior distribution \(f(\Pi,\Sigma|\bar{y}_{T},\bar{\mathsf{Y}}_{T})\) given in equation (2.59) is straightforward: new values \(\big{(}\pi_{(s)},\Sigma_{(s)}\big{)}\), \(s=1,\ldots,N\), can be generated by 1. sampling \(\Sigma_{(s)}\sim\mathcal{IW}(\nu_{*},V_{*|T})\), and 2. sampling \(\pi_{(s)}\sim\mathcal{N}\big{(}\pi_{*|T},\Sigma_{(s)}\otimes\Lambda_{*|T}\big{)}\), where \(\mathcal{IW}\) is an abbreviation of the inverse-Wishart distribution, and the parameters \(\nu_{*}\) and \(V_{*|T}\) of the inverse-Wishart distribution and the mean \(\pi_{*|T}\) and the matrix \(\Lambda_{*|T}\) of the multivariate normal distribution are given in equations (2.63)-(2.64) and (2.61), respectively. As mentioned before, VARs tend to have a lot of parameters, and large VARs exacerbate this problem. In particular, for a VAR(\(p\)) process with order \(p=3\), \(l=1\) exogenous variable, and \(\tilde{n}=15\) endogenous variables, we have to estimate \(\tilde{n}(l+\tilde{n}p)=690\) VAR coefficients. In this case, the number of VAR coefficients is much larger than the number of observations for small and medium-sized samples. Therefore, without informative priors or regularization, it is not even possible to estimate the VAR coefficients. In practice, one usually adopts the Minnesota prior to estimate the parameters of the VAR(\(p\)) process. Doan, Litterman, and Sims (1984) first introduced the Minnesota prior for small Bayesian VARs. Also, Banbura, Giannone, and Reichlin (2010) used the Minnesota prior for large Bayesian VARs and showed that the forecasts of large Bayesian VARs are better than those of small Bayesian VARs. However, since there are many different variants of the Minnesota prior, for illustrative purposes we consider the prior that is included in Banbura et al. (2010). The idea of the Minnesota prior is that it shrinks the diagonal elements of the matrix \(A_{1}\) toward \(\delta_{i}\) and the off-diagonal elements of \(A_{1}\) and all elements of the other matrices \(A_{0},A_{2},\ldots,A_{p}\) toward 0, where \(\delta_{i}\) is 0 for a stationary variable \(y_{i,t}\) and 1 for a variable with a unit root \(y_{i,t}\). For the prior, it is assumed that, conditional on \(\Sigma\), the matrices \(A_{0},A_{1},\ldots,A_{p}\) are independent and normally distributed, and that for the \((i,j)\)-th element of the matrix \(A_{s}\) (\(s=0,\ldots,p\)) it holds that \[\mathbb{E}\big{(}(A_{s})_{ij}\big{|}\Sigma\big{)}=\begin{cases}\delta_{i}&\text{if}\quad i=j,\ s=1,\\ 0&\text{otherwise},\end{cases} \tag{2.67}\] \[\text{Var}\big{(}(A_{0})_{ij}\big{|}\Sigma\big{)}=1/\varepsilon_{ij},\ \ \text{and}\ \ \text{Var}\big{(}(A_{s})_{ij}\big{|}\Sigma\big{)}=\begin{cases}\frac{\lambda^{2}}{s^{2}}&\text{if}\quad i=j,\\ \theta\frac{\lambda^{2}}{s^{2}}\frac{\sigma_{i}}{\sigma_{j}}&\text{otherwise}\end{cases}\ \ \text{for}\ s=1,\ldots,p, \tag{2.68}\] where we denote the \((i,j)\)-th element of the matrix \(A_{s}\) by \((A_{s})_{ij}\). The parameter \(\varepsilon_{ij}\) is a small number corresponding to an uninformative diffuse prior for \((A_{0})_{ij}\), the parameter \(\lambda\) controls the overall tightness of the prior distribution, the factor \(1/s^{2}\) is the rate at which the prior variance decreases with increasing lag length, the factor \(\sigma_{i}/\sigma_{j}\) accounts for the different scale and variability of the data, and the coefficient \(\theta\in[0,1]\) governs the extent to which the lags of the other variables are less important than the own lags. By using dummy variables, Banbura et al. (2010) obtain Bayesian estimators corresponding to the hyperparameters. For our Bayesian estimators, given in equations (2.65) and (2.66), we cannot use the Minnesota prior directly due to the Kronecker product \(\Sigma\otimes\Lambda_{0}\). For this reason, to define a prior that applies the idea of the Minnesota prior, we follow Chan (2020). One should note that \(\pi_{0}\), \(\Lambda_{0}\), \(\nu_{0}\), and \(V_{0}\) are hyperparameters of our model. For the hyperparameter \(\nu_{0}\), so that the prior variance of \(\Sigma\) is large, which corresponds to a relatively uninformative diffuse prior, a small value is often chosen. According to the expectation formula of the inverse-Wishart distribution, we have \(\mathbb{E}(\Sigma)=\frac{1}{\nu_{0}-\tilde{n}-1}V_{0}\). Consequently, for a given \(\nu_{0}\), one chooses \(V_{0}\) to match the desired prior mean of \(\Sigma\) using the expectation formula. For the hyperparameter \(\pi_{0}\), one may use equation (2.67). To introduce shrinkage in the hyperparameter \(\Lambda_{0}\), we assume that it is a diagonal matrix. Then, its diagonal elements are \[\Lambda_{0,ii}=\begin{cases}\lambda_{1}&\text{if}\quad 1\leq i\leq l,\\ \frac{\lambda_{2}}{s^{2}\sigma_{k}^{2}}&\text{if}\quad l+\tilde{n}(s-1)<i\leq l+\tilde{n}s,\ s=1,\ldots,p,\end{cases} \tag{2.69}\] for \(i=1,\ldots,l+\tilde{n}p\), where \(k:=i-l-\tilde{n}(s-1)\) indexes the regressor variable. Some other priors and Bayesian methods can be found in Chan (2020). In practice, to estimate the parameters \(\sigma_{1}^{2},\ldots,\sigma_{\tilde{n}}^{2}\), for \(i=1,\ldots,\tilde{n}\), we model each individual variable \(y_{i,t}\) by a univariate autoregressive model with order \(p\) (AR(\(p\))). Then, we estimate the AR(\(p\)) processes by the ordinary least squares (OLS) method. If we denote the standard OLS estimate of the error variance of the \(i\)-th AR(\(p\)) equation by \(s_{i}^{2}\), then the parameter \(\sigma_{i}^{2}\) is estimated by \(s_{i}^{2}\).
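To illustrate the sampler described above, here is a minimal NumPy/SciPy sketch, assuming an observation matrix `y_bar` and a regressor matrix `Y_bar` as in equation (2.52) (all names are our own; the matrix-normal draw is an equivalent way of realizing step 2 of the sampler):

```python
import numpy as np
from scipy import stats

def gibbs_var(y_bar, Y_bar, Pi0, Lam0, nu0, V0, n_draws=1000, seed=0):
    """Gibbs draws of (Pi, Sigma) from the posterior (2.59)-(2.64).

    y_bar: (n_tilde, T) observation matrix; Y_bar: (K, T) regressor
    matrix with K = l + n_tilde * p.
    """
    rng = np.random.default_rng(seed)
    K, T = Y_bar.shape
    Lam0_inv = np.linalg.inv(Lam0)
    Lam_inv = Lam0_inv + Y_bar @ Y_bar.T                    # (2.61)
    Lam = np.linalg.inv(Lam_inv)
    Pi_star = (Pi0 @ Lam0_inv + y_bar @ Y_bar.T) @ Lam      # (2.65)
    nu = nu0 + K + T                                        # (2.63)
    V = (V0 - Pi_star @ Lam_inv @ Pi_star.T
         + Pi0 @ Lam0_inv @ Pi0.T + y_bar @ y_bar.T)        # (2.64)
    B = np.linalg.cholesky(Lam)
    draws = []
    for _ in range(n_draws):
        Sigma = stats.invwishart.rvs(df=nu, scale=V, random_state=rng)
        # Matrix-normal draw of Pi given Sigma (step 2 of the sampler),
        # consistent with the conditional posterior density (2.60).
        Z = rng.standard_normal(Pi_star.shape)
        Pi = Pi_star + np.linalg.cholesky(Sigma) @ Z @ B.T
        draws.append((Pi, Sigma))
    return draws
```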
The parameter \(\varepsilon_{ij}\) is a small number and corresponds to an uninformative diffuse prior for \((A_{0})_{ij}\); the parameter \(\lambda\) controls the overall tightness of the prior distribution; the factor \(1/s^{2}\) is the rate at which the prior variance decreases with increasing lag length; the factor \(\sigma_{i}/\sigma_{j}\) accounts for the different scale and variability of the data; and the coefficient \(\theta\in[0,1]\) governs the extent to which the lags of the other variables are less important than the own lags. By using dummy variables, Banbura et al. (2010) obtain Bayesian estimators corresponding to the hyperparameters. For our Bayesian estimators, given in equations (2.65) and (2.66), we cannot use the Minnesota prior directly due to the Kronecker product \(\Sigma\otimes\Lambda_{0}\). For this reason, to define a prior that applies the idea of the Minnesota prior, we follow Chan (2020). One should note that \(\pi_{0}\), \(\Lambda_{0}\), \(\nu_{0}\), and \(V_{0}\) are hyperparameters of our model. For the hyperparameter \(\nu_{0}\), a small value is often chosen so that the prior variance of \(\Sigma\) is large, which corresponds to a relatively uninformative diffuse prior. According to the expectation formula of the inverse-Wishart distribution, we have \(\mathbb{E}(\Sigma)=\frac{1}{\nu_{0}-\tilde{n}-1}V_{0}\). Consequently, for given \(\nu_{0}\), one chooses \(V_{0}\) to match the desired prior mean of \(\Sigma\) using the expectation formula. For the hyperparameter \(\pi_{0}\), one may use equation (2.67). To introduce shrinkage in the hyperparameter \(\Lambda_{0}\), we assume that it is a diagonal matrix. Then, its diagonal elements are
\[
\Lambda_{0,ii}=\begin{cases}\lambda_{1}&\text{if}\quad 1\leq i\leq l,\\ \dfrac{\lambda_{2}}{s^{2}\sigma_{s}^{2}}&\text{if}\quad l+\tilde{n}(s-1)<i\leq l+\tilde{n}s,\ s=1,\ldots,p,\end{cases} \tag{2.69}
\]
for \(i=1,\ldots,l+\tilde{n}p\). Some other priors and Bayesian methods can be found in Chan (2020). In practice, to estimate the parameters \(\sigma_{1}^{2},\ldots,\sigma_{\tilde{n}}^{2}\), for \(i=1,\ldots,\tilde{n}\), we model each individual variable \(y_{i,t}\) by a univariate autoregressive model of order \(p\) (AR(\(p\))). Then, we estimate the AR(\(p\)) processes by the ordinary least squares (OLS) method. If we denote the standard OLS estimate of the error variance of the \(i\)-th AR(\(p\)) equation by \(s_{i}^{2}\), then the parameter \(\sigma_{i}^{2}\) is estimated by \(s_{i}^{2}\).

### Portfolio Selection for Public Company

The mean-variance portfolio choice model was first established by Markowitz (1952). In the stochastic DDM framework, by introducing a discrete joint distribution for dividend growth rates, Agosto, Mainini, and Moretto (2019) obtain, for the first time, a closed-form covariance formula between two stocks. Also, they consider the portfolio choice problem for two stocks. Furthermore, using a multivariate Markov chain, D'Amico and De Blasis (2020a) provide a portfolio selection of three stocks. In this Subsection, we consider a problem in which a public company has some cash at time 0 and wants to maximize its mean-variance utility function on the next period's earnings before tax, which come from buying stocks, including its own, and paying interest payments on liabilities.
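A small sketch of the shrinkage matrix (2.69) follows, again as an illustration rather than a definitive implementation. We read the variance in the denominator of (2.69) as that of the variable occupying each position within the lag-\(s\) block, estimated by the AR(\(p\)) residual variances \(s_{i}^{2}\) described above; the defaults \(\lambda_{1}=5^{2}\) and \(\lambda_{2}=0.2^{2}\) match the values used in the numerical section.

```python
import numpy as np

def minnesota_lambda0(sigma2, l, p, lam1=25.0, lam2=0.04):
    """Diagonal shrinkage matrix Lambda_0 of equation (2.69).
    sigma2: length-n_tilde array of AR(p) residual variances s_i^2."""
    n_tilde = len(sigma2)
    diag = np.empty(l + n_tilde * p)
    diag[:l] = lam1                    # loose prior on exogenous terms
    for s in range(1, p + 1):          # prior tightens as the lag s grows
        block = slice(l + n_tilde * (s - 1), l + n_tilde * s)
        diag[block] = lam2 / (s**2 * np.asarray(sigma2))
    return np.diag(diag)
```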
Then, the problem is given by the following portfolio choice problem with the mean-variance utility function
\[
\begin{cases}\tilde{\mathbb{E}}\big(x^{\prime}k_{1}^{e}-x_{i}k_{i,0}^{\ell}\big|\mathcal{F}_{0}\big)-\frac{c}{2}\widetilde{\operatorname{Var}}\big(x^{\prime}k_{1}^{e}-x_{i}k_{i,0}^{\ell}\big|\mathcal{F}_{0}\big)\longrightarrow\max\\ \text{s.t. }x^{\prime}i_{n}+x_{i}=1,\end{cases} \tag{2.70}
\]
where \(k_{1}^{e}=(k_{1,1}^{e},\ldots,k_{n,1}^{e})^{\prime}\) is an \((n\times 1)\) vector consisting of the required rates of return at time 1 on equities, \(k_{i,0}^{\ell}\) is the required rate of return at time 0 on liabilities of the \(i\)-th company, calculated by equation (2.9), \((x^{\prime},x_{i})^{\prime}\) is an \(([n+1]\times 1)\) vector of decision variables, \(c>0\) is a risk-aversion parameter, which is different for each investor, and \(\tilde{\mathbb{E}}\) and \(\widetilde{\operatorname{Var}}\) are the expectation and variance operators under a generic probability measure \(\tilde{\mathbb{P}}\), respectively. The problem is equivalent to the following problem
\[
\begin{cases}x^{\prime}\tilde{\mu}-x_{i}k_{i,0}^{\ell}-\frac{c}{2}x^{\prime}\tilde{\Sigma}x\longrightarrow\max\\ \text{s.t. }x^{\prime}i_{n}+x_{i}=1,\end{cases} \tag{2.71}
\]
where \(\tilde{\mu}:=\tilde{\mathbb{E}}(k_{1}^{e}|\mathcal{F}_{0})=\tilde{\mathbb{E}}\big(\exp\{Jy_{1}\}\big|\mathcal{F}_{0}\big)\) and \(\tilde{\Sigma}:=\widetilde{\operatorname{Var}}(k_{1}^{e}|\mathcal{F}_{0})=\widetilde{\operatorname{Var}}\big(\exp\{Jy_{1}\}\big|\mathcal{F}_{0}\big)\) are the \((n\times 1)\) conditional expectation vector and \((n\times n)\) conditional covariance matrix of the required rate of return vector on equities \(k_{1}^{e}\), respectively, and \(J:=[I_{n}:0]\) is an \((n\times\tilde{n})\) matrix, which is used to extract the log required rate of return vector \(\tilde{k}_{1}^{e}\) from the vector \(y_{1}\). The problem is a quadratic programming problem and it has a unique solution. Its Lagrangian function is given by
\[
\mathcal{L}(x,x_{i},\lambda):=x^{\prime}\tilde{\mu}-x_{i}k_{i,0}^{\ell}-\frac{c}{2}x^{\prime}\tilde{\Sigma}x-\lambda(x^{\prime}i_{n}+x_{i}-1). \tag{2.72}
\]
Taking partial derivatives of the Lagrangian function with respect to \(x\), \(x_{i}\), and \(\lambda\) and setting them to zero, one finds the solution of the quadratic programming problem:
\[
x^{*}:=\frac{1}{c}\tilde{\Sigma}^{-1}\big(\tilde{\mu}+k_{i,0}^{\ell}i_{n}\big)\quad\text{and}\quad x_{i}^{*}:=1-\frac{1}{c}i_{n}^{\prime}\tilde{\Sigma}^{-1}\big(\tilde{\mu}+k_{i,0}^{\ell}i_{n}\big). \tag{2.73}
\]
To obtain a solution to the problem (2.71), we need to calculate the conditional expectation vector \(\tilde{\mu}\) and conditional covariance matrix \(\tilde{\Sigma}\). We consider two cases: \((i)\) Let us assume that the generic probability measure equals the real probability measure, \(\tilde{\mathbb{P}}=\mathbb{P}\).
Then, according to the expectation and covariance formulas of the log-normal random vector and the facts that \(\mathbb{E}(y_{1}|\mathcal{F}_{0},s_{1})=\Pi(s_{1})\mathsf{Y}_{0}\) and \(\operatorname{Var}(y_{1}|\mathcal{F}_{0},s_{1})=\Sigma_{1}(s_{1})\), we have
\[
\mathbb{E}\big(\exp\{Jy_{1}\}\big|\mathcal{F}_{0},s_{1}\big)=\exp\bigg\{J\Pi(s_{1})\mathsf{Y}_{0}+\frac{1}{2}J\Sigma_{1}(s_{1})J^{\prime}\bigg\} \tag{2.74}
\]
and
\[
\operatorname{Var}\big(\exp\{Jy_{1}\}\big|\mathcal{F}_{0},s_{1}\big)=\Bigg(\exp\bigg\{J\Pi(s_{1})\mathsf{Y}_{0}+\frac{1}{2}J\Sigma_{1}(s_{1})J^{\prime}\bigg\}\exp\bigg\{J\Pi(s_{1})\mathsf{Y}_{0}+\frac{1}{2}J\Sigma_{1}(s_{1})J^{\prime}\bigg\}^{\prime}\Bigg)\odot\bigg(\exp\Big\{J\Sigma_{1}(s_{1})J^{\prime}\Big\}-I_{n}\bigg). \tag{2.75}
\]
As a result, we get the parameters \(\tilde{\mu}\) and \(\tilde{\Sigma}\) in equation (2.73), corresponding to the portfolio selection with regime-switching:
\[
\tilde{\mu}=\sum_{s_{1}=1}^{N}\mathbb{E}\big(\exp\{Jy_{1}\}\big|\mathcal{F}_{0},s_{1}\big)p_{s_{1}},\quad\text{and}\quad\tilde{\Sigma}=\sum_{s_{1}=1}^{N}\text{Var}\big(\exp\{Jy_{1}\}\big|\mathcal{F}_{0},s_{1}\big)p_{s_{1}}. \tag{2.76}
\]
\((ii)\) Now we assume that there is one regime and the generic probability measure equals the posterior probability measure of the Bayesian method, i.e., \(\tilde{\mathbb{P}}(\cdot)=\mathbb{P}(\cdot|\bar{y}_{0},\bar{\mathbf{Y}}_{0})\). Here we suppose that, to obtain the posterior density at time zero, we used the observations of the process \(y_{t}\) up to and including time zero (the last \(T\) observations). Since \(\xi_{1}\) is independent of the information \(\{\bar{y}_{0},\bar{\mathbf{Y}}_{0}\}\), by the tower property and the taking-out-what-is-known property of conditional expectation, \(\mathbb{E}(y_{1}|\bar{y}_{0},\bar{\mathbf{Y}}_{0})=\mathbb{E}(\Pi|\bar{y}_{0},\bar{\mathbf{Y}}_{0})\mathsf{Y}_{0}=\Pi_{*|0}\mathsf{Y}_{0}\) and
\[
\text{Var}(y_{1}|\bar{y}_{0},\bar{\mathbf{Y}}_{0})=\mathbb{E}\big(\text{Var}(\xi_{1}|\Sigma,\bar{y}_{0},\bar{\mathbf{Y}}_{0})\big|\bar{y}_{0},\bar{\mathbf{Y}}_{0}\big)=\mathbb{E}(\Sigma|\bar{y}_{0},\bar{\mathbf{Y}}_{0})=\Sigma_{*|0}, \tag{2.77}
\]
where the Bayesian estimators \(\Pi_{*|0}\) and \(\Sigma_{*|0}\) are given by equations (2.65) and (2.66). As a result, one obtains \(\tilde{\mu}\) and \(\tilde{\Sigma}\), corresponding to the Bayesian portfolio selection:
\[
\tilde{\mu}=\exp\left\{J\Pi_{*|0}\mathsf{Y}_{0}+\frac{1}{2}J\Sigma_{*|0}J^{\prime}\right\} \tag{2.78}
\]
and
\[
\tilde{\Sigma}=\Bigg(\exp\left\{J\Pi_{*|0}\mathsf{Y}_{0}+\frac{1}{2}J\Sigma_{*|0}J^{\prime}\right\}\exp\left\{J\Pi_{*|0}\mathsf{Y}_{0}+\frac{1}{2}J\Sigma_{*|0}J^{\prime}\right\}^{\prime}\Bigg)\odot\Bigg(\exp\left\{J\Sigma_{*|0}J^{\prime}\right\}-I_{n}\Bigg).
\]
The solution to the problem not only maximizes the earnings before tax but also optimizes the capital structure of the company. Let us assume that the \(i\)-th company has some cash, say $50 million. Then, the company may optimize the capital structure of its balance sheet, namely, \((i)\) reduce or expand its liabilities by \(50\times x_{i}^{*}\) and \((ii)\) reduce or expand the treasury stock of the company by \(50\times\bar{x}_{i}^{*}\), where \(\bar{x}_{i}^{*}\) is the \(i\)-th component of the optimal vector \(x^{*}\). Of course, one may add constraints to the problem to prohibit short sales.
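A brief numerical sketch of the single-regime case is given below, assuming Python/numpy; the helper names are illustrative. The first function evaluates the closed-form solution (2.73), and the second computes log-normal moments of the type used in (2.78), with the mean taken elementwise using the diagonal of the covariance matrix.

```python
import numpy as np

def mv_portfolio(mu_t, Sigma_t, k_liab, c):
    """Closed-form solution (2.73) of the mean-variance problem (2.71).
    mu_t: (n,) expected required returns on equities, Sigma_t: (n, n)
    their covariance, k_liab: required return on liabilities, c: risk
    aversion."""
    ones = np.ones_like(mu_t)
    w = np.linalg.solve(Sigma_t, mu_t + k_liab * ones) / c   # x*
    x_i = 1.0 - w.sum()                                      # x_i*
    return w, x_i

def lognormal_moments(m, V):
    """Mean and covariance of exp(z) for z ~ N(m, V), the inputs
    needed for (2.78)-type calculations."""
    mu = np.exp(m + 0.5 * np.diag(V))
    Sigma = np.outer(mu, mu) * (np.exp(V) - 1.0)
    return mu, Sigma
```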
## 3 Parameter Estimation of Private Company

In this Section, we will consider parameter estimation methods for \(n\) private companies. Let \(B_{t}\) be the book value of equity and \(b_{t}\) be the book value growth rate, respectively, at time \(t\) of a private company. Since the book value of equity at time \(t-1\) grows at rate \(b_{t}\), its value at time \(t\) becomes
\[
B_{t}=(1+b_{t})B_{t-1}. \tag{3.1}
\]
If we assume that for the private company, its price-to-book ratio is constant, say, \(m=P_{t}/B_{t}\), for all \(t=1,\ldots,T\), then according to the DDM equation (2.1), the price (value) at time \(t\) of the private company is expressed by the following equation
\[
mB_{t}=(1+k_{t})mB_{t-1}-d_{t}=\big((1+k_{t})m-\Delta_{t}\big)B_{t-1}, \tag{3.2}
\]
where \(k_{t}\) is the required rate of return on equity at time \(t\) and \(\Delta_{t}:=d_{t}/B_{t-1}\) is the dividend-to-book ratio at time \(t\), respectively, of the private company. If we substitute equation (3.1) into the left-hand side of the above equation (3.2), then we get that
\[
(1+b_{t})mB_{t-1}=\big((1+k_{t})m-\Delta_{t}\big)B_{t-1}. \tag{3.3}
\]
Therefore, a relation between the dividend-to-book ratio, book value growth rate, required rate of return, and price-to-book ratio is given by
\[
mb_{t}=mk_{t}-\Delta_{t},\quad t=1,2,\ldots. \tag{3.4}
\]
We refer to this model and its versions with regime-switching and with a state (latent or unobserved) variable, given in equations (3.6) and (3.22), respectively, as the private company valuation model. For the log private company valuation model, we refer to Battulga (2023), who considers the private company valuation model in the log framework and obtains closed-form pricing and hedging formulas for European call and put options. It should be noted that the private company valuation model given in (3.4) is equivalent to the franchise factor model, see Leibowitz and Kogelman (1990), but the private company valuation models with the regime-switching and the state variable differ from the franchise factor model. According to equation (3.4), the required rate of return at time \(t\) of the private company is represented by
\[
k_{t}=\frac{1}{m}\Delta_{t}+b_{t}. \tag{3.5}
\]
From the above equation, one can see that for a dividend-paying private company, if \(m\) increases, then the required rate of return \(k_{t}\) decreases and converges to the book value growth rate \(b_{t}\). Thus, as the price-to-book and dividend-to-book ratios are positive, the book value growth rate is a floor of the required rate of return. On the other hand, because the term \(\frac{1}{m}\Delta_{t}\) takes a positive value, if a private company pays dividends, then its required rate of return is always greater than in the case where it does not pay dividends.

### Regime Switching Estimation

In order to incorporate the case where dividends are not paid into the maximum likelihood estimators of the private company valuation model's parameters, we will use equation (3.5) rather than equation (3.4). As some private companies may not pay dividends, we suppose that there are \(n_{d}\) (\(0\leq n_{d}\leq n\)) companies that pay dividends. Because it is always possible to change the order of the companies, without loss of generality we can assume that the dividend-paying companies are placed in the first \(n_{d}\) components of the first equation of system (3.9), see below.
To keep notations simple, let \(B_{t}:=(B_{1,t},\ldots,B_{n,t})^{\prime}\) be an \((n\times 1)\) book value vector, \(m(s_{t}):=(m_{1}(s_{t}),\ldots,m_{n_{d}}(s_{t}))^{\prime}\) be an \((n_{d}\times 1)\) price-to-book ratio vector in regime \(s_{t}\) corresponding to the dividend-paying companies, \(b_{t}:=(b_{1,t},\ldots,b_{n,t})^{\prime}\) be an \((n\times 1)\) book value growth rate vector, \(k_{t}(s_{t}):=(k_{1,t}(s_{t}),\ldots,k_{n,t}(s_{t}))^{\prime}\) be an \((n\times 1)\) required rate of return vector in regime \(s_{t}\), \(r(s_{t}):=(1/m_{1}(s_{t}),\ldots,1/m_{n_{d}}(s_{t}))^{\prime}\) be an \((n_{d}\times 1)\) book-to-price ratio vector in regime \(s_{t}\), i.e., the elementwise reciprocal of the vector \(m(s_{t})\), and \(R(s_{t}):=\text{diag}\{r(s_{t}),0\}\) be an \((n\times n)\) diagonal matrix, whose diagonal elements consist of the book-to-price ratio vector \(r(s_{t})\) and an \(((n-n_{d})\times 1)\) zero vector. Then, equation (3.5) can be written as
\[
b_{t}=k_{t}(s_{t})-R(s_{t})\Delta_{t}. \tag{3.6}
\]
Since the book value growth rate process \(b_{t}\) may depend on economic variables, we define an \((\ell\times 1)\) MS-VAR(\(p\)) process \(x_{t}\):
\[
x_{t}=A_{0}^{x}(s_{t})\psi_{t}+A_{1}^{x}(s_{t})x_{t-1}+\cdots+A_{p}^{x}(s_{t})x_{t-p}+v_{t}, \tag{3.7}
\]
where \(x_{t}=(x_{1,t},\ldots,x_{\ell,t})^{\prime}\) is an \((\ell\times 1)\) random vector, \(\psi_{t}=(\psi_{1,t},\ldots,\psi_{l,t})^{\prime}\) is an \((l\times 1)\) random vector of exogenous variables, \(v_{t}=(v_{1,t},\ldots,v_{\ell,t})^{\prime}\) is an \((\ell\times 1)\) residual process, \(s_{t}\) is an unobserved regime at time \(t\), which is governed by a Markov chain with \(N\) states, \(A_{0}^{x}(s_{t})\) is an \((\ell\times l)\) coefficient matrix at regime \(s_{t}\) that corresponds to the vector of exogenous variables, and for \(i=1,\ldots,p\), \(A_{i}^{x}(s_{t})\) are \((\ell\times\ell)\) coefficient matrices at regime \(s_{t}\) that correspond to \(x_{t-1},\ldots,x_{t-p}\). The process \(x_{t}\) consists of the economic variables that affect the book value growth rate process \(b_{t}\). Note that the process \(x_{t}\) can include the dividend-to-book ratio process \(\Delta_{t}\). Equation (3.7) can be compactly written as
\[
x_{t}=\Pi^{x}(s_{t})\mathsf{X}_{t-1}+v_{t}, \tag{3.8}
\]
where \(\mathsf{X}_{t-1}:=(\psi_{t}^{\prime},x_{t-1}^{\prime},\ldots,x_{t-p}^{\prime})^{\prime}\) is an \(\big([l+\ell p]\times 1\big)\) vector, which consists of the exogenous variables and the last \(p\) lagged values of the process \(x_{t}\), and \(\Pi^{x}(s_{t}):=[A_{0}^{x}(s_{t}):A_{1}^{x}(s_{t}):\cdots:A_{p}^{x}(s_{t})]\) is an \(\big(\ell\times[l+\ell p]\big)\) matrix, which consists of the coefficient matrices of the process \(x_{t}\). We suppose that the required rate of return depends on the exogenous variables and a random amount \(u_{t}\), namely, \(k_{t}(s_{t})=A_{0}^{k}(s_{t})\psi_{t}+u_{t}\). Consequently, our private company valuation model is given by the following system
\[
\begin{cases}b_{t}=A_{0}^{k}(s_{t})\psi_{t}-R(s_{t})\Delta_{t}+u_{t}\\ x_{t}=\Pi^{x}(s_{t})\mathsf{X}_{t-1}+v_{t}\end{cases}. \tag{3.9}
\]
To simplify the model, we assume that the covariance matrix of the random residual process \(\xi_{t}:=(u_{t}^{\prime},v_{t}^{\prime})^{\prime}\) is homoscedastic within each regime, that is, \(\mathrm{Var}(\xi_{t})=\Sigma(s_{t})\). However, one can easily develop private company valuation models with heteroscedastic residuals as in Section 2.
If the regime random variable \(s_{t}\) is in a regime \(j\), then conditional on the information \(\mathcal{F}_{t-1}\), the log conditional density of the random vector \(y_{t}:=(b_{t}^{\prime},x_{t}^{\prime})^{\prime}\) is given by
\[
\ln(\eta_{tj})=\ln\big(f(y_{t}|s_{t}=j,\mathcal{F}_{t-1},\alpha)\big)=-\frac{\tilde{n}}{2}\ln(2\pi)-\frac{1}{2}\ln(|\Sigma(j)|)-\frac{1}{2}\Big(u_{t}^{\prime}(j)\Omega_{uu}(j)u_{t}(j)+2u_{t}^{\prime}(j)\Omega_{uv}(j)v_{t}(j)+v_{t}^{\prime}(j)\Omega_{vv}(j)v_{t}(j)\Big), \tag{3.10}
\]
where the residual vectors in the regime \(j\) are \(u_{t}(j)=b_{t}-A_{0}^{k}(j)\psi_{t}+R(j)\Delta_{t}\) and \(v_{t}(j)=x_{t}-\Pi^{x}(j)\mathsf{X}_{t-1}\), and \(\Omega_{uu}(j)\), \(\Omega_{uv}(j)\), \(\Omega_{vu}(j)\), and \(\Omega_{vv}(j)\) are the partitions of the matrix \(\Sigma(j)^{-1}\) corresponding to the residual vector \(\xi_{t}(j):=(u_{t}^{\prime}(j),v_{t}^{\prime}(j))^{\prime}\). To obtain the partial derivative of the log conditional density with respect to the book-to-price ratio vector \(r(s_{t})\), instead of the first equation of system (3.9), we need the equation \(J_{d}b_{t}=J_{d}A_{0}^{k}(s_{t})\psi_{t}-J_{d}\mathrm{diag}\{\Delta_{t}\}J_{d}^{\prime}r(s_{t})+J_{d}u_{t}\), where \(J_{d}:=[I_{n_{d}}:0]\) is an \((n_{d}\times n)\) matrix. Consequently, the partial derivative is given by
\[
\frac{\partial\ln(\eta_{tj})}{\partial r(j)^{\prime}}=-\Big(u_{t}^{\prime}(j)J_{d}^{\prime}J_{d}\Omega_{uu}J_{d}^{\prime}+v_{t}^{\prime}(j)\Omega_{vu}J_{d}^{\prime}\Big)J_{d}\mathrm{diag}\{\Delta_{t}\}J_{d}^{\prime}. \tag{3.11}
\]
By equation (2.24), for each regime \(j=1,\ldots,N\), one obtains the ML estimator of the parameter vector \(r(j)\):
\[
\hat{r}(j):=\Bigg(\sum_{t=1}^{T}J_{d}\mathrm{diag}\{\bar{\Delta}_{t,j}\}J_{d}^{\prime}J_{d}\Omega_{uu}(j)J_{d}J_{d}^{\prime}\mathrm{diag}\{\bar{\Delta}_{t,j}\}J_{d}^{\prime}\Bigg)^{-1}\sum_{t=1}^{T}J_{d}\mathrm{diag}\{\bar{\Delta}_{t,j}\}J_{d}^{\prime}\bigg(J_{d}\Omega_{uu}(j)J_{d}^{\prime}J_{d}\Big(A_{0}^{k}(j)\bar{\psi}_{t,j}-\bar{b}_{t,j}\Big)-J_{d}\Omega_{uv}(j)\Big(\bar{x}_{t,j}-\Pi^{x}(j)\bar{\mathsf{X}}_{t-1,j}\Big)\bigg), \tag{3.12}
\]
where \(\bar{\Delta}_{t,j}:=\Delta_{t}\sqrt{(z_{t|T})_{j}}\) is an \((n\times 1)\) dividend-to-book ratio process, adjusted by the regime \(j\), \(\bar{\psi}_{t,j}:=\psi_{t}\sqrt{(z_{t|T})_{j}}\) is an \((l\times 1)\) exogenous variables vector, adjusted by the regime \(j\), \(\bar{b}_{t,j}:=b_{t}\sqrt{(z_{t|T})_{j}}\) is an \((n\times 1)\) book value growth rate process, adjusted by the regime \(j\), and \(\bar{\mathsf{X}}_{t,j}:=\mathsf{X}_{t}\sqrt{(z_{t|T})_{j}}\) is an \(\big([l+\ell p]\times 1\big)\) explanatory variables vector, adjusted by the regime \(j\). Let \(a^{k}(j):=\mathrm{vec}\big(A^{k}_{0}(j)\big)\) be the vectorization of the matrix \(A^{k}_{0}(j)\). Then, as \(A^{k}_{0}(j)\psi_{t}=(\psi^{\prime}_{t}\otimes I_{n})a^{k}(j)\), the partial derivative of the log conditional density with respect to the vector \(a^{k}(j)\) is
\[
\frac{\partial\ln(\eta_{tj})}{\partial a^{k}(j)^{\prime}}=\Big(u^{\prime}_{t}(j)\Omega_{uu}+v^{\prime}_{t}(j)\Omega_{vu}\Big)(\psi^{\prime}_{t}\otimes I_{n}). \tag{3.13}
\]
According to equation (3.13) and the procedure used to obtain equations (2.33) and (2.35), for each regime \(j=1,\ldots,N\), we obtain the ML estimator of the parameter matrix \(A^{k}_{0}(j)\):
\[
\hat{A}^{k}_{0}(j):=\Big(\bar{b}_{j}+R(j)\bar{\Delta}_{j}+\Omega^{-1}_{uu}(j)\Omega_{uv}(j)\big(\bar{x}_{j}-\Pi^{x}(j)\bar{\mathsf{X}}_{j}\big)\Big)\bar{\psi}^{\prime}_{j}\big(\bar{\psi}_{j}\bar{\psi}^{\prime}_{j}\big)^{-1}, \tag{3.14}
\]
where \(\bar{b}_{j}:=[\bar{b}_{1,j}:\cdots:\bar{b}_{T,j}]\) is an \((n\times T)\) matrix, \(\bar{\Delta}_{j}:=[\bar{\Delta}_{1,j}:\cdots:\bar{\Delta}_{T,j}]\) is an \((n\times T)\) matrix, \(\bar{\mathsf{X}}_{j}:=[\bar{\mathsf{X}}_{0,j}:\cdots:\bar{\mathsf{X}}_{T-1,j}]\) is an \(([l+\ell p]\times T)\) matrix, and \(\bar{\psi}_{j}:=[\bar{\psi}_{1,j}:\cdots:\bar{\psi}_{T,j}]\) is an \((l\times T)\) matrix. Similarly, for each regime \(j=1,\ldots,N\), one finds the ML estimator of the parameter matrix \(\Pi^{x}(j)\):
\[
\hat{\Pi}^{x}(j):=\Big(\bar{x}_{j}+\Omega^{-1}_{vv}(j)\Omega_{vu}(j)\big(\bar{b}_{j}-A^{k}_{0}(j)\bar{\psi}_{j}+R(j)\bar{\Delta}_{j}\big)\Big)\bar{\mathsf{X}}^{\prime}_{j}\big(\bar{\mathsf{X}}_{j}\bar{\mathsf{X}}^{\prime}_{j}\big)^{-1}. \tag{3.15}
\]
Analogous to equation (2.35), it can be shown that for each regime \(j=1,\ldots,N\), the ML estimator of the covariance matrix \(\Sigma(j)\) is given by
\[
\hat{\Sigma}(j)=\frac{1}{\sum_{t=1}^{T}(z_{t|T})_{j}}\begin{bmatrix}\bar{u}_{j}\bar{u}^{\prime}_{j}&\bar{u}_{j}\bar{v}^{\prime}_{j}\\ \bar{v}_{j}\bar{u}^{\prime}_{j}&\bar{v}_{j}\bar{v}^{\prime}_{j}\end{bmatrix}, \tag{3.16}
\]
where the residual matrices adjusted by the regime \(j\) are \(\bar{u}_{j}:=\bar{b}_{j}-A^{k}_{0}(j)\bar{\psi}_{j}+R(j)\bar{\Delta}_{j}\) and \(\bar{v}_{j}:=\bar{x}_{j}-\Pi^{x}(j)\bar{\mathsf{X}}_{j}\). It is worth mentioning that if none of the companies pay dividends, then for each \(j=1,\ldots,N\), we do not need to estimate the parameter \(r(j)\). Consequently, the ML estimators of the parameters \(A^{k}_{0}(j)\) and \(\Pi^{x}(j)\) are obtained by substituting \(\bar{\Delta}_{j}=0\) into equations (3.14) and (3.15).

### The Bayesian Estimation

Now, we move to the Bayesian analysis of the linear regression. To obtain the Bayesian estimator of the private company valuation model, we need the following multivariate linear regression that corresponds to system (3.9):
\[
y_{t}=\Pi\mathsf{Y}_{t-1}+\xi_{t}, \tag{3.17}
\]
where \(y_{t}:=(b^{\prime}_{t},x^{\prime}_{t})^{\prime}\) is an \((\tilde{n}\times 1)\) vector, \(\Pi\) is an \((\tilde{n}\times[n+l+\ell p])\) random coefficient matrix, \(\mathsf{Y}_{t-1}:=\big(\Delta^{\prime}_{t},\psi^{\prime}_{t},x^{\prime}_{t-1},\ldots,x^{\prime}_{t-p}\big)^{\prime}\) is an \(([n+l+\ell p]\times 1)\) vector, and \(\xi_{t}:=(u^{\prime}_{t},v^{\prime}_{t})^{\prime}\) is an \((\tilde{n}\times 1)\) white noise process with a random covariance matrix \(\Sigma=\mathrm{Var}(\xi_{t})\). The matrix \(\Pi\) has the following structure
\[
\Pi=\begin{bmatrix}\Pi_{b_{t}\Delta_{t}}&\Pi_{b_{t}\psi_{t}}&\Pi_{b_{t}x_{t-1}}&\Pi_{b_{t}x_{t-2}}&\ldots&\Pi_{b_{t}x_{t-p}}\\ \Pi_{x_{t}\Delta_{t}}&\Pi_{x_{t}\psi_{t}}&\Pi_{x_{t}x_{t-1}}&\Pi_{x_{t}x_{t-2}}&\ldots&\Pi_{x_{t}x_{t-p}}\end{bmatrix}, \tag{3.18}
\]
where for \(\alpha\in\{b_{t},x_{t}\}\) and \(\beta\in\{\Delta_{t},\psi_{t},x_{t-1},\ldots,x_{t-p}\}\), \(\Pi_{\alpha\beta}\) is a random coefficient matrix of the random vector \(\beta\), corresponding to the process \(\alpha\).
Taking into account the structure of system (3.9), we expect that a prior expectation matrix of the random matrix \(\Pi\) is given by
\[
\Pi_{0}=\mathbb{E}\big[\Pi|\Sigma\big]=\begin{bmatrix}\Pi^{*}_{b_{t}\Delta_{t}}&\Pi^{*}_{b_{t}\psi_{t}}&0&0&\ldots&0\\ 0&0&\Pi^{*}_{x_{t}x_{t-1}}&0&\ldots&0\end{bmatrix}, \tag{3.19}
\]
where \(\Pi^{*}_{b_{t}\Delta_{t}}:=\mathbb{E}\big[\Pi_{b_{t}\Delta_{t}}\big|\Sigma\big]\) is an \((n\times n)\) diagonal matrix, whose first \(n_{d}\) diagonal components correspond to the prior expectation of the book-to-price ratio vector \(r\) and whose other components are zero, \(\Pi^{*}_{b_{t}\psi_{t}}:=\mathbb{E}\big[\Pi_{b_{t}\psi_{t}}\big|\Sigma\big]\) is an \((n\times l)\) prior expectation matrix of the random matrix \(A^{k}_{0}\), and \(\Pi^{*}_{x_{t}x_{t-1}}:=\mathbb{E}\big[\Pi_{x_{t}x_{t-1}}|\Sigma\big]\) is an \((\ell\times\ell)\) diagonal prior expectation matrix of the random matrix \(A_{1}^{x}\), whose diagonal elements are given by equation (2.67). To obtain the prior variance of the random matrix \(\Pi\), we apply the idea in equation (2.69). By using the idea, the diagonal elements of the \(([n+l+\ell p]\times[n+l+\ell p])\) diagonal matrix \(\Lambda_{0}\) are defined by
\[
\Lambda_{0,ii}=\begin{cases}\lambda_{1}&\text{if}\quad 1\leq i\leq n+l,\\ \dfrac{\lambda_{2}}{s^{2}\sigma_{s}^{2}}&\text{if}\quad n+l+\ell(s-1)<i\leq n+l+\ell s,\ s=1,\ldots,p,\end{cases} \tag{3.20}
\]
for \(i=1,\ldots,n+l+\ell p\). The other hyperparameters \(\nu_{0}\) and \(V_{0}\) are defined exactly as in Section 2.2. After defining the hyperparameters, one can obtain the Bayesian estimators using equations (2.65) and (2.66).

### The Kalman Filtering

Because the ideas that arise in the following can also be used to estimate the required rate of return on debtholders, in this Subsection we will concentrate on the required rate of return on equity. Let us assume that the price-to-book ratio varies over time, that is, \(m_{t}=P_{t}/B_{t}\), \(t=1,\ldots,T\). Under this assumption, for a generic private company, equation (3.3) becomes
\[
m_{t}B_{t}=\big((1+k_{t}^{\circ})m_{t-1}-\Delta_{t}\big)B_{t-1}. \tag{3.21}
\]
Therefore, using the relation \(B_{t}=(1+b_{t})B_{t-1}\) in equation (3.21), a relation between the dividend-to-book ratio, book value growth rate, required rate of return on equity, and price-to-book ratios is given by
\[
\Delta_{t}=-(1+b_{t})m_{t}+(1+k_{t}^{\circ})m_{t-1}. \tag{3.22}
\]
To estimate the parameters of the required rate of return on equity, we must add a random amount, say, \(u_{t}\), to equation (3.22). Then, equation (3.22) becomes
\[
\Delta_{t}=-(1+b_{t})m_{t}+(1+k_{t}^{\circ})m_{t-1}+u_{t}. \tag{3.23}
\]
It should be noted that for the above equation, the price-to-book ratios \(m_{t}\) and \(m_{t-1}\) are unobserved (state) variables. For a non-dividend paying firm, the above equation becomes
\[
\tilde{b}_{t}=\tilde{k}_{t}^{\circ}-\tilde{m}_{t}+\tilde{m}_{t-1}+u_{t}, \tag{3.24}
\]
where \(\tilde{b}_{t}:=\ln(1+b_{t})\) is the log book value growth rate, \(\tilde{k}_{t}^{\circ}:=\ln(1+k_{t}^{\circ})\) is the log required rate of return on equity, and \(\tilde{m}_{t}:=\ln(m_{t})\) is the unobserved log price-to-book ratio, respectively, at time \(t\) of the non-dividend paying company.
We assume that the price-to-book ratio and the log price-to-book ratio are governed by an autoregressive distributed lag model of order \((q,p)\) (ADL\((q,p)\)), that is,
\[
m_{t}=\Phi_{1}m_{t-1}+\cdots+\Phi_{q}m_{t-q}+A_{0}^{m}\psi_{t}+A_{1}^{m}x_{t-1}+\cdots+A_{p}^{m}x_{t-p}+w_{t}=\Phi\mathsf{M}_{t-1}+\Pi^{m}\mathsf{X}_{t-1}+w_{t} \tag{3.25}
\]
and
\[
\tilde{m}_{t}=\Phi\tilde{\mathsf{M}}_{t-1}+\Pi^{m}\mathsf{X}_{t-1}+w_{t}, \tag{3.26}
\]
where for \(i=1,\ldots,q\), \(\Phi_{i}\) is an \((n\times n)\) coefficient matrix, corresponding to the state vectors \(m_{t-i}\) and \(\tilde{m}_{t-i}\), \(\Phi:=[\Phi_{1}:\cdots:\Phi_{q}]\) is an \((n\times nq)\) matrix, \(\mathsf{M}_{t-1}:=(m_{t-1}^{\prime},\ldots,m_{t-q}^{\prime})^{\prime}\) is an \((nq\times 1)\) state vector, \(\tilde{\mathsf{M}}_{t-1}:=(\tilde{m}^{\prime}_{t-1},\ldots,\tilde{m}^{\prime}_{t-q})^{\prime}\) is an \((nq\times 1)\) state vector, \(\Pi^{m}:=[A_{0}^{m}:A_{1}^{m}:\cdots:A_{p}^{m}]\) is an \((n\times[l+\ell p])\) coefficient matrix, and \(w_{t}\) is an \((n\times 1)\) white noise process. Consequently, our models are given by the following systems:
\[
\begin{cases}\Delta_{t}=-\text{diag}\{i_{n}+b_{t}\}m_{t}+\text{diag}\{i_{n}+A_{0}^{k}\psi_{t}\}m_{t-1}+u_{t}\\ x_{t}=\Pi^{x}\mathsf{X}_{t-1}+v_{t}\\ m_{t}=\Phi\mathsf{M}_{t-1}+\Pi^{m}\mathsf{X}_{t-1}+w_{t}\end{cases}\qquad\text{for }t=1,\ldots,T \tag{3.27}
\]
for the dividend-paying company, and
\[
\begin{cases}\tilde{b}_{t}=A_{0}^{k}\psi_{t}-\tilde{m}_{t}+\tilde{m}_{t-1}+u_{t}\\ x_{t}=\Pi^{x}\mathsf{X}_{t-1}+v_{t}\\ \tilde{m}_{t}=\Phi\tilde{\mathsf{M}}_{t-1}+\Pi^{m}\mathsf{X}_{t-1}+w_{t}\end{cases}\qquad\text{for }t=1,\ldots,T \tag{3.28}
\]
for the non-dividend paying company. The systems (3.27) and (3.28) are more compactly written as
\[
\begin{cases}y_{t}=\Psi_{t}z_{t}+\varphi_{t}+\xi_{t}\\ z_{t}=Az_{t-1}+\Pi^{m}_{*}\mathsf{X}_{t-1}+\eta_{t}\end{cases}\qquad\text{for }t=1,\ldots,T, \tag{3.29}
\]
where, for the dividend-paying company, \(y_{t}:=(\Delta^{\prime}_{t},x^{\prime}_{t})^{\prime}\) is an \((\tilde{n}\times 1)\) vector, which consists of the observed variables' vectors \(\Delta_{t}\) and \(x_{t}\), \(z_{t}:=\mathsf{M}_{t}\) is an \((nq\times 1)\) state vector of the price-to-book ratios at times \(t,\ldots,t-q+1\),
\[
\Psi_{t}:=\begin{bmatrix}-\text{diag}\{i_{n}+b_{t}\}&\text{diag}\{i_{n}+A_{0}^{k}\psi_{t}\}&0&\ldots&0\\ 0&0&0&\ldots&0\end{bmatrix} \tag{3.30}
\]
is an \((\tilde{n}\times nq)\) matrix, \(\varphi_{t}:=((A_{0}^{k}\psi_{t})^{\prime},0)^{\prime}\) is an \((\tilde{n}\times 1)\) vector, and \(\xi_{t}=(u^{\prime}_{t},v^{\prime}_{t})^{\prime}\) is an \((\tilde{n}\times 1)\) white noise process; for the non-dividend paying company, \(y_{t}:=(\tilde{b}^{\prime}_{t},x^{\prime}_{t})^{\prime}\) is an \((\tilde{n}\times 1)\) vector, which consists of the observed variables' vectors \(\tilde{b}_{t}\) and \(x_{t}\), \(z_{t}:=\tilde{\mathsf{M}}_{t}\) is an \((nq\times 1)\) state vector of the log price-to-book ratios at times \(t,\ldots,t-q+1\),
\[
\Psi_{t}:=\begin{bmatrix}-I_{n}&I_{n}&0&\ldots&0\\ 0&0&0&\ldots&0\end{bmatrix} \tag{3.31}
\]
is an \((\tilde{n}\times nq)\) matrix, \(\varphi_{t}:=((A_{0}^{k}\psi_{t})^{\prime},0)^{\prime}\) is an \((\tilde{n}\times 1)\) vector, and \(\xi_{t}=(u^{\prime}_{t},v^{\prime}_{t})^{\prime}\) is an \((\tilde{n}\times 1)\) white noise process. In both cases, \(\eta_{t}:=(w^{\prime}_{t},0,\ldots,0)^{\prime}\) is an \((nq\times 1)\) random vector, \(\Pi^{m}_{*}:=[(\Pi^{m})^{\prime}:0:\cdots:0]^{\prime}\) is an \((nq\times[l+\ell p])\) matrix, whose first block is \(\Pi^{m}\) and whose other blocks are zero, and
\[
A:=\begin{bmatrix}\Phi_{1}&\ldots&\Phi_{q-1}&\Phi_{q}\\ I_{n}&\ldots&0&0\\ \vdots&\ddots&\vdots&\vdots\\ 0&\ldots&I_{n}&0\end{bmatrix} \tag{3.32}
\]
is an \((nq\times nq)\) matrix.
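Stacking the ADL coefficients into the companion matrix (3.32) is mechanical; the following short numpy sketch (an illustration, with the helper name our own) builds \(A\) from the list of \(\Phi_{1},\ldots,\Phi_{q}\):

```python
import numpy as np

def companion(Phi_blocks):
    """Build the (nq x nq) companion matrix A of equation (3.32) from the
    list [Phi_1, ..., Phi_q] of (n x n) coefficient matrices."""
    q = len(Phi_blocks)
    n = Phi_blocks[0].shape[0]
    A = np.zeros((n * q, n * q))
    A[:n, :] = np.hstack(Phi_blocks)       # top block row [Phi_1 ... Phi_q]
    A[n:, :-n] = np.eye(n * (q - 1))       # identity shift blocks below
    return A
```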
The stochastic properties of systems (3.27)-(3.29) are governed by the random variables \(u_{1},\ldots,u_{T}\), \(v_{1},\ldots,v_{T}\), \(w_{1},\ldots,w_{T}\), and \(z_{0}\). We assume that the error random vectors \(\xi_{t}\) and \(w_{t}\) for \(t=1,\ldots,T\) and the initial price-to-book ratio \(m_{0}\) or log price-to-book ratio \(\tilde{m}_{0}\) are mutually independent and follow normal distributions, namely,
\[
z_{0}\sim\mathcal{N}(\mu_{0},\Sigma_{0}),\quad\xi_{t}\sim\mathcal{N}(0,\Sigma_{\xi\xi}),\quad w_{t}\sim\mathcal{N}(0,\Sigma_{ww}),\quad\text{for }t=1,\ldots,T, \tag{3.33}
\]
where
\[
\Sigma_{\xi\xi}=\begin{bmatrix}\Sigma_{uu}&\Sigma_{uv}\\ \Sigma_{vu}&\Sigma_{vv}\end{bmatrix} \tag{3.34}
\]
is an \((\tilde{n}\times\tilde{n})\) covariance matrix of the random error vector \(\xi_{t}\). For the rest of the subsection, we review the Kalman filtering for our model, see also Hamilton (1994) and Lutkepohl (2005). For \(t=0,\ldots,T\), let \(q_{t}:=(y^{\prime}_{t},z^{\prime}_{t})^{\prime}\) be a \(([\tilde{n}+nq]\times 1)\) vector, composed of the endogenous variable \(y_{t}\) and the state vector \(z_{t}\), and let \(\mathcal{F}_{t}:=(\mathcal{F}_{0},\Delta^{\prime}_{1},\ldots,\Delta^{\prime}_{t},x^{\prime}_{1},\ldots,x^{\prime}_{t})\) and \(\mathcal{F}_{t}:=(\mathcal{F}_{0},\tilde{b}^{\prime}_{1},\ldots,\tilde{b}^{\prime}_{t},x^{\prime}_{1},\ldots,x^{\prime}_{t})\) be the available information at time \(t\) of the dividend-paying and non-dividend paying companies, respectively, where \(\mathcal{F}_{0}:=(B^{\prime}_{0},b^{\prime}_{1},\ldots,b^{\prime}_{T},\psi^{\prime}_{1},\ldots,\psi^{\prime}_{T})\) is the initial information for the dividend-paying companies and \(\mathcal{F}_{0}:=(B^{\prime}_{0},\psi^{\prime}_{1},\ldots,\psi^{\prime}_{T})\) is the initial information for the non-dividend paying companies. Then, system (3.29) can be written in the following form, which only depends on \(z_{t-1}\):
\[
q_{t}=\begin{bmatrix}y_{t}\\ z_{t}\end{bmatrix}=\begin{bmatrix}\Psi_{t}\Pi_{*}^{m}\mathsf{X}_{t-1}+\varphi_{t}\\ \Pi_{*}^{m}\mathsf{X}_{t-1}\end{bmatrix}+\begin{bmatrix}\Psi_{t}A\\ A\end{bmatrix}z_{t-1}+\begin{bmatrix}I_{\tilde{n}}&\Psi_{t}\\ 0&I_{nq}\end{bmatrix}\begin{bmatrix}\xi_{t}\\ \eta_{t}\end{bmatrix}\quad\text{for }t=1,2,\ldots. \tag{3.35}
\]
Because the error random vector \(\zeta_{t}:=(\xi^{\prime}_{t},\eta^{\prime}_{t})^{\prime}\) is independent of the information \(\mathcal{F}_{t-1}\), conditional on \(\mathcal{F}_{t-1}\), the expectation of the random vector \(q_{t}\) is obtained by
\[
\begin{bmatrix}y_{t|t-1}\\ z_{t|t-1}\end{bmatrix}:=\begin{bmatrix}\Psi_{t}\Pi_{*}^{m}\mathsf{X}_{t-1}+\varphi_{t}\\ \Pi_{*}^{m}\mathsf{X}_{t-1}\end{bmatrix}+\begin{bmatrix}\Psi_{t}A\\ A\end{bmatrix}z_{t-1|t-1} \tag{3.36}
\]
for \(t=1,\ldots,T\), where \(z_{0|0}:=(\mu_{0},\ldots,\mu_{0})^{\prime}\) is an \((nq\times 1)\) initial value, which consists of \(q\) copies of the vector \(\mu_{0}\). If we use the tower property of conditional expectation and the facts that the error random variables \(\xi_{t}\) and \(w_{t}\) are independent and that the error random vector \(\zeta_{t}=(\xi^{\prime}_{t},\eta^{\prime}_{t})^{\prime}\) is independent of the information \(\mathcal{F}_{t-1}\), then it is clear that
\[
\mathbb{E}\big((z_{t-1}-z_{t-1|t-1})\zeta^{\prime}_{t}|\mathcal{F}_{t-1}\big)=0,\quad\mathbb{E}(\xi_{t}\eta^{\prime}_{t}|\mathcal{F}_{t-1})=0, \tag{3.37}
\]
for \(t=1,\ldots,T\).
Consequently, it follows from equation (3.35) that conditional on \(\mathcal{F}_{t-1}\), the covariance matrix of the random vector \(q_{t}\) is given by
\[
\Sigma(q_{t}|t-1):=\text{Cov}(q_{t}|\mathcal{F}_{t-1})=\begin{bmatrix}\Sigma(y_{t}|t-1)&\Sigma(z_{t},y_{t}|t-1)^{\prime}\\ \Sigma(z_{t},y_{t}|t-1)&\Sigma(z_{t}|t-1)\end{bmatrix} \tag{3.38}
\]
for \(t=1,\ldots,T\), where, conditional on \(\mathcal{F}_{t-1}\), the covariance matrix of the state vector \(z_{t}\) is
\[
\Sigma(z_{t}|t-1)=A\Sigma(z_{t-1}|t-1)A^{\prime}+\Sigma_{\eta\eta}, \tag{3.39}
\]
where \(\Sigma_{\eta\eta}:=\text{Cov}(\eta_{t})=\text{diag}\{\Sigma_{ww},0\}\) is an \((nq\times nq)\) matrix and \(\Sigma(z_{0}|0):=\text{diag}\{\Sigma_{0},\ldots,\Sigma_{0}\}\) is an \((nq\times nq)\) matrix, which consists of \(q\) copies of the covariance matrix \(\Sigma_{0}\); conditional on \(\mathcal{F}_{t-1}\), the variance of the endogenous variable \(y_{t}\) is
\[
\Sigma(y_{t}|t-1):=\Psi_{t}\Sigma(z_{t}|t-1)\Psi^{\prime}_{t}+\Sigma_{\xi\xi}; \tag{3.40}
\]
and, conditional on \(\mathcal{F}_{t-1}\), the covariance matrix between the endogenous variable \(y_{t}\) and the state vector \(z_{t}\) is
\[
\Sigma(z_{t},y_{t}|t-1)=\Sigma(z_{t}|t-1)\Psi^{\prime}_{t}. \tag{3.41}
\]
As a result, due to equations (3.36) and (3.39)-(3.41), for given \(\mathcal{F}_{t-1}\), the conditional distribution of the process \(q_{t}\) is given by
\[
q_{t}=\begin{bmatrix}y_{t}\\ z_{t}\end{bmatrix}\ \bigg|\ \mathcal{F}_{t-1}\sim\mathcal{N}\left(\begin{bmatrix}y_{t|t-1}\\ z_{t|t-1}\end{bmatrix},\begin{bmatrix}\Sigma(y_{t}|t-1)&\Sigma(z_{t},y_{t}|t-1)^{\prime}\\ \Sigma(z_{t},y_{t}|t-1)&\Sigma(z_{t}|t-1)\end{bmatrix}\right). \tag{3.42}
\]
It follows from the well-known formula for the conditional distribution of a multivariate normal random vector and equation (3.42) that the conditional distribution of the state vector \(z_{t}\) given the endogenous variable \(y_{t}\) and the information \(\mathcal{F}_{t-1}\) is given by
\[
z_{t}\ |\ y_{t},\mathcal{F}_{t-1}\sim\mathcal{N}\Big(z_{t|t-1}+\mathcal{K}_{t}\big(y_{t}-y_{t|t-1}\big),\Sigma(z_{t}|t-1)-\mathcal{K}_{t}\Sigma(y_{t}|t-1)\mathcal{K}^{\prime}_{t}\Big) \tag{3.43}
\]
for \(t=1,\dots,T\), where \(\mathcal{K}_{t}:=\Sigma(z_{t},y_{t}|t-1)\Sigma^{-1}(y_{t}|t-1)\) is the Kalman filter gain. Therefore, since \(\mathcal{F}_{t}=\{y_{t},\mathcal{F}_{t-1}\}\), we have
\[
z_{t|t}:=\mathbb{E}(z_{t}|\mathcal{F}_{t})=z_{t|t-1}+\mathcal{K}_{t}\big(y_{t}-y_{t|t-1}\big),\quad t=1,\dots,T, \tag{3.44}
\]
and
\[
\Sigma(z_{t}|t):=\mathrm{Cov}(z_{t}|\mathcal{F}_{t})=\Sigma(z_{t}|t-1)-\mathcal{K}_{t}\Sigma(y_{t}|t-1)\mathcal{K}_{t}^{\prime},\quad t=1,\dots,T. \tag{3.45}
\]
Because the error random vector \(\zeta_{t}=(\xi_{t},\eta_{t})^{\prime}\) for \(t=T+1,T+2,\dots\) is independent of the full information \(\mathcal{F}_{T}\) and the state vector at time \(t-1\), \(z_{t-1}\), it follows from equation (3.29) and the tower property of conditional expectation that the Kalman filter's forecast step is given by the following equations:
\[
\begin{bmatrix}y_{t|T}\\ z_{t|T}\end{bmatrix}=\begin{bmatrix}\Psi_{t}z_{t|T}+\varphi_{t}\\ Az_{t-1|T}+\Pi_{*}^{m}\mathsf{X}_{t-1}\end{bmatrix}\text{ and }\begin{bmatrix}\Sigma(y_{t}|T)\\ \Sigma(z_{t}|T)\end{bmatrix}=\begin{bmatrix}\Psi_{t}\Sigma(z_{t}|T)\Psi_{t}^{\prime}+\Sigma_{\xi\xi}\\ A\Sigma(z_{t-1}|T)A^{\prime}+\Sigma_{\eta\eta}\end{bmatrix},\;t=T+1,T+2,\dots. \tag{3.46}
\]
The Kalman filter considered above provides an algorithm for filtering the unobserved state vector \(z_{t}\).
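To make the filtering recursions concrete, a minimal Python/numpy sketch of one prediction-update step is given below. The argument names mirror the symbols above; the time-varying inputs (\(\Psi_{t}\), \(\varphi_{t}\), \(\mathsf{X}_{t-1}\)) and the covariances \(\Sigma_{\xi\xi}\) and \(\Sigma_{\eta\eta}=\text{diag}\{\Sigma_{ww},0\}\) are assumed to be supplied by the caller.

```python
import numpy as np

def kalman_step(z_prev, P_prev, y_t, Psi_t, phi_t, A, Pi_m_star, X_prev,
                Sigma_xi, Sigma_eta):
    """One Kalman prediction-update step for system (3.29), following
    equations (3.36) and (3.39)-(3.45)."""
    # Prediction, eqs. (3.36) and (3.39)
    z_pred = A @ z_prev + Pi_m_star @ X_prev
    P_pred = A @ P_prev @ A.T + Sigma_eta
    y_pred = Psi_t @ z_pred + phi_t
    # Innovation covariance and Kalman gain, eqs. (3.40)-(3.43)
    S = Psi_t @ P_pred @ Psi_t.T + Sigma_xi
    K = P_pred @ Psi_t.T @ np.linalg.inv(S)
    # Update, eqs. (3.44)-(3.45)
    z_filt = z_pred + K @ (y_t - y_pred)
    P_filt = P_pred - K @ S @ K.T
    return z_pred, P_pred, z_filt, P_filt, K
```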
To estimate the parameters of our models (3.27) and (3.28), in addition to the Kalman filter, we also need to make inferences about the state vector \(z_{t}\) for \(t=1,\dots,T\) based on the full information \(\mathcal{F}_{T}\), see below. Such an inference is called the smoothed estimate of the state vector \(z_{t}\). The rest of the section is devoted to developing an algorithm, which is used to calculate the smoothed estimate \(z_{t|T}:=\mathbb{E}(z_{t}|\mathcal{F}_{T})\) for \(t=0,\dots,T-1\). Conditional on the information \(\mathcal{F}_{t}\), the conditional distribution of the random vector \((z_{t+1},z_{t})^{\prime}\) is given by
\[
\begin{bmatrix}z_{t+1}\\ z_{t}\end{bmatrix}\ \bigg|\ \mathcal{F}_{t}\sim\mathcal{N}\left(\begin{bmatrix}z_{t+1|t}\\ z_{t|t}\end{bmatrix},\begin{bmatrix}\Sigma(z_{t+1}|t)&\Sigma(z_{t},z_{t+1}|t)^{\prime}\\ \Sigma(z_{t},z_{t+1}|t)&\Sigma(z_{t}|t)\end{bmatrix}\right) \tag{3.47}
\]
for \(t=0,\dots,T-1\), where \(\Sigma(z_{t},z_{t+1}|t):=\mathrm{Cov}(z_{t},z_{t+1}|\mathcal{F}_{t})\) is the covariance between the state vectors at times \(t\) and \(t+1\) given the information \(\mathcal{F}_{t}\). It follows from equation (3.29) that the covariance is calculated by \(\Sigma(z_{t},z_{t+1}|t)=\Sigma(z_{t}|t)A^{\prime}\). If we use the well-known formula for the conditional distribution of a multivariate normal random vector once again, then the conditional distribution of the random state vector at time \(t\) given the state at time \(t+1\) and the information \(\mathcal{F}_{t}\) is given by
\[
z_{t}\ |\ z_{t+1},\mathcal{F}_{t}\sim\mathcal{N}\Big(z_{t|t}+\mathcal{S}_{t}\big(z_{t+1}-z_{t+1|t}\big),\Sigma(z_{t}|t)-\mathcal{S}_{t}\Sigma(z_{t+1}|t)\mathcal{S}_{t}^{\prime}\Big) \tag{3.48}
\]
for \(t=0,\dots,T-1\), where \(\mathcal{S}_{t}:=\Sigma(z_{t},z_{t+1}|t)\Sigma^{-1}(z_{t+1}|t)\) is the Kalman smoother gain. Because conditional on the state vector \(z_{t+1}\), the state vector at time \(t\), \(z_{t}\), is independent of the endogenous variable vector \((y_{t+1},\dots,y_{T})^{\prime}\), for each \(t=0,\dots,T-1\), it holds that \(\mathbb{E}(z_{t}|z_{t+1},\mathcal{F}_{T})=\mathbb{E}(z_{t}|z_{t+1},\mathcal{F}_{t})=z_{t|t}+\mathcal{S}_{t}\big(z_{t+1}-z_{t+1|t}\big)\). Therefore, it follows from the tower property of conditional expectation and the conditional expectation in equation (3.48) that the smoothed inference of the state vector \(z_{t}\) is obtained by
\[
z_{t|T}=\mathbb{E}\big(\mathbb{E}(z_{t}|z_{t+1},\mathcal{F}_{T})\big|\mathcal{F}_{T}\big)=z_{t|t}+\mathcal{S}_{t}\big(z_{t+1|T}-z_{t+1|t}\big) \tag{3.49}
\]
for \(t=0,\dots,T-1\). Using equation (3.49), the difference between the state vector \(z_{t}\) and its Kalman smoother \(z_{t|T}\) is represented by
\[
z_{t}-z_{t|T}=z_{t}-\big[z_{t|t}+\mathcal{S}_{t}(z_{t+1}-z_{t+1|t})\big]+\mathcal{S}_{t}(z_{t+1}-z_{t+1|T}). \tag{3.50}
\]
Observe that the square-bracketed term in the above equation is the conditional expectation of the state vector at time \(t\), which is given in equation (3.48).
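A backward-pass sketch of the smoothing recursion (3.49), together with the covariance recursion (3.51) derived just below, is as follows; it is a minimal Rauch-Tung-Striebel-type illustration assuming the filtered moments and one-step predictions have been stored from a forward pass (e.g., with the `kalman_step` sketch above).

```python
import numpy as np

def kalman_smoother(z_filt, P_filt, z_pred, P_pred, A):
    """Backward recursion of equations (3.49) and (3.51).
    z_filt[t], P_filt[t]: filtered moments for t = 0..T;
    z_pred[t], P_pred[t]: one-step predictions for t = 1..T."""
    T = len(z_filt) - 1
    z_sm, P_sm = [None] * (T + 1), [None] * (T + 1)
    z_sm[T], P_sm[T] = z_filt[T], P_filt[T]
    for t in range(T - 1, -1, -1):
        # Smoother gain S_t = Sigma(z_t|t) A' Sigma(z_{t+1}|t)^{-1}
        S_t = P_filt[t] @ A.T @ np.linalg.inv(P_pred[t + 1])
        z_sm[t] = z_filt[t] + S_t @ (z_sm[t + 1] - z_pred[t + 1])
        P_sm[t] = P_filt[t] - S_t @ (P_pred[t + 1] - P_sm[t + 1]) @ S_t.T
    return z_sm, P_sm
```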
Thus, if we use the conditional covariance matrix of the state vector \(z_{t}\), which is given in equation (3.48), and use the tower property of conditional expectation once more, then we obtain that
\[
\Sigma(z_{t}|T)=\mathbb{E}\big((z_{t}-z_{t|T})(z_{t}-z_{t|T})^{\prime}\big|\mathcal{F}_{T}\big)=\Sigma(z_{t}|t)-\mathcal{S}_{t}\big(\Sigma(z_{t+1}|t)-\Sigma(z_{t+1}|T)\big)\mathcal{S}_{t}^{\prime} \tag{3.51}
\]
and
\[
\Sigma(z_{t},z_{t+1}|T)=\mathbb{E}\big((z_{t}-z_{t|T})(z_{t+1}-z_{t+1|T})^{\prime}\big|\mathcal{F}_{T}\big)=\mathcal{S}_{t}\Sigma(z_{t+1}|T) \tag{3.52}
\]
for \(t=0,\ldots,T-1\). Firstly, let us consider the dividend-paying firm, which is given by system (3.27). In the EM algorithm, one considers a joint density function of a random vector, which is composed of observed variables and state (latent) variables. In our case, the vectors of observed variables and state variables correspond to the vector of dividend-to-book ratios and economic variables, \(y:=(y^{\prime}_{1},\ldots,y^{\prime}_{T})^{\prime}\), and the vector of price-to-book ratio vectors, \(m:=(m^{\prime}_{0},\ldots,m^{\prime}_{T})^{\prime}\), respectively. Interesting usages of the EM algorithm in econometrics can be found in Hamilton (1990) and Schneider (1992). Let us denote the joint density function by \(f_{y,m}(y,m)\). The EM algorithm consists of two steps. In the expectation (E) step of the EM algorithm, one has to determine the form of the expectation of the log of the joint density given the full information \(\mathcal{F}_{T}\). We denote the expectation by \(\Lambda(\theta|\mathcal{F}_{T})\), that is, \(\Lambda(\theta|\mathcal{F}_{T}):=\mathbb{E}\big(\ln(f_{y,m}(y,m))|\mathcal{F}_{T}\big)\). For our model (3.27), one can show that the expectation of the log of the joint density of the vector of observed variables \(y\) and the vector of price-to-book ratio vectors \(m\) is
\[
\Lambda(\theta|\mathcal{F}_{T})=-\frac{(\tilde{n}+n+1)T}{2}\ln(2\pi)-\frac{T}{2}\ln(|\Sigma_{\xi\xi}|)-\frac{T}{2}\ln(|\Sigma_{ww}|)-\frac{1}{2}\ln(|\Sigma_{0}|)-\frac{1}{2}\sum_{t=1}^{T}\mathbb{E}\Big[u^{\prime}_{t}\Omega_{uu}u_{t}\Big|\mathcal{F}_{T}\Big]-\sum_{t=1}^{T}\mathbb{E}\Big[u^{\prime}_{t}\Omega_{uv}v_{t}\Big|\mathcal{F}_{T}\Big]-\frac{1}{2}\sum_{t=1}^{T}\mathbb{E}\Big[v^{\prime}_{t}\Omega_{vv}v_{t}\Big|\mathcal{F}_{T}\Big]-\frac{1}{2}\sum_{t=1}^{T}\mathbb{E}\Big[w^{\prime}_{t}\Sigma^{-1}_{ww}w_{t}\Big|\mathcal{F}_{T}\Big]-\frac{1}{2}\mathbb{E}\Big[(m_{0}-\mu_{0})^{\prime}\Sigma^{-1}_{0}\big(m_{0}-\mu_{0}\big)\Big|\mathcal{F}_{T}\Big], \tag{3.53}
\]
where \(\theta:=\big(\mathrm{vec}(A^{k}_{0})^{\prime},\mu^{\prime}_{0},\mathrm{vec}(\Phi)^{\prime},\mathrm{vec}(\Pi^{x})^{\prime},\mathrm{vec}(\Pi^{m})^{\prime},\mathrm{vech}(\Sigma_{\xi\xi})^{\prime},\mathrm{vech}(\Sigma_{ww})^{\prime},\mathrm{vech}(\Sigma_{0})^{\prime}\big)^{\prime}\) is a \(([n(l+q+nq)+\tilde{n}(l+\tilde{n}p)+(\tilde{n}(\tilde{n}+1)+n(n+1)+nq(nq+1))/2]\times 1)\) vector, which consists of all parameters of the model (3.27), \(\Omega_{uu}\), \(\Omega_{uv}\), \(\Omega_{vu}\), and \(\Omega_{vv}\) are the partitions of the matrix \(\Sigma^{-1}_{\xi\xi}\), corresponding to the random vector \(\xi_{t}=(u^{\prime}_{t},v^{\prime}_{t})^{\prime}\),
\[
u_{t}=\Delta_{t}+\mathrm{diag}\{i_{n}+b_{t}\}m_{t}-\mathrm{diag}\{i_{n}+A^{k}_{0}\psi_{t}\}m_{t-1} \tag{3.54}
\]
\[
\phantom{u_{t}}=\Delta_{t}+M_{t}(i_{n}+b_{t})-M_{t-1}(i_{n}+A^{k}_{0}\psi_{t}), \tag{3.55}
\]
\[
v_{t}=x_{t}-\Pi^{x}\mathsf{X}_{t-1}, \tag{3.56}
\]
and
\[
w_{t}=m_{t}-\Phi z_{t-1}-\Pi^{m}\mathsf{X}_{t-1} \tag{3.57}
\]
are the \((n\times 1)\), \((\ell\times 1)\), and \((n\times 1)\) white noise processes, respectively, and \(M_{t}:=\mathrm{diag}\{m_{t}\}\) is an \((n\times n)\) diagonal matrix, whose diagonal elements are \(m_{t}\). In the maximization (M) step of the EM algorithm, we need to find a maximum likelihood estimator \(\hat{\theta}\) that maximizes the expectation, which is determined in the E step. According to equation (3.55), the white noise process \(u_{t}\) can be written as
\[
u_{t}=\Delta_{t}+M_{t}(i_{n}+b_{t})-M_{t-1}i_{n}-(\psi^{\prime}_{t}\otimes M_{t-1})a^{k}_{0}, \tag{3.58}
\]
where \(a^{k}_{0}:=\mathrm{vec}(A^{k}_{0})\) is the vectorization of the matrix \(A^{k}_{0}\). As a result, the partial derivative of the log-likelihood function with respect to the parameter \(a^{k}_{0}\) is given by
\[
\frac{\partial\Lambda(\theta|\mathcal{F}_{T})}{\partial(a^{k}_{0})^{\prime}}=\sum_{t=1}^{T}\mathbb{E}\bigg[\Big(u^{\prime}_{t}\Omega_{uu}+v^{\prime}_{t}\Omega_{vu}\Big)(\psi^{\prime}_{t}\otimes M_{t-1})\bigg|\mathcal{F}_{T}\bigg]. \tag{3.59}
\]
Let \(J_{m}:=[I_{n}:0:\cdots:0]\) be an \((n\times nq)\) matrix, whose first block matrix is \(I_{n}\) and whose other blocks are zero, \(z_{t-1,t-1|T}:=\mathbb{E}\big[z_{t-1}z^{\prime}_{t-1}\big|\mathcal{F}_{T}\big]=\Sigma(z_{t-1}|T)+z_{t-1|T}z^{\prime}_{t-1|T}\) be an \((nq\times nq)\) smoothed matrix, and \(z_{t-1,t|T}:=\mathbb{E}\big[z_{t-1}z_{t}^{\prime}\big|\mathcal{F}_{T}\big]=\mathcal{S}_{t-1}\Sigma(z_{t}|T)+z_{t-1|T}z_{t|T}^{\prime}\) be an \((nq\times nq)\) smoothed matrix, see equation (3.52). The matrix \(J_{m}\) can be used to extract the smoothed inference vector \(m_{t|T}\) and the smoothed inference matrices \(m_{t-1,t-1|T}:=\mathbb{E}\big[m_{t-1}m_{t-1}^{\prime}\big|\mathcal{F}_{T}\big]\) and \(m_{t-1,t|T}:=\mathbb{E}\big[m_{t-1}m_{t}^{\prime}\big|\mathcal{F}_{T}\big]\) from the smoothed inference vector \(z_{t|T}\) and the smoothed inference matrices \(z_{t-1,t-1|T}\) and \(z_{t-1,t|T}\), that is, \(m_{t|T}=J_{m}z_{t|T}\), \(m_{t-1,t-1|T}=J_{m}z_{t-1,t-1|T}J_{m}^{\prime}\), and \(m_{t-1,t|T}=J_{m}z_{t-1,t|T}J_{m}^{\prime}\). Since for all vectors \(a,b\in\mathbb{R}^{n}\) and matrices \(C\in\mathbb{R}^{n\times n}\), it holds that \(\mathrm{diag}\{a\}C\mathrm{diag}\{b\}=C\odot(ab^{\prime})\), it follows from equation (3.59) that the ML estimator of the parameter \(a^{k}_{0}\) is obtained by the following equation
\[
\hat{a}_{0}^{k}:=\Bigg(\sum_{t=1}^{T}\Big(\psi_{t}\psi_{t}^{\prime}\otimes\Big[\Omega_{uu}\odot\big(J_{m}z_{t-1,t-1|T}J_{m}^{\prime}\big)\Big]\Big)\Bigg)^{-1}\sum_{t=1}^{T}\bigg(\psi_{t}\otimes\Big[\mathrm{diag}\big\{J_{m}z_{t-1|T}\big\}\Big(\Omega_{uu}\Delta_{t}+\Omega_{uv}(x_{t}-\Pi^{x}\mathsf{X}_{t-1})\Big)+\Big(\Omega_{uu}\odot\big(J_{m}z_{t-1,t|T}J_{m}^{\prime}\big)\Big)\big(i_{n}+b_{t}\big)-\Big(\Omega_{uu}\odot\big(J_{m}z_{t-1,t-1|T}J_{m}^{\prime}\big)\Big)i_{n}\Big]\bigg). \tag{3.60}
\]
Due to equation (3.56), the white noise process \(v_{t}\) is represented by \(v_{t}=x_{t}-(\mathsf{X}_{t-1}^{\prime}\otimes I_{\ell})\pi^{x}\), where \(\pi^{x}:=\mathrm{vec}(\Pi^{x})\) is the vectorization of the matrix \(\Pi^{x}\).
Consequently, the partial derivative of the log-likelihood function with respect to the parameter \(\pi^{x}\) is given by
\[
\frac{\partial\Lambda(\theta|\mathcal{F}_{T})}{\partial(\pi^{x})^{\prime}}=\sum_{t=1}^{T}\mathbb{E}\bigg[\Big(v_{t}^{\prime}\Omega_{vv}+u_{t}^{\prime}\Omega_{uv}\Big)(\mathsf{X}_{t-1}^{\prime}\otimes I_{\ell})\bigg|\mathcal{F}_{T}\bigg]. \tag{3.61}
\]
Let \(\bar{z}_{\bullet|T}:=[z_{1|T}:\cdots:z_{T|T}]\) be an \((nq\times T)\) smoothed inference matrix and \(\bar{z}_{-1|T}:=[z_{0|T}:\cdots:z_{T-1|T}]\) be an \((nq\times T)\) smoothed inference matrix, which backshifts the matrix \(\bar{z}_{\bullet|T}\) by one period. After some manipulation, we obtain the ML estimator of the parameter \(\Pi^{x}\):
\[
\hat{\Pi}^{x}:=\Big(\bar{x}+\Omega_{vv}^{-1}\Omega_{vu}\big(\bar{\Delta}+(J_{m}\bar{z}_{\bullet|T})\odot(i_{n}\otimes i_{T}^{\prime}+\bar{b})-(J_{m}\bar{z}_{-1|T})\odot(i_{n}\otimes i_{T}^{\prime}+A_{0}^{k}\bar{\psi})\big)\Big)\bar{\mathsf{X}}^{\prime}(\bar{\mathsf{X}}\bar{\mathsf{X}}^{\prime})^{-1}. \tag{3.62}
\]
Because of equation (3.57), the white noise process \(w_{t}\) is represented by \(w_{t}=J_{m}z_{t}-\Phi z_{t-1}-(\mathsf{X}_{t-1}^{\prime}\otimes I_{n})\pi^{m}\), where \(\pi^{m}:=\mathrm{vec}(\Pi^{m})\) is the vectorization of the matrix \(\Pi^{m}\). Therefore, the partial derivative of the log-likelihood function with respect to the parameter \(\pi^{m}\) is given by
\[
\frac{\partial\Lambda(\theta|\mathcal{F}_{T})}{\partial(\pi^{m})^{\prime}}=\sum_{t=1}^{T}\mathbb{E}\Big[w_{t}^{\prime}\Sigma_{ww}^{-1}(\mathsf{X}_{t-1}^{\prime}\otimes I_{n})\Big|\mathcal{F}_{T}\Big]. \tag{3.63}
\]
From the above equation, it can be shown that the ML estimator of the parameter \(\Pi^{m}\) is given by
\[
\hat{\Pi}^{m}:=\big(J_{m}\bar{z}_{\bullet|T}-\Phi\bar{z}_{-1|T}\big)\bar{\mathsf{X}}^{\prime}(\bar{\mathsf{X}}\bar{\mathsf{X}}^{\prime})^{-1}. \tag{3.64}
\]
According to equation (3.57), the white noise process \(w_{t}\) is also represented by \(w_{t}=J_{m}z_{t}-(z_{t-1}^{\prime}\otimes I_{n})\phi-\Pi^{m}\mathsf{X}_{t-1}\), where \(\phi:=\mathrm{vec}(\Phi)\) is the vectorization of the matrix \(\Phi\). Therefore, the partial derivative of the log-likelihood function with respect to the parameter \(\phi\) is given by
\[
\frac{\partial\Lambda(\theta|\mathcal{F}_{T})}{\partial\phi^{\prime}}=\sum_{t=1}^{T}\mathbb{E}\Big[w_{t}^{\prime}\Sigma_{ww}^{-1}(z_{t-1}^{\prime}\otimes I_{n})\Big|\mathcal{F}_{T}\Big]. \tag{3.65}
\]
Since \((z_{t-1}\otimes I_{n})\Sigma_{ww}^{-1}J_{m}z_{t}=\mathrm{vec}\big(\Sigma_{ww}^{-1}J_{m}z_{t}z_{t-1}^{\prime}\big)\), after some manipulation, we arrive at the ML estimator of the parameter \(\Phi\):
\[
\hat{\Phi}:=\Bigg(\sum_{t=1}^{T}J_{m}z_{t-1,t|T}^{\prime}-\Pi^{m}\bar{\mathsf{X}}\bar{z}_{-1|T}^{\prime}\Bigg)\Bigg(\sum_{t=1}^{T}z_{t-1,t-1|T}\Bigg)^{-1}, \tag{3.66}
\]
where \(\bar{\mathsf{X}}:=[\mathsf{X}_{0}:\cdots:\mathsf{X}_{T-1}]\) is an \(([l+\ell p]\times T)\) matrix of explanatory variables.
For the estimators of the covariance matrices \(\Sigma_{\xi\xi}\), \(\Sigma_{ww}\), and \(\Sigma_{0}\), the following formulas hold:
\[
\hat{\Sigma}_{\xi\xi}:=\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}[\xi_{t}\xi^{\prime}_{t}|\mathcal{F}_{T}]=\frac{1}{T}\sum_{t=1}^{T}\begin{bmatrix}\mathbb{E}[u_{t}u^{\prime}_{t}|\mathcal{F}_{T}]&\mathbb{E}[u_{t}v^{\prime}_{t}|\mathcal{F}_{T}]\\ \mathbb{E}[v_{t}u^{\prime}_{t}|\mathcal{F}_{T}]&\mathbb{E}[v_{t}v^{\prime}_{t}|\mathcal{F}_{T}]\end{bmatrix}, \tag{3.67}
\]
\[
\hat{\Sigma}_{ww}:=\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}[w_{t}w^{\prime}_{t}|\mathcal{F}_{T}],\quad\text{and}\quad\hat{\Sigma}_{0}:=\Sigma(z_{0}|T). \tag{3.68}
\]
To calculate the conditional expectations \(\mathbb{E}\big(\xi_{t}\xi^{\prime}_{t}|\mathcal{F}_{T}\big)\) and \(\mathbb{E}\big(w_{t}w^{\prime}_{t}|\mathcal{F}_{T}\big)\), observe that the random error processes at time \(t\) of the observation and state equations can be represented by
\[
u_{t}=u_{t|T}-\Psi_{\Delta,t}(z_{t}-z_{t|T}),\qquad v_{t}=x_{t}-\Pi^{x}\mathsf{X}_{t-1},\qquad w_{t}=w_{t|T}+J_{m}(z_{t}-z_{t|T})-\Phi(z_{t-1}-z_{t-1|T}), \tag{3.69}
\]
where \(\Psi_{\Delta,t}:=[-\text{diag}\{i_{n}+b_{t}\}:\text{diag}\{i_{n}+A_{0}^{k}\psi_{t}\}:0:\cdots:0]\) is an \((n\times nq)\) matrix whose third to \(q\)-th block matrices are zero. Therefore, as \(v_{t}\), \(u_{t|T}\), and \(w_{t|T}\) are measurable with respect to the full information \(\mathcal{F}_{T}\) (known at time \(T\)), it follows from equations (3.69) that
\[
\mathbb{E}\big(u_{t}u^{\prime}_{t}|\mathcal{F}_{T}\big)=u_{t|T}u^{\prime}_{t|T}+\Psi_{\Delta,t}\Sigma(z_{t}|T)\Psi^{\prime}_{\Delta,t},\quad\mathbb{E}\big(u_{t}v^{\prime}_{t}|\mathcal{F}_{T}\big)=u_{t|T}v^{\prime}_{t},\quad\mathbb{E}\big(v_{t}v^{\prime}_{t}|\mathcal{F}_{T}\big)=v_{t}v^{\prime}_{t}, \tag{3.70}
\]
and
\[
\mathbb{E}\big(w_{t}w^{\prime}_{t}|\mathcal{F}_{T}\big)=w_{t|T}w^{\prime}_{t|T}+J_{m}\Sigma(z_{t}|T)J^{\prime}_{m}+\Phi\Sigma(z_{t-1}|T)\Phi^{\prime}-J_{m}\Sigma(z_{t}|T)\mathcal{S}^{\prime}_{t-1}\Phi^{\prime}-\Phi\mathcal{S}_{t-1}\Sigma(z_{t}|T)J^{\prime}_{m}. \tag{3.71}
\]
If we substitute equations (3.70) and (3.71) into (3.67) and (3.68), then under suitable conditions the zig-zag iteration that corresponds to equations (3.36), (3.39), (3.40), (3.44), (3.45), (3.49), (3.51), (3.60), (3.62), (3.64), and (3.66)-(3.68) converges to the maximum likelihood estimators of our private company valuation model. Now we consider the non-dividend paying companies. For the non-dividend paying companies, the white noise process \(u_{t}\) is given by
\[
u_{t}=\tilde{b}_{t}-A_{0}^{k}\psi_{t}+J_{m}(z_{t}-z_{t-1}). \tag{3.72}
\]
In a similar manner as for the dividend-paying companies, it can be shown that the ML estimators of the parameters \(A_{0}^{k}\) and \(\Pi^{x}\) are obtained by the following equations:
\[
\hat{A}_{0}^{k}:=\Big(\bar{b}+J_{m}\big(\bar{z}_{\bullet|T}-\bar{z}_{-1|T}\big)+\Omega_{uu}^{-1}\Omega_{uv}\big(\bar{x}-\Pi^{x}\bar{\mathsf{X}}\big)\Big)\bar{\psi}^{\prime}(\bar{\psi}\bar{\psi}^{\prime})^{-1} \tag{3.73}
\]
and
\[
\hat{\Pi}^{x}:=\Big(\bar{x}+\Omega_{vv}^{-1}\Omega_{vu}\big(\bar{b}-A_{0}^{k}\bar{\psi}+J_{m}\big(\bar{z}_{\bullet|T}-\bar{z}_{-1|T}\big)\big)\Big)\bar{\mathsf{X}}^{\prime}(\bar{\mathsf{X}}\bar{\mathsf{X}}^{\prime})^{-1}. \tag{3.74}
\]
Since \(u_{t}=u_{t|T}-\Psi_{\tilde{b},t}(z_{t}-z_{t|T})\), where \(u_{t|T}=\tilde{b}_{t}-A_{0}^{k}\psi_{t}+J_{m}(z_{t|T}-z_{t-1|T})\) is an \((n\times 1)\) smoothed white noise process and \(\Psi_{\tilde{b},t}:=[-I_{n}:I_{n}:0:\cdots:0]\) is an \((n\times nq)\) matrix, whose third to \(q\)-th block matrices are zero, one finds that
\[
\mathbb{E}\big(u_{t}u^{\prime}_{t}|\mathcal{F}_{T}\big)=u_{t|T}u^{\prime}_{t|T}+\Psi_{\tilde{b},t}\Sigma(z_{t}|T)\Psi^{\prime}_{\tilde{b},t}. \tag{3.75}
\]
Using the same method as for the dividend-paying companies, one can obtain the remaining ML estimators of the parameters of the non-dividend paying company. Let us suppose that the parameter estimates of our model have been obtained. Then, a smoothed inference of the market value process at time \(t\) of the private company is calculated by the following formula:
\[
V_{t|T}=m_{t|T}B_{t},\quad t=0,1,\ldots,T, \tag{3.76}
\]
where \(m_{t|T}=\exp\{\tilde{m}_{t|T}\}\) is the smoothed multiplier vector at time \(t\) for the non-dividend paying company, while \(m_{t|T}=J_{m}z_{t|T}\) for the dividend-paying company. Also, an analyst can forecast the market value process of the private company by using equations (3.46) and (3.76).

## 4 Numerical Results

We start by applying the estimation method for parameter estimation of our model, see Section 3.4. By way of illustration, we have chosen three companies from different sectors (Healthcare, Financial Services, and Consumer), listed in the S&P 500 index. In order to increase the number of price and dividend observation points, we take quarterly data instead of yearly data. Our data covers the period from Q1 1990 to Q3 2021. That leads to \(T=127\) observations for Johnson & Johnson, PepsiCo, and JPMorgan. All quarterly price and dividend data have been collected from Thomson Reuters Eikon. The dividends of the selected companies have different patterns. In particular, JPMorgan cut its dividend by a huge amount due to the 2008/2009 financial crisis, while the other companies have continuously increasing dividend dynamics, which were not affected by the 2008/2009 financial crisis. For our model, we assume for all companies that a default never occurs. We present the parameter estimates for the selected companies in Table 1. Rows 2-9 of Table 1 correspond to the required rates of return of the companies modeled by the regime-switching process with three regimes, and rows 10-13 of the same Table correspond to the case where the required rates of return of the companies take constant values (the regime-switching process has one regime). In order to obtain the estimates of the parameters corresponding to rows 2-9 of Table 1, we assume that the regime-switching process \(s_{t}\) follows a Markov chain with three regimes, namely, the up regime (regime 1), the normal regime (regime 2), and the down regime (regime 3), and we use equations (2.21)-(2.25). Since the explanations are comparable for the other companies, we will give explanations only for PepsiCo. In the 2nd row of Table 1, we provide estimates of the parameters \(\tilde{k}(1),\tilde{k}(2),\tilde{k}(3)\). For PepsiCo, in regimes 1, 2, and 3, the estimates of the required rate of return are 19.44%, 3.37%, and -20.86%, respectively. For example, in the normal regime, the required rate of return of PepsiCo could be 3.37% on average. The 3-5th rows of Table 1 correspond to the transition probability matrix \(P\).
For the selected companies, their transition probability matrices \(P\) are ergodic, where ergodic means that one of the eigenvalues of \(P\) is unity and that all other eigenvalues of \(P\) are inside the unit circle, see Hamilton (1994). From the 3rd row of Table 1, one can deduce that if the required rate of return of PepsiCo is in the up regime, then in the next period it will switch to the normal regime with a probability of 0.814 or to the down regime with a probability of 0.186, because it cannot remain in the up regime due to zero probability. If the required rate of return of PepsiCo is in the normal regime, corresponding to row 4 of the Table, then in the next period it cannot switch to the up regime because of zero probability, and it will stay in the normal regime with a probability of 0.962 or switch to the down regime with a probability of 0.038. Finally, if the required rate of return of PepsiCo is in the down regime, then in the next period it will switch to the up regime with a probability of 0.840 or stay in the down regime with a probability of 0.160, due to the normal regime's zero probability, see the 5th row of the same Table. We provide the average persistence times of the regimes in the 6th row of Table 1. The average persistence time of regime \(j\) is defined by \(\tau_{j}:=1/(1-p_{jj})\) for \(j=1,2,3\). From Table 1, one can conclude that the up, normal, and down regimes of PepsiCo's required rate of return will persist on average for 1.0, 26.6, and 1.2 quarters, respectively. In the 7th row of Table 1, we give the ergodic probabilities \(\pi\) of the selected companies. The ergodic probability vector \(\pi\) of an ergodic Markov chain is defined by the equation \(P\pi=\pi\). The ergodic probability vector represents long-run probabilities, which do not depend on the initial probability vector \(\rho=z_{1|0}\). After sufficiently long periods, the required rate of return of PepsiCo will be in the up regime with a probability of \(0.042\), the normal regime with a probability of \(0.908\), or the down regime with a probability of \(0.050\), irrespective of the initial regime. The 8th row of Table 1 is devoted to the long-run expectations of the required rates of return of the selected companies. The long-run expectation of the required rate of return is defined by \(k_{\infty}:=\lim_{t\to\infty}\mathbb{E}(k(s_{t}))\). For PepsiCo, it equals \(2.83\%\), so that after long periods, the average required rate of return of PepsiCo converges to \(2.83\%\). In the 9th row of Table 1, we present the parameter estimates of the standard deviations of the error random variables \(u_{t}\) for the selected companies. For PepsiCo, the parameter estimate equals \(0.070\). The 13th row of Table 1 corresponds to the parameter estimates of the standard deviations in the case where the required rates of return of the companies are modeled by a regime-switching process with one regime. For PepsiCo, the parameter estimate equals \(0.094\), where we used equation (3.1). Comparing the 9th and 13th rows of the Table, we can see that the estimates corresponding to the regime-switching process with three regimes are lower than the ones corresponding to the regime-switching process with one regime. Finally, the log required rate of return estimates at time Q3 2021 of the firms are presented in row 10 of the Table, while the corresponding \(95\%\) confidence intervals are included in rows 11 and 12 below. To calculate the log required rate of return estimates and confidence bands, we used equations (2.41)/(2.42) and (2.47).
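As a quick check of these numbers, the persistence times and the ergodic distribution can be recomputed from the transition probabilities in Table 1. A short numpy sketch using PepsiCo's matrix follows; we store the table's rows as transitions from each regime, so the ergodic vector is found from the eigenvector of \(P^{\prime}\) associated with the unit eigenvalue (small discrepancies against the Table come from the rounding of the reported probabilities).

```python
import numpy as np

# PepsiCo transition probabilities from Table 1
# (rows = "from" regime: up, normal, down; columns = "to" regime)
P = np.array([[0.000, 0.814, 0.186],
              [0.000, 0.962, 0.038],
              [0.840, 0.000, 0.160]])

# Average persistence time of regime j: tau_j = 1 / (1 - p_jj)
tau = 1.0 / (1.0 - np.diag(P))          # approx [1.0, 26.3, 1.19]

# Ergodic probabilities: unit-eigenvalue eigenvector of P'
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()                       # approx [0.042, 0.908, 0.050]
print(tau, pi)
```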
The table further illustrates the average log returns (\(2.84\%\) for PepsiCo) and the return variability, as the return is expected to lie within the (\(1.18\%\), \(4.50\%\)) interval with \(95\%\) probability. It is worth mentioning that, since the calculations are based on the log required rate of return, we should convert them to the required rate of return using the formula \(k_{i}=\exp\{\tilde{k}_{i}\}-1\) for each company, see equation (2.48). In particular, for PepsiCo, the point estimate of the required rate of return is \(k_{2}=\exp\{2.84\%\}-1=2.88\%\), and the \(95\%\) confidence interval is \(\big{(}\exp\{1.18\%\}-1,\exp\{4.50\%\}-1\big{)}=(1.19\%,4.60\%)\). Also, note that since the required rate of return estimate expresses the average quarterly return of the companies, we can convert it to a yearly basis using the formula \((1+k)^{4}-1\).

For the selected firms, it is instructive to plot the probabilistic inferences together with the log return series. For each period \(t=1,\ldots,T\) and each firm, the probabilistic inferences are calculated by equation (2.21) and the log return series are calculated by the formula \(\tilde{k}_{t}:=\ln\big{(}(P_{t}+d_{t})/P_{t-1}\big{)}\). In Figure 1, we plot the resulting series as functions of the period \(t\); the left axis corresponds to the return series, while the right axis corresponds to the probabilistic inference series for each company. From the Figure, and from the 9th and 13th rows of Table 1, we can deduce that the regime-switching processes with three regimes are better suited to explain the required rate of return series than the regime-switching processes with one regime. From the Figure, we may also expect that the log required rates of return of the companies follow conditionally heteroscedastic models.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline Row & Params & \multicolumn{3}{c|}{Johnson \& Johnson} & \multicolumn{3}{c|}{PepsiCo} & \multicolumn{3}{c|}{JPMorgan} \\
\hline 2. & \(k(j)\) & 14.88\% & 3.30\% & –22.41\% & 19.44\% & 3.37\% & –20.86\% & 41.42\% & 2.79\% & –45.85\% \\
\hline 3. & & 0.000 & 1.000 & 0.000 & 0.000 & 0.814 & 0.186 & 0.193 & 0.807 & 0.000 \\
\hline 4. & \(P\) & 0.036 & 0.937 & 0.027 & 0.000 & 0.962 & 0.038 & 0.007 & 0.954 & 0.039 \\
\hline 5. & & 0.756 & 0.000 & 0.244 & 0.840 & 0.000 & 0.160 & 1.000 & 0.000 & 0.000 \\
\hline 6. & \(\tau_{j}\) & 1.000 & 15.79 & 1.322 & 1.000 & 26.60 & 1.191 & 1.239 & 21.60 & 1.000 \\
\hline 7. & \(\pi\) & 0.058 & 0.910 & 0.033 & 0.042 & 0.908 & 0.050 & 0.052 & 0.912 & 0.035 \\
\hline 8. & \(k_{\infty}\) & \multicolumn{3}{c|}{3.12\%} & \multicolumn{3}{c|}{2.83\%} & \multicolumn{3}{c|}{3.09\%} \\
\hline 9. & \(\sigma_{3}\) & \multicolumn{3}{c|}{0.064} & \multicolumn{3}{c|}{0.070} & \multicolumn{3}{c|}{0.124} \\
\hline 10. & \(k\) & \multicolumn{3}{c|}{3.14\%} & \multicolumn{3}{c|}{2.84\%} & \multicolumn{3}{c|}{3.08\%} \\
\hline 11. & \(k_{L}\) & \multicolumn{3}{c|}{1.66\%} & \multicolumn{3}{c|}{1.18\%} & \multicolumn{3}{c|}{–0.06\%} \\
\hline 12. & \(k_{U}\) & \multicolumn{3}{c|}{4.62\%} & \multicolumn{3}{c|}{4.50\%} & \multicolumn{3}{c|}{6.23\%} \\
\hline 13. & \(\sigma_{1}\) & \multicolumn{3}{c|}{0.084} & \multicolumn{3}{c|}{0.094} & \multicolumn{3}{c|}{0.178} \\
\hline
\end{tabular}
\end{table}

Table 1: ML Estimation for the Markov–Switching DDM
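A quick numerical check of the conversion formulas discussed above, using PepsiCo's entries from Table 1 (our illustrative sketch; the printed values match the text up to rounding):

```python
import numpy as np

k_tilde = 0.0284               # PepsiCo log-return point estimate (row 10)
ci_log = (0.0118, 0.0450)      # 95% CI for the log return (rows 11-12)

k = np.exp(k_tilde) - 1        # quarterly required rate of return, Eq. (2.48)
ci = tuple(np.exp(c) - 1 for c in ci_log)
k_annual = (1 + k) ** 4 - 1    # conversion to a yearly basis

print(f"{k:.2%}")                      # 2.88%
print([f"{c:.2%}" for c in ci])        # ['1.19%', '4.60%']
print(f"{k_annual:.2%}")               # 12.03%
```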
By using the econometric program EViews 12, one can conclude that the log required rates of return of Johnson & Johnson and PepsiCo, demeaned by intercepts, are white noise processes, while the log required rate of return of JPMorgan can be modeled by an AR(0)–ARCH(1) process, namely, \[\tilde{k}_{3,t}=0.022+\xi_{3,t},\quad\xi_{3,t}=\sigma_{3,t}\varepsilon_{3,t},\quad\sigma_{3,t}^{2}=0.015+0.616\xi_{3,t}^{2}.\] Because the coefficient of \(\xi_{3,t}^{2}\) in the above equation lies in the interval \([0,1)\), the log required rate of return process of JPMorgan is a covariance stationary process, see McNeil et al. (2005). Finally, let us consider the Bayesian estimator of the companies' log required rates of return. Since each of the log required rate of return processes is covariance stationary, we take \(\delta_{i}=0\) for \(i=1,2,3\). By using Akaike's and Schwarz's information criteria, we deduce that for the three companies, the order of the simple VAR(\(p\)) process is \(p=1\). For this reason, we also set the order of the Bayesian VAR(\(p\)) process to \(p=1\). Observe that, because all companies' log required rates of return are covariance stationary, the prior expectation matrix \(\Pi_{0}\) equals zero, i.e., \(\Pi_{0}=0\). Since each company's log required rate of return follows an AR(0) process, for each \(i=1,2,3\), we estimate the parameter \(\sigma_{i}^{2}\) by the sample variance \(s_{i}^{2}=\frac{1}{T}\sum_{t=1}^{T}(\tilde{k}_{i,t}-\bar{k}_{i})^{2}\), where \(\bar{k}_{i}=\frac{1}{T}\sum_{t=1}^{T}\tilde{k}_{i,t}\) is the sample mean of the \(i\)-th company's log required rate of return. To obtain the Bayesian estimator, we need to define the other hyperparameters: \(\lambda_{1}=5^{2}\), \(\lambda_{2}=0.2^{2}\), \(\nu_{0}=\tilde{n}+2=5\), and \(V_{0}=\text{diag}\{s_{1}^{2},s_{2}^{2},s_{3}^{2}\}\). By using equations (2.65) and (2.66), we obtain the Bayesian estimators of the parameters \(\Pi\) and \(\Sigma\), which are given in Table 2. It should be noted that by applying the results in Table 2 and the Gibbs sampling method mentioned in Section 2.2, one may make inferences about the parameters and forecasts of the log required rates of return of the companies.

Figure 1: Returns vs. Regime Probabilities of Selected Companies

## 5 Conclusion

The most popular practical method used to estimate the required rate of return on equity is the CAPM. However, the CAPM is sensitive to its inputs. Therefore, in this paper, instead of the traditional CAPM and its descendant versions, we introduce new estimation methods, covering ML methods with regime switching, the Bayesian method, and Kalman filtering, to estimate the required rate of return on equity. The required rate of return on equity has several practical applications. For example, in addition to its usage in stock valuation, it is an ingredient of the WACC. If a company is financed by liabilities that are publicly traded on exchanges, one can estimate the required rate of return of debtholders using the suggested methods. In this case, one can estimate the WACC of the company. In practice, the market price of a liability (debt) equals the sum of the payments of the liability discounted at the market interest rate, see, e.g., Brealey et al. (2020). In this paper, we introduce a simple method that evaluates the market values of liabilities. The method covers not only individual liabilities on the balance sheet but also the balance sheet's total liabilities.
Our purpose is to estimate the required rate of return on equity. However, the suggested methods can also be used to estimate other significant parameters of the private company valuation model. In particular, we estimate the price-to-book ratio vector by the ML method with regime switching and by the Bayesian method, and the state (unobserved, latent) variable process of the price-to-book ratio by the Kalman filtering method. For the Kalman filtering method, we develop the EM algorithm. If the book values of the next periods are known, one may use forecasting inferences of the state variable to value a company in those periods. Future research should concentrate on extending the private company valuation model with a state variable to a state-space model with regime switching, see Kim (1994).
2309.03485
Accelerated Decay due to Operator Spreading in Bulk-Dissipated Quantum Systems
Markovian open many-body quantum systems display complicated relaxation dynamics. The spectral gap of the Liouvillian characterizes the asymptotic decay rate towards the stationary state, but it has recently been pointed out that the spectral gap does not necessarily determine the overall relaxation time. Our understanding of the relaxation process before the asymptotically long-time regime is still limited. We here present a collective relaxation dynamics of autocorrelation functions in the stationary state. As a key quantity in the analysis, we introduce the instantaneous decay rate, which characterizes the transient relaxation and converges to the conventional asymptotic decay rate in the long-time limit. Our theory predicts that a bulk-dissipated system generically shows an accelerated decay before the asymptotic regime due to the scrambling of quantum information associated with the operator spreading.
Tatsuhiko Shirai, Takashi Mori
2023-09-07T05:35:08Z
http://arxiv.org/abs/2309.03485v3
# Accelerated Decay due to Operator Spreading in Bulk-Dissipated Quantum Systems

###### Abstract

Markovian open many-body quantum systems display complicated relaxation dynamics. The spectral gap of the Liouvillian characterizes the asymptotic decay rate towards the stationary state, but it has recently been pointed out that the spectral gap does not necessarily determine the overall relaxation time. Our understanding of the relaxation process before the asymptotically long-time regime is still limited. We here present a collective relaxation dynamics of autocorrelation functions in the stationary state. As a key quantity in the analysis, we introduce the instantaneous decay rate, which characterizes the transient relaxation and converges to the conventional asymptotic decay rate in the long-time limit. Our theory predicts that a bulk-dissipated system generically shows an accelerated decay before the asymptotic regime due to the scrambling of quantum information associated with the operator spreading.

_Introduction.--_ It is a fundamental problem in nonequilibrium statistical physics to elucidate how a quantum system approaches stationarity under dissipative couplings to large environments [1; 2; 3]. This problem has been investigated mainly for small quantum systems, whereas our understanding of the nonequilibrium dynamics of open _many-body_ quantum systems is still limited. The recent experimental progress using ultracold atoms and trapped ions has made it possible to introduce controlled dissipation and realize many-body quantum systems with desired properties [4; 5; 6; 7; 8]. This experimental background also motivates us to study the generic dynamical properties of open many-body quantum systems. Recently, it was pointed out that open many-body quantum systems exhibit counterintuitive dynamical features. The dynamics of Markovian open quantum systems are generated by the Liouvillian superoperator of the celebrated Lindblad form [9; 10]. The spectral gap of the Liouvillian, which is simply called the Liouvillian gap, gives the asymptotic decay rate [3], and thus it is expected that the relaxation time is given by the inverse of the Liouvillian gap. However, it was found that the relaxation time can be much longer than the inverse of the Liouvillian gap when there is a conserved current in the bulk (this is always the case when the dissipation acts only at the boundaries of the system) [11; 12; 13; 14]. The point is that the crossover time into the asymptotic regime is very long, even diverging in the thermodynamic limit. If the transient regime is dominant for the overall relaxation process, the decay rate in the transient regime determines the relaxation time. Hence, we should investigate the relaxation dynamics in the transient regime, which is the main focus of our work. In this Letter, we discuss the transient dynamics of quantum many-body systems under _bulk dissipation_. In contrast to boundary dissipation, we find that the relaxation time is generically much _shorter_ than the inverse of the Liouvillian gap in bulk-dissipated systems. Based on a rigorous inequality on the autocorrelation function, we introduce a key quantity called the instantaneous decay rate. We argue that the instantaneous decay rate exhibits three distinct dynamical regimes in the weak dissipation regime: (1) the acceleration regime, (2) the plateau regime, and (3) the asymptotic regime. Figure 1 illustrates the dynamics of the instantaneous decay rate for various system sizes in a bulk-dissipated system.
In the acceleration regime, the decay rate increases with time. This is a universal phenomenon under bulk dissipation, and this accelerated decay is explained by the operator-size growth (or the operator spreading) under unitary time evolution without dissipation. In the plateau regime, the decay rate saturates at a constant value proportional to \(\gamma N\), where \(N\) is the system size. In the asymptotic regime, the decay rate converges to the asymptotic value, which is identical to the Liouvillian gap. In the following, we first provide the general setup and then present the main result. We demonstrate the relevance of the theoretical results in a bulk-dissipated spin chain. _Setup.--_ Let us consider a finite quantum system, whose state at time \(t\) is represented by the density matrix \(\rho(t)\). Its time evolution is generated by the Liouvillian superoperator \(\mathcal{L}\) of the Lindblad form [1]: \(d\rho(t)/dt=\mathcal{L}\rho(t)\), where \[\mathcal{L}\rho=-i[\hat{H},\rho]+\gamma\sum_{k}\left(\hat{L}_{k}\rho\hat{L}_{k}^{\dagger}-\frac{1}{2}\left\{\hat{L}_{k}^{\dagger}\hat{L}_{k},\rho\right\}\right). \tag{1}\] The first term on the right-hand side expresses the intrinsic unitary dynamics under the Hamiltonian \(\hat{H}\), whereas the second term represents the dissipation characterized by a set of Lindblad jump operators \(\{\hat{L}_{k}\}\); \(\gamma\) denotes the strength of the dissipation. We put \(\hbar=1\). The Lindblad form ensures physically natural properties such as complete positivity [9; 10]. Let us denote by \(\{\lambda_{\alpha}\}\) the eigenvalues of \(\mathcal{L}\). It is shown that any eigenvalue has a non-positive real part, and hence we sort the eigenvalues in descending order: \[0=\lambda_{0}>\operatorname{Re}\lambda_{1}\geq\operatorname{Re}\lambda_{2}\geq\ldots. \tag{2}\] In this work, we assume that the zero eigenvalue is not degenerate: the steady state is unique. The Liouvillian gap \(g\) is defined as \[g=-\operatorname{Re}\lambda_{1}. \tag{3}\] The Liouvillian gap determines the asymptotic decay rate. To see this, we write a formal solution for the density matrix as \[\rho(t)=e^{\mathcal{L}t}\rho(0)=\rho_{\text{ss}}+\sum_{\alpha\neq 0}e^{\lambda_{\alpha}t}C_{\alpha}\rho_{\alpha}, \tag{4}\] where \(\rho_{\text{ss}}\) is the density matrix of the stationary state and \(\rho_{\alpha}\) is the \(\alpha\)th right eigenvector of \(\mathcal{L}\): \(\mathcal{L}\rho_{\alpha}=\lambda_{\alpha}\rho_{\alpha}\). The complex expansion coefficients are denoted by \(C_{\alpha}\). In the long-time limit, the first excited state (\(\alpha=1\)) gives the dominant contribution to the deviation from the stationary state: \(\rho(t)\sim\rho_{\text{ss}}+e^{\lambda_{1}t}C_{1}\rho_{1}\), and hence we have \[\|\rho(t)-\rho_{\text{ss}}\|_{1}\sim e^{-gt} \tag{5}\] in the long-time asymptotic regime, where \(\|\cdot\|_{1}\) denotes the trace norm. Let \(\hat{A}\) be a Hermitian operator. The autocorrelation function of \(\hat{A}\) in the stationary state is given by [15] \[C_{AA}(t)\coloneqq\langle\hat{A}(t),\hat{A}\rho_{\text{ss}}\rangle=\langle\hat{A}(t),\hat{A}\rangle_{\text{ss}}\,. \tag{6}\] We can assume \(\langle\hat{A}\rangle_{\text{ss}}\coloneqq\operatorname{Tr}(\hat{A}\rho_{\text{ss}})=0\) without loss of generality. Here, we introduce two inner products. The first inner product is defined as \(\langle\hat{A},\hat{B}\rangle=\operatorname{Tr}(\hat{A}^{\dagger}\hat{B})\).
Accordingly, we define an adjoint superoperator \(\tilde{\mathcal{L}}\) of \(\mathcal{L}\) as follows: \[\langle\hat{A},\mathcal{L}\hat{B}\rangle=\langle\tilde{\mathcal{L}}\hat{A},\hat{B}\rangle\,. \tag{7}\] Physically, \(\tilde{\mathcal{L}}\) generates the dynamics of physical quantities in the Heisenberg picture. The expectation value of a Hermitian operator \(\hat{A}\) at time \(t\) is expressed as \[\langle\hat{A}(t)\rangle=\operatorname{Tr}[\hat{A}\rho(t)]=\langle\hat{A},e^{\mathcal{L}t}\rho(0)\rangle=\langle e^{\tilde{\mathcal{L}}t}\hat{A},\rho(0)\rangle\,. \tag{8}\] Then, \(\hat{A}(t)\coloneqq e^{\tilde{\mathcal{L}}t}\hat{A}\) is interpreted as the time-evolved operator in the Heisenberg picture. It is explicitly given by \[\tilde{\mathcal{L}}\hat{A}=i[\hat{H},\hat{A}]+\gamma\sum_{k}\left(\hat{L}_{k}^{\dagger}\hat{A}\hat{L}_{k}-\frac{1}{2}\left\{\hat{L}_{k}^{\dagger}\hat{L}_{k},\hat{A}\right\}\right). \tag{9}\] By using the Hermiticity-conserving property of \(\mathcal{L}\), it is shown that \(\tilde{\mathcal{L}}\) has the same eigenvalue spectrum as \(\mathcal{L}\). The second inner product is defined as \[\langle\hat{A},\hat{B}\rangle_{\text{ss}}\coloneqq\operatorname{Tr}(\hat{A}^{\dagger}\hat{B}\rho_{\text{ss}}). \tag{10}\] The corresponding adjoint superoperator \(\tilde{\mathcal{L}}^{*}\) of \(\tilde{\mathcal{L}}\) associated with \(\langle\cdot,\cdot\rangle_{\text{ss}}\) is defined by \[\langle\hat{A},\tilde{\mathcal{L}}\hat{B}\rangle_{\text{ss}}=\langle\tilde{\mathcal{L}}^{*}\hat{A},\hat{B}\rangle_{\text{ss}}\,. \tag{11}\] It should be noted that \(\tilde{\mathcal{L}}^{*}\) depends on the stationary state \(\rho_{\text{ss}}\). It is shown that \(\tilde{\mathcal{L}}^{*}\) is expressed as \(\tilde{\mathcal{L}}^{*}\hat{A}=\mathcal{L}(\hat{A}\rho_{\text{ss}})\rho_{\text{ss}}^{-1}\) [16], where we assume that \(\rho_{\text{ss}}\) is invertible for simplicity. Again, \(\tilde{\mathcal{L}}^{*}\) has the same eigenvalue spectrum as \(\mathcal{L}\). A recent work introduced the symmetrized Liouvillian to study the relaxation dynamics in the transient regime [17]: \[\tilde{\mathcal{L}}_{s}=\frac{\tilde{\mathcal{L}}+\tilde{\mathcal{L}}^{*}}{2}. \tag{12}\] The symmetrized Liouvillian is a non-positive Hermitian superoperator (i.e., \(\tilde{\mathcal{L}}_{s}\leq 0\) and \(\tilde{\mathcal{L}}_{s}^{*}=\tilde{\mathcal{L}}_{s}\)). The non-positivity is shown by using the complete positivity of \(\mathcal{L}\). Let us denote by \(\{s_{\alpha}\}\) the eigenvalues of \(\tilde{\mathcal{L}}_{s}\) and sort them in descending order: \[0=s_{0}\geq s_{1}\geq s_{2}\geq\ldots. \tag{13}\] \(\tilde{\mathcal{L}}_{s}\) has a zero eigenvalue: \(\tilde{\mathcal{L}}_{s}\hat{I}=0\), where \(\hat{I}\) is the identity operator. The spectral gap \(g_{s}\) of \(\tilde{\mathcal{L}}_{s}\) is defined by \[g_{s}=-s_{1}. \tag{14}\] The spectral gap gives a simple bound on the autocorrelation function in the stationary state [17]: \[|C_{AA}(t)|\leq e^{-g_{s}t}C_{AA}(0). \tag{15}\] However, as we see in the following, we need a more sophisticated analysis beyond a spectral-gap bound like Eq. (15) to describe the generic relaxation dynamics of bulk-dissipated quantum many-body systems. _Instantaneous decay rate.--_ Let us investigate how the autocorrelation function in the stationary state decays. Although the discussion below is general, for clarity we consider a bulk-dissipated Ising spin chain under periodic boundary conditions.
The bulk Hamiltonian is given by \[\hat{H}=\sum_{i=1}^{N}\left(h^{z}\hat{\sigma}_{i}^{z}+h^{x}\hat{\sigma}_{i}^{x}+J\hat{\sigma}_{i}^{z}\hat{\sigma}_{i+1}^{z}\right), \tag{16}\] where \(\hat{\sigma}_{i}^{x,y,z}\) denote the Pauli matrices at site \(i\). We fix the parameters as \(h^{z}=0.9045\), \(h^{x}=0.809\), and \(J=1\). The jump operator acts on every site and is given by \[\hat{L}_{i}=\hat{\sigma}_{i}^{-}=\frac{1}{2}(\hat{\sigma}_{i}^{x}-i\hat{\sigma}_{i}^{y}). \tag{17}\] We assume weak dissipation and set \(\gamma=0.01\). This open quantum system has been implemented with Rydberg atoms under laser driving and dissipation [18; 19]. Each Rydberg atom is regarded as a two-level system, where spin up (down) corresponds to the Rydberg (ground) state. In this model, the Liouvillian gap is independent of the system size (i.e., \(g\sim O(N^{0})\)). Figure 2 shows the autocorrelation functions of a local operator \(\hat{A}=\hat{\sigma}_{1}^{z}-\langle\hat{\sigma}_{1}^{z}\rangle_{\rm ss}\) for various system sizes by solid lines. In the long-time limit, the autocorrelation function decays at the rate of the Liouvillian gap. A different decay rate appears in the transient regime, and it is much larger than the Liouvillian gap. To understand how the transient dynamics emerges, we give an upper bound on the autocorrelation function: \[|C_{AA}(t)|\leq e^{-\int_{0}^{t}ds\,\kappa_{A}(s)}C_{AA}(0). \tag{18}\] Here, we introduce the _instantaneous decay rate_ \(\kappa_{A}(t)\), defined by using the symmetrized Liouvillian \(\tilde{\mathcal{L}}_{s}\) as \[\kappa_{A}(t)=-\frac{\langle\hat{A}(t),\tilde{\mathcal{L}}_{s}\hat{A}(t)\rangle_{\rm ss}}{\langle\hat{A}(t),\hat{A}(t)\rangle_{\rm ss}}. \tag{19}\] We now prove the inequality. The autocorrelation function is generally bounded as follows: \[|C_{AA}(t)|\leq\sqrt{\langle\hat{A}(t),\hat{A}(t)\rangle_{\rm ss}}\sqrt{\langle\hat{A},\hat{A}\rangle_{\rm ss}}, \tag{20}\] where we have used the Cauchy-Schwarz inequality \(|\,\langle\hat{A},\hat{B}\rangle_{\rm ss}\,|\leq\sqrt{\langle\hat{A},\hat{A}\rangle_{\rm ss}\,\langle\hat{B},\hat{B}\rangle_{\rm ss}}\). The time evolution of the quantity \(\langle\hat{A}(t),\hat{A}(t)\rangle_{\rm ss}\) is given by \[\frac{d}{dt}\,\langle\hat{A}(t),\hat{A}(t)\rangle_{\rm ss}=2\,\langle\hat{A}(t),\tilde{\mathcal{L}}_{s}\hat{A}(t)\rangle_{\rm ss}\,. \tag{21}\] Equation (21) is then formally solved as \[\langle\hat{A}(t),\hat{A}(t)\rangle_{\rm ss}=e^{-2\int_{0}^{t}ds\,\kappa_{A}(s)}\,\langle\hat{A},\hat{A}\rangle_{\rm ss}\,. \tag{22}\] By substituting this into Eq. (20), we obtain Eq. (18). The instantaneous decay rate is thus the decay rate of the quantity \(\sqrt{\langle\hat{A}(t),\hat{A}(t)\rangle_{\rm ss}}\), and it gives an upper bound on the autocorrelation function as in Eq. (18). Now we list important properties of the instantaneous decay rate. Firstly, it is non-negative, \(\kappa_{A}(t)\geq 0\), since \(\tilde{\mathcal{L}}_{s}\) is a non-positive superoperator. Secondly, \(\kappa_{A}(t)\) is not smaller than the spectral gap \(g_{s}\) of \(\tilde{\mathcal{L}}_{s}\): \[\kappa_{A}(t)\geq g_{s}. \tag{23}\] This is proved by using the following property: \[g_{s}=\inf_{\hat{X}\neq 0,\,\langle\hat{X}\rangle_{\rm ss}=0}\left(-\frac{\langle\hat{X},\tilde{\mathcal{L}}_{s}\hat{X}\rangle_{\rm ss}}{\langle\hat{X},\hat{X}\rangle_{\rm ss}}\right). \tag{24}\]
The condition \(\langle\hat{X}\rangle_{\rm ss}=\langle\hat{I},\hat{X}\rangle_{\rm ss}=0\) ensures that \(\hat{X}\) is orthogonal to \(\hat{I}\), which is the eigenvector of \(\tilde{\mathcal{L}}_{s}\) corresponding to the zero eigenvalue, and thus \(g_{s}\) is expressed in the variational form of Eq. (24). Equation (23) shows that the instantaneous decay rate reproduces the simple spectral-gap bound on the autocorrelation function, Eq. (15); that simple bound, however, cannot capture the three dynamical regimes manifest in Fig. 1. Thirdly, the instantaneous decay rate converges to the asymptotic decay rate (i.e., the Liouvillian gap) in the long-time limit: \[\lim_{t\to\infty}\kappa_{A}(t)=g, \tag{25}\] when \(\operatorname{Tr}(\hat{A}\rho_{1})\neq 0\) and \(\operatorname{Im}\lambda_{n}=0\) for every \(n\) with \(\operatorname{Re}\lambda_{n}=-g\). _Accelerated decays due to operator spreading.--_ To gain insight into the relaxation dynamics in Fig. 2, we investigate the instantaneous decay rate. Figure 1 illustrates the dynamics of \(\kappa_{A}(t)\) for various system sizes by solid lines. We find three distinct dynamical regimes: the acceleration regime, the plateau regime, and the asymptotic regime. \(\kappa_{A}(t)\) initially increases in the acceleration regime, takes an almost constant value in the plateau regime, and then decreases and converges to the Liouvillian gap in the asymptotic regime. The growth of \(\kappa_{A}(t)\) implies an acceleration of the decay. For weak dissipation, it is expected that the time evolution of \(\hat{A}(t)\) appearing in the instantaneous decay rate, Eq. (19), is well approximated by the unitary dynamics without dissipation. The instantaneous decay rate is thus approximated as \[\kappa_{A}^{(0)}(t)=-\frac{\langle\hat{A}_{0}(t),\tilde{\mathcal{L}}_{s}\hat{A}_{0}(t)\rangle_{\rm ss}}{\langle\hat{A}_{0}(t),\hat{A}_{0}(t)\rangle_{\rm ss}},\quad\hat{A}_{0}(t)=e^{i\hat{H}t}\,\hat{A}e^{-i\hat{H}t}. \tag{26}\]

Figure 2: Dynamics of the autocorrelation function \(C_{AA}(t)\) with \(\hat{A}=\hat{\sigma}_{1}^{z}-\langle\hat{\sigma}_{1}^{z}\rangle_{\rm ss}\) for various system sizes in the bulk-dissipated Ising spin chain. Solid lines show \(|C_{AA}(t)/C_{AA}(0)|\) for \(N=4,6,8\) from top to bottom. Dashed lines show the upper bound on the correlation functions in Eq. (18).

Figure 1 shows the dynamics of \(\kappa_{A}^{(0)}(t)\) for \(N=7\) and \(8\). This approximation correctly reproduces the instantaneous decay rate in the acceleration and plateau regimes, indicating that the growth of the instantaneous decay rate arises from the unitary dynamics, while dissipation is relevant only for the crossover to the asymptotic regime. We now explain the growth of the instantaneous decay rate from the viewpoint of operator spreading. Figure 3 gives a schematic of operator spreading in a one-dimensional quantum spin system. Let us consider a local operator \(\hat{A}\) acting on a single site at the initial time. In the figure, colored circles denote the sites on which the operator \(\hat{A}_{0}(t)\) acts nontrivially. Under unitary dynamics, the operator spreads over the system and the number of colored sites increases linearly with time [20; 21]. Here, the number of colored sites is called the operator size. Next, we consider the effect of bulk dissipation. The instantaneous decay rate is roughly estimated as the product of \(\gamma\) and the operator size. Consider an operator \(\hat{O}_{i}\) acting on a single site \(i\).
The operator decays at a rate \(\gamma\), since the dissipation acts independently on every site with strength \(\gamma\). An operator acting on three sites, such as \(\hat{O}_{i-1}\hat{O}_{i}\hat{O}_{i+1}\), then decays at a rate \(3\gamma\). In this way, the instantaneous decay rate accelerates due to the operator spreading in the acceleration regime. In the plateau regime, the instantaneous decay rate saturates at a value proportional to \(N\). The plateau regime is also explained by operator spreading: in this regime, the operator \(\hat{A}_{0}(t)\) has spread across the entire system, and hence the operator size is \(N\). Using the relation between the operator size and the instantaneous decay rate, \(\kappa_{A}(t)\sim\gamma N\). The dissipation thus exhibits a collective decay in this regime. In the asymptotic regime, the instantaneous decay rate converges to the Liouvillian gap \(g\). The oscillation around the Liouvillian gap in Fig. 1 implies that \(\mathrm{Im}\,\lambda_{1}\neq 0\). We find in the Supplemental Material that the crossover time into the asymptotic regime is inversely proportional to \(\gamma\) for weak dissipation. Thus, the plateau regime lasts longer for weaker dissipation. This \(\gamma\) dependence is consistent with a recent study on the dynamics of the operator size in open quantum systems [22]. Finally, we compare the exact autocorrelation function with its upper bound. In Fig. 2, we plot the autocorrelation \(|C_{AA}(t)/C_{AA}(0)|\) by solid lines and the upper bound \(e^{-\int_{0}^{t}ds\,\kappa_{A}(s)}\) by dashed lines. The autocorrelation function shows a rapid decay due to the unitary dynamics at short times. This effect is not taken into account in the upper bound, and thus the inequality in Eq. (18) is not tight. However, the upper bound qualitatively reproduces the autocorrelation function at large times. In particular, the relaxation dynamics in the transient regime is well described by the plateau value of the instantaneous decay rate. The plateau value is proportional to the system size, which is consistent with the numerical observation that a larger system shows a faster decay. Thus, the autocorrelation function of a local operator exhibits a collective decay due to the operator spreading. _Summary and Discussion.--_ We studied the autocorrelation functions of a local operator in the stationary state of bulk-dissipated quantum many-body systems. In the transient regime, the correlation exhibits a fast relaxation with a decay rate much larger than the asymptotic one. We derived a rigorous upper bound on the autocorrelation function. We demonstrated that the instantaneous decay rate shows a plateau over a long time interval (Fig. 1), which explains the collective decay found in the autocorrelation function (Fig. 2). We explained the mechanism of the accelerated decay in terms of operator spreading. This mechanism is generic in quantum many-body systems under weak bulk dissipation. In recent cold-atom experiments, bulk dissipation has been introduced in a controlled manner [8]. Ultracold atoms thus provide an experimental platform to confirm the collective relaxations predicted in this work. Since super-radiance also involves decay rates of order \(N\), let us explain how it differs from the present mechanism. Super-radiance is a phenomenon that occurs when \(N\) atoms couple to a common dissipative environment [23].
Therefore, it is qualitatively different from the accelerated decay in this work, in which the dissipation acts independently on each site. Although this work focuses on one-dimensional systems, the accelerated decay should also appear in higher-dimensional systems. Namely, the instantaneous decay rate increases as \(\kappa_{A}(t)\sim\gamma t^{d}\), where \(d\) is the spatial dimension, in the acceleration regime; \(\kappa_{A}(t)\sim\gamma N\) in the plateau regime; and \(\kappa_{A}(t)=g\) in the asymptotic regime. The crossover time between the plateau regime and the asymptotic regime should be longer for weaker dissipation. It is a future problem to study the explicit \(\gamma\) dependence of the crossover time in two- or three-dimensional systems. This work was supported by JSPS KAKENHI (Grant Numbers JP21H05185 and 23K13034) and by JST, PRESTO Grant No. JPMJPR2259.

Figure 3: Mechanism of accelerated decay in a bulk-dissipated quantum chain with \(N=7\) spins. The vertical axis denotes time. Colored circles represent sites on which the operator \(\hat{A}_{0}(t)\) acts nontrivially.
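As a self-contained illustration (an independent sketch, not the authors' code), the following NumPy snippet builds the Lindbladian of Eq. (1) for the Ising chain of Eqs. (16)–(17), using the column-stacked vectorization \(\mathrm{vec}(A\rho B)=(B^{T}\otimes A)\,\mathrm{vec}(\rho)\), and extracts the Liouvillian gap of Eq. (3). The parameters are those quoted in the text; the chain length \(N=4\) is an assumption made only to keep the dense diagonalization cheap.

```python
import numpy as np
from functools import reduce

# single-site operators
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)  # sigma^- = (sigma^x - i sigma^y)/2
I2 = np.eye(2, dtype=complex)

def site_op(op, i, N):
    """Embed a single-site operator at site i of an N-site chain."""
    ops = [I2] * N
    ops[i] = op
    return reduce(np.kron, ops)

N, hz, hx, J, gamma = 4, 0.9045, 0.809, 1.0, 0.01

# Hamiltonian of Eq. (16) with periodic boundary conditions
H = sum(hz * site_op(sz, i, N) + hx * site_op(sx, i, N)
        + J * site_op(sz, i, N) @ site_op(sz, (i + 1) % N, N)
        for i in range(N))

# Lindbladian of Eq. (1), vectorized column-wise: vec(A rho B) = (B^T kron A) vec(rho)
D = 2 ** N
Id = np.eye(D)
L_sup = -1j * (np.kron(Id, H) - np.kron(H.T, Id))
for i in range(N):
    Lk = site_op(sm, i, N)              # jump operator of Eq. (17)
    LdL = Lk.conj().T @ Lk
    L_sup += gamma * (np.kron(Lk.conj(), Lk)
                      - 0.5 * np.kron(Id, LdL)
                      - 0.5 * np.kron(LdL.T, Id))

# Liouvillian gap g = -max Re(lambda) over the nonzero eigenvalues, Eq. (3)
evals = np.linalg.eigvals(L_sup)
nonzero = evals[np.abs(evals) > 1e-8]
print("Liouvillian gap g ~", -np.max(nonzero.real))
```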
2309.14476
Navier-Stokes Equations for Low-Temperature One-Dimensional Fluids
We consider one-dimensional interacting quantum fluids, such as the Lieb-Liniger gas. By computing the low-temperature limit of its (generalised) hydrodynamics we show how in this limit the gas is well described by a conventional viscous (Navier-Stokes) hydrodynamics for density, fluid velocity and the local temperature, and the other generalised temperatures in the case of integrable gases. The dynamic viscosity is proportional to temperature and can be expressed in a universal form only in terms of the emergent Luttinger Liquid parameter $K$ and the gas density. We show that the heating factor is finite even in the zero-temperature limit, which implies that the viscous contribution remains relevant even at zero temperature. Moreover, we find that in the semi-classical limit of small couplings, the kinematic viscosity diverges, reconciling with previous observations of Kardar-Parisi-Zhang fluctuations in mean-field quantum fluids.
Andrew Urichuk, Stefano Scopa, Jacopo De Nardis
2023-09-25T19:21:21Z
http://arxiv.org/abs/2309.14476v2
# Navier-Stokes equations for low-temperature one-dimensional fluids

###### Abstract

We consider one-dimensional interacting quantum fluids, such as the Lieb-Liniger gas. By computing the low-temperature limit of its (generalised) hydrodynamics we show how in this limit the gas is well described by a conventional viscous (Navier-Stokes) hydrodynamics for density, fluid velocity and the local temperature, and the other generalised temperatures in the case of integrable gases. The dynamic viscosity is proportional to temperature and can be expressed in a universal form only in terms of the emergent Luttinger Liquid parameter \(K\) and its compressibility. We show that the heating factor is finite even in the zero-temperature limit, which implies that the viscous contribution remains relevant even at zero temperature. Moreover, we find that in the semi-classical limit of small couplings, the kinematic viscosity diverges, reconciling with previous observations of Kardar-Parisi-Zhang fluctuations in mean-field quantum fluids.

_Introduction.--_ Quantum many-body interacting systems pose an immense technical challenge to modern-day physics due to their exponential complexity. A successful approach in the past years has been to borrow concepts and ideas from classical hydrodynamic theory and apply them to quantum systems [1; 2; 3; 4; 5; 6]. The main idea behind these methods is the same in classical and quantum physics: the exponentially large information determining the state of the system is reduced to a few thermodynamic functions, the hydrodynamic fields, that well characterize the local equilibrium state. For quantum gases in one spatial dimension, the theory of generalised hydrodynamics (GHD) [7; 8] (see also e.g. [9; 10; 11] for reviews) has been shown to perfectly capture the dynamics of integrable, e.g. [12; 13; 14; 15; 16; 17; 18; 19], and near-integrable quantum gases [20; 21; 22; 23; 24; 25], as well as spin chains e.g. [26; 27; 28; 29; 30; 31; 32; 33], fermionic systems [34; 35; 36; 37; 38; 39] and classical field theories [40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. At small temperatures and for gapless or gapped systems, GHD recovers historically well-established results on the dynamics of low-temperature systems, in particular the celebrated Luttinger liquid theory [44; 45; 46; 47; 48; 49; 50] and the semi-classical approaches [51; 52]. In particular, by re-quantising the fluctuations on top of a classical background evolving with GHD at zero temperature, one recovers a Luttinger liquid theory on top of an evolving hydrodynamic fluid, connecting in this way GHD with the most relevant field-theory description of one-dimensional quantum systems [53; 54; 55; 56; 57]. At zero temperature (or entropy), GHD is effectively a set of equations describing the dynamics of the Fermi points of the fermionised degrees of freedom [12]. In particular, it was shown that, at the level of Euler hydrodynamics, therefore neglecting any viscosity, the GHD evolution of a single Fermi sea (with two Fermi points) is exactly equivalent to the conventional hydrodynamics (CHD) evolution with density \(\rho\) and momentum (or fluid velocity \(\eta\)) as the two relevant hydrodynamic modes [12; 58]. However, while the latter creates hydrodynamic shocks and fails to be meaningful after the time \(t^{*}\) of the creation of the first shock [59; 60; 61; 62], Euler GHD is free of shocks, as at time \(t^{*}\) two new modes, i.e.
two new Fermi points, are created, resolving in this way the shock into a simple contact singularity [12]. However, while this mechanism is clearly valid for strictly integrable systems, it is unclear what the correct hydrodynamics is for quasi-integrable or even non-integrable gases at very low temperatures, one that can describe different experimental settings. In this letter, we show that, as Euler CHD is plagued by hydrodynamic shocks, one cannot neglect viscosity terms, which are finite as soon as interparticle interactions are non-zero. Viscous, or diffusive, terms have indeed been incorporated into GHD for a few years [63; 64; 65; 66; 67], and they have been shown to be essential for the thermalisation of quasi-integrable systems [68] and for well describing spin dynamics in integrable spin chains [69; 70; 71; 72; 73; 74]. However, since integrable systems are typically ballistic, such terms usually account for small perturbative effects on top of the ballistic current. Here we shall show that the picture changes drastically at very low temperature: there, they enter the CHD as a _dynamic viscosity_ \(\mu(\rho,\beta)\) that fully regularises its shocks, making viscous CHD a perfectly valid hydrodynamics for low-temperature gapless systems. By taking the low-temperature limit of GHD, we here determine a simple and universal expression for the dynamic viscosity which only depends on the density \(\rho\) and on the Luttinger liquid parameter \(K(\rho)\) for a given interaction strength. Therefore, we claim that our result is universal for any one-dimensional interacting system at low temperatures. _GHD and CHD.--_ We start by deriving CHD by taking the low-temperature limit of GHD, and we consider the Lieb-Liniger model [75] as a reference, although our derivation is fully generic. The Lieb-Liniger model for \(N\) contact-interacting bosonic particles in an external potential \(V(x)\) is given by the Hamiltonian \[\hat{H}=-\frac{1}{2}\sum_{i=1}^{N}\partial_{x_{i}}^{2}+2c\sum_{i<j=1}^{N}\delta(x_{i}-x_{j})+\sum_{i=1}^{N}V(x_{i}) \tag{1}\] and represents a paradigmatic model for one-dimensional interacting systems and cold atomic gases, see e.g. Refs. [76; 77; 78]. In the repulsive regime \(c>0\), its eigenstates are labelled by (fermionic) quasiparticles with bare energy \(\varepsilon(\theta)=\theta^{2}/2\) and momentum \(k(\theta)=\theta\), with the parameter \(\theta\in(-\infty,\infty)\). In the thermodynamic limit, the state is specified by a filling function \(n(\theta)\) fixed by the temperature and chemical potential, where the \(\theta\) are the rapidities, or quasi-momenta, of the particles. Within the framework of GHD, one assumes that at each position \(x,t\) there exists a fluid cell where the gas is locally thermodynamic, and where we can introduce a local filling function \(n(\theta;x,t)\). The time evolution of the latter, given also the external force \(\mathfrak{f}=-\partial_{x}V\), reads \[\partial_{t}n(\theta;x,t)+v^{\rm eff}(\theta;x,t)\partial_{x}n(\theta;x,t)+\mathfrak{f}\partial_{\theta}n(\theta;x,t)=0 \tag{2}\] where \(v^{\rm eff}(\theta;x,t)=(\partial_{\theta}\varepsilon)^{\rm dr}/(\partial_{\theta}k)^{\rm dr}\) is the dressed velocity of the quasiparticles on top of the background fixed by the filling \(n(\theta;x,t)\).
The latter can be found from the explicit dressing operation, which reads \(f^{\rm dr}=(1-\varphi n)^{-1}\cdot f\), with the scattering kernel \(\varphi(\theta)=c/(\pi(c^{2}+\theta^{2}))\) acting as a convolution operator, \(\varphi\cdot f=\int\varphi(\theta-\alpha)f(\alpha)d\alpha\). At low temperatures, the filling function becomes very close to a sharp Fermi sea, such that \(\partial_{\theta}n(\theta;x,t)\to-\sum_{\sigma}\sigma\delta(\theta-\theta^{\sigma}(x,t))\), with \(\theta^{\sigma}\) the two Fermi edges, indexed by \(\sigma=\pm 1\); eq. (2) then becomes an equation only for the two Fermi edges \(\theta^{\sigma}(x,t)\), namely at the only points where the delta functions are non-zero. It is useful to always choose a co-moving reference frame: namely, we describe the fluid as being, at each position \(x\), in a Fermi sea with Fermi edges \(-q,q\) and Fermi velocity \(v_{F}=\pi\rho/K\), with \(K\) the Luttinger liquid parameter [46], obtained by a fluid _velocity boost_ (i.e. employing the Galilean invariance of the fluid) \(\eta=(\theta^{+}+\theta^{-})/2\), i.e. \(q=\theta^{+}-\eta\), \(-q=\theta^{-}-\eta\). In this notation, we obtain \[\sum_{\sigma}\delta(\theta-\theta^{\sigma})[\partial_{t}\theta^{\sigma}+(\eta+\sigma v_{F}(\rho))\partial_{x}\theta^{\sigma}-\mathfrak{f}]=0, \tag{3}\] with the Fermi edge \(q(x,t)\) fixed by the local density \(\rho(x,t)\). The latter can be shown, see the Supplementary Material (SM) [79], to be fully equivalent to CHD for density and fluid velocity, reading \[\partial_{t}\rho + \partial_{x}(\eta\rho)=0\] \[\partial_{t}\eta + \eta\partial_{x}\eta+\rho^{-1}\partial_{x}\mathcal{P}_{s}^{(0)}(\rho)=\mathfrak{f}, \tag{4}\] where \(\mathcal{P}_{s}^{(0)}(\rho)\) is the static pressure of the gas at zero temperature, fixed by the equation of state of the system. In the specific example of the Lieb-Liniger gas, the static pressure at zero temperature is given by the integral of the dressed energy, \(\mathcal{P}_{s}^{(0)}(\rho)=-\int_{-q}^{q}d\theta\,\varepsilon^{\rm dr}(\theta)/2\pi\), where \(q\) is obtained by inverting the relation \(\rho=\int_{-q}^{q}1^{\rm dr}(\theta)d\theta/2\pi\). Finally, we should also mention that we have used the momentum field \(p\) and introduced the fluid velocity \(\eta=p/\rho\). The equation for the fluid velocity \(\eta\) in eq. (4) takes the form of the celebrated Burgers equation, which is known to display hydrodynamic shocks [80]. In Euler GHD such shocks are resolved by adding to eqs. (3) two extra Fermi edges, i.e. by introducing \(2n\) hydrodynamic fields \(\theta^{\sigma}\), with \(\sigma=1,\ldots,2n\), that describe \(n\) split Fermi seas [12]. In the phase space \((\theta,x)\) this translates into the existence of a _continuous_ one-dimensional contour \(\Gamma(s)\) that separates the region where \(n(\theta,x)=1\) from the region where \(n(\theta,x)=0\). Clearly, this picture is based on the underlying integrability of the model, i.e. on the fact that all the rapidities \(\theta\) are conserved quantities. For generic non-integrable systems, it is clear that this phenomenon cannot be the one that regularises the shocks. As shocks in the Burgers equation are regularised by a finite viscosity, we shall see in the following section how to indeed introduce viscosity in CHD. _Viscous CHD.--_ We now wish to include diffusive, or viscosity, effects. In order to do so, we shall consider local thermodynamic states with finite but small temperatures, as there are no viscosity effects at strictly zero temperature.
We therefore consider local canonical equilibrium states with given density \(\rho\) (or chemical potential \(\mu\)), fluid velocity \(\eta\) and temperature \(T=1/\beta\). In the LL model, this corresponds to filling functions of the form \(n(\theta;x,t)=(1+e^{\beta\varepsilon(\theta;x,t)})^{-1}\), given in terms of the pseudo-energy \[\varepsilon=\epsilon_{0}(\theta-\eta(x,t))-(q(x,t))^{2}-\frac{\varphi\cdot\log(1+e^{-\beta\varepsilon})}{\beta(x,t)}, \tag{5}\] with bare energy \(\epsilon_{0}(\theta)=\theta^{2}/2\). We then move to introduce diffusion terms into CHD by taking the high-\(\beta\) limit of diffusive GHD, which is known from refs. [63; 64] and is given by adding to the r.h.s. of eq. (2) the diffusive part \[R\cdot\partial_{x}(R^{-1}\cdot\tilde{D}\cdot\partial_{x}n(\alpha)), \tag{6}\] with the kernels \(R_{\theta,\theta^{\sigma}}\) and \(\tilde{D}_{\theta,\theta^{\sigma}}\) presented in [79]. As diffusive GHD contains \(\partial_{x}^{2}\) terms, a description only in terms of the Fermi edges becomes impossible: we cannot simply use the fact that the derivative of the filling is a delta function at the edges, as this would produce \(\delta^{\prime}(\theta-\theta^{\sigma})=\partial\delta(\theta-\theta^{\sigma})\), whose meaning outside integration is unclear. The only way to clarify their role is to plug them into the hydrodynamic equations for the only two modes which are relevant here, i.e. density and momentum (fluid velocity). This already clarifies the essential difference of viscous CHD from Euler GHD: while the latter can simply be obtained as an equation for the evolution of the Fermi edges (3), and can be trivially extended to any number of them during the time evolution, viscous CHD instead requires fixing the number of Fermi edges from the start and, as we shall see, does not require producing new Fermi edges, as hydrodynamic shocks are regularised by viscosity. By means of a long but straightforward calculation, deferred to the SM [79], we then arrive at \[\partial_{t}\rho + \partial_{x}(\eta\rho)=0,\] \[\partial_{t}\eta + \eta\partial_{x}\eta+\frac{\partial_{x}\mathcal{P}_{s}(\rho,T)}{\rho}=\frac{\partial_{x}(\mu(\rho,T)\partial_{x}\eta)}{\rho}+\mathfrak{f}, \tag{7}\] where the dynamic viscosity, at leading order linear in temperature as already remarked previously [81], reads in terms of the density dependence of the emergent Luttinger liquid parameter \(K\) as \[\mu(\rho,T)=\rho\frac{T}{4\pi}K(\partial_{\rho}\log K)^{2}+O(T^{3}), \tag{8}\] which can also be rewritten in terms of the derivative of the Fermi velocity as \(\mu(\rho,T)=T(1-K\partial_{\rho}v_{F}/\pi)^{2}/4v_{F}\). This result can be understood in the same way as the kinetic picture of diffusion in integrable models proposed in [82]: the excitations close to the Fermi edges move with the Fermi velocity \(v_{F}(\rho)=\pi\rho/K(\rho)\), but as there are local density fluctuations \(\rho\to\rho+\delta\rho\) within the fluid, the Luttinger parameter (i.e. the Fermi velocity) also fluctuates, \(K\to K+\partial_{\rho}K\,\delta\rho\), giving a diffusive spreading to the trajectories. The dependence of the Luttinger parameter on the density (and therefore on the position in an inhomogeneous fluid) is what makes inhomogeneous Luttinger liquids for interacting systems non-conformally invariant [83], and it is a direct effect of non-trivial interactions.
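The strong-coupling behaviour quoted in Eq. (9) below can be checked symbolically from Eq. (8). The sketch assumes the leading large-\(c\) expansion \(K\simeq 1+4\rho/c\) (our assumption here, chosen as the form consistent with Eq. (9)); the \(4\rho T/(\pi c^{2})\) coefficient then follows:

```python
import sympy as sp

rho, T, c = sp.symbols('rho T c', positive=True)

# assumed strong-coupling expansion of the Luttinger parameter
K = 1 + 4 * rho / c

# dynamic viscosity, Eq. (8): mu = rho*T/(4*pi) * K * (d log K / d rho)^2
mu = rho * T / (4 * sp.pi) * K * sp.diff(sp.log(K), rho) ** 2

# leading large-c coefficient: mu ~ (4*rho*T/pi) / c^2, matching Eq. (9)
print(sp.limit(mu * c ** 2, c, sp.oo))   # -> 4*T*rho/pi
```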
In free Fermi gases, the dependence of the Luttinger parameter on the density trivialises, \(K\to 1\), and the viscosity vanishes, as expected for non-interacting particles. Indeed, in the limit of large coupling, the density profile of the LL model is equivalent to that of a free fermionic model, the so-called Tonks gas. The first correction in \(1/c\) to the viscosity reads \[\mu(\rho,T)=\frac{4\rho}{\pi}\frac{T}{c^{2}}+O(c^{-3}), \tag{9}\] which manifests the actual dependence on the reduced temperature \(t=T/c^{2}\) at small \(t\). The pressure is given by taking the first correction, in \(T^{2}\), to the static pressure, \(\mathcal{P}_{s}^{(0)}(\rho)\to\mathcal{P}_{s}(\rho,T)\), with \[\mathcal{P}_{s}(\rho,T)=\mathcal{P}_{s}^{(0)}(\rho)+T^{2}\rho\tilde{\chi}_{e}/2, \tag{10}\] with \(\tilde{\chi}_{e}=K/(3\rho^{2})\). Eq. (7) gives the evolution of the density and momentum of the fluid, which are in one-to-one correspondence with the Fermi edge \(q\) and the boost \(\eta\). Given that the system is at finite temperature, and that the temperature also represents a hydrodynamic variable, an extra equation is needed: the one for the evolution of the energy density \(e\). We define the energy density at rest as \(e=E/\rho-\eta^{2}/2\), where \(E\) is the total energy of the fluid, which in the LL model is computed via \(E=\int d\theta\,n(\theta)(\theta^{2}/2+V(x))^{\rm dr}\). Proceeding in a manner analogous to the derivation of eqs. (7), we obtain \[\partial_{t}e+\eta\partial_{x}e+\frac{\mathcal{P}_{s}(\rho,T)}{\rho}\partial_{x}\eta=\frac{\mu(\rho,T)}{\rho}(\partial_{x}\eta)^{2}, \tag{11}\] where, as expected, the _kinematic viscosity_ \(\nu=\mu/\rho\) now enters the equation. We can convert this into an equation for the evolution of the temperature field \(T(x,t)=1/\beta(x,t)\) using the definition of the energy susceptibility at fixed density, \(\left.\delta e/\delta T\right|_{\rho}=\tilde{\chi}_{e}T\). Given the zero-temperature expressions for generic interacting systems, we then obtain \[\partial_{t}T + \eta\partial_{x}T=-\mathcal{P}_{s}^{T}T\partial_{x}\eta+\frac{\mu(\rho,T)}{T\rho\tilde{\chi}_{e}}(\partial_{x}\eta)^{2} \tag{12}\] with \(\mathcal{P}_{s}^{T}=\pi\left(1/\rho+\partial_{\rho}v_{F}/v_{F}\right)/(3v_{F}\tilde{\chi}_{e})\). Notice that one may expect a term accounting for thermal conduction, \(\partial_{x}(k(\rho,T)\partial_{x}T)\), on the r.h.s. of (12); however, since the thermal conductivity is \(k(\rho,T)\sim T^{2}\), we neglect this term as being of second order in temperature. Eqs. (7) and (12) take exactly the same form as the standard Navier-Stokes equations for a fluid, i.e. the continuity equation and the conservation of momentum and energy, and it is quite remarkable that we can derive them in an exact, non-perturbative way. We first notice that the heating factor in eq. (12) is \(\sim\mu/T\), namely it is of order zero in temperature. Therefore, even if the system is initially at zero temperature, the rapid growth of the velocity gradient \(\partial_{x}\eta\) heats the system, thereby giving a finite viscosity \(\mu\) to the dynamics of \(\eta\) and regularising its shocks. Moreover, given that the dynamic viscosity in eq. (8) is expressed only in terms of the universal features of the low-temperature effective field theory, namely the Luttinger liquid constant \(K\) and its density dependence (which is non-trivial only for interacting quantum gases), we conjecture that its form is universal for generic one-dimensional quantum fluids.
The argument is simple: at low temperatures, all quantum fluids become effectively integrable, as their description is in terms of freely propagating Luttinger liquid bosonic modes. Their diffusion is therefore expected to be the same as for integrable quasiparticles, where the kinetic picture explained above applies. We also stress that there is no fundamental difference in the nature of transport at small temperatures between integrable and non-integrable systems, contrary to what is claimed in previous literature [84]. _Front dynamics in integrable gases.--_ We now focus on the fate of an initial density bump, for example in a system at a given temperature \(T_{0}\) and density \(\rho_{0}\). Such a setting is paradigmatic for understanding the response of a system to external perturbations, and we here use it to establish the main differences between viscous CHD and integrable diffusive GHD at low temperature, see Fig. 1. When the system is fully integrable, for example in the absence of forces \(\mathfrak{f}\), one could expect that not only the temperature and chemical potential characterise a local stationary state, but also a large number of Lagrange multipliers associated with higher charges, i.e. a Generalised Gibbs Ensemble (GGE), thereby deviating from the behaviour of a generic non-integrable system. Namely, even if the gas is initially in a thermal state, higher chemical potentials can be activated as time grows. In particular, a first deviation from a thermal state is obtained by considering the pseudo-energy of eq. (5) and substituting \[\epsilon_{0}\to\frac{\theta^{2}}{2}+\gamma_{3}\frac{\theta^{3}}{3!}+\gamma_{4}\frac{\theta^{4}}{4!} \tag{13}\] with \(\gamma_{n}\) the chemical potentials associated with the conserved charges \(\hat{Q}_{n}\), whose expectation values read \(Q_{n}=\int d\theta\,n(\theta)(\theta^{n})^{\mathrm{dr}}/n!\). Such a state is a valid stationary state of the LL gas, due to its integrability. By extending the calculation done for the case of the temperature (12), we find, see [79], that the evolution of such temperatures is expressed by quite lengthy partial differential equations.

Figure 1: Evolution of the excess density \(\delta\rho=\rho-\rho_{\infty}\) for an initial bump obtained as the ground-state density of (1) with interaction \(c=1\) and Gaussian potential \(V(x)=-\mu-A\exp(-x^{2}/\sigma^{2})\), released at \(t>0\); here \(\sigma=1\), and \(A\) and \(\mu\) are set such that the background density is \(\rho_{\infty}=1\) and the maximum is \(\rho_{0}=1.2\). We compare the result of zero-entropy GHD (solid thick lines) and diffusive GHD at different values of the inverse temperature \(\beta\) (see legend). The result of viscous CHD is shown for large enough values of \(\beta\), as it converges to a well-defined limit (dashed line). Times are \(t=0,3,6,12.5,18.5,25,32.5\) and increase from the leftmost to the rightmost peak in the figure. Inset: evolution of the inverse temperature and generalised temperatures starting from a thermal state at low temperature in the integrable gas, here for simplicity the Lieb-Liniger gas at large coupling (Tonks limit). The temperatures are obtained by fitting the quasiparticle occupations \(n(\theta;x,t)\) at a given position \(x_{0}\) during the front-dynamics expansion. We clearly see that the moment when the Fermi sea splits coincides with the moment of temperature inversion.
The main relevant fact is that, while \(\partial_{t}T\sim O(T)\), we instead find \(\partial_{t}\gamma_{n}\sim O(T^{0})\): namely, even when starting with a thermal gas at low temperature, the integrable gas tends to generate finite generalised temperatures during the time evolution. Such temperature dynamics lead to the splitting of the Fermi seas (occurring at a certain position \(x_{0}\) when \(\gamma_{4}(x_{0},t)\neq 0\) and \(\beta(x_{0},t)\to-\infty\)) and to a rather different shock-regularisation dynamics compared to the CHD one, even if both are regular hydrodynamics. Indeed, viscous CHD converges to a given profile, with a finite shock width, as the temperature is lowered, and it never develops a shock, even if naively the viscosity \(\mu\) tends to zero. The reason is the evolution of the temperature (12): given that the heating factor \(\mu/T\) is finite in the zero-temperature limit, even if the temperature is initially zero, the creation of a shock heats the system and therefore increases the viscosity, in this way self-regulating the shock. In Fig. 1, zero-temperature GHD agrees with diffusive GHD at finite temperature, signalling how the zero-temperature approximation is often able to capture out-of-equilibrium fluids. On the other hand, the observed deviations from viscous CHD are attributed to the absence of higher conservation laws, as discussed above. _The small coupling limit and KPZ physics.--_ As already discussed in the first Lieb-Liniger paper [75], the limit of small coupling of the LL gas does not simply recover free bosons. Indeed, when \(c\to 0^{+}\), its ground state becomes the one describing the so-called (semi-classical) condensate solution of the Nonlinear Schrödinger equation (NLS) [85]. This is characterised by a vanishing Fermi momentum \(q\sim\sqrt{c}\) but diverging dressed functions \(1^{\mathrm{dr}}\sim 1/\sqrt{c}\), in order to keep the density \(\rho\) finite in the limit. The relevant excitations then become the Bogoliubov excitations, with spectrum given by \(\varepsilon_{k}\sim|k|\) and therefore with a degenerate group velocity \(v_{k}\sim\mathrm{sgn}(k)\). As \(\partial_{\rho}v_{F}(\rho)\sim\lim_{q\to 0}\partial_{q}v_{q}\), the latter diverges, giving a divergent dynamic viscosity at low coupling, \[\mu(\rho,T)\sim\frac{T}{\sqrt{c}}, \tag{14}\] signalling the breakdown of viscous CHD and the emergence of Kardar-Parisi-Zhang physics [86]. The latter is well known to emerge in the stochastic Burgers equation: given a white \(\delta\)-correlated noise \(w\), a local perturbation of a hydrodynamic field \(\phi\) satisfying \(\partial_{t}\phi+\partial_{x}\left(v\phi+\kappa\phi^{2}+w\right)=0\) moves with finite velocity \(v\) and _spreads super-diffusively_, as opposed to the diffusive case of eq. (7) whenever \(\mu\) is finite. Such a phenomenon is known to appear in generic finite-component one-dimensional fluids, as described by the non-linear fluctuating hydrodynamics (NLFH) theory [87], which can be successfully applied also to the lattice (i.e. non-integrable) NLS [88]. Again, the argument is simple: whenever the Euler currents contain non-linearities, one should expect that the introduction of a small noise (which can, for example, describe the interaction with other non-hydrodynamic modes) always leads to the KPZ universal fixed point. However, this is not the case at finite coupling as, while it is true that eq.
(7) contains the non-linearity \(\eta^{2}\), there exists a continuous, thermally activated spectrum of modes around the Fermi points, with velocities \(v_{F}\pm\delta(\theta)\) (with \(\delta(\theta)\) a tiny function of \(\theta\)), that is responsible for a finite diffusion constant in the system. It is only in the small coupling limit that the velocities of all such excitations become degenerate, thereby reducing the number of effective hydrodynamic modes to only the three macroscopic ones. In this limit, therefore, the theory of NLFH applies and KPZ physics emerges, as signalled by the divergent dynamic viscosity. We should stress that the existence of KPZ physics in the NLS at low temperature was first established in [89], and we here give a first analytical proof of its divergent diffusion constant. Moreover, it is interesting to notice that the degeneracy of the hydrodynamic modes similarly leads to super-diffusive KPZ physics in the Heisenberg spin chain (and any other integrable model with non-abelian symmetry) [74; 90] at finite temperature. There, the relevant degenerate excitations are not the ones around the Fermi points but the so-called giant magnons [91; 92], namely magnonic excitations with large spin and vanishing velocities. _Conclusion.--_ We have here shown that the standard Navier-Stokes equations for the evolution of density, momentum and temperature can be derived naturally from the low-temperature expansion of the GHD of the LL gas. We have found a universal expression for the part of the dynamic viscosity linear in temperature, which we conjecture to apply to generic one-dimensional fluids. We have shown that the viscosity is zero in the free-fermion limit, as expected, and that it diverges in the semi-classical limit of weakly interacting bosons at small temperatures, which, despite many numerical works, had never been established from first principles. The divergent viscosity signals the emergence of KPZ super-diffusive spreading [89], in analogy with the one observed in integrable spin chains [93]. We have shown that the viscous terms regularise hydrodynamic instabilities in one-dimensional gases, although the inclusion of generalised temperatures is necessary in order to predict the full form of the shock front in the integrable gas. Moreover, one should also expect that, when the system is strongly out of equilibrium and gradients of \(\eta\) are strong, the system strongly heats locally, therefore invalidating the zero-temperature approximation. Our findings thus suggest that zero-entropy hydrodynamics becomes harder to physically realise whenever interactions are present. Differently from previous attempts to derive the viscosity of quantum fluids via perturbative corrections to Luttinger liquids, see for example [84, 6, 94, 81], we here derive it non-perturbatively using the generalised hydrodynamics of a specific model, the LL gas, and we extend our result to generic systems, given the universality of its formulation. Clearly, a different derivation only involving Luttinger liquid modes would also be desirable in the future. Our result is ready to be checked by means of numerical simulations [95, 96, 97] and to be applied to different quantum fluids, such as chiral edge modes [98, 99, 100], to open the way for a full fluctuating hydrodynamic theory [101] of Luttinger liquid field theories, and to unveil Burgers-like turbulent phases [102] in low-dimensional quantum fluids. _Acknowledgements.--_ We are thankful to B. Doyon, K. Kheruntsyan, I.
Bouchoule, and J. Dubail for insightful discussions. We are indebted to M. Panfil for sharing his notes on dressed kernel identities. We thank G. Del Vecchio Del Vecchio for the related collaboration. This work has been partially funded by the ERC Starting Grant 101042293 (HEPIQ) (J.D.N., S.S. and A.U.) and by the ERC Consolidator Grant 771536 (NEMO) (S.S.). S.S. is thankful to LPTM (Cergy-Paris) for the kind hospitality during the development of this work.
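As a purely illustrative aside (not part of the above analysis), the super-diffusive spreading expected from the stochastic Burgers equation can be explored numerically. The following minimal Python sketch integrates a noisy Burgers equation with an Euler-Maruyama step on a periodic grid; the explicit viscosity `nu`, the grid, and all parameter values are assumptions made purely for numerical stability and illustration.

```python
import numpy as np

# Minimal sketch (illustrative assumptions throughout): Euler-Maruyama step for
#   d(phi) = [nu*phi_xx - d/dx(v*phi + kappa*phi**2)] dt - d/dx dW
# on a periodic grid; nu > 0 is added purely for numerical stability.
rng = np.random.default_rng(0)
N, L = 512, 2 * np.pi
dx, dt = L / N, 1e-4
nu, v, kappa, sigma = 0.05, 1.0, 0.5, 0.1

x = np.arange(N) * dx
phi = 0.01 * np.exp(-((x - L / 2) ** 2) / 0.1)  # localised initial perturbation

def ddx(f):   # central difference, periodic boundary
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def lap(f):   # discrete Laplacian, periodic boundary
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

for _ in range(20000):
    w = sigma * rng.standard_normal(N) * np.sqrt(dt / dx)  # white-noise increment
    phi += dt * (nu * lap(phi) - ddx(v * phi + kappa * phi**2)) - ddx(w)

# In the KPZ class the perturbation's width grows like t^(2/3),
# as opposed to t^(1/2) for ordinary diffusive spreading.
print("final L2 norm of phi:", float(np.sqrt(np.sum(phi**2) * dx)))
```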
2309.06072
The $χ$-binding function of $d$-directional segment graphs
Given a positive integer $d$, the class $d$-DIR is defined as all those intersection graphs formed from a finite collection of line segments in ${\mathbb R}^2$ having at most $d$ slopes. Since each slope induces an interval graph, it easily follows for every $G$ in $d$-DIR with clique number at most $\omega$ that the chromatic number $\chi(G)$ of $G$ is at most $d\omega$. We show for every even value of $\omega$ how to construct a graph in $d$-DIR that meets this bound exactly. This partially confirms a conjecture of Bhattacharya, Dvo\v{r}\'ak and Noorizadeh. Furthermore, we show that the $\chi$-binding function of $d$-DIR is $\omega \mapsto d\omega$ for $\omega$ even and $\omega \mapsto d(\omega-1)+1$ for $\omega$ odd. This refutes said conjecture of Bhattacharya, Dvo\v{r}\'ak and Noorizadeh.
Lech Duraj, Ross J. Kang, Hoang La, Jonathan Narboni, Filip Pokrývka, Clément Rambaud, Amadeus Reinald
2023-09-12T09:09:42Z
http://arxiv.org/abs/2309.06072v1
# The \(\chi\)-binding function of \(d\)-directional segment graphs ###### Abstract Given a positive integer \(d\), the class \(d\)-DIR is defined as all those intersection graphs formed from a finite collection of line segments in \(\mathbb{R}^{2}\) having at most \(d\) slopes. Since each slope induces an interval graph, it easily follows for every \(G\) in \(d\)-DIR with clique number at most \(\omega\) that the chromatic number \(\chi(G)\) of \(G\) is at most \(d\omega\). We show for every even value of \(\omega\) how to construct a graph in \(d\)-DIR that meets this bound exactly. This partially confirms a conjecture of Bhattacharya, Dvorak and Noorizadeh. Furthermore, we show that the \(\chi\)-binding function of \(d\)-DIR is \(\omega\mapsto d\omega\) for \(\omega\) even and \(\omega\mapsto d(\omega-1)+1\) for \(\omega\) odd. This refutes said conjecture of Bhattacharya, Dvorak and Noorizadeh. ## 1 Introduction In structural graph theory, a fundamental task is to characterise the complex, global parameter of chromatic number \(\chi\) in terms of the simpler, more local parameter of clique number \(\omega\). For a given graph class, the question of whether this task is even well-defined belongs to the theory of \(\chi\)-boundedness, an area systematically initiated by Gyarfas in the 1980s [10]. A graph class \(\mathcal{G}\) is called _\(\chi\)-bounded_ if there is some function \(f_{\mathcal{G}}\) such that \(\chi(G)\leq f_{\mathcal{G}}(\omega(G))\) for any \(G\in\mathcal{G}\), and, if that is so, the optimal choice of \(f_{\mathcal{G}}\) is called the _\(\chi\)-binding_ function of \(\mathcal{G}\). Note that the triangle-free graphs of arbitrarily large chromatic number give rise to many interesting graph classes that are not \(\chi\)-bounded. This area of mathematics is deep and active, but despite many recent advances, there are many important classes for which the question of \(\chi\)-boundedness is difficult and remains open; for a nice recent account of the state of the art, see [11]. Suffice it to say that determination of the \(\chi\)-binding function for a nontrivial (non-perfect) class is generally considered a rarity. Since even before the beginning [1], much attention has been focused on intersection classes, that is, graph classes defined by taking some natural collection of sets, usually geometrically-defined, and forming for every finite subcollection of those sets an auxiliary graph in which each vertex corresponds to a set, and two vertices are connected by an edge if and only if the corresponding sets have a nontrivial intersection. Such a restriction is not too confining, not only because many intersection classes are fundamental to the structural understanding of graphs (consult, e.g. [11, 12] for more context), but also because intersection classes are more than rich enough for their \(\chi\)-boundedness theory to contain many fascinating challenges. As one prominent (and pertinent) example, consider the collection of straight line segments in the plane \(\mathbb{R}^{2}\) (or more formally, the collection of those closed intervals drawn between some pairs of points in \(\mathbb{R}^{2}\)). Its intersection class is called the _segment (intersection) graphs_. If all the segments happen to lie in parallel, then the resulting segment graph is an interval graph, and thus perfect; that is, it has equality between \(\chi\) and \(\omega\). 
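As a computational aside not taken from the paper: the perfection of interval graphs has a classical algorithmic counterpart, the left-endpoint greedy sweep, which colours an interval graph with exactly \(\omega\) colours. This is precisely the per-slope ingredient behind the simple bound \(\chi(G)\leq d\omega(G)\) for \(d\)-DIR used throughout below. A minimal Python sketch, with hypothetical interval data:

```python
# Greedy colouring of an interval graph by left-endpoint sweep.
# For interval graphs this uses exactly omega colours (perfection),
# which is the per-slope ingredient of the bound chi <= d * omega.
import heapq

def colour_intervals(intervals):
    """intervals: list of closed (left, right) pairs; returns list of colours."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    colours = [None] * len(intervals)
    free = []          # heap of released colours
    active = []        # heap of (right_endpoint, colour) currently in use
    next_colour = 0
    for i in order:
        l, r = intervals[i]
        while active and active[0][0] < l:   # release intervals that ended
            heapq.heappush(free, heapq.heappop(active)[1])
        if free:
            c = heapq.heappop(free)
        else:
            c = next_colour
            next_colour += 1
        colours[i] = c
        heapq.heappush(active, (r, c))
    return colours

# hypothetical example: three mutually overlapping intervals force 3 colours
print(colour_intervals([(0, 4), (1, 5), (2, 6), (5, 7)]))  # e.g. [0, 1, 2, 0]
```

Applying this sweep separately to each of the \(d\) parallel classes, with pairwise disjoint palettes, realises the \(d\omega\) bound.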
One should not expect this to remain true when we allow the segment slopes to vary, but it is reasonable to ask if there might remain a good relationship between \(\chi\) and \(\omega\), that is, are segment graphs \(\chi\)-bounded? Indeed, Erdos asked this in the 1970s (see [1, 2] for more details of the problem's provenance). Despite sustained attention and nice partial results (see e.g. [10, 11, 12]), Erdos's innocent-looking but very difficult challenge was not resolved until 2014, and in the negative. Work of Pawlik, Kozik, Krawczyk, Lason, Micek, Trotter, and Walczak [2] provided a strikingly elegant construction of triangle-free segment graphs of arbitrarily large chromatic number. Interestingly, the graphs produced by the construction are isomorphic to the intersection graphs that were exhibited by Burling [12] to show that the intersection class for axis-aligned boxes in \(\mathbb{R}^{3}\) is not \(\chi\)-bounded. One might consider this a coda, but permit us to prolong the narrative, in particular by parameterising according to the number of segment slopes. We first note that the construction of Pawlik _et al._ must have arbitrarily many slopes. Suppose to the contrary that the segments admit at most \(d\) slopes. By the same observation as above, each slope induces a perfect graph \(H\) having \(\chi(H)\) equal to \(\omega(H)\leq 2\). By colouring the slopes disjointly from one another, we can conclude that \(\chi\) is at most \(2d\) for the entire graph, a contradiction. More generally, given a positive integer \(d\), we refer to the _\(d\)-directional segment graphs_ as those intersection graphs formed from a collection of line segments in \(\mathbb{R}^{2}\) having at most \(d\) distinct slopes. Call the class of such graphs _\(d\)-DIR_. (Take note of the subtlety that this is not _per se_ an intersection class, but rather a union of many of them.) What we just argued is that \(\chi(G)\leq d\omega(G)\) for any \(G\) in \(d\)-DIR. Thus, in a simple fashion, we can conclude that the graph class \(d\)-DIR is \(\chi\)-bounded. The question we would like to address here is, what is the \(\chi\)-binding function of \(d\)-DIR? This question was raised recently by Bhattacharya, Dvorak and Noorizadeh [1]. They proved that there are \(2\)-directional segment graphs of clique number \(2t\) and chromatic number \(4t\), for all \(t\in\mathbb{N}\), which attains the bound of the above simple argument in this special case. They moreover conjectured that that simple bound is optimal in general, that is, that \(\omega\mapsto d\omega\) is the \(\chi\)-binding function of \(d\)-DIR. We succeeded in completely resolving this question as follows. **Theorem 1**.: _The \(\chi\)-binding function of \(d\)-DIR is_ \[\omega\mapsto\begin{cases}d\omega&\text{if $\omega$ is even, or}\\ d(\omega-1)+1&\text{if $\omega$ is odd.}\end{cases}\] One can interpret this statement as a graceful generalisation of perfection in the \(d=1\) case. The crux in Theorem 1 is to prove the lower bound in the even \(\omega\) case. Here is a rough sketch of that construction, which at a high level combines aspects of both the construction of Pawlik _et al._ and that of Bhattacharya _et al._, and indeed strengthens/refines both constructions. We start by fixing \(\omega=2\) and proceed by induction on \(d\), exhibiting graphs achieving high fractional chromatic number. 
We refine the induction used by Pawlik _et al._ by controlling, at every step, the growth of the number of slopes required for achieving a given fractional chromatic number. We then obtain the result for all even \(\omega>2\) by blowing up each segment by a factor \(\omega/2\), building on ideas of Bhattacharya _et al._ The structure of the paper is as follows. In Section 2, we lay the groundwork for our proof by introducing the notions and tools used throughout the paper. Then we settle the even \(\omega\) case in Section 3. The lower bound for the odd \(\omega\) case in Section 4 is a mild adaptation of the even \(\omega\) construction. We give a short proof of the upper bound for the odd \(\omega\) case in Section 5. We end with some open questions for future research in Section 6. ## 2 Preliminaries ### Fractional colouring In the proof, we will essentially work with the fractional chromatic number, a standard parameter lower bounding the chromatic number, which has many equivalent definitions. For all positive integers \(a\) and \(t\), a _\(t\)-fold \(a\)-colouring_ of a graph \(G\) is a function \(\varphi:V(G)\rightarrow\binom{\{1,\ldots,a\}}{t}\) such that for every edge \(uv\in E(G)\), \(\varphi(u)\cap\varphi(v)=\emptyset\). Recall that the _fractional chromatic number_ \(\chi_{f}(G)\) of a graph \(G\) is the limit of \(a/t\) as \(t\rightarrow\infty\) with \(a\) being the least integer such that \(G\) admits a \(t\)-fold \(a\)-colouring. It holds that \(\chi_{f}(G)\leq\chi(G)\) always. For any \(t\)-fold colouring \(\varphi\) of a graph \(G\), if \(X\) is a set of vertices of \(G\), we denote by \(\varphi(X)\) the set of colours used to colour the vertices in \(X\), i.e. \(\varphi(X)=\bigcup_{v\in X}\varphi(v)\). ### Segments A _segment_ \(S\) with endpoints \(A=(x_{1},y_{1})\) and \(B=(x_{2},y_{2})\) in \(\mathbb{R}^{2}\) is the set \(\{A+\lambda(B-A)\mid\lambda\in[0,1]\}\). The _length_ of a segment is the Euclidean distance between its endpoints. A _rectangle_ \(R\) is defined as the Cartesian product of two intervals \([a,c]\times[b,d]\) with \(a,b,c,d\in\mathbb{R}\) and \(c>a\) and \(d>b\). A rectangle always has non-zero area. If the two intervals have the same length, we say that \(R\) is a _square_. We naturally define the _left_, _right_, _top_ and _bottom sides_ of a rectangle \(R\) as the four segments whose union is the boundary of \(R\). The left and right sides of \(R\) are the vertical sides of \(R\), and the top and bottom sides of \(R\) are the horizontal sides of \(R\). The length of a vertical side is the _height_ of \(R\), the length of a horizontal side is the _width_ of \(R\). The _aspect ratio_ of \(R\) is the height divided by the width of \(R\). A segment \(S\) _crosses a rectangle \(R\) vertically or horizontally_ if \(S\) intersects the two horizontal or vertical sides of \(R\), respectively. We say that a rectangle \(R_{1}\) crosses a rectangle \(R_{2}\) vertically (respectively, horizontally) if the left and right sides (respectively, the top and bottom sides) of \(R_{1}\) cross the rectangle \(R_{2}\) vertically (respectively, horizontally). If \(R_{1}\) crosses \(R_{2}\) vertically, then \(R_{2}\) crosses \(R_{1}\) horizontally. The _slope_ of a segment in \(\mathbb{R}^{2}\) with endpoints \((x_{1},y_{1}),(x_{2},y_{2})\) is \(\frac{y_{2}-y_{1}}{x_{2}-x_{1}}\) if \(x_{1}\neq x_{2}\), and \(\infty\) otherwise. The _slope number_ of a finite family \(\mathcal{S}\) of segments is the cardinality of the set of the slopes of segments in \(\mathcal{S}\). 
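For concreteness (an illustrative sketch, not part of the paper), the geometric primitives just defined can be computed directly: the slope of a segment, closed-segment intersection via the standard orientation test, and hence the slope number of a finite family. The segment endpoints below are hypothetical.

```python
from math import inf

def slope(a, b):
    """Slope of the segment with endpoints a, b; inf for vertical segments."""
    (x1, y1), (x2, y2) = a, b
    return inf if x1 == x2 else (y2 - y1) / (x2 - x1)

def _orient(p, q, r):
    # twice the signed area of triangle pqr
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def _on(p, q, r):
    # assumes r collinear with pq: is r on the closed segment pq?
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0])
            and min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def intersect(s1, s2):
    """Do the closed segments s1 = (A, B) and s2 = (C, D) share a point?"""
    A, B = s1; C, D = s2
    d1, d2 = _orient(A, B, C), _orient(A, B, D)
    d3, d4 = _orient(C, D, A), _orient(C, D, B)
    if d1 * d2 < 0 and d3 * d4 < 0:          # proper crossing
        return True
    return ((d1 == 0 and _on(A, B, C)) or (d2 == 0 and _on(A, B, D)) or
            (d3 == 0 and _on(C, D, A)) or (d4 == 0 and _on(C, D, B)))

def slope_number(segments):
    return len({slope(*s) for s in segments})

# hypothetical family with two slopes and one crossing pair
segs = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((3, 3), (4, 4))]
print(slope_number(segs), intersect(segs[0], segs[1]))  # 2 True
```

The intersection graph \(G(\mathcal{S})\) defined next is then obtained by testing all pairs of segments.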
Given a set \(\mathcal{S}\) of segments, the _intersection graph_ \(G(\mathcal{S})\) of \(\mathcal{S}\) is the graph with vertex set \(\mathcal{S}\) and with edges consisting of all pairs \(SS^{\prime}\) of distinct \(S,S^{\prime}\in\mathcal{S}\) such that \(S\cap S^{\prime}\neq\emptyset\). A _\(t\)-blowup of a segment_ is a multiset of \(t\) copies of the same segment. A _\(t\)-blowup of a set of segments_ is the multiset union of the \(t\)-blowups of its segments. A _\(t\)-blowup of a graph_ \(G\) is the graph obtained from \(G\) by replacing every vertex by a copy of \(K_{t}\), and every edge by a copy of \(K_{t,t}\). Observe that a \(t\)-fold colouring of \(G\) corresponds to a colouring of the \(t\)-blowup of \(G\). ### Configurations We now introduce the notion of probes, first used in [10]. Let \(\mathcal{S}\) be a family of segments in _the unit square_ \(\mathcal{U}=[0,1]\times[0,1]\). Let \(P=[a,c]\times[b,d]\) be a rectangle included in \(\mathcal{U}\). We denote by \(\mathcal{S}(P)\) the set of segments in \(\mathcal{S}\) intersecting \(P\). The _right extension_ of \(P\) is the rectangle \([a,1]\times[b,d]\). The rectangle \(P=[a,c]\times[b,d]\) is a _probe_ for \(\mathcal{S}\) if the following hold: * \(c=1\), that is, the right side of \(P\) lies on the boundary of \(\mathcal{U}\); * all segments in \(\mathcal{S}(P)\) cross \(P\) vertically; and * all segments in \(\mathcal{S}(P)\) are pairwise disjoint. Given a probe \(P\), a _root_ of \(P\) is a rectangle \([a,c^{\prime}]\times[b,d]\) where \(c^{\prime}\) is a real number such that \([a,c^{\prime}]\times[b,d]\) is disjoint from every segment in \(\mathcal{S}\). A _configuration_ is a pair \(\mathcal{C}=(\mathcal{S},\mathcal{P})\) with \(\mathcal{S}\) a set of segments in \(\mathcal{U}\) and \(\mathcal{P}\) a family of probes for \(\mathcal{S}\). For convenience, we define the intersection graph \(G(\mathcal{C})\) of a configuration \(\mathcal{C}=(\mathcal{S},\mathcal{P})\) as \(G(\mathcal{S})\). We say that \(\mathcal{C}\) is triangle-free if \(G(\mathcal{S})\) is a triangle-free graph. Let \(R\) be a square inside \(\mathcal{U}\) and let \(\psi\) be the positive homothety mapping \(\mathcal{U}\) to \(R\). The \(R\)-scaled copy of \(\mathcal{C}\) is the configuration \(\mathcal{C}^{\prime}=(\mathcal{S}^{\prime},\mathcal{P}^{\prime})\) where \(\mathcal{S}^{\prime}\) is the set of images of the segments in \(\mathcal{S}\) by \(\psi\) and \(\mathcal{P}^{\prime}\) is the set of the right-extensions of the images by \(\psi\) of the probes in \(\mathcal{P}\). Observe that \(\mathcal{S}^{\prime}\) has the same set of slopes as \(\mathcal{S}\). 
Informally, each pillar will correspond to a copy of \(\mathcal{C}\). Formally, \(\mathcal{C}^{(k)}=(\mathcal{S}^{(k)},\mathcal{P}^{(k)})\) is defined as follows: * If \(k=1\), then \(\mathcal{C}^{(1)}=\mathcal{C}\). For every probe \(P\), we define its only pillar \(I_{0}^{P}\) as the minimal rectangle containing \(P\) minus a root. Every segment of \(\mathcal{S}(P)\) is a segment of \(\mathcal{S}(I_{0}^{P})\) and it also crosses \(I_{0}^{P}\) vertically. * If \(k\geq 2\), then for every probe \(P\in\mathcal{P}^{(k-1)}\) we choose a square \(R_{P}\) in a root of \(P\) disjoint from its pillars and insert an \(R_{P}\)-scaled copy \(\mathcal{C}_{P}=(\mathcal{S}_{P},\mathcal{P}_{P})\) of \(\mathcal{C}\). We define \(\mathcal{S}^{(k)}=\mathcal{S}^{(k-1)}\cup\bigcup_{P\in\mathcal{P}^{(k-1)}} \mathcal{S}_{P}\) and \(\mathcal{P}^{(k)}=\bigcup_{P\in\mathcal{P}^{(k-1)}}\mathcal{P}_{P}\). In other words, instead of \(P\), we take new probes, generated in \(R_{P}\) by the new copy of \(\mathcal{C}\), extended to the right. These new probes are then wholly contained in \(P\). For every such probe \(P^{\prime}\), we define \(k\) different pillars of \(P^{\prime}\). There are \(k-1\) pillars \(I_{0}^{P^{\prime}},I_{1}^{P^{\prime}},\ldots,I_{k-2}^{P^{\prime}}\) which are intersections between the pairwise disjoint pillars \(I_{0}^{P},I_{1}^{P},\ldots,I_{k-2}^{P}\) of \(P\) and \(P^{\prime}\) and one pillar \(I_{k-1}^{P^{\prime}}\) that is the minimal rectangle containing \(R_{P}\cap P^{\prime}\) minus a root of \(P^{\prime}\). Since \(R_{P}\) is contained in a root of \(P\), \(I_{k-1}^{P^{\prime}}\) is also disjoint from \(I_{0}^{P^{\prime}},\ldots,I_{k-2}^{P^{\prime}}\). Furthermore, for every \(i\in\{0,\ldots,k-2\}\), since segments in \(\mathcal{S}^{(k-1)}(I_{i}^{P})\) cross \(I_{i}^{P}\) vertically, they also cross \(I_{i}^{P^{\prime}}\) vertically. Therefore, \(\mathcal{S}^{(k-1)}(I_{i}^{P})\subseteq\mathcal{S}^{(k)}(I_{i}^{P^{\prime}})\). Note that \(G(\mathcal{C}^{(k)})\) is the union of disjoint copies of \(G(\mathcal{C})\), and in particular \(\mathcal{C}^{(k)}\) is triangle-free if \(\mathcal{C}\) is triangle-free. See Figure 1. **Lemma 2**.: _Let \(\mathcal{C}=(\mathcal{S},\mathcal{P})\) be a configuration and let \(t\) be a positive integer. If there exists a nonnegative integer \(s\) such that, for every \(t\)-fold colouring \(\varphi\) of \(G(\mathcal{C})\), there exists a probe \(P\in\mathcal{P}\) such that \(|\varphi(\mathcal{S}(P))|\geq s\), then, for every \(t\)-fold colouring \(\varphi\) of \(G(\mathcal{C}^{(k)})\), there exists a probe \(P^{\prime}\) such that, for every pillar \(I\) of \(P^{\prime}\), \(|\varphi(\mathcal{S}(I))|\geq s\)._ Proof.: The proof is an induction on \(k\). Suppose that \(k=1\). Observe that \(\mathcal{C}^{(1)}=\mathcal{C}\). Let \(\varphi\) be a \(t\)-fold colouring of \(G(\mathcal{C}^{(1)})\). There exists a probe \(P\) such that \(|\varphi(\mathcal{S}(P))|\geq s\) by assumption on \(\mathcal{C}\). Since every segment of \(\mathcal{S}(P)\) crosses the only pillar \(I\) of \(P\) vertically, \(|\varphi(\mathcal{S}(I))|\geq s\). Therefore, \(P^{\prime}=P\) verifies the conclusion of Lemma 2. Suppose that \(k\geq 2\). Let \(\varphi\) be a \(t\)-fold colouring of \(G(\mathcal{C}^{(k)})\). By the inductive hypothesis, there exists a probe \(P\) of \(\mathcal{C}^{(k-1)}\) such that for every \(i\in\{0,\ldots,k-2\}\), every pillar \(I_{i}^{P}\) of \(P\) verifies \(|\varphi|_{\mathcal{S}^{(k-1)}}(\mathcal{S}^{(k-1)}(I_{i}^{P}))|\geq s\). 
By the definition of \(\mathcal{C}^{(k)}\), there exists a square \(R_{P}\) in a root of \(P\) with an \(R_{P}\)-scaled copy \(\mathcal{C}_{P}=(\mathcal{S}_{P},\mathcal{P}_{P})\) of \(\mathcal{C}\). By assumption on \(\mathcal{C}\), there exists a probe \(P^{\prime}\in\mathcal{P}_{P}\subseteq\mathcal{P}^{(k)}\) such that \(|\varphi|_{\mathcal{S}_{P}}(\mathcal{S}_{P}(P^{\prime}))|\geq s\). Since every segment of \(\mathcal{S}_{P}(P^{\prime})\) crosses \(I_{k-1}^{P^{\prime}}\) vertically, \(|\varphi(\mathcal{S}^{(k)}(I_{k-1}^{P^{\prime}}))|\geq|\varphi|_{\mathcal{S}_{P}}(\mathcal{S}_{P}(I_{k-1}^{P^{\prime}}))|\geq|\varphi|_{\mathcal{S}_{P}}(\mathcal{S}_{P}(P^{\prime}))|\geq s\). For every \(i\in\{0,\ldots,k-2\}\), every segment of \(\mathcal{S}^{(k-1)}(I_{i}^{P})\subseteq\mathcal{S}^{(k)}(I_{i}^{P})\) crosses the pillar \(I_{i}^{P^{\prime}}\) vertically. Therefore, we also have \(|\varphi(\mathcal{S}^{(k)}(I_{i}^{P^{\prime}}))|\geq|\varphi|_{\mathcal{S}^{(k-1)}}(\mathcal{S}^{(k-1)}(I_{i}^{P}))|\geq s\). ## 3 Lower bound for even \(\omega\) Since the fractional chromatic number is a lower bound for the chromatic number, the following result gives a construction to certify a lower bound for the \(\chi\)-binding function for \(d\)-DIR in the case of even clique numbers, which together with the trivial upper bound establishes the even case in Theorem 1. **Theorem 3**.: _For all positive integers \(t\) and \(d\), there exists a triangle-free configuration \(\mathcal{C}_{t,d}=(\mathcal{S}_{t,d},\mathcal{P}_{t,d})\) with slope number at most \(d\) such that for every \(t\)-fold colouring \(\varphi\) of \(G(\mathcal{C}_{t,d})\), there exists a probe \(P\in\mathcal{P}_{t,d}\) such that \(|\varphi(\mathcal{S}_{t,d}(P))|\geq 2td\)._ Proof.: We proceed by induction on \(d\). For \(d=0\), the configuration \((\emptyset,\{\mathcal{U}\})\) satisfies the statement of the theorem. Suppose that \(d>0\) and that \(\mathcal{C}_{t,d-1}\) is already constructed. Let \(\mathcal{H}=\mathcal{C}_{t,d-1}^{(4t+1)}=(\mathcal{S}_{\mathcal{H}},\mathcal{P}_{\mathcal{H}})\). Let \(\Gamma\) be the set of the slopes of the segments in \(\mathcal{S}_{t,d-1}\). We define the slope \(\gamma\) as the minimum aspect ratio over all probes of \(\mathcal{P}_{\mathcal{H}}\), divided by \(4t\). We will construct a family of configurations \(\mathcal{H}_{i}=(\mathcal{S}_{i},\mathcal{P}_{i})\) for all \(i\geq 0\) such that 1. for every segment \(S\in\mathcal{S}_{i}\), the slope of \(S\) is in \(\Gamma\cup\{\gamma\}\), and 2. for every \(t\)-fold colouring \(\varphi\) of \(G(\mathcal{H}_{i})\), there is a probe \(P\in\mathcal{P}_{i}\) such that \(|\varphi(\mathcal{S}_{i}(P))|\geq 2t(d-1)+2t(1-\frac{1}{2^{i}})\). For \(i=\lfloor\log_{2}(2t)\rfloor+1\), Item 2 implies that \(|\varphi(\mathcal{S}_{i}(P))|\geq 2td\), and so it suffices to take \(\mathcal{C}_{t,d}=\mathcal{H}_{i}\). We build \(\mathcal{H}_{i}\) as follows. For \(i=0\), we take \(\mathcal{H}_{0}=\mathcal{C}_{t,d-1}\). Item 1 holds by definition of \(\mathcal{H}_{0}\) and Item 2 holds by the inductive hypothesis on \(\mathcal{C}_{t,d-1}\). Now, assume that \(i>0\). We will build \(\mathcal{S}_{i}\) and \(\mathcal{P}_{i}\) iteratively. Initialise \(\mathcal{S}_{i}\) to \(\mathcal{S}_{i-1}\) and \(\mathcal{P}_{i}\) to \(\emptyset\). For each probe \(P\in\mathcal{P}_{i-1}\), let \(R\) be a square in a root of \(P\), and add an \(R\)-scaled copy \(\mathcal{H}_{P}\) of \(\mathcal{H}\) in \(R\). 
For every probe \(Q\) in \(\mathcal{H}_{P}\), we divide \(Q\) vertically into \(4t\) rectangles of equal height with the same width as \(Q\). We call these rectangles the _layers_ of \(Q\). Let \((L_{j}(Q))_{j\in\{0,\ldots,4t-1\}}\) be the layers of \(Q\) starting from the top. First, we add the top layer \(L_{0}(Q)\) to \(\mathcal{P}_{i}\) as a single probe. Then, for every \(j\in\{1,\ldots,4t-1\}\), we add two segments in the interior of \(L_{j}(Q)\) to \(\mathcal{S}_{i}\), with slope \(\gamma\), that intersect at exactly one point: \(D_{1}^{j}\) crossing exactly the pillars \(I_{j+1}^{Q},\ldots,I_{4t}^{Q}\) horizontally, and \(D_{2}^{j}\) crossing \(I_{j}^{Q}\) horizontally and no other pillars. Since the slope of the diagonal of a layer is at least \(\gamma\), such segments are well-defined. See Figure 2. Finally, we add to \(\mathcal{P}_{i}\) two probes \(P_{D_{1}^{j}},P_{D_{2}^{j}}\), with \(P_{D_{1}^{j}}\) crossed vertically by exactly \(D_{1}^{j}\) and \(I_{0}^{Q},\ldots,I_{j}^{Q}\), and \(P_{D_{2}^{j}}\) crossed vertically by exactly \(D_{2}^{j}\) and \(I_{0}^{Q},\ldots,I_{j-1}^{Q}\). See Figure 3. It follows from the construction that \(\mathcal{P}_{i}\) is a set of probes for \(\mathcal{S}_{i}\), and that every segment in \(\mathcal{S}_{i}\) has its slope in \(\Gamma\cup\{\gamma\}\) so Item 1 holds. It remains to prove that for every \(t\)-fold colouring \(\varphi\), there is a probe \(P\in\mathcal{P}_{i}\) such that \(|\varphi(\mathcal{S}_{i}(P))|\geq 2t(d-1)+2t(1-\frac{1}{2^{i}})\). Let \(\varphi\) be a \(t\)-fold colouring of \(G(\mathcal{H}_{i})\). By the inductive hypothesis, there exists \(P\in\mathcal{P}_{i-1}\) such that \(|\varphi(\mathcal{S}_{i-1}(P))|\geq 2t(d-1)+2t(1-\frac{1}{2^{i-1}})\). By Lemma 2 applied to \(\mathcal{H}=\mathcal{C}_{t,d-1}^{(4t+1)}\), there exists a probe \(Q\) of \(\mathcal{H}_{P}\) such that every pillar \(I_{j}^{Q}\) verifies \(|\varphi(\mathcal{S}_{i}(I_{j}^{Q}))|\geq 2t(d-1)\). For every \(j\in\{0,\dots,4t-1\}\), let \(\Phi_{j}=\varphi(\mathcal{S}_{i}(I_{j}^{Q}))\). Recall that the top layer \(L_{0}(Q)\) was added as a probe, and it crosses all the \(I_{j}^{Q}\)s horizontally. If it contains at least \(2t(d-1)+2t(1-\frac{1}{2^{i}})\) colours, then this probe satisfies Item 2. So, from now on, we may assume that \(Q\) contains fewer than \(2t(d-1)+2t(1-\frac{1}{2^{i}})\) colours. Since \(|\Phi_{0}|\geq 2t(d-1)\), we have \(2t(d-1)+2t(1-\frac{1}{2^{i}})-|\Phi_{0}|<2t\). Therefore, at most \(2t-1\) indices \(j\in\{1,\dots,4t-1\}\) are such that the set \(\Phi_{j}\) contains a colour not in \(\bigcup_{j^{\prime}\leq j-1}\Phi_{j^{\prime}}\). Thus, for at least \(4t-1-2t+1=2t\) indices \(j\in\{1,\dots,4t-1\}\), \(\Phi_{j}\subseteq\bigcup_{j^{\prime}\leq j-1}\Phi_{j^{\prime}}\). Symmetrically, for at least \(2t\) indices \(j\in\{1,\dots,4t-1\}\), we have \(\Phi_{j}\subseteq\bigcup_{j^{\prime}\geq j+1}\Phi_{j^{\prime}}\). Thus, there is an index \(j_{0}\in\{1,\dots,4t-1\}\) such that \(\Phi_{j_{0}}\subseteq\bigcup_{j^{\prime}\geq j_{0}+1}\Phi_{j^{\prime}}\) and \(\Phi_{j_{0}}\subseteq\bigcup_{j^{\prime}\leq j_{0}-1}\Phi_{j^{\prime}}\). Let \(A=\Phi_{j_{0}}\), \(B=\bigcup_{j^{\prime}\leq j_{0}-1}\Phi_{j^{\prime}}\) and \(C=\bigcup_{j^{\prime}\geq j_{0}+1}\Phi_{j^{\prime}}\). We have \(A\subseteq B\) and \(A\subseteq C\). Consider the layer \(L_{j_{0}}(Q)\) and the corresponding segments \(D_{1}^{j_{0}},D_{2}^{j_{0}}\). Since \(D_{1}^{j_{0}}\) and \(D_{2}^{j_{0}}\) share a common point, \(\varphi(D_{1}^{j_{0}})\cap\varphi(D_{2}^{j_{0}})=\emptyset\). 
Since \(D_{1}^{j_{0}}\) crosses \(I_{j_{0}+1}^{Q},\dots,I_{4t}^{Q}\) horizontally, we have \(\varphi(D_{1}^{j_{0}})\cap C=\emptyset\). Since \(A\subseteq C\), we also have \(\varphi(D_{1}^{j_{0}})\cap A=\emptyset\). Moreover, \(D_{2}^{j_{0}}\) crosses \(I_{j_{0}}^{Q}\) horizontally, so \(\varphi(D_{2}^{j_{0}})\cap A=\emptyset\). As a result, \(\varphi(D_{1}^{j_{0}})\), \(\varphi(D_{2}^{j_{0}})\), and \(A\) are pairwise disjoint. Thus, \(|\varphi(D_{1}^{j_{0}})\cup\varphi(D_{2}^{j_{0}})\cup A|=|\varphi(D_{1}^{j_{0}})|+|\varphi(D_{2}^{j_{0}})|+|A|\geq t+t+2t(d-1)=2td\). Hence, letting \(Z_{i-1}=\varphi(\mathcal{S}_{i-1}(P))\), we have \(|(\varphi(D_{1}^{j_{0}})\cup\varphi(D_{2}^{j_{0}})\cup A)\setminus Z_{i-1}|\geq 2td-|Z_{i-1}|\). Therefore, there is some \(\alpha\in\{1,2\}\) such that \(|(\varphi(D_{\alpha}^{j_{0}})\cup A)\setminus Z_{i-1}|\geq\frac{2td-|Z_{i-1}|}{2}\). It follows that \[|\varphi(D_{\alpha}^{j_{0}})\cup A\cup Z_{i-1}|\geq\frac{2td+|Z_{i-1}|}{2}\geq 2t(d-1)+2t\left(1-\frac{1}{2^{i}}\right).\] If \(\alpha=1\), then \(\varphi(\mathcal{S}_{i}(P_{D_{1}^{j_{0}}}))\) contains at least \(|\varphi(D_{1}^{j_{0}})\cup B\cup Z_{i-1}|\geq|\varphi(D_{1}^{j_{0}})\cup A\cup Z_{i-1}|\geq 2t(d-1)+2t(1-\frac{1}{2^{i}})\) colours. The same lower bound holds for \(\varphi(\mathcal{S}_{i}(P_{D_{2}^{j_{0}}}))\) by a similar argument when \(\alpha=2\). Finally, Item 2 always holds for \(\mathcal{H}_{i}\). This concludes the proof of the theorem. **Corollary 4**.: _For all positive integers \(t\) and \(d\), there exists a multiset of segments \(\mathcal{S}_{t,d}^{\prime}\subseteq\mathcal{U}\) with slope number at most \(d\) such that \(\omega(G(\mathcal{S}_{t,d}^{\prime}))=2t\) and \(\chi(G(\mathcal{S}_{t,d}^{\prime}))\geq 2td\)._ Proof.: Let \(\mathcal{C}_{t,d}=(\mathcal{S}_{t,d},\mathcal{P}_{t,d})\) be a configuration given by Theorem 3. Let \(\mathcal{S}_{t,d}^{\prime}\) be a \(t\)-blowup of \(\mathcal{S}_{t,d}\) and for every \(v\in\mathcal{S}_{t,d}\), let \(\{v_{1},\cdots,v_{t}\}\) be the segments of \(\mathcal{S}_{t,d}^{\prime}\) corresponding to the \(t\)-blowup of \(v\). Since the graph \(G(\mathcal{S}_{t,d})\) is triangle-free, we have \(\omega(G(\mathcal{S}_{t,d}^{\prime}))=2t\). Let \(\varphi\) be a colouring of \(G(\mathcal{S}_{t,d}^{\prime})\). The colouring \(\varphi\) corresponds to a \(t\)-fold colouring \(\varphi_{t}\) of \(G(\mathcal{S}_{t,d})\) such that for all \(v\in V(G(\mathcal{S}_{t,d}))\), \(\varphi_{t}(v)=\bigcup_{i\in\{1,\cdots,t\}}\varphi(v_{i})\). By Theorem 3, there exists a probe \(P\in\mathcal{P}_{t,d}\) such that \(|\varphi_{t}(\mathcal{S}_{t,d}(P))|\geq 2td\). Hence, by definition of \(\varphi_{t}\), \(\varphi\) uses at least \(2td\) colours as desired. ## 4 Lower bound for odd \(\omega\) The construction for even clique numbers (Corollary 4) is obtained from a \(t\)-blowup of the triangle-free configuration of Theorem 3, which results in a graph with clique number \(2t\). To certify the lower bound for the \(\chi\)-binding function in the case of odd clique numbers, we rely on the same construction to get a \((d-1)\)-DIR graph and modify the last set of segments to obtain a \(d\)-DIR graph with odd clique number. 
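Before turning to the odd case, it may help to see the blowup operation of Corollary 4 in purely graph-theoretic form. The following minimal Python sketch (illustrative, not from the paper) builds the \(t\)-blowup on edge lists and checks, on a single edge, that the clique number doubles from \(2\) to \(2t\):

```python
from itertools import combinations

def blowup(n, edges, t):
    """t-blowup of a graph on vertices 0..n-1: vertex v becomes copies
    (v, 0..t-1) forming K_t; every edge uv becomes a complete K_{t,t}."""
    verts = [(v, i) for v in range(n) for i in range(t)]
    new_edges = set()
    for v in range(n):                       # K_t inside each blown-up vertex
        for i, j in combinations(range(t), 2):
            new_edges.add(((v, i), (v, j)))
    for u, v in edges:                       # K_{t,t} across each original edge
        for i in range(t):
            for j in range(t):
                new_edges.add(((u, i), (v, j)))
    return verts, new_edges

# illustrative check: the t-blowup of a single edge (a triangle-free graph)
# is the complete graph K_{2t}, so its clique number is exactly 2t
t = 3
verts, E = blowup(2, [(0, 1)], t)
assert all((a, b) in E or (b, a) in E for a, b in combinations(verts, 2))
print(len(verts), "vertices,", len(E), "edges")  # 6 vertices, 15 edges
```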
**Theorem 5**.: _For every nonnegative integer \(t\) and every positive integer \(d\), there exists a multiset of segments \(\mathcal{S}_{t,d}^{\prime\prime}\) with slope number at most \(d\) such that \(\omega(G(\mathcal{S}_{t,d}^{\prime\prime}))=2t+1\) and \(\chi(G(\mathcal{S}_{t,d}^{\prime\prime}))\geq 2td+1\)._ Proof.: If \(t=0\), then a single segment satisfies the conclusion of the theorem. Now, assume that \(t>0\). By Theorem 3, there exists a triangle-free configuration \(\mathcal{C}_{t,d-1}=(\mathcal{S}_{t,d-1},\mathcal{P}_{t,d-1})\) with slope number at most \(d-1\) such that for every \(t\)-fold colouring \(\varphi\) of \(G(\mathcal{C}_{t,d-1})\), there exists a probe \(P\in\mathcal{P}_{t,d-1}\) such that \(|\varphi(\mathcal{S}_{t,d-1}(P))|\geq 2t(d-1)\). Let \(\mathcal{H}=\mathcal{C}_{t,d-1}^{(2t+3)}=(\mathcal{S}_{\mathcal{H}},\mathcal{P}_{\mathcal{H}})\). Let \(\Gamma\) be the set of the slopes of the segments in \(\mathcal{S}_{t,d-1}\). We define the slope \(\gamma\) as the minimum aspect ratio over all probes of \(\mathcal{P}_{\mathcal{H}}\), divided by \(2t+1\). Let \(\mathcal{S}_{\mathcal{H},t}\) be the \(t\)-blowup of \(\mathcal{S}_{\mathcal{H}}\). We construct the multiset \(\mathcal{S}_{t,d}^{\prime\prime}\) of segments as follows. Initialise \(\mathcal{S}_{t,d}^{\prime\prime}\) with \(\mathcal{S}_{\mathcal{H},t}\). For every probe \(Q\) in \(\mathcal{H}\), we divide \(Q\) vertically into \(2t+1\) layers similarly to the construction in the proof of Theorem 3. For \(j\in\{1,\ldots,2t+1\}\), let \(L_{j}(Q)\) be the \(j^{\text{th}}\) layer of \(Q\) starting from the top. For every \(j\in\{1,\ldots,2t+1\}\), add \(t+1\) coinciding segments \((D_{1,\ell}^{j})_{\ell\in\{1,\ldots,t+1\}}\) with slope \(\gamma\) in the interior of \(L_{j}(Q)\) crossing exactly the pillars \(I_{j+1}^{Q},\ldots,I_{2t+2}^{Q}\) horizontally, and \(t\) coinciding segments \((D_{2,\ell}^{j})_{\ell\in\{1,\ldots,t\}}\) with slope \(\gamma\) in the interior of \(L_{j}(Q)\) crossing the pillar \(I_{j}^{Q}\) horizontally and no other pillars, such that \(D_{1,\ell}^{j}\) and \(D_{2,\ell^{\prime}}^{j}\) intersect in exactly one point, which is the same for every pair \(\ell\in\{1,\ldots,t+1\}\) and \(\ell^{\prime}\in\{1,\ldots,t\}\). Observe that \(G(\mathcal{S}_{t,d}^{\prime\prime})\) has clique number \(2t+1\). Now, we prove that in every colouring of \(G(\mathcal{S}_{t,d}^{\prime\prime})\), at least \(2td+1\) colours are used. Let \(\varphi\) be a colouring of \(G(\mathcal{S}_{t,d}^{\prime\prime})\). If \(|\varphi(\mathcal{S}_{t,d}^{\prime\prime})|\geq 2td+1\), then the theorem holds. Otherwise, let \(\varphi_{t}\) be the \(t\)-fold colouring of \(G(\mathcal{S}_{\mathcal{H}})\) corresponding to \(\varphi|_{\mathcal{S}_{\mathcal{H},t}}\). By Lemma 2, there exists a probe \(Q\in\mathcal{P}_{\mathcal{H}}\) such that for every pillar \(I\) of \(Q\), \(|\varphi_{t}(\mathcal{S}_{\mathcal{H}}(I))|\geq 2t(d-1)\). Since the total number of colours used by \(\varphi\) is at most \(2td\), we deduce that \(\varphi(\mathcal{S}_{t,d}^{\prime\prime}(I_{j}^{Q}))\) contains a colour not in \(\bigcup_{j^{\prime}\geq j+1}\varphi(\mathcal{S}_{t,d}^{\prime\prime}(I_{j^{\prime}}^{Q}))\) for at most \(2t\) values of \(j\in\{1,\ldots,2t+1\}\). Thus there is an index \(j_{0}\in\{1,\ldots,2t+1\}\) such that \(\varphi(\mathcal{S}_{t,d}^{\prime\prime}(I_{j_{0}}^{Q}))\subseteq\bigcup_{j^{\prime}\geq j_{0}+1}\varphi(\mathcal{S}_{t,d}^{\prime\prime}(I_{j^{\prime}}^{Q}))\). 
Let \(A=\varphi(\mathcal{S}_{t,d}^{\prime\prime}(I_{j_{0}}^{Q}))\) and let \(B=\bigcup_{j^{\prime}\geq j_{0}+1}\varphi(\mathcal{S}_{t,d}^{\prime\prime}(I_{j^{\prime}}^{Q}))\). Let \(\Phi_{1}=\{\varphi(D_{1,\ell}^{j_{0}})\mid\ell\in\{1,\ldots,t+1\}\}\) and let \(\Phi_{2}=\{\varphi(D_{2,\ell}^{j_{0}})\mid\ell\in\{1,\ldots,t\}\}\). Similarly to the reasoning in the proof of Theorem 3, \(\Phi_{1}\) and \(\Phi_{2}\) are disjoint due to the common point shared by the segments in \((D_{1,\ell}^{j_{0}})_{\ell}\) and the ones in \((D_{2,\ell^{\prime}}^{j_{0}})_{\ell^{\prime}}\); \(\Phi_{1}\) and \(B\) are disjoint because the segments of \((D_{1,\ell}^{j_{0}})_{\ell}\) cross \(I_{j^{\prime}}^{Q}\) horizontally for \(j^{\prime}\geq j_{0}+1\) (and since \(A\subseteq B\), we also have that \(\Phi_{1}\) and \(A\) are disjoint); and \(\Phi_{2}\) and \(A\) are disjoint since the segments of \((D_{2,\ell^{\prime}}^{j_{0}})_{\ell^{\prime}}\) cross \(I_{j_{0}}^{Q}\) horizontally. Therefore, \(\Phi_{1}\), \(\Phi_{2}\), and \(A\) are pairwise disjoint so \(|\Phi_{1}\cup\Phi_{2}\cup A|\geq(t+1)+t+2t(d-1)=2td+1\). Finally, \(\varphi\) has to use at least \(2td+1\) colours and this concludes the proof of the theorem. ## 5 Upper bound for odd \(\omega\) In this section, we upper bound the \(\chi\)-binding function for \(d\)-DIR in the case of odd clique numbers, which combined with the result of the previous section settles the odd case in Theorem 1. Let \(G\) be an interval graph and let \(\mathcal{I}=(I_{v}\mid v\in V(G))\) be an interval representation of \(G\). Let \(w\) be an integer with \(w\geq\omega(G)\). For any \(x\in\mathbb{R}\), we define the set \(S_{x}=\{u\in V(G)\mid x\in I_{u}\}\) of vertices corresponding to intervals containing \(x\). A vertex \(v\) of \(G\) is said to be \(w\)_-special_ if there exists \(x\in I_{v}\) such that \(|S_{x}|\leq\frac{w-1}{2}\). Observe that, for such \(x\), the set \(S_{x}\) is a clique consisting of \(w\)-special vertices only. **Lemma 6**.: _Let \(G\) be an interval graph with clique number at most \(w\). There is a colouring \(\varphi:V(G)\rightarrow\{0,\ldots,w-1\}\) of \(G\) such that \(\varphi(v)\neq 0\) for every \(w\)-special vertex \(v\)._ Proof.: Suppose for a contradiction that there exists a counterexample \(G\), and moreover choose one with \(|V(G)|\) minimal. Let \(\mathcal{I}=(I_{v}\mid v\in V(G))\) be an interval representation of \(G\). We deal with two cases according to the existence of some \(w\)-special vertices "splitting" \(G\). Suppose there exists a small clique of \(w\)-special vertices that "separates \(G\) into a non-empty left side and non-empty right side". Formally, suppose that there exists a point \(x\in\mathbb{R}\) such that \(\{I\in\mathcal{I}\mid I\subseteq(-\infty,x)\}\) and \(\{I\in\mathcal{I}\mid I\subseteq(x,+\infty)\}\) are non-empty, and the set \(S_{x}=\{u\in V(G)\mid x\in I_{u}\}\) has size at most \(\frac{w-1}{2}\). Then vertices in \(S_{x}\) are \(w\)-special by definition. Let \(G_{\ell}\) and \(G_{r}\) be the subgraphs of \(G\) induced by intervals intersecting \((-\infty,x]\) and \([x,+\infty)\), respectively. By minimality of \(|V(G)|\), there are \(w\)-colourings \(\varphi_{\ell}\) and \(\varphi_{r}\) of \(G_{\ell}\) and \(G_{r}\), respectively, such that \(w\)-special vertices are not coloured \(0\). We show how to combine these colourings into one for \(G\). Note first that \(w\)-special vertices in \(G\), when applicable, are also \(w\)-special in \(G_{\ell}\) and \(G_{r}\). 
In particular, \(S_{x}\) consists only of \(w\)-special vertices, which are coloured \(0\) by neither \(\varphi_{\ell}\) nor \(\varphi_{r}\). By definition \(S_{x}=V(G_{\ell})\cap V(G_{r})\), so up to permuting the colours in \(\{1,2,\ldots,w-1\}\) for one of the colourings, we can assume that \(\varphi_{\ell}(u)=\varphi_{r}(u)\) for every \(u\in V(G_{\ell})\cap V(G_{r})\). Then, the colouring \(\varphi\) defined by \(\varphi(u)=\varphi_{\ell}(u)\) for every \(u\in V(G_{\ell})\) and \(\varphi(u)=\varphi_{r}(u)\) for every \(u\in V(G_{r})\) is a \(w\)-colouring of \(G\) such that no \(w\)-special vertices are coloured \(0\). If there exist no such separating sets, then every \(w\)-special vertex \(u\) is "leftmost" or "rightmost" in \(\mathcal{I}\). Formally, \(u\) is such that \(\min(I_{u})\leq\min_{v\in V(G)}\max(I_{v})\) or \(\max(I_{u})\geq\max_{v\in V(G)}\min(I_{v})\). This implies that there are at most \(2\cdot\frac{w-1}{2}=w-1\) \(w\)-special vertices. Since \(G\) is an interval graph and \(w\geq\omega(G)\), there exists a \(w\)-colouring of \(G\). Then, at least one colour is not used by the \(w\)-special vertices, letting us permute it with colour \(0\) to obtain a colouring where no \(w\)-special vertices are coloured \(0\). **Theorem 7**.: _For any graph \(G\) in \(d\)-DIR, if \(\omega(G)\) is odd, then \(\chi(G)\leq d(\omega(G)-1)+1\)._ Proof.: Let \((I_{u}\mid u\in V(G))\) be a \(d\)-DIR representation of \(G\). Let \(V_{1},\ldots,V_{d}\) be the partition of \(V(G)\) according to the slopes of the segments in its \(d\)-DIR representation. For every \(i\in\{1,2,\ldots,d\}\), \(G[V_{i}]\) is an interval graph and \(\omega(G)\geq\omega(G[V_{i}])\). So by Lemma 6, there is a colouring \(\varphi_{i}\) of \(G[V_{i}]\) using the colours \(0,1,2,\ldots,\omega(G)-1\) and no \(\omega(G)\)-special vertices of \(G[V_{i}]\) are coloured \(0\). For every \(i\in\{1,\ldots,d\}\) and for every \(v\in V_{i}\), let \(\varphi(v)=(i,\varphi_{i}(v))\) if \(\varphi_{i}(v)\neq 0\), and \(\varphi(v)=0\) otherwise. We claim that \(\varphi\) is a \((d(\omega(G)-1)+1)\)-colouring of \(G\). Indeed, if \(uv\) is an edge in \(G\) such that \(\varphi(u)=\varphi(v)\), then \(\varphi(u)=\varphi(v)=0\). In other words, \(u\) and \(v\) are both non-special vertices and are in distinct \(V_{i}\)s. Let \(i_{u},i_{v}\in\{1,\ldots,d\}\) such that \(u\in V_{i_{u}}\) and \(v\in V_{i_{v}}\). Since \(\omega(G)\) is odd, the intersection point \(x\) of \(I_{u}\) and \(I_{v}\) is included in at least \(\frac{\omega(G)+1}{2}\) intervals corresponding to vertices in \(V_{i_{u}}\) and at least \(\frac{\omega(G)+1}{2}\) intervals corresponding to vertices in \(V_{i_{v}}\). This gives a clique in \(G\) of order \(\omega(G)+1\), a contradiction. ## 6 Concluding remarks Recall that the _Hall ratio_ \(\rho(G)\) of a graph \(G\) is defined by \(\rho(G)=\max_{H\leq G}\frac{|V(H)|}{\alpha(H)}\), where \(\alpha\) denotes the independence number. Note that \(\omega(G)\leq\rho(G)\leq\chi(G)\) for all \(G\), and it was remarked in [1] how \(\rho\) is bounded while \(\chi\) is unbounded for a certain sequence of Kneser graphs. Can we find constructions to certify the same lower bounds as in Theorem 1 but for \(\rho\) instead of \(\chi\)? In particular, for every \(d\) and even \(\omega\), is there a \(d\)-directional segment graph of clique number \(\omega\) with \(\rho\geq d\omega\)? Or, for a potentially easier version, consider the Hall ratio analogue of Erdos's question: are there triangle-free segment graphs of arbitrarily large \(\rho\)? 
We remark that Walczak [20] settled a similar, weaker version of this last question. A possible generalisation of our problem is to consider rectangles instead of segments. In a similar manner, we can introduce the notion of the \(d\)-directional rectangle graphs: consider a set of \(d\) pairwise non-parallel lines and any set of rectangles such that each of them has one of its sides parallel to one of the given lines. We want to find the \(\chi\)-binding function of such graphs. This problem has been quite extensively studied in the case of axis-parallel rectangles (i.e. \(d=1\)) and thanks to Chalermsook and Walczak [14] it is known that in this case \(O(\omega\log\omega)\) colours always suffice; however, the lower bound remains \(\Omega(\omega)\). This implies that for any \(d\), no more than \(O(d\omega\log\omega)\) colours are needed, and our construction from this paper can be easily adapted (by replacing the segments with "thin enough" rectangles) to show a lower bound of \(\omega\cdot(d+o(1))\). The gap between the linear and log-linear functions remains open, though, both for \(d=1\) and in the general case. Acknowledgement. This work was initiated during the _Sparse (Graphs) Coalition_ session, \(\chi\)_-boundedness_, organised by James Davies, Bartosz Walczak, and the second author. Our team thanks the organisers and participants for the conducive working atmosphere. Open access statement. For the purpose of open access, a CC BY public copyright license is applied to any Author Accepted Manuscript (AAM) arising from this submission.
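As a small computational complement to the open questions in the concluding remarks (illustrative only, not from the paper), the Hall ratio of a small graph can be evaluated by brute force directly from its definition, since every induced subgraph is determined by its vertex set:

```python
from itertools import combinations

def alpha(vs, adj):
    """Independence number of the subgraph induced by vs (brute force)."""
    vs = list(vs)
    for k in range(len(vs), 0, -1):
        for cand in combinations(vs, k):
            if all(v not in adj[u] for u, v in combinations(cand, 2)):
                return k
    return 0  # only reached when vs is empty

def hall_ratio(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    best = 0.0
    for k in range(1, n + 1):                # all non-empty vertex subsets
        for s in combinations(range(n), k):
            best = max(best, len(s) / alpha(s, adj))
    return best

# illustrative check: the 5-cycle has independence number 2, so rho(C5) = 5/2
print(hall_ratio(5, [(i, (i + 1) % 5) for i in range(5)]))  # 2.5
```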
2309.09341
Kernel Function, $q$-Integral Transformation and $q$-Heun Equations
We find kernel functions of the $q$-Heun equation and its variants. We apply them to obtain $q$-integral transformations of solutions to the $q$-Heun equation and its variants. We discuss special solutions of the $q$-Heun equation from the perspective of the $q$-integral transformation.
Kouichi Takemura
2023-09-17T18:11:07Z
http://arxiv.org/abs/2309.09341v3
# Kernel function, \(q\)-integral transformation and \(q\)-Heun equations ###### Abstract. We find kernel functions of the \(q\)-Heun equation and its variants. We apply them to obtain \(q\)-integral transformations of solutions to the \(q\)-Heun equation and its variants. We discuss special solutions of the \(q\)-Heun equation from the perspective of the \(q\)-integral transformation. Key words and phrases: Kernel function, Jackson integral, Heun equation, \(q\)-Heun equation, Ruijsenaars system. 2020 Mathematics Subject Classification: 39A13, 44A20 ## 1. Introduction Heun's differential equation is given by \[\frac{d^{2}y}{dz^{2}}+\left(\frac{\gamma}{z}+\frac{\delta}{z-1}+\frac{\epsilon}{z-t}\right)\frac{dy}{dz}+\frac{\alpha\beta z-B}{z(z-1)(z-t)}y=0, \tag{1.1}\] with the condition \(\gamma+\delta+\epsilon=\alpha+\beta+1\). It appears in various studies in mathematical physics; among these, its relationship with Kerr black holes is remarkable. The parameter \(B\) in Eq. (1.1) is called the accessory parameter, and it plays a special role in the analysis of the Heun equation. It is known that Heun's differential equation admits integral transformations. Kazakov and Slavyanov established Euler integral transformations of Heun's differential equation in [8]. Set \[\gamma^{\prime}=\gamma+1-\alpha,\delta^{\prime}=\delta+1-\alpha,\ \epsilon^{\prime}=\epsilon+1-\alpha,\ \alpha^{\prime}=2-\alpha,\ \beta^{\prime}=-\alpha+\beta+1,\] \[B^{\prime}=B+(1-\alpha)(\epsilon+\delta t+(\gamma-\alpha)(t+1)) \tag{1.2}\] and assume that the function \(v(w)\) is a solution to the Heun equation with the parameters \(\gamma^{\prime},\delta^{\prime},\epsilon^{\prime},\alpha^{\prime},\beta^{\prime},B^{\prime}\), i.e. \[\frac{d^{2}v}{dw^{2}}+\left(\frac{\gamma^{\prime}}{w}+\frac{\delta^{\prime}}{w-1}+\frac{\epsilon^{\prime}}{w-t}\right)\frac{dv}{dw}+\frac{\alpha^{\prime}\beta^{\prime}w-B^{\prime}}{w(w-1)(w-t)}v=0, \tag{1.3}\] then it was established in [8] that the function \[y(z)=\int_{C}v(w)(z-w)^{-\alpha}dw \tag{1.4}\] is a solution to Eq. (1.1) for a suitable cycle \(C\). Examples of the cycle were described explicitly in [8, 7]. Note that the integral transformation was also obtained in [19, 20] by considering the middle convolution for some special system of Fuchsian equations. As an application of the integral transformation of the Heun equation, a correspondence of special solutions of the Heun equation was obtained in [22]. Namely, under the integral transformation, the polynomial-type solutions correspond to solutions written as a finite sum of hypergeometric functions. Some integral transformations are related to kernel functions. Let \(\mathbf{x}=(x_{1},\ldots,x_{m})\) and \(\mathbf{y}=(y_{1},\ldots,y_{n})\) be the variables and \((\mathcal{A}_{\mathbf{x}},\mathcal{B}_{\mathbf{y}})\) be a pair of operators which act on meromorphic functions in \(\mathbf{x}\) and \(\mathbf{y}\) respectively. In [10], \(\Phi(\mathbf{x};\mathbf{y})\) is called a kernel function for the pair \((\mathcal{A}_{\mathbf{x}},\mathcal{B}_{\mathbf{y}})\) if it satisfies a functional equation of the form \[\mathcal{A}_{\mathbf{x}}\Phi(\mathbf{x};\mathbf{y})=\mathcal{B}_{\mathbf{y}}\Phi(\mathbf{x};\mathbf{y}). \tag{1.5}\] Langmann studied the kernel functions systematically in the analysis of eigenfunctions for quantum integrable systems such as the Calogero-Moser-Sutherland system and the Inozemtsev system [11, 12]. 
Note that we can regard the eigenvalue problem of the Inozemtsev system as a multi-variable generalization of Heun's differential equation (see [21] for a review). By using the kernel function identity for the Inozemtsev system, we can obtain integral transformations of eigenfunctions of the Inozemtsev system and related systems [12]. The Ruijsenaars-van Diejen system (see [13, 25]) is a relativistic (or discrete) quantum integrable system of the Inozemtsev type, and its kernel functions were studied by Ruijsenaars [14, 15] and Komori, Noumi and Shiraishi [10]. Atai and Noumi [2] applied the kernel functions of the Ruijsenaars-van Diejen system to the integral transformations of the eigenfunctions. In this paper, we obtain kernel function identities for \(q\)-deformations of Heun's differential equation and apply them to integral transformations. Here, the \(q\)-Heun equation was introduced by Hahn [6] in the form \[\{a_{2}x^{2}+a_{1}x+a_{0}\}g(x/q)-\{b_{2}x^{2}+b_{1}x+b_{0}\}g(x)+\{c_{2}x^{2}+c_{1}x+c_{0}\}g(xq)=0, \tag{1.6}\] with the condition \(a_{2}a_{0}c_{2}c_{0}\neq 0\). It was rediscovered in [23] by degenerating the Ruijsenaars-van Diejen system four times. The \(q\)-Heun equation was obtained in [23] as the eigenvalue equation of the fourth degeneration of the Ruijsenaars-van Diejen operator of one variable \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)\] \[=x^{-1}(x-q^{h_{1}+1/2}t_{1})(x-q^{h_{2}+1/2}t_{2})T_{x}^{-1}+q^{\alpha_{1}+\alpha_{2}}x^{-1}(x-q^{l_{1}-1/2}t_{1})(x-q^{l_{2}-1/2}t_{2})T_{x}\] \[-\{(q^{\alpha_{1}}+q^{\alpha_{2}})x+q^{(h_{1}+h_{2}+l_{1}+l_{2}+\alpha_{1}+\alpha_{2})/2}(q^{\beta/2}+q^{-\beta/2})t_{1}t_{2}x^{-1}\}, \tag{1.7}\] where \(T_{x}^{\pm 1}g(x)=g(q^{\pm 1}x)\). Namely, Eq. (1.6) admits an expression as \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)g(x)=Eg(x), \tag{1.8}\] where \(E\) is an arbitrary complex number. The eigenvalue \(E\) corresponds to the parameter \(b_{1}\) in Eq. (1.6), and it plays the role of the accessory parameter. As \(q\to 1\), we essentially obtain the differential equation \[\frac{d^{2}y}{dz^{2}}+\left(\frac{\gamma}{z}+\frac{\delta}{z-t_{1}}+\frac{\epsilon}{z-t_{2}}\right)\frac{dy}{dz}+\frac{\alpha\beta z-B}{z(z-t_{1})(z-t_{2})}y=0, \tag{1.9}\] which is equivalent to Heun's differential equation. Other \(q\)-deformations of Heun's differential equation were introduced in [24] by focusing on the third degenerate Ruijsenaars-van Diejen operator \(A^{(3)}(x;h_{1},h_{2},h_{3},l_{1},l_{2},l_{3},\alpha,\beta)\) (see Eq. (2.14)) and the second degenerate one \(A^{(2)}(x;h_{1},h_{2},h_{3},h_{4},l_{1},l_{2},l_{3},l_{4},\alpha)\) (see Eq. (2.15)), and they were called the variants of the \(q\)-Heun equation. In this paper, we obtain kernel function identities for the \(q\)-Heun equation and its variants. Set \[P^{(1)}_{\mu,\mu_{0}}(x,s)=\frac{(q^{\mu}s/x;q)_{\infty}}{(q^{\mu_{0}}s/x;q)_{\infty}},\quad(a;q)_{\infty}=\prod_{j=0}^{\infty}(1-q^{j}a). 
\tag{1.10}\] We obtain a kernel function identity for the \(q\)-Heun equation as follows. **Theorem 1.1**.: _If the parameters satisfy_ \[\chi=(\tilde{h}_{1}+\tilde{h}_{2}-\tilde{l}_{1}-\tilde{l}_{2}+\tilde{\alpha}_{1}-\tilde{\alpha}_{2}-\tilde{\beta})/2,\;\nu=\mu_{0}+\alpha_{1}-\tilde{\alpha}_{2},\;\mu=\mu_{0}+\chi+1,\] \[\beta=-\tilde{\beta}-\chi,\;\alpha_{2}=\alpha_{1}+\tilde{\alpha}_{1}-\tilde{\alpha}_{2}-\chi,\;l_{i}=\tilde{h}_{i}+\mu_{0},\;h_{i}=\tilde{l}_{i}+\mu_{0}+\chi,\;(i=1,2), \tag{1.11}\] _then the function \(\Phi(x,s)=x^{-\alpha_{1}}s^{1+\chi-\tilde{\alpha}_{1}}P^{(1)}_{\mu,\mu_{0}}(x,s)\) satisfies_ \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)\Phi(x,s)=q^{\nu}A^{\langle 4\rangle}(s;\tilde{h}_{1},\tilde{h}_{2},\tilde{l}_{1},\tilde{l}_{2},\tilde{\alpha}_{1},\tilde{\alpha}_{2},\tilde{\beta})\Phi(x,s). \tag{1.12}\] We also obtain kernel function identities for the variants of the \(q\)-Heun equation in Theorems 2.2 and 2.3. As an application, we have \(q\)-integral transformations of solutions to the \(q\)-Heun equation and its variants. A \(q\)-integral transformation for the \(q\)-Heun equation was already obtained in [18] by using the \(q\)-middle convolution which was introduced by Sakai and Yamaguchi [16]. However, convergence of the \(q\)-integral transformation associated with the \(q\)-middle convolution was not discussed in [16]. In this paper, we obtain \(q\)-integral transformations without using the \(q\)-middle convolution and we discuss the convergence directly. It seems that \(q\)-integral transformations for the variants of the \(q\)-Heun equation have not been known before, and we obtain them in this paper. This paper is organized as follows. In section 2, we obtain identities for kernel functions related to the \(q\)-Heun equation and its variants. In section 3, we formulate a method of finding a \(q\)-integral transformation by using a kernel function and its identity. In section 4, we obtain \(q\)-integral transformations for the \(q\)-Heun equation and its variants. In section 5, we discuss special solutions of the \(q\)-Heun equation from the perspective of the \(q\)-integral transformation. In section 6, we give concluding remarks. Throughout this paper, we assume that \(q\) is a complex number such that \(0<|q|<1\). ## 2. Kernel functions related to \(q\)-Heun equations We introduce functions which appear in the kernel function identities. Recall that the function \(P^{(1)}_{\mu,\mu_{0}}(x,s)\) was defined as \[P^{(1)}_{\mu,\mu_{0}}(x,s)=\frac{(q^{\mu}s/x;q)_{\infty}}{(q^{\mu_{0}}s/x;q)_{\infty}}. \tag{2.1}\] It satisfies \[P_{\mu,\mu_{0}}(x,s/q)=P_{\mu,\mu_{0}}(qx,s)=\frac{x-q^{\mu-1}s}{x-q^{\mu_{0}-1}s}P_{\mu,\mu_{0}}(x,s), \tag{2.2}\] \[P_{\mu,\mu_{0}}(x,qs)=P_{\mu,\mu_{0}}(x/q,s)=\frac{x-q^{\mu_{0}}s}{x-q^{\mu}s}P_{\mu,\mu_{0}}(x,s).\] The function \[P_{\mu,\mu_{0}}^{(2)}(x,s)=(x/s)^{\mu-\mu_{0}}\frac{(q^{-\mu_{0}+1}x/s;q)_{\infty}}{(q^{-\mu+1}x/s;q)_{\infty}} \tag{2.3}\] also satisfies Eq. (2.2). Set \(\vartheta_{q}(t)=(t,q/t,q;q)_{\infty}(=(t;q)_{\infty}(q/t;q)_{\infty}(q;q)_{\infty})\). Then we have \[\frac{P_{\mu,\mu_{0}}^{(2)}(x,q^{n}\xi)}{P_{\mu,\mu_{0}}^{(1)}(x,q^{n}\xi)}=\frac{P_{\mu,\mu_{0}}^{(2)}(x,\xi)}{P_{\mu,\mu_{0}}^{(1)}(x,\xi)}=(x/\xi)^{\mu-\mu_{0}}\frac{\vartheta_{q}(q^{-\mu_{0}+1}x/\xi)}{\vartheta_{q}(q^{-\mu+1}x/\xi)} \tag{2.4}\] for \(n\in\mathbb{Z}\). 
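These infinite products are easy to evaluate numerically by truncation, which gives a quick sanity check of the contiguity relations (2.2). The following minimal Python sketch (truncation order and parameter values are illustrative assumptions) verifies the first relation for \(P^{(1)}_{\mu,\mu_{0}}\):

```python
def qpoch(a, q, nterms=200):
    """Truncated q-Pochhammer symbol (a; q)_infinity, valid for |q| < 1."""
    prod = 1.0
    for j in range(nterms):
        prod *= 1 - a * q**j
    return prod

def P1(x, s, q, mu, mu0):
    """P^{(1)}_{mu,mu0}(x, s) = (q^mu s/x; q)_inf / (q^mu0 s/x; q)_inf."""
    return qpoch(q**mu * s / x, q) / qpoch(q**mu0 * s / x, q)

# numerical check of the first relation in Eq. (2.2):
#   P(x, s/q) = (x - q^(mu-1) s) / (x - q^(mu0-1) s) * P(x, s)
q, mu, mu0 = 0.3, 1.7, 0.4   # illustrative parameter values
x, s = 1.2, 0.8
lhs = P1(x, s / q, q, mu, mu0)
rhs = (x - q**(mu - 1) * s) / (x - q**(mu0 - 1) * s) * P1(x, s, q, mu, mu0)
print(abs(lhs - rhs))        # ~ 1e-16, i.e. machine precision
```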
In the limit with respect to the variable \(s\), we have \[\lim_{s\to 0}P_{\mu,\mu_{0}}^{(1)}(x,s)=1,\ \lim_{s\to\infty}s^{\mu-\mu_{0}}P_{\mu,\mu_{0}}^{(2)}(x,s)=x^{\mu-\mu_{0}}, \tag{2.5}\] and it follows from Eq. (2.4) that \[\lim_{L\to+\infty}P_{\mu,\mu_{0}}^{(2)}(x,s)|_{s=q^{L}\xi}=(x/\xi)^{\mu-\mu_{0}}\frac{\vartheta_{q}(q^{-\mu_{0}+1}x/\xi)}{\vartheta_{q}(q^{-\mu+1}x/\xi)}, \tag{2.6}\] \[\lim_{K\to-\infty}s^{\mu-\mu_{0}}P_{\mu,\mu_{0}}^{(1)}(x,s)|_{s=q^{K-1}\xi}=\xi^{\mu-\mu_{0}}\frac{\vartheta_{q}(q^{-\mu+1}x/\xi)}{\vartheta_{q}(q^{-\mu_{0}+1}x/\xi)}.\] Recall that the fourth degeneration of the Ruijsenaars-van Diejen operator of one variable is written as \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)\] \[=x^{-1}(x-q^{h_{1}+1/2}t_{1})(x-q^{h_{2}+1/2}t_{2})T_{x}^{-1}+q^{\alpha_{1}+\alpha_{2}}x^{-1}(x-q^{l_{1}-1/2}t_{1})(x-q^{l_{2}-1/2}t_{2})T_{x}\] \[-\{(q^{\alpha_{1}}+q^{\alpha_{2}})x+q^{(h_{1}+h_{2}+l_{1}+l_{2}+\alpha_{1}+\alpha_{2})/2}(q^{\beta/2}+q^{-\beta/2})t_{1}t_{2}x^{-1}\}, \tag{2.7}\] where \(T_{x}^{\pm 1}g(x)=g(q^{\pm 1}x)\), and it is related to the \(q\)-Heun equation as explained in the introduction. We obtain a kernel function and its identity for the operator \(A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)\) as follows. **Theorem 2.1**.: _If the parameters satisfy_ \[\chi=(\tilde{h}_{1}+\tilde{h}_{2}-\tilde{l}_{1}-\tilde{l}_{2}+\tilde{\alpha}_{1}-\tilde{\alpha}_{2}-\tilde{\beta})/2,\ \nu=\mu_{0}+\alpha_{1}-\tilde{\alpha}_{2},\ \mu=\mu_{0}+\chi+1,\] \[\beta=-\tilde{\beta}-\chi,\ \alpha_{2}=\alpha_{1}+\tilde{\alpha}_{1}-\tilde{\alpha}_{2}-\chi,\ l_{i}=\tilde{h}_{i}+\mu_{0},\ h_{i}=\tilde{l}_{i}+\mu_{0}+\chi,\ (i=1,2), \tag{2.8}\] _then we have_ \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)x^{-\alpha_{1}}s^{1+\chi-\tilde{\alpha}_{1}}P_{\mu,\mu_{0}}(x,s)\] \[=q^{\nu}A^{\langle 4\rangle}(s;\tilde{h}_{1},\tilde{h}_{2},\tilde{l}_{1},\tilde{l}_{2},\tilde{\alpha}_{1},\tilde{\alpha}_{2},\tilde{\beta})x^{-\alpha_{1}}s^{1+\chi-\tilde{\alpha}_{1}}P_{\mu,\mu_{0}}(x,s), \tag{2.9}\] _where \(P_{\mu,\mu_{0}}(x,s)\) is a function which satisfies Eq. (2.2)._ Proof.: Set \[a_{0}(x) =x^{-1}(x-q^{h_{1}+1/2}t_{1})(x-q^{h_{2}+1/2}t_{2}),\] \[b_{0}(x) =-(q^{\alpha_{1}}+q^{\alpha_{2}})x-q^{(h_{1}+h_{2}+l_{1}+l_{2}+\alpha_{1}+\alpha_{2})/2}(q^{\beta/2}+q^{-\beta/2})t_{1}t_{2}x^{-1},\] \[c_{0}(x) =q^{\alpha_{1}+\alpha_{2}}x^{-1}(x-q^{l_{1}-1/2}t_{1})(x-q^{l_{2}-1/2}t_{2}),\] \[\tilde{a}_{0}(s) =s^{-1}(s-q^{\tilde{h}_{1}+1/2}t_{1})(s-q^{\tilde{h}_{2}+1/2}t_{2}),\] \[\tilde{b}_{0}(s) =-(q^{\tilde{\alpha}_{1}}+q^{\tilde{\alpha}_{2}})s-q^{(\tilde{h}_{1}+\tilde{h}_{2}+\tilde{l}_{1}+\tilde{l}_{2}+\tilde{\alpha}_{1}+\tilde{\alpha}_{2})/2}(q^{\tilde{\beta}/2}+q^{-\tilde{\beta}/2})t_{1}t_{2}s^{-1},\] \[\tilde{c}_{0}(s) =q^{\tilde{\alpha}_{1}+\tilde{\alpha}_{2}}s^{-1}(s-q^{\tilde{l}_{1}-1/2}t_{1})(s-q^{\tilde{l}_{2}-1/2}t_{2}).\] It follows from Eq. (2.2) that Eq. (2.9) is equivalent to the equation \[(q^{\alpha_{1}}a_{0}(x)-q^{\nu+1+\chi-\tilde{\alpha}_{1}}\tilde{c}_{0}(s))\frac{x-q^{\mu_{0}}s}{x-q^{\mu}s}\] \[+(q^{-\alpha_{1}}c_{0}(x)-q^{\nu-1-\chi+\tilde{\alpha}_{1}}\tilde{a}_{0}(s))\frac{x-q^{\mu-1}s}{x-q^{\mu_{0}-1}s}+b_{0}(x)-q^{\nu}\tilde{b}_{0}(s)=0. \tag{2.10}\] By using the relations in Eq. 
(2.8), we have \[q^{-\alpha_{1}}c_{0}(x)-q^{\nu-1-\chi+\tilde{\alpha}_{1}}\tilde {a}_{0}(s)\] \[=q^{\alpha_{2}}\{x^{-1}(x-q^{l_{1}-1/2}t_{1})(x-q^{l_{2}-1/2}t_{2 })-q^{\mu_{0}-1}s^{-1}(s-q^{l_{1}-\mu_{0}+1/2}t_{1})(s-q^{l_{2}-\mu_{0}+1/2}t_ {2})\}\] \[=q^{\alpha_{2}}(x-q^{\mu_{0}-1}s)(1-q^{l_{1}+l_{2}-\mu_{0}}t_{1}t _{2}x^{-1}s^{-1}),\] \[q^{\alpha_{1}}a_{0}(x)-q^{\nu+1+\chi-\tilde{\alpha}_{1}}\tilde{c }_{0}(s)\] \[=q^{\alpha_{1}}\{x^{-1}(x-q^{h_{1}+1/2}t_{1})(x-q^{h_{2}+1/2}t_{2 })-q^{\mu}s^{-1}(s-q^{h_{1}+1/2-\mu}t_{1})(s-q^{h_{2}+1/2-\mu}t_{2})\}\] \[=q^{\alpha_{1}}(x-q^{\mu}s)(1-q^{h_{1}+h_{2}+1-\mu}t_{1}t_{2}x^{-1 }s^{-1}). \tag{2.11}\] Hence \[(q^{\alpha_{1}}a_{0}(x)-q^{\nu+1+\chi-\tilde{\alpha}_{1}}\tilde{c }_{0}(s))\frac{x-q^{\mu_{0}}s}{x-q^{\mu}s}+(q^{-\alpha_{1}}c_{0}(x)-q^{\nu-1- \chi+\tilde{\alpha}_{1}}\tilde{a}_{0}(s))\frac{x-q^{\mu-1}s}{x-q^{\mu_{0}-1}s}\] \[=q^{\alpha_{1}}(x-q^{\mu_{0}}s)(1-q^{h_{1}+h_{2}+1-\mu}t_{1}t_{2}x ^{-1}s^{-1})+q^{\alpha_{2}}(x-q^{\mu-1}s)(1-q^{l_{1}+l_{2}-\mu_{0}}t_{1}t_{2}x ^{-1}s^{-1})\] \[=(q^{\alpha_{1}}+q^{\alpha_{2}})x+(q^{\alpha_{1}+h_{1}+h_{2}+1+\mu _{0}-\mu}+q^{\alpha_{2}+l_{1}+l_{2}-\mu_{0}+\mu-1})t_{1}t_{2}x^{-1}\] \[\quad-(q^{\alpha_{1}+\mu_{0}}+q^{\alpha_{2}+\mu-1})s-(q^{\alpha_{ 1}+h_{1}+h_{2}+1-\mu}+q^{\alpha_{2}+l_{1}+l_{2}-\mu_{0}})t_{1}t_{2}s^{-1}, \tag{2.12}\] and we obtain Eq. (2.10) by using Eq. (2.8). Note that Eq. (2.8) is equivalent to \[\chi =(h_{1}+h_{2}-l_{1}-l_{2}+\alpha_{1}-\alpha_{2}-\beta)/2,\;\nu= \mu_{0}+\alpha_{1}-\tilde{\alpha}_{2},\;\mu=\mu_{0}+\chi+1,\] \[\tilde{\beta} =-\beta-\chi,\;\tilde{\alpha}_{1}=\tilde{\alpha}_{2}+\alpha_{2}- \alpha_{1}+\chi,\;\tilde{h}_{i}=l_{i}-\mu_{0},\;\tilde{l}_{i}=h_{i}-\mu_{0}- \chi,\;(i=1,2). \tag{2.13}\] Theorem 1.1 in the introduction follows immediately from Theorem 2.1, because the function \(P^{(1)}_{\mu,\mu_{0}}(x,s)\) satisfies Eq. (2.2). In order to define the variants of the \(q\)-Heun equation, we introduce the following operators \[A^{(3)}(x;h_{1},h_{2},h_{3},l_{1},l_{2},l_{3},\alpha,\beta)\] \[=x^{-1}\prod_{n=1}^{3}(x-q^{h_{n}+1/2}t_{n})T_{x}^{-1}+q^{2\alpha +1}x^{-1}\prod_{n=1}^{3}(x-q^{l_{n}-1/2}t_{n})T_{x}\] \[\quad+q^{\alpha+1/2}[-(q^{1/2}+q^{-1/2})x^{2}+\sum_{n=1}^{3}(q^{h_ {n}}+q^{l_{n}})t_{n}x\] \[\quad+q^{(l_{1}+l_{2}+l_{3}+h_{1}+h_{2}+h_{3})/2}(q^{\beta/2}+q^{ -\beta/2})t_{1}t_{2}t_{3}x^{-1}], \tag{2.14}\] \[A^{(2)}(x;h_{1},h_{2},h_{3},h_{4},l_{1},l_{2},l_{3},l_{4},\alpha)\] \[=x^{-2}\prod_{n=1}^{4}(x-q^{h_{n}+1/2}t_{n})T_{x}^{-1}+q^{2\alpha +1}x^{-2}\prod_{n=1}^{4}(x-q^{l_{n}-1/2}t_{n})T_{x}\] \[\quad+q^{\alpha+1/2}\Big{[}-(q^{1/2}+q^{-1/2})x^{2}+\sum_{n=1}^{4 }(q^{h_{n}}+q^{l_{n}})t_{n}x\] \[\quad+\prod_{n=1}^{4}q^{(h_{n}+l_{n})/2}t_{n}\Big{\{}-(q^{1/2}+q^ {-1/2})x^{-2}+\sum_{n=1}^{4}\Big{(}\frac{1}{q^{h_{n}}t_{n}}+\frac{1}{q^{l_{n} }t_{n}}\Big{)}x^{-1}\Big{\}}\Big{]}. \tag{2.15}\] The third and the second degenerate Ruijsenaars-van Diejen operators of one variable in [24] are realized as the case \(\alpha=-1/2\) in Eqs. (2.14) and (2.15), and conversely Eqs. (2.14) and (2.15) are obtained from the third and the second degenerate Ruijsenaars-van Diejen operators of one variable in [24] by appropriate gauge transformations. 
The variant of the \(q\)-Heun equation of degree three is written as \[A^{(3)}(x;h_{1},h_{2},h_{3},l_{1},l_{2},l_{3},\alpha,\beta)g(x)=Eg(x),\quad(E\in\mathbb{C}), \tag{2.16}\] and the variant of the \(q\)-Heun equation of degree four is written as \[A^{(2)}(x;h_{1},h_{2},h_{3},h_{4},l_{1},l_{2},l_{3},l_{4},\alpha)g(x)=Eg(x),\quad(E\in\mathbb{C}). \tag{2.17}\] Note that the variant of the \(q\)-Heun equation of degree three is a \(q\)-deformation of the second order Fuchsian differential equation with four singularities \(\{t_{1},t_{2},t_{3},\infty\}\) written as \[\frac{d^{2}y}{dz^{2}}+\left(\frac{\gamma}{z-t_{1}}+\frac{\delta}{z-t_{2}}+\frac{\epsilon}{z-t_{3}}\right)\frac{dy}{dz}+\frac{\alpha\beta z-B}{(z-t_{1})(z-t_{2})(z-t_{3})}y=0, \tag{2.18}\] and the variant of the \(q\)-Heun equation of degree four is a \(q\)-deformation of the second order Fuchsian differential equation with four singularities \(\{t_{1},t_{2},t_{3},t_{4}\}\). We obtain a kernel function and its identity for the operator \(A^{(3)}(x;h_{1},h_{2},h_{3},l_{1},l_{2},l_{3},\alpha,\beta)\) as follows.

**Theorem 2.2**.: _If the parameters satisfy_ \[\chi=(\tilde{h}_{1}+\tilde{h}_{2}+\tilde{h}_{3}-\tilde{l}_{1}-\tilde{l}_{2}-\tilde{l}_{3}-\tilde{\beta})/2,\ \nu=2\mu_{0}+\alpha-\tilde{\alpha}+\chi,\ \mu=\mu_{0}+\chi+1,\] \[\beta=-\tilde{\beta}-\chi,\ l_{i}=\tilde{h}_{i}+\mu_{0},\ h_{i}=\tilde{l}_{i}+\mu_{0}+\chi,\ (i=1,2,3), \tag{2.19}\] _then we have_ \[A^{(3)}(x;h_{1},h_{2},h_{3},l_{1},l_{2},l_{3},\alpha,\beta)x^{-\alpha}s^{\chi+1-\tilde{\alpha}}P_{\mu,\mu_{0}}(x,s)\] \[=q^{\nu}A^{(3)}(s;\tilde{h}_{1},\tilde{h}_{2},\tilde{h}_{3},\tilde{l}_{1},\tilde{l}_{2},\tilde{l}_{3},\tilde{\alpha},\tilde{\beta})x^{-\alpha}s^{\chi+1-\tilde{\alpha}}P_{\mu,\mu_{0}}(x,s). \tag{2.20}\]

Theorem 2.2 is shown similarly to Theorem 2.1. Note that Eq. (2.19) is equivalent to \[\chi=(h_{1}+h_{2}+h_{3}-l_{1}-l_{2}-l_{3}-\beta)/2,\ \nu=2\mu_{0}+\alpha-\tilde{\alpha}+\chi,\ \mu=\mu_{0}+\chi+1,\] \[\tilde{\beta}=-\beta-\chi,\ \tilde{h}_{i}=l_{i}-\mu_{0},\ \tilde{l}_{i}=h_{i}-\mu_{0}-\chi,\ (i=1,2,3). \tag{2.21}\] We also obtain a kernel function and its identity for the operator \(A^{(2)}(x;h_{1},h_{2},h_{3},h_{4},l_{1},l_{2},l_{3},l_{4},\alpha)\) as follows.

**Theorem 2.3**.: _If the parameters satisfy_ \[\chi=(\tilde{h}_{1}+\tilde{h}_{2}+\tilde{h}_{3}+\tilde{h}_{4}-\tilde{l}_{1}-\tilde{l}_{2}-\tilde{l}_{3}-\tilde{l}_{4})/2,\ \nu=2\mu_{0}+\alpha-\tilde{\alpha}+\chi,\ \mu=\mu_{0}+\chi+1,\] \[l_{i}=\tilde{h}_{i}+\mu_{0},\ h_{i}=\tilde{l}_{i}+\mu_{0}+\chi,\ (i=1,2,3,4), \tag{2.22}\] _then we have_ \[A^{(2)}(x;h_{1},h_{2},h_{3},h_{4},l_{1},l_{2},l_{3},l_{4},\alpha)x^{-\alpha}s^{\chi+1-\tilde{\alpha}}P_{\mu,\mu_{0}}(x,s)\] \[=q^{\nu}A^{(2)}(s;\tilde{h}_{1},\tilde{h}_{2},\tilde{h}_{3},\tilde{h}_{4},\tilde{l}_{1},\tilde{l}_{2},\tilde{l}_{3},\tilde{l}_{4},\tilde{\alpha})x^{-\alpha}s^{\chi+1-\tilde{\alpha}}P_{\mu,\mu_{0}}(x,s). \tag{2.23}\]

Theorem 2.3 is also shown similarly to Theorem 2.1. Note that Eq. (2.22) is equivalent to \[\chi=(h_{1}+h_{2}+h_{3}+h_{4}-l_{1}-l_{2}-l_{3}-l_{4})/2,\ \nu=2\mu_{0}+\alpha-\tilde{\alpha}+\chi,\ \mu=\mu_{0}+\chi+1,\] \[\tilde{h}_{i}=l_{i}-\mu_{0},\ \tilde{l}_{i}=h_{i}-\mu_{0}-\chi,\ (i=1,2,3,4). \tag{2.24}\]

## 3. Kernel function and \(q\)-integral transformation

In this section, we obtain \(q\)-integral transformations by using the kernel function. Let \(\xi\in\mathbb{C}\setminus\{0\}\).
The infinite sum \[\int_{0}^{\xi\infty}f(s)\,d_{q}s=(1-q)\sum_{n=-\infty}^{\infty}q^{n}\xi f(q^{n}\xi) \tag{3.1}\] is called the Jackson integral. It is known that the usual integral over \((0,+\infty)\) is recovered as \(q\to 1\) (see e.g. [4]).

**Theorem 3.1**.: _Assume that the function \(\Phi(x,s)\) satisfies_ \[\{a(x)T_{x}^{-1}+b(x)+c(x)T_{x}-(\tilde{a}(s)T_{s}^{-1}+\tilde{b}(s)+\tilde{c}(s)T_{s})\}\Phi(x,s)=0. \tag{3.2}\] _If the function \(h(s)\) satisfies_ \[\{q\tilde{a}(qs)T_{s}+\tilde{b}(s)+q^{-1}\tilde{c}(s/q)T_{s}^{-1}\}h(s)=0, \tag{3.3}\] _the Jackson integral_ \[g(x):=\int_{0}^{\xi\infty}h(s)\Phi(x,s)\,d_{q}s \tag{3.4}\] _and \(g(q^{\pm 1}x)\) converge, the limits_ \[g_{1}(x):=\lim_{L\to+\infty}qs\tilde{a}(qs)h(qs)\Phi(x,s)-s\tilde{c}(s)h(s)\Phi(x,qs)|_{s=q^{L}\xi},\] \[g_{2}(x):=\lim_{K\to-\infty}qs\tilde{a}(qs)h(qs)\Phi(x,s)-s\tilde{c}(s)h(s)\Phi(x,qs)|_{s=q^{K-1}\xi} \tag{3.5}\] _converge, and the variable \(\xi\) is independent of the variable \(x\) or it is proportional to \(x\) (i.e. \(\xi=Ax\) where \(A\) is independent of \(x\)), then the function \(g(x)\) in Eq. (3.4) satisfies_ \[\{a(x)T_{x}^{-1}+b(x)+c(x)T_{x}\}g(x)=(1-q)(g_{2}(x)-g_{1}(x)). \tag{3.6}\]

To obtain the theorem, we use the following proposition.

**Proposition 3.2**.: _Assume that the function \(\Phi(x,s)\) satisfies Eq. (3.2) and the function \(h(s)\) satisfies Eq. (3.3). Set_ \[g^{[K,L]}(x)=(1-q)\sum_{n=K}^{L}sh(s)\Phi(x,s)|_{s=q^{n}\xi}. \tag{3.7}\] _(i) If the variable \(\xi\) is independent of the variable \(x\), then we have_ \[\{a(x)T_{x}^{-1}+b(x)+c(x)T_{x}\}g^{[K,L]}(x)\] \[=-(1-q)[qs\tilde{a}(qs)h(qs)\Phi(x,s)-s\tilde{c}(s)h(s)\Phi(x,qs)]|_{s=q^{L}\xi}\] \[\quad+(1-q)[qs\tilde{a}(qs)h(qs)\Phi(x,s)-s\tilde{c}(s)h(s)\Phi(x,qs)]|_{s=q^{K-1}\xi}. \tag{3.8}\] _(ii) If \(\xi=Ax\) and \(A\) is independent of \(x\), then we have_ \[\{a(x)T_{x}^{-1}+b(x)+c(x)T_{x}\}g^{[K,L]}(x)\] \[=-(1-q)[sa(x)h(s)\Phi(x/q,s)-qsc(x)h(qs)\Phi(qx,qs)]_{s=q^{L}Ax}\] \[\quad+(1-q)[sa(x)h(s)\Phi(x/q,s)-qsc(x)h(qs)\Phi(qx,qs)]_{s=q^{K-1}Ax}\] \[\quad-(1-q)[qs\tilde{a}(qs)h(qs)\Phi(x,s)-s\tilde{c}(s)h(s)\Phi(x,qs)]|_{s=q^{L}Ax}\] \[\quad+(1-q)[qs\tilde{a}(qs)h(qs)\Phi(x,s)-s\tilde{c}(s)h(s)\Phi(x,qs)]|_{s=q^{K-1}Ax}. \tag{3.9}\]

Proof.: It follows from Eq. (3.2) that \[[a(x)\Phi(x/q,s)+b(x)\Phi(x,s)+c(x)\Phi(qx,s)]_{s=q^{n}\xi}\] \[\quad=[\tilde{a}(s)\Phi(x,s/q)+\tilde{b}(s)\Phi(x,s)+\tilde{c}(s)\Phi(x,qs)]_{s=q^{n}\xi}. \tag{3.10}\] (i) If \(\xi\) is independent of \(x\), then we have \[\{a(x)T_{x}^{-1}+b(x)+c(x)T_{x}\}\sum_{n=K}^{L}s\Phi(x,s)h(s)|_{s=q^{n}\xi}\] \[=\sum_{n=K}^{L}[s\tilde{a}(s)\Phi(x,s/q)h(s)+s\tilde{b}(s)\Phi(x,s)h(s)+s\tilde{c}(s)\Phi(x,qs)h(s)]|_{s=q^{n}\xi}\] \[=\sum_{n=K-1}^{L-1}qs\tilde{a}(qs)\Phi(x,s)h(qs)|_{s=q^{n}\xi}+\sum_{n=K}^{L}s\tilde{b}(s)\Phi(x,s)h(s)|_{s=q^{n}\xi}\] \[\quad+\sum_{n=K+1}^{L+1}q^{-1}s\tilde{c}(s/q)\Phi(x,s)h(s/q)|_{s=q^{n}\xi}\] \[=\sum_{n=K}^{L}[q\tilde{a}(qs)h(qs)+\tilde{b}(s)h(s)+q^{-1}\tilde{c}(s/q)h(s/q)]s\Phi(x,s)|_{s=q^{n}\xi}\] \[\quad-qs\tilde{a}(qs)\Phi(x,s)h(qs)|_{s=q^{L}\xi}+q^{-1}s\tilde{c}(s/q)\Phi(x,s)h(s/q)|_{s=q^{L+1}\xi}\] \[\quad+qs\tilde{a}(qs)\Phi(x,s)h(qs)|_{s=q^{K-1}\xi}-q^{-1}s\tilde{c}(s/q)\Phi(x,s)h(s/q)|_{s=q^{K}\xi}. \tag{3.11}\] Therefore we obtain (i) by Eq. (3.3). (ii) If \(\xi=Ax\) and \(A\) is independent of \(x\), then it follows from Eq.
(3.10) that \[\{a(x)T_{x}^{-1}+b(x)+c(x)T_{x}\}\sum_{n=K}^{L}s\Phi(x,s)h(s)|_{s=q ^{n}Ax}\] \[=\sum_{n=K}^{L}\{a(x)q^{n-1}Ax\Phi(x/q,q^{n-1}Ax)h(q^{n-1}Ax)+b(x )q^{n}Ax\Phi(x,q^{n}Ax)h(q^{n}Ax)\] \[\quad\quad+c(x)q^{n+1}Ax\Phi(qx,q^{n+1}Ax)h(q^{n+1}Ax)\}\] \[=\sum_{n=K}^{L}sh(s)(a(x)\Phi(x/q,s)+b(x)\Phi(x,s)+c(x)\Phi(qx,s) )|_{s=q^{n}Ax}\] \[\quad+a(x)q^{K-1}Ax\Phi(x/q,q^{K-1}Ax)h(q^{K-1}Ax)-a(x)q^{L}Ax \Phi(x/q,q^{L}Ax)h(q^{L}Ax)\] \[\quad-c(x)q^{K}Ax\Phi(qx,q^{K}Ax)h(q^{K}Ax)+c(x)q^{L+1}Ax\Phi(qx, q^{L+1}Ax)h(q^{L+1}Ax)\] \[=\sum_{n=K}^{L}[s\tilde{a}(s)\Phi(x,s/q)h(s)+s\tilde{b}(s)\Phi(x, s)h(s)+s\tilde{c}(s)\Phi(x,qs)h(s)]|_{s=q^{n}Ax}\] \[\quad-[sa(x)\Phi(x/q,s)h(s)-qsc(x)\Phi(qx,qs)h(qs)]_{s=q^{L}Ax}\] \[\quad+[sa(x)\Phi(x/q,s)h(s)-qsc(x)\Phi(qx,qs)h(qs)]_{s=q^{K-1}Ax}, \tag{3.12}\] where we applied Eq. (3.10) in the case \(\xi=Ax\). Hence we obtain (ii) by repeating the discussion in (i). We continue the proof of Theorem 3.1. If the variable \(\xi\) is independent of the variable \(x\), then we obtain Theorem 3.1 from Eq. (3.8) as \(L\to+\infty\) and \(K\to-\infty\). We consider the case \(\xi=Ax\). The convergence of the Jackson integrals \(g(q^{\pm 1}x)\) is equivalent to the convergence of the summations \[\sum_{n=0}^{L}s\Phi(q^{\pm 1}x,s)h(s)|_{s=q^{n}Ax},\quad\sum_{n=K}^{-1}s\Phi(q^{ \pm 1}x,s)h(s)|_{s=q^{n}Ax} \tag{3.13}\] as \(L\to+\infty\) and \(K\to-\infty\), and it follows that \[\lim_{L\to+\infty}s\Phi(q^{\pm 1}x,s)h(s)|_{s=q^{n}Ax}=0=\lim_{K\to-\infty}s \Phi(q^{\pm 1}x,s)h(s)|_{s=q^{n}Ax}. \tag{3.14}\] Therefore we have \[\lim_{n\to\pm\infty}[sa(x)\Phi(x/q,s)h(s)-qsc(x)\Phi(qx,qs)h(qs)]_{s=q^{n}Ax}=0, \tag{3.15}\] and we obtain Theorem 3.1 for the case \(\xi=Ax\) from Eq. (3.9) as \(L\to+\infty\) and \(K\to-\infty\). ## 4. \(q\)-integral transformations for \(q\)-Heun equations and its variants In this section, we obtain \(q\)-integral transformations for the \(q\)-Heun equation and its variants by applying results in sections 2 and 3. Recall that the operator \(A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)\) was introduced in Eq. (2.7), and the \(q\)-Heun equation was written as the equation for the eigenfunction of it with the eigenvalue \(E\). In Theorem 2.1, we obtained the identity \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1}, \alpha_{2},\beta)x^{-\alpha_{1}}s^{1+\chi-\tilde{\alpha}_{1}}P_{\mu,\mu_{0}}(x,s)\] \[=q^{\nu}A^{\langle 4\rangle}(s;\tilde{h}_{1},\tilde{h}_{2}, \tilde{l}_{1},\tilde{l}_{2},\tilde{\alpha}_{1},\tilde{\alpha}_{2},\tilde{ \beta})x^{-\alpha_{1}}s^{1+\chi-\tilde{\alpha}_{1}}P_{\mu,\mu_{0}}(x,s), \tag{4.1}\] where the parameters satisfy Eq. (2.8). We now apply Theorem 3.1 for the \(q\)-Heun equation. Let \(E\) and \(\tilde{E}\) be constants such that \(E=q^{\nu}\tilde{E}\) and write \[a(x)T_{x}^{-1}+b(x)+c(x)T_{x}=A^{\langle 4\rangle}(x;h_{1},h_{2},l_ {1},l_{2},\alpha_{1},\alpha_{2},\beta)-E,\] \[\tilde{a}(s)T_{s}^{-1}+\tilde{b}(s)+\tilde{c}(s)T_{s}=q^{\nu}\{A^ {\langle 4\rangle}(s;\tilde{h}_{1},\tilde{h}_{2},\tilde{l}_{1},\tilde{l}_{2}, \tilde{\alpha}_{1},\tilde{\alpha}_{2},\tilde{\beta})-\tilde{E}\}. \tag{4.2}\] Eq. (4.1) gives a realization of Eq. (3.2) by setting \(\Phi(x,s)=x^{-\alpha_{1}}s^{1+\chi-\tilde{\alpha}_{1}}P_{\mu,\mu_{0}}(x,s)\). 
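As a brief numerical aside before computing these coefficients: the Jackson integral (3.1) that drives Theorem 3.1 can be examined directly. The sketch below is my own illustration, not part of the argument; the test function \(f(s)=e^{-s}\), the choice \(\xi=1\), and the symmetric truncation of the bilateral sum are all arbitrary. It exhibits the \(q\to 1\) recovery of the usual integral over \((0,+\infty)\) noted after Eq. (3.1).

```python
import math

def jackson_integral(f, q, xi=1.0, N=20000):
    """Truncation of Eq. (3.1): (1 - q) * sum_{n=-N}^{N} q^n * xi * f(q^n * xi)."""
    total = 0.0
    for n in range(-N, N + 1):
        if n * math.log(q) > 700.0:   # node q^n * xi would overflow a float; f decays anyway
            continue
        s = q**n * xi
        total += s * f(s)             # equals q^n * xi * f(q^n * xi)
    return (1 - q) * total

f = lambda s: math.exp(-s)

for q in (0.9, 0.99, 0.999):
    print(q, jackson_integral(f, q))  # approaches int_0^inf e^{-s} ds = 1 as q -> 1
```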
In this situation, the coefficients of the equation \[\{q\tilde{a}(qs)T_{s}+\tilde{b}(s)+q^{-1}\tilde{c}(s/q)T_{s}^{-1}\}h(s)=0 \tag{4.3}\] are described as \[q\tilde{a}(qs)=q^{\nu+2}s^{-1}(s-q^{\tilde{h}_{1}-1/2}t_{1})(s-q^{\tilde{h}_{2}-1/2}t_{2}),\] \[q^{-1}\tilde{c}(s/q)=q^{\nu+\tilde{\alpha}_{1}+\tilde{\alpha}_{2}-2}s^{-1}(s-q^{\tilde{l}_{1}+1/2}t_{1})(s-q^{\tilde{l}_{2}+1/2}t_{2}),\] \[\tilde{b}(s)=-q^{\nu}\{(q^{\tilde{\alpha}_{1}}+q^{\tilde{\alpha}_{2}})s+q^{(\tilde{h}_{1}+\tilde{h}_{2}+\tilde{l}_{1}+\tilde{l}_{2}+\tilde{\alpha}_{1}+\tilde{\alpha}_{2})/2}(q^{\tilde{\beta}/2}+q^{-\tilde{\beta}/2})t_{1}t_{2}s^{-1}\}-q^{\nu}\tilde{E}. \tag{4.4}\] Eq. (4.3) is equivalent to the equation \(A^{\langle 4\rangle}(x;h^{\prime}_{1},h^{\prime}_{2},l^{\prime}_{1},l^{\prime}_{2},\alpha^{\prime}_{1},\alpha^{\prime}_{2},\beta^{\prime})h(s)=E^{\prime}h(s)\), where \[\alpha^{\prime}_{i}=2-\tilde{\alpha}_{i},\;l^{\prime}_{i}=\tilde{h}_{i},\;h^{\prime}_{i}=\tilde{l}_{i},\;(i=1,2),\;\beta^{\prime}=\tilde{\beta},\;E^{\prime}=q^{2-\tilde{\alpha}_{1}-\tilde{\alpha}_{2}}\tilde{E}. \tag{4.5}\] Note that the exponents of Eq. (4.3) about \(s=\infty\) are \(\{\alpha^{\prime}_{1},\alpha^{\prime}_{2}\}\) and the exponents about \(s=0\) are \(\{\lambda^{\prime}_{+},\lambda^{\prime}_{-}\}\), where \(\lambda^{\prime}_{\pm}=(h^{\prime}_{1}+h^{\prime}_{2}-l^{\prime}_{1}-l^{\prime}_{2}-\alpha^{\prime}_{1}-\alpha^{\prime}_{2}\pm\beta^{\prime}+2)/2\). Hence, if \(\alpha^{\prime}_{1}-\alpha^{\prime}_{2}\not\in\mathbb{Z}\) (resp. \(\beta^{\prime}\not\in\mathbb{Z}\)), then there exist solutions \(h_{1}(s)\) and \(h_{2}(s)\) (resp. \(h_{3}(s)\) and \(h_{4}(s)\)) of Eq. (4.3) about \(s=\infty\) (resp. \(s=0\)) such that \(h_{1}(s)/s^{-\alpha^{\prime}_{1}}\to 1\) and \(h_{2}(s)/s^{-\alpha^{\prime}_{2}}\to 1\) as \(s\to\infty\) (resp. \(h_{3}(s)/s^{\lambda^{\prime}_{+}}\to 1\) and \(h_{4}(s)/s^{\lambda^{\prime}_{-}}\to 1\) as \(s\to 0\)). See [24] for details of the exponents. Then we obtain the following theorem from Theorem 3.1.

**Theorem 4.1**.: _Assume that the function \(h(s)\) satisfies_ \[A^{\langle 4\rangle}(x;h^{\prime}_{1},h^{\prime}_{2},l^{\prime}_{1},l^{\prime}_{2},\alpha^{\prime}_{1},\alpha^{\prime}_{2},\beta^{\prime})h(s)=E^{\prime}h(s) \tag{4.6}\] _and_ \[\lim_{L\to+\infty}\frac{h(s)}{s^{(h^{\prime}_{1}+h^{\prime}_{2}-l^{\prime}_{1}-l^{\prime}_{2}-\alpha^{\prime}_{1}-\alpha^{\prime}_{2}+\beta^{\prime}+2)/2}}\big|_{s=q^{L}\xi}=C_{1},\;\lim_{K\to-\infty}\frac{h(s)}{s^{-\alpha^{\prime}_{1}}}\big|_{s=q^{K}\xi}=C_{2} \tag{4.7}\] _for some constants \(E^{\prime},C_{1},C_{2}\).
Then the Jackson integral_ \[g(x)=x^{-\alpha_{1}}\int_{0}^{\xi\infty}s^{-(h^{\prime}_{1}+h^{\prime}_{2}-l^ {\prime}_{1}-l^{\prime}_{2}-\alpha^{\prime}_{1}-\alpha^{\prime}_{2}+\beta^{ \prime}+2)/2}h(s)P^{(1)}_{\mu,\mu_{0}}(x,s)\,d_{q}s \tag{4.8}\] _converges and it satisfies_ \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)g(x )=Eg(x)+(1-q)(g_{2}(x)-g_{1}(x)), \tag{4.9}\] _where_ \[g_{1}(x)=C_{1}x^{-\alpha_{1}}q^{\mu_{0}+\alpha_{1}+h^{\prime}_{1 }+h^{\prime}_{2}+\chi}(q^{\beta^{\prime}}-1)t_{1}t_{2},\] \[g_{2}(x)=C_{2}x^{-\alpha_{1}}\frac{\vartheta_{q}(q^{-\mu_{0}- \chi}x/\xi)}{\vartheta_{q}(q^{-\mu_{0}+1}x/\xi)}\xi^{\chi+1}q^{\mu_{0}+\alpha _{1}}(q^{\alpha^{\prime}_{2}-\alpha^{\prime}_{1}}-1),\] \[\chi=(l^{\prime}_{1}+l^{\prime}_{2}-h^{\prime}_{1}-h^{\prime}_{2} -\alpha^{\prime}_{1}+\alpha^{\prime}_{2}-\beta^{\prime})/2,\;\mu=\mu_{0}+1+\chi,\] \[E=q^{\mu_{0}+\alpha_{1}-\alpha^{\prime}_{1}}E^{\prime},\;\beta=- \beta^{\prime}-\chi,\;\alpha_{2}=\alpha_{1}-\alpha^{\prime}_{1}+\alpha^{ \prime}_{2}-\chi,\] \[l_{i}=l^{\prime}_{i}+\mu_{0},\;h_{i}=h^{\prime}_{i}+\mu_{0}+ \chi,\;(i=1,2). \tag{4.10}\] _In particular, if \(C_{1}=C_{2}=0\), then we have_ \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)g(x )=Eg(x). \tag{4.11}\] Proof.: On the given parameters \(h^{\prime}_{1},h^{\prime}_{2},l^{\prime}_{1},l^{\prime}_{2},\alpha^{\prime}_{1 },\alpha^{\prime}_{2},\beta^{\prime},E^{\prime}\), we define the parameters \(\tilde{h}_{1},\tilde{h}_{2},\tilde{l}_{1},\tilde{l}_{2},\tilde{\alpha}_{1}, \tilde{\alpha}_{2},\tilde{\beta},\tilde{E}\) by Eq. (4.5). Then it follows from Eq. (4.6) that the function \(h(s)\) satisfies Eq. (4.3) with Eq. (4.4). By combining Eq. (4.10) with Eq. (4.5), we obtain Eq. (2.8). It follows from Eq. (4.7) that \(h(s)s^{1+\chi-\tilde{\alpha}_{1}}|_{s=q^{L}\xi}\to C_{1}\) as \(L\to+\infty\). On the first limit in Eq. (3.5), we have \[x^{-\alpha_{1}}[qs\tilde{a}(qs)h(qs)s^{1+\chi-\tilde{\alpha}_{1}} P^{(1)}_{\mu,\mu_{0}}(x,s)-s\tilde{c}(s)h(s)(qs)^{1+\chi-\tilde{\alpha}_{1}}P^{(1)}_ {\mu,\mu_{0}}(x,qs)]|_{s=q^{L}\xi}\] \[=q^{\nu}x^{-\alpha_{1}}[(qs-q^{\tilde{h}_{1}+1/2}t_{1})(qs-q^{ \tilde{h}_{2}+1/2}t_{2})q^{-1-\chi+\tilde{\alpha}_{1}}P^{(1)}_{\mu,\mu_{0}}(x, s)h(qs)(qs)^{1+\chi-\tilde{\alpha}_{1}}\] \[\quad-q^{\tilde{\alpha}_{1}+\tilde{\alpha}_{2}}(s-q^{\tilde{l}_{ 1}-1/2}t_{1})(s-q^{\tilde{l}_{2}-1/2}t_{2})q^{1+\chi-\tilde{\alpha}_{1}}P^{(1 )}_{\mu,\mu_{0}}(x,qs)h(s)s^{1+\chi-\tilde{\alpha}_{1}}]|_{s=q^{L}\xi}\] \[\to C_{1}x^{-\alpha_{1}}q^{\nu}(q^{\tilde{h}_{1}+\tilde{h}_{2}- \chi+\tilde{\alpha}_{1}}-q^{\tilde{\alpha}_{1}+\tilde{\alpha}_{2}+\tilde{l}_ {1}+\tilde{l}_{2}+\chi-\tilde{\alpha}_{1}})t_{1}t_{2}\] \[=C_{1}x^{-\alpha_{1}}q^{\mu_{0}+\alpha_{1}+(\tilde{h}_{1}+\tilde{ h}_{2}+\tilde{l}_{1}+\tilde{l}_{2}+\tilde{\alpha}_{1}-\tilde{\alpha}_{2})/2}(q^{ \tilde{\beta}/2}-q^{-\tilde{\beta}/2})t_{1}t_{2} \tag{4.12}\] as \(L\to+\infty\). Here we used \(P^{(1)}_{\mu,\mu_{0}}(x,s)\to 1\) as \(s\to 0\) (see Eq. (2.5)). Therefore the first limit in Eq. (3.5) converges and we have \[g_{1}(x)=C_{1}x^{-\alpha_{1}}q^{\mu_{0}+\alpha_{1}+(h^{\prime}_{1}+h^{\prime}_ {2}+l^{\prime}_{1}+l^{\prime}_{2}-\alpha^{\prime}_{1}+\alpha^{\prime}_{2})/2}( q^{\beta^{\prime}/2}-q^{-\beta^{\prime}/2})t_{1}t_{2}. \tag{4.13}\] It also follows from Eq. (4.7) that \(h(s)s^{2-\tilde{\alpha}_{1}}|_{s=q^{K-1}\xi}\to C_{2}\) as \(K\to-\infty\). On the second limit in Eq. 
(3.5), we have \[x^{-\alpha_{1}}[qs\tilde{a}(qs)h(qs)s^{1+\chi-\tilde{\alpha}_{1 }}P^{(1)}_{\mu,\mu_{0}}(x,s)-s\tilde{c}(s)h(s)(qs)^{1+\chi-\tilde{\alpha}_{1}} P^{(1)}_{\mu,\mu_{0}}(x,qs)]|_{s=q^{K-1}\xi}\] \[=x^{-\alpha_{1}}(x/\xi)^{\mu_{0}-\mu}\frac{\vartheta_{q}(q^{-\mu+ 1}x/\xi)}{\vartheta_{q}(q^{-\mu_{0}+1}x/\xi)}[q^{\tilde{\alpha}_{1}-1}s^{-1} \tilde{a}(qs)s^{1+\chi}P^{(2)}_{\mu,\mu_{0}}(x,s)h(qs)(qs)^{2-\tilde{\alpha}_{ 1}}\] \[\quad-s^{-1}\tilde{c}(s)q^{-\tilde{\alpha}_{1}}(qs)^{1+\chi}P^{(2 )}_{\mu,\mu_{0}}(x,qs)h(s)s^{2-\tilde{\alpha}_{1}}]|_{s=q^{K-1}\xi}\] \[\to x^{-\alpha_{1}}(x/\xi)^{-1-\chi}\frac{\vartheta_{q}(q^{-\mu+1} x/\xi)}{\vartheta_{q}(q^{-\mu_{0}+1}x/\xi)}q^{\nu}(q^{\tilde{\alpha}_{1}}x^{1+\chi}C_{ 2}-q^{\tilde{\alpha}_{2}}x^{1+\chi}C_{2})\] \[=C_{2}x^{-\alpha_{1}}\xi^{1+\chi}\frac{\vartheta_{q}(q^{-\mu_{0}- \chi}x/\xi)}{\vartheta_{q}(q^{-\mu_{0}+1}x/\xi)}q^{\mu_{0}+\alpha_{1}-\tilde{ \alpha}_{2}}(q^{\tilde{\alpha}_{1}}-q^{\tilde{\alpha}_{2}}) \tag{4.14}\] as \(K\to-\infty\). Here we used Eq. (2.5). Therefore the second limit in Eq. (3.5) converges and we have \[g_{2}(x)=C_{2}x^{-\alpha_{1}}\xi^{\chi+1}\frac{\vartheta_{q}(q^{-\mu_{0}-\chi}x /\xi)}{\vartheta_{q}(q^{-\mu_{0}+1}x/\xi)}q^{\mu_{0}+\alpha_{1}}(q^{\alpha^{ \prime}_{2}-\alpha^{\prime}_{1}}-1). \tag{4.15}\] We show convergence of the Jackson integral in Eq. (4.8), which is equivalent to convergence of the summation of the sequence \(a_{n}\) over \(n\in\mathbb{Z}\), where \[a_{n}=sh(s)s^{1+\chi-\tilde{\alpha}_{1}}P^{(1)}_{\mu,\mu_{0}}(x,s)|_{s=q^{n}\xi}. \tag{4.16}\] Since \(h(s)s^{1+\chi-\tilde{\alpha}_{1}}|_{s=q^{L}\xi}\to C_{1}\) as \(L\to+\infty\) and \(P^{(1)}_{\mu,\mu_{0}}(x,s)\to 1\) as \(s\to 0\), there exists an integer \(N_{1}\) and a positive number \(C^{\prime}_{1}\) such that \(|a_{n}|\leq C^{\prime}_{1}q^{n}\) for any integer \(n\) such that \(n\geq N_{1}\). Hence, we obtain convergence of the summation of \(a_{n}\) over \(n\in\mathbb{Z}_{\geq 0}\). Eq. (4.16) is also expressed as \[a_{n}=(x/\xi)^{\mu_{0}-\mu}\frac{\vartheta_{q}(q^{-\mu+1}x/\xi)}{\vartheta_{q}(q ^{-\mu_{0}+1}x/\xi)}s^{-1}h(s)s^{2-\tilde{\alpha}_{1}}s^{1+\chi}P^{(2)}_{\mu, \mu_{0}}(x,s)|_{s=q^{n}\xi}. \tag{4.17}\] Since \(h(s)s^{2-\bar{\alpha}_{1}}|_{s=q^{K-1}\xi}\to C_{2}\) as \(K\to-\infty\) and \(s^{\mu-\mu_{0}}P^{(2)}_{\mu,\mu_{0}}(x,s)\to x^{\mu-\mu_{0}}\) as \(s\to\infty\), there exists an integer \(N_{2}\) and a positive number \(C_{2}^{\prime}\) such that \(|a_{n}|\leq C_{2}^{\prime}q^{-n}\) for any integer \(n\) such that \(n\leq N_{2}\). Hence, we obtain convergence of the summation of \(a_{n}\) over \(n\in\mathbb{Z}_{\leq-1}\). Therefore, convergence of the Jackson integral in Eq. (4.8) is shown. Recall that Eq. (4.1) gives a realization of Eq. (3.2) and we have the relation \(E=q^{\mu_{0}+\alpha_{1}-\alpha_{1}^{\prime}}E^{\prime}=q^{\nu}\bar{E}\) by Eqs. (4.5) and (4.10). Hence, we have confirmed the assumption of Theorem 3.1, and we obtain that the function \(g(x)\) satisfies Eq. (4.9). In Theorem 4.1, we can replace the Jackson integral in Eq. (4.8) with \[g(x)=x^{-\alpha_{1}}\int_{0}^{\xi\infty}s^{-(h_{1}^{\prime}+h_{2}^{\prime}-l_ {1}^{\prime}-l_{2}^{\prime}-\alpha_{1}^{\prime}-\alpha_{2}^{\prime}+\beta^{ \prime}+2)/2}h(s)P^{(2)}_{\mu,\mu_{0}}(x,s)\,d_{q}s. \tag{4.18}\] Namely, under the assumption of Theorem 4.1, the function \(g(x)\) in Eq. 
(4.18) converges and it satisfies \[A^{(4)}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)g(x)=Eg(x)+(1-q) (g_{2}(x)-g_{1}(x)), \tag{4.19}\] where \[g_{1}(x)=C_{1}x^{-\alpha_{1}+\chi+1}\xi^{-\chi-1}\frac{\vartheta_ {q}(q^{-\mu_{0}+1}x/\xi)}{\vartheta_{q}(q^{-\mu_{0}-\chi}x/\xi)}q^{\mu_{0}+ \alpha_{1}+h_{1}^{\prime}+h_{2}^{\prime}+\chi}(q^{\beta^{\prime}}-1)t_{1}t_{2},\] \[g_{2}(x)=C_{2}x^{-\alpha_{1}+\chi+1}q^{\mu_{0}+\alpha_{1}}(q^{ \alpha_{2}^{\prime}-\alpha_{1}^{\prime}}-1),\] \[\chi=(l_{1}^{\prime}+l_{2}^{\prime}-h_{1}^{\prime}-h_{2}^{\prime }-\alpha_{1}^{\prime}+\alpha_{2}^{\prime}-\beta^{\prime})/2,\;\mu=\mu_{0}+1+\chi,\] \[E=q^{\mu_{0}+\alpha_{1}-\alpha_{1}^{\prime}}E^{\prime},\;\beta=- \beta^{\prime}-\chi,\;\alpha_{2}=\alpha_{1}-\alpha_{1}^{\prime}+\alpha_{2}^{ \prime}-\chi,\] \[l_{i}=l_{i}^{\prime}+\mu_{0},\;h_{i}=h_{i}^{\prime}+\mu_{0}+ \chi,\;(i=1,2). \tag{4.20}\] This is proved by rewriting the proof of Theorem 4.1 while replacing \(P^{(2)}_{\mu,\mu_{0}}(x,s)\) with \(P^{(1)}_{\mu,\mu_{0}}(x,s)\) by using Eq. (2.4). We specialize the parameters in Theorem 4.1 to reproduce Theorem 5.4 in [18]. **Corollary 4.2**.: _Assume that \(h_{1}^{\prime}+h_{2}^{\prime}-l_{1}^{\prime}-l_{2}^{\prime}-\alpha_{1}^{\prime }-\alpha_{2}^{\prime}+\beta^{\prime}+2=0\) and the function \(h(s)\) satisfies_ \[A^{(4)}(x;h_{1}^{\prime},h_{2}^{\prime},l_{1}^{\prime},l_{2}^{\prime},\alpha_{ 1}^{\prime},\alpha_{2}^{\prime},\beta^{\prime})h(s)=E^{\prime}h(s) \tag{4.21}\] _and_ \[\lim_{L\to+\infty}h(s)\big{|}_{s=q^{L}\xi}=0,\;\lim_{K\to-\infty}\frac{h(s)}{s ^{-\alpha_{1}^{\prime}}}\big{|}_{s=q^{K}\xi}=0 \tag{4.22}\] _for some constants \(E^{\prime}\). Then the Jackson integral_ \[g(x)=\int_{0}^{\xi\infty}h(s)P^{(1)}_{-\alpha_{1}^{\prime},0}(x,s)\,d_{q}s= \int_{0}^{\xi\infty}h(s)\frac{(q^{-\alpha_{1}^{\prime}}s/x;q)_{\infty}}{(s/x; q)_{\infty}}\,d_{q}s \tag{4.23}\] _converges and it satisfies_ \[A^{(4)}(x;h_{1}^{\prime}+1-\alpha_{1}^{\prime},h_{2}^{\prime}+1-\alpha_{1}^{ \prime},l_{1}^{\prime},l_{2}^{\prime},0,\alpha_{2}^{\prime}-1,\beta^{\prime}+ 1-\alpha_{1}^{\prime})g(x)=q^{-\alpha_{1}^{\prime}}E^{\prime}g(x). \tag{4.24}\] Proof.: Set \(\mu_{0}=0\) and \(\alpha_{1}=0\) in Theorem 4.1. It follows from \(h_{1}^{\prime}+h_{2}^{\prime}-l_{1}^{\prime}-l_{2}^{\prime}-\alpha_{1}^{\prime}- \alpha_{2}^{\prime}+\beta^{\prime}+2=0\) that \(\chi=1-\alpha_{1}^{\prime}\), and we obtain the corollary from Theorem 4.1. We investigate a \(q\)-integral transformation for the variant of the \(q\)-Heun equation of degree three. The operator \(A^{(3)}(x;h_{1},h_{2},h_{3},l_{1},l_{2},l_{3},\alpha,\beta)\) in Eq. (2.14) was used to write down the variant of the \(q\)-Heun equation of degree three. In Theorem 2.2, we obtained the identity \[A^{(3)}(x;h_{1},h_{2},h_{3},l_{1},l_{2},l_{3},\alpha,\beta)x^{- \alpha}s^{\chi+1-\tilde{\alpha}}P_{\mu,\mu_{0}}(x,s)\] \[=q^{\nu}A^{(3)}(s;\tilde{h}_{1},\tilde{h}_{2},\tilde{h}_{3}, \tilde{l}_{1},\tilde{l}_{2},\tilde{l}_{3},\tilde{\alpha},\tilde{\beta})x^{- \alpha}s^{\chi+1-\tilde{\alpha}}P_{\mu,\mu_{0}}(x,s), \tag{4.25}\] where the parameters satisfy Eq. (2.19). We now apply Theorem 3.1 for the variant of the \(q\)-Heun equation of degree three. 
Let \(E\) and \(\tilde{E}\) be constants such that \(E=q^{\nu}\tilde{E}\) and write \[a(x)T_{x}^{-1}+b(x)+c(x)T_{x}=A^{(3)}(x;h_{1},h_{2},h_{3},l_{1}, l_{2},l_{3},\alpha,\beta)-E,\] \[\tilde{a}(s)T_{s}^{-1}+\tilde{b}(s)+\tilde{c}(s)T_{s}=q^{\nu}\{A ^{(3)}(s;\tilde{h}_{1},\tilde{h}_{2},\tilde{h}_{3},\tilde{l}_{1},\tilde{l}_{2 },\tilde{l}_{3},\tilde{\alpha},\tilde{\beta})-\tilde{E}\}. \tag{4.26}\] Eq. (4.25) gives a realization of Eq. (3.2) by setting \(\Phi(x,s)=x^{-\alpha}s^{\chi+1-\tilde{\alpha}}P_{\mu,\mu_{0}}(x,s)\). In this situation, the coefficients of the equation \[\{q\tilde{a}(qs)T_{s}+\tilde{b}(s)+q^{-1}\tilde{c}(s/q)T_{s}^{-1}\}h(s)=0 \tag{4.27}\] are described as \[q\tilde{a}(qs)=q^{\nu+3}s^{-1}(s-q^{\tilde{h}_{1}-1/2}t_{1})(s- q^{\tilde{h}_{2}-1/2}t_{2})(s-q^{\tilde{h}_{3}-1/2}t_{3}),\] \[q^{-1}\tilde{c}(s/q)=q^{\nu+2\tilde{\alpha}-2}s^{-1}(s-q^{\tilde {l}_{1}+1/2}t_{1})(s-q^{\tilde{l}_{2}+1/2}t_{2})(s-q^{\tilde{l}_{3}+1/2}t_{3}),\] \[\tilde{b}(s)=q^{\nu+\tilde{\alpha}+1/2}[-(q^{1/2}+q^{-1/2})s^{2}+ \sum_{n=1}^{3}(q^{\tilde{h}_{n}}+q^{\tilde{l}_{n}})t_{n}s\] \[\qquad+q^{(\tilde{l}_{1}+\tilde{l}_{2}+\tilde{l}_{3}+\tilde{h}_{ 1}+\tilde{h}_{2}+\tilde{h}_{3})/2}(q^{\tilde{\beta}/2}+q^{-\tilde{\beta}/2})t_ {1}t_{2}t_{3}s^{-1}]-q^{\nu}\tilde{E}. \tag{4.28}\] Eq. (4.27) is equivalent to the equation \(A^{(3)}(x;h_{1}^{\prime},h_{2}^{\prime},h_{3}^{\prime},l_{1}^{\prime},l_{2}^{ \prime},l_{3}^{\prime},\alpha^{\prime},\beta^{\prime})h(s)=E^{\prime}h(s)\), where \[l_{i}^{\prime}=\tilde{h}_{i},\;h_{i}^{\prime}=\tilde{l}_{i}\;(i=1,2,3),\; \alpha^{\prime}=2-\tilde{\alpha},\;\beta^{\prime}=\tilde{\beta},\;E^{\prime}= q^{2-2\tilde{\alpha}}\tilde{E}. \tag{4.29}\] Note that the exponents of Eq. (4.27) about \(s=\infty\) are \(\{\alpha^{\prime},\alpha^{\prime}+1\}\) and the exponents about \(s=0\) are \(\{\lambda_{+}^{\prime},\lambda_{-}^{\prime}\}\), where \(\lambda_{\pm}^{\prime}=(h_{1}^{\prime}+h_{2}^{\prime}+h_{3}^{\prime}-l_{1}^{ \prime}-l_{2}^{\prime}-l_{3}^{\prime}-2\alpha^{\prime}\pm\beta^{\prime}+2)/2\). The singular point \(s=\infty\) is non-logarithmic. Then we obtain the following theorem from Theorem 3.1. **Theorem 4.3**.: _Assume that the function \(h(s)\) satisfies_ \[A^{(3)}(x;h_{1}^{\prime},h_{2}^{\prime},h_{3}^{\prime},l_{1}^{\prime},l_{2}^{ \prime},l_{3}^{\prime},\alpha^{\prime},\beta^{\prime})h(s)=E^{\prime}h(s) \tag{4.30}\] _and_ \[\lim_{L\to+\infty}\frac{h(s)}{s^{(\tilde{h}_{1}^{\prime}+h_{2}^{\prime}+h_{3}^ {\prime}-l_{1}^{\prime}-l_{2}^{\prime}-l_{3}^{\prime}-2\alpha^{\prime}+\beta^{ \prime}+2)/2}}\big{|}_{s=q^{L}\xi}=0,\;\lim_{K\to-\infty}\frac{h(s)}{s^{-\alpha^ {\prime}-1}}\big{|}_{s=q^{K}\xi}=0 \tag{4.31}\] _for some constant \(E^{\prime}\). Then the Jackson integral_ \[g(x)=x^{-\alpha}\int_{0}^{\xi\infty}s^{-(h_{1}^{\prime}+h_{2}^{\prime}+h_{3}^{ \prime}-l_{1}^{\prime}-l_{2}^{\prime}-l_{3}^{\prime}-2\alpha^{\prime}+\beta^{ \prime}+2)/2}h(s)P_{\mu,\mu_{0}}^{(1)}(x,s)\,d_{q}s \tag{4.32}\] _converges and it satisfies_ \[A^{(3)}(x;h_{1},h_{2},h_{3},l_{1},l_{2},l_{3},\alpha,\beta)g(x)=Eg(x), \tag{4.33}\] _where_ \[\chi=(l_{1}^{\prime}+l_{2}^{\prime}+l_{3}^{\prime}-h_{1}^{\prime}-h_{2}^{ \prime}-h_{3}^{\prime}-\beta^{\prime})/2,\;E=q^{2\mu_{0}+\alpha-\alpha^{ \prime}+\chi}E^{\prime},\;\mu=\mu_{0}+1+\chi, \tag{4.34}\] \[\beta=-\beta^{\prime}-\chi,\;l_{i}=l_{i}^{\prime}+\mu_{0},\;h_{i}=h_{i}^{ \prime}+\mu_{0}+\chi,\;(i=1,2,3).\] We investigate a \(q\)-integral transformation for the variant of the \(q\)-Heun equation of degree four. 
The operator \(A^{(2)}(x;h_{1},h_{2},h_{3},h_{4},l_{1},l_{2},l_{3},l_{4},\alpha)\) in Eq. (2.15) was used to write down the variant of the \(q\)-Heun equation of degree four. In Theorem 2.3, we obtained the identity \[A^{(2)}(x;h_{1},h_{2},h_{3},h_{4},l_{1},l_{2},l_{3},l_{4},\alpha )x^{-\alpha}s^{\chi+1-\tilde{\alpha}}P_{\mu,\mu_{0}}(x,s)\] \[=q^{\nu}A^{(2)}(s;\tilde{h}_{1},\tilde{h}_{2},\tilde{h}_{3}, \tilde{h}_{4},\tilde{l}_{1},\tilde{l}_{2},\tilde{l}_{3},\tilde{l}_{4},\tilde{ \alpha})\}x^{-\alpha}s^{\chi+1-\tilde{\alpha}}P_{\mu,\mu_{0}}(x,s), \tag{4.35}\] where the parameters satisfy Eq. (2.22). We now apply Theorem 3.1 for the variant of the \(q\)-Heun equation of degree four. Let \(E\) and \(\tilde{E}\) be constants such that \(E=q^{\nu}\tilde{E}\) and write \[a(x)T_{x}^{-1}+b(x)+c(x)T_{x}=A^{(2)}(x;h_{1},h_{2},h_{3},h_{4}, l_{1},l_{2},l_{3},l_{4},\alpha)-E,\] \[\tilde{a}(s)T_{s}^{-1}+\tilde{b}(s)+\tilde{c}(s)T_{s}=q^{\nu}\{A ^{(2)}(s;\tilde{h}_{1},\tilde{h}_{2},\tilde{h}_{3},\tilde{h}_{4},\tilde{l}_{1 },\tilde{l}_{2},\tilde{l}_{3},\tilde{l}_{4},\tilde{\alpha})-\tilde{E}\}. \tag{4.36}\] Eq. (4.35) gives a realization of Eq. (3.2) by setting \(\Phi(x,s)=x^{-\alpha}s^{\chi+1-\tilde{\alpha}}P_{\mu,\mu_{0}}(x,s)\). In this situation, the coefficients of the equation \[\{q\tilde{a}(qs)T_{s}+\tilde{b}(s)+q^{-1}\tilde{c}(s/q)T_{s}^{-1}\}h(s)=0 \tag{4.37}\] are described as \[q\tilde{a}(qs)=q^{\nu+3}s^{-2}(s-q^{\tilde{h}_{1}-1/2}t_{1})(s-q^{ \tilde{h}_{2}-1/2}t_{2})(s-q^{\tilde{h}_{3}-1/2}t_{3})(s-q^{\tilde{h}_{4}-1/2 }t_{4}),\] \[q^{-1}\tilde{c}(s/q)=q^{\nu+2\tilde{\alpha}-2}s^{-2}(s-q^{\tilde {l}_{1}+1/2}t_{1})(s-q^{\tilde{l}_{2}+1/2}t_{2})(s-q^{\tilde{l}_{3}+1/2}t_{3} )(s-q^{\tilde{l}_{4}+1/2}t_{4}),\] \[\tilde{b}(s)=q^{\nu+\tilde{\alpha}+1/2}\Big{[}-(q^{1/2}+q^{-1/2} )s^{2}+\sum_{n=1}^{4}(q^{\tilde{h}_{n}}+q^{\tilde{l}_{n}})t_{n}s\] \[\quad+\prod_{n=1}^{4}q^{(\tilde{h}_{n}+\tilde{l}_{n})/2}t_{n}\cdot \Big{\{}-(q^{1/2}+q^{-1/2})s^{-2}+\sum_{n=1}^{4}\Big{(}\frac{1}{q^{\tilde{h}_ {n}}t_{n}}+\frac{1}{q^{\tilde{l}_{n}}t_{n}}\Big{)}s^{-1}\Big{\}}\Big{]}-q^{\nu} \tilde{E}. \tag{4.38}\] Eq. (4.37) is equivalent to the equation \(A^{(2)}(x;h_{1}^{\prime},h_{2}^{\prime},h_{3}^{\prime},h_{4}^{\prime},l_{1}^{ \prime},l_{2}^{\prime},l_{3}^{\prime},l_{4}^{\prime},\alpha^{\prime})h(s)=E^{ \prime}h(s)\), where \[l_{i}^{\prime}=\tilde{h}_{i},\;h_{i}^{\prime}=\tilde{l}_{i},\;(i=1,2,3,4),\; \alpha^{\prime}=2-\tilde{\alpha},\;E^{\prime}=q^{2-2\tilde{\alpha}}\tilde{E}. \tag{4.39}\] Note that the exponents of Eq. (4.37) about \(s=\infty\) are \(\{\alpha^{\prime},\alpha^{\prime}+1\}\) and the exponents about \(s=0\) are \(\{\lambda^{\prime},\lambda^{\prime}+1\}\), where \(\lambda^{\prime}=(h_{1}^{\prime}+h_{2}^{\prime}+h_{3}^{\prime}+h_{4}^{\prime}- l_{1}^{\prime}-l_{2}^{\prime}-l_{3}^{\prime}-l_{4}^{\prime}-2\alpha^{\prime}+2)/2\). The singular points \(s=0\) and \(s=\infty\) are non-logarithmic. Then we obtain the following theorem from Theorem 3.1. 
**Theorem 4.4**.: _Assume that the function \(h(s)\) satisfies_ \[A^{(2)}(x;h_{1}^{\prime},h_{2}^{\prime},h_{3}^{\prime},h_{4}^{\prime},l_{1}^{\prime},l_{2}^{\prime},l_{3}^{\prime},l_{4}^{\prime},\alpha^{\prime})h(s)=E^{\prime}h(s) \tag{4.40}\] _and_ \[\lim_{L\to+\infty}\frac{h(s)}{s^{(h_{1}^{\prime}+h_{2}^{\prime}+h_{3}^{\prime}+h_{4}^{\prime}-l_{1}^{\prime}-l_{2}^{\prime}-l_{3}^{\prime}-l_{4}^{\prime}-2\alpha^{\prime}+4)/2}}\big|_{s=q^{L}\xi}=0,\ \lim_{K\to-\infty}\frac{h(s)}{s^{-\alpha^{\prime}-1}}\big|_{s=q^{K}\xi}=0 \tag{4.41}\] _for some constant \(E^{\prime}\). Then the Jackson integral_ \[g(x)=x^{-\alpha}\int_{0}^{\xi\infty}s^{-(h_{1}^{\prime}+h_{2}^{\prime}+h_{3}^{\prime}+h_{4}^{\prime}-l_{1}^{\prime}-l_{2}^{\prime}-l_{3}^{\prime}-l_{4}^{\prime}-2\alpha^{\prime}+2)/2}h(s)P_{\mu,\mu_{0}}^{(1)}(x,s)\,d_{q}s \tag{4.42}\] _converges and it satisfies_ \[A^{(2)}(x;h_{1},h_{2},h_{3},h_{4},l_{1},l_{2},l_{3},l_{4},\alpha)g(x)=Eg(x), \tag{4.43}\] _where_ \[\chi=(l_{1}^{\prime}+l_{2}^{\prime}+l_{3}^{\prime}+l_{4}^{\prime}-h_{1}^{\prime}-h_{2}^{\prime}-h_{3}^{\prime}-h_{4}^{\prime})/2,\ E=q^{2\mu_{0}+\alpha-\alpha^{\prime}+\chi}E^{\prime}, \tag{4.44}\] \[\mu=\mu_{0}+1+\chi,\ l_{i}=l_{i}^{\prime}+\mu_{0},\ h_{i}=h_{i}^{\prime}+\mu_{0}+\chi,\ (i=1,2,3,4).\]

## 5. Special solutions of the \(q\)-Heun equation

We investigate special solutions of the \(q\)-Heun equation. Recall that the \(q\)-Heun equation is written as \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)g(x)=Eg(x), \tag{5.1}\] where \(E\) is an arbitrary complex number and \(A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)\) is written as \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)\] \[=x^{-1}(x-q^{h_{1}+1/2}t_{1})(x-q^{h_{2}+1/2}t_{2})T_{x}^{-1}+q^{\alpha_{1}+\alpha_{2}}x^{-1}(x-q^{l_{1}-1/2}t_{1})(x-q^{l_{2}-1/2}t_{2})T_{x}\] \[-\{(q^{\alpha_{1}}+q^{\alpha_{2}})x+q^{(h_{1}+h_{2}+l_{1}+l_{2}+\alpha_{1}+\alpha_{2})/2}(q^{\beta/2}+q^{-\beta/2})t_{1}t_{2}x^{-1}\}. \tag{5.2}\] To investigate monomial solutions of the \(q\)-Heun equation, we look for monomial eigenfunctions of the operator \(A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)\).

**Proposition 5.1**.: _Set \((i,i^{\prime})=(1,2)\) or \((2,1)\). If \(\pm\beta=h_{1}+h_{2}-l_{1}-l_{2}+\alpha_{i}-\alpha_{i^{\prime}}+2\), then_ \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)x^{-\alpha_{i}}\] \[=-(q^{\alpha_{i}+h_{1}+1/2}t_{1}+q^{\alpha_{i}+h_{2}+1/2}t_{2}+q^{\alpha_{i^{\prime}}+l_{1}-1/2}t_{1}+q^{\alpha_{i^{\prime}}+l_{2}-1/2}t_{2})x^{-\alpha_{i}}. \tag{5.3}\]

Proof.: Let us find eigenfunctions of the operator \(A^{\langle 4\rangle}\) in the form \(x^{\nu}\). We have \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)x^{\nu}\] \[=(q^{-\nu}+q^{\alpha_{1}+\alpha_{2}+\nu}-q^{\alpha_{1}}-q^{\alpha_{2}})x^{\nu+1}\] \[\quad+(q^{h_{1}+h_{2}+1-\nu}+q^{\alpha_{1}+\alpha_{2}+l_{1}+l_{2}-1+\nu}\] \[\quad\quad-q^{(h_{1}+h_{2}+l_{1}+l_{2}+\alpha_{1}+\alpha_{2}+\beta)/2}-q^{(h_{1}+h_{2}+l_{1}+l_{2}+\alpha_{1}+\alpha_{2}-\beta)/2})t_{1}t_{2}x^{\nu-1}\] \[\quad-(q^{-\nu+h_{1}+1/2}t_{1}+q^{-\nu+h_{2}+1/2}t_{2}+q^{\alpha_{1}+\alpha_{2}+\nu+l_{1}-1/2}t_{1}+q^{\alpha_{1}+\alpha_{2}+\nu+l_{2}-1/2}t_{2})x^{\nu}. \tag{5.4}\] If \(\nu=-\alpha_{1}\) or \(\nu=-\alpha_{2}\), then the coefficient of \(x^{\nu+1}\) vanishes. In the case \(\nu=-\alpha_{1}\) (resp.
\(\nu=-\alpha_{2}\)), the condition \(\pm\beta=h_{1}+h_{2}-l_{1}-l_{2}+\alpha_{1}-\alpha_{2}+2\) (resp. \(\pm\beta=h_{1}+h_{2}-l_{1}-l_{2}-\alpha_{1}+\alpha_{2}+2\)) is sufficient for elimination of the term \(x^{\nu-1}\). Then we obtain the expression in the proposition. Note that we have \[x^{-1}[(x-q^{h_{1}+1/2}t_{1})(x-q^{h_{2}+1/2}t_{2})-q^{\alpha_{i ^{\prime}}}(x-q^{l_{1}-1/2}t_{1})(x-q^{l_{2}-1/2}t_{2})T_{x}][T_{x}^{-1}-q^{ \alpha_{i}}]\] \[=A^{(4)}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)\] \[\qquad+q^{\alpha_{i}+h_{1}+1/2}t_{1}+q^{\alpha_{i}+h_{2}+1/2}t_{2 }+q^{\alpha_{i^{\prime}}+l_{1}-1/2}t_{1}+q^{\alpha_{i^{\prime}}+l_{2}-1/2}t_{2}, \tag{5.5}\] if \(\pm\beta=h_{1}+h_{2}-l_{1}-l_{2}+\alpha_{i}-\alpha_{i^{\prime}}+2\), and the function \(x^{-\alpha_{i}}\) satisfies \([T_{x}^{-1}-q^{\alpha_{i}}]x^{-\alpha_{i}}=0\). On a gauge transformation, the following proposition is shown readily. **Proposition 5.2**.: _If the function \(f(x)\) satisfies_ \[x^{-1}(x-q^{l_{1}+1/2}t_{1})(x-q^{h_{2}+1/2}t_{2})f(x/q)+q^{\alpha _{1}+\alpha_{2}}x^{-1}(x-q^{h_{1}-1/2}t_{1})(x-q^{l_{2}-1/2}t_{2})f(qx)\] \[-\{(q^{\alpha_{1}}+q^{\alpha_{2}})x+q^{(h_{1}+h_{2}+l_{1}+l_{2}+ \alpha_{1}+\alpha_{2})/2}(q^{\beta/2}+q^{-\beta/2})t_{1}t_{2}x^{-1}\}f(x)=Ef(x)\] _and_ \[g_{1}(x)=\frac{(q^{h_{1}+1/2}t_{1}/x;q)_{\infty}}{(q^{l_{1}+1/2}t_{1}/x;q)_{ \infty}}f(x),\;g_{2}(x)=x^{h_{1}-l_{1}}\frac{(x/(q^{l_{1}-1/2}t_{1});q)_{ \infty}}{(x/(q^{h_{1}-1/2}t_{1});q)_{\infty}}f(x),\] _then the functions \(g_{1}(x)\) and \(g_{2}(x)\) satisfy_ \[x^{-1}(x-q^{h_{1}+1/2}t_{1})(x-q^{h_{2}+1/2}t_{2})g(x/q)+q^{\alpha _{1}+\alpha_{2}}x^{-1}(x-q^{l_{1}-1/2}t_{1})(x-q^{l_{2}-1/2}t_{2})g(qx)\] \[-\{(q^{\alpha_{1}}+q^{\alpha_{2}})x+q^{(h_{1}+h_{2}+l_{1}+l_{2} +\alpha_{1}+\alpha_{2})/2}(q^{\beta/2}+q^{-\beta/2})t_{1}t_{2}x^{-1}\}g(x)=Eg( x).\] By applying gauge transformations to Proposition 5.1, we have the following solutions of the \(q\)-Heun equation. **Proposition 5.3**.: _Set \((i,i^{\prime})=(1,2)\) or \((2,1)\). (i) If \(\pm\beta=-h_{1}+h_{2}+l_{1}-l_{2}+\alpha_{i}-\alpha_{i^{\prime}}+2\), then_ \[A^{(4)}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)x^ {-\alpha_{i}}\frac{(q^{h_{1}+1/2}t_{1}/x;q)_{\infty}}{(q^{l_{1}+1/2}t_{1}/x;q)_ {\infty}}\] \[=-(q^{\alpha_{i}+l_{1}+1/2}t_{1}+q^{\alpha_{i}+h_{2}+1/2}t_{2}+q^ {\alpha_{i^{\prime}}+h_{1}-1/2}t_{1}+q^{\alpha_{i^{\prime}}+l_{2}-1/2}t_{2})\] \[\qquad\cdot x^{-\alpha_{i}}\frac{(q^{h_{1}+1/2}t_{1}/x;q)_{\infty }}{(q^{l_{1}+1/2}t_{1}/x;q)_{\infty}},\] \[A^{(4)}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)x ^{-\alpha_{i}+h_{1}-l_{1}}\frac{(x/(q^{l_{1}-1/2}t_{1});q)_{\infty}}{(x/(q^{h _{1}-1/2}t_{1});q)_{\infty}}\] \[=-(q^{\alpha_{i}+l_{1}+1/2}t_{1}+q^{\alpha_{i}+h_{2}+1/2}t_{2}+q^ {\alpha_{i^{\prime}}+h_{1}-1/2}t_{1}+q^{\alpha_{i^{\prime}}+l_{2}-1/2}t_{2})\] \[\qquad\cdot x^{-\alpha_{i}+h_{1}-l_{1}}\frac{(x/(q^{l_{1}-1/2}t_{ 1});q)_{\infty}}{(x/(q^{h_{1}-1/2}t_{1});q)_{\infty}}. 
\tag{5.6}\] _(ii) If \(\pm\beta=-h_{1}-h_{2}+l_{1}+l_{2}+\alpha_{i}-\alpha_{i^{\prime}}+2\), then_ \[A^{(4)}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)x^ {-\alpha_{i}}\frac{(q^{h_{1}+1/2}t_{1}/x,q^{h_{2}+1/2}t_{2}/x;q)_{\infty}}{(q^ {l_{1}+1/2}t_{1}/x,q^{l_{2}+1/2}t_{2}/x;q)_{\infty}}\] \[=-(q^{\alpha_{i}+l_{1}+1/2}t_{1}+q^{\alpha_{i}+l_{2}+1/2}t_{2}+q^ {\alpha_{i^{\prime}}+h_{1}-1/2}t_{1}+q^{\alpha_{i^{\prime}}+h_{2}-1/2}t_{2})\] \[\qquad\cdot x^{-\alpha_{i}}\frac{(q^{h_{1}+1/2}t_{1}/x,q^{h_{2}+1 /2}t_{2}/x;q)_{\infty}}{(q^{l_{1}+1/2}t_{1}/x,q^{l_{2}+1/2}t_{2}/x;q)_{\infty}},\] \[A^{(4)}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)x^ {-\alpha_{i}+h_{1}+h_{2}-l_{1}-l_{2}}\frac{(x/(q^{l_{1}-1/2}t_{1}),x/(q^{l_{2} -1/2}t_{2});q)_{\infty}}{(x/(q^{h_{1}-1/2}t_{1}),x/(q^{h_{2}-1/2}t_{2});q)_{ \infty}}\] \[=-(q^{\alpha_{i}+l_{1}+1/2}t_{1}+q^{\alpha_{i}+l_{2}+1/2}t_{2}+q^ {\alpha_{i^{\prime}}+h_{1}-1/2}t_{1}+q^{\alpha_{i^{\prime}}+h_{2}-1/2}t_{2})\] \[\qquad\cdot x^{-\alpha_{i}+h_{1}+h_{2}-l_{1}-l_{2}}\frac{(x/(q^{l_ {1}-1/2}t_{1}),x/(q^{l_{2}-1/2}t_{2});q)_{\infty}}{(x/(q^{h_{1}-1/2}t_{1}),x/(q^ {h_{2}-1/2}t_{2});q)_{\infty}}. \tag{5.7}\] Note that, if \(\pm\beta=-h_{1}+h_{2}+l_{1}-l_{2}+\alpha_{i}-\alpha_{i^{\prime}}+2\), then \[x^{-1}[(x-q^{h_{2}+1/2}t_{2})-q^{\alpha_{i^{\prime}}-1}(x-q^{l_{ 2}-1/2}t_{2})T_{x}][(x-q^{h_{1}+1/2}t_{1})T_{x}^{-1}-q^{\alpha_{i}}(x-q^{l_{1} +1/2}t_{1})]\] \[=A^{(4)}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)\] \[\qquad+q^{\alpha_{i}+l_{1}+1/2}t_{1}+q^{\alpha_{i}+h_{2}+1/2}t_{2} +q^{\alpha_{i^{\prime}}+h_{1}-1/2}t_{1}+q^{\alpha_{i^{\prime}}+l_{2}-1/2}t_{2}, \tag{5.8}\] and, if \(\pm\beta=-h_{1}-h_{2}+l_{1}+l_{2}+\alpha_{i}-\alpha_{i^{\prime}}+2\), then \[x^{-1}[1-q^{\alpha_{i^{\prime}}-2}T_{x}][(x-q^{h_{1}+1/2}t_{1})(x-q^{h_{2}+1 /2}t_{2})T_{x}^{-1}-q^{\alpha_{i}}(x-q^{l_{1}+1/2}t_{1})(x-q^{l_{2}+1/2}t_{2})]\] \[=A^{(4)}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)\] \[\qquad+q^{\alpha_{i}+l_{1}+1/2}t_{1}+q^{\alpha_{i}+l_{2}+1/2}t_{2} +q^{\alpha_{i^{\prime}}+h_{1}-1/2}t_{1}+q^{\alpha_{i^{\prime}}+h_{2}-1/2}t_{2}. \tag{5.9}\] Next, we apply the \(q\)-integral transformations in Theorem 4.1 to some solutions of the \(q\)-Heun equation given in Propositions 5.3 and 5.1. Assume that the parameters in \(A^{\langle 4\rangle}(x;h_{1}^{\prime},h_{2}^{\prime},l_{1}^{\prime},l_{2}^{ \prime},\alpha_{1}^{\prime},\alpha_{2}^{\prime},\beta^{\prime})\) satisfy \[\beta^{\prime}=-h_{1}^{\prime}+h_{2}^{\prime}+l_{1}^{\prime}-l_{2}^{\prime}- \alpha_{1}^{\prime}+\alpha_{2}^{\prime}+2. \tag{5.10}\] Then it follows from Proposition 5.3 (i) that \[A^{\langle 4\rangle}(x;h_{1}^{\prime},h_{2}^{\prime},l_{1}^{ \prime},l_{2}^{\prime},\alpha_{1}^{\prime},\alpha_{2}^{\prime},\beta^{\prime} )x^{-\alpha_{2}^{\prime}+h_{1}^{\prime}-l_{1}^{\prime}}\frac{(x/(q^{l_{1}^{ \prime}-1/2}t_{1});q)_{\infty}}{(x/(q^{h_{1}^{\prime}-1/2}t_{1});q)_{\infty}}\] \[=-(q^{\alpha_{2}^{\prime}+l_{1}^{\prime}+1/2}t_{1}+q^{\alpha_{2} ^{\prime}+h_{2}^{\prime}+1/2}t_{2}+q^{\alpha_{1}^{\prime}+h_{1}^{\prime}-1/2 }t_{1}+q^{\alpha_{1}^{\prime}+l_{2}^{\prime}-1/2}t_{2})\] \[\qquad\cdot x^{-\alpha_{2}^{\prime}+h_{1}^{\prime}-l_{1}^{\prime }}\frac{(x/(q^{l_{1}^{\prime}-1/2}t_{1});q)_{\infty}}{(x/(q^{h_{1}^{\prime}-1/ 2}t_{1});q)_{\infty}}. 
\tag{5.11}\] We apply Theorem 4.1 in the case \[E^{\prime}=-(q^{\alpha_{2}^{\prime}+l_{1}^{\prime}+1/2}t_{1}+q^ {\alpha_{2}^{\prime}+h_{2}^{\prime}+1/2}t_{2}+q^{\alpha_{1}^{\prime}+h_{1}^{ \prime}-1/2}t_{1}+q^{\alpha_{1}^{\prime}+l_{2}^{\prime}-1/2}t_{2}),\] \[h(s)=s^{-\alpha_{2}^{\prime}+h_{1}^{\prime}-l_{1}^{\prime}}\frac {(s/(q^{l_{1}^{\prime}-1/2}t_{1});q)_{\infty}}{(s/(q^{h_{1}^{\prime}-1/2}t_{1 });q)_{\infty}},\;\mu_{0}=0. \tag{5.12}\] It follows from Eq. (5.10) that the parameters in Theorem 4.1 satisfy \[\chi=-h_{2}^{\prime}+l_{2}^{\prime}-1,\;\mu=-h_{2}^{\prime}+l_{2 }^{\prime},\;\alpha_{2}=\alpha_{1}-\alpha_{1}^{\prime}+\alpha_{2}^{\prime}+h_ {2}^{\prime}-l_{2}^{\prime}+1,\] \[\beta=h_{1}^{\prime}-l_{1}^{\prime}+\alpha_{1}^{\prime}-\alpha_{ 2}^{\prime}-1,\;l_{1}=l_{1}^{\prime},\;l_{2}=l_{2}^{\prime},\;h_{1}=h_{1}^{ \prime}-h_{2}^{\prime}+l_{2}^{\prime}-1,\;h_{2}=l_{2}^{\prime}-1,\] \[E=-q^{\alpha_{1}}(q^{\alpha_{2}^{\prime}-\alpha_{1}^{\prime}+l_{ 1}^{\prime}+1/2}t_{1}+q^{\alpha_{2}^{\prime}-\alpha_{1}^{\prime}+h_{2}^{ \prime}+1/2}t_{2}+q^{h_{1}^{\prime}-1/2}t_{1}+q^{l_{2}^{\prime}-1/2}t_{2}). \tag{5.13}\] In particular, we have \(h_{2}=l_{2}-1\). The Jackson integral \(g(x)\) in Eq. (4.8) is written as \[g(x) =x^{-\alpha_{1}}\int_{0}^{\xi\infty}s^{-\beta^{\prime}}\frac{(s/ (q^{l_{1}^{\prime}-1/2}t_{1});q)_{\infty}}{(s/(q^{h_{1}^{\prime}-1/2}t_{1});q) _{\infty}}\frac{(q^{-h_{2}^{\prime}+l_{2}^{\prime}}s/x;q)_{\infty}}{(s/x;q)_{ \infty}}\,d_{q}s\] \[=(1-q)\xi^{-\beta^{\prime}+1}x^{-\alpha_{1}}\frac{(q^{-l_{1}^{ \prime}+1/2}\xi/t_{1},q^{-h_{2}^{\prime}+l_{2}^{\prime}}\xi/x;q)_{\infty}}{(q^ {-h_{1}^{\prime}+1/2}\xi/t_{1},\xi/x;q)_{\infty}}\] \[\qquad\cdot\sum_{n=-\infty}^{\infty}\frac{(q^{-h_{1}^{\prime}+1/2 }\xi/t_{1},\xi/x;q)_{n}}{(q^{-l_{1}^{\prime}+1/2}\xi/t_{1},q^{-h_{2}^{\prime }+l_{2}^{\prime}}\xi/x;q)_{n}}q^{(-\beta^{\prime}+1)n}. \tag{5.14}\] If the parameters satisfy \(\beta^{\prime}<0\) and \(\alpha_{1}^{\prime}<\alpha_{2}^{\prime}\), then the constants \(C_{1}\) and \(C_{2}\) in Theorem 4.1 are equal to \(0\) respectively, the Jackson integral \(g(x)\) in Eq. (5.14) converges and it satisfies \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)g(x)=Eg (x), \tag{5.15}\] where the parameters satisfy Eq. (5.13). We have \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1}, \alpha_{2},\beta)+q^{\alpha_{1}}(q^{\alpha_{2}^{\prime}-\alpha_{1}^{\prime}+l_{ 1}^{\prime}+1/2}t_{1}+q^{\alpha_{2}^{\prime}-\alpha_{1}^{\prime}+h_{2}^{\prime}+ 1/2}t_{2}+q^{h_{1}^{\prime}-1/2}t_{1}+q^{l_{2}^{\prime}-1/2}t_{2})\] \[=x^{-1}(x-q^{l_{2}^{\prime}-1/2}t_{2})[(x-q^{h_{1}^{\prime}-h_{2} ^{\prime}+l_{2}^{\prime}-1/2}t_{1})T_{x}^{-1}+q^{2\alpha_{1}-\alpha_{1}^{ \prime}+\alpha_{2}^{\prime}+h_{2}^{\prime}-l_{2}^{\prime}+1}(x-q^{l_{1}^{ \prime}-1/2}t_{1})T_{x}\] \[\qquad-q^{\alpha_{1}}\{(1+q^{-\alpha_{1}^{\prime}+\alpha_{2}^{ \prime}+h_{2}^{\prime}-l_{2}^{\prime}+1})x-(q^{h_{1}^{\prime}-1/2}+q^{-\alpha_ {1}^{\prime}+\alpha_{2}^{\prime}+l_{1}^{\prime}+1/2})t_{1}\}], \tag{5.16}\] and we essentially obtain the \(q\)-hypergeometric equation. 
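Bilateral sums of the type appearing in Eq. (5.14) can be evaluated numerically. The following sketch is illustrative only: the parameter values are arbitrary, the helper names are mine, and the terms are generated through their ratios rather than through \((a;q)_{n}\) for negative \(n\), which avoids floating-point overflow. As a sanity check, the sketch verifies the classical Ramanujan \({}_{1}\psi_{1}\) summation that is recalled later in Eq. (5.31).

```python
def qpoch_inf(a, q, terms=400):
    """Truncated infinite q-Pochhammer product (a; q)_infinity."""
    p = 1.0
    for k in range(terms):
        p *= 1.0 - a * q**k
    return p

def psi11(a, b, q, z, N=200):
    """Truncated bilateral series sum_{n=-N}^{N} (a;q)_n/(b;q)_n z^n, cf. Eq. (5.31).
    Terms t_n are built from the ratio t_{n+1}/t_n = z (1 - a q^n)/(1 - b q^n)."""
    total, t = 1.0, 1.0                       # the n = 0 term equals 1
    for n in range(N):                        # n = 1, 2, ..., N
        t *= z * (1.0 - a * q**n) / (1.0 - b * q**n)
        total += t
    t = 1.0
    for n in range(0, -N, -1):                # n = -1, -2, ..., -N
        t *= (1.0 - b * q**(n - 1)) / (z * (1.0 - a * q**(n - 1)))
        total += t
    return total

q, a, b, z = 0.5, 0.3, 0.04, 0.4              # arbitrary test values with |b/a| < |z| < 1
lhs = psi11(a, b, q, z)
rhs = (qpoch_inf(q, q) * qpoch_inf(b / a, q) * qpoch_inf(a * z, q)
       * qpoch_inf(q / (a * z), q)) / (qpoch_inf(b, q) * qpoch_inf(q / a, q)
       * qpoch_inf(z, q) * qpoch_inf(b / (a * z), q))
print(lhs, rhs)                               # the two values agree to machine precision
```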
If \(\xi=q^{l^{\prime}_{1}+1/2}t_{1}\), then \[g(x)=(1-q)(q^{l^{\prime}_{1}+1/2}t_{1})^{-\beta^{\prime}+1}x^{- \alpha_{1}}\frac{(q,q^{-h^{\prime}_{2}+l^{\prime}_{2}+l^{\prime}_{1}+1/2}t_{1}/ x;q)_{\infty}}{(q^{-h^{\prime}_{1}+l^{\prime}_{1}+1},q^{l^{\prime}_{1}+1/2}t_{1}/ x;q)_{\infty}}\] \[\qquad\qquad\cdot\ _{2}\phi_{1}\Big{(}\begin{array}{c}q^{l^{ \prime}_{1}+1/2}t_{1}/x,q^{-h^{\prime}_{1}+l^{\prime}_{1}+1}\\ q^{-h^{\prime}_{2}+l^{\prime}_{2}+l^{\prime}_{1}+1/2}t_{1}/x\end{array};q,q^{- \beta^{\prime}+1}\Big{)}. \tag{5.17}\] If \(\xi=q^{h^{\prime}_{2}-l^{\prime}_{2}+1}x\), then \[g(x)=(1-q)(q^{h^{\prime}_{2}-l^{\prime}_{2}+1}x)^{-\beta^{\prime }+1}x^{-\alpha_{1}}\frac{(q^{h^{\prime}_{2}-l^{\prime}_{1}-l^{\prime}_{2}+3/2} x/t_{1},q;q)_{\infty}}{(q^{-h^{\prime}_{1}+h^{\prime}_{2}-l^{\prime}_{2}+3/2}x/t_{1},q ^{h^{\prime}_{2}-l^{\prime}_{2}+1};q)_{\infty}}\] \[\qquad\qquad\cdot\ _{2}\phi_{1}\Big{(}\begin{array}{c}q^{-h^{ \prime}_{1}+h^{\prime}_{2}-l^{\prime}_{2}+3/2}x/t_{1},q^{h^{\prime}_{2}-l^{ \prime}_{2}+1}\\ q^{h^{\prime}_{2}-l^{\prime}_{1}-l^{\prime}_{2}+3/2}x/t_{1}\end{array};q,q^{- \beta^{\prime}+1}\Big{)}. \tag{5.18}\] These results agree with the \(q\)-integral representations of the \(q\)-hypergeometric equation obtained in [1]. Assume that the parameters in \(A^{(4)}(x;h^{\prime}_{1},h^{\prime}_{2},l^{\prime}_{1},l^{\prime}_{2},\alpha^ {\prime}_{1},\alpha^{\prime}_{2},\beta^{\prime})\) satisfy \[\beta^{\prime}=-h^{\prime}_{1}-h^{\prime}_{2}+l^{\prime}_{1}+l^{\prime}_{2}- \alpha^{\prime}_{1}+\alpha^{\prime}_{2}+2. \tag{5.19}\] Then it follows from Proposition 5.3 (ii) that \[A^{(4)}(x;h^{\prime}_{1},h^{\prime}_{2},l^{\prime}_{1},l^{\prime }_{2},\alpha^{\prime}_{1},\alpha^{\prime}_{2},\beta^{\prime})x^{-\alpha^{ \prime}_{2}+h^{\prime}_{1}+h^{\prime}_{2}-l^{\prime}_{1}-l^{\prime}_{2}}\frac {(x/(q^{l^{\prime}_{1}-1/2}t_{1},x/(q^{l^{\prime}_{2}-1/2}t_{2});q)_{\infty}}{ (x/(q^{h^{\prime}_{1}-1/2}t_{1}),x/(q^{h^{\prime}_{2}-1/2}t_{2});q)_{\infty}}\] \[=-(q^{\alpha^{\prime}_{2}+l^{\prime}_{1}+1/2}t_{1}+q^{\alpha^{ \prime}_{2}+l^{\prime}_{2}+1/2}t_{2}+q^{\alpha^{\prime}_{1}+h^{\prime}_{1}-1/2 }t_{1}+q^{\alpha^{\prime}_{1}+h^{\prime}_{2}-1/2}t_{2})\] \[\qquad\qquad\cdot x^{-\alpha^{\prime}_{2}+h^{\prime}_{1}+h^{\prime }_{2}-l^{\prime}_{1}-l^{\prime}_{2}}\frac{(x/(q^{l^{\prime}_{1}-1/2}t_{1}),x/ (q^{l^{\prime}_{2}-1/2}t_{2});q)_{\infty}}{(x/(q^{h^{\prime}_{1}-1/2}t_{1}),x/ (q^{h^{\prime}_{2}-1/2}t_{2});q)_{\infty}}. \tag{5.20}\] We apply Theorem 4.1 in the case \[E^{\prime}=-(q^{\alpha^{\prime}_{2}+l^{\prime}_{1}+1/2}t_{1}+q ^{\alpha^{\prime}_{2}+l^{\prime}_{2}+1/2}t_{2}+q^{\alpha^{\prime}_{1}+h^{ \prime}_{1}-1/2}t_{1}+q^{\alpha^{\prime}_{1}+h^{\prime}_{2}-1/2}t_{2}),\] \[h(s)=s^{-\alpha^{\prime}_{2}+h^{\prime}_{1}+h^{\prime}_{2}-l^{ \prime}_{1}-l^{\prime}_{2}}\frac{(s/(q^{l^{\prime}_{1}-1/2}t_{1}),s/(q^{l^{ \prime}_{2}-1/2}t_{2});q)_{\infty}}{(s/(q^{h^{\prime}_{1}-1/2}t_{1}),s/(q^{h^{ \prime}_{2}-1/2}t_{2});q)_{\infty}},\ \mu_{0}=0. \tag{5.21}\] It follows from Eq. 
(5.19) that the parameters in Theorem 4.1 satisfy \[\chi=-\beta^{\prime}-1=(-l_{1}-l_{2}+h_{1}+h_{2}+\alpha_{1}- \alpha_{2}-1)/2,\ \mu=\chi+1,\] \[\beta=1,\ \alpha_{2}=\alpha_{1}-\alpha^{\prime}_{1}+\alpha^{\prime}_{ 2}-\chi,\ l_{1}=l^{\prime}_{1},\ l_{2}=l^{\prime}_{2},\ h_{1}=h^{\prime}_{1}+ \chi,\ h_{2}=h^{\prime}_{2}+\chi,\] \[E=-q^{\alpha_{1}}(q^{\alpha^{\prime}_{2}-\alpha^{\prime}_{1}+l^{ \prime}_{1}+1/2}t_{1}+q^{\alpha^{\prime}_{2}-\alpha^{\prime}_{1}+l^{\prime}_{2} +1/2}t_{2}+q^{h^{\prime}_{1}-1/2}t_{1}+q^{h^{\prime}_{2}-1/2}t_{2}). \tag{5.22}\] The Jackson integral \(g(x)\) in Eq. (4.8) is written as \[g(x) =x^{-\alpha_{1}}\int_{0}^{\xi\infty}\frac{(s/(q^{l_{1}^{\prime}-1/2 }t_{1}),s/(q^{l_{2}^{\prime}-1/2}t_{2});q)_{\infty}}{(s/(q^{h_{1}^{\prime}-1/2 }t_{1}),q^{h_{2}^{\prime}-1/2}t_{2});q)_{\infty}}\frac{(q^{\mu}s/x;q)_{\infty}} {(s/x;q)_{\infty}}\,d_{q}s\] \[=(1-q)\xi x^{-\alpha_{1}}\frac{(q^{-l_{1}+1/2}\xi/t_{1},q^{-l_{2} +1/2}\xi/t_{2},q^{\chi+1}\xi/x;q)_{\infty}}{(q^{-h_{1}+\chi+1/2}\xi/t_{1},q^{-h _{2}+\chi+1/2}\xi/t_{2},\xi/x;q)_{\infty}}\] \[\qquad\cdot\sum_{n=-\infty}^{\infty}\frac{(q^{-h_{1}+\chi+1/2} \xi/t_{1},q^{-h_{2}+\chi+1/2}\xi/t_{2},\xi/x;q)_{n}}{(q^{-l_{1}+1/2}\xi/t_{1},q^{-l_{2}+1/2}\xi/t_{2},q^{\chi+1}\xi/x;q)_{n}}q^{n}. \tag{5.23}\] If the parameters satisfy \(\alpha_{1}^{\prime}<\alpha_{2}^{\prime}\), then the constants \(C_{1}\) and \(C_{2}\) in Theorem 4.1 satisfy \(C_{1}=1\) and \(C_{2}=0\), the Jackson integral \(g(x)\) in Eq. (5.23) converges and it satisfies \[A^{(4)}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)g(x)=Eg(x)-(1-q) x^{-\alpha_{1}}q^{\alpha_{1}+h_{1}+h_{2}-\chi}(q^{-\chi-1}-1)t_{1}t_{2}. \tag{5.24}\] Note that the homogeneous version of Eq. (5.24) is written in the variant of the \(q\)-hypergeometric equation of degree two (see [5] for the definition), and our results agree with the results obtained in [1]. Assume that the parameters in \(A^{(4)}(x;h_{1}^{\prime},h_{2}^{\prime},l_{1}^{\prime},l_{2}^{\prime},\alpha_ {1}^{\prime},\alpha_{2}^{\prime},\beta^{\prime})\) satisfy \[\beta^{\prime}=h_{1}^{\prime}+h_{2}^{\prime}-l_{1}^{\prime}-l_{2}^{\prime}- \alpha_{1}^{\prime}+\alpha_{2}^{\prime}+2. \tag{5.25}\] Then it follows from Proposition 5.1 that \[A^{(4)}(x;h_{1}^{\prime},h_{2}^{\prime},l_{1}^{\prime},l_{2}^{ \prime},\alpha_{1}^{\prime},\alpha_{2}^{\prime},\beta^{\prime})x^{-\alpha_{2} ^{\prime}}\] \[=-(q^{\alpha_{2}^{\prime}+h_{1}^{\prime}+1/2}t_{1}+q^{\alpha_{2} ^{\prime}+h_{2}^{\prime}+1/2}t_{2}+q^{\alpha_{1}^{\prime}+l_{1}^{\prime}-1/2}t _{1}+q^{\alpha_{1}^{\prime}+l_{2}^{\prime}-1/2}t_{2})x^{-\alpha_{2}^{\prime}}. \tag{5.26}\] We apply Theorem 4.1 in the case \[E^{\prime}=-(q^{\alpha_{2}^{\prime}+h_{1}^{\prime}+1/2}t_{1}+q^{ \alpha_{2}^{\prime}+h_{2}^{\prime}+1/2}t_{2}+q^{\alpha_{1}^{\prime}+l_{1}^{ \prime}-1/2}t_{1}+q^{\alpha_{1}^{\prime}+l_{2}^{\prime}-1/2}t_{2}),\] \[h(s)=s^{-\alpha_{2}^{\prime}},\;\mu_{0}=0. \tag{5.27}\] It follows from Eq. (5.25) that the parameters in Theorem 4.1 satisfy \[\chi=-\beta^{\prime}-\alpha_{1}^{\prime}+\alpha_{2}^{\prime}+1, \;\mu=\chi+1,\;\alpha_{2}=\alpha_{1}+\beta^{\prime}-1,\] \[\beta=\alpha_{1}^{\prime}-\alpha_{2}^{\prime}-1,\;l_{1}=l_{1}^{ \prime},\;l_{2}=l_{2}^{\prime},\;h_{1}=h_{1}^{\prime}+\chi,\;h_{2}=h_{2}^{ \prime}+\chi,\] \[E=-q^{\alpha_{1}}(q^{\alpha_{2}^{\prime}-\alpha_{1}^{\prime}+h_{ 1}^{\prime}+1/2}t_{1}+q^{\alpha_{2}^{\prime}-\alpha_{1}^{\prime}+h_{2}^{ \prime}+1/2}t_{2}+q^{\alpha_{1}^{\prime}+l_{1}^{\prime}-1/2}t_{1}+q^{\alpha_ {1}^{\prime}+l_{2}^{\prime}-1/2}t_{2}). 
\tag{5.28}\] In particular, we have \(\beta+h_{1}+h_{2}-l_{1}-l_{2}-\alpha_{1}+\alpha_{2}+2=0\). The Jackson integral \(g(x)\) in Eq. (4.8) is written as \[g(x) =x^{-\alpha_{1}}\int_{0}^{\xi\infty}s^{-\beta^{\prime}}\frac{(q^{-\beta^{\prime}-\alpha_{1}^{\prime}+\alpha_{2}^{\prime}+2}s/x;q)_{\infty}}{(s/x;q)_{\infty}}\,d_{q}s\] \[=(1-q)\xi^{-\beta^{\prime}+1}x^{-\alpha_{1}}\frac{(q^{-\beta^{\prime}-\alpha_{1}^{\prime}+\alpha_{2}^{\prime}+2}\xi/x;q)_{\infty}}{(\xi/x;q)_{\infty}}\sum_{n=-\infty}^{\infty}\frac{(\xi/x;q)_{n}}{(q^{-\beta^{\prime}-\alpha_{1}^{\prime}+\alpha_{2}^{\prime}+2}\xi/x;q)_{n}}q^{(-\beta^{\prime}+1)n}. \tag{5.29}\] If the parameters satisfy \(\beta^{\prime}<0\) and \(\alpha_{1}^{\prime}<\alpha_{2}^{\prime}\), then the constants \(C_{1}\) and \(C_{2}\) in Theorem 4.1 are both equal to \(0\), the Jackson integral \(g(x)\) in Eq. (5.29) converges, and it satisfies \[A^{\langle 4\rangle}(x;h_{1},h_{2},l_{1},l_{2},\alpha_{1},\alpha_{2},\beta)g(x)=Eg(x), \tag{5.30}\] where the parameters satisfy Eq. (5.28). Recall that Ramanujan's sum for \({}_{1}\psi_{1}(a;b;q,z)\) (the bilateral summation formula) is written as \[\sum_{n=-\infty}^{\infty}\frac{(a;q)_{n}}{(b;q)_{n}}z^{n}=\frac{(q,b/a,az,q/(az);q)_{\infty}}{(b,q/a,z,b/(az);q)_{\infty}} \tag{5.31}\] for \(|b/a|<|z|<1\) and \(|q|<1\). It follows from Eq. (5.31) in the case \(a=\xi/x\), \(b=q^{-\beta^{\prime}-\alpha_{1}^{\prime}+\alpha_{2}^{\prime}+2}\xi/x\), \(z=q^{-\beta^{\prime}+1}\) that the function \(g(x)\) in Eq. (5.29) is written as \[g(x)=x^{-\alpha_{1}}\frac{(q^{-\beta^{\prime}+1}\xi/x,x/(q^{-\beta^{\prime}}\xi),q^{-\beta^{\prime}-\alpha_{1}^{\prime}+\alpha_{2}^{\prime}+2},q;q)_{\infty}}{(\xi/x,qx/\xi,q^{-\beta^{\prime}+1},q^{-\alpha_{1}^{\prime}+\alpha_{2}^{\prime}+1};q)_{\infty}}. \tag{5.32}\] Since \(\vartheta_{q}(qax)/\vartheta_{q}(qbx)=(b/a)\vartheta_{q}(ax)/\vartheta_{q}(bx)\), the function \(g(x)\) in Eq. (5.32) behaves in the same way as \(x^{-\alpha_{2}}\) under the transformation \(x\mapsto qx\). Therefore, we may conclude that the \(q\)-integral transformation in this case produces another monomial solution from a monomial solution.

## 6. Concluding remarks

In this paper, we found kernel function identities for the \(q\)-Heun equation and the variants of the \(q\)-Heun equation of degree three and four. We applied them to obtain \(q\)-integral transformations of solutions to the \(q\)-Heun equation and its variants. Moreover, we investigated special solutions of the \(q\)-Heun equation from the perspective of the \(q\)-integral transformation.

We give some comments on issues related to the results in this paper. The variants of the \(q\)-Heun equation were found by considering degenerations of the Ruijsenaars-van Diejen system, and kernel functions of the Ruijsenaars-van Diejen system had been found in [14, 15, 10]. On the other hand, we found kernel functions of the \(q\)-Heun equation and its variants directly in this paper. It is expected that degenerations of the kernel functions can be obtained from the non-degenerate Ruijsenaars-van Diejen system, although the calculation would be extremely complicated without new ideas. Extension of our kernel function identities to the multivariable case is also expected. We investigated \(q\)-integral transformations of monomial-type solutions of the \(q\)-Heun equation in section 5.
Polynomial solutions of the \(q\)-Heun equation and its variants were investigated in [24, 9], and \(q\)-integral transformations related to the polynomial-type solutions are expected to be studied. Results in [22] for Heun's differential equation might be helpful in this direction. In [18], the \(q\)-middle convolution by Sakai and Yamaguchi [16] was applied to the linear \(q\)-difference equation related to the \(q\)-Painleve equation whose symmetry is of type \(D_{5}^{(1)}\), and the \(q\)-integral transformation of the \(q\)-Heun equation was obtained as a corollary. The variants of the \(q\)-Heun equation of degree three and four are related to the \(q\)-Painleve equations of type \(E_{6}^{(1)}\) and \(E_{7}^{(1)}\) through the space of initial conditions [17]. In this paper, we obtained \(q\)-integral transformations of solutions to the \(q\)-Heun equation and its variants by using the identities of the kernel functions. It seems that the relationship between the variants of the \(q\)-Heun equation of degree three and four and the \(q\)-Painleve equations of type \(E_{6}^{(1)}\) and \(E_{7}^{(1)}\) through the \(q\)-middle convolution is not fully understood. The paper [3] by Fujii and Nobukawa might be related to this problem.

## Acknowledgements

The author is grateful to Yumi Arai for discussion. He is supported by JSPS KAKENHI Grant Number JP22K03368.
2309.16507
Innovation Modeling Grid
This technical document presents the committee driven innovation modeling methodology "Innovation Modeling Grid" in detail. This document is the successor of three publications on IMoG and focuses on presenting all details of the methodology
Oliver Klemp
2023-09-28T15:12:51Z
http://arxiv.org/abs/2309.16507v1
# Innovation Modeling Grid Technical Documentation
DLR - Institute for Systems Engineering for future mobility

## Abstract

This technical document presents the committee-driven innovation modeling methodology "Innovation Modeling Grid" in detail. This document is the successor of three publications on IMoG [6, 11, 17] and focuses on presenting all details of the methodology.

#### Acknowledgments

This work has been supported by the GENIAL! project as funded by the German Federal Ministry of Education and Research (BMBF) under the funding code 16ES0865-16ES0876 in the ICT 2020 funding programme.

## Contents

Part I Overview over IMoG

* 1 Overview over IMoG
* 2 Delimitation of IMoG to relevant thematic fields
* 3 Process for IMoG
  * 3.1 Roles and Responsibility
  * 3.2 Process parts: The Activities, Artifacts and Tools
    * 3.2.1 An abstract overview over the activities, artifacts and tools
    * 3.2.2 Detailed activities description
  * 3.3 Example - Mobility with an e-scooter
* 4 Innovation Modeling Grid
  * 4.1 Design Principles of IMoG
  * 4.2 Innovation Modeling Grid Methodology
    * 4.2.1 Strategy Perspective
    * 4.2.2 Functional Perspective
    * 4.2.3 Quality Perspective
    * 4.2.4 Structural Perspective
    * 4.2.5 Domain Knowledge Perspective
    * 4.2.6 Connecting Perspectives
    * 4.2.7 Reviewing IMoG: Pros and Cons
  * 4.3 FAQ

Part II Details on the IMoG Methodology

* 5 Strategy Perspective
  * 5.1 Model elements
  * 5.2 E-Scooter example
  * 5.3 Strategy Perspective: Strengths and Limitations
  * 5.4 Strategy Perspective FAQ
* 6 Functional Perspective
  * 6.1 Model elements
  * 6.2 E-Scooter example
  * 6.3 Functional Perspective: Strengths and Limitations
  * 6.4 Functional Perspective FAQ
    * 6.4.1 Feature Tree Base
    * 6.4.2 Tooling
    * 6.4.3 Concepts and Dependencies
    * 6.4.4 General Stuff
* 7 Quality Perspective
  * 7.1 Model elements
  * 7.2 E-Scooter example
  * 7.3 Quality Perspective: Strengths and Limitations
  * 7.4 Quality Perspective FAQ
    * 7.4.1 Requirements
* 8 Structural Perspective
  * 8.1 Model elements
  * 8.2 E-Scooter example
  * 8.3 Structural Perspective: Strengths and Limitations
  * 8.4 Structural Perspective FAQ
* 9 Domain Knowledge Perspective

Part III Tooling, Evaluation and Closing

* 10 Tooling Prototype
  * 10.1 Functional Perspective Prototype
  * 10.2 Tooling Evaluation
* 11 Evaluation
* 12 Closing

## Part I Overview over IMoG

## Chapter 1 Introduction

This document presents the modeling methodology Innovation Modeling Grid. The Innovation Modeling Grid (IMoG) targets the discussion and modeling of innovations in a committee.
The methodology shall reduce the start-up time for innovation modeling by pre-structuring the innovation, in the sense of advising what types of elements exist and how they relate to each other. The modeling methodology originates from a project in the context of the automotive industry, which is used here to motivate the methodology.

The automotive industry is undergoing a major transformation and faces the following situation. First, there is a huge demand for autonomous and highly automated driving. Autonomous driving shall provide safer and more efficient transportation while allowing passengers to focus on other things. If the driver wants to enjoy driving, highly automated systems shall support the driver with several assistance systems to ensure a safe journey. This demand is on a different complexity level than the typical innovations known in the automotive industry and will shape future development. Second, there is a huge demand for more sustainability due to climate change. The electrification of the transport sector and the limited amount of rare resources require new technologies and design principles. Individualization is another demand: passengers demand more comfort and custom functionality in vehicles. Individualization requires a rethinking towards data-driven and software-defined vehicles. Software-defined vehicles also entail high complexity and high loads of external communication. Additionally, data-driven in-vehicle applications represent potential for new business models for software companies. Regarding business models, mobility as a service is a rising trend among car manufacturers. Not only conventional vehicles are considered, but the whole transportation sector, including trains, aerospace, and the last mile. This trend requires a rethinking of the structure of the automotive industry as a whole.

The question arises: what is holding the automotive industry back from simply addressing these demands today? Each of the demands represents a special challenge. Autonomous driving functionality relies on an accurate perception of the environment, an accurate localization of the car's position, and an accurate prediction of the behavior of the other traffic participants - optionally by sharing intents through communication with the other participants or with the infrastructure. It further relies on a sophisticated control of the vehicle's own behavior, including the computation of trajectories, the monitoring of its own behavior, and securing the vehicle from unauthorized manipulation, as well as on accurate planning and navigation of routes from point A to point B. This functionality poses a major challenge to the industry, with a high level of complexity and high safety and security requirements. The computation of this functionality is expected to be implemented mainly in software. To stay competitive, the automotive value chain needs to adjust to this new software focus.

Climate change and the implied departure from combustion engines to electric engines is another challenge. Combustion engines are decried as no longer acceptable as a future transport solution for the masses. The expected technology shift goes towards the electrification of cars, which represents a far easier technology for market newcomers than highly optimized combustion engines. On the other hand, electrification requires a lot of rare resources - like lithium - and requires huge accumulators.
Up-scaling the power net in the vehicle itself poses a challenge of its own, considering electromagnetic compatibility, cable weight, and so on. These factors increase the market pressure for all members of the automotive value chain.

The next challenge is posed by the individualization of software-defined vehicles [4, 16, 7]. The deciding factor for autonomous vehicles lies in the software-centered complexity mentioned in the autonomy challenge. Individualization requires, on top of that, short times to market. For example, a user may want to connect the newest smartphone generation to the vehicle. These vehicles cannot simply be called into the next workshop to update the software, so over-the-air updates are required. These demands further amplify the need for a software focus, with the implied complexity and, furthermore, the implied safety and security requirements for the value chain.

Finally, there is the general trend of car manufacturers moving towards new business models that focus on mobility as a service. The cause of this shift is globalization and climate change, which demand a rethinking of mobility. Cities get more and more interconnected with various optimized solutions for mobility: from (underground and intercity) trains, to various models of bus systems, to motorcycles, e-scooters, and e-bikes for short trips. Owning a car is thus no longer mandatory in larger cities, and therefore a lower demand for owning vehicles can be expected. Additionally, improvements in the drive train of new cars tend to be insignificant from the view of the end user, equalizing the quality of cars in the market and decreasing the importance of the car's brand. Car manufacturers have to think about how they can bind their current customers if they do not want to lose market share. The general trend towards retaining current customers lies in adapting to their new demands. Future business models are expected to be focused around the aspect of mobility as a service. This restructuring poses a major challenge to the whole value chain.

There is another challenge that is not induced by customer demands: the current structure of the value chain (see Figure 1.1). The long-established automotive value chain between the Original Equipment Manufacturers (OEMs), the system and component suppliers (Tier 1), and the semiconductor suppliers (Tier 2) is very fragmented and optimized for producing vehicles with long product and life cycles [13]. This structure works well for modular design with hardware elements that have long cycle times. However, following this principle of modular design leads to a sequential working process, resulting in long communication times and slow innovation speed with no effective use of horizontal connections between the suppliers. Additionally, the structure does not address the new complexity, the software focus, and a service-oriented business model in a suitable manner. The demanded fast and safe realizations represent an enormous technological and methodical challenge. On the one side, the car manufacturer has to anticipate the very rapidly changing possibilities of future microelectronic platforms, sensors, and semiconductor technologies already at the time of product definition in order to include them in the next generation. On the other side, the suppliers have to know the requirements of future functionality early enough to strategically invest in technology developments on a quantitative and reliable basis.
The missing communication within the value chain makes it hard to understand and predict the future and slows the value chain down significantly.

Figure 1.1: A broad view of the structure of the current automotive value chain. Its understanding of the common future is not as easy as in the last decades, so a common invisible guideline can no longer simply be assumed to exist. Instead, the automotive value chain has to collaborate and explicitly design this common understanding to achieve maximum efficiency.

Together, these challenges introduce a lot of uncertainty - a well-known challenge in requirements engineering. The complexity, including safety and security, adds uncertainty due to the vast exploration space, as not every solution can be explored. Additionally, this complexity makes it hard to predict future market trends and new directions of technology. Limited resources, competition, and time-to-market challenges add uncertainty in the sense of limited time, with the pressure to find good solutions in time. The new business models and new demands challenge the partners of the value chain, who are uncertain about how they will cooperate in the future. Directly related is the fact that the value chain has been indirectly guided by a common understanding of future technology over the past decades. With the rising complexity and uncertainty, this invisible guideline is disappearing more and more. Overall, the uncertainty and missing knowledge make it hard to predict the future, plan innovations, and make the right investment decisions. The pressing question is therefore how the value chain can sustain the new business models, autonomous driving, electrification, and individualization with their high requirements.

One way to cope with these challenges is to boost innovation with a public roadmapping approach. This roadmapping approach focuses on shaping the understanding of the innovation, which is in essence a requirements engineering problem. The goal of the roadmap is to better understand and communicate future innovations, the required future technologies, and the decisions of the other partners of the value chain about their future directions. The expected gain of this synchronization of strategies across the value chain is an acceleration of the development of future innovative applications.

The approach can be understood as follows: the automotive value chain forms a committee for creating a public roadmap on a specific innovation (see Figure 1.2). This committee may include several car manufacturers (Original Equipment Manufacturers, also called OEMs), several software and hardware component suppliers (Tier 1), as well as several semiconductor suppliers (Tier 2). The committee is open to the public and to new members in order to comply with compliance regulations. The committee meets and discusses the innovation by focusing on the strategies of the stakeholders and the features and functions of the innovation, and by exploring the possible solutions of the innovation. It is crucial for success to discuss on an appropriate level, which, however, varies from innovation to innovation. An appropriate level includes the understanding of the problems and their technical constraints, but it does not include too many details about the development of the innovation, as the innovation itself is not implemented by the committee. As for every approach, the immediate question appears: "How does a consistent, public-roadmap-based information transfer in the value chain tackle the challenges?"
By understanding the future of the innovation and the value chain, the innovation becomes plannable, which reduces the uncertainty about the future for each partner and reduces the risks involved in investment decisions. The manufacturer can reliably plan with the discussed chip technology long before it is available. The suppliers gain early insight into forthcoming requirements with the certainty that their newly developed chip technologies and components will suit an existing demand. The exploration of solutions directly helps with handling the complexity and boosting quality management. The competition aspect is covered by using the roadmap to prepare early for the future innovation. The discussion of the strategies helps with adjusting the value chain to a software-defined focus and addressing the new business models. Overall, the public roadmap enables the value chain to adjust to the new demands. This holistic roadmapping is new to the value chain and is required to be open to the public to play within the rules of compliance. It also enables horizontal communication between suppliers, which boosts synergies in the innovation's design.

Figure 1.2: An example committee that aims to boost innovation with a public roadmapping approach.

Figure 1.3: Expected gain of the approach.

One major gain is the expected speed-up in the innovation cycles, which is crucial to meet the short time-to-market demand known from software development. This speed-up can be imagined as follows (see Figure 1.3): A company without a roadmap explores an innovation by starting with a seemingly feasible direction. It may spend some time exploring, explaining, and discussing with other suppliers and adjusting the direction as needed. It may find a detail that is not satisfiable by the chosen direction and thus try out the next feasible direction. It proceeds this way - with some more minor adjustments here and there - until the innovation is sufficiently explored. A committee with a roadmap may be faster by discarding unsatisfiable directions earlier, which smoothens the path taken. This speed is achieved for several reasons. First, by discussing and sharing expectations with the committee, technological possibilities are better understood; this directly leads to an earlier discarding of unsatisfiable directions. Second, the value chain can parallelize the development. This parallelization is achieved through the already mentioned reduction of uncertainties and risks, which leads to the confidence to initiate investments at an early stage. Lastly, the committee has the opportunity to standardize common terms, significantly reducing the communication overhead caused by misunderstandings across the whole value chain.

Finally, a short note about the limitations of this roadmapping approach: the development and engineering tasks are unaffected by it. That means that the development still has to cope with the increased complexity, and the technologies have to be developed as well.

Given this roadmapping approach, we investigated the research question of what an appropriate process, methodology, and tool for this approach would be. In our opinion, a dedicated methodology supported by a process and tailored tooling is required to handle this specific context efficiently. We developed the Innovation Modeling Grid (also called IMoG) [6, 10] as a methodology with a process and a tooling prototype to enable open, fair, and compliant communication along the value chain.
The methodology's goal is to efficiently represent and model early microelectronic innovations to enable a consistent information transfer along the value chain. The process recommends who does what with which tool to produce which artifacts, and the tooling supports the above-mentioned process and methodology as well as possible. The process, the methodology, and the tooling are the main focus of this document and are described in detail in the further sections. The document finishes with the preliminary evaluation results and a closing.

## Chapter 2 Delimitation of IMoG to relevant thematic fields

IMoG relates to the umbrella term of innovation management, and it shares similarities with the general roadmapping approach as well as with other well-known fields like requirements engineering and systems engineering.

Innovation management refers to the systematic approach of planning, controlling, and executing activities related to innovations. It includes the generation of ideas, the evaluation of their feasibility, the management of development prototypes, and the guidance of the whole process up to the product. Innovation management is used in companies to drive growth, competitiveness, and sustainability. Furthermore, innovation management also refers to the management of innovation outside of companies: it provides means to communicate with the stakeholders of the corporation and supports the synchronization and harmonization of agreements between the corporation and external business units. IMoG can be considered an innovation management technique, although it does not specifically target any particular company or emphasize decision-making activities. Thus, IMoG is only applicable to the specific class of committee applications in innovation management, where the focus does not lie on presenting the decisions of a company to its stakeholders.

Roadmapping is a technique used in innovation management that relates to IMoG. Roadmapping is a creative analysis procedure used to analyze, forecast, and visualize the development paths of products, services, and technologies [14]. Roadmapping is widely recognized as a strategic management tool to forecast future development. Roadmaps serve various purposes depending on the involved stakeholders: they support achieving a robust and market-oriented technological positioning as well as enhancing, protecting, and utilizing the competence of the organization [14]. Furthermore, roadmaps play a crucial role in providing orientation for employees, for external stakeholders, and for marketing when published. IMoG shares many similarities with roadmapping; however, IMoG's context slightly differs from the typical roadmapping context. The typical roadmapping approach requires perfect or quite complete knowledge about the scope of the roadmap, the topics of interest, and the future directions of the object under consideration. However, this knowledge is often not given in a microelectronic value chain with a huge number of different stakeholders and varying expertise. Therefore, the IMoG methodology does not build on the assumption of perfect knowledge and includes a sophisticated investigation of the problem space before investigating the future possibilities. Furthermore, we assume that the topic under investigation is quite complex and that the solution space is not yet fully understood by the committee.
To tackle this complexity, the IMoG methodology recommends a model-based investigation of possible future solutions with dedicated tools. To better understand the solution space and decision making, the IMoG methodology recommends parameterizing and decomposing the solutions until they are sufficiently understood. This parameterization and decomposition requires dedicated tools to handle the complexity. Nonetheless, many typical roadmapping techniques can be applied in the later stages of the IMoG methodology, when the problem is better understood and the possible solutions are collected. The imperfect knowledge and the complexity also require handling the workshops with the committee differently from typical roadmapping workshops. This document gives recommendations in the later chapters on how the IMoG methodology may be applied in these workshops and how the roadmap for the microelectronic value chain as a whole can be addressed.

IMoG also shares similarities with requirements engineering. IMoG divides the innovation modeling into the problem and solution space, which is a common approach in requirements engineering. Furthermore, the alignment and understanding of the innovation is a crucial part of IMoG, which relates to the goal of requirements engineering to foster a better understanding between all stakeholders. IMoG distinguishes itself from requirements engineering by focusing on the class of innovation modeling in committees, while requirements engineering covers the more general and abstract guidelines for stakeholder and system investigation.

Systems engineering focuses on how a system can be systematically developed; it does not specifically consider committees or abstract concepts. IMoG also investigates the system decomposition and shares similar concepts. However, IMoG does not require the level of detail known from systems engineering models, because the innovation is not developed by the committee members. The development of the innovation happens after the public committee phase, internally in the corporations.

## Chapter 3 Process for IMoG

This chapter presents the process that is recommended for the committee to create a roadmap for their innovation, referred to as IMoG's process. The description of IMoG's process commences by introducing the various roles involved in Section 3.1. Subsequently, it outlines the process activities, the produced artifacts, and the tools involved in Section 3.2. Notably, IMoG's process does not propose any template for milestones; the decision to exclude such a template is based on the assumption that it would vary significantly for each specific innovation. The process description is finally illustrated in Section 3.3 with a "Mobility with an e-scooter" innovation from the time before e-scooters became popular in cities.

### 3.1 Roles and Responsibility

The roles of the members of the committee are presented first. IMoG defines three disjoint sets of roles. Each member of the committee may take zero, one, or more roles from each role set. This implies that each member of the committee may represent several roles and that their roles may differ depending on the task. The first set of roles defines the corporation roles each member may represent. The corporation roles include the role of the OEM (Original Equipment Manufacturer), the role of the Tier 1 supplier, and the role of the Tier 2 supplier.
The three roles are defined as follows (inspired by Knauf [12]):

* The **OEM** (Original Equipment Manufacturer) is the manufacturer of the end product, which deals with the market launch of the vehicle.
* The **Tier 1** suppliers develop system solutions that are tailored to the end product without major changes.
* The **Tier 2** suppliers create the components needed to be integrated into systems. This includes the production of semiconductors and microcontrollers.

The corporation roles of the automotive value chain are often more differentiated than OEM, Tier 1, and Tier 2. However, the roles defined here were evaluated as sufficient for automotive committees discussing microelectronic innovations.

The second set of roles defines the roles of the members of the committee. These roles are described in Table 3.1.

| Role | Description |
| --- | --- |
| Committee Leader | The responsible person leading the roadmap committee. |
| Corporation Representative | The responsible person of a corporation who coordinates the corporation-internal tasks to produce the needed inputs for the roadmap. |
| IMoG Responsible | The responsible person for creating and maintaining the IMoG model on the command of the committee members. The IMoG Responsible Model Expert is also called the IMoG Modeler. |
| Roadmap Manager | The roadmap manager of the committee is responsible for the creation and maintenance of the roadmap. |

Table 3.1: The involved roles in the automotive value chain committee

The third set of roles comprises the roles of the corporation employees, which specialize the role of the "Corporation Representative" to execute the specific activities of IMoG. These (in-house) employees help the committee by providing and compiling information. These roles are described in Table 3.2.

| Role | Description |
| --- | --- |
| Roadmap Manager | The roadmap manager monitors the innovation status, reports to top management on the feasibility of the innovation, surveys new technologies from other partners, and updates the roadmap. The roadmap manager investigates trends and innovations. During innovation modeling, the roadmap manager performs the initial tasks and writes the roadmap after consulting with the other domain experts, requirements engineers, and system architects. |
| Requirements Engineer | The requirements engineer creates initial top-level requirements for the innovation and captures them uniformly (formally or in natural language). The requirements engineer leverages the expertise of the domain experts and system architects to uniformly refine the requirements in the system models. |
| System Architect | The system architect has the role of an interdisciplinary expert who designs systems by using modeling techniques. The system architect has know-how in the area of software-hardware design. In innovation modeling, the system architect takes on the role of the innovation modeler and its decomposition into subsystems. |
| Domain Expert | The domain expert represents a specialist of a particular discipline covering subdomains of development. The domain expert supports the innovation modeling and evaluates the influences and dependencies of certain domain elements on other domain elements. |

Table 3.2: The involved roles executing the required activities of the recommended process for IMoG

_Examples._ Two examples for a set of chosen roles are shown in the following. A corporation member of an Original Equipment Manufacturer (see Figure 3.1a) has an idea for a new innovation he would like to discuss. He founds a committee for discussing the new innovation and takes the role of the committee leader. He thus has two roles assigned: the role of an OEM representative and the role of the committee leader. The committee leader invites a member of a Tier 2 supplier to join the committee (see Figure 3.1b). She decides to represent her corporation and play an active role in the committee. She additionally brings her expertise as a roadmap manager. She thus has three roles assigned: the role of a Tier 2 representative, the role of the corporation representative for the corporation she works for, and the role of the roadmap manager of her corporation. A minimal code sketch of these role sets is given below.

Figure 3.1: Two role examples. Figures by StoryboardThat, www.storyboardthat.com, used by permission.
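To make the three disjoint role sets and their free combination concrete, the following minimal Python sketch models a committee member holding roles from each set, mirroring the two examples above. It is purely illustrative; IMoG itself does not prescribe any implementation, and all names are assumptions made for this sketch.

```python
from dataclasses import dataclass, field
from enum import Enum


class CorporationRole(Enum):
    """First role set: the corporation a member may represent."""
    OEM = "OEM"
    TIER_1 = "Tier 1"
    TIER_2 = "Tier 2"


class CommitteeRole(Enum):
    """Second role set: roles within the committee (Table 3.1)."""
    COMMITTEE_LEADER = "Committee Leader"
    CORPORATION_REPRESENTATIVE = "Corporation Representative"
    IMOG_RESPONSIBLE = "IMoG Responsible (IMoG Modeler)"
    ROADMAP_MANAGER = "Roadmap Manager"


class InHouseRole(Enum):
    """Third role set: in-house employee roles (Table 3.2)."""
    ROADMAP_MANAGER = "Roadmap Manager"
    REQUIREMENTS_ENGINEER = "Requirements Engineer"
    SYSTEM_ARCHITECT = "System Architect"
    DOMAIN_EXPERT = "Domain Expert"


@dataclass
class CommitteeMember:
    """A member may take zero, one, or more roles from each of the three sets."""
    name: str
    corporation_roles: set = field(default_factory=set)
    committee_roles: set = field(default_factory=set)
    in_house_roles: set = field(default_factory=set)


# The two examples from the text:
oem_member = CommitteeMember(
    "OEM member",
    corporation_roles={CorporationRole.OEM},
    committee_roles={CommitteeRole.COMMITTEE_LEADER},
)
tier2_member = CommitteeMember(
    "Tier 2 member",
    corporation_roles={CorporationRole.TIER_2},
    committee_roles={CommitteeRole.CORPORATION_REPRESENTATIVE},
    in_house_roles={InHouseRole.ROADMAP_MANAGER},
)
```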
### 3.2 Process parts: The Activities, Artifacts and Tools

This section presents the recommended activities, the target artifacts, and the recommended tools of IMoG's process. The section starts with an overview of the activities, the produced artifacts, and the involved tools in Section 3.2.1. Based on this overview, the process details are described from the side of the activities in Section 3.2.2. The IMoG methodology itself is introduced in Chapter 4: the methodology shows _what_ is captured in the roadmap model, in _which_ way these elements relate, and _how_ details shall be processed. The process description, however, encompasses the artifacts, which represent the results of IMoG's methodology. Because of this dependency, it is recommended to read the methodology chapter before reading the artifact description. Similarly, it is recommended to read the description of the involved tools after the artifacts.

#### An abstract overview over the activities, artifacts and tools

##### Activities

IMoG recommends seven activities for modeling the innovation. Each of these activities is processed by people taking the recommended roles of IMoG's process, which were presented in Section 3.1. The mapping of which activity is processed by which roles is presented in Figure 3.2. Note that the roles are now depicted as colored stick figures; despite the graphical depiction, their meaning remains the same as before. The activities are described in the following.

Figure 3.2: Activities (arrows) and roles of the working process. The space between the activities has no special meaning and exists only for the sake of the graphical representation. The roles are described in Section 3.1.

The first activity is called **Innovation Identification**. The innovation identification includes creative methods as well as market segment analysis to develop a new innovation idea and create an initial description of the innovation. The involved roles include the committee leader, the IMoG modeler, and the corporation representatives. The committee leader sets up and coordinates the meetings, the IMoG modeler is responsible for creating the models, and the corporation representatives are responsible for proposing their interests. The in-house roles include the roadmap manager and the domain experts, who help the representatives to identify and describe their interests.

The second activity is called **Feature and Function Identification**. The purpose of this activity is to refine the problem understanding and create a feature hierarchy based on the description of the innovation.
As a further refinement, the feature hierarchy may include user stories and use cases. The involved roles include the same committee members as in the innovation identification activity: the committee leader, the IMoG modeler, and the corporation representatives. Requirements engineers of the corporations support the creation of the feature hierarchy.

The third activity is called **Requirements Elicitation**, which adds quality requirements and constraints to the feature hierarchy and refines the problem space further. It is the last activity focusing on the problem space. The roles involved in this activity are the same as in the feature and function identification activity.

The solution space of the innovation is examined after the problem is sufficiently understood. The corresponding activity is called **Solution Space Exploration**. It consists of modeling the possible solutions of the innovation with (sufficient) technical details. The involved committee roles include the committee leader, the IMoG modeler, and the corporation representatives. The corporation-internal leader of this activity is the system architect, who examines and analyzes the possible solutions. The system architect gets support from the requirements engineer and the domain expert; however, their help is of a supportive nature.

After the solutions are examined, the committee extracts the insights gained from the generated model and saves them in its database for further innovations. This activity is called **Extraction and Saving of the Insights**. No in-house corporation roles are needed.

The **roadmap writing** is the next activity, building upon the insights from the last activity. The committee members meet again to discuss the roadmap together. The modeling activities are finished, and thus the IMoG modeler does not take part in this activity. The roadmap manager takes responsibility for the roadmap writing, structures the document, and assigns tasks. After this activity, the main roadmapping activities are done. Based on this roadmap, recurring meetings are established to **maintain and update** the roadmap. The same roles are involved as in the writing of the roadmap.

As usual, it is not required to complete each of the seven activities before starting the next one. Instead, it is sufficient to draft each model of each activity and refine it when necessary, similarly to what was proposed with the twin peaks model [14].

##### Artifacts

_(It is recommended to read Chapter 4 before this section.)_

The artifacts are also added to the process image in Figure 3.3. The idea description and the filled Strategy Perspective constitute the artifacts of the "Innovation Identification" activity; the details of the perspectives are presented in Chapter 4. The artifacts of the "Feature and Function Identification" activity are the user stories, the use cases, and the filled Functional Perspective. The filled Quality Perspective constitutes the artifact of the "Requirements Elicitation (Quality Requirements and Constraints)" activity. The filled Structural Perspective constitutes the artifact of the "Solution Space Exploration" activity. The (Domain) Knowledge Perspective and the list of insights constitute the artifacts of the "Extracting and saving Insights for future IMoG Innovations" activity. Finally, the roadmap is the artifact of the "roadmap writing" activity and is updated within the "maintaining and updating the model and the roadmap" activity. This activity-to-artifact mapping is summarized in the sketch below.
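Since the mapping of activities to artifacts is central to the process, the following sketch summarizes it as a simple lookup structure. It is purely illustrative; the artifact names are abbreviated from the text.

```python
# The seven activities of IMoG's process and their artifacts, as described above.
IMOG_ACTIVITIES = [
    ("Innovation Identification",
     ["idea description", "Strategy Perspective"]),
    ("Feature and Function Identification",
     ["user stories", "use cases", "Functional Perspective"]),
    ("Requirements Elicitation (Quality Requirements and Constraints)",
     ["Quality Perspective"]),
    ("Solution Space Exploration",
     ["Structural Perspective"]),
    ("Extraction and Saving of the Insights",
     ["(Domain) Knowledge Perspective", "list of insights"]),
    ("Roadmap Writing",
     ["roadmap"]),
    ("Maintain and Update",
     ["updated model", "updated roadmap"]),
]

# Print the mapping as a quick sanity check.
for activity, artifacts in IMOG_ACTIVITIES:
    print(f"{activity} -> {', '.join(artifacts)}")
```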
##### Tools

The artifacts are illustrated with the presentation of the perspectives in Chapter 4. The proposed tools are added to the process image in Figure 3.4. Overall, we think that a dedicated tooling for IMoG is required, and thus the recommended tooling for IMoG shown in the figure is such a dedicated tooling. A tooling prototype for the Functional Perspective is already implemented and called the "IMoG IRIS" prototype. Unfortunately, the resources for implementation were not enough to extend the dedicated prototype to the remaining activities. These open implementations are marked in the figure with "To Be Done".

Next to a dedicated tooling, some activities are best supported by extra tools. The "Innovation Identification" activity would be best supported by a creativity tool that the committee is well versed with; this may be, for example, a mind-mapping tool, some whiteboards, or something else. The "Feature and Function Identification" activity would be best supported by a dedicated tool to create and manage use cases and user stories. This could be a common text and table manipulation tool or a more sophisticated requirements engineering tool that supports user stories and use cases well. The "Extracting and saving Insights for future IMoG Innovations" activity would be best supported by a text editor to write the insights down.

Figure 3.3: IMoG process with artifacts appended.

Figure 3.4: IMoG's process description appended with the proposed tooling (final representation).

#### Detailed activities description

This section describes each activity in detail.

**Activity: Innovation Identification**

(Roles) The involved roles include the committee leader to set up and manage the meetings, the IMoG modeler responsible for creating the models, and the corporation representatives in the roles of the roadmap manager for proposing their interests in the innovations, as well as some domain experts for supporting the roadmap managers.

(Short activity description) This activity uses creative methods as well as market segment analysis to develop a new innovation idea and create an initial description (see Figure 3.5).

(Detailed description) The committee leader invites the committee members with the above-mentioned roles to a meeting to develop a new innovation idea. The committee decides on a creative method or a market analysis technique and carries it out. Which creative method or market analysis to choose is neither defined nor restricted. Creative methods include, for example, "Brainstorming", "Mind maps", "Zwicky Boxes", the "Walt Disney method", "Scenario projection", etc. Market analyses include, for example, "User needs projection", "User stories", "Time to market analysis", or "Business model analysis". The method that best fits the committee members and their idea is the one to choose. Once the committee has finished the creative method and closes the meeting, the committee leader writes down a description of the result of the creative method. Based on this description, the IMoG responsible person translates the description into a draft of the Strategy Perspective (see Section 4.2.1). Now the committee refines this model of the Strategy Perspective in a few more meetings or by assigning personal tasks. The refinement process can also include refinement and review processes in the corporations themselves by asking internal staff (probably people with the roadmap manager role or domain expert role) to give their input.
The input may include more information about the innovation, refined descriptions, goals, or the identification of elements that shall be traced. This refinement process goes on until the committee is sufficiently satisfied with the result (the Strategy Perspective).

(Artifacts) The innovation description (including the common vision and possibly some diagrams) and the filled Strategy Perspective (presenting the vision and the diagrams as well as the stakeholders' interests, concerns, strategies, and textual goals) constitute the artifacts of the "Innovation Identification" activity.

(Tools) Overall, we think that a dedicated tooling for IMoG is needed, and thus the proposed tool for IMoG would be such a dedicated tooling. We already implemented a tooling prototype for the Functional Perspective, called the "IMoG IRIS" prototype. Unfortunately, we do not have enough resources to extend the dedicated prototype (IMoG IRIS) to the Strategy Perspective activities. Additionally, this activity is best supported by a creativity tool that the committee already uses efficiently. This creativity tool may be any tool that supports the creative techniques and analyses (paper, mind maps, documents, scratchboards, etc.).

Figure 3.5: Possible methods for the Innovation Identification activity.

**Activity: Feature and Function Identification**

(Roles) The involved roles include the committee leader, the IMoG modeler, and the corporation representatives. In-house requirements engineers are also involved in this activity.

(Short activity description) The goal of this activity and the Functional Perspective is to refine the problem space and create a feature hierarchy, including optional user stories and use cases, based on the description of the innovation. (A sketch of such a feature hierarchy is given below.)

(Detailed description) After the identification of the innovation in the "Innovation Identification" activity, the next activity focuses on the identification of the features and functions needed to fulfill the innovation. The features and functions shall represent a refinement of the problem space of the innovation. The committee leader invites the committee members with the above-mentioned roles to a meeting. First, they decide whether they want to create user stories and use cases for understanding the general conditions of their innovation, or whether the general conditions of the innovation are sufficiently understood without user stories and use cases. If they decide to create user stories and use cases, they use the meeting to identify them. The corporation representatives are responsible for giving and checking the input for the user stories and use cases. They may ask their in-house requirements engineers to support this task. The IMoG modeler supports the corporation representatives by creating templates and giving advice on formulation. The committee leader moderates the meetings. The outcome of the meeting is a draft of these user stories and use cases. The members of the committee then distribute tasks to refine the user stories and use cases to a sufficient degree. They meet and refine again until they are sufficiently satisfied with the result.

(Detailed description continued) Then the committee leader invites the committee members to another meeting to create a draft of the feature model in the Functional Perspective. The IMoG modeler creates a draft of the Functional Perspective based on the inputs of the corporation representatives. The committee refines the model through in-house meetings of the corporations and through additional committee meetings. The in-house requirements engineers help the corporation representatives and check the validity and consistency of the Functional Perspective. This refinement process goes on until the committee is sufficiently happy with the result (the Functional Perspective).
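As a sketch of the feature hierarchy this activity produces, the following snippet models a feature tree with optional user stories attached to features. The e-scooter features shown are invented for illustration and are not taken from an actual IMoG model.

```python
from dataclasses import dataclass, field


@dataclass
class Feature:
    """A node of the feature hierarchy; user stories and use cases are optional."""
    name: str
    children: list = field(default_factory=list)
    user_stories: list = field(default_factory=list)


# Invented fragment of an e-scooter feature hierarchy:
root = Feature("Mobility with an e-scooter", children=[
    Feature("Riding", children=[Feature("Speed control"), Feature("Braking")]),
    Feature("Renting", user_stories=[
        "As a commuter, I want to unlock a scooter via an app to start a trip.",
    ]),
])


def iter_features(feature):
    """Depth-first traversal of the feature hierarchy."""
    yield feature
    for child in feature.children:
        yield from iter_features(child)


print([f.name for f in iter_features(root)])
```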
(Artifacts) The artifacts of the "Feature and Function Identification" activity are the features and functions in the form of the Functional Perspective and the optional user stories or use cases that are mapped onto the features and functions. All features, functions, user stories, and use cases, as well as all other information, are represented in the Functional Perspective.

(Tools) Overall, we think that a dedicated tooling for IMoG is needed, and thus the proposed tool for IMoG would be such a dedicated tooling. We implemented a tooling prototype for the Functional Perspective, called the "IMoG IRIS" prototype. Additionally, this activity is best supported by a dedicated tool to create and manage the use cases and user stories. This could be a common text and table manipulation tool or a more sophisticated requirements engineering tool that supports user stories and use cases well.

**Activity: Requirements Elicitation (Quality Requirements and Constraints)**

(Roles) The involved roles include the committee leader, the IMoG modeler, and the corporation representatives. In-house requirements engineers are involved in this activity.

(Short activity description) The goal of this activity and the Quality Perspective is to refine the problem space by adding quality requirements and constraints to the feature hierarchy, finishing the refinement of the problem space.

(Detailed description) The committee leader once again invites the committee members with the above-mentioned roles to a meeting. Every quality requirement and constraint that came up during the identification of the features and functions is now placed into the Quality Perspective of IMoG and mapped onto the features and functions of the Functional Perspective. (Note: process requirements are not relevant, because the innovation is not built by the committee!) Afterwards, a dedicated round of meetings is started, moderated by the committee leader and focusing on the structured elicitation of missing quality requirements and constraints. These meetings follow the typical steps of requirements engineering (requirements elicitation, requirements analysis, requirements documentation, requirements verification and validation [8]). The corporation representatives are responsible for eliciting the requirements and checking their consistency. They may ask their in-house requirements engineers to support this task. The IMoG modeler supports the corporation representatives by filling the requirements into the Quality Perspective. They stop the rounds of meetings once they are sufficiently satisfied with the result. After the meetings, the modeling of the requirements and the problem space is done.

(Artifacts) The artifacts of the "Requirements Elicitation" activity are the added quality requirements and constraints in the Quality Perspective, which are mapped onto the features and functions of the Functional Perspective. With the requirements elicited, the modeling of the problem space is finished (or at least interpreted as a draft, as in agile work / the twin peaks model [14]). A sketch of how such a mapping could look as data follows below.
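To illustrate what "mapping quality requirements and constraints onto features" could look like as data, consider the following sketch. The requirement texts, identifiers, and feature names are invented for illustration.

```python
# Quality requirements and constraints of the Quality Perspective, each mapped
# onto features of the Functional Perspective (all entries are invented).
quality_requirements = [
    {"id": "QR-1",
     "text": "Unlocking a scooter shall take less than 5 seconds.",
     "kind": "quality requirement",
     "mapped_features": ["Renting"]},
    {"id": "C-1",
     "text": "The battery pack must not exceed 3 kg.",
     "kind": "constraint",
     "mapped_features": ["Riding"]},
]


def requirements_for(feature_name):
    """Return all requirements and constraints mapped onto the given feature."""
    return [r for r in quality_requirements
            if feature_name in r["mapped_features"]]


print(requirements_for("Renting"))
```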
(Tools) Overall, we think that a dedicated tooling for IMoG is needed, and thus the proposed tool for IMoG would be such a dedicated tooling. We already implemented a tooling prototype for the Functional Perspective, called the "IMoG IRIS" prototype. Unfortunately, we do not have enough resources to extend the dedicated prototype (IMoG IRIS) to the Quality Perspective activities. Thus, standard requirements management tools like IBM Rational DOORS or Jama are recommended.

**Activity: Solution Space Exploration**

(Roles) The involved roles include the committee leader, the IMoG modeler, and the corporation representatives. In-house requirements engineers are involved in this activity. The leader of this task is the system architect, who examines and analyzes the possible solutions. The system architect gets support from the requirements engineer and the domain expert; however, their help is of a supportive nature.

(Short activity description) The goal of this activity and of the Structural Perspective is to model the solution space of the innovation with (sufficient) technical details.

(Detailed description) The modeling of the solution space is the next step. The committee leader invites the committee members with the above-mentioned roles to explore and discuss the possible solutions. The exploration of the solutions may include the following steps (the order does not have to be strictly followed): Starting with the context level, a model describing the environment of the intended innovation solution and the innovation itself is designed. In this context model, the innovation can be understood and represented as a black box (meaning that the innovation is not decomposed, nor are any of its parts further described). The next step may include the system decomposition, focusing on how the innovation can be constructed; here, the innovation is represented in detail (white box). The general decomposition concepts of using logical components, "solution principles" (e.g., combustion or electric), and actual solutions (e.g., specific engines), as well as hardware and software mappings, are part of the system decomposition step. This step may also include the mapping of the features and functions of the Functional Perspective onto the components of the system decomposition to achieve traceability to the problem space. The third step may include effect chain modeling to depict and analyze the connections between the innovation components (system decomposition parts) and their environment.

(Detailed description continued) The analysis makes it possible to understand the dependencies between the innovation components and their environment and their impact on changes. The fourth step may include the structured elicitation of missing quality requirements and constraints for the solutions. This step uses the same steps already mentioned for the Quality Perspective (adding solution requirements to the Quality Perspective, requirements elicitation, requirements analysis, requirements documentation, requirements verification and validation [8]). The last step may include an alternatives exploration, including the use of Key Performance Indicators (KPIs) to describe the possible alternatives of the system components and their advantages and limitations; a sketch of such an alternatives structure follows below. These steps can be divided over a series of meetings with internal discussions in the corporations, where the committee leader manages the formalities. The IMoG modeler and the corporation representatives, including their internal roles of the system architect, requirements engineer, and domain expert, are responsible for exploring the solution space using the five steps. Additionally, the IMoG modeler is responsible for the creation of the Structural Perspective. The importance of each internal role varies between the steps: the system architect is most important during the context-level modeling, the system decomposition and Functional Perspective mapping, the effect chain analysis, and the alternatives exploration. The system architect, however, plays a smaller role in the requirements elicitation for solutions step, where the requirements engineer has the responsibility; conversely, the requirements engineer plays a less important role in the other steps. The domain experts give their expertise in all steps, but their input is especially needed in the system decomposition, the effect chain modeling, and the alternatives exploration. After these steps, the modeling of the requirements and the solution space is done.
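The alternatives exploration of the last step can be pictured as components of the system decomposition, each carrying alternative solutions annotated with KPI values. The following sketch is hypothetical; the component, alternatives, and KPIs are invented for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class Alternative:
    """One possible solution for a component, annotated with KPI values."""
    name: str
    kpis: dict = field(default_factory=dict)


@dataclass
class Component:
    """A node of the system decomposition with its explored alternatives."""
    name: str
    alternatives: list = field(default_factory=list)


# Invented example: two battery alternatives compared by two KPIs.
battery = Component("Battery", alternatives=[
    Alternative("Battery pack A", kpis={"range_km": 30.0, "weight_kg": 2.5}),
    Alternative("Battery pack B", kpis={"range_km": 45.0, "weight_kg": 3.4}),
])

# One possible KPI trade-off: pick the alternative with the best range per kilogram.
best = max(battery.alternatives,
           key=lambda a: a.kpis["range_km"] / a.kpis["weight_kg"])
print(best.name)
```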
(Artifacts) The artifacts of the "Solution Space Exploration" activity are the model of the solutions covered in the Structural Perspective and its dependencies on the other perspectives. This includes the quality requirements and constraints added to the Quality Perspective and the mapping onto the features and functions of the Functional Perspective. The solution space exploration may include a context model and the decomposition of the innovation, including effects and alternatives.

(Tools) Overall, we think that a dedicated tooling for IMoG is needed, and thus the proposed tool for IMoG would be such a dedicated tooling. We already implemented a tooling prototype for the Functional Perspective, called the "IMoG IRIS" prototype. Unfortunately, we do not have enough resources to extend the dedicated prototype (IMoG IRIS) to the Structural Perspective activities. Thus, standard system modeling tools like any UML or SysML tool are recommended.

**Activity: Extracting and saving insights (for future IMoG innovations)**

(Roles) The involved roles include the committee leader, the IMoG modeler, and the corporation representatives. In-house roles are not needed in this activity.

(Short activity description) The goal of this activity and the (Domain) Knowledge Perspective is to extract the insights of the innovation to use them as a basis for the roadmap and to save the insights of the innovation for future IMoG innovations.

(Detailed description) The modeling of the problem space and solution space is finished. The committee leader invites the committee members with the above-mentioned roles to extract the insights of the model to use them as a basis for the roadmap. The insight extraction is done by the committee members: the committee leader, the IMoG modeler, and the corporation representatives. The outcome of this first activity is the list of insights written down in a document. Afterwards, the IMoG modeler exports the IMoG elements into a publicly available database. For this activity, the IMoG modeler asks the committee members which elements to save and which not. The IMoG modeler suggests dependencies to draw between IMoG elements and already available information and discusses these suggestions with the committee members; the committee members may also suggest dependencies. The outcome of this second activity is the publicly available database enhanced by elements of the regarded innovation. A sketch of what such an export could look like follows below.
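The export into a publicly available database can be as simple as serializing the selected elements together with the suggested dependencies. The following sketch uses JSON with an invented schema, purely for illustration; IMoG does not prescribe a concrete export format.

```python
import json

# Invented schema: elements the committee chose to save, plus suggested
# dependencies between new elements and already available information.
export = {
    "innovation": "Mobility with an e-scooter",
    "elements": [
        {"id": "F-Renting", "perspective": "Functional", "name": "Renting"},
        {"id": "C-Battery", "perspective": "Structural", "name": "Battery"},
    ],
    "dependencies": [
        {"from": "F-Renting", "to": "C-Battery", "type": "realized-by"},
    ],
}

# Write the export; a real setup would push this to the shared database instead.
with open("imog_public_db_export.json", "w", encoding="utf-8") as fh:
    json.dump(export, fh, indent=2)
```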
(Artifacts) The artifacts of the "Extracting and saving insights" activity are a list of insights as a basis for the roadmap activities and the publicly available database enhanced by elements of the regarded innovation.

(Tools) Overall, we think that a dedicated tooling for IMoG, including a publicly available cloud service containing the IMoG model, is needed, and thus the proposed tool for IMoG would be such a dedicated tooling. We already implemented a tooling prototype for the Functional Perspective, called the "IMoG IRIS" prototype. Unfortunately, we do not have enough resources to extend the dedicated prototype (IMoG IRIS) to the Domain Knowledge Perspective activities. Thus, standard text editors and databases are recommended.

**Activity: Roadmap Writing**

(Roles) The involved roles include the committee leader, the roadmap manager of the committee, and the corporation representatives. The roadmap manager takes responsibility for the roadmap writing, structures the document, and assigns tasks. The corporation representatives are responsible for giving sufficient help in the roadmap creation, reviewing, and refinement. In-house roles are not needed in this activity.

(Short activity description) The goal of this activity is to use the extracted insights of the innovation to write the roadmap.

(Detailed description) The modeling activities are finished, and the insights of the innovation are extracted. The committee leader invites the roadmap manager of the committee and the committee members to write the roadmap based on the insights of the innovation. The roadmap manager of the committee creates a first draft of the document structure with sufficient support from the committee members and then refines the roadmap together with them. This refinement process goes on until they are sufficiently satisfied with the roadmap.

(Artifacts) The artifact of the "Roadmap Writing" activity is the roadmap.

(Tools) Standard text editors (e.g., LaTeX, Word, etc.) are recommended.

**Activity: Maintain and update the roadmap**

(Roles) The involved roles include the committee leader, the IMoG modeler, the roadmap manager of the committee, and the corporation representatives. In-house roles are not needed in this activity.

(Short activity description) Recurring meetings are established to maintain and update the roadmap.

(Detailed description) The roadmap is written! Now the committee meets once within a defined time frame to maintain and update the IMoG model and roadmap. The committee leader manages the formalities, the IMoG modeler is responsible for updating the model, and the roadmap manager of the committee is responsible for updating the roadmap. The corporation representatives are the most important members here, as they decide in which direction the roadmap should point. The outcome of this activity is the updated model and roadmap.

(Artifacts) The artifacts of the "Maintain and Update the Roadmap" activity are an updated model and an updated roadmap.

(Tools) All of the tools mentioned in the other activities are required to maintain the model and the roadmap.

### 3.3 Example - Mobility with an e-scooter

Let's illustrate the process with an example storyboard (see Figures 3.6 and 3.7). A manager (see the figure on the right) from a known car manufacturer wants to dive into future mobility aspects and explore new areas for potential investments.
He likes the aspect of e-scooters as part of future mobility services and decides to think through this innovation together with the automotive value chain. He starts a new committee with himself as the committee leader and publicly invites partners of the automotive value chain to join the committee. Several members of the automotive value chain join the innovation exploration; some of them decline. He also creates a public invitation to increase the number of partners and thus the relevance of the committee. Some additional OEMs, Tier 1s and Tier 2s join the consortium. Before starting the committee, he requests the participating corporations to assign the internal roles. Each OEM, Tier 1 and Tier 2 assigns the roles of the roadmap manager, requirements engineer, system architect and domain expert. Additionally, the committee leader invites suitable people to take over the role of the IMoG modeler and the role of the roadmap manager of the committee. Then the committee is ready to explore the innovation by following IMoG's process.

The committee starts with the first activity, the Innovation Identification (see Figures 3.8 and 3.9). The committee leader invites the committee to a meeting to identify the innovation. The committee meets and chooses a fitting creativity method to identify the innovation they want to explore. In this case, they agree on using the creativity method Morphological Analysis (https://en.wikipedia.org/wiki/Morphological_analysis_(problem-solving)). The committee carries out the creativity method until they are satisfied with the result. Based on the outcome, the committee leader writes an innovation description. Afterwards, the IMoG modeler takes the creativity method outcome and the innovation description to create a draft of the Strategy Perspective. Both results are discussed and refined in the committee and internally until the members are satisfied. Then the Strategy Perspective and the Innovation Identification activity are finished.

The second activity is the Feature and Function Identification (see Figures 3.10 and 3.11). The committee leader invites the committee to a meeting to identify the features and functions. The committee decides whether they want to create user stories or use cases before identifying the features and functions; in this case, they agree to create them. The committee starts to define user stories and use cases. The user stories and use cases are refined internally in each company with their requirements engineer and consolidated in the committee. Afterwards (or in parallel to the elaboration of the user stories and use cases), the features and functions are defined and put into relation in a feature model. Similarly to the user stories and use cases, the feature model is refined internally in each company with their requirements engineer and consolidated in the committee. Then the Feature and Function Identification activity is finished.

The next activity is the Requirements Elicitation (see Figures 3.12 and 3.13). The committee leader invites the committee to a meeting. Then the committee meets to add the requirements to the features and functions.

Figure 3.6: Roles storyboard.
Figure 3.7: Roles storyboard (part 2).
Figure 3.8: The first activity: Innovation Identification.
Figure 3.9: The first activity: Innovation Identification (part 2).
Figure 3.10: The second activity: Feature and Function Identification.
Figure 3.11: The second activity: Feature and Function Identification (part 2).
They write down the requirements and constraints that were raised in the last two activities and now structurally elicit the missing quality requirements and constraints for the features and functions. Once they are done, the requirements and constraints are refined and completed internally with the requirements engineer. Afterwards, the Requirements Elicitation activity and the modeling of the problem space are done.

The next activity is the Solution Space Modeling and Analysis (see Figures 3.14 and 3.15). The committee leader invites the committee to another meeting. The committee explores the solution space of the innovation by modeling the innovation and its environment. The exploration includes several steps, like the context-level modeling, the system decomposition and feature mapping, the effect chain and impact analysis, the requirements elicitation for solutions and the alternative and key performance indicator exploration. Once the exploration is drafted, the refinement of the innovation model takes place internally within the companies. In this activity, the system architect takes the main lead and gets support from the requirements engineer and the domain expert. When this activity is finished, the solution space is also finished.

The next activity is the extraction of insights for future IMoG innovations (see Figures 3.16 and 3.17). The committee meets again to discuss. With the problem space and solution space modeled, the key elements of the innovation are identified and listed to be used in the innovation roadmap and in future IMoG models. Once the elements are identified, the IMoG modeler puts them into the public database to be reused.

Now it is time to write the roadmap! The next activity is the roadmap writing (see Figure 3.18). The roadmap manager of the committee creates a draft of the roadmap. The committee discusses the draft and refines it until the roadmap is finished. With this activity finished, the general direction is known to the value chain, which can now start its own development processes outside the committee to make the innovation come true. The main committee activities also end with this activity.

The last activity left open is the maintenance and update of the model and the roadmap (see Figure 3.19). At predefined intervals (like every two years), the committee convenes and realigns the model and the roadmap with the knowledge gained over the time frame. With this activity they can also take action on important changes that the whole value chain needs to know about. The model is also refined internally with all important roles. When this activity is done, the committee meets again at the next predefined date for the maintenance and update.

Figure 3.12: The third activity: Requirements Elicitation.
Figure 3.13: The third activity: Requirements Elicitation (part 2).
Figure 3.14: The fourth activity: Solution Space Modeling and Analysis.
Figure 3.15: The fourth activity: Solution Space Modeling and Analysis (part 2).
Figure 3.16: The fifth activity: Extraction of Insights for future IMoG Innovations.
Figure 3.17: The fifth activity: Extraction of Insights for future IMoG Innovations (part 2).
Figure 3.18: The sixth activity: Roadmap Writing.
Figure 3.19: The seventh activity: Maintenance and Update of the Model and the Roadmap.

## Chapter 4 Innovation Modeling Grid

We developed a methodology called "Innovation Modeling Grid" (IMoG) to accelerate the innovation development process along the automotive value chain.
The methodology IMoG provides a structure and defines elements to model the problem and the solution space of innovations. IMoG defines a structure to reduce the time spent on the "What and how to model?" question and to help the modelers focus on their innovation instead. Furthermore, a process and dedicated tooling supporting the methodology are required to handle the methodology in this public roadmapping context. Dedicated tooling for IMoG is currently in progress, but out of the scope of this document. Section 4.1 presents the design principles of the methodology IMoG. IMoG itself is described in Section 4.2. An FAQ is given in Section 4.3. IMoG's process is described in Chapter 3.

### 4.1 Design Principles of IMoG

We developed IMoG in the context of an automotive value chain committee creating and maintaining a public microelectronic roadmap. This context shaped IMoG, and we raised the following design principles:

* **Abstract Innovations**: The focus of IMoG lies on describing abstract innovations that are shared in a public committee. These innovations are represented by a mix of informal and formal elements to remain beneficial to all participants. IMoG models are expected to include fewer details than development and engineering models; therefore, complex modeling concepts are left out. This includes, for example, the concept of "Ports" to model communication interfaces of solutions and check their consistency. However, this does not mean that any kind of detail is too much. IMoG is expected to contain sufficient details of the crucial parts of the innovation, where the highest uncertainty and risk lie. Instead of ports, it is recommended to describe one communication channel with sufficient detail.
* **Problem space vs solution space**: IMoG divides the modeling into a problem description and a solution description [5, 15]: the problem space should mainly contain information about the problem, with as little information as necessary about the possible solutions. The solution space, on the other side, covers the possible solutions. Furthermore, a map between the problem space elements and the solution space elements is necessary for basic tracing. Natural-language constraints, quality requirements and general conditions complete this tracing by giving the option to add further information. In the context of a roadmapping committee, this problem-solution distinction is suitable because it eliminates the frequently asked question whether a particular "Function", "Block" or "Requirement" in the IMoG model describes the target state or the actually designed state, and thus helps reduce the cognitive load.
* **Support of Decomposition / Refinement / Variability**: IMoG distinguishes between three core concepts: Decomposition, Refinement and Variability. Decomposition describes a partitioning of an element into its parts. Refinement describes a more fine-grained specification of an element. Variability describes possible alternatives of an element. Variability tends to create the wish for measurement and assessment; thus suitable comparison parameters should be defined. These concepts are less important for describing the problem space; however, they are invaluable to understand and apply for the solution space.
* **Other modeling dimensions**: IMoG applies other concepts to manage innovation modeling as well. The concepts of abstraction levels and perspectives help with the separation of concerns by focusing on certain aspects of the innovation.
Furthermore, abstraction levels and types help with filtering mechanisms to hide temporarily unneeded details. The concept of availability describes when certain elements are available and ready to use, which plays an integral part in roadmapping.

### 4.2 Innovation Modeling Grid Methodology

The Innovation Modeling Grid (IMoG) is depicted as a matrix in Figure 4.1. Each row represents an abstraction level, which can be understood as separating and designing the details of the innovation at different detail levels. IMoG proposes three abstraction levels:

* The **Context Level** describes the innovation as a whole system embedded into its environment. In the automotive domain, this level is particularly interesting for the OEM(s) in the committee.
* The **System Level** represents the innovation system and its parts and is primarily relevant for Tier 1 suppliers in the classic automotive environment.
* The **Component Level** consists of the components of the system.

The modeler can also add more abstraction levels if needed. However, the three abstraction levels should be a sufficient starting point for most innovations. IMoG follows a classical approach to distinguish between the problem space and the solution space, and proposes to analyze the spaces through so-called perspectives. A perspective describes an aspect of the innovation, for example the strategy, the features or the structure of the innovation. In IMoG, each perspective is represented as a column. The Strategy Perspective, the Functional Perspective and partly the Quality Perspective relate to the problem space and focus on describing aspects of the problem without many technical details. On the other hand, the Structural Perspective, the Domain Knowledge Perspective and the latter part of the Quality Perspective relate to the solution space and describe potential technical solutions corresponding to the problem in an abstract manner. In the context of innovations, this solution space is kept abstract, as the knowledge about the future is only vague.

Figure 4.1: IMoG version 1.4. It contains three abstraction levels (rows) and five perspectives (columns). Each perspective and abstraction level is interconnected with its neighbor cell.

The five perspectives and the three levels of abstraction are arranged into a grid, where each cell in the grid represents a model of an aspect of the innovation on a specific abstraction level. A grid cell is called a _view_. Note that not all grid cells need to be filled: when a modeler does not see a purpose for one view, there is no issue omitting it. This may happen if a breakdown of a model is not further required. The IMoG meta model recommends a set of model elements for each perspective; the corresponding details are out of the scope of this article. Each perspective is presented in the following. Afterwards, the interconnections between the perspectives (and thus the arrows in Figure 4.1) are described.
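To make the grid structure concrete, the following minimal sketch (in Python; all names are illustrative assumptions, not part of any IMoG tooling) represents views as cells of a matrix keyed by abstraction level and perspective; an omitted view is simply an absent entry.

```
# Minimal sketch of IMoG's grid: views as cells keyed by abstraction level
# and perspective. All names are illustrative, not part of IMoG tooling.
ABSTRACTION_LEVELS = ["Context", "System", "Component"]
PERSPECTIVES = ["Strategy", "Functional", "Quality",
                "Structural", "Domain Knowledge"]

views = {}  # a view is one cell: a model of one aspect on one level

def add_view(level, perspective, model_elements):
    assert level in ABSTRACTION_LEVELS and perspective in PERSPECTIVES
    views[(level, perspective)] = model_elements

add_view("Context", "Structural", ["E-Scooter", "Driver", "Roadway"])
# Not all cells need to be filled; an omitted view is simply absent.
print(views.get(("Component", "Strategy"), "view omitted"))
```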
#### Strategy Perspective

The identification of an innovation usually starts with creativity techniques, sketches and discussions. These discussions are the starting point of the Strategy Perspective. The Roadmap Manager of the committee (see Section 3.1) takes the result and writes an innovation description as well as the strategy description behind the innovation. The description targets the innovation strategy, which may contain a vision, rationales, images, goals and diagrams. Those descriptions can contain identifiable elements to enable referencing and tracing. Additionally, the description contains the companies' intents and their stakes in the innovation. The description and identifiable elements encompass enough information to start the modeling activities on the other perspectives. The filled Strategy Perspective constitutes a part of the artifacts of the "Innovation Identification" activity.

In the following, we illustrate the process steps with the innovation "Providing mobility with an e-scooter" (see Figure 4.2). The Strategy Perspective of the e-scooter innovation includes a description with a vision and what the innovation is about, the goals written as text as well as goals listed as elements for cross-referencing, information from the car manufacturer (OEM) regarding their estimated customer needs, their concerns and possibly some additional bubble diagram for a better explanation of their interest, and information from the other suppliers (Tier 1 and Tier 2), including their interests, diagrams, etc.

Figure 4.2: Strategy Perspective: part of the innovation description of the e-scooter.

#### Functional Perspective

The Functional Perspective describes the required features (end-user visible characteristics) and functions (traceable tasks or actions that a system shall perform) of the innovation. The features and functions of the Functional Perspective represent a derivative of the well-known feature models from [9]. The Functional Perspective's input is the strategy description. Optionally, user stories or use cases can be created if the committee determines the need for more information on each feature and function. Considering the e-scooter example, the Functional Perspective model may look as depicted in Figure 4.3. It starts with "Providing mobility with an e-scooter" as its root feature, decomposed into several other features using well-known relations. Mandatory relations are depicted as an arrow with a black circle at its end, and optional relations as an arrow with a white circle at its end. An or-relation with cardinality is depicted as a black arc with several arrows going out of it (with its cardinality interval), complemented by a constraint relation ("requires"). For more details about these relations, see [9, 5]. The Variation Point representation is a labeled alternative relation whose graphical depiction is IMoG-specific.

Figure 4.3: Functional Perspective: a part of the feature model of the e-scooter.

The e-scooter feature description can be viewed on the left. It includes a detailed textual description, aligned goals, basic working conditions and other properties, like notes, priorities or links to user stories and use cases. The functions of the model are left out of the image. One specialty of the Functional Perspective is the detailed description of each feature and function, which helps to understand what they actually represent.
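As a concrete illustration of these relations, the following sketch (an assumption for illustration, not IMoG's normative meta model; feature names follow the e-scooter example) encodes a tiny feature tree with mandatory and optional relations plus a "requires" constraint, and checks a selected configuration for validity.

```
# Illustrative feature-tree sketch for the Functional Perspective.
# Feature names and the validity rules are simplified assumptions.
tree = {
    "E-Scooter mobility": [
        ("Driving", "mandatory"),
        ("Self-balancing", "optional"),
        ("Luggage transport", "optional"),
    ],
}
requires = [("Self-balancing", "Driving")]  # constraint relation ("requires")

def valid(selection):
    """Check a configuration against mandatory and requires relations."""
    for parent, children in tree.items():
        if parent in selection:
            for child, kind in children:
                if kind == "mandatory" and child not in selection:
                    return False
    return all(b in selection for a, b in requires if a in selection)

print(valid({"E-Scooter mobility", "Driving", "Self-balancing"}))  # True
print(valid({"E-Scooter mobility", "Self-balancing"}))             # False
```

Or-relations with cardinality and variation points are omitted here for brevity; they would add further checks of the same kind.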
#### Quality Perspective

Based on the strategy description and the features and functions, the Quality Perspective captures further quality requirements and constraints of each feature and function. Requirement diagrams and requirement tables are suitable representations of the Quality Perspective. The strategy description, the features and functions, and the requirements and constraints together build the problem space. It shall be noted that the Quality Perspective also contains the quality requirements and constraints of the solution space, which are referenced by the solutions on the Structural Perspective. The e-scooter innovation's Quality Perspective is depicted in Figure 4.4. It contains the quality requirements for the problem space and for the solution space.

Figure 4.4: Quality Perspective: a typical table of requirements with many attributes, which reference features or functions of the Functional Perspective or solution blocks of the Structural Perspective. The details (like the meaning of the attributes) can be chosen depending on each innovation and are not further elaborated here. As depicted on the right side of the image, filter functionality is of special importance for the Quality Perspective.

#### Structural Perspective

The Structural Perspective targets the modeling of the solution space. It is worth mentioning that the word "Structural" does not refer to the relations of solution blocks to each other alone, but also includes the properties and values of these solution blocks. The context level of the Structural Perspective contains the environment and the relations and effects between the environment and the innovation. A simple environment description may, for example, contain the street, the driver and the e-scooter (see Figure 4.5). It contains the innovation (e-scooter) with the driver and roadway blocks (blue rectangles with a name and optionally a stereotype over the name). Each block has variants attached that specify similar forms of solutions. The variants are depicted (as green and white boxes) next to the solution blocks with the stereotype <<Variant>>. Each of these blocks owns properties, which further refine the block. The properties are crucial information for analyzing and evaluating the solution space. Furthermore, relations like 'Incoming Forces' and 'weight' are modeled as unidirectional (purple) arrows, where purple represents the color for relations stereotyped as <<effect>>. The solution blocks of the different abstraction levels are left out of the model.

The system level contains the decomposition of the innovation into components, including the software and hardware elements. Software and hardware elements as well as architectures and mappings between them are included in the system level. The component level encompasses the system atoms, which are decomposed from the system blocks. The atoms may include sensor descriptions with parameters, functions, properties or abstract technologies. When creating a solution space, any form of constraints and parameters of chosen technologies is of particular interest. Furthermore, requirements can be added to any solution block on any abstraction level. These requirements are placed on the Quality Perspective and referenced by the corresponding solution blocks on the Structural Perspective. An example of a system model can be viewed in Figure 4.6: it decomposes the e-scooter block known from Figure 4.5 into several parts of the e-scooter. The model elements are designed specifically for the microelectronic context.

Figure 4.5: Structural Perspective - Context Level: A simple context model for the e-scooter.

Figure 4.6: Structural Perspective - System Level: The decomposition of the e-scooter into its system parts. It contains many blocks, variants, relations and channels (for modeling communication). This figure shall only give a glance at what may be included in the Structural Perspective.
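A minimal data-structure sketch of these Structural Perspective elements follows (illustrative assumptions drawn from the context model above; the class names and property values are not prescribed by IMoG): blocks carry properties and variants, and effect relations connect blocks.

```
# Sketch of Structural Perspective elements: solution blocks with variants,
# properties and <<effect>> relations. All names are illustrative
# assumptions taken from the context model described above.
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    properties: dict = field(default_factory=dict)  # e.g. {"weight": "12 kg"}
    variants: list = field(default_factory=list)    # similar forms of solutions

@dataclass
class Effect:
    source: str  # name of the source block
    target: str  # name of the target block
    label: str   # e.g. "Incoming Forces" or "weight"

escooter = Block("E-Scooter",
                 properties={"max speed": "20 km/h"},
                 variants=["City model", "Off-road model"])
driver = Block("Driver")
roadway = Block("Roadway")
effects = [Effect("Driver", "E-Scooter", "weight"),
           Effect("Roadway", "E-Scooter", "Incoming Forces")]
```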
#### Domain Knowledge Perspective

The insights from the innovation and the reusable elements are collected and stored on the Domain Knowledge Perspective. The elements of the Domain Knowledge Perspective enable references to the finished innovation model in future innovation models (see Figure 4.7). Furthermore, the Domain Knowledge Perspective may contain a component database in a knowledge representation. The database may, for example, contain sensor characteristics and constraints from road traffic regulations, with each element owning an id, a name, a type, an estimated year of availability and several properties depending on the context of the innovation. In essence, this perspective is used to refine the model with existing knowledge and constraints. Afterwards, the gained insights can be used to write the roadmap!

Figure 4.7: Domain Knowledge Perspective: A glance at the database view of the Domain Knowledge Perspective.

#### Connecting Perspectives

All perspectives were presented in detail; however, their interconnections need to be mentioned. These interconnections are already visible in Figure 4.1 and shall be described here briefly. The elements of the Strategy Perspective can be referenced by the features and functions, building the interconnection between the Strategy Perspective and the Functional Perspective (represented by the <<references>> relation in Figure 4.1). The constraints are part of the Quality Perspective and own a target reference to the corresponding features and functions. The same holds for the requirements mapped on the Structural Perspective's solution blocks. Thus the Quality Perspective has traces to both the Functional Perspective and the Structural Perspective (represented by the <<constrains>> relation in Figure 4.1). Each feature and function should be mapped on one or several solution blocks (represented by the <<allocate>> relation in Figure 4.1). This allocation is crucial, because it represents the interconnection of the problem space with the solution space. Finally, there is the reference between the solution blocks of the Structural Perspective and the Domain Knowledge Perspective (represented by the <<references>> relation in Figure 4.1). Thus all perspectives are interconnected. It is worth noting that the IMoG modeler has to take care not to introduce inconsistencies (e.g., a requirement that is mapped on a feature or function which is then allocated on a solution block that owns a contradicting requirement).
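The following sketch illustrates this inconsistency hazard (the data, names and check are simplified assumptions, not IMoG tooling): it follows the <<allocate>> and <<constrains>> traces and flags features whose own requirements and whose allocated blocks' requirements should be reviewed together.

```
# Naive sketch of the cross-perspective traces (<<allocate>>, <<constrains>>)
# and of the inconsistency the IMoG modeler must watch out for. The data
# and the check are simplified assumptions, not IMoG tooling.
allocations = {"Driving": ["Drive Train"]}  # feature/function -> solution blocks
constrains = {                              # requirement -> constrained element
    "R1: Speed >= 20 km/h": "Driving",
    "R2: Speed <= 15 km/h": "Drive Train",
}

def needs_review(feature):
    """Flag a feature that has requirements of its own while its allocated
    solution blocks also carry requirements; a human must then check
    whether they contradict (as R1 vs. R2 do here)."""
    feature_reqs = [r for r, t in constrains.items() if t == feature]
    block_reqs = [r for r, t in constrains.items()
                  if t in allocations.get(feature, [])]
    return bool(feature_reqs and block_reqs)

print(needs_review("Driving"))  # True: review R1 against R2
```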
#### Reviewing IMoG: Pros and Cons

IMoG's definition comes with strengths, some limitations and some recommendations, which are presented in the following together with the experience of the authors. One strength of IMoG is that it is well defined and owns a concise meta model for innovations. This is illustrated by the following points. First, the distinction between problem and solution space reduces the thinking overhead when exploring innovations: one can first focus on the problem and its needs before diving into solutions. This distinction was evaluated as very helpful in IMoG's definition phase. Second, the recommended elements of IMoG are on an appropriate level of abstraction for modeling innovations. Elements that are required in engineering phases are left out; this is especially true for innovations discussed in committees. Third, IMoG's perspectives and abstraction levels represent a good choice. They are neither too many nor too detailed, yet they capture the important aspects of innovations, like strategies, features and functions, requirements and constraints, as well as solutions and properties. And fourth, the handling of availability and variability is supported as well, which is crucial for modeling future innovations while coping with the design space.

Another strength is IMoG's flexibility. It is possible to start an innovation from a market pull as well as from a technology push. A market pull is understood as an innovation that is driven by a demand of the market; in IMoG, it starts with the identification of the innovation on the Strategy Perspective and then slowly moves over the functions and requirements to the solutions. A technology push is understood as an innovation that is driven by the development of a new technology; in IMoG, it starts with modeling the technology on the solution level and then slowly explores the possible demands on the Quality Perspective, Functional Perspective and Strategy Perspective. Another sign of IMoG's flexibility lies in its domain-agnosticism. IMoG targets microelectronic innovations discussed in a committee; however, many elements of IMoG are abstract enough to be used in any context. Elements like software and hardware are more system-related, but still very abstract. Thus, IMoG can be applied to sufficiently similar problems. Furthermore, IMoG is easy to apply with an IMoG expert: by guiding the exploration process, substantial time can be saved, as the people creating the idea do not have to bother with the modeling elements and modeling decisions. This was also validated in the evaluation studies conducted by the authors, in which IMoG's validity, usefulness and adequacy were all positively evaluated.

IMoG also has some limitations. Its high abstraction is the cost of its flexibility: without an IMoG expert, it is challenging to find a suitable path through the grid for a specific innovation, because multiple paths may seem valuable. Furthermore, detailed behavioral models are out of the scope of IMoG. This can be considered a strength, as state machines and the like are often too much detail for innovations; if detailed behavioral models are really required, they may be added as an attachment. On the other side, the high level of abstraction is definitely a challenge when transforming the IMoG model into a product-level model. A transformation approach is required here that takes the innovation's context into account (see, e.g., Broy et al. [3]). Overall, IMoG is difficult to apply without guidance from an IMoG expert. Another limitation of IMoG is the scalability known from other modeling languages: its graphical nature does not scale very well in large diagrams. However, innovation models tend to have a small number of elements; therefore, scalability has not yet been identified as a big problem. Intellectual property protection is of high importance in committees. This limitation is not tackled by IMoG; however, IMoG does not restrict the use of further approaches tackling this issue while using IMoG.
From the experience of the authors, the following three recommendations support the application of IMoG. First, it is recommended to interpret the abstraction levels as filtering mechanisms. This thinking helps to apply abstraction levels only when they provide a clear advantage and not just "...because IMoG says so". Second, it is recommended to search for an IMoG expert before starting the innovation modeling in a committee. Without one, the whole modeling phase may become quite challenging and inefficient; this may include improvised, ineffective meetings with inconsistent diagram exchanges. Finally, it is recommended to make use of the Glossary and FAQ that were created for IMoG as a whole as well as for every perspective.

### 4.3 FAQ

**The Roadmap Manager, the Requirements Engineer, the System Architect and the Domain Expert are the stakeholders mentioned in IMoG. However, requirements engineering and systems modeling often involve many more stakeholders. Why are only these few stakeholders defined?**

Requirements engineering and systems modeling often list the stakeholders who have a stake in the product, like the customer, the maintainer, the investor, etc. However, in the context of roadmapping innovations in a committee, modeling those stakeholders is often not needed. This stems from the abstract level of the roadmapping activities and the avoidance of details, in order to not unnecessarily constrain the solution space and to protect intellectual property. The stakeholders mentioned here are the roles that build the innovation model in the committee. To not assume or restrict the internal structures and to not confuse the participating corporations, the stakeholders for each perspective are intentionally kept abstract. The stakeholder descriptions exist to hint at the participation roles in IMoG and are expected to be filled by people that have a different role description in their company. One person may even fulfil more than one stakeholder role in IMoG.

**Capturing and modeling failures and paths that did not succeed can help enormously to avoid repeating the same error. Is there a way to model failures in IMoG?**

There is currently no support for modeling failures. One may include a custom stereotype and filter mechanism to support failures; however, this would introduce some difficulties in tracing and coverage analysis, generate integration problems and cause model bloat. Additionally, failures may concern not only solutions but features and functions as well. For example, the Flying and Driving functionality combined may be unfeasible and represent a failure in the model. The removal or explicit marking of one of those functionalities as a failure may involve many blocks and relations and cause the model to break. Thus, there is no support for failure modeling. To model failures, one may use mechanisms of version control, separately maintained failure models and textual descriptions to cover this need. Additionally, one may use the Knowledge Perspective to add knowledge about failures and add descriptions of failed blocks into the database. The latter options are considered sufficient.

**A typical way to move from the problem space to the solution space is to first model functions, then describe the technology (or concepts) and lastly describe the solutions via elements like components and parts. The technology and concepts are considered constraints.
What does this move look like in IMoG?**

In IMoG, this move can be represented in a similar fashion: first, the functionality is described on the Functional Perspective. Then the technology and the solutions are both described on the Structural Perspective: the technology can be represented using so-called Templates of Groups of Components with their connections. If the technology description contains some constraints, these can be described as requirements on the Quality Perspective and linked to the templates and inner components. The solutions can be represented as the components (or Blocks) of the templates. Thus, an element mapping may look as follows: Function ↔ Function; Technology ↔ Template; Solution ↔ Block (Part).

**Does a problem-solution separation exist in IMoG?**

Yes. The Strategy Perspective, the Functional Perspective and half of the Quality Perspective belong to the problem space. The Structural Perspective, the Knowledge Perspective and the other part of the Quality Perspective belong to the solution space.

**How to properly handle changes in the IMoG model?**

Analyses like change impact analyses, version control with diff tools that show the differences on a model level (similar to Git, but on a different abstraction level) and other model evolution analyses shall be provided. The implementation details are yet to be defined. A prototypical, IMoG-independent implementation can be viewed in the tool IRIS from the University of Ulm.

**Where is the line to distinguish between abstraction layers?**

There is no clear recommendation that says when to use which abstraction layer. Instead, it is recommended to interpret the abstraction levels as filtering mechanisms. This thinking helps to apply abstraction levels only when they provide a clear advantage and not just because IMoG says so.

**Does IMoG support change management?**

IMoG's definition does not address this topic, because change management is viewed as an orthogonal topic. The model management and tooling should support change management via version control systems and suitable processes.

## Part II Details on the IMoG Methodology

## Chapter 5 Strategy Perspective

The Strategy Perspective is the first perspective in IMoG applications and targets the capturing of the chosen innovation (see Figure 5.1). The purpose of the Strategy Perspective is the capturing of the innovation idea as well as of the interests and strategies of the stakeholders. Regarding the IMoG process, the Strategy Perspective is the generated artifact of the first activity, "Innovation Identification". The identification itself is not part of the Strategy Perspective: it is recommended to use creativity techniques to identify the innovation and afterwards use the Strategy Perspective to describe the innovation extensively. The creativity technique should be chosen based on the preferences and experience of the involved stakeholders in the committee. One key principle of the Strategy Perspective is to not burden the committee with modeling restrictions and to give as much freedom as possible to capture the early innovation. Thus, the only guidelines of the Strategy Perspective concern labeling and referencing, to allow a tracing of information. Overall, the Strategy Perspective can be seen as the presentation of the innovation to externals.
Figure 5.1: Location of the Strategy Perspective in IMoG.

The innovation identification activity and the capturing of the innovation can be imagined as follows (see Figure 5.2): the identification of an innovation starts with many discussions, sketches and informal descriptions. For such activities, creativity techniques like brainstorming, scenario projection and Zwicky boxes are suited. Based on these ideas, marketing analysts take the idea, perform market segment analyses and analyze business opportunities. This may include looking at user needs, environmental constraints, business models, time-to-market predictions and more. The outcome of these creativity results and analyses is the starting point of the Strategy Perspective. The committee leader takes the outcome and writes a draft of the innovation description. The description is then refined with the committee. The description may contain a vision, an explanation of the overall strategy, goals and diagrams. Identifiable elements can also be added to the content of the Strategy Perspective to allow referencing and tracing of goals, text phrases, etc. The description and identifiable elements encompass enough information to start the real modeling activities on the other perspectives.

The chapter is structured as follows: In Section 5.1 the meta model and its model elements are presented. In Section 5.2 an example of the Strategy Perspective is given. The strengths and limitations of the Strategy Perspective are discussed in Section 5.3. An FAQ finalizes the description in Section 5.4.

Figure 5.2: Activities considered for the Strategy Perspective.

### Model elements

The meta model of the Strategy Perspective is kept simple. When discussing strategy, it is uncommon to just start modeling activities; instead, the committee is mostly interested in the description of their interests. The meta model tries to encompass this view by only introducing descriptions (in the form of HTML divs) and traceable (identifiable) elements in the Strategy Perspective model. The descriptions can be labeled to allow filtering them out. There are no relations defined for connecting different information on the Strategy Perspective; however, the identifiable elements are defined for the purpose of perspective cross-referencing (including references to functions, requirements and structure).

Figure 5.3: The model elements of the Strategy Perspective.

**HTML Div**

Description: The HTML Div is the container for all descriptions, textual goals, diagrams, etc. Additionally, HTML Divs contain the identifiable elements embedded in their descriptions. HTML Divs can be named to allow the tooling to filter them out.

Example: The round rectangle represents an example HTML div with a name, descriptions, an image and identifiable elements. It carries the innovation description "Providing mobility with an e-scooter": the addition of other mobility concepts, besides cars and public transport, makes a valuable contribution to a better road transport.
**Identifiable element**

Description: The identifiable element is designed for tracing across perspectives. It defines the following attributes:

* The _id_ represents the identifier. It can be a number, a string or any other value the tooling allows.
* The _category_ attribute allows custom grouping of identifiable elements by strings. Example categories (which may be proposed by the tooling) are:
  * Modeling Goals
  * Sub Goals
  * Marketing Strategies (e.g. "Production + Sales OEM")
  * Parameters + Characteristics (2 Brakes)
  * Chosen E-Scooter values (Speed > 100km/h)
  * Tier 1 specialized part (Microcontroller Z)
  * Mindmap element
  * Technology demand
* The _text_ represents the content.
* The optional _value_ attribute allows enhancing the element with a value. The value can be used for checking consistency between the identifiable element of the Strategy Perspective and any other element that includes a value or property. Note that in early strategy considerations, values are seldom available.
* The _discussion_ and _version_ fields enhance the element description by allowing discussions and version control.

Example: The following three examples represent identifiable elements with an id, a category and a text. The optional value is not set, and there exists no discussion and version number (the empty non-optional attributes are not shown here).

PMG1 | Modeling Goal | The evaluation of future battery technologies
PMG2 | Modeling Goal | The evaluation of future microcontroller technologies and their software
PMG3 | Modeling Goal | An evaluation of future e-scooter use cases.
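A minimal sketch of this element follows (the attribute names mirror the list above; the class itself is an illustrative assumption, not part of IMoG tooling):

```
# Minimal sketch of the Identifiable Element described above. The attribute
# names follow the text; the class itself is an illustrative assumption.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IdentifiableElement:
    id: str                      # e.g. "PMG1"
    category: str                # e.g. "Modeling Goal"
    text: str                    # the content
    value: Optional[str] = None  # optional, enables consistency checks
    discussion: str = ""         # allows discussions
    version: str = ""            # allows version control

pmg1 = IdentifiableElement("PMG1", "Modeling Goal",
                           "The evaluation of future battery technologies")
ep1 = IdentifiableElement("EP 1", "E-Scooter Property",
                          "The e-scooter shall be able to reach 20km/h.",
                          value="Speed > 20km/h")
```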
### E-Scooter example

The example of the Strategy Perspective describes the innovation of "Providing mobility with an e-scooter". The example is divided into two views: the Strategy Description View describes the innovation textually and graphically from the OEM view, the Tier 1 view, the Tier 2 view and a general view. It contains descriptions, goals, business models, aspects, important notes and diagrams from creativity methods. Additionally, some text phrases and goals are made identifiable to allow a mapping and tracing on the other perspectives. The Strategy List View shows the identifiable elements from the Strategy Perspective that were used in the Strategy Description View; the descriptions are left out. This view is used to focus on and highlight the identifiable elements.

**Strategy Description View**

**Description:** _The addition of other mobility concepts, besides cars and public transport, makes a valuable contribution to a better road transport. E-scooters represent a flexible and perfect way to get from A to B for short distances, as they are more environmentally friendly, transportable and practical for many situations._

**Goal of the innovation (written back in the 2010s):** An e-scooter is in itself not something new. However, several aspects are questionable as to whether they are sufficiently fulfilled. Starting with the limited amount of energy: accumulators are either too heavy to transport while driving, or the capacity of the accumulator is still too small. New evolving battery technologies, however, may push the e-scooter to a practicable level. In the same sense, microcontrollers and balancing systems are nowadays too unreliable and too big to be comfortably built inside the e-scooter. The evolving AI technologies may be a promising approach to tackle fast decision making. Additionally, the legislative gray zone speaks against a fast introduction of e-scooters. There are some more problems related to e-scooters, like space requirements in the ÖPNV (public transport), etc. The goal of this innovation model is to make a feasibility analysis of e-scooters as a promising and technically practicable mobility solution in the future.

**Preliminary Modeling Goals:** The following elements shall be included in the model:

PMG1 | Modeling Goal | The evaluation of future battery technologies
PMG2 | Modeling Goal | The evaluation of future microcontroller technologies and their software
PMG3 | Modeling Goal | An evaluation of future e-scooter use cases.

**OEM Information**

User Needs Projection:
* The user needs to be able to drive and perform similar tasks to reach their destination.
* The user wants to carry stuff alongside.
* The distance reached shall be acceptably high (e.g. 20km).
* The e-scooter shall optionally help with balancing issues.
* (The exact user needs shall be defined by the requirements engineer.)

User Stories: The user wants to get from point A to B (short trip) with a comfortable e-scooter. (The exact user stories shall be defined by the requirements engineer.)

Environmental Constraints: From the legal perspective (2010), e-scooters are still in a gray zone. Legislation needs to be passed to let them drive on the streets.

Business Opportunities:
Time to Market: Battery technologies are troublesome. It is expected, though, that once the legislation is done, all issues are solved.
Return on Investment: Fierce market pressure is expected, because the electro-mobility technology is well known. However, there shall be a good profit as a pioneer.

OEM Business Model: There are several business models possible. Three of them are as follows: The easiest business model contains the **production and sales** of the e-scooter itself. This works very well while the market is relatively new, the product is technically superior or the brand itself is an important factor. Another option contains the **leasing** of the e-scooters. Production superiority plays a less important role in the leasing model; however, higher costs are expected in the product life cycle due to the service effort. A key driver for this model is good customer retention.
Another option contains the collaboration with the government to provide e-scooters as **part of the ÖPNV**. This option is especially interesting when the local or country-wide ÖPNV is **free of charge**.

Additional OEM Important Aspects: The OEMs agreed on certain subgoals that they want to be fulfilled by the later innovation modeling. These subgoals shall be tracked in the model; thus they are given IDs. However, they are not meant as fully fledged requirements yet. The requirements engineer shall take them as input and create detailed requirements. The subgoals are as follows:

SG 1 | Sub Goal | The e-scooter shall be able to transport the user.
SG 2 | Sub Goal | The e-scooter shall be comfortable to transport.
SG 3 | Sub Goal | The e-scooter shall be able to be parked in any legal location.
SG 4 | Sub Goal | The e-scooter shall be available for leasing.

OEM Notes: Additionally, the OEMs made notes about words they used and which basic conditions they mean. These are not given IDs; however, they shall be considered:

1. **Basic Working Conditions:** Mission areas are different scenarios. The scenarios include individually owned e-scooters, permanently used e-scooters, comfort-requested e-scooters and simply requested e-scooters.
2. An e-scooter is self-balancing if it is equipped with an integrated electronic balance, propulsion, steering and deceleration system by which it can maintain its balance.

**Creative idea drawings:** The following diagrams shall provide additional information and support modeling. The creativity method "Zwicky Box" (also known as "Morphologischer Kasten") is used. The first step of the creativity method "Zwicky Box" contains the identification of the problem. Here, the problem is formulated as the question: "_How could the future mobility look like?_" The next step contains the collection of parameters, which represent the most important indicators of the mobility concepts. Here, "_Speed_", "_Travel Distance_", "_Parking Overhead_", "_Luggage Space_" and "_Drive Train_" are chosen. The third step includes the identification of characteristics of each parameter. Here, several speed values, travel distances, etc. are chosen. This results in the template table below. The fourth and final step is the creative and exciting one: based on the template, combine the different characteristics into new mobility concepts. After combining, you may have some thrilling new mobility solutions. Here, of course (next to several other combinations), the e-scooter mobility solution was identified and taken for further consideration. The diagram adds valuable additional information from the OEM perspective for the modeling of the innovation! One piece of information was created as an identifiable element for further references:

EP 1 | E-Scooter Property | The e-scooter shall be able to reach 20km/h. | Speed > 20km/h

**Problem**: How could the future mobility look like?

| Parameter | Characteristics | | |
|---|---|---|---|
| Speed | <30 km/h | 30-70 km/h | >70 km/h |
| Travel Distance | <10 km | 10-100 km | >100 km |
| Parking Overhead | Yes | Parking Lot | |
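The combination step of the Zwicky Box can be illustrated with a short script (an illustrative sketch; the characteristics follow the template above, and the enumeration is just one way to support the creative step):

```
# Sketch of the "Zwicky Box" steps: collect parameters and characteristics,
# then enumerate all combinations as candidate mobility concepts.
from itertools import product

zwicky_box = {
    "Speed":            ["<30 km/h", "30-70 km/h", ">70 km/h"],
    "Travel Distance":  ["<10 km", "10-100 km", ">100 km"],
    "Parking Overhead": ["Yes", "Parking Lot"],
}

# The creative final step is picking promising combinations, e.g. a slow,
# short-range concept with low parking overhead: the e-scooter.
for combination in product(*zwicky_box.values()):
    print(dict(zip(zwicky_box.keys(), combination)))
```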
## Tier 1 Information

### Tier 1 Business Model:

## Tier 2 Information

### Tier 2 Business Model:

The e-scooter innovation provides just a few Tier 2 interests. First and most important, **providing e-scooter technology** makes the Tier 2 take part in the e-scooter branch; taking part in a new market branch equates to more sales. Second, **the identification of new mobility solutions and their enablers** allows the Tier 2 to forecast the new technologies needed for developing those solutions, which in turn builds market advantages. Some technology demands could be:

* new energy supply system concepts
* better sensors
* optimized small microcontroller architectures
* human interaction display technology

Especially the energy supply technology is a promising demand for the Tier 2s. That considered, there is no disruptive behavior recognizable for the Tier 2s.

### Tier 2 Market Segment Analysis:

User Needs Projection:
* The OEM and Tier 1 need batteries that have a huge capacity.
* (The remaining user needs shall be defined by the requirements engineer.)

User Stories: The OEM and Tier 1 want to get some better battery technology from us.

Environmental Constraints: None worth mentioning.

Business Opportunities:
**Time to Market:** Immediately, once the technology is available.
**Return on Investment:** Market pressure is expected once other Tier 2s have caught up with similarly good technology.

Tier 2 Creative idea drawing: The Tier 2 chooses to draw a mindmap (see the figure below). The mindmap focuses on the four identified technological interests of the e-scooter, namely the energy supply system, the sensors, the microcontroller architecture and the human interface. Each of the parts has, as leaves, important properties and technologies assigned which the Tier 2 identifies as the most interesting. The mindmap adds general value to the information of the OEM and Tier 1 in the form of the important technology factors.

## 5.3 Strategy Perspective: Strengths and Limitations

The Strategy Perspective is kept abstract on purpose. It contains (mostly) informal descriptions, making it easy to kick-start the Strategy Perspective by directly starting with the creativity method results. Additionally, the Strategy Perspective does not restrict IMoG to any specific creativity methodology. While the burden of choosing a creativity methodology for an innovation is shifted to the user, being able to take a methodology proven among the stakeholders instead of a predefined one is considered an advantage. These decisions to focus on abstract and interchangeable modeling make the Strategy Perspective simple to model and visualize. On the other hand, the abstract and informal model of the Strategy Perspective provides a basis for little analysis beyond tracing. This is not per se a con, but rather a limitation that was traded in for flexibility through informality.

### 5.4 Strategy Perspective FAQ

The FAQ splits up into the categories:
1. Questions and answers about the general Strategy Perspective model elements
2. Questions and answers about the relation of the Strategy Perspective to other perspectives

**Strategy model elements:**

**What identifiers and categories shall be chosen for the identifiable elements?**

Both identifiers and categories shall be defined as best fitting the innovation. The identifier and category specification does not constrain the definition by more than being a number or string type.

_Example:_ The innovation "Providing mobility with an e-scooter" may have the following identifiable elements. The first element is of the "Sub Goal" category and has as identifier the initials "SG" with a number attached. The second element is built similarly to the first; here the category is more innovation-specific ("E-Scooter Property") and the identifier is "EP 1".
SG 1 | Sub Goal | The e-scooter shall be able to transport the user.
EP 1 | E-Scooter Property | The e-scooter shall be able to fly.

**When to use values for the identifiable elements?**

Values shall be used when the strategy information can be represented as such and represents a basic condition that shall be automatically evaluated against later defined model elements.

_Example:_ EP 1 represents a basic condition that shall be fulfilled by the later e-scooter models; thus it has a value attached. EP 2 represents only guidance for the e-scooter distance reached; thus it does not contain a value.

EP 1 | E-Scooter Property | The e-scooter shall be able to reach 20km/h. | Speed > 20km/h
EP 2 | E-Scooter Property | The e-scooter distance reached shall be around 30km. In general, the more the better. However, for small e-scooter models, the distance may fall below 30km.

**Relations to the other perspectives:**

**How does the Strategy Perspective relate to other perspectives?**

The Strategy Perspective represents the first thoughts on the innovation and breaks them down into an innovation description and a set of identifiable elements. The Strategy Perspective builds the basis of understanding; the modeling activities in the other perspectives build on this basis. The identifiable elements are designed to transport valuable information to the modeling elements of the other perspectives by referencing and tracing the identifiable elements onto the other perspectives' elements.

_Example:_ SG 1 describes a function in an abstract way. This function shall be considered in the modeling activities; thus SG 1 was created as an identifiable element for tracing and referencing. The feature tree modeler creates the feature "Driving" and references the identifiable element onto it to show that the strategy request is fulfilled.

SG 1 | Sub Goal | The e-scooter shall be able to transport the user.

Similar to SG 1, EP 1 describes a requirement that is translated and mapped on a requirement by a requirements engineer. The reference on the requirement represents that the requirements engineer has fulfilled the request from the strategy department.

EP 1 | E-Scooter Property | The e-scooter shall be able to reach 20km/h. | Speed > 20km/h
ID: R1 | Priority: 1 | Name: Speed | Text: The e-scooter speed shall reach 20km/h on flat ground with a normal person and no extra luggage.

**A general problem is the jump from creative methods to the modeling viewpoints. How shall this jump from the Strategy Perspective to the other perspectives be done?**

The jump from the creative methods to the modeling viewpoints is done via some additional steps in between (for a visual representation, see the working process under the 'IMoG.drawio - IMoG Working Process' tab):
1. The results of the creativity methods are integrated into the innovation description.
2. From there on, the traceable elements are defined.
3. Creating abstract user stories and use cases has been identified as a reasonable intermediate step.
4. From there on, the feature tree and requirements can be identified.

_Example:_ The OEM used a "Zwicky Box" as a creativity method. The "Zwicky Box" result was integrated into the innovation description. The speed parameter from the "Zwicky Box" was identified as one identifiable element. Then the user stories and use cases were built (left out here) and the modeling activities with feature trees and requirements were started. As a result, the requirement R1 was derived.
For further details, see the Strategy Perspective example as well as the Functional Perspective example and Quality Perspective example in their respective draw.io files.

## Chapter 6 Functional Perspective

The Functional Perspective is the second perspective in IMoG and describes the features (end-user visible characteristics) and functions (traceable tasks or actions that a system shall perform) of the innovation (see Figure 6.1). The purpose of the Functional Perspective is to capture the features and functions required for the innovation before diving deep into solutions. The Functional Perspective is thus used in the early phases of innovation modeling and is part of capturing the problem space.

The Functional Perspective model is based on the well-known feature trees [9]. Feature Trees are a subclass of Feature Models which restrict themselves to tree structures. This restriction reduces the complexity and increases the ability to maintain an overview. Feature Trees put a high focus on variability and are often used in product line engineering. The high-level innovation modeling in IMoG, on the other hand, does not focus much on variability, and the modeling of details is expected to happen after the innovation modeling, as part of the subsequent design and engineering phases. That noted, having the ability to model a bit of variability may help in expressing innovation dependencies.

Figure 6.1: Location of the Functional Perspective in IMoG

IMoG also makes some extensions to feature trees; however, these are mostly of a cosmetic nature and can be directly translated to the default feature trees. This translation allows the use of available Feature Tree analysis tools.

The chapter is structured as follows: In Section 6.1 the meta model and its model elements are presented. In Section 6.2 an example of the Functional Perspective is given. The strengths and limitations of the Functional Perspective are discussed in Section 6.3. A FAQ finalizes the description in Section 6.4.

### 6.1 Model elements

The meta model (see Figure 6.2) builds on FODA (the Feature Tree model and Feature Diagram model, [9]) and includes all relevant concepts of FODA. The meta model has, as the top-level unit of the Functional Perspective, the Functional Perspective Model. It contains a set of Blocks (FP) with Relations (FP) between them. Additionally, Groups of Blocks (FP) are contained. Blocks (FP) represent the basic units of functionality known from Feature Trees and are extended with several attributes, Block Types and an Abstraction Level (which can be either Context Level, System Level, Component Level or of type Custom Abstraction Level). A detailed description of the attributes can be found in the Block (FP) description. Unlike the original FODA model definition [9], where functions are explicitly not part of the Feature Tree, Blocks (FP) are here further categorized into features and functions to specify what exactly a 'Block' means. A feature represents a logical unit of behavior that is too abstract to be mapped onto structural solutions, while a function represents a unit that can be mapped onto the structural solutions. For a flexible mapping, each feature shall have a set of functions.
The ability to model functions allows the seamless tracing from features onto functions and later onto solutions on the Structural Perspective. Several types of Relations between Blocks (FP) can be made, including and extending the typical relations from Feature Trees. The Relations (FP) split up into Parent-Child Relations, Constraint Relations and Variation Point Relations. The Parent-Child Relations include the Alternative-Relation, the Or-Relation, the Mandatory-Relation and the Optional-Relation known from Feature Trees. Additionally, the Parent-Child Relations can optionally be labeled as 'Refinement' or 'Decomposition', or a custom Parent-Child Relation can be used. The Constraint Relations include the known extensions to Feature Trees to express restrictions on configurations: the Require and Exclude relations. If these are not enough, custom Constraint Relations can be added. The last extension made to Relations (FP) is the Variation Point Derivation Relation to represent similar alternative choices. The following model element descriptions dive into more detail.

Description: **Functional Perspective Model**

The _Functional Perspective Model_ is the diagram of the Functional Perspective of an innovation. It contains all model elements of the Functional Perspective.

Example: A full Functional Perspective Model example is shown in Section 6.2.

Figure 6.2: The model elements of the Functional Perspective.

Meta Model Element: Description: **Block (FP) (Read: Block on the Functional Perspective)**

The _Block (FP)_ is the abstract Block element of the Functional Perspective which is implemented by _Features_ and _Functions_. It defines the attributes of the Blocks:

* The _Abstraction Level_ of the Block defines the level of abstraction the Block represents. It can be either _Context Level_, _System Level_, _Component Level_ or of type _Custom Abstraction Level_.
* An optional _Custom Block Type_ can refine the category of the Block further.
* The _HTMLDiv_ represents the description of the Block; it solves the problem of a lack of clarity by adding information next to Feature Trees. The description shall briefly answer what the Block shall provide, give the reasoning behind the Block and its basic working conditions, and, if the Block has alternative choices, additionally the binding time of the choice. The binding time being part of the _HTMLDiv_ is considered enough here; it does not have to be a Block property as proposed in the original Feature Tree publication [9]. There is no template needed. Images or drafts provide valuable information.
* Optional _Custom Block Properties_ can be defined for additional tooling analysis, including filtering and consistency checks.
* The _User Stories_, _discussion_ and _version_ enhance the Block description.

Example: The following block shows an example of a Block (FP) with its attributes.

**Providing mobility with an e-scooter**

**Description and Reasoning:** _The addition of other mobility concepts, besides cars and public transport, makes a valuable contribution to better road transport.
E-scooters represent a flexible and perfect way to get from A to B for short distances, as they are more environmentally friendly._ Some Subgoals are...

**Basic Working Conditions:** The scenarios include individually owned e-scooters, permanently used e-scooters, comfort-requested e-scooters and simply requested e-scooters...

**Priority (Property):** 1

**Notes:** (1) Alternative Choice (Binding Time) resolved by the company's Application Engineering. (2) An E-Scooter is self-balancing if it is equipped with an integrated electronic balance, propulsion, steering, and deceleration system by which it can maintain its balance.

User Stories / Use Cases | Discussion / Feedback

Meta Model Element: Description: **Relation (FP) (Read: Relation on the Functional Perspective)**

The abstract _Relation (FP)_ describes relations between _Blocks (FP)_ or, respectively, _Variation Points_ on the Functional Perspective. _Relations_ are further categorized into:

* _1-to-1 Variation Point Relations_
* _Parent-Child Relations_ between Blocks (FP), of either category _1-to-n Parent-Child_ relation or category _1-to-1 Parent-Child_ relation. The _Parent-Child Relations_ can be specified by an optional type, which can be either of value _Decomposition_ or _Refinement_. Note that these additional stereotypes are similar to the relation differentiation _{Specialization (Refinement), Decomposition, Parametrization}_ outside the model definition in the original Feature Tree publication [9].
* _1-to-1 Constraint Relations_ between Blocks (FP)

Each relation is described in more detail below.

Example: An example is shown for each relation under its respective description.

Meta Model Element: Description: **1-to-n Parent-Child Relations**

The _1-to-n Parent-Child Relations_ build the foundation for the well-known _Alternative_ and _Or_ Relations from Feature Trees. _Alternative_ and _Or_ Relations can connect one parent Block (FP) with multiple child Blocks (FP). They are used to describe Decomposition or Refinement choices of the parent Block. Additionally, 1-to-n Parent-Child Relations can own Variation Points. _Variation Points_ represent the description of the choice or variability and are written optionally. For Refinement Relations, the special Refinement Representation using _Variant Lists_ can be used. However, the normal representation and the Variant List Refinement representation are both valid.

**Alternatives Representation:** Both representations are allowed! The Refinement Representation is only useful for Alternative depths up to 1, such that any further Refinement alternative depth is represented as a normal Feature Tree Alternative. The gain of the Refinement Representation lies in its compact representation, and it is thus enabled for usage in IMoG Feature Trees.
Example: *(figure: an Alternative choice between child Blocks A, B and C)*
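To make the relation types above concrete, here is a minimal sketch of how Blocks (FP) and their Parent-Child groups could be represented in code. All class and field names are hypothetical; IMoG prescribes only the meta model, not an implementation:

```python
# Hypothetical data-structure sketch for Blocks (FP) and their
# Parent-Child relation groups (mandatory, optional, alternative, or).
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class BlockType(Enum):
    FEATURE = "Feature"
    FUNCTION = "Function"

class GroupKind(Enum):
    MANDATORY = "mandatory"      # child must be in every configuration
    OPTIONAL = "optional"        # child may be chosen
    ALTERNATIVE = "alternative"  # exactly one child (cardinality [1,1])
    OR = "or"                    # cardinality [min, max] children

@dataclass
class Block:
    name: str
    block_type: BlockType = BlockType.FEATURE
    # list of (kind, children, cardinality) triples; cardinality for OR only
    children: list = field(default_factory=list)

    def add_group(self, kind: GroupKind, children: list["Block"],
                  cardinality: Optional[tuple] = None):
        self.children.append((kind, children, cardinality))

# A "Driving" feature with a mandatory "Braking" function that is refined
# by an Alternative choice of two braking variants:
driving = Block("Driving")
braking = Block("Braking", BlockType.FUNCTION)
driving.add_group(GroupKind.MANDATORY, [braking])
braking.add_group(GroupKind.ALTERNATIVE,
                  [Block("Manual Braking", BlockType.FUNCTION),
                   Block("Hardware Braking", BlockType.FUNCTION)])
```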
**Model Elements:**

Block Types:

1. Features
2. Functions

Relation Types:

1. And Relation (Mandatory Sub-Features) with
   * Refinement / Decomposition Relations
2. Optional Features (+ Optional Relations) with
   * Refinement / Decomposition Relations
3. Xor Relation (Alternative or Variant) with
   * Refinement / Decomposition Relations
   * Variation Point
   * Variant List (Refinement Representation)
   * Cycle-free Variation Point Selection Derivation
4. Or Relation with Cardinalities with
   * Refinement / Decomposition Relations
5. Constraint Relations (Require / Exclude)
   * Constraint Grouping

Miscellaneous:

1. Notes

In the following, the model elements are introduced. First the two Block Types are introduced, and then the relations between the Blocks are presented. Lastly, the "Notes" element is introduced.

Block Types

Meta Model Element: Description: **Feature**

The _Feature_ is a Block (FP) of the Functional Perspective and is, next to _Functions_, one of the two existing Block (FP) elements. A Feature defines a logical unit of behavior. Its semantics originates from Feature Models [9]. However, Features are additionally understood here as actionable, uncountable items and shall be described like an activity. A Feature is considered too abstract to be mapped onto structural solutions. The Stereotype <<Feature>> can be omitted.

Example:

Description: **Function**

The _Function_ is a Block (FP) of the Functional Perspective and is, next to _Features_, one of the two existing Block (FP) elements. A Function defines a logical unit of behavior that shall be implemented by structural components. Functions are understood here as actionable, uncountable items and shall be described like an activity.

Example:

Relations

Description: **Mandatory Relation**

The _Mandatory_ relation connects one parent (always the top one) Block (FP) with a child (always the bottom one) Block (FP). The _Mandatory_ relation describes that the child Block must be provided once the parent Block is part of the configuration. The relation has two additional stereotyped forms: the _Mandatory-Decomposition_ relation and the _Mandatory-Refinement_ relation. The _Mandatory-Decomposition_ relation describes that the child Block is a decomposed element of the parent Block. The _Mandatory-Refinement_ relation, on the other hand, describes that the child Block is a refinement of the parent Block. If the Stereotype is omitted, then the _Mandatory_ relation is interpreted as a _Mandatory-Decomposition_ relation.

Example:

Meta Model Element: Description: **Optional Relation**

The _Optional_ relation connects one parent (always the top one) Block (FP) with a child (always the bottom one) Block (FP). The _Optional_ relation describes that the child Block may optionally be provided once the parent Block is part of the configuration. The relation has two additional stereotyped forms: the _Optional-Decomposition_ relation and the _Optional-Refinement_ relation.
The _Optional-Decomposition_ relation describes that the child Block is a decomposed element of the parent Block. The _Optional-Refinement_ relation, on the other hand, describes that the child Block is a refinement of the parent Block. If the Stereotype is omitted, then the _Optional_ relation has no additional semantics assigned.

Example:

Description: **Alternatives and Variation Points**

The _Alternative_ relation is a well-known element from Feature Trees. It represents a choice of Blocks (FP) from which exactly one option can be taken. _Variation Points_ represent the description of the choice or variability and are written optionally with the _Alternatives_. _Alternatives_ are used to describe Decomposition or Refinement relations of the parent Block. For Refinement relations, the special Refinement Representation using _Variant Lists_ can be used. However, the normal representation and the Variant List Refinement representation are both valid. As mentioned before, the _Alternative_ relations can be specified by an optional type, which can be either of value _Decomposition_ or _Refinement_.

Example:

Description: **Variation Point Relation**

The _Variation Point_ relation connects two _Variation Points_ with each other. Up to now, only the _Derivation_ relation and the _Custom VP_ relation are defined. The _Derivation_ relation represents the derivation of the choice of the target _Variation Point_, given that the choices are the same. The _Derivation_ relation can only derive from Variation Points that are not at a deeper level of the tree. This avoids cycles. With this defined, a global configuration can be defined once and locally used in the fitting places. The _Derivation_ relation can replace several _Require_ relations. The _Custom VP_ relation allows describing additional relations between Variation Points. Custom VP Relations are not analyzed.

Example:
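Beyond the drawn example, the depth rule behind the Derivation relation can be checked mechanically. The sketch below is a hypothetical illustration (the depths, names and helper function are invented); it encodes the reading that a Variation Point may only derive its selection from a Variation Point that is not deeper in the tree:

```python
# Hypothetical sketch of the cycle-freeness rule for the Derivation relation.
def derivations_are_cycle_free(depth: dict, derivations: list) -> bool:
    """depth maps variation-point name -> tree depth (root = 0);
    derivations is a list of (source, derived_from) pairs."""
    return all(depth[derived_from] <= depth[source]
               for source, derived_from in derivations)

depth = {"E-Scooter Type": 0, "Brake Type": 2}
# "Brake Type" deriving its selection from "E-Scooter Type" is allowed:
print(derivations_are_cycle_free(depth, [("Brake Type", "E-Scooter Type")]))  # True
# Deriving in the opposite direction would be rejected:
print(derivations_are_cycle_free(depth, [("E-Scooter Type", "Brake Type")]))  # False
```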
Description: **Or Relation with Cardinality**

The _Or_ relation is a well-known element from Feature Trees. The _Or_ relation represents a choice of Blocks (FP) from which one or more options can be taken. The _Or_ relation is a generalization of _Alternatives_. The _cardinality_ describes how many Blocks can be chosen to create a valid configuration. _Variation Points_ represent the description of the choice or variability and are written optionally with the _Or_ relations. As mentioned before, the _Or_ relations can be specified by an optional type, which can be either of value _Decomposition_ or _Refinement_. _Or_ relations are mostly used to describe _Decomposition_ relations of the parent Block. _Refinement_ relations need to have a _cardinality_ of [1,1] to be valid, thus the _Alternative_ relation is used for them instead.

Example:

Description: **Constraint Relation**

The _Constraint_ relation connects one Block (FP) with any other (even non-child) Block (FP). The _Constraint_ relation describes a restriction of the space of configurations and thus how the constrained Block relates to the other Block if the other Block is part of the configuration. There are currently three Constraint Relation types:

* The _Require_ relation A → B specifies that if A is part of the configuration, then B must be part of the configuration too.
* The _Exclude_ relation A → B specifies that if A is part of the configuration, then B is not allowed to be part of the configuration.
* The _Custom Constraint_ allows describing additional constraints between Blocks (FP) and owns a custom type. _Custom Constraints_ are not analyzed.

Example:

### 6.2 E-Scooter example

The example of the Functional Perspective describes the features and functions identified for the innovation of "Providing mobility with an e-scooter". Figure 6.3 shows the feature model including the context level only. The root feature "Providing mobility with an e-scooter" is marked here, showing the description and properties of the root block. It includes the reasoning behind this innovation, some goals from the Strategy Perspective, some basic working conditions, notes, as well as references to the use cases and user stories. The root feature has three mandatory subfeatures, the "Driving" feature, the "Damping" feature and the "Showing Insurance" feature, and one optional "Loading Capacity" feature. The root feature has a variation point focusing on the type of the e-scooter: either a simple and cheap version of the e-scooter or a comfort version. Note that the variation point can be represented like an Alternative relation with a name. Finally, the e-scooter has three features as a choice, from which at least two must be taken. This choice includes the "Carrying" feature, the "Balancing" feature and the "Maintaining" feature. The choice is included more to show a complete example than for any decent reasoning behind it. Figure 6.4 also includes the system level and component level (shown in green blocks and purple blocks respectively). This figure also shows constraint relations, grouping and variation point relations.
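The constraint relations appearing in Figure 6.4 follow the Require/Exclude semantics described above. A minimal sketch (all names hypothetical) of how a configuration could be validated against such constraints:

```python
# Hypothetical check of a configuration against Require and Exclude
# constraint relations, each given as (A, B) pairs meaning A -> B.
def satisfies_constraints(configuration: set,
                          requires: list, excludes: list) -> bool:
    for a, b in requires:
        if a in configuration and b not in configuration:
            return False  # Require violated: A present without B
    for a, b in excludes:
        if a in configuration and b in configuration:
            return False  # Exclude violated: A and B both present
    return True

requires = [("Comfort E-Scooter", "Balancing")]
excludes = [("Simple E-Scooter", "Comfort E-Scooter")]
config = {"Comfort E-Scooter", "Balancing", "Driving"}
print(satisfies_constraints(config, requires, excludes))  # True
```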
### 6.3 Functional Perspective: Strengths and Limitations

The Functional Perspective targets the modeling of the problem space based on the well-known feature trees. This tailoring to features and functions from the problem space alone makes it easy to apply to any innovation. The well-known basis of feature trees makes the Functional Perspective straightforward to use for any experienced modeler. The basis of feature trees also enables the use of the tooling and analyses from feature trees for the Functional Perspective. The extensions provided in the Functional Perspective serve the purpose of adding further information, filtering and better usability, and are of a cosmetic nature only. The distinction between _Features_ and _Functions_ is well used in the automotive domain and thus supports the domain of this project well. While these extensions do in fact represent a small learning overhead, the overhead is reasonable. These extensions can be translated directly into basic feature trees. That noted, a feature model expert can model the Functional Perspective without knowing about the extensions. Feature trees are also great at modeling a mix of _Decomposition_ and _Refinement_ in one model. Additionally, the Functional Perspective can be constrained via Requirements from the Quality Perspective, leading to a very sophisticated model.

Figure 6.3: The identified features and functions for the innovation of "Providing mobility with an e-scooter".

As a limitation, the Functional Perspective is not designed to capture early structure and interfaces between features. However, this is also good, because it stops the modeler from thinking too much about solutions and their interfaces. It is also well known that feature models can grow very large due to the vast amount of variants; however, in the context of abstract innovation modeling, these models should be manageable. Also worth discussing is the purpose of feature models as a basis for the Functional Perspective. Feature models are made for modeling variability and are often used in product lifecycle management. Innovation modeling, on the other side, is abstract and does not have too much variability to model. While this clash of purposes may be debatable, the IMoG applications showed that the Functional Perspective is good to use.

### 6.4 Functional Perspective FAQ

The FAQ splits up into the following categories:

* Questions and Answers about the general Feature Trees
* Questions and Answers about the Tooling
* Questions and Answers about the concepts and dependencies
* General Questions and Answers about the Functional Perspective

#### Feature Tree Base

**What is a Configuration?** _Taken from the Glossary:_ A feature Configuration is a chosen set of Features that represents an innovation where every choice is resolved.

_Example:_ If the whole model consists only of the _Braking_ Feature and the _Manual Braking - Hardware Braking_ Choice, then the model has two configurations. Note that the Alternative-Relation can be represented by an Or-Relation with Cardinality [1,1] too:

1. One Configuration would be the set of chosen Features and Functions (_Braking_, _Manual Braking_).
2. The other Configuration would be the chosen set (_Braking_, _Hardware Braking_).
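This two-configuration example can be reproduced mechanically. The sketch below (a hypothetical representation, not IMoG tooling) enumerates the configurations of a single Or-group with a cardinality; the Alternative case is cardinality [1,1], and the same function also covers the [2,3] example in the next question:

```python
# Hypothetical sketch: enumerate configurations of one root Block with an
# Or-group of variants and a cardinality [min_card, max_card].
from itertools import combinations

def enumerate_configurations(root, variants, min_card, max_card):
    configs = []
    for k in range(min_card, max_card + 1):
        for chosen in combinations(variants, k):
            configs.append({root, *chosen})
    return configs

# An Alternative relation corresponds to cardinality [1,1]: two configurations.
for config in enumerate_configurations("Braking",
                                       ["Manual Braking", "Hardware Braking"],
                                       1, 1):
    print(sorted(config))
# ['Braking', 'Manual Braking']
# ['Braking', 'Hardware Braking']
```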
**What are Cardinalities used for?** Cardinalities describe how many Blocks (FP) shall be chosen from _Alternative / Or_ Choices to create a valid configuration.

_Example:_ If the whole model consists only of the _Braking_ Feature and the _Manual Braking - Hardware Braking - Electro Braking_ Choice with the Cardinality [2,3], then the model has four configurations:

1. The first Configuration with 2 Variants taken: (_Braking_, _Manual Braking_, _Hardware Braking_).
2. The second Configuration with 2 Variants taken: (_Braking_, _Hardware Braking_, _Electro Braking_).
3. The third Configuration with 2 Variants taken: (_Braking_, _Manual Braking_, _Electro Braking_).
4. The Configuration with all Variants taken: (_Braking_, _Manual Braking_, _Hardware Braking_, _Electro Braking_).

**What are Or-Relations used for?** Or-Relations are used whenever more than one Variant shall be selectable, or when the number of chosen Variants shall have a set minimum and maximum bound.

_Example:_ The innovation shall _Provide Transportation_ from _A to B_. Ideally it shall overcome all obstacles in the most convenient manner. This could include the functionality of _Diving_, _Swimming_ or _Flying_. However, providing one of the three choices is considered sufficient here.

**When to resolve choices (Early or Late Binding)?** While deciding which choices to take is not really relevant in the context of innovations, Feature Models lack clarity regarding the resolution of choices. This results in misunderstandings that are harder to resolve later on. To understand the problem, a _Paying the autonomous taxis_ Feature with the Alternatives _Using Cash_ and _Using Digital Money_ is used as an example. Two interpretations arise:

1. The provider of the autonomous taxis has two taxi configurations: one that provides payment by using real cash and one that provides payment by using digital money. The resolution in this case lies within the company.
2. The provider of the autonomous taxis has one taxi configuration that provides both payment methods. The users of the taxi then have to choose one of those payment methods. Here the user resolves the choice.

There are upsides and downsides to both interpretations. To overcome this problem, the modeler shall be aware of it and resolve the problem in the description of the _Paying the autonomous taxis_ Feature. Being part of the description is considered enough here; it does not have to be a Block attribute as proposed in the original Feature Tree publication [9]. Note that, for simplicity, the Variation Points do not have any _binding time_ attribute.

#### Tooling

**Can a user select multiple Refinement Choices from one Variation Point at once?** Yes, as cited from the Model Element Description: "[...] Multiple Alternatives of the same Variation Point shall be selectable too. Then the analysis is restricted to choosing one Alternative out of the selected set of Alternatives in all analyzed configurations. [...]" This includes Refinement Alternatives.

#### Concepts and Dependencies

**Why does the Functional Perspective extend Feature Trees (and Diagrams) and not just use the standard FODA Feature Trees from [9]?** The extensions were made for reasons that fall into five categories:

1. **Raising Expressiveness:** Known extensions like Or Relations with Cardinalities are added to model restrictions on the configurations that cannot be expressed with standard Feature Trees.
2. **Adding Version Control and Tracing Capabilities:** The Block (FP) property _version_ and the references to the _User Stories_ are added to support typical requirements engineering processes.
3. **Adding Custom Domain Support:** Innovation modeling is quite an abstract domain. The extensions with the _Custom Block Types_, _Abstraction Levels_ and custom _Relations_ are added to refine the model elements to better fit the domain of an innovation and increase its usability.
4. **Resolving Interpretation Problems:** A lack of information and clarity creates misunderstandings. The original Feature Trees [9] identified that problem and added information next to Feature Trees. Here, _HTMLDivs_ and _Descriptions_ are added to Blocks as additional model elements. Relations can be differentiated into the two main engineering concepts _Refinement_ and _Decomposition_. _Variation Points_ are added to Alternative Choices to explicitly specify the variation object, and lastly, the distinction between _Features_ and _Functions_ was added to explicitly state which elements shall be mapped onto structural solutions.
5. **Compact Representation and Usability:** Feature Trees can get very large, thus a compact representation is desirable. _Variant Lists_ reduce the space for modeling _Alternatives_, and the _Variation Point Selection Derivation_ reduces redundancy among Variation Points. Lastly, _Grouping_ is added for quickly testing certain sets of configurations, and _Abstraction Levels_ as Block properties are added for better filter functionality.

Note that points 2-5 do not increase expressiveness, in the sense that tools for Feature Trees can still be used without the additional information of these points.

**There are several ways to restrict the set of configurations. Which are they?** The ways to restrict the set of configurations are:

* Replacing Optional Relations with Mandatory Relations,
* Removing Choices,
* Cardinalities,
* Require / Exclude Relations,
* Groups,
* Variation Point Selection Derivation,
* Manually selected Blocks and selected sets of Alternatives (Tooling).

Each element is described in more detail in the Model Element Description section.

**How to handle domain-specific categories and domain-specific dependencies?** Domain-specific categories shall be added by using domain-dependent custom stereotypes (aka categories, aka types) for features and functions to better filter them. The same shall be done for Relations: each general (Decomposition) Relation shall be stereotyped with custom types. Additionally, the Tooling shall provide the possibility of adding profiles.

_Example:_ For the _city mobility design_ domain, the Feature _Providing City Mobility_ may have two Features as children (_Car_, _Bus_). The domain has several important categories like private and public mobility, which are used as custom stereotypes here. Additionally, the Relations are typed with custom stereotypes (such as "provides public mobility") too. The tooling may provide coarse analysis techniques for custom stereotypes.

#### Features and Functions

**When to use Features and when to use Functions?** Features shall be used whenever the Block (FP) describes a characteristic or a logical or behavioral unit of an innovation that is too abstract to be referenced on some Block in the solution space. Functions, on the other side, shall be used whenever the Block (FP) describes a logical or behavioral unit that shall be fulfilled by certain Blocks in the solution space. Functions are seen as a set of logical units that describe a Feature as mappable units. Thus, when the Functions are mapped, the Feature is also considered to be fulfilled.
However, there is no clear line whether some characteristic or logical unit is a Feature or a Function. Use whichever is appropriate! Note that not using any Stereotype (Feature, Function) is fine too; the Block will then be interpreted as a Feature (for analysis purposes).

_Example:_ Driving is considered a Feature, as there are multiple Blocks in the solution space that together fulfill the Driving Feature, and it seems unreasonable to assign _Driving_ to one specific Block. _Braking_, on the other hand, can reasonably be assigned to a Block or set of Blocks in the solution space. Thus it is considered a Function.

**Can a Function have a Feature as a child?** Yes, Functions can have Features as children. This allows decomposing Functions into abstract but very specialized behavior.

_Example:_ The Feature _Driving_ is decomposed into the Function _Controlling Velocity_. _Controlling Velocity_ is considered a mappable Function but also requires a _Safe State Supervising_ ability. _Safe State Supervising_ is, however, really abstract and not mappable onto solutions, thus it is considered a Feature. The Feature shall be realized by a _History Comparison_ Function and by a _Detection of the Driving Scenario_ Function.

**If all child Functions of a Function are mapped, shall the parent Function be mapped onto the solution space too?** Yes, definitely. Otherwise the integration of all subfunctionality may be forgotten!

**Do Features and Functions have properties?** Features and Functions can own custom properties if the attached HTMLDiv is insufficient. However, custom analyses (like consistency checks) and tooling (like filtering) must be provided to make them useful.

**Do naming conventions for Features and Functions exist?** Yes, Features and Functions are understood here as actionable items. To underline this, they shall be named like an activity. Many literature examples describe system solutions instead of actions, which we explicitly do _not_ want. Those shall be stated as systems or components on the Structural Perspective.

_Example:_ The Feature _Hardware Brake_ shall not be used, as it implies being a component. Instead, it shall be named _Hardware Braking_ (an action).

#### Variation Points - Variant List Representation - Variation Point Selection Derivation - OVM

**What is a Variation Point?** _From the Glossary:_ A Variation Point is a representation of a variable item of the real world or a variable property of such an item within a domain artefact, enriched by contextual information [Pohl]. A Variation Point can be interpreted as a named Alternative Choice.

_Example:_ The Feature _Providing Mobility with an E-Scooter_ (1) has an Alternative Choice of choosing the E-Scooter Type "Simple" or "Comfort". The E-Scooter Type is the name for the variation item (thus the Variation Point) with the choices "Simple" and "Comfort". The Alternative Choice represents a Refinement of the E-Scooter Type, thus the Refinement Representation (2) is eligible too.

**Why add Variation Point names to Alternatives (Variants)?** There are two different reasons why a name for an alternative choice is useful:

1. When the user wants to specify the variable item of the real world, or "What exactly varies here?", in more detail.
2. When the user wants to make use of the _Derivation_ Relation of Variation Points.

_Example:_ The Variation Point E-Scooter Type is labeled to specify the object of variation in more detail, plus for using the Derivation Relation extension.
**Can a Feature or Function Block have multiple Variation Points?** The model itself does not prohibit multiple Variation Points for a Feature or Function Block. However, the number of Variation Points using the Refinement Representation is restricted to one per Feature or Function Block for readability. Note that, while not prohibited, the use of multiple Refinement-Relation-based Variation Points can cause wrong models when the Refinements are not composable (see the example). For more details, see the Model Element Description of _Alternatives and Variation Points_.

_Example:_ The Feature _Providing Mobility with an E-Scooter_ has two different Refinement-based Variation Points. One specifies the E-Scooter Type with "Simple" and "Comfort", while the other specifies the Comfort Space with "Medium" and "Extra Large". The Variation Points provide a contradiction, since one expects that an E-Scooter of Type "Simple" does not have any Comfort Space. In this case it is better to restructure the model by either refining the "Comfort" E-Scooter with the Comfort Space Variation Point or by fusing both Variation Points into one Variation Point with the choices (_Simple_, _Medium Spaced Comfort_, _Extra Large Spaced Comfort_).

**Do Variant Lists have Cardinalities?** No, Variant Lists do not have Cardinalities. They represent an Alternative Choice with Refinement Relations, and only Cardinalities of [1,1] are reasonable for Refinements.

**When to use Variant Lists and when not?** Variant Lists are a special representation for Refinement-Alternative Choices. They can be used to reduce the needed representation space for alternative choices with a Variation Point name. The best use of the representation is when the user wants to highlight a variant choice in the feature model. It is recommended to use this representation whenever possible. One shall refrain from this representation when the representation form is unknown to a wide range of readers and the extension would have a negative impact on the readability for such readers. This may be the case if the Functional Perspective Feature Tree is small and the extension seems unreasonably hard to learn for the next reader. Note that the depth of this representation is one, thus a Refinement of Refinements cannot be represented in this special representation form. If such a Refinement of Refinements has to be specified, the use of the Feature Model representation is reasonable.

_Example:_ The _Hardware Brake_ is one brake type of the _Braking_ Function that is chosen by the type of the E-Scooter. The _Hardware Brake_, however, has two brake types of its own. The use of the special representation here seems reasonable in a large model but may be omitted in a small model for readability.

**When to use Variation Point Selection Derivation?** Variation Point Selection Derivation allows reusing the selection of the derived-from Variation Point again. This results in a restriction of the number of configurations of an innovation. Variation Point Selection Derivation shall be used with caution, as it represents a kind of replication of a choice. It should only be used if a restructuring of the model would not resolve the replication.

_Example:_ Braking can be realized through a _Manual Brake_ or through _Hardware Braking_. The E-Scooter Type shall, however, be the decision ground for the choice of the braking type. A _Manual Brake_ shall be chosen when the E-Scooter Type is "Simple".
A _Hardware Brake_ shall be chosen when the E-Scooter Type is "Comfort". A restructuring of the model is unfeasible, as the choice is far away from the E-Scooter Type choice. Thus the use of Variation Point Selection Derivation seems reasonable.

**What does Circular Dependency Freeness for Variation Point Selection Derivation mean?** Circular Dependency Freeness means that no Feature or Function can derive any choice from a Feature or Function that has a deeper level in the model tree than the Feature or Function itself (meaning deriving from one of its children, a sibling's child, a child's child, and so on is not possible). This resolves the problem of circular dependencies and is prohibited mostly to avoid confusion and to simplify the analysis implementation.

_Example:_ The Variation Point of A derives from the Variation Point of C, and the Variation Point of C derives from the Variation Point of A. This raises the question where the choice is first decided and who derives from whom. While this can technically be solved by deciding the Variation Point with the lowest depth first, it is surely something that causes confusion, and there is no real point in modeling it that way. Generally, deciding a Variation Point on a lower level first is considered unnatural and confusing in itself. Thus the prohibition of deriving from lower-level Variation Points seemed reasonable.

**What is the difference between the Orthogonal Variability Model (OVM [Pohl]) and the integrated variability model used here?** There exists a number of differences to the OVM:

* In both approaches Variation Points are used. However, Variation Points are treated differently: in OVM they are a stand-alone model element, while they are only a named alternative choice here.
* Variants are treated similarly. In OVM they are a stand-alone model element that restricts certain configurations via Groups. In the integrated model here, Variants are only normal alternatives from Feature Trees.
* In both approaches the concept of Groups is defined. However, Groups are treated differently again: in OVM, Groups are chosen by the OVM choices, and here Groups are chosen by the user.
* OVM separates variability from features, such that product lines can use the same feature tree and a differing variability model for different countries. This comes at the cost of a degree of replication between the Feature Tree and the variability model. Here this kind of common Feature Tree does not exist, at the "gain" of less replication.
* The OVM owns several additional dependencies between OVM elements. These do not exist here (a learnability pro and an expressiveness con):
  * OVM Variants
  * OVM Variation Points (alternative, or) with dependencies (mandatory, optional, require, exclude)

Conclusion: When several feature trees do not exist (as they do for SPL), the disadvantages of OVM make the integrated model superior. This is exactly the case for GENIAL!

#### Relations

**Where is the difference between the Refinement and Decomposition Relations?** A Decomposition (or has-a) Relation describes a partitioning of a Feature or Function into its subfunctions and subfeatures. _Example:_ Braking is one important function that must be realized to achieve the Driving Feature. A Refinement (or is-a) Relation of a Feature or Function describes a fuller specification of the Function or Feature that it refines. _Example:_ Hardware Braking is one way of how Braking can be realized.
Those modelers who do not need the difference between Refinement and Decomposition can abstract the Relation by leaving the Relation Type out.

**When shall Relations be labeled as Refinement or Decomposition Relations?** There is no right or wrong answer to the question when to label Relations with "refine" or "decompose". A basic guideline is to label Relations whenever the label provides some kind of additional value to the reader, for example, if it is unclear how exactly a function or feature correlates with its parent. If a Relation is not labeled, then it does not have a default interpretation either! Note that the Tooling may provide additional filter functionality and analyses based on the Relation Type.

_Example:_ It is unclear what the difference between Balancing and ADAS Balancing is. ADAS could describe a logical unit on its own that is needed for Balancing, or a Refinement (and thus an exclusion of other alternatives). Here it was meant as a simple Refinement.

**When to use Require / Exclude Relations?** Require-Relations and Exclude-Relations are ways to restrict the number of configurations of an innovation. Those Constraint Relations shall be avoided whenever possible to maintain readability and understandability. They shall only be used if

1. a restructuring of the hierarchy of the Features and Functions (and their parents) seems unreasonable,
2. and the omission of the constraint is not possible.

Point 1 may be seen as unreasonable in certain cases, because a restructuring may lead to redundancy and thus a higher count of Feature and Function Blocks, or would require a Constraint Relation at another point in the model.

_Example:_ For the Mobility Feature, the Comfort E-Scooter requires a Balancing Unit. A Simple E-Scooter can optionally have a Balancing Unit. There are two ways to represent this dilemma: 1. uses a Require Constraint Relation to visualize the dependency; 2. explicitly models the Balancing Units as two separate Features. In this simple example, 1. seems more reasonable to avoid redundancy, but in a large model, both may be used in a reasonable manner.

#### Miscellaneous

**How to represent Refinements of Refinements?** There are three ways to represent Refinements of Refinements:

1. Using the standard Feature Tree representation for both Refinements. This is not recommended.
2. Using the Variant List Refinement representation for the first Refinement and the standard Feature Tree representation for the second Refinement.
3. Using the standard Feature Tree representation for the first Refinement and the Variant List Refinement representation for the second Refinement.

Ways 2. and 3. are the recommended ones. Depending on where to lay the focus, the modeler has to decide to choose either 2. or 3. Note that Refinements of Refinements do not have a special representation for more depth, to avoid readability and understanding problems.

_Example:_ The Refinement of Braking to Hardware Braking and then the Refinement of the Hardware Brake Type has the following three representations. For the overall example, 3. was chosen to lay a focus on the Hardware Brake Type.

**What are Groups used for and when _not_ to use Groups?** Grouping is an additional usability feature that is used when the user wants to toggle a certain set of Features and Functions as required for the next analysis, or for restricting the focus on the model visually. Its semantics requires that if one Block (FP) of a Group is part of the configuration,
then every other Block in the Group must be part of the configuration too. Note that Groups shall not be used to replace Require relations.

_Example:_ The blue Group is used for focusing visually on a certain set of Braking Blocks that are connected to the Velocity Display. Any started analysis now requires that all Blocks in the Group are part of the configuration.

**User Stories and Use Cases are referenced in the Features and Functions. But how are User Stories and Use Cases connected to each other?** User Stories and Use Cases are preliminary activities to find and refine Features and Functions. They are only viewed as references in the Functional Perspective. The creation and changing of User Stories and Use Cases is not part of any IMoG Perspective. A dedicated tool for handling User Stories and Use Cases is expected. The creation and the handling of how to decompose Use Cases and User Stories is thus up to the creator and their dedicated toolchain. Nonetheless, references of User Stories and Use Cases to Features and Functions can be used to test User Story / Use Case coverage.

#### General Stuff

**How does tracing between User Stories / Use Cases and Features / Functions look, and how does one ensure traceability if one User Story / Use Case changes? What is impacted?** There currently exists no default tooling for creating and handling User Stories and Use Cases. Thus the way tracing looks is not yet defined. The implementation may include files for each User Story and Use Case, or model references into a PLM tool. However, this is a tooling question and not a focus in the definition of the Functional Perspective and its contents. The only precise thing yet is that there shall be User Stories and Use Cases referenced in the Features and Functions. Regarding the change of a User Story or Use Case: tracking it is not automatically possible when changing the User Story or Use Case directly, because the tooling is not fixed. However, a Change Impact Analysis showing the direct and indirect dependencies of a User Story or Use Case to Features and Functions shall exist to enable tracking the affected entities.

**Feature Trees and Feature Diagrams do not scale well, like most other graphical languages. Do Feature Trees and Feature Diagrams provide enough scalability?** In the context of modeling future innovations, the scalability is sufficient. There are multiple minor reasons which add to this assessment:

* There is not much information available about future innovations. Additionally, the information has a high level of uncertainty. Thus modeling focuses on describing innovations without many abstractions and model elements.
* The Functional Perspective shall not contain information about solutions. Thus only ideas with their functionality are modeled. It is expected that the model is sufficiently small.
* In the context of public roadmapping, partners do not want to share intellectual property and want to focus on a common set of elements. Thus the number of elements is rather small.
* The tooling shall provide information filters, like the proposed hiding of the HTMLDivs in the Blocks. Thus additional information shall not clutter the screen.

## Chapter 7 Quality Perspective

The Quality Perspective is the third perspective in IMoG and targets the coverage of mostly non-functional requirements for both the problem space and the solution space (see Figure 7.1). The Quality Perspective contains, for example, constraints from the legislation, robustness requirements and performance requirements.
Functional requirements should not exist in the Quality Perspective (hence its name), because the Functional Perspective already covers the features and functions of the innovation. Exceptions that cannot be handled on the Functional Perspective are fine. The Quality Perspective takes requirements from both sides, the problem space and the solution space, and represents an interface between the spaces. The Quality Perspective contains two representations: the table format for representing the requirements data, and the dependency view for representing the inter-perspective relations as well as the decomposition of the requirements (parent-child relations). With the Quality Perspective based only on generic tables, some recommendations for the data fields of each requirement were made. Special extensions do not exist.

Figure 7.1: Location of the Quality Perspective in IMoG

The chapter is structured as follows: In Section 7.1 the meta model and its model elements are presented. In Section 7.2 an example of the Quality Perspective is given. The strengths and limitations of the Quality Perspective are discussed in Section 7.3. A FAQ finalizes the description in Section 7.4.

### 7.1 Model elements

The meta model of the Quality Perspective (see Figure 7.2) consists of only two important model elements: the _Requirement_ and the _Requirement Relation_. The _Requirement_ element has quite a few attributes that are recommended to fill. This section emphasizes presenting these attributes. The _Quality Perspective Model_ contains as its main member the _Requirement_ element. The _Requirement_ has a number of attributes. Some of those are modeled as their own Blocks to allow restricting the data types in the form of Enums (variable types with defined possible values). The Stereotype, the Abstraction Level and the Assignee are among them. To keep them extensible, each of them has a customization interface. Additionally, custom attributes can be defined. Each requirement can have relations. There are currently three types of relations:

* the _Parent-Child_ relation to represent a requirement _Decomposition_ or requirement _Refinement_,
* the _Constraint_ relation for describing which (problem space / solution space) Block shall satisfy the requirement, and
* the _Custom_ relation for extensibility.

Each attribute and relation is described in more detail in the following.

**Model Elements:**

* Quality Perspective Model
* Requirement
* Relations
* Defined Stereotypes
* Notes

In the following, the model elements are introduced in four parts. First the _Quality Perspective Model_ is introduced, then the _Requirement_ with all its attributes and associated Enums is described. Afterwards, the _Relations_ are described. Lastly, the defined _Stereotypes_ and _Notes_ are presented.

Meta Model Element: Description: **Quality Perspective Model**

The _Quality Perspective Model_ is the diagram of the Quality Perspective of an innovation. It contains all model elements of the Quality Perspective.

Example: A full Quality Perspective Model example is shown in Section 7.2.

Meta Model Element: Description: **Requirement (Block)**

The _Requirement (Block)_ is the main element of the Quality Perspective and represents any quality requirement, user need, constraint and so on. It is a quite complex element due to its importance. It defines many attributes. Each attribute can be imagined as a column in a requirements table.
Except for the outsourced description of the relations, each attribute is described in the following:

* The _id_ of a requirement serves to identify and trace it. It typically takes a unique number or a unique string as its scheme. Both are possible here.
* The optional _priority_ gives the requirement an importance. The value interpretation has to be defined by the stakeholder. The default is an empty field.
* The _name_ of the requirement to better identify it (for humans).
* The _text_ of the requirement. It can be described in natural language or as formal sentences. If it is formulated as formal sentences, then the requirement shall have a proper _Stereotype_ to allow analyses to identify and parse it.

**Requirement (Block) continued**

* The _satisfiability_ of the requirement: a numerical value between 0 and 1 which estimates the chance that the requirement is fulfilled according to the year and other parameters. The satisfiability may be given by a formula instead of a static value.
* The _Future Availability_ describes the year date when the requirement will become relevant. In the days / years before the given date this requirement shall be 'ignored'.
* The _discussion_ field provides a platform for discussing the requirements within the value chain.
* The _reasoning_ field allows giving a rationale for the requirement definition.
* The _version_ field is for proper version management and to identify updated requirements.

The requirement additionally has attributes with predefined value ranges (so-called Enums). These are represented by their own entities in the meta model:

* The _Abstraction Level_ of the requirement defines the level of abstraction the requirement represents. It can be either _Context Level_, _System Level_, _Component Level_ or of type _Custom Abstraction Level_.
* Optional _Stereotypes_ can refine the category of the requirement further. Some Stereotypes are predefined, like _Discarded_ (Requirement), _User Need_ and so on. However, one can define one's own _Custom_ Stereotypes. A list of predefined Stereotypes is presented in the FAQ.
* The optional _Assignee_ of the requirement represents the responsibility owner of the requirement. It can be either of the predefined type _OEM_, _Tier 1_ or _Tier 2_, or set by a string to a _Custom_ stakeholder.
* If all the above attributes are not enough, or there is an attribute missing for the specified domain, then one can define a _Custom_ attribute with a _name_, a _value_ and a _unit_.

Example: An example with 4 requirements is shown below. The requirements' attributes are presented shortly:

* The _ids_ are represented as numbers for this example.
* The _priority_ was determined for three requirements. The value 1 describes the most important priority here.
* The first safety requirement is expected to be fulfilled in all cases (thus its _satisfiability_ is 1 = 100%) to conform to the German traffic rules. The other requirements should be fulfilled up to a certain degree, according to the expectations.
* Each requirement has a _name_ and a _text_. All of them are described as natural text, with some being unfinished.
* The _Future Availability_ is set to "Now", meaning the requirements shall already be considered.
* The _parent_ requirement and the _targets_ represent relations; they are not further described here.
* Each requirement has _Stereotypes_. The first requirement is considered to be part of the safety concept, while the three other requirements are considered user needs.
Example: An example with 4 requirements is shown below. The requirements' attributes are presented briefly:

* The _ids_ are represented as numbers in this example.
* The _priority_ was determined for three requirements. The value 1 denotes the most important priority here.
* The first safety requirement is expected to be fulfilled in all cases (thus its _satisfiability_ is 1 = 100%) to conform to the German traffic rules. The other requirements should be fulfilled up to a certain degree according to the expectations.
* Each requirement has a _name_ and a _text_. All of them are described as natural text, with some being unfinished.
* The _Future Availability_ is set to "Now", meaning the requirements shall already be considered.
* The _parent_ requirement and the _targets_ represent relations; they are not further described here.
* Each requirement has _Stereotypes_. The first requirement is considered to be part of the safety concept, while the three other requirements are considered user needs.
* All four requirements are considered to be part of the _Context Level_ abstraction level.
* The _assignee_ of these requirements is either the _OEM_ or the _Tier 1_.
* The _reasoning_ and the _discussion_ are not further detailed here.
* The _version_ number of each requirement is given.

| ID | Priority | Satisfiability | Name | Text | Future Availability |
|----|----------|----------------|------|------|---------------------|
| 1 | 1 | 1 | Safety | The e-scooter shall have 2 brakes. | Now |
| 2 | 1 | 0.8 | Braking | The braking power shall be greater than .. | Now |
| 3 | | 0.8 | Damping | The cushioning power shall be >= .. | Now |
| 4 | 2 | 0.9 | Weight | The weight shall be < .. | Now |

| ID | Parent Req. | Labels / Sources | Target | Abstraction Layer | Assignee | Reasoning | Discussion | Version |
|----|-------------|------------------|--------|-------------------|----------|-----------|------------|---------|
| 1 | - | Safety Concept | Braking | Context Level | OEM | ... | ... | 2 |
| 2 | - | User Needs | Braking | Context Level | Tier 1 | ... | ... | 1 |
| 3 | - | User Needs | Damping | Context Level | Tier 1 | ... | ... | 1 |
| 4 | - | User Needs | Carrying | Context Level | OEM | ... | ... | 1 |

Meta Model Element: Description: **Relations** Each requirement can have the following relations:

* _Constraint_ relations to describe which Block on the Functional Perspective or the Structural Perspective shall fulfill the specified requirement. This constraint relation is often called a <<satisfy>> relation when reversed. However, to avoid confusion, only <<constraint>> relations shall be used.
* _Parent-Child_ relations between requirements. The _Parent-Child_ relations can be specified by an optional type, which can be either of value _Decomposition_ or _Refinement_, to specify the relation type.
* _Custom 1-to-1_ relations between requirements can be described by using _Custom_ attributes of the requirement.

Example: An example for the _Constraint_ relation and the _Parent-Child_ relation is shown below. The _Custom 1-to-1_ relations are skipped for now.

| ID | ... | Name | Parent Req. | Labels / Sources | Target | Abstraction Layer | Assignee |
|----|-----|------|-------------|------------------|--------|-------------------|----------|
| 1 | | Safety | - | Safety Concept | Braking | Context Level | OEM |
| 3 | | Damping | - | User Needs | Damping | Context Level | Tier 1 |
| 4 | | Weight | - | User Needs | Carrying | Context Level | OEM |
| 5.2 | | Carry-Weight | 4 (Dcmp.) | User Needs | Carrying | System Level | Tier 1 |

The four requirements above have two important attributes that represent relations:

* The _Parent_ requirement represents a _Decomposition_ or _Refinement_ relation. Only the "Carry-Weight" requirement has a parent: the Weight requirement. The exact type of the relation is not described in this example.
* Each requirement has a _target_. The targets describe in all four cases a function of the Functional Perspective (which can be found in the associated file).

The same requirements represented in the Relations View would be depicted as in Figure 7.4.
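Since the Requirements Table View is meant to behave like a relational table (Section 7.2 notes that SQL queries shall be executable), such requirement data can be filtered with plain SQL. The following is a hypothetical, self-contained sketch using Python's sqlite3 module; the schema and column names simply mirror the example table above:

```python
import sqlite3

# Hypothetical schema mirroring the Requirements Table View above.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE requirements (
    id TEXT, priority INTEGER, satisfiability REAL, name TEXT,
    text TEXT, labels TEXT, target TEXT,
    abstraction_layer TEXT, assignee TEXT)""")
con.executemany(
    "INSERT INTO requirements VALUES (?,?,?,?,?,?,?,?,?)",
    [("1", 1, 1.0, "Safety", "The e-scooter shall have 2 brakes.",
      "Safety Concept", "Braking", "Context Level", "OEM"),
     ("3", None, 0.8, "Damping", "The cushioning power shall be >= ..",
      "User Needs", "Damping", "Context Level", "Tier1")])

# Example query: all user needs on the Context Level.
rows = con.execute("""SELECT id, name FROM requirements
                      WHERE labels LIKE '%User Needs%'
                        AND abstraction_layer = 'Context Level'""").fetchall()
print(rows)  # -> [('3', 'Damping')]
```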
Meta Model Element: Description: **Stereotypes** The _Stereotype_ can refine the category of the requirement further. A requirement can have multiple Stereotypes. However, it is recommended not to apply two that contradict each other or are of the same category. The following Stereotypes are predefined:

Requirement Categorization Stereotypes:

* **Quality Requirement**
* **Performance Requirement**
* Technical Professional Guess
* User Need (non-functional)
* Safety Requirement
* Security Requirement
* Legal Constraint
* Technology Requirement

Status:

* Discarded
* Proposed
* Confirmed (the default interpretation if no requirement status is given)

There is some overlap in the definitions of the categories, for example between Quality Requirement and User Need. If one cannot decide which category to choose, then take the one that feels like the best fit. The categories are only used for filtering purposes, thus a miscategorization is not that harmful. The three bold categories are of special interest to some suppliers. Maybe these shall be given some special treatment? It is possible to define _Custom_ Stereotypes. Example: ToDo!

Meta Model Element: Description: **Note** The _Note_ can be used to add information to the model that cannot or should not be modeled. Notes should be used sparsely! Example:

### 7.2 E-Scooter example

The example of the Quality Perspective comprises the innovation requirements of "Providing mobility with an e-scooter" in two views called Requirements Table View and Relations View. The Requirements Table View (see Figure 7.3) contains several identified quality requirements, user needs, constraints and so on for each abstraction level. The Requirements Table View focuses on showing all details of each requirement. The requirements of the component level are filtered out here (to keep the example small). The requirements are stored as relational database tables, so SQL queries shall be executable. The exact details of those requirements are not further explained. The Relations View (see Figure 7.4) hides the attributes and shows the requirements only by their names. The attributes shall become visible in an extra window when a requirement is marked with the mouse. This view focuses on presenting the Parent-Child relations between two requirements and the Constraint relations of the requirements to the Blocks of the other perspectives.

### 7.3 Quality Perspective: Strengths and Limitations

The Quality Perspective contains simply requirements tables and relation views. The Quality Perspective does not capture (many) functional requirements, because these requirements should be handled in the Functional Perspective. Otherwise, there is nothing special about the Quality Perspective. One noteworthy strength is the tracing of requirements to features, functions and solutions, because these links are already weaved into IMoG.

Figure 7.4: Relations View.

### 7.4 Quality Perspective FAQ

The FAQ consists of one part:

* Questions and answers about the general requirements on the Quality Perspective

General Requirements FAQ

**Do Requirements have attributes?** Yes, they do! Each Requirement has per default several attributes next to the name and the text. Moreover, if the default attributes are not sufficient, one can extend the Requirements with Custom Attributes. _Example:_ The Requirement 'Safety' has an _id_ of 1, a _priority_ of 1, a _satisfiability_ of 1 and is already relevant.

**Is it possible to define Variants or Alternatives of Requirements?** No. The reason for this is to avoid ambiguity and model complexity.
The functionality and the solution technologies shall be interchangeable. The Quality Requirements and Performance Requirements may be broad or given with a high uncertainty. However, they shall not be modeled multiple times!

**Is it possible to label Requirements with labels like "User Need", "Quality Requirement" or "Constraint"?** Yes. "User Needs", "Quality Requirements" or "Constraints" can be added as Stereotypes in the column "Labels / Sources". Moreover, if the default Stereotypes are not sufficient, one can define Custom Stereotypes. A list of predefined labels can be found under the Model Elements section. _Example:_ The Requirement with the name 'Safety' was part of the safety concept and is internally differentiated to the Stereotype 'Safety Requirement'. Thus the Custom Stereotype 'Safety Concept' is introduced. The Requirement with the name 'Weight' goes well with the default Stereotype 'User Needs'.

**I cannot decide which Stereotype to take. Which one is recommended?** There is some overlap in the definitions of the categories, for example between Quality Requirement and User Need. If one cannot decide which category to choose, then take the one that feels like the best fit. The categories are only used for filtering purposes, thus a miscategorization is not that harmful.

**How to handle domain-specific categories and domain-specific properties?** Domain-specific categories shall be added by using Custom Stereotypes and labeling the domain Requirements with them. Domain-specific properties shall be described by Custom Attributes, which represent added columns in the Requirements table. _Example:_ ISO 26262 defines a methodology for functional safety which defines process steps. Safety Requirements shall have a process step assigned to them. Additionally, ISO 26262 defines ASIL levels for different components. These shall be assigned to the Requirements as well. The process steps are given by Custom Stereotypes and the ASIL level is given by a Custom Attribute (a sketch of such an extension is shown at the end of this FAQ).

Requirements Attributes FAQ

**Which attributes of a Requirement shall be defined and filled?** Only those attributes that provide a use to the innovation shall be defined. The general rule is: the fewer attributes are defined and the fewer attributes are filled out, the better is the usability of the whole perspective. Otherwise the acceptance may suffer, in the sense that adding new Requirements and updating old Requirements gets tedious due to the many fields to fill. Maintainability will decrease due to the increased time spent. With decreased maintenance, the acceptance will further decrease until the content is outdated.

**Do naming conventions exist for Requirements?** No; however, defining a convention that is tailored to the specific innovation is recommended.

**Does the content of the Requirement column 'targets' represent model element relations or just ...**

**Abstraction Levels have defined Stakeholders assigned. The Assignee column contains Stakeholders as well. Does this overlap create problems?** The defined Stakeholders of the Abstraction Levels give only a general direction of who may be responsible. The main reason for the existence of the Abstraction Levels is to decrease the complexity by dividing the Requirements into categories. The optional Assignee column can be used to make exceptions to the general responsibility of the Abstraction Level or to underline that somebody has the responsibility.
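To illustrate the domain-specific extension from the FAQ above, here is a minimal continuation of the illustrative Requirement and CustomAttribute classes sketched in Section 7.1. The process-step stereotype and ASIL value shown are hypothetical example values, not prescribed by IMoG:

```python
# Reuses the illustrative Requirement / CustomAttribute classes from the
# Section 7.1 sketch; all identifiers and values are hypothetical.
airbag_req = Requirement(
    id="SR-12",
    name="Airbag deployment",
    text="The airbag shall deploy within the specified time bound.",
    stereotypes=["Safety Requirement",
                 "Hazard Analysis"],  # custom process-step stereotype
)
# The ASIL level becomes an added column in the requirements table.
airbag_req.custom_attributes.append(
    CustomAttribute(name="ASIL", value="D", unit="-"))
```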
## Chapter 8 Structural Perspective

The Structural Perspective is the fourth perspective in IMoG and focuses on modeling the solution space of the innovation in an abstract manner (see Figure 8.1). Based on the functions, features and requirements from the problem space, the Structural Perspective draws the corresponding possible solutions. The Structural Perspective is located at the later phases of innovation modeling, shortly before the roadmap is written. The Structural Perspective is based on the well-known concepts and representations of systems engineering. The concepts of _Decomposition_, _Refinement_ and _Variation_ that are used on the other perspectives are of high importance here too. Next to the three concepts, the concept of properties and their relations provides the possibility to describe and compare solutions in detail. The description of solution alternatives is provided by variants for Blocks (SP) and Refinement Blocks. The representation of the Structural Perspective also follows the principles of systems engineering by using hierarchical Blocks and arrows / channels as relations.

Figure 8.1: Location of the Structural Perspective in IMoG

The chapter is structured as follows: In Section 8.1 the meta model and its model elements are presented. In Section 8.2 an example of the Structural Perspective is given. The strengths and limitations of the Structural Perspective are discussed in Section 8.3. A FAQ finalizes the description in Section 8.4.

### 8.1 Model elements

The meta model of the Structural Perspective (see Figure 8.2) builds on the _Decomposition_ and _Refinement_ concepts. The meta model has the _Structural Perspective Model_ as the top-level unit of the Structural Perspective. The Structural Perspective Model contains a set of _Decomposition Models_. Decomposition Models represent the canvas fields with their sketchy system models known from system modeling tools like Cameo or Enterprise Architect. The Structural Perspective can have multiple top-level models; however, it is recommended to use only one unless more are needed. A Decomposition Model consists of _Structural Model Elements_. The Structural Model Elements include _Blocks_ (on the Structural Perspective) and _Relations_ between them, _Packages_ and _Notes_. Blocks (SP) represent solutions and own a number of attributes. Among them are a _name_, a _description_, a _discussion_ chat, potentially an _internal model_, a _version_, a flag for the _selected refinement variants_, an _abstraction level_ (either of type _Context Level_, _System Level_, _Component Level_ or a _Custom Abstraction Level_), a _Decomposition Model_ and a reference to a possibly refined _parent block_, _Notes_, a _Stereotype_ and possibly some _Refinement Groups_. Refinement Groups are the second important type of concept incorporated in the Structural Perspective model. They allow refining the Blocks by giving additional information and properties. Each refinement is represented by a Refinement Block owning a set of properties. Like Blocks (SP), Refinement Blocks can have a custom or a defined Stereotype, either of type _Technology_, _Mission Profile_ or _Application_. These definitions go loosely hand in hand with the content of the Mission Profile standard (MPFO). Additionally, Blocks (SP) can own zero to any number of properties (defined by a name, a value and a unit). Blocks (SP) have two predefined properties: the _Availability_ and _Feasibility_ properties.
_Solution Space Descriptions_ (using PMML) can be added to Blocks to allow multidimensional adjustments of the property variable values. A detailed description of each attribute can be found in the Block (SP) description. Two types of relations exist: _Channels_ and _Arrow Relations_. The abstract base class _Relation (SP)_ defines the basis of both. It contains an _HTMLDiv_ for adding information, a _version_ number, a _discussion_ chat and a _text_ label. Unlike relations on other perspectives, these _Relations (SP)_ are rather complex and independent elements. _Channels_ represent an information exchange between two Blocks (SP). _Arrow Relations_ are used for any other (often less complex) types of relation that shall be represented by an arrow. The _Effect_ relation extends the Arrow Relation by an _endpointType_ and an _effectType_. Worth noting is that _Decomposition_ and _Refinement_ relations are not supported, to keep the model simple. Use the properties to specify additional information instead. In a similar direction, analyses like _Block Interface_ - _Channel_ - _Block Interface_ consistency are not supported. Such sophisticated analyses are rarely used in abstract innovation descriptions. These are kept for the development phases.

**Meta Model Base:**

* Structural Perspective Model
* Decomposition Model
* Structural Model Element
* Relation (SP)

Meta Model Element: Description: **Structural Perspective Model** The _Structural Perspective Model_ is the diagram of the Structural Perspective of an innovation. It contains a _Decomposition Model_ which contains all model elements of the Structural Perspective. Example: A full Structural Perspective Model example is shown in Section 8.2.

Meta Model Element: Description: **Decomposition Model** The _Decomposition Model_ represents one of the major concepts (_Decomposition_, _Refinement_, _Variation_) and is fundamental for the modeling activities. The Decomposition Model can be thought of as the main canvas to draw the innovation on. Decomposition Models consist of any number of _Structural Model Elements_, which include _Blocks_ (of the Structural Perspective), _Relations_ between them, as well as _Notes_ and _Packages_. It is used for describing the top-level model of the overall Structural Perspective. Each Block can have its own Decomposition Model, and thus the Blocks with their Decomposition Models span up a hierarchy. Example:
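As an illustration of this hierarchy, the following minimal sketch models Blocks whose children form their Decomposition Model; all names are illustrative, not part of IMoG's meta model definition:

```python
from dataclasses import dataclass, field

@dataclass
class Block:  # heavily reduced Block (SP)
    name: str
    children: list["Block"] = field(default_factory=list)  # its Decomposition Model

def walk(block: Block, depth: int = 0) -> None:
    """Print the decomposition hierarchy spanned by the Blocks."""
    print("  " * depth + block.name)
    for child in block.children:
        walk(child, depth + 1)

scooter = Block("E-Scooter",
                [Block("Standing Platform"),
                 Block("Drive", [Block("Motor")])])
walk(scooter)
```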
Meta Model Element: Description: **Structural Model Element** The _Structural Model Element_ is the abstract object from which every model element of the _Decomposition Model_ derives: _Package_, _Block (SP)_, _Note_ and _Relation (SP)_. It owns the following attributes:

* The _discussion_ attribute to take comments about this element.
* The _version_ attribute to support version control.

Example: An example is shown for each Structural Model Element under its respective description.

Meta Model Element: Description: **Relation (SP) (Read: Relation on the Structural Perspective)** The abstract _Relation (SP)_ describes relations between _Blocks (SP)_. Relations are implemented by _Channels_ and _Arrow Relations_. In contrast to relations on the Functional Perspective or Quality Perspective, _Relations (SP)_ can own labels and the following attributes:

* A _description_ to solve the problem of lack of clarity of the relation by adding information. The description shall answer shortly "What shall the relation represent?", the reasoning behind the relation and its basic conditions to work. There is no template needed. Images or drafts provide valuable information.
* A _stereotype_ to define what type of relation it represents. The stereotype is not restricted to a set of predefined values, but can hold any string. This way, the stereotype can be used for describing any customized relation.
* A set of _Properties_ can be used to specify the relation and to form a basis for any consistency analysis.
* Additionally, the attributes _discussion_ and _version_ derived from the _Structural Model Element_, as well as _Notes_ to enrich the relation's comprehensibility.

Each of the implementing relations is described in more detail on its own. Example: An example is shown for each relation under its respective description.

**Model Elements:**

Block-related elements:

* Block (SP)
* Package
* Note
* Refinement Group
* Refinement Block
* Solution Space Description
* Property

Relation types:

* Channel
* Arrow Relation
* Effect Relation

In the following the model elements are introduced. First the seven Block-related elements are introduced and then the relations are presented.

Block-related elements: The concepts of Decomposition, Refinement and Variation cannot be strictly separated, as might falsely be concluded here. They of course affect each other; for more details see the description of the Block (SP).

Description: **Block (SP) (Read: Block on the Structural Perspective)** The _Block (SP)_ is one of the main elements of the Structural Perspective. It is used to represent any system and component that is modeled and is thus the most complex element. It derives the following attributes from the _Structural Model Element_ definition:

* The _discussion_ attribute to take comments about this element.
* The _version_ attribute to support version control.

In addition to the Structural Model Element definition, it contains the following attributes:

* A _name_.
* A _description_ of the Block to solve the problem of lack of clarity by adding information. The description shall answer shortly "What shall the Block represent?", the reasoning behind the Block and its basic conditions to work. There is no template needed. Images or drafts provide valuable information.
* An _abstraction level_ to define the level of abstraction the Block represents. It can be either a predefined abstraction level (_Context Level_, _System Level_, _Component Level_) or any other string.
* An optional _Stereotype_. For any _Block (SP)_, this could be any custom string or one of the predefined Stereotypes:
  * _Environment_ for representing the environment of the innovation.
  * _Innovation_ for representing the main focus of this model: the innovation.
  * _Logic_ for representing a functional unit for which it is undefined whether it is implemented as a physical unit, hardware or software.
  * _Service_ for representing actions of executing some functionality requested by users.
  * _Part_ for representing any physical unit including materials.
  * _Hardware_ for representing electrical units that can execute algorithms.
  * _Software_ for representing algorithms.
* Optional _Properties_ to specify the Block in more detail and to form a basis for consistency analysis. Predefined properties are:
  * An _Availability_ property for the estimation of the availability of the Block, or in other words, "To which timestamp is the component available?".
  * A _Feasibility_ property for the estimation of the feasibility, i.e., whether the Block is available and implementable at the given _Availability_ timestamp.
* An optional _Solution Space Description_ (in the form of PMMLs) to represent important dependencies between the Block's properties. The properties and the description together form the solution space.
* An optional _internal model_ providing more specification details to enhance the Block description.
* The attribute _selected variant_ represents if and which variant is chosen.

The Block (SP) supports the three main concepts (Decomposition, Refinement, Variation) in the form of more complex attributes:

* An optional _Decomposition Model_ to model how the Block is set up. This element is mainly used to build the system model hierarchy. By representing the concept of 'Decomposition' it is especially important.
* An optional set of disjoint _Refinement Groups_ to model possible refinements of the Block's properties. The _Refinement Groups_ can contain several _Refinement Blocks_ to model varieties of the properties. The sets of properties defined among all Refinement Groups and the Block itself shall be disjoint!
* An optional set of _Variants_ - which are themselves Blocks (SP) too - to represent deviations from the Block. Meanwhile, each variant sets its parent reference to the Block. A variant can thus only be part of one parent Block! Variants represent the implementation of the concept of 'Alternatives' for the Structural Perspective.

Selecting a variant of a Block affects the Block's attributes in the following way (see the sketch after this list):

* _Overwriting (but the Block's original value shall still be represented in the tool):_
  * The _name_ of the variant overwrites the Block's name. E.g.: the variant name 'Comfort E-Scooter' overwrites the Block's name 'E-Scooter'.
  * The same holds for the _discussion_, _version_, _description_, _abstraction level_ and _Stereotype_.
* _Extended:_
  * The _properties_ of the variant extend the Block's properties. If a variant property has the same name as a property of the Block, then the variant property overwrites the Block's property. E.g.: the property 'Weight' is set for the 'E-Scooter' and the variant 'Comfort E-Scooter'. In this case, the weight property from the 'Comfort E-Scooter' overwrites the weight property from the Block 'E-Scooter'.
  * The _Solution Space Description_ (SSD) of the variant extends the Block's SSD if and only if both SSDs have at most one common property. If both SSDs have more than one common property (e.g., 'weight' and 'speed'), then only the SSD from the variant is considered (because there is typically no function satisfying both SSDs).
  * The _Decomposition Model_ of the variant extends the Decomposition Model of the Block. There is no overwriting here. If both - variant and Block - have a block with the same name in their Decomposition Model, then both are considered! Because of the strict variant / parent reference, variants can own relations between blocks of their Decomposition Model and any blocks of the Decomposition Model of the parent Block! This makes the selection of variants a pretty powerful tool to change things!
  * The _Refinement Groups_ and _Refinement Blocks_ of the variant extend the Refinement Groups of the Block if the Refinement Group names are not the same. If any Refinement Group exists in both variant and Block, then the variant's Refinement Group overwrites the Refinement Group from the Block.
* _Unaffected:_
  * _Internal models_ of variants take precedence in the list. However, because internal models are only referenced and not used in the modeling methodology IMoG, there is no clear overwriting or extending in place. The user shall consider both or just one based on their needs.
  * The _selected variant_ stays of course relevant (otherwise it would be unclear which variant properties affect the Block!). If the variant itself has this property set and its own attributes are affected by its own variants, then this selection among the variants of the variant stays relevant too!
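The overwrite-and-extend rule for properties can be stated compactly in code. This is a minimal sketch under the simplifying assumption that properties are plain name/value pairs; all names are illustrative:

```python
def resolve_properties(block_props: dict[str, str],
                       variant_props: dict[str, str]) -> dict[str, str]:
    """Variant properties extend the Block's properties; on a name
    clash the variant's value overwrites the Block's value."""
    resolved = dict(block_props)    # inherit everything from the Block
    resolved.update(variant_props)  # variant extends / overwrites
    return resolved

# E-Scooter example from the FAQ in Section 8.4:
print(resolve_properties({"weight": "< 20kg"}, {"weight": "< 25kg"}))
# -> {'weight': '< 25kg'}  (variant 'Heavyweight' overwrites)
```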
Example:

Description: **Package** The _Package_ describes a collection of Structural Model Elements to allow creating a hierarchy in the Decomposition Model and to distinguish between Packages. There is nothing special about Packages otherwise: they have a _name_ and a set of _Structural Model Elements_. Example: A package named 'Mapped Software' with two blocks inside.

Meta Model Element: Description: **Note** The _Note_ can be used to add information to the model that cannot or should not be modeled. Notes should be used sparsely! Example:

Description: **Refinement Group** The _Refinement Group_ represents a container for modeling a variety of Refinements. It contains a non-empty set of _Refinement Blocks_ and a marker (called _selectedRefinement_) indicating which of the _Refinement Blocks_ is currently selected. _Refinement Groups_ cannot exist on their own. They must be included by a Block (SP) or a Refinement Block. The Blocks (SP) and Refinement Blocks may contain no or multiple Refinement Groups. The Refinement Blocks inside a container are recommended, but not restricted, to have the same Stereotype. It makes much sense to group only blocks with the same Stereotype instead of mixing, say, _Application_ specifications with _Technology_ specifications. The selected Refinement Block will be used to carry over Properties to the Block (SP) specification or to the Refinement Block specification, which are then used for consistency analysis. Example: An example Refinement Area on the innovation "An E-Scooter for short trips!" with one Refinement Group containing two _Technology_-stereotyped Blocks, 'Copper' and 'Iron' (properties and refinement details omitted here).

Description: **Refinement Block** The _Refinement Block_ represents 'Refinements' of Blocks (SP). Refinement Blocks represent the 'Refinement' concept and are thus of special importance. Refinement Blocks are a lighter variant of the Block (SP) and own fewer common attributes:

* A _discussion_ attribute to take comments about this element.
* A _version_ attribute to support version control.
* A _description_ of the Block to solve the problem of lack of clarity by adding information. The description shall answer shortly "What shall the Block represent?", the reasoning behind the Block and its basic conditions to work. There is no template needed. Images or drafts provide valuable information.
* A _name_.
* Optional _Properties_ to specify the Refinement Block and to form a basis for consistency analysis.
In addition to the common attributes above, the Refinement Block may contain an optional _Stereotype_. For any Refinement Block this could be a custom string or one of the predefined Stereotypes:

* _Technology_ for representing information about influencing factors of technology and materials of the innovation.
* _Mission Profile_ for representing environmental factors that influence the decision. It is especially used for restricting the environment to a special set of situations.
* _Application_ for representing specialties about how the system or component is used.

Additionally, Refinement Blocks can be refined further. This can be achieved by adding _Refinement Groups_ with Refinement Blocks to the block. Important to note is that Refinement Blocks cannot exist on their own and thus must be part of a Refinement Group. Refinement Groups are either owned by Blocks (SP) or Refinement Blocks. Thus it inevitably follows that any Refinement Block has, at the highest level, a Block (SP) as an owner. Example:

Description: **Property** _Properties_ are part of the specification of Blocks (SP), Refinement Blocks and Relations (SP). Properties are used to provide important information and build a basis for further analysis. Properties cannot exist on their own and must have any of the above-mentioned elements as an owner. Two properties are predefined in the meta model, defined for every Block (SP):

* The _Availability_ property is used for the estimation of the availability of the Block, or in other words, "To which timestamp is the component available?".
* The _Feasibility_ property is used for the estimation of the feasibility, i.e., whether the Block is available and implementable at the given _Availability_ timestamp.

Properties own a _name_, a _value_ and a _unit_. The _name_ and _unit_ are not preset to a specific set of values. They can be used for any customization to allow the user to add innovation-specific properties. For any domain and innovation it is recommended to build upon an existing 'domain properties' set instead of starting with just the two predefined properties.

Example: A Block 'Standing Platform' ("This is the base of the e-scooter, where the 'driver' can stand on. The steering gear is part of it!") with properties such as [Quality] Weight = [5-7] kg and [Availability] Feasibility = 100%.
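A Property can be pictured as a simple name/value/unit triple with an optional usability category. The following minimal sketch mirrors the 'Standing Platform' example; all names and values are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Property:
    name: str           # e.g. "Weight"; not preset to fixed values
    value: str          # e.g. "[5-7]" or "100"
    unit: str           # e.g. "kg"; free-form for domain customization
    category: str = ""  # optional usability category, e.g. "Quality"

platform_props = [
    Property("Weight", "[5-7]", "kg", category="Quality"),
    Property("Feasibility", "100", "%", category="Availability"),
]
```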
Description: **Channel** The _Channel_ represents a communication medium between Blocks (SP). Channels are used whenever the communication between two Blocks (SP) plays a significant role and needs a specification. Channels can trigger communication-based analyses. The definition of the Channel relation reflects that the meta model is not purely generic but slightly customized to the microelectronics domain. Channels implement the abstract Relation (SP). Channels thus own the following attributes:

* Two _Block (SP)_ endpoints.
* A _label_ and a _description_ to solve the problem of lack of clarity of the relation by adding information. The description shall answer shortly "What shall the relation represent?", the reasoning behind the relation and its basic conditions to work. There is no template needed. Images or drafts provide valuable information.
* A _stereotype_ to define what type of relation it represents. The stereotype is not restricted to a set of predefined values, but can hold any string. This way, the stereotype can be used for describing any customized relation.
* A set of _Properties_ can be used to specify the relation and to form a basis for any consistency analysis.
* Additionally, the attributes _discussion_ and _version_ derived from the _Structural Model Element_, as well as _Notes_ to enrich the relation's comprehensibility.

Example:
Meta Model Element: Description: **Arrow Relation** The _Arrow Relation_ represents a directed (uni- or bidirectional) relation between two elements. The Arrow Relation remains quite generic. The Arrow Relation implements the abstract Relation (SP). Arrow Relations thus own the following attributes:

* A _source_ and a _target_ endpoint.
* A _direction_, either of the type _unidirectional_ or _bidirectional_.
* A _label_ and a _description_ to solve the problem of lack of clarity of the relation by adding information. The description shall answer shortly "What shall the relation represent?", the reasoning behind the relation and its basic conditions to work. There is no template needed. Images or drafts provide valuable information.
* A _stereotype_ to define what type of relation it represents. The stereotype is not restricted to a set of predefined values, but can hold any string. This way, the stereotype can be used for describing any customized relation.
* A set of _Properties_ can be used to specify the relation and to form a basis for any consistency analysis.
* Additionally, the attributes _discussion_ and _version_ derived from the _Structural Model Element_, as well as _Notes_ to enrich the relation's comprehensibility.

Example:
Meta Model Element: Description: **Effect Relation** The _Effect Relation_ represents an
effect between two elements. The Effect Relation originates from the well-known effect chain analysis and can be used in a similar manner. Special types of Effect Relations can be implemented either by customizing the _label_ attribute or by creating a new relation category by implementing _Custom_ relations. The Effect Relation implements the abstract _Relation (SP)_. Effect Relations thus own the following attributes:

* An _effectType_, which can be set to either _desired_, _undesired_ or _misuse_.
* A _source_ and a _target_ endpoint, with the target endpoint having an _endpointType_ (e.g. 'thermal', 'acoustic' or 'radiation').
* A _direction_, either of the type _unidirectional_ or _bidirectional_.
* A _label_ and a _description_ to solve the problem of lack of clarity of the relation by adding information. The _description_ shall answer shortly "What shall the relation represent?", the reasoning behind the relation and its basic conditions to work. There is no template needed. Images or drafts provide valuable information.
* A _stereotype_ to define what type of relation it represents. The stereotype is not restricted to a set of predefined values, but can hold any string. This way, the stereotype can be used for describing any customized relation.
* A set of _Properties_ can be used to specify the relation and to form a basis for any consistency analysis.
* Additionally, the attributes _discussion_ and _version_ derived from the _Structural Model Element_, as well as _Notes_ to enrich the relation's comprehensibility.

Example:

### 8.2 E-Scooter example

The example of the Structural Perspective comprises the innovation requirements of "Providing mobility with an e-scooter" in one view called Structural View. The example is divided into three sub-views:

* The _top view_ (see Figure 8.3) describes the solutions of the innovation "Providing mobility with an e-scooter" from a high level.
* The _detailed view_ (see Figure 8.4) describes - next to the context level elements from the top view - the solution elements of the system level and the component level. Its purpose is the exemplary representation of all model elements from the meta model.
* The _tree view_ (see Figure 8.5) represents the model as a tree. This view is useful when the model gets too large to handle visually, or for searching purposes.

Figure 8.3: Top view: The solutions of the innovation "Providing mobility with an e-scooter" from a high level.

### 8.3 Structural Perspective: Strengths and Limitations

The Structural Perspective was the most difficult perspective to design because of the vast amount of possibilities to model solutions. The general concept of keeping things abstract and simple remains the same for the Structural Perspective. The main design decisions of the other perspectives also hold here:

* The perspective also provides complex filtering mechanisms via abstraction levels and stereotypes.
* The concepts of _Decomposition_, _Refinement_ and _Variability_ are all well supported.
* The Structural Perspective is constrainable via requirements.
* The Structural Perspective is still not part of the development or design phases!
* The Structural Perspective is based on Model-Based Systems Engineering.

The focus of the Structural Perspective is also not to go too deep into the behavior, to keep solutions abstract enough for innovation modeling. In line with this abstract focus, the Structural Perspective also contains no complex elements like Ports for interface consistency checks. Nonetheless, abstract communication modeling is supported via Channels and effect chains.
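As an illustration of how Channels and Effect Relations might be represented in such a model, here is a minimal sketch; all class names are illustrative, not part of IMoG's meta model definition:

```python
from dataclasses import dataclass, field

@dataclass
class Relation:                 # reduced abstract Relation (SP)
    source: str                 # Block (SP) endpoint names
    target: str
    label: str = ""
    stereotype: str = ""        # free-form string by design
    properties: dict[str, str] = field(default_factory=dict)

@dataclass
class Channel(Relation):
    pass                        # communication medium between two Blocks (SP)

@dataclass
class EffectRelation(Relation):
    effect_type: str = "desired"       # desired | undesired | misuse
    endpoint_type: str = ""            # e.g. "thermal", "acoustic"
    direction: str = "unidirectional"  # or "bidirectional"

# An undesired thermal effect between two hypothetical Blocks:
heat = EffectRelation("Motor", "Battery", label="waste heat",
                      effect_type="undesired", endpoint_type="thermal")
```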
The solution spaces themselves can be assessed well via the use of Key Performance Indicators, which are simply chosen properties of Blocks and relations. The Structural Perspective also supports properties for Blocks and relations and thus provides more than only 'structure'. As limitations, only a few elements are supported to model domain-specific specialties. These domain-specific elements should, however, be added based on the needs of the innovation. There also exists a high risk of 'model explosion', making the model unmaintainable. However, innovation modeling tends to be abstract and thus manageable. Overall, the Structural Perspective seems to be in a good shape for innovation modeling.

### 8.4 Structural Perspective FAQ

**Can a Refinement Block be decomposed into Blocks (SP)?** No, the meta model does not allow this.

**How to define connections between Software Blocks?** Software Blocks appear at two places in the Structural Perspective:

1. In the Mapped Software containers. Here they are listed as sets of Blocks where no interconnection can be drawn.
2. In the decomposition of some mapped Software Block. Here interconnections can be drawn.

_Example:_ An example may be three Blocks of Software that are mapped on a Central Processing Unit: a Sensor Fusion Block, a Perception Block and an independent Airbag Software Block. A connection between the Sensor Fusion and the Perception shall be added. In the Mapped Software representation no interconnection can be drawn. One way to cope with this situation is by adding a Software Block that generalizes the Sensor Fusion and Perception Software. This Block could be named Localization and Perception Software. The Localization and Perception Software is then decomposed into the Sensor Fusion and Perception Blocks, with the new connection and ports added.

**What is the commonality and difference between the Refinement and the Variability concept for Blocks in IMoG, and what is the purpose of this differentiation?** In general, a refinement of a given Block represents a fuller specification of this given Block (e.g. by adding a parameter). In general, variability describes a variable parameter of a specification of a Block. In fact, variability can be seen as multiple different refinements for a variable parameter of a common abstract specification of a Block.

Example 1: "Lidar Sensor A", optimized for high accuracy, and "Lidar Sensor B", optimized for a high range, are variations of characteristics of an abstract "Lidar Sensor" specification.

Example 2: "Zone Architecture A", optimized for low delay in the network, and "Zone Architecture B", optimized for a high bandwidth, are variations of characteristics of an abstract "Zone Architecture" specification.

Example 3: "Electronic Brake" and "Manual Brake" are variations of the used technology of an abstract "Brake" specification.

The question of the difference between the two concepts and the purpose of this differentiation remains. In fact, variability could be represented by using only the concept of refinements. However, IMoG defines them as orthogonal concepts with two representations: the Refinement Groups and the "variant-of" relation. The variant-of relation allows representing fuller specifications of a given Block regarding all Block attributes, including changes in the properties and changes in the decomposition. Refinement Groups allow defining additional properties among all variants of the abstract Block.
The purpose of this definition of refinement is to model several orthogonal refinements that only affect a few properties (and thus represent only small changes regarding all Block variants) without the need to represent them as alternatives, redefine every property, and suffer under the explosion of the number of variants. The purpose of the differentiation is thus to handle the "solution space explosion". Let's take a look back at the examples. The modeler still has the choice of representation for every example. Each example could be represented with a variant-of relation or with a Refinement Group regarding properties: Example 1 would most probably be represented with a Refinement Group with the Stereotype "Technology", because Sensor A and Sensor B only have different characteristics modeled as properties. A decomposition of these sensors is most probably not required. Example 2 would most probably be represented with a variant-of relation, as each zone architecture would differ not only in some properties but also in its decomposition, which is most probably of interest to be modeled too. Example 3 is a 'gray area' example, because both representations are feasible. If the different brakes should be modeled with their parts, then a representation with a variant-of relation is suitable. When these parts are not required to be modeled, then a representation with Refinement Groups might be more suitable.

**How do the Block Properties relate to Variant Properties?** Variant Properties overwrite Block Properties if they exist. Block Properties - if not existent in Variants - are inherited. Example: Block E-Scooter.weight < 20kg is overwritten by Variant Heavyweight.weight < 25kg and Small.weight < 15kg.

**Is it possible to do multiple selections of variants? What happens then?** Yes, it is possible to select multiple variants of the same Block. It means: I want all variants to be checked against the rest of the model variants (environment). With this it is possible to test, for example, the e-scooter in the environments "suburban" and "hilly Alaska" at the same time. The checks are 1x1 and not property hulls.

**How to represent estimations of different stakeholders?** Estimations are represented by using variants and requirements on those variants.

**Does some categorization of Properties exist?** Yes, Properties can be categorized into predefined and custom categories. A category does not carry any semantics and is only used for better usability. Categories are removed when properties are analyzed. Some predefined categories may include:

* Quality
* Complexity
* Availability
* System Data
* Function Data
* State Data
* KPI
* ... more custom categories

**How can constraint nets help with analysing solution spaces?** Constraints can be used to transform the property lists into constraints and then efficiently check their satisfaction in the whole nets (without the property categories, of course).
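A minimal sketch of that idea, treating each property as an interval constraint and checking joint satisfiability; this is purely illustrative, as IMoG does not prescribe a particular constraint solver:

```python
# Each constraint set maps a property name to an allowed (min, max) interval.
Constraints = dict[str, tuple[float, float]]

def satisfiable(*constraint_sets: Constraints) -> bool:
    """Intersect all interval constraints per property and check that
    every intersection is non-empty, i.e. the combination is satisfiable."""
    merged: Constraints = {}
    for cs in constraint_sets:
        for prop, (lo, hi) in cs.items():
            mlo, mhi = merged.get(prop, (float("-inf"), float("inf")))
            merged[prop] = (max(mlo, lo), min(mhi, hi))
    return all(lo <= hi for lo, hi in merged.values())

# E-scooter: a requirement of weight < 20 kg vs. a variant needing >= 22 kg.
print(satisfiable({"weight": (0, 20)}, {"weight": (22, 30)}))  # -> False
```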
## Chapter 9 Domain Knowledge Perspective

The Domain Knowledge Perspective is the fifth perspective in IMoG (see Figure 9.1). There has not been any further work done on defining the Domain Knowledge Perspective beyond what is given in Chapter 4.

Figure 9.1: Location of the Domain Knowledge Perspective in IMoG

## Part III. Tooling, Evaluation and Closing

## Chapter 10 Tooling Prototype

We created a tooling prototype for IMoG to evaluate our modeling methodology. The scope of the prototype is the Functional Perspective. The prototype was limited to this scope because the effort for consistent tooling between all perspectives within this project was too high. A sophisticated tooling is thus left open for an industrial development after this project. The Functional Perspective is based on the well-known Feature Models [9]. As already mentioned in Chapter 6 (Functional Perspective), we adjusted the meta model to our needs in the context of public committee-based road mapping. This includes the differentiation of "Features" and "Functions" and the addition of a description, an abstraction level and various attributes to each block. The abstraction level is primarily used for filtering purposes. A "configuration" of the Functional Perspective is defined similarly to Feature Models. However, configurations are currently not supported in the prototype. We considered two approaches to achieving tooling support for the Functional Perspective:

1. Translating or implementing IMoG in an existing modeling language like UML or SysML to take advantage of the existing tools. This approach is faster and eases the integration of IMoG into the internal modeling processes of the industry companies.
2. Implementing a dedicated tooling prototype for IMoG ourselves. This approach takes more effort, but brings a more elegant solution with a better learning curve and a better user experience.

For the purpose of an IMoG evaluation, we chose the latter approach, which promises a better user experience and less distortion in an evaluation. Figure 10.1 underlines this reasoning by discussing what kind of benefit we expect from using different kinds of tooling support for IMoG in a committee:

With _no tooling support_ used in the committee, the expectation is that IMoG generates a huge modeling overhead. The models would be drawn by hand and the committee would need guidance on how and what to model. An IMoG expert would essentially be required to create the models. The strong separation of perspectives and abstraction levels would make it hard to remain efficient. Thus, IMoG would barely be accepted in the committee. Only the use of IMoG's process would provide guidance and would lead to a slightly positive benefit in innovation modeling.

When _IMoG is translated to a generic modeling language_ like UML or SysML and supported by a generic tooling, then IMoG would provide a positive benefit to the committee. IMoG would be roughly supported by different types of diagrams for perspective and view separation, but the modeling elements would not fit perfectly and would be cumbersome to handle. It would take the committee some time to learn how to handle the models. Only an expert would be able to set up IMoG in the external tool such that it would be usable by others. All in all, the tooling would only have a moderate acceptance in the committee.

IMoG makes innovation modeling efficient when _fully supported by a dedicated tooling approach_. The dedicated tooling would support templates for perspectives and mapping views between different models. Each modeling element would have a dedicated interface corresponding to its considered use. There would be hardly any learning overhead, and the good user experience would be motivating. The tooling would understand IMoG's process and would request the necessary inputs from the user. A new user would be guided through the tooling and would not have to understand IMoG to the degree of an expert to provide meaningful content.
IMoG together with its tooling would significantly reduce the time required for innovation modeling and would help to understand the innovation efficiently. The tooling would have a high acceptance in the committee.

Figure 10.1.: Comparison between the expected benefit of having no tooling support, generic tooling support with an existing tool, and a dedicated tool for IMoG. The horizontal axis represents the amount of tooling support in the committee, ranging from "No Tooling support" over "Generic Tooling support" to "Dedicated Tooling support". The vertical axis represents the expected benefit of IMoG in a committee. In general, the more the tooling is designed around its application, the higher the expected acceptance!

## 10.1 Functional Perspective Prototype

The tooling prototype is publicly available under [https://genial.uni-ulm.de/imogdev/](https://genial.uni-ulm.de/imogdev/) (we would like to thank the University of Ulm for providing their tool _IRIS_ [1, 2], which we used as a basis for our IMoG prototype). In the following, the model of the Functional Perspective of the e-scooter (see Chapter 6) is created in the prototype to present the features of the tooling. Figure 10.2 shows the first view of the tool when opened; this view is briefly described before the model of the e-scooter is created. It contains a model editor in the center, a sidebar on each of the left and right sides of the tool, an IMoG Perspectives toolbar and a menu toolbar. The entry "File" in the menu bar can be used for creating, saving and loading models; the entry "Examples" can be used for loading example models like the e-scooter model; the "Undo" and "Redo" buttons revert and reapply changes; and the entry "Settings" can be used for changing the user interface (e.g., the grid size of the model editor). The menu bar additionally includes toggles to open and close the "Sidebar" and the "Visibility Sidebar" on the left and right side of the editor. The Sidebar on the left is used to display and manipulate information about selected model elements, to track changes in the model history (Edit History) and to present the Keyboard and Mouse Shortcuts. It is possible to view older model states through the model history. The Visibility Sidebar on the right can be used to filter model elements and highlight them. For example, it is possible to highlight all Function Blocks in green while hiding all mandatory relations. The IMoG Perspectives toolbar allows selecting the current perspective and view. The Functional Perspective is, for example, selected in the image. It is possible to switch to different perspectives in the prototype, but the model editor is disabled for the other perspectives.

Figure 10.2.: IMoG tooling - a first glance after opening the prototype in the web browser.

The model editor can be used to create the Functional Perspective model, like the e-scooter model. The root feature of the e-scooter model can be created by using the context menu of the Model Editor (see Figure 10.3a) and by entering its name "Providing mobility with an E-Scooter". Each Feature and Function is represented by a colored block. Each block can be selected and operated on (see Figure 10.3b), e.g., renamed, resized, changed in type and color, or duplicated. When a block is selected, the tab "IMoG" in the Sidebar will present further information about it (see Figure 10.3c).
This information can be manipulated, giving the blocks a description, changing their stereotype or setting their abstraction level. After adding further Features and Functions to the model, the Features and Functions can be related to each other. A relation can be created by selecting a block, choosing a relation from the relation toolbar under the block and then clicking on the target block. All eight relations of the Functional Perspective can be used, changed and enriched with labels and information. These include the "Mandatory" relation, the "Optional" relation, the "Requires" constraint relation, the "Excludes" constraint relation, the "Custom" relation, the "Alternative" relation, the "Or" relation with a given cardinality and the "Multi directional Custom" relation. Assuming the other Features, Functions and relations of the e-scooter model (see Chapter 6) are created, the model of the e-scooter should look similar to the model in Figure 10.4.

Figure 10.3: The context menu, selected Features and the sidebar of the prototype.

## 10.2 Tooling Evaluation

We conducted a user experience evaluation with two other researchers. We gave them some predefined tasks to learn the user interface, like creating some blocks, changing their properties, relating blocks to each other, saving and loading models and using the filtering features. Afterwards, they had to redraw a model given on paper so that we could observe how they used the tooling. Furthermore, we asked them to open an example model and posed some more challenging questions about the model to check whether they understood what the model was meant to present. Finally, we did a round of structured interviews with them regarding their user experience. Overall, the prototype was evaluated as responsive and easy to use. The main limitation we experienced was the absence of the other IMoG perspectives in the prototype. Nonetheless, our prototype demonstrated the large potential of a dedicated tooling approach for any IMoG-related project by not bothering the user with cumbersome interactions. Thus, we encourage the reader, other industry partners or committees to try out IMoG and its tooling prototype in their committee.

Figure 10.4: The model of the e-scooter in the prototype corresponding to the model presented on the Functional Perspective (see Chapter 6). It contains the root feature (yellow block), one layer of context level features (yellow blocks), one layer of system level functions (green blocks) and one layer of component level functions (purple blocks) with the corresponding relations (mandatory, optional, constraint, ...).

## Chapter 11 Evaluation

The initial evaluation in the original proposal of IMoG [6] stated two strengths: the appropriate level of abstraction for modeling innovations and the examined ways through the matrix. The examined ways include, for example, a top-down diagonal approach from the Context Level of the Strategy Perspective down to the Component Level of the Structural Perspective, or a bottom-up approach from the ideas of the semiconductor suppliers back to the context of the car manufacturers. The appropriate level of abstraction was confirmed and further underlined by the use cases where we applied IMoG: IMoG helped us to adequately tackle the innovations. We reconsidered our opinion regarding the mentioned ways through the matrix. Instead of specifying several possible ways through IMoG, we think it is more appropriate to follow the mentioned process for IMoG presented in section 12.
Furthermore, iterating between the problem space and solution space perspectives, similar to the process defined in the twin peaks model [14], is in our opinion the most appropriate approach.

The initial evaluation in the original proposal of IMoG [6] identified three potential limitations based on an academic example of wireless charging: scalability, detailed behavioral models, and bridging to product level models. The application of IMoG to the larger example of the e-scooter sheds more light on these topics. Firstly, we did not encounter any issues regarding scalability in the use cases modeled here, which indicates that IMoG as such does not introduce unnecessary and unmanageable complexities. Secondly, our use case here confirms the view that the absence of detailed behavioral models is actually a strength: details are not required and should be left out in abstract innovation modeling. Nonetheless, it should be possible to attach such detailed models to solution blocks whenever needed. Finally, the bridge from an IMoG model to a product level model remains solvable: the recommended choices are bridging the gap by referencing IMoG's elements, using transformations of IMoG models to established system level development languages, or translating the IMoG model into a development-focused framework (see Broy et al. in [3]) while adding the behavioral aspects to the designed framework.

While applying the use cases we learned two more lessons. Reordering the perspectives into the problem space and solution space made it easier to apply IMoG; this distinction was added to the design principles of IMoG (see section 13). Another lesson was that interpreting abstraction levels as filter functionality suits the modeler better than interpreting abstraction levels as a division into diagrams. We found that dividing an innovation model into several pieces would do more harm to its user experience and usefulness than good.

## Chapter 12 Closing

This technical document presented the Innovation Modeling Grid in detail. This document is the successor of two publications on IMoG [6, 11] and focuses on presenting all details of the methodology. Beginning with the process and an overview, each perspective was presented in detail. Afterwards, the tooling and the evaluation were presented. Overall, we think that IMoG has great potential to be really useful in committee-driven innovation modeling. Next to the application to the e-scooter example and the application in a project of the GAIA-X family [17], IMoG is currently applied in the "Arbeitskreis Automotive" in a workshop series. This document shows that much about IMoG has already been researched; however, IMoG still has some loose ends in parts of solution definition and tooling. The model of IMoG can still improve, and this improvement is enabled by crucial feedback from applications like the workshop series. Therefore, if one is interested in committee-driven innovation modeling, we encourage them to take a look at IMoG and tailor it to the needs of their committee.

## Part IV Appendix

This appendix contains the following parts:

* The glossary of IMoG

## Glossary

Definitions from the literature are abbreviated with the following references.
2309.04103
New improvement to Falconer distance set problem in higher dimensions
We show that if a compact set $E\subset \mathbb{R}^d$ has Hausdorff dimension larger than $\frac{d}{2}+\frac{1}{4}-\frac{1}{8d+4}$, where $d\geq 3$, then there is a point $x\in E$ such that the pinned distance set $\Delta_x(E)$ has positive Lebesgue measure. This improves upon bounds of Du-Zhang and Du-Iosevich-Ou-Wang-Zhang in all dimensions $d \ge 3$. We also prove lower bounds for Hausdorff dimension of pinned distance sets when $\dim_H (E) \in (\frac{d}{2} - \frac{1}{4} - \frac{3}{8d+4}, \frac{d}{2}+\frac{1}{4}-\frac{1}{8d+4})$, which improves upon bounds of Harris and Wang-Zheng in dimensions $d \ge 3$.
Xiumin Du, Yumeng Ou, Kevin Ren, Ruixiang Zhang
2023-09-08T03:46:50Z
http://arxiv.org/abs/2309.04103v1
# New improvement to Falconer distance set problem in higher dimensions

###### Abstract.

We show that if a compact set \(E\subset\mathbb{R}^{d}\) has Hausdorff dimension larger than \(\frac{d}{2}+\frac{1}{4}-\frac{1}{8d+4}\), where \(d\geq 3\), then there is a point \(x\in E\) such that the pinned distance set \(\Delta_{x}(E)\) has positive Lebesgue measure. This improves upon bounds of Du-Zhang and Du-Iosevich-Ou-Wang-Zhang in all dimensions \(d\geq 3\). We also prove lower bounds for Hausdorff dimension of pinned distance sets when \(\dim_{H}(E)\in(\frac{d}{2}-\frac{1}{4}-\frac{3}{8d+4},\frac{d}{2}+\frac{1}{4}-\frac{1}{8d+4})\), which improves upon bounds of Harris and Wang-Zheng in dimensions \(d\geq 3\).

## 1. Introduction

A classical question in geometric measure theory, introduced by Falconer in the early 80s ([9]), is how large the Hausdorff dimension of a compact subset of \(\mathbb{R}^{d}\), \(d\geq 2\), needs to be to ensure that the Lebesgue measure of the set of its pairwise Euclidean distances is positive. Let \(E\subset\mathbb{R}^{d}\) be a compact set; its _distance set_ \(\Delta(E)\) is defined by

\[\Delta(E):=\left\{|x-y|:x,y\in E\right\}.\]

**Conjecture**.: [Falconer] _Let \(d\geq 2\) and \(E\subset\mathbb{R}^{d}\) be a compact set. Then_

\[\dim_{H}(E)>\frac{d}{2}\Rightarrow|\Delta(E)|>0.\]

_Here \(|\cdot|\) denotes the Lebesgue measure and \(\dim_{H}(\cdot)\) is the Hausdorff dimension._

The main result in this paper improves the best-known dimensional threshold towards the Falconer conjecture in dimensions \(d\geq 3\).

**Theorem 1.1**.: _Let \(d\geq 3\) and \(E\subset\mathbb{R}^{d}\) be a compact set. Then_

\[\dim_{H}(E)>\frac{d}{2}+\frac{1}{4}-\frac{1}{8d+4}\Rightarrow|\Delta(E)|>0.\]

Falconer's conjecture remains open in all dimensions as of today. It has attracted a great amount of attention in the past decades. To name a few landmarks: in 1985, Falconer [9] showed that \(|\Delta(E)|>0\) if \(\dim_{H}(E)>\frac{d}{2}+\frac{1}{2}\). Bourgain [1] was the first to lower the threshold \(\frac{d}{2}+\frac{1}{2}\). For \(x\in\mathbb{R}^{d}\), the _pinned distance set_ of \(E\) is \(\Delta_{x}(E):=\{|x-y|:y\in E\}\). However, getting a lower bound for \(\dim_{H}(\Delta_{x}(E))\) given \(\dim_{H}(E)=\frac{d}{2}\) is much more challenging. Falconer in [9] proved that \(\dim_{H}(\Delta(E))\geq\frac{1}{2}\), and a pinned version \(\sup_{x\in E}\dim_{H}(\Delta_{x}(E))\geq\frac{1}{2}\) was proved in [17]. The reason \(\frac{1}{2}\) is a natural barrier is that if there existed a \(\frac{1}{2}\)-dimensional ring \(R\subset[1,2]\), then the distance set of \(R^{d}\) would be contained in \(\sqrt{R}\), which has Hausdorff dimension \(\frac{1}{2}\). Even though the works of Bourgain [2] and Katz-Tao [13] showed a quantitative discretized sum-product theorem that rules out the existence of \(R\), it is still a challenging problem to obtain explicit bounds for the discretized sum-product problem, which asks for the largest exponent \(\gamma>0\) such that \(\min(|A+A|_{\delta},|A\cdot A|_{\delta})\gtrsim|A|^{1+\gamma}\) for any Katz-Tao \((\delta,\frac{1}{2})\)-set \(A\subset\mathbb{R}^{1}\) (see [13] for the definition of such sets). See [11], [20], [16], and [23] for the best known bounds for the discretized sum-product problem.
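To spell out the containment behind this barrier (a routine check recorded here for the reader's convenience, not quoted from the cited works; we treat \(R\) as a genuine subring of \(\mathbb{R}\)): for \(x,y\in R^{d}\), closure of \(R\) under differences, products, and sums gives

\[|x-y|^{2}=\sum_{i=1}^{d}(x_{i}-y_{i})^{2}\in R,\qquad\text{hence}\qquad\Delta(R^{d})\subset\sqrt{R}:=\{\sqrt{t}:t\in R,\ t\geq 0\}.\]

Since \(t\mapsto\sqrt{t}\) is bi-Lipschitz on compact subsets of \((0,\infty)\) and bi-Lipschitz maps preserve Hausdorff dimension, \(\sqrt{R}\) has the same Hausdorff dimension as \(R\) away from \(0\).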
For the Falconer distance set problem, the only explicit improvements known over \(\frac{1}{2}\) for \(\dim_{H}(\Delta(E))\) when \(\dim_{H}(E)=\frac{d}{2}\) were derived in [25] and [26] for \(d=2\) and \(d=3\), and the latter paper also proved box dimension results for \(d\geq 4\). In personal communication, Shmerkin-Wang extended Stull's bound in \(d=2\) to the case \(\dim_{H}(E)=1\). Furthermore, by plugging the sharp radial projection estimates of [19] into the proofs of Theorems 1.2 and 1.3 of [26], we get improved bounds over those stated in these theorems for \(d\geq 3\). In summary, the previously known results are:

* if \(d=2\), \(\dim_{H}(E)=1\), then \(\sup_{x\in E}\dim_{H}(\Delta_{x}(E))\geq\frac{3}{4}\);
* if \(d=3\), \(\dim_{H}(E)=\frac{3}{2}\), then \(\sup_{x\in E}\dim_{H}(\Delta_{x}(E))\geq\frac{5}{8}\);
* if \(d\geq 4\), \(\dim_{H}(E)=\frac{d}{2}\), then \(\sup_{x\in E}\dim_{B}(\Delta_{x}(E))\geq\frac{d+2}{2(d+1)}\).

A key obstruction to Hausdorff dimension estimates in [26] when \(d\geq 4\) is that \(\frac{d}{2}\)-dimensional measures don't necessarily have decay around small neighborhoods of \((d-1)\)-planes, so one cannot apply Bourgain's discretized projection theorem to such measures. Note that in Theorem 1.3, \(f(\frac{d}{2})=\frac{d+2}{2(d+1)}>\frac{1}{2}\). This matches the best-known bound \(\frac{5}{8}\) when \(d=3\), which was obtained using an entirely different approach. And when \(d\geq 4\), this is the first time one obtains an explicit improved bound over \(\frac{1}{2}\) for the Hausdorff dimension of pinned distance sets of a \(\frac{d}{2}\)-dimensional set (and this matches the box dimension bound stated above).

Finally, let us remark on what is known when \(\alpha<\frac{d}{2}\). Falconer [9] proved that \(\dim_{H}(\Delta(E))\geq\max(\alpha-\frac{d-1}{2},0)\); this was improved in dimensions \(d=2\) and \(3\) to \(\sup_{x\in E}\dim_{H}(\Delta_{x}(E))\geq\frac{\alpha+1}{d+1}\) for \(\alpha\in(\frac{d-1}{2},\frac{d}{2})\) by [19], [26]. In dimensions \(d\geq 4\), however, their approach only recovers box dimension estimates. Our bound \(\sup_{x\in E}\dim_{H}(\Delta_{x}(E))\geq f(\alpha)\) from Theorem 1.3 is weaker than \(\frac{\alpha+1}{d+1}\) for all \(\alpha<\frac{d}{2}\), but it works for Hausdorff dimension for all \(d\geq 3\).

### Old and new ideas

We will adapt the good tube/bad tube and decoupling method pioneered by [10] for dimension \(d=2\) and continued in [5] for even dimensions \(d\). In both papers, Orponen's radial projection theorem [18] plays a key role. However, the argument does not perform well for odd dimensions \(d\), in which case the result of [7] provides a better bound for distance sets. The reason is that Orponen's radial projection theorem only works for sets with dimension \(>d-1\), where \(d\) is the dimension of the ambient space. To overcome this issue, [5] projected the set onto a generic \((\frac{d}{2}+1)\)-dimensional subspace of \(\mathbb{R}^{d}\) (assuming \(d\) is even). While this orthogonal projection trick works well in even dimensions, for odd dimensions we are forced to project to a \((\frac{d+1}{2})\)-dimensional subspace instead, which creates some loss. To avoid this loss, a natural approach is to avoid the initial orthogonal projection; but then we need a radial projection theorem that works for sets of dimension \(\leq d-1\). The starting point for this paper is a new radial projection result, Theorem 3.1, by the third author [22]. For concreteness, let us work in \(\mathbb{R}^{3}\) (so \(d=3\)) at a single scale \(r\).
Then Theorem 3.1 tells us that, given \(\alpha\)-dimensional Frostman measures \(\mu_{1},\mu_{2}\), most \(r\)-tubes either have \(\mu_{2}\)-mass \(\lesssim r^{\alpha-\varepsilon}\) or lie in some heavy \((r^{\kappa},2)\)-plate, which is the \(r^{\kappa}\)-neighborhood of a \(2\)-dimensional hyperplane with \((\mu_{1}+\mu_{2})\)-mass \(\gtrsim r^{\eta}\) (here, \(\eta\ll\kappa\ll\varepsilon\)). We point out that the bound \(r^{\alpha-\varepsilon}\) is a significant improvement over the bound \(r^{(d-1)/2-\varepsilon}\) one would get through combining Orponen's radial projection theorem and orthogonal projections in odd dimensions; the cost is that we need to deal with heavy \((r^{\kappa},2)\)-plates. The technical novelty of this paper is how to deal with these plates in the context of the decoupling framework.

Following the setup in [5], we let \(\mu_{1},\mu_{2}\) be \(\alpha\)-dimensional Frostman measures supported on subsets \(E_{1},E_{2}\subset E\) with \(\operatorname{dist}(E_{1},E_{2})\gtrsim 1\). Let \(R_{0}\) be a large number and define the scales \(R_{j}=2^{j}R_{0}\). Fix a scale \(j\); we shall work with \(R_{j}^{-1/2+\beta}\)-tubes. Let \(G(x)\) be the set of \(y\in E_{1}\) such that \(x,y\) don't both lie in some heavy \((R_{j}^{-\kappa},2)\)-plate. Define a good \(R_{j}^{-1/2+\beta}\)-tube to be one that has mass \(\lesssim R_{j}^{-\alpha/2+\varepsilon}\) and doesn't lie in any heavy plate, and define \(\mu_{1,g}\) to be the part of \(\mu_{1}\) coming from all good tubes. With these definitions, we can try to follow the framework of [5] and show that \(\mu_{1}|_{G(x)}\) and \(\mu_{1,g}\) are close in \(L^{1}\)-norm.

There are two subtleties with this approach. First, the error in \(\|\mu_{1}|_{G(x)}-\mu_{1,g}\|_{L^{1}}\) may contain contributions from tubes \(T\) at the border of \(G(x)\), i.e. \(T\subset G(x)\) but \(2T\not\subset G(x)\). To overcome this issue, we introduce a probabilistic wiggle. If we work with heavy \((aR_{j}^{-\kappa},2)\)-plates for a uniformly random \(a\in[1,2]\), then the borderline error will be small on average, so there exists a choice of \(a\) that makes the borderline error small. The second issue is that we do not have the luxury of working at a single scale, so we need to introduce some ideas from [25, Appendix B]. Specifically, we will construct a decreasing sequence \(\mathbb{R}^{3}\supset G_{0}(x)\supset G_{1}(x)\supset\cdots\) such that \(\mathbb{R}^{3}\setminus G_{0}(x)\) is contained in a \((R_{0}^{-\kappa/2},2)\)-plate, \(\mu(G_{0}(x)\setminus G_{\infty}(x))\leq R_{0}^{-\kappa/2}\), and \(G_{j}(x)\) is disjoint from all heavy \((aR_{j}^{-\kappa},2)\)-plates containing \(x\) (where \(a\in[1,2]\) is the probabilistic wiggle). Since all the \(G_{j}\)'s are close in measure, we can combine multiscale information to show that \(\mu_{1}|_{G_{0}(x)}\) and \(\mu_{1,g}\) are close in \(L^{1}\)-norm.

The final step is to show that \(G_{0}(x)\) is large. This is equivalent to showing that any \((R_{0}^{-\kappa/2},2)\)-plate has small (\(\leq\frac{1}{2}\)) measure for \(R_{0}\) large enough. If this is not true, then by compactness we can find a hyperplane with nonzero measure, and then we reduce to the Falconer problem in two dimensions. In this case, we have strong bounds for distance sets (e.g. see Wolff [29]) because \(\dim_{H}(E)>1.5\) is large.

**Remark 1.4**.: _In dimensions \(d\geq 4\), a slightly simpler and more intuitive choice of good measures exists.
We refer the interested reader to [6] for more details, where such a direction is pursued and the same results as in Theorems 1.1 and 1.2 (except for the \(d=3\) case) are obtained as a consequence of new weighted refined decoupling estimates. In dimension \(d=3\) though, the strategy still works but doesn't seem to yield as good a result as the current paper._

**Remark 1.5**.: _What is the limitation of the methods of this paper and the companion paper [6]? The radial projection theorem of [22] is sharp in the following sense: let \(E_{1},E_{2}\) each be unions of \(r^{-\alpha}\) many \(r\)-balls satisfying an \(\alpha\)-dimensional spacing condition such that \(\operatorname{dist}(E_{1},E_{2})\geq\frac{1}{2}\), and let \(\mu_{1},\mu_{2}\) be probability measures supported on \(E_{1},E_{2}\) respectively such that each \(r\)-ball in \(E_{i}\) has \(\mu_{i}\)-measure \(r^{\alpha}\), \(i=1,2\). Then, we know that many \(r\)-tubes can intersect at least one \(r\)-ball in \(E_{1}\) and one \(r\)-ball in \(E_{2}\). Thus, many \(r\)-tubes can have both \(\mu_{1}\)- and \(\mu_{2}\)-mass at least \(r^{\alpha}\), so \(r^{\alpha}\) is the best possible threshold for the good tubes in this paper. To make further progress, we suggest looking at improving the decoupling framework._

### Outline of the paper

In Section 2, we outline two main estimates and prove Theorem 1.2 using them. In Section 3, we list several results that we will use to bound the bad part, including the new radial projection estimate by the third author [22] and two geometric lemmas governing the physical location of small plates with large mass. In Section 4, we construct the good measure and prove the first main estimate, Proposition 2.1. In Section 5, we prove the second main estimate, Proposition 2.2, using refined decoupling. In Section 6, we prove Theorem 1.3 using the two main estimates and a framework of Liu [15]. In Section 7, we give some remarks about the extension of Theorem 1.2 to more general norms and its connection with the Erdős distance problem.

### Notations

Throughout the article, we write \(A\lesssim B\) if \(A\leq CB\) for some absolute constant \(C\); \(A\sim B\) if \(A\lesssim B\) and \(B\lesssim A\); \(A\lesssim_{\varepsilon}B\) if \(A\leq C_{\varepsilon}B\); \(A\lessapprox B\) if \(A\leq C_{\varepsilon}R^{\varepsilon}B\) for any \(\varepsilon>0\), \(R>1\). For a large parameter \(R\), \(\operatorname{RapDec}(R)\) denotes those quantities that are bounded by a huge (absolute) negative power of \(R\), i.e. \(\operatorname{RapDec}(R)\leq C_{N}R^{-N}\) for arbitrarily large \(N>0\). Such quantities are negligible in our argument.

For \(x\in\mathbb{R}^{d}\) and \(t>0\), \(B(x,t)\) is the ball in \(\mathbb{R}^{d}\) of radius \(t\) centered at \(x\). A \(\delta\)-tube is the intersection of an infinite cylinder of radius \(\delta\) and \(B(0,10)\). This is not standard (usually, a tube is defined to be a finite \(\delta\)-cylinder with length \(1\)), but this definition won't cause problems and is slightly more convenient for us. For a set \(E\subset\mathbb{R}^{d}\), let \(E^{c}=\mathbb{R}^{d}\setminus E\), and \(E^{(\delta)}\) the \(\delta\)-neighborhood of \(E\). For subsets \(E_{1},E_{2}\subset\mathbb{R}^{d}\), \(\operatorname{dist}(E_{1},E_{2})\) is their Euclidean distance. For \(A\subset X\times Y\) and \(x\in X\), define the slice \(A|_{x}=\{y\in Y:(x,y)\in A\}\). A similar definition applies to \(A|_{y}\) when \(y\in Y\).
For a measure \(\mu\) and a measurable set \(G\), define the restricted measure \(\mu|_{G}\) by \(\mu|_{G}(A)=\mu(G\cap A)\) for all measurable \(A\subset\mathbb{R}^{d}\). We say a measure \(\mu\) supported in \(\mathbb{R}^{d}\) is an \(\alpha\)-dimensional measure with constant \(C_{\mu}\) if it is a probability measure satisfying

\[\mu(B(x,t))\leq C_{\mu}t^{\alpha},\qquad\forall x\in\mathbb{R}^{d},\,\forall t>0.\]

An \((r,k)\)-plate \(H\) in \(\mathbb{R}^{d}\) is the \(r\)-neighborhood of a \(k\)-dimensional affine plane in the ball \(B^{d}(0,10)\). More precisely,

\[H:=\{z\in B(0,10):\operatorname{dist}(z,P_{H})<r\},\]

where \(P_{H}\) is a \(k\)-dimensional affine plane, which is called the central plane of \(H\). We can also write \(H=P_{H}^{(r)}\cap B(0,10)\). A \(C\)-scaling of \(H\) is

\[CH=\{z\in B(0,10):\operatorname{dist}(z,P_{H})<Cr\}.\]

Denote the surface of \(H\) by

\[\operatorname{Surf}(H):=\left\{z\in B(0,10):\operatorname{dist}(z,P_{H})=r\right\}.\]

Since a \(\delta\)-tube is a \((\delta,1)\)-plate, the same conventions apply to tubes. To prevent the reader from feeling too attached to \(B(0,10)\), we will see in the proofs that \(B(0,10)\) can be replaced by any smooth bounded convex body that contains \(B(0,10)\). We say that an \((r,k)\)-plate \(H\) is \(\gamma\)-concentrated on \(\mu\) if \(\mu(H)\geq\gamma\). Given a collection \(\mathcal{H}\) of plates in \(\mathbb{R}^{d}\) and a point \(x\in\mathbb{R}^{d}\), define \(\mathcal{H}(x):=\left\{H\in\mathcal{H}:x\in aH\right\}\), where \(a\) is a "master parameter" that will be defined later. We will work with a collection \(\mathcal{E}_{r,k}\) of essentially distinct \((r,k)\)-plates with the following properties:

* Each \((\frac{r}{2},k)\)-plate intersecting \(B(0,1)\) lies in at least one plate of \(\mathcal{E}_{r,k}\);
* For \(s\geq r\), every \((s,k)\)-plate contains \(\lesssim_{k,d}\left(\frac{s}{r}\right)^{(k+1)(d-k)}\) many \((r,k)\)-plates of \(\mathcal{E}_{r,k}\).

For example, when \(k=1\) and \(d=2\), we can simply pick \(\sim r^{-1}\) many \(r\)-tubes in each of an \(r\)-net of directions. This generalizes to higher \(k\) and \(d\) via a standard \(r\)-net argument, which can be found in [22, Section 2.2].

**Acknowledgements**.: _XD is supported by NSF DMS-2107729 (transferred from DMS-1856475), NSF DMS-2237349 and a Sloan Research Fellowship. YO is supported by NSF DMS-2142221 and NSF DMS-2055008. KR is supported by an NSF GRFP fellowship. RZ is supported by NSF DMS-2207281 (transferred from DMS-1856541), NSF DMS-2143989 and a Sloan Research Fellowship._

## 2. Main estimates

In this section, we outline two main estimates, from which Theorems 1.2 and 1.3 follow. Let \(E\subset\mathbb{R}^{d}\) be a compact set with positive \(\alpha\)-dimensional Hausdorff measure. Without loss of generality, assume that \(E\) is contained in the unit ball, and that there are subsets \(E_{1}\) and \(E_{2}\) of \(E\), each with positive \(\alpha\)-dimensional Hausdorff measure, with \(\operatorname{dist}(E_{1},E_{2})\gtrsim 1\). Then there exist \(\alpha\)-dimensional probability measures \(\mu_{1}\) and \(\mu_{2}\) supported on \(E_{1}\) and \(E_{2}\), respectively. To relate the measures to the distance set, we consider their push-forward measures under the distance map.
For a fixed point \(x\in E_{2}\), let \(d^{x}:E_{1}\to\mathbb{R}\) be the pinned distance map given by \(d^{x}(y):=|x-y|\). Then, the pushforward measure \(d_{*}^{x}(\mu_{1})\), defined as

\[\int_{\mathbb{R}}\psi(t)\,d_{*}^{x}(\mu_{1})(t)=\int_{E_{1}}\psi(|x-y|)\,d\mu_{1}(y),\]

is a natural measure that is supported on \(\Delta_{x}(E_{1})\). In the following, we will construct another complex-valued measure \(\mu_{1,g}^{x}\) that is the _good_ part of \(\mu_{1}\) with respect to \(\mu_{2}\) depending on \(x\), and study its pushforward under the map \(d^{x}\). The main estimates are the following.

**Proposition 2.1**.: _Let \(d\geq 2\), \(k\in\{1,2,\cdots,d-1\}\), \(k-1<\alpha\leq k\), and \(\varepsilon>0\). Then there exists a small \(\beta(\varepsilon)>0\) such that the following holds for sufficiently large \(R_{0}(\beta,\varepsilon)\). Assume \(\mu_{1,g}^{x}\) has been constructed following the procedure in Section 4.2 below. Then there is a subset \(E_{2}^{\prime}\subset E_{2}\) with \(\mu_{2}(E_{2}^{\prime})\geq 1-R_{0}^{-\beta}\) and for each \(x\in E_{2}^{\prime}\), there exists a set \(G(x)\subset B^{d}(0,10)\) where \(B^{d}(0,10)\setminus G(x)\) is contained within some \((R_{0}^{-\beta},k)\)-plate, such that the following estimate holds:_

\[\|d_{*}^{x}(\mu_{1}|_{G(x)})-d_{*}^{x}(\mu_{1,g}^{x})\|_{L^{1}}\leq R_{0}^{-\beta}. \tag{2.1}\]

**Proposition 2.2**.: _Let \(d\geq 2\), \(0<\alpha<d\), and \(\varepsilon>0\). Then for sufficiently small \(\beta(\varepsilon)\in(0,\varepsilon)\) in the construction of \(\mu_{1,g}^{x}\) in Section 4.2 below,_

\[\int_{E_{2}}\|d_{*}^{x}(\mu_{1,g}^{x})\|_{L^{2}}^{2}d\mu_{2}(x)\lesssim_{d,\alpha,\varepsilon}\int_{\mathbb{R}^{d}}|\xi|^{-\frac{\alpha d}{d+1}+\varepsilon}|\widehat{\mu}_{1}(\xi)|^{2}\,d\xi+R_{0}^{d}.\]

**Remark 2.3**.: _Broadly speaking, \(G(x)\) removes the contributions from small plates that contain large mass. This makes it possible to efficiently apply the new radial projection theorem, Theorem 3.1._

Proof of Theorem 1.2 using Propositions 2.1 and 2.2.: For \(d\geq 3\) and \(\frac{d}{2}<\alpha<\frac{d+1}{2}\), let \(k=\lfloor\frac{d}{2}\rfloor+1\) so that \(k-1<\alpha\leq k\). If \(\mu_{1}\) gives nonzero mass to some \(k\)-dimensional affine plane, then we are done by applying [9] to that \(k\)-plane since \(\alpha>\frac{d}{2}\geq\frac{k+1}{2}\). Thus, assume \(\mu_{1}\) gives zero mass to every \(k\)-dimensional affine plane. By a compactness argument, there exists \(r_{0}>0\) such that \(\mu_{1}(H)<\frac{1}{1000}\) for any \((r_{0},k)\)-plate \(H\). Now the two propositions tell us that there is a point \(x\in E_{2}\) such that

\[\|d_{*}^{x}(\mu_{1}|_{G(x)})-d_{*}^{x}(\mu_{1,g}^{x})\|_{L^{1}}\leq\frac{1}{1000},\]
\[\|d_{*}^{x}(\mu_{1,g}^{x})\|_{L^{2}}^{2}\lesssim I_{\lambda}(\mu_{1})+R_{0}^{d}<\infty,\]

by choosing \(R_{0}\) sufficiently large. Here \(I_{\lambda}(\mu_{1})\) is the \(\lambda\)-dimensional energy of \(\mu_{1}\), \(\lambda=d-\frac{\alpha d}{d+1}+\varepsilon\), and we used the Fourier representation of \(I_{\lambda}\):

\[I_{\lambda}(\mu)=\int\int|x-y|^{-\lambda}d\mu(x)d\mu(y)=C_{d,\lambda}\int_{\mathbb{R}^{d}}|\xi|^{\lambda-d}|\widehat{\mu}(\xi)|^{2}\,d\xi.\]

One has \(I_{\lambda}(\mu_{1})<\infty\) if \(\lambda<\alpha\), which is equivalent to \(\alpha>\frac{d(d+1)}{2d+1}=\frac{d}{2}+\frac{1}{4}-\frac{1}{8d+4}\). Now \(B^{d}(0,10)\setminus G(x)\) is contained in some \((R_{0}^{-\beta},k)\)-plate \(H\).
If \(R_{0}\) is chosen sufficiently large such that \(R_{0}^{\beta}>r_{0}^{-1}\), then

\[\mu_{1}(G(x))\geq 1-\mu_{1}(H)>1-\frac{1}{1000}.\]

Since \(d_{*}^{x}(\mu_{1}|_{G(x)})\) is a positive measure, its \(L^{1}\) norm is \(\mu_{1}(G(x))>1-\frac{1}{1000}\). Thus,

\[\int_{\Delta_{x}(E)}|d_{*}^{x}(\mu_{1,g}^{x})|=\int|d_{*}^{x}(\mu_{1,g}^{x})|-\int_{\Delta_{x}(E)^{c}}|d_{*}^{x}(\mu_{1,g}^{x})|\\ \geq 1-\frac{2}{1000}-\int|d_{*}^{x}(\mu_{1}|_{G(x)})-d_{*}^{x}(\mu_{1,g}^{x})|\geq 1-\frac{3}{1000}.\]

On the other hand,

\[\int_{\Delta_{x}(E)}|d_{*}^{x}(\mu_{1,g}^{x})|\leq|\Delta_{x}(E)|^{1/2}\left(\int|d_{*}^{x}(\mu_{1,g}^{x})|^{2}\right)^{1/2}.\]

Therefore, \(|\Delta_{x}(E)|>0\). 

The proof of Theorem 1.3 is deferred until Section 6.

## 3. Radial projections and heavy plates

In this section, we list several results that we will use in Sections 4.4 and 4.6 to bound the bad part of \(\mu_{1}\). We'll use the following new radial projection estimate, which follows from [22, Theorem 1.13].

**Theorem 3.1**.: _Let \(d\geq 2\), \(k\in\{1,2,\cdots,d-1\}\), \(k-1<\alpha\leq k\), and fix \(\eta,\varepsilon>0\), and two \(\alpha\)-dimensional measures \(\mu_{1},\mu_{2}\) with constants \(C_{\mu_{1}},C_{\mu_{2}}\) supported on \(E_{1},E_{2}\subset B(0,1)\) respectively. There exists \(\gamma>0\) depending on \(\eta,\varepsilon,\alpha,k\) such that the following holds. Fix \(\delta<r<1\). Let \(A\) be the set of pairs \((x,y)\in E_{1}\times E_{2}\) satisfying that \(x\) and \(y\) lie in some \(\delta^{\eta}\)-concentrated \((r,k)\)-plate on \(\mu_{1}+\mu_{2}\). Then there exists a set \(B\subset E_{1}\times E_{2}\) with \(\mu_{1}\times\mu_{2}(B)\leq\delta^{\gamma}\) such that for every \(x\in E_{1}\) and \(\delta\)-tube \(T\) through \(x\), we have_

\[\mu_{2}(T\setminus(A|_{x}\cup B|_{x}))\lesssim\frac{\delta^{\alpha}}{r^{\alpha-(k-1)}}\delta^{-\varepsilon}.\]

_The implicit constant may depend on \(\eta,\varepsilon,\alpha,k,C_{\mu_{1}},C_{\mu_{2}}\)._

**Remark 3.2**.: _(a) The roles of \(\mu_{1}\) and \(\mu_{2}\) in Theorem 3.1 are interchangeable, so the conclusion also holds for \(\mu_{1}\) instead of \(\mu_{2}\)._

_(b) If \(\alpha>d-1\), then the numerology of Theorem 3.1 doesn't apply. Instead, Orponen's radial projection theorem [18] in dimension \(d\) applies. The result (stated in [10, Lemma 3.6] for \(d=2\), but can be generalized to all dimensions \(d\)) is that for \(\gamma=\varepsilon/C\), there exists a set \(B\subset E_{1}\times E_{2}\) with \(\mu_{1}\times\mu_{2}(B)\leq\delta^{\gamma}\) such that for every \(x\in E_{1}\) and \(\delta\)-tube \(T\) through \(x\), we have_

\[\mu_{2}(T\setminus B|_{x})\lesssim\delta^{d-1-\varepsilon}.\]

_Note that the set \(A\) of "concentrated pairs" is not needed here._

_(c) If \(r\sim\delta\), we can obtain a slightly better result by projecting to a \(k\)-dimensional subspace and following the argument in [5, Section 3.2]. The result is that for \(\gamma=\varepsilon/C\), there exists a set \(B\subset E_{1}\times E_{2}\) with \(\mu_{1}\times\mu_{2}(B)\leq\delta^{\gamma}\) such that for every \(x\in E_{1}\) and \(\delta\)-tube \(T\) through \(x\), we have_

\[\mu_{2}(T\setminus B|_{x})\lesssim\delta^{k-1-\varepsilon}.\]

_The set \(A\) is again not needed in this case. The main novelty of Theorem 3.1 comes when \(r>\delta\)._

We will also need the following two lemmas from [22] (Lemmas 7.5 and 7.8) governing the physical location of small plates with large mass.

**Lemma 3.3**.: _Let \(k-1<s\leq k\) and \(0<r\leq 1\).
There is \(N=N(s,k,d)\) such that the following holds: let \(\nu\) be an \(s\)-dimensional measure with constant \(C_{\nu}\geq 1\), and let \(\mathcal{E}_{r,k}\) be the collection of essentially distinct \((r,k)\)-plates from the Notations part of Section 1. Let \(\mathcal{H}=\{H\in\mathcal{E}_{r,k}:\nu(H)\geq a\}\). Then \(|\mathcal{H}|\lesssim(\frac{C_{\nu}}{a})^{N}\). (The implicit constant only depends on \(k,d\) and is independent of \(a,r\).)_

**Lemma 3.4**.: _Let \(0<r<r_{0}\lesssim 1\) and \(s>k-1\). Let \(\mathcal{H}\) be a collection of \((r,k)\)-plates, and let \(\mu\) be a compactly supported \(s\)-dimensional measure with constant \(C_{\mu}\). Then for all \(x\in\operatorname{spt}\left(\mu\right)\) except a set of \(\mu\)-measure \(\lesssim C_{\mu}\left(\frac{r}{r_{0}}\right)^{s-(k-1)}|\mathcal{H}|^{2}\), there exists an \((r_{0},k)\)-plate that contains every \((r,k)\)-plate in \(\mathcal{H}\) that passes through \(x\)._

## 4. Construction of good measure and Proposition 2.1

In this section, we will construct the good measure \(\mu_{1,g}^{x}\) and prove Proposition 2.1. We will henceforth treat \(\alpha,k,d,\varepsilon\) in the hypothesis of Proposition 2.1 as fixed constants. To assist in the proof, we will eventually be choosing the following parameters: \(\varepsilon_{0}(\varepsilon)\), \(\kappa(\varepsilon_{0})\), \(\eta(\kappa,\varepsilon_{0})\), \(\beta(\kappa,\eta,\varepsilon_{0})\). In terms of size, they satisfy

\[0<\beta\ll\eta\ll\kappa\ll\varepsilon_{0}\ll\varepsilon.\]

Here, \(A\ll B\) means "\(A\) is much smaller than \(B\)." Unwrapping the dependences, we see that \(\beta\) ultimately only depends on \(\varepsilon\), which is what we want.

### Smooth Partitions of Unity

We follow the first part of [5, Section 3.1]. Let \(R_{0}\) be a large power of \(2\) that will be determined later, and let \(R_{j}=2^{j}R_{0}\). Construct a partition of unity

\[1=\sum_{j\geq 0}\psi_{j},\]

where \(\psi_{0}\) is supported in the ball \(|\omega|\leq 2R_{0}\) and each \(\psi_{j}\) for \(j\geq 1\) is supported on the annulus \(R_{j-1}\leq|\omega|\leq R_{j+1}\). Importantly, we may choose \(\psi_{j}\) such that \(\|\tilde{\psi}_{j}\|_{L^{1}}\leq C\) for some absolute constant \(C\) and all \(j\geq 1\). For example, choose \(\psi_{j}\) to be Littlewood-Paley functions \(\chi(x/R_{j})-\chi(x/R_{j-1})\), where \(\chi\) is a smooth bump function that is \(1\) on \(B(0,1)\) and \(0\) outside \(B(0,2)\).

In \(\mathbb{R}^{d}\), cover the annulus \(R_{j-1}\leq|\omega|\leq R_{j+1}\) by rectangular blocks \(\tau\) of dimensions approximately \(R_{j}^{1/2}\times\cdots\times R_{j}^{1/2}\times R_{j}\), with the long direction of each block \(\tau\) being the radial direction. Choose a smooth "partition of unity" with respect to this cover such that

\[\psi_{j}=\sum_{\tau}\psi_{j,\tau}(\omega).\]

The functions \(\psi_{j,\tau}\) satisfy the following properties:

* \(\psi_{j,\tau}\) is supported on \(\tau\) and \(\|\psi_{j,\tau}\|_{L^{\infty}}\leq 1\);
* \(\tilde{\psi}_{j,\tau}\) is essentially supported on a \(R_{j}^{-1/2}\times\cdots\times R_{j}^{-1/2}\times R_{j}^{-1}\) box \(K\) centered at \(0\), in the sense that \(|\tilde{\psi}_{j,\tau}(x)|\leq\mathrm{RapDec}(R_{j})\) if \(\mathrm{dist}(x,K)\gtrsim R_{j}^{-1/2+\beta}\) (and the implicit constant in the decay \(C_{N}R_{j}^{-N}\) is universal only depending on \(N\));
* \(\|\tilde{\psi}_{j,\tau}\|_{L^{1}}\lesssim 1\) (the implicit constant is universal).
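As a quick sanity check on the uniform \(L^{1}\) bound claimed above (a routine computation for the Littlewood-Paley choice \(\psi_{j}(\omega)=\chi(\omega/R_{j})-\chi(\omega/R_{j-1})\), \(j\geq 1\), with \(\tilde{\psi}_{j}\) denoting the inverse Fourier transform; this verification is ours and not part of the quoted argument): by scaling,

\[\tilde{\psi}_{j}(x)=R_{j}^{d}\tilde{\chi}(R_{j}x)-R_{j-1}^{d}\tilde{\chi}(R_{j-1}x)\qquad\text{and}\qquad\|R^{d}\tilde{\chi}(R\,\cdot)\|_{L^{1}}=\|\tilde{\chi}\|_{L^{1}}\ \ \text{for all }R>0,\]

so \(\|\tilde{\psi}_{j}\|_{L^{1}}\leq 2\|\tilde{\chi}\|_{L^{1}}\) uniformly in \(j\). Moreover, with \(\psi_{0}=\chi(\cdot/R_{0})\), the sum telescopes: \(\sum_{j=0}^{J}\psi_{j}(\omega)=\chi(\omega/R_{J})\to 1\) pointwise as \(J\to\infty\), and each \(\psi_{j}\), \(j\geq 1\), vanishes unless \(R_{j-1}\leq|\omega|\leq R_{j+1}\), as required.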
For each \((j,\tau)\), cover the unit ball in \(\mathbb{R}^{d}\) with tubes \(T\) of dimensions approximately \(R_{j}^{-1/2+\beta}\times\cdots\times R_{j}^{-1/2+\beta}\times 20\) with the long axis parallel to the long axis of \(\tau\). The covering has uniformly bounded overlap: each \(T\) intersects at most \(C(d)\) other tubes. We denote the collection of all these tubes as \(\mathbb{T}_{j,\tau}\). Let \(\eta_{T}\) be a smooth partition of unity subordinate to this covering, so that for each choice of \(j\) and \(\tau\), \(\sum_{T\in\mathbb{T}_{j,\tau}}\eta_{T}\) is equal to \(1\) on the ball of radius \(10\) and each \(\eta_{T}\) is smooth. For each \(T\in\mathbb{T}_{j,\tau}\), define an operator

\[M_{T}f:=\eta_{T}(\psi_{j,\tau}\hat{f})^{\vee},\]

which, morally speaking, maps \(f\) to the part of it that has Fourier support in \(\tau\) and physical support in \(T\). Define also \(M_{0}f:=(\psi_{0}\hat{f})^{\vee}\). We denote \(\mathbb{T}_{j}=\cup_{\tau}\mathbb{T}_{j,\tau}\) and \(\mathbb{T}=\cup_{j\geq 1}\mathbb{T}_{j}\). Hence, for any \(L^{1}\) function \(f\) supported on the unit ball, one has the decomposition

\[f=M_{0}f+\sum_{T\in\mathbb{T}}M_{T}f+\operatorname{RapDec}(R_{0})\|f\|_{L^{1}}.\]

See [10, Lemma 3.4] for a justification of the above decomposition. (Even though [10, Lemma 3.4] is stated in two dimensions, the argument obviously extends to higher dimensions.)

### Heavy Plates and Good Tubes

In this subsection, we define good tubes. Actually, we will use three categories: good, acceptable, and non-acceptable. The idea is that Theorem 3.1 tells us that \(R_{j}^{-1/2+\beta}\)-tubes fall into one of three categories:

* For \(R_{j}^{-\eta}\)-concentrated \((R_{j}^{-\kappa},k)\)-plates, tubes in them can have large \(\mu_{2}\)-mass. Call a tube inside one of these plates non-acceptable.
* Many of the acceptable tubes \(T\) are good, i.e. \(\mu_{2}(4T)\lesssim R_{j}^{-\alpha/2+\varepsilon_{0}}\).
* By Theorem 3.1, there are not many tubes that are neither non-acceptable nor good.

The idea is that to form our good measure \(\mu_{1,g}^{x}\), we keep contributions only from good tubes. By the third bullet, we are allowed to remove tubes that are neither non-acceptable nor good. To remove the non-acceptable tubes, we will instead remove the heavy plates. Next, we formalize this idea.

Let \(k\) be the integer such that \(k-1<\alpha\leq k\). Let \(\mathcal{E}_{r,k}\) be the cover of \(B(0,1)\) with \((r,k)\)-plates constructed in Section 3; every \((r/2,k)\)-plate is contained within some element of \(\mathcal{E}_{r,k}\). Let \(\mathcal{H}_{j}\) be the set of \((R_{j}^{-\kappa},k)\)-plates in \(\mathcal{E}_{R_{j}^{-\kappa},k}\) that are \(R_{j}^{-\eta}\)-concentrated on \(\mu_{1}+\mu_{2}\); then Lemma 3.3 tells us \(|\mathcal{H}_{j}|\lesssim R_{j}^{N\eta}\). Let \(\mathcal{H}_{\leq j}=\cup_{i=1}^{j}\mathcal{H}_{i}\) and \(\mathcal{H}=\cup_{i=1}^{\infty}\mathcal{H}_{i}\). Note that \(|\mathcal{H}_{\leq j}|\lesssim R_{j}^{N\eta}\). Let \(C_{\mathrm{sep}}\geq 1\) be a constant such that \(\operatorname{dist}(E_{1},E_{2})\geq C_{\mathrm{sep}}^{-1}\gtrsim 1\). We will eventually choose a "master parameter" \(a\in[99C_{\mathrm{sep}},100C_{\mathrm{sep}}]\). For \(H\in\mathcal{H}\), we will use \(aH\) as proxies for \(H\) when defining acceptable tubes and the good measure. We briefly attempt to motivate the construction. The role of \(C_{\mathrm{sep}}\) is to make sure that tubes intersecting \(H\cap E_{1}\) and \(H\cap E_{2}\) actually lie inside \(aH\).
The role of \(a\) is to introduce a probabilistic wiggle to make a key technical condition hold (see the control of \(\operatorname{Bad}_{j}^{2}\) in Lemma 4.8). Now, we fix a choice of \(a\) and define the following. We say an \(R_{j}^{-1/2+\beta}\)-tube \(T\in\mathbb{T}_{j}\) is _non-acceptable_ if there exists some \(H\in\mathcal{H}_{\leq\max(j,j_{*})}\) such that \(2T\) is contained in \(aH\), where \(j_{*}=\log_{2}R_{0}\) (the motivation for introducing \(j_{*}\) will be explained in the proof of Lemma 4.2). Otherwise, we say it is _acceptable_. Define a _good_ \(R_{j}^{-1/2+\beta}\)-tube \(T\) to be an acceptable tube with \(\mu_{2}(4T)\leq R_{j}^{-\alpha/2+\varepsilon_{0}}\). And define the good part of \(\mu_{1}\) with respect to \(\mu_{2}\) and \(x\in E_{2}\) by

\[\mu_{1,g}^{x}:=M_{0}(\mu_{1}|_{G_{0}(x)})+\sum_{T\in\mathbb{T},\,T\;\text{good}}M_{T}\mu_{1}.\]

The only dependence on \(x\) comes from the term \(M_{0}(\mu_{1}|_{G_{0}(x)})\), which is crucial when we try to prove Proposition 2.2 later. We will define \(G_{0}(x)\) in the next subsection; roughly speaking, \(G_{0}(x)\) is obtained by removing heavy plates through \(x\) at several scales.

As constructed, we may not get a good bound of the form \(\|d_{*}^{x}(\mu_{1,g}^{x})-d_{*}^{x}(\mu_{1})\|\leq R_{0}^{-\varepsilon}\). This is because \(\mu_{1,g}^{x}\) doesn't include contributions from non-acceptable tubes while \(\mu_{1}\) does. Instead, we need to work with a measure \(\mu_{1}^{x}\) depending on \(x\) that removes the contributions of the non-acceptable tubes through \(x\). In fact, since non-acceptable tubes are contained in heavy plates, we should define \(\mu_{1}^{x}\) by removing these heavy plates "at different scales \(R_{j}\)" and make sure that summing over different scales still leads to good behavior (see Lemma 4.3). We make things rigorous in the next subsection.

### Construction of \(G(x)\) and \(\mu_{1}^{x}\)

Recall that \(\operatorname{dist}(E_{1},E_{2})\geq C_{\text{sep}}^{-1}\gtrsim 1\). Thus, for \(x\in E_{2}\), we have \(E_{1}\subset B(x,C_{\text{sep}}^{-1})^{c}\), a fact that underlies the rest of the paper. Recall that \(a\in[99C_{\text{sep}},100C_{\text{sep}}]\) is the "master parameter" to be chosen later. For \(x\in E_{2}\) and \(H\in\mathcal{H}_{j}(x)\), let \(F(x,aH)\) be given by

\[F(x,aH):=\left\{y\in B(x,C_{\text{sep}}^{-1})^{c}\cap B(0,10):l(x,y)\cap\operatorname{Surf}(aH)=\emptyset\right\},\]

where \(l(x,y)\) is the line through \(x\) and \(y\). It is true that \(F(x,aH)\subset aH\); this will be proved in Lemma 4.1. Define \(j_{*}:=\log_{2}R_{0}\) such that \(R_{j_{*}}=R_{0}^{2}\). For \(j\geq 0\), let

\[G_{j}(x)=\left[B(x,C_{\text{sep}}^{-1})^{c}\cap B(0,10)\right]\setminus\cup_{H\in\mathcal{H}_{\leq\max(j,j_{*})}(x)}F(x,aH)\,. \tag{4.1}\]

Finally, we define \(G(x)=G_{0}(x)\cup B(x,C_{\text{sep}}^{-1})\) and

\[\mu_{1}^{x}:=\sum_{j\geq 0}\mu_{1}|_{G_{j}(x)}*\tilde{\psi}_{j}. \tag{4.2}\]

It will be proved that \(G(x)\) satisfies the condition of Proposition 2.1 in the next subsection, and this is the only reason why we include \(B(x,C_{\rm sep}^{-1})\) in \(G(x)\). We now list some good properties of these definitions that will be critical later. First, the construction of \(G_{j}(x)\) ensures that \(G_{j}(x)\supset G_{j+1}(x)\) and

\[G_{j}(x)\setminus G_{j+1}(x)\subset\cup_{H\in\mathcal{H}_{j+1}(x)}F(x,aH)\,,\]

with \(G_{j}(x)=G_{j+1}(x)\) for \(j<j_{*}\).
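For completeness, here is a one-line check of the nesting just claimed (our elaboration, read off directly from (4.1)): since \(\max(j,j_{*})\leq\max(j+1,j_{*})\), we have \(\mathcal{H}_{\leq\max(j,j_{*})}(x)\subset\mathcal{H}_{\leq\max(j+1,j_{*})}(x)\), so the union removed in (4.1) can only grow with \(j\), giving \(G_{j}(x)\supset G_{j+1}(x)\); for \(j<j_{*}\) the two index sets coincide, so \(G_{j}(x)=G_{j+1}(x)\); and for \(j\geq j_{*}\) they differ only by plates in \(\mathcal{H}_{j+1}(x)\), which yields the displayed containment.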
Next, \(G_{j}(x)\) keeps the contributions from the acceptable tubes through \(x\) while discarding the non-acceptable tubes through \(x\).

**Lemma 4.1**.: _Let \(T\in\mathbb{T}_{j}\) and \(x\in 2T\cap E_{2}\)._

_(a) If \(2T\cap B(x,C_{sep}^{-1})^{c}\subset G_{j}(x)\), then \(T\) is acceptable;_

_(b) If \(2T\cap B(x,C_{sep}^{-1})^{c}\subset F(x,aH)\) for some \(H\in\mathcal{H}_{\leq\max(j,j_{*})}(x)\), then \(T\) is non-acceptable._

Proof.: Let \(T\in\mathbb{T}_{j}\) and \(x\in 2T\).

(a) If \(2T\cap B(x,C_{\rm sep}^{-1})^{c}\subset G_{j}(x)\), we show \(T\) is acceptable, i.e. \(2T\not\subset aH\) for any \(H\in\mathcal{H}_{\leq\max(j,j_{*})}\). Fix \(H\in\mathcal{H}_{\leq\max(j,j_{*})}\). If \(x\notin aH\), then \(2T\not\subset aH\). If \(x\in aH\), then \(H\in\mathcal{H}_{\leq\max(j,j_{*})}(x)\). Choose \(y\in 2T\cap B(x,C_{\rm sep}^{-1})^{c}\) such that the line \(l(x,y)\) is parallel to the central line of \(T\). By assumption, we know that \(y\in G_{j}(x)\), so \(l(x,y)\) intersects \(\mathrm{Surf}(aH)\). In particular, \(2T\) intersects \(\mathrm{Surf}(aH)\), and thus \(2T\not\subset aH\).

(b) If \(2T\cap B(x,C_{\rm sep}^{-1})^{c}\subset F(x,aH)\) for some \(H\in\mathcal{H}_{\ell}(x)\) with \(\ell\leq\max(j,j_{*})\), we show \(T\) is non-acceptable. More precisely, we can show that \(2T\) is contained in \(aH\). As promised before, we will prove \(F(x,aH)\subset aH\). Take \(y\in F(x,aH)\), and suppose \(y\notin aH\). Then \(d(y,P_{H})\geq aR_{\ell}^{-\kappa}\) and \(d(x,P_{H})<aR_{\ell}^{-\kappa}\). By continuity, we can find \(z\) on the line segment between \(x\) and \(y\) such that \(d(z,P_{H})=aR_{\ell}^{-\kappa}\). By convexity of \(B(0,10)\), we have \(z\in B(0,10)\). Thus, we have \(z\in\mathrm{Surf}(aH)\), which contradicts \(y\in F(x,aH)\), and so in fact \(y\in aH\). This proves \(F(x,aH)\subset aH\). Now observe the following geometric fact: \(2T\) is a tube through \(x\in B(0,1)\) whose ends lie on \(B(0,10)\), and so \(2T\cap B(x,C_{\rm sep}^{-1})^{c}\) contains both ends of \(2T\). Also, by our initial assumption, we get

\[2T\cap B(x,C_{\rm sep}^{-1})^{c}\subset F(x,aH)\subset aH.\]

Therefore, since \(aH\) is convex, we get \(2T\subset aH\). 

To estimate \(\|d_{*}^{x}(\mu_{1}|_{G(x)})-d_{*}^{x}(\mu_{1,g}^{x})\|_{L^{1}}\), it suffices to estimate \(\|\mu_{1}|_{G(x)}-\mu_{1}^{x}\|_{L^{1}}\) and \(\|d_{*}^{x}(\mu_{1}^{x})-d_{*}^{x}(\mu_{1,g}^{x})\|_{L^{1}}\). This will be the content of the next two subsections.

### Pruning \(E_{2}\) and estimating \(\|\mu_{1}|_{G(x)}-\mu_{1}^{x}\|_{L^{1}}\)

If all the \(G_{j}(x)\)'s were the same, then \(\mu_{1}|_{G(x)}=\mu_{1}|_{G_{0}(x)}=\mu_{1}^{x}\). This isn't quite true, but in general, we will still get a good bound on \(\|\mu_{1}|_{G(x)}-\mu_{1}^{x}\|_{L^{1}}\) if \(\mu_{1}(G_{j}(x)\setminus G_{j+1}(x))\) is small for all \(j\). This weaker assumption is in fact true for "most" \(x\in E_{2}\). To see this, we use a variant of Proposition 7.3 in [22] (see also Proposition B.1 of [25]).

**Lemma 4.2**.: _For every \(\kappa>0\), there exists \(\eta_{0}(\alpha,k,\kappa)>0\) such that for all \(\eta\in(0,\eta_{0}]\) and \(R_{0}\) sufficiently large in terms of \(\alpha,k,\kappa,C_{sep},\eta\), the following holds: there exists \(E_{2}^{\prime\prime}\subset E_{2}\) with \(\mu_{2}(E_{2}\setminus E_{2}^{\prime\prime})\leq R_{0}^{-\eta}\) such that the following assertions hold for \(x\in E_{2}^{\prime\prime}\)._
(a) _The \(100C_{sep}\)-scalings of elements in \(\mathcal{H}_{j}(x)\) are contained within some \((R_{j}^{-\kappa/2}/3,k)\)-plate through \(x\);_

(b) _The \(100C_{sep}\)-scalings of elements in \(\mathcal{H}_{\leq j_{*}}(x)\) are contained within some \((R_{0}^{-\kappa/2},k)\)-plate through \(x\);_

(c) _\(\mu_{1}(G_{j}(x)\setminus G_{j+1}(x))\lesssim R_{j}^{-\eta/2}\), for any \(j\geq 0\)._

Proof.: First pick \(j\geq 0\). By Lemma 3.4 and \(300C_{\mathrm{sep}}<R_{0}^{\kappa/2}\) for sufficiently large \(R_{0}\), we can find sets \(F_{j}\) with \(\mu_{2}(F_{j})\lesssim R_{j}^{-\kappa(\alpha-k+1)/2}|\mathcal{H}_{j}|^{2}\) such that for \(x\in E_{2}\setminus F_{j}\), the \(100C_{\mathrm{sep}}\)-scalings of elements in \(\mathcal{H}_{j}(x)\) are contained within some \((R_{j}^{-\kappa/2}/3,k)\)-plate through \(x\). Thus for \(x\in E_{2}\setminus F_{j}\), assertion (a) is true. By the same lemma, we can find a set \(F_{0}\) with

\[\mu_{2}(F_{0})\lesssim R_{0}^{-\kappa(\alpha-k+1)/2}|\mathcal{H}_{\leq j_{*}}|^{2}\]

such that for \(x\in E_{2}\setminus F_{0}\), the \(100C_{\mathrm{sep}}\)-scalings of elements in \(\mathcal{H}_{\leq j_{*}}(x)\) are contained within some \((R_{0}^{-\kappa/2},k)\)-plate through \(x\). (Some of the plates in \(\mathcal{H}_{\leq j_{*}}\) are too small, but we simply thicken them to \((R_{0}^{-\kappa},k)\)-plates.) Thus for \(x\in E_{2}\setminus F_{0}\), assertion (b) is true. Let \(E_{2}^{\prime\prime}=E_{2}\setminus(F_{0}\cup\bigcup_{j=j_{*}}^{\infty}F_{j})\); then (using \(R_{j_{*}}=R_{0}^{2}\) and Lemma 3.3):

\[\mu_{2}(E_{2}\setminus E_{2}^{\prime\prime})\lesssim R_{0}^{-\kappa(\alpha-k+1)/2}|\mathcal{H}_{\leq j_{*}}|^{2}+\sum_{j\geq j_{*}}R_{j}^{-\kappa(\alpha-k+1)/2}|\mathcal{H}_{j}|^{2}\]
\[\lesssim R_{0}^{-\kappa(\alpha-k+1)/2+4N\eta}+\sum_{j\geq j_{*}}R_{j}^{-\kappa(\alpha-k+1)/2+2N\eta}\lesssim\sum_{j\geq 0}R_{j}^{-2\eta}\lesssim_{\eta}R_{0}^{-2\eta},\]

if \(\eta\leq\eta_{0}\) is chosen sufficiently small in terms of \(\kappa\). Hence, if \(R_{0}\) is chosen sufficiently large in terms of \(\eta\), then \(R_{0}^{\eta}\) dominates the implicit constant and we get \(\mu_{2}(E_{2}\setminus E_{2}^{\prime\prime})\leq R_{0}^{-\eta}\). Thus, \(E_{2}^{\prime\prime}\subset E_{2}\) satisfies the desired bound and assertions (a) and (b) are proved.

Let \(x\in E_{2}^{\prime\prime}\). For assertion (c), we observe that if \(R_{j}<R_{0}^{2}\), then \(G_{j}(x)=G_{j+1}(x)\). (This is the reason why the parameter \(j_{*}\) was introduced.) Thus, assume \(R_{j}\geq R_{0}^{2}\). Then

\[G_{j}(x)\setminus G_{j+1}(x)\subset\bigcup_{H\in\mathcal{H}_{j+1}(x)}F(x,aH)\,,\]

which is contained within some \((R_{j+1}^{-\kappa/2}/3,k)\)-plate \(V\) through \(x\) using assertion (a). Let \(m\) be such that \(R_{j+1}\in[R_{m}^{2}/2,2R_{m}^{2}]\). Since \(R_{j+1}^{-\kappa/2}/3\leq R_{m}^{-\kappa}/2\), we know that \(V\) is contained in some \((R_{m}^{-\kappa},k)\)-plate \(H\in\mathcal{E}_{R_{m}^{-\kappa},k}\). Note that, since \(a\geq 99C_{\rm sep}\), we have \(H\cap B(x,C_{\rm sep}^{-1})^{c}\subset F(x,aH)\). Therefore, if \(H\in\mathcal{H}_{m}\), then \(V\cap G_{m}(x)=\emptyset\). Otherwise, we have \(\mu_{1}(V)\leq R_{m}^{-\eta}\) by definition of \(\mathcal{H}_{m}\). In either case, we have

\[\mu_{1}(G_{j}(x)\setminus G_{j+1}(x))\leq\mu_{1}(V\cap G_{m}(x))\lesssim R_{m}^{-\eta}\lesssim R_{j}^{-\eta/2}\,,\]

as desired. 

Lemma 4.2 has two ramifications. First, Lemma 4.2(b) tells us that \(B(0,10)\setminus G(x)\) is contained within some \((R_{0}^{-\kappa/2},k)\)-plate.
Second, Lemma 4.2(c) tells us that \(\mu_{1}(G_{j}(x)\setminus G_{j+1}(x))\) is small for all \(x\in E_{2}^{\prime\prime}\), so we can estimate \(\|\mu_{1}^{x}-\mu_{1}|_{G(x)}\|_{L^{1}}\). We record these two observations and provide detailed proofs in the following lemma.

**Lemma 4.3**.: _Let \(E_{2}^{\prime\prime}\) be the subset given in Lemma 4.2 and \(x\in E_{2}^{\prime\prime}\). Then \(B(0,10)\setminus G(x)\) is contained within some \((R_{0}^{-\kappa/2},k)\)-plate and_

\[\|\mu_{1}|_{G(x)}-\mu_{1}^{x}\|_{L^{1}}\lesssim R_{0}^{-\eta/2}.\]

Proof.: For the first assertion, we use the definition of \(G(x),G_{0}(x)\) in (4.1) and the fact \(F(x,aH)\subset aH\) proved in Lemma 4.1 to write

\[B(0,10)\setminus G(x)=\big{[}B(x,C_{\rm sep}^{-1})^{c}\cap B(0,10)\big{]}\setminus G_{0}(x)\subset\bigcup_{H\in\mathcal{H}_{\leq j_{*}}(x)}aH.\]

By Lemma 4.2(b), the rightmost expression is contained within some \((R_{0}^{-\kappa/2},k)\)-plate.

For the second assertion, let \(G_{\infty}(x)=\bigcap_{j=0}^{\infty}G_{j}(x)\). First, by Lemma 4.2(c), we establish that for all \(j\geq 0\),

\[\mu_{1}(G_{j}(x)\setminus G_{\infty}(x))=\sum_{i=j}^{\infty}\mu_{1}(G_{i}(x)\setminus G_{i+1}(x))\lesssim\sum_{i=j}^{\infty}R_{i}^{-\eta/2}\lesssim R_{j}^{-\eta/2}. \tag{4.3}\]

Also, \(\mu_{1}\) is supported on \(B(x,C_{\rm sep}^{-1})^{c}\), so \(\mu_{1}|_{G(x)}\) and \(\mu_{1}|_{G_{0}(x)}\) are the same measure. Thus, by (4.3), we get \(\|\mu_{1}|_{G(x)}-\mu_{1}|_{G_{\infty}(x)}\|_{L^{1}}\lesssim R_{0}^{-\eta/2}\), so it suffices to show that \(\|\mu_{1}|_{G_{\infty}(x)}-\mu_{1}^{x}\|_{L^{1}}\lesssim R_{0}^{-\eta/2}\). Indeed, using \(\sum_{j\geq 0}\psi_{j}=1\) and the definition (4.2) of \(\mu_{1}^{x}\), we have

\[\mu_{1}^{x}-\mu_{1}|_{G_{\infty}(x)}=\sum_{j\geq 0}(\mu_{1}|_{G_{j}(x)}-\mu_{1}|_{G_{\infty}(x)})*\tilde{\psi}_{j}=\sum_{j\geq 0}\mu_{1}|_{G_{j}(x)\setminus G_{\infty}(x)}*\tilde{\psi}_{j},\]

and so by \(\|\tilde{\psi}_{j}\|_{L^{1}}\lesssim 1\), Young's convolution inequality, and (4.3), we have

\[\|\mu_{1}^{x}-\mu_{1}|_{G_{\infty}(x)}\|_{L^{1}}\lesssim\sum_{j\geq 0}\mu_{1}(G_{j}(x)\setminus G_{\infty}(x))\lesssim\sum_{j\geq 0}R_{j}^{-\eta/2}\lesssim R_{0}^{-\eta/2}.\]

**Remark 4.4**.: _We can make the last step more efficient using Abel summation. Let \(\tilde{P}_{k}=\sum_{j=0}^{k}\tilde{\psi}_{j}\). Note that if we chose \(\psi_{j}\) to be Littlewood-Paley functions \(\chi(x/R_{j})-\chi(x/R_{j-1})\) as in Section 4.1, then \(\|\tilde{P}_{k}\|_{L^{1}}\leq C\) for some universal constant \(C\). Now, Abel summation gives_

\[\mu_{1}^{x}-\mu_{1}|_{G_{\infty}(x)}=\sum_{j=0}^{\infty}\mu_{1}|_{G_{j}(x)\setminus G_{j+1}(x)}*\tilde{P}_{j},\]

_and thus we get a slightly sharper bound \(\|\mu_{1}^{x}-\mu_{1}|_{G_{\infty}(x)}\|_{L^{1}}\lesssim\mu_{1}(G_{0}(x)\setminus G_{\infty}(x))\). This can be useful in some potential applications where \(\mu_{1}(G_{0}(x)\setminus G_{\infty}(x))\) is controlled but not \(\mu_{1}(G_{j}(x)\setminus G_{\infty}(x))\) as \(j\to\infty\)._

### Tube Geometry and bound of \(\|d_{*}^{x}(\mu_{1}^{x})-d_{*}^{x}(\mu_{1,g}^{x})\|_{L^{1}}\)

The goal of this subsection is to establish a bound on \(\|d_{*}^{x}(\mu_{1}^{x})-d_{*}^{x}(\mu_{1,g}^{x})\|_{L^{1}}\) in terms of the geometry of the tubes (Lemma 4.7). Showing such a bound will allow us to prove Proposition 2.1, thanks to Lemma 4.3. Recall that

\[\mu_{1}^{x}=\sum_{j\geq 0}\mu_{1}|_{G_{j}(x)}*\tilde{\psi}_{j}.\]

We first give a good approximation to \(\mu_{1}^{x}\).
The following lemma can be proved the same way as Lemma 3.4 in [10] and we omit the details. It shows that problems about the mysterious \(\mu_{1}^{x}\) actually reduce to problems about restricted measures of \(\mu_{1}\). We remark that the next few lemmas will be proved for all \(x\in E_{2}\), even though we only need these results for \(x\in E_{2}^{\prime\prime}\) to prove Proposition 2.1. This is a minor difference: restricting to \(x\in E_{2}^{\prime\prime}\) doesn't make the lemmas easier to prove.

**Lemma 4.5**.: _Let \(x\in E_{2}\). Then_

\[\|\mu_{1}^{x}-M_{0}(\mu_{1}|_{G_{0}(x)})-\sum_{j=1}^{\infty}\sum_{T\in\mathbb{T}_{j}}M_{T}(\mu_{1}|_{G_{j}(x)})\|_{L^{1}}\leq\mathrm{RapDec}(R_{0}).\]

We make the following observations relating the geometry of tubes to analytic estimates.

**Lemma 4.6**.: _Let \(T\in\mathbb{T}_{j}\) be an \(R_{j}^{-1/2+\beta}\)-tube and \(x\in E_{2}\)._

(a) _If \(2T\) doesn't pass through \(x\), then \(\|d_{*}^{x}(M_{T}\mu_{1})\|_{L^{1}}\leq\mathrm{RapDec}(R_{j})\) and \(\|d_{*}^{x}(M_{T}(\mu_{1}|_{G_{j}(x)}))\|_{L^{1}}\leq\mathrm{RapDec}(R_{j})\)._

(b) _If \(2T\cap B(x,C_{sep}^{-1})^{c}\subset G_{j}(x)\), then \(\|M_{T}\mu_{1}-M_{T}(\mu_{1}|_{G_{j}(x)})\|_{L^{1}}\leq\mathrm{RapDec}(R_{j})\)._

(c) _If \(2T\cap B(x,C_{sep}^{-1})^{c}\subset F(x,aH)\) for some \(H\in\mathcal{H}_{\leq\max(j,j_{*})}(x)\), then \(\|M_{T}(\mu_{1}|_{G_{j}(x)})\|_{L^{1}}\leq\mathrm{RapDec}(R_{j})\)._

(d) _\(\|M_{T}(\mu_{1}|_{G_{j}(x)})\|_{L^{1}}\) and \(\|M_{T}\mu_{1}\|_{L^{1}}\) are both \(\lesssim\mu_{1}(2T)+\mathrm{RapDec}(R_{j})\)._

Proof.: (a) This is Lemma 3.1 of [10] applied to \(\mu_{1}\) and \(\mu_{1}|_{G_{j}(x)}\); note that \(\|\mu_{1}|_{G_{j}(x)}\|_{L^{1}}\leq\|\mu_{1}\|_{L^{1}}\leq 1\).

(b) By assumption, \(\mu_{1}-\mu_{1}|_{G_{j}(x)}\) is supported outside \(2T\cap B(x,C_{\mathrm{sep}}^{-1})^{c}\supset 2T\cap E_{1}\), so we can apply Lemma 3.2 of [10].

(c) By assumption, \(\mu_{1}|_{G_{j}(x)}\) is supported outside \(2T\cap E_{1}\), so we can apply Lemma 3.2 of [10].

(d) This is a direct consequence of Lemma 3.2 of [10]. 

Using Lemma 4.6, we are able to compare \(\mu_{1}^{x}\) with \(\mu_{1,g}^{x}\). We first need a definition. For \(x\in E_{2}\), \(j\geq 0\), define \(\mathrm{Bad}_{j}(x)\) to be the union of \(2T\), where \(T\in\mathbb{T}_{j}\) is an \(R_{j}^{-1/2+\beta}\)-tube such that \(2T\) passes through \(x\) and either (1) \(2T\cap B(x,C_{\mathrm{sep}}^{-1})^{c}\) is not contained in \(G_{j}(x)\) or any \(F(x,aH)\) for \(H\in\mathcal{H}_{\leq\max(j,j_{*})}(x)\); or (2) \(2T\cap B(x,C_{\mathrm{sep}}^{-1})^{c}\subset G_{j}(x)\) and \(\mu_{2}(4T)>R_{j}^{-\alpha/2+\varepsilon_{0}}\). By Lemma 4.1, (2) is morally the union of the acceptable but not good tubes through \(x\), while (1) is the union of the tubes through \(x\) that are "borderline" between acceptable and non-acceptable. One may compare the following lemma with Lemma 3.1 of [5].

**Lemma 4.7**.: _Let \(x\in E_{2}\) and \(\mathrm{Bad}_{j}(x)\) be defined as above, \(\forall j\geq 0\). Then,_

\[\|d_{*}^{x}(\mu_{1}^{x})-d_{*}^{x}(\mu_{1,g}^{x})\|_{L^{1}}\lesssim\sum_{j\geq 1}R_{j}^{100\beta d}\mu_{1}(\mathrm{Bad}_{j}(x))+\mathrm{RapDec}(R_{0}).\]

Proof.: We apply Lemma 4.5 and the definition of \(\mu_{1,g}^{x}\). Note that the \(M_{0}(\mu_{1}|_{G_{0}(x)})\) terms cancel in the resulting expression for \(\mu_{1}^{x}-\mu_{1,g}^{x}\) (this is the reason why \(\mu_{1,g}^{x}\) needs to depend on \(x\)). Let \(g(T)=1\) if \(T\) is good and \(0\) otherwise.
Thus, it suffices to prove that, for each \(j\geq 1\), \[\|\sum_{T\in\mathbb{T}_{j}}[d_{*}^{x}(M_{T}(\mu_{1}|_{G_{j}(x)}))-d _{*}^{x}(M_{T}\mu_{1})g(T)]\|_{L^{1}} \tag{4.4}\] \[\lesssim R_{j}^{100\beta d}\mu_{1}(\operatorname{Bad}_{j}(x))+\operatorname {RapDec}(R_{j}).\] Let \(\mathbb{T}_{j,\operatorname{bad}}\) be the set of tubes \(T\in\mathbb{T}_{j}\) such that \(2T\) passes through \(x\) and either condition (1) or (2) in the definition of \(\operatorname{Bad}_{j}(x)\) holds. We claim that if \(T\notin\mathbb{T}_{j,\operatorname{bad}}\), then \[\|d_{*}^{x}(M_{T}(\mu_{1}|_{G_{j}(x)}))-d_{*}^{x}(M_{T}\mu_{1})g(T)\|_{L^{1}} \leq\operatorname{RapDec}(R_{j}), \tag{4.5}\] while if \(T\in\mathbb{T}_{j,\operatorname{bad}}\), then \[\|d_{*}^{x}(M_{T}(\mu_{1}|_{G_{j}(x)}))-d_{*}^{x}(M_{T}\mu_{1})g(T)\|_{L^{1}} \lesssim\mu_{1}(2T)+\operatorname{RapDec}(R_{j}). \tag{4.6}\] We show how the claim implies (4.4). For any \(y\in E_{1}\), \(d(x,y)\gtrsim 1\) and so there are \(\lesssim R_{j}^{100\beta d}\) many \(R_{j}^{-1/2+\beta}\)-tubes in \(\mathbb{T}_{j}\) passing through both \(x\) and \(y\). Thus, \[\sum_{T\in\mathbb{T}_{j,\operatorname{bad}}}\mu_{1}(2T)\lesssim R_{j}^{100 \beta d}\mu_{1}(\operatorname{Bad}_{j}(x)). \tag{4.7}\] Combining (4.5), (4.6), (4.7) proves (4.4). Now we prove the claim. Suppose \(T\notin\mathbb{T}_{j,\operatorname{bad}}\). By working through the definition of \(\mathbb{T}_{j,\operatorname{bad}}\), there are three possibilities for \(T\): (i) \(x\notin 2T\); (ii) \(x\in 2T\), \(2T\cap B(x,C_{\operatorname{sep}}^{-1})^{c}\subset G_{j}(x)\) and \(\mu_{2}(4T)\leq R_{j}^{-\alpha/2+\varepsilon_{0}}\). Then \(T\) is acceptable by Lemma 4.1(a), so it is good since \(\mu_{2}(4T)\leq R_{j}^{-\alpha/2+\varepsilon_{0}}\). (iii) \(x\in 2T\), \(2T\cap B(x,C_{\operatorname{sep}}^{-1})^{c}\subset F(x,aH)\) for some \(H\in\mathcal{H}_{\leq\max(j,j_{*})}(x)\). Then \(T\) is non-acceptable by Lemma 4.1(b). In case (i), we get (4.5) by Lemma 4.6(a), regardless of whether \(g(T)=0\) or \(1\). In case (ii), we use Lemma 4.6(b) and \(g(T)=1\), and in case (iii), we use Lemma 4.6(c) and \(g(T)=0\). Thus, if \(T\notin\mathbb{T}_{j,\operatorname{bad}}\), then (4.5) holds. Now suppose \(T\in\mathbb{T}_{j,\operatorname{bad}}\). Then we get (4.6) by applying Lemma 4.6(d), regardless of whether \(g(T)=0\) or \(1\). This proves the claim. The crucial estimate about \(\operatorname{Bad}_{j}(x)\) is the following lemma, which will be proved in Section 4.6. **Lemma 4.8**.: _For every \(\varepsilon_{0}>0\), there exist \(\eta_{0}(\varepsilon_{0}),\kappa_{0}(\varepsilon_{0})>0\) such that for any \(\eta\in(0,\eta_{0}],\kappa\in(0,\kappa_{0}]\) and sufficiently small \(\beta\) depending on \(\varepsilon_{0},\eta,\kappa\), the following holds. In the construction of \(\mu_{1,g}^{x}\) in Section 4.2, _we can choose some \(a\in[99C_{sep},100C_{sep}]\) such that for any \(j\geq 1\), if we define_ \[\operatorname{Bad}_{j}:=\left\{(x,y)\in E_{2}\times E_{1}:y\in\operatorname{Bad} _{j}(x)\right\},\] _then \(\mu_{2}\times\mu_{1}(\operatorname{Bad}_{j})\lesssim R_{j}^{-200\beta d}\)._ Using this, we complete the proof of Proposition 2.1. Proof of Proposition 2.1.: Our goal is to find a large subset \(E_{2}^{\prime}\subset E_{2}\) with \(\mu_{2}(E_{2}^{\prime})\geq 1-R_{0}^{-\beta}\) such that for each \(x\in E_{2}^{\prime}\), we have that \(B^{d}(0,10)\setminus G(x)\) is contained within some \((R_{0}^{-\beta},k)\)-plate and \[\|d_{*}^{x}(\mu_{1}|_{G(x)})-d_{*}^{x}(\mu_{1,g}^{x})\|_{L^{1}}\leq R_{0}^{- \beta}. 
\tag{4.8}\] First, let us determine the auxiliary parameters \(\varepsilon_{0},\kappa,\eta,\beta\). (Recall that \(\varepsilon,\alpha,k,d\) are fixed constants.) We defer the choice of \(\varepsilon_{0}(\varepsilon)\) to the next section; see Lemma 5.1. Then, choose \(\kappa=\kappa_{0}(\varepsilon_{0})\) in Lemma 4.8. Next, choose \(\eta=\eta(\kappa,\varepsilon_{0})\) to be the smaller of the two \(\eta_{0}\)'s in Lemma 4.2 and Lemma 4.8. Finally, choose \(\beta\) to be the smaller of the \(\beta(\varepsilon_{0},\eta,\kappa)\) in Lemma 4.8 and \(\min(\frac{\eta}{3},\frac{\kappa}{2})\). Now, we shall construct \(E_{2}^{\prime}\) by taking the set \(E_{2}^{\prime\prime}\) from Lemma 4.2 and removing some "bad parts" given by Lemma 4.8. Fix a choice of \(a\) in the construction of \(\mu_{1,g}^{x}\) in Section 4.2 such that the conclusion in Lemma 4.8 holds for any \(j\geq 1\). By Lemma 4.8, for each \(j\geq 1\), we can find a set \(F_{j}\subset E_{2}\) with \(\mu_{2}(F_{j})\leq R_{j}^{-50\beta d}\) such that \(\mu_{1}(\operatorname{Bad}_{j}(x))\lesssim R_{j}^{-150\beta d}\) for \(x\in E_{2}\setminus F_{j}\). Finally, define \(E_{2}^{\prime}:=E_{2}^{\prime\prime}\setminus\bigcup_{j\geq 1}F_{j}\). We now verify that \(E_{2}^{\prime}\) satisfies the desired conditions. First, observe that \(\mu_{2}(E_{2}^{\prime})\geq\mu_{2}(E_{2})-\mu_{2}(E_{2}\setminus E_{2}^{\prime\prime} )-\sum_{j\geq 1}R_{j}^{-50\beta d}>1-R_{0}^{-\beta}\) if \(R_{0}\) is sufficiently large. Next, fix \(x\in E_{2}^{\prime}\). Since \(x\in E_{2}^{\prime\prime}\) and \(\beta<\frac{\kappa}{2}\), we get from the first part of Lemma 4.3 that \(B(0,10)\setminus G(x)\) is contained in some \((R_{0}^{-\beta},k)\)-plate. Now by the second part of Lemma 4.3, since \(\beta\leq\frac{\eta}{3}\), we have that \[\|d_{*}^{x}(\mu_{1}|_{G(x)})-d_{*}^{x}(\mu_{1}^{x})\|_{L^{1}}<\frac{1}{2}R_{0}^ {-\beta}. \tag{4.9}\] For each \(x\in E_{2}^{\prime}\), Lemma 4.7 tells us (for some constant \(C\) depending only on our parameters): \[\|d_{*}^{x}(\mu_{1}^{x})-d_{*}^{x}(\mu_{1,g}^{x})\|_{L^{1}}\leq C\cdot\sum_{j \geq 1}R_{j}^{-50\beta d}+\operatorname{RapDec}(R_{0})<\frac{1}{2}R_{0}^{-\beta} \tag{4.10}\] if \(R_{0}\) is sufficiently large. Combining (4.9) and (4.10) via the triangle inequality proves the desired equation (4.8). ### Control of bad part The goal of this subsection is to prove Lemma 4.8. To do so, we will use the new radial projection estimate, Theorem 3.1. Proof of Lemma 4.8.: Let \(\operatorname{Bad}_{j}=\{(x,y)\in E_{2}\times E_{1}:y\in\operatorname{Bad}_{j}(x)\}\). By definition of \(\operatorname{Bad}_{j}(x)\) above Lemma 4.7, we have \(\operatorname{Bad}_{j}\subset\operatorname{Bad}_{j}^{1}\cup\operatorname{Bad }_{j}^{2}\). Here \(\operatorname{Bad}_{j}^{1}\) is the set of pairs \((x,y)\in E_{2}\times E_{1}\) such that \(x,y\) lie in \(2T\) for some \(R_{j}^{-1/2+\beta}\)-tube \(T\in\mathbb{T}_{j}\) with \(2T\cap B(x,C_{\operatorname{sep}}^{-1})^{c}\subset G_{j}(x)\) and \(\mu_{2}(4T)\geq R_{j}^{-\alpha/2+\varepsilon_{0}}\). And \(\operatorname{Bad}_{j}^{2}\) is the set of pairs \((x,y)\in E_{2}\times E_{1}\) satisfying that \(x,y\) lie in \(2T\) for some \(R_{j}^{-1/2+\beta}\)-tube \(T\in\mathbb{T}_{j}\) such that \(2T\cap B(x,C_{\operatorname{sep}}^{-1})^{c}\) is not contained in \(G_{j}(x)\) or any \(F(x,aH)\) for \(H\in\mathcal{H}_{\leq\max(j,j_{*})}(x)\). 
Our bound of \(\mu_{2}\times\mu_{1}(\operatorname{Bad}_{j}^{1})\) will not depend on the choice of \(a\) in the construction of \(\mu_{1,g}^{x}\) in Section 4.2, while \(\mu_{2}\times\mu_{1}(\operatorname{Bad}_{j}^{2})\) will. For a given \(H\in\mathcal{H}_{\ell}\) with \(\ell\leq\max(j,j_{*})\), let \(\operatorname{Bad}_{j}^{2}(H)\) be the set of pairs \((x,y)\in E_{2}\times E_{1}\) satisfying that \(H\in\mathcal{H}_{\leq\max(j,j_{*})}(x)\) and \(x,y\) lie in \(2T\) for some \(R_{j}^{-1/2+\beta}\)-tube \(T\in\mathbb{T}_{j}\) such that \(2T\cap B(x,C_{\operatorname{sep}}^{-1})^{c}\) is not contained in \(\mathbb{R}^{d}\setminus F(x,aH)\) or \(F(x,aH)\). We will prove \[\mu_{2}\times\mu_{1}(\operatorname{Bad}_{j}^{1})\lesssim R_{j}^{-200\beta d}\] and that there exists \(a\) such that \[\sum_{H\in\mathcal{H}_{\leq\max(j,j_{*})}}\mu_{2}\times\mu_{1}(\operatorname{ Bad}_{j}^{2}(H))\lesssim R_{j}^{-1/8}\,, \tag{4.11}\] for any \(j\geq 0\). Since \(\operatorname{Bad}_{j}^{2}\subset\bigcup_{H\in\mathcal{H}_{\leq\max(j,j_{*})} }\operatorname{Bad}_{j}^{2}(H)\), these two bounds will prove Lemma 4.8. _Upper bound of \(\mu_{2}\times\mu_{1}(\operatorname{Bad}_{j}^{1})\)._ This is a consequence of Theorem 3.1. By applying Theorem 3.1 with parameters \((\delta,r,\delta^{\eta},\varepsilon)=(R_{j}^{-1/2+\beta},R_{j}^{-\kappa}/(200C _{\operatorname{sep}}),R_{j}^{-\eta},\varepsilon_{0}/4)\), we can find \(\gamma>0\) such that the following is true. There exists a set \(B\subset E_{2}\times E_{1}\) with \(\mu_{2}\times\mu_{1}(B)\leq R_{j}^{-\gamma}\) such that for each \(y\in E_{1}\) and \(R_{j}^{-1/2+\beta}\)-tube \(T\) with \(2T\) containing \(y\), we have (assuming \(\kappa\) and \(\beta\) are chosen small enough in terms of \(\varepsilon_{0}\)): \[\mu_{2}(2T\setminus(A|_{y}\cup B|_{y}))\lesssim_{C_{\operatorname{sep}}}\frac {R_{j}^{(-1/2+\beta)\alpha}}{R_{j}^{-\kappa(\alpha-k+1)}}\cdot R_{j}^{(-1/2+ \beta)(-\varepsilon_{0}/4)}\leq R_{j}^{-\alpha/2+\varepsilon_{0}/2}\,, \tag{4.12}\] where \(A\) is the set of pairs \((x,y)\in E_{2}\times E_{1}\) satisfying that \(x\) and \(y\) lie in some \(R_{j}^{-\eta}\)-concentrated \((R_{j}^{-\kappa}/(200C_{\operatorname{sep}}),k)\)-plate on \(\mu_{1}+\mu_{2}\). Since decreasing the values of \(\beta,\gamma\) makes the previous statement weaker, we may assume \(200\beta d=\gamma\). Now, observe that since \(d(y,E_{2})\geq C_{\rm sep}^{-1}\gtrsim 1\) for \(y\in E_{1}\) and \(\mu_{2}\) is a probability measure, there are \(\lesssim R_{j}^{\alpha/2-\varepsilon_{0}+O(\beta)}\) many tubes \(T\in\mathbb{T}_{j}\) with \(2T\) containing \(y\) satisfying \(\mu_{2}(4T)\geq R_{j}^{-\alpha/2+\varepsilon_{0}}\). Moreover, we claim the following. **Claim 1.** Let \(y\in E_{1}\). Suppose there exist \(x\in E_{2}\) and \(T\in\mathbb{T}_{j}\) such that \(x,y\) lie in \(2T\) with \(2T\cap B(x,C_{\rm sep}^{-1})^{c}\subset G_{j}(x)\). Then \(2T\cap A|_{y}=\emptyset\). Assuming Claim 1, by (4.12) we get \[\mu_{2}({\rm Bad}_{j}^{1}|_{y}\setminus B|_{y})\lesssim R_{j}^{\alpha/2- \varepsilon_{0}+O(\beta)}\cdot R_{j}^{-\alpha/2+\varepsilon_{0}/2}\leq R_{j}^{ -200\beta d}\,.\] Thus, \(\mu_{2}\times\mu_{1}({\rm Bad}_{j}^{1}\setminus B)\lesssim R_{j}^{-200\beta d}\), and so \(\mu_{2}\times\mu_{1}({\rm Bad}_{j}^{1})\lesssim R_{j}^{-200\beta d}\). It remains to prove Claim 1. Let \(y\in E_{1}\), \(x\in E_{2}\), and \(T\in\mathbb{T}_{j}\) be such that \(x,y\) lie in \(2T\) with \(2T\cap B(x,C_{\rm sep}^{-1})^{c}\subset G_{j}(x)\). Suppose \(2T\cap A|_{y}\neq\emptyset\). 
Pick a point \(x^{\prime}\in 2T\cap A|_{y}\); by definition, we have \(x^{\prime}\in E_{2}\cap 2T\) and there exists an \(R_{j}^{-\eta}\)-concentrated \((R_{j}^{-\kappa}/(200C_{\rm sep}),k)\)-plate \(H^{\prime}\) such that \(x^{\prime}\) and \(y\) lie in \(H^{\prime}\). We also know \(x^{\prime},y\) both belong to \(2T\). Since \(d(x^{\prime},y)\geq C_{\rm sep}^{-1}\), we have that \(2T\) is contained in \(100C_{\rm sep}H^{\prime}\). This in turn is an \(R_{j}^{-\eta}\)-concentrated \((R_{j}^{-\kappa}/2,k)\)-plate, so it must be contained in some \(H\in\mathcal{H}_{j}\). Hence, \(2T\subset H\). By assumption, we know \(2T\cap B(x,C_{\rm sep}^{-1})^{c}\not\subset F(x,aH)\) (where \(a=99C_{\rm sep}\)), so there exists \(z\in 2T\cap B(x,C_{\rm sep}^{-1})^{c}\) such that \(\ell(x,z)\) intersects \({\rm Surf}(aH)\) at some point \(w\). Let \(P\) be the \(k\)-plane through \(x\) parallel to the central plane \(P_{H}\) of \(H\). Since \(x,w,z\) are collinear, we have \[\frac{d(w,P)}{d(z,P)}=\frac{d(w,x)}{d(z,x)}.\] However, we know that \(w\in{\rm Surf}(aH)\), so \(d(w,P_{H})=aR_{j}^{-\kappa}\). Also \(d(P,P_{H})\leq R_{j}^{-\kappa}\) since \(x\in 2T\subset H\). Thus by the triangle inequality, we get \(d(w,P)\geq(a-1)R_{j}^{-\kappa}\). Meanwhile, \(d(z,P)\leq 2R_{j}^{-\kappa}\) (since \(z\in 2T\subset H\)), so \(\frac{d(w,P)}{d(z,P)}\geq\frac{a-1}{2}\). On the other hand, \(d(w,x)\leq 20\) and \(d(z,x)\geq C_{\rm sep}^{-1}\), so \(\frac{d(w,x)}{d(z,x)}\leq 20C_{\rm sep}\). Hence, we get \(\frac{a-1}{2}\leq 20C_{\rm sep}\), which contradicts \(a=99C_{\rm sep}\) and \(C_{\rm sep}\geq 1\). _Upper bound of \(\mu_{2}\times\mu_{1}({\rm Bad}_{j}^{2})\)._ We will prove there exists a choice of \(a\) such that for all \(j,\ell\) satisfying \(\ell\leq\max(j,j_{*})\), we have \[F_{j,\ell}(a)=\sum_{H\in\mathcal{H}_{\ell}}\mu_{2}\times\mu_{1}({\rm Bad}_{j}^ {2}(H))\leq R_{j}^{-1/4}. \tag{4.13}\] Then given \(j\), summing (4.13) over \(\ell\leq\max(j,j_{*})\) gives (4.11) (use \(\max(j,j_{*})\lesssim\log R_{j}\)). To prove (4.13), we fix \(j,\ell\) and upper bound the measure of the set of \(a\)'s for which (4.13) fails. We would like to apply Markov's inequality, so we compute the expectation of \(F_{j,\ell}(a)\) over \(a\). Let \(I=[99C_{\rm sep},100C_{\rm sep}]\) and let \(P(a)=\frac{1}{C_{\rm sep}}\mathbb{1}_{I}\,da\) be a probability measure for \(a\in I\). Then we have \[\int_{I}F_{j,\ell}(a)\,dP(a) =\int_{E_{1}}\int_{E_{2}}\int_{I}\sum_{H\in\mathcal{H}_{\ell}}1_{ \operatorname{Bad}^{2}_{j}(H)}(x,y)\,dP(a)d\mu_{2}(x)d\mu_{1}(y)\] \[\leq\sup_{x\in E_{2},y\in E_{1}}\int_{I}\sum_{H\in\mathcal{H}_{ \ell}}1_{\operatorname{Bad}^{2}_{j}(H)}(x,y)\,dP(a).\] The following claim shows that it is unlikely that \((x,y)\in\operatorname{Bad}^{2}_{j}(H)\) for a given \(H\). **Claim 2.** Suppose \((x,y)\in\operatorname{Bad}^{2}_{j}(H)\), where \(H\in\mathcal{H}_{\ell}(x)\) with \(\ell\leq\max(j,j_{*})\). Let \(z_{1},z_{2}\) be the intersections of the line through \(x,y\) with \(B(0,10)\). Then for one of \(i=1,2\), we have \(|d(z_{i},P_{H})-aR_{\ell}^{-\kappa}|\lesssim R_{j}^{-1/2+\beta}\), where \(P_{H}\) is the central plane of \(H\). _Proof._ By definition of \(\operatorname{Bad}^{2}_{j}(H)\), there exists \(T\in\mathbb{T}_{j}\) with \(2T\) containing both \(x,y\) such that \(2T\cap B(x,C_{\rm sep}^{-1})^{c}\) is not contained in \(\mathbb{R}^{d}\setminus F(x,aH)\) or \(F(x,aH)\). Fix a large constant \(C>0\). 
If \(d(z_{i},P_{H})-aR_{\ell}^{-\kappa}>CR_{j}^{-1/2+\beta}\) for \(i=1\) or \(2\), then we claim \(2T\cap B(x,C_{\rm sep}^{-1})^{c}\subset\mathbb{R}^{d}\setminus F(x,aH)\). Note that for any \(y^{\prime}\in 2T\cap B(x,C_{\rm sep}^{-1})^{c}\), one of the intersections \(z^{\prime}\) of the line through \(x,y^{\prime}\) with \(B(0,10)\) is contained in \(B(z_{i},CR_{j}^{-1/2+\beta})\), and so by the triangle inequality, we get \(d(z^{\prime},P_{H})>aR_{\ell}^{-\kappa}\). (This is why the \(B(x,C_{\rm sep}^{-1})^{c}\) is important: it is not true that for all \(y^{\prime}\in 2T\), one of the intersections \(z^{\prime}\) of the line through \(x,y^{\prime}\) with \(B(0,10)\) is contained in \(B(z_{i},CR_{j}^{-1/2+\beta})\). Take \(y^{\prime}\) such that \((y-x)\perp(y^{\prime}-x)\), for instance.) This shows that \(2T\cap B(x,C_{\rm sep}^{-1})^{c}\subset\mathbb{R}^{d}\setminus F(x,aH)\). If on the other hand \(d(z_{i},P_{H})<aR_{\ell}^{-\kappa}-CR_{j}^{-1/2+\beta}\) for both \(i=1,2\), then a similar argument shows that \(2T\cap B(x,C_{\rm sep}^{-1})^{c}\subset F(x,aH)\). Thus, we have established the contrapositive of the claim. \(\square\) Using Claim 2, observe that, for a fixed pair \((x,y)\in E_{2}\times E_{1}\) and a fixed \(H\in\mathcal{H}_{\ell}(x)\) with \(\ell\leq\max(j,j_{*})\) (so \(R_{\ell}\leq R_{j}^{2}\)), we have \((x,y)\in\operatorname{Bad}^{2}_{j}(H)\) for \(a\) lying in two intervals each of length \(\lesssim R_{j}^{-1/2+\beta}R_{\ell}^{\kappa}\lesssim R_{j}^{-1/2+\beta+2\kappa}\). Recall that by Lemma 3.3 and \(R_{\ell}\leq R_{j}^{2}\) we have \(|\mathcal{H}_{\ell}|\lesssim R_{j}^{2N\eta}\). Therefore, \[\sum_{H\in\mathcal{H}_{\ell}}\int_{I}1_{\operatorname{Bad}^{2}_{j }(H)}(x,y)\,dP(a)\lesssim \frac{R_{j}^{2N\eta}R_{j}^{-1/2+\beta+2\kappa}}{|I|}\] \[\sim R_{j}^{-1/2+\beta+2N\eta+2\kappa}.\] Thus, assuming \(\beta,\eta,\kappa\) are small enough and \(R_{0}\) is large enough, by Markov's inequality we have \(|\{a:F_{j,\ell}(a)>R_{j}^{-1/4}\}|\leq R_{j}^{-1/8}\). By the union bound, (4.13) fails for some \(j,\ell\) satisfying \(\ell\leq\max(j,j_{*})\) only in a set of measure \[\sum_{j=0}^{\infty}\sum_{\ell\leq\max(j,j_{*})}R_{j}^{-1/8}\leq\sum _{j=0}^{j_{*}}\sum_{\ell=0}^{j_{*}}R_{0}^{-1/8}+\sum_{\ell=0}^{\infty}\sum_{j= \ell}^{\infty}R_{j}^{-1/8}\] \[\lesssim R_{0}^{-1/8}\cdot(\log_{2}R_{0})^{2}+\sum_{\ell\geq 0}R_{ \ell}^{-1/8}\lesssim R_{0}^{-1/8}\cdot\left[(\log_{2}R_{0})^{2}+1\right]<1,\] if \(R_{0}\) is chosen sufficiently large. ## 5. Refined decoupling and Proposition 2.2 In this section, we prove Proposition 2.2, which will complete the proof of Theorem 1.2. This part of the argument proceeds very similarly to [5, Section 4] and [10, Section 5]. Let \(\sigma_{r}\) be the normalized surface measure on the sphere of radius \(r\). The main estimates in the proof of Proposition 2.2 are the following. **Lemma 5.1**.: _For any \(\alpha>0\), \(r>10R_{0}\), and \(\varepsilon_{0}\) sufficiently small depending on \(\alpha,\varepsilon\):_ \[\int_{E_{2}}|\mu_{1,g}^{x}\ast\widehat{\sigma}_{r}(x)|^{2}d\mu_{2}(x)\lesssim _{\varepsilon}r^{-\frac{\alpha d}{d+1}+\varepsilon}r^{-(d-1)}\int|\widehat{ \mu}_{1}|^{2}\psi_{r}d\xi+\operatorname{RapDec}(r),\] _where \(\psi_{r}\) is a weight function which is \(\sim 1\) on the annulus \(r-1\leq|\xi|\leq r+1\) and decays off of it. 
To be precise, we could take_ \[\psi_{r}(\xi)=(1+|r-|\xi||)^{-100}\,.\] **Lemma 5.2**.: _For any \(\alpha>0\), \(r>0\), we have_ \[\int_{E_{2}}|\mu_{1,g}^{x}\ast\widehat{\sigma}_{r}(x)|^{2}\,d\mu_{2}(x)\lesssim (r+1)^{d-1}r^{-(d-1)}\,.\] Proof of Proposition 2.2, given Lemmas 5.1 and 5.2.: Note that \[d_{*}^{x}(\mu_{1,g}^{x})(t)=t^{d-1}\mu_{1,g}^{x}\ast\sigma_{t}(x).\] Since \(\mu_{1,g}^{x}\) is essentially supported in the \(R_{0}^{-1/2+\beta}\) neighborhood of \(E_{1}\), for \(x\in E_{2}\), we only need to consider \(t\sim 1\). Hence, up to a loss of \(\operatorname{RapDec}(R_{0})\) which is negligible in our argument, we have \[\int\|d_{*}^{x}(\mu_{1,g}^{x})\|_{L^{2}}^{2}\,d\mu_{2}(x) \lesssim\int_{0}^{\infty}\int|\mu_{1,g}^{x}\ast\sigma_{t}(x)|^{2} \,d\mu_{2}(x)t^{d-1}dt\] \[=\int_{0}^{\infty}\int|\mu_{1,g}^{x}\ast\widehat{\sigma}_{r}(x)|^ {2}\,d\mu_{2}(x)r^{d-1}dr,\] where in the second step, we used a limiting process and an \(L^{2}\)-identity proved by Liu [14, Theorem 1.9]: for any Schwartz function \(f\) on \(\mathbb{R}^{d},d\geq 2\), and any \(x\in\mathbb{R}^{d}\), \[\int_{0}^{\infty}|f*\sigma_{t}(x)|^{2}\,t^{d-1}\,dt=\int_{0}^{\infty}|f*\widehat {\sigma}_{r}(x)|^{2}\,r^{d-1}\,dr.\] For \(r\leq 10R_{0}\) we use Lemma 5.2, and for \(r>10R_{0}\) we use Lemma 5.1. The small \(r\) contribution to \(\int\|d_{*}^{x}(\mu_{1,g}^{x})\|_{L^{2}}^{2}d\mu_{2}(x)\) is \[\int_{0}^{10R_{0}}(r+1)^{d-1}\,dr\lesssim R_{0}^{d}.\] The large \(r\) contribution is (dropping the negligible \(\operatorname{RapDec}(r)\) term) \[\int_{10R_{0}}^{\infty}\int_{\mathbb{R}^{d}}r^{-\frac{\alpha d}{d +1}+\varepsilon}\psi_{r}(\xi)|\widehat{\mu}_{1}(\xi)|^{2}\,d\xi dr\] \[\lesssim\int_{\mathbb{R}^{d}}|\xi|^{-\frac{\alpha d}{d+1}+ \varepsilon}|\widehat{\mu}_{1}(\xi)|^{2}\,d\xi.\] The proof of Proposition 2.2 is thus complete upon verification of Lemmas 5.1 and 5.2. Proof of Lemma 5.2.: We follow the proof of Proposition 5.3 in [10], case \(r<10R_{0}\). Since \(\mu_{2}\) is a probability measure, it suffices to upper bound \(\sup_{x}|\mu_{1,g}^{x}*\widehat{\sigma}_{r}(x)|\). Fix \(x\) and note that \[|\mu_{1,g}^{x}*\widehat{\sigma}_{r}(x)|^{2}\leq\|\widehat{\mu_{1,g}^{x}}\|_{L ^{1}(d\sigma_{r})}^{2}\leq\|\widehat{\mu_{1,g}^{x}}\|_{L^{2}(d\sigma_{r})}^{2 }\,.\] Then by the approximate orthogonality argument in [10, Proof of Proposition 5.3], we have \[\|\widehat{\mu_{1,g}^{x}}\|_{L^{2}(d\sigma_{r})}^{2}\lesssim r^{-(d-1)}\int(| \widehat{\mu_{1}|_{G_{0}(x)}}|^{2}+|\widehat{\mu}_{1}|^{2})\psi_{r}d\xi.\] Finally, since \(\|\widehat{\mu}_{1}\|_{L^{\infty}}\leq\|\mu_{1}\|_{L^{1}}=1\), \(\|\widehat{\mu_{1}|_{G_{0}(x)}}\|_{L^{\infty}}\leq\|\mu_{1}|_{G_{0}(x)}\|_{L^ {1}}\leq 1\), and \(\int\psi_{r}\,d\xi\lesssim(r+1)^{d-1}\), we get the desired result. ### Refined decoupling estimates The key ingredient in the proof of Lemma 5.1 is the following refined decoupling theorem, which is derived by applying the \(l^{2}\) decoupling theorem of Bourgain and Demeter [3] at many different scales. Here is the setup. Suppose that \(S\subset\mathbb{R}^{d}\) is a compact and strictly convex \(C^{2}\) hypersurface with Gaussian curvature \(\sim 1\). Fix \(\varepsilon>0\) and a sufficiently small \(0<\beta\ll\varepsilon\). Suppose that the \(1\)-neighborhood of \(RS\) is partitioned into \(R^{1/2}\times...\times R^{1/2}\times 1\) blocks \(\theta\). 
For each \(\theta\), let \(\mathbb{T}_{\theta}\) be a set of finitely overlapping tubes of dimensions \(R^{-1/2+\beta}\times\cdots\times R^{-1/2+\beta}\times 1\) with long axis perpendicular to \(\theta\), and let \(\mathbb{T}=\cup_{\theta}\mathbb{T}_{\theta}\). Each \(T\in\mathbb{T}\) belongs to \(\mathbb{T}_{\theta}\) for a single \(\theta\), and we let \(\theta(T)\) denote this \(\theta\). We say that \(f\) is microlocalized to \((T,\theta(T))\) if \(f\) is essentially supported in \(2T\) and \(\hat{f}\) is essentially supported in \(2\theta(T)\). **Theorem 5.3**.: _[_10_, Corollary 4.3]_ _Let \(p\) be in the range \(2\leq p\leq\frac{2(d+1)}{d-1}\). For any \(\varepsilon>0\), the following holds provided \(0<\beta\ll\varepsilon\) is sufficiently small. Let \(\mathbb{W}\subset\mathbb{T}\) and suppose that each \(T\in\mathbb{W}\) lies in the unit ball. Let \(W=|\mathbb{W}|\). Suppose that \(f=\sum_{T\in\mathbb{W}}f_{T}\), where \(f_{T}\) is microlocalized to \((T,\theta(T))\). Suppose that \(\|f_{T}\|_{L^{p}}\) is \(\sim\) constant for each \(T\in\mathbb{W}\). Let \(Y\) be a union of \(R^{-1/2}\)-cubes in the unit ball each of which intersects at most \(M\) tubes \(T\in\mathbb{W}\). Then_ \[\|f\|_{L^{p}(Y)}\lesssim_{\varepsilon}R^{\varepsilon}\left(\frac{M}{W}\right) ^{\frac{1}{2}-\frac{1}{p}}\left(\sum_{T\in\mathbb{W}}\|f_{T}\|_{L^{p}}^{2} \right)^{1/2}.\] The proof of Lemma 5.1 using Theorem 5.3 proceeds almost identically to [5, Lemma 4.1], as the good measure has no \(x\) dependence in the regime \(r>10R_{0}\). We include the proof below for the sake of completeness. ### Proof of Lemma 5.1 Assume \(r>10R_{0}\). By definition, \[\mu_{1,g}^{x}*\widehat{\sigma}_{r}=\sum_{j:R_{j}\sim r}\sum_{T\in\mathbb{T}_{j }:T\text{ good}}M_{T}\mu_{1}*\widehat{\sigma}_{r}+\text{RapDec}(r).\] The key point to notice is that \(\mu_{1,g}^{x}*\widehat{\sigma}_{r}\) is independent of \(x\). The contribution of \(\text{RapDec}(r)\) is already taken into account in the statement of Lemma 5.1. Hence without loss of generality we may ignore the tail \(\text{RapDec}(r)\) in the argument below. Let \(\eta_{1}\) be a bump function adapted to the unit ball and define \[f_{T}=\eta_{1}\left(M_{T}\mu_{1}*\widehat{\sigma}_{r}\right).\] One can easily verify that \(f_{T}\) is microlocalized to \((T,\theta(T))\). Let \(p=\frac{2(d+1)}{d-1}\). By dyadic pigeonholing, there exists \(\lambda>0\) such that \[\int|\mu_{1,g}^{x}*\widehat{\sigma}_{r}(x)|^{2}\,d\mu_{2}(x)\lesssim\log r\int |f_{\lambda}(x)|^{2}d\mu_{2}(x),\] where \[f_{\lambda}=\sum_{T\in\mathbb{W}_{\lambda}}f_{T},\quad\mathbb{W}_{\lambda}:= \bigcup_{j:R_{j}\sim r}\Big{\{}T\in\mathbb{T}_{j}:T\text{ good },\|f_{T}\|_{L^{p}}\sim\lambda\Big{\}}.\] Next, we divide the unit ball into \(r^{-1/2}\)-cubes \(q\) and sort them. Denote \[\mathcal{Q}_{M}:=\{r^{-1/2}\text{-cubes }q:q\text{ intersects }\sim M\text{ tubes }T\in\mathbb{W}_{\lambda}\}.\] Let \(Y_{M}:=\bigcup_{q\in\mathcal{Q}_{M}}q\). 
Since there are only \(\sim\log r\) many choices of \(M\), there exists \(M\) such that \[\int|\mu_{1,g}^{x}*\widehat{\sigma}_{r}(x)|^{2}\,d\mu_{2}(x)\lesssim(\log r)^{2} \int_{Y_{M}}|f_{\lambda}(x)|^{2}d\mu_{2}(x)\,.\] Since \(f_{\lambda}\) only involves good wave packets, by considering the quantity \[\sum_{q\in\mathcal{Q}_{M}}\sum_{T\in\mathbb{W}_{\lambda}:T\cap q\neq\emptyset} \mu_{2}(q),\] we get \[M\mu_{2}(\mathcal{N}_{r^{-1/2}}(Y_{M}))\lesssim|\mathbb{W}_{\lambda}|r^{- \frac{\alpha}{2}+\varepsilon_{0}}, \tag{5.1}\] where \(\mathcal{N}_{r^{-1/2}}(Y_{M})\) is the \(r^{-1/2}\)-neighborhood of \(Y_{M}\). The rest of the proof of Lemma 5.1 will follow from Theorem 5.3 and estimate (5.1). By Holder's inequality and the observation that \(f_{\lambda}\) has Fourier support in the \(1\)-neighborhood of the sphere of radius \(r\), one has \[\int_{Y_{M}}|f_{\lambda}(x)|^{2}\,d\mu_{2}(x)\lesssim\left(\int_{Y_{M}}|f_{ \lambda}|^{p}\right)^{2/p}\left(\int_{Y_{M}}|\mu_{2}*\eta_{1/r}|^{p/(p-2)} \right)^{1-2/p},\] where \(\eta_{1/r}\) is a bump function with integral \(1\) that is essentially supported on the ball of radius \(1/r\). To bound the second factor, we note that \(\eta_{1/r}\sim r^{d}\) on the ball of radius \(1/r\) and is rapidly decaying off it. Using the fact that \(\mu_{2}(B(x,t))\lesssim t^{\alpha},\forall x\in\mathbb{R}^{d},\forall t>0\), we have \[\|\mu_{2}*\eta_{1/r}\|_{\infty}\lesssim r^{d-\alpha}\,.\] Therefore, \[\int_{Y_{M}}|\mu_{2}*\eta_{1/r}|^{p/(p-2)}\lesssim \|\mu_{2}*\eta_{1/r}\|_{\infty}^{2/(p-2)}\int_{Y_{M}}d\mu_{2}*\eta _{1/r}\] \[\lesssim r^{2(d-\alpha)/(p-2)}\mu_{2}(\mathcal{N}_{r^{-1/2}}(Y_{M})).\] By Theorem 5.3, the first factor can be bounded as follows: \[\left(\int_{Y_{M}}|f_{\lambda}|^{p}\right)^{2/p}\lesssim \left(\frac{M}{|\mathbb{W}_{\lambda}|}\right)^{1-2/p}\sum_{T\in \mathbb{W}_{\lambda}}\|f_{T}\|_{L^{p}}^{2}\] \[\lesssim \left(\frac{r^{-\frac{\alpha}{2}+\varepsilon_{0}}}{\mu_{2}( \mathcal{N}_{r^{-1/2}}(Y_{M}))}\right)^{1-2/p}\sum_{T\in\mathbb{W}_{\lambda}} \|f_{T}\|_{L^{p}}^{2},\] where the second step follows from (5.1). Combining the two estimates together, one obtains \[\int_{Y_{M}}|f_{\lambda}(x)|^{2}\,d\mu_{2}(x)\lesssim r^{\frac{2d}{p}-\alpha(\frac {1}{2}+\frac{1}{p})+O(\varepsilon_{0})}\sum_{T\in\mathbb{W}_{\lambda}}\|f_{T} \|_{L^{p}}^{2}.\] Observe that \(\|f_{T}\|_{L^{p}}\) has the following simple bound: \[\|f_{T}\|_{L^{p}}\lesssim \|f_{T}\|_{L^{\infty}}|T|^{1/p}\lesssim\sigma_{r}(\theta(T))^{1/2} |T|^{1/p}\|\widehat{M_{T}\mu_{1}}\|_{L^{2}(d\sigma_{r})}\] \[= r^{-(\frac{1}{2p}+\frac{1}{4})(d-1)+O(\beta)}\|\widehat{M_{T} \mu_{1}}\|_{L^{2}(d\sigma_{r})}.\] Plugging this back into the above formula, one obtains \[\int_{Y_{M}}|f_{\lambda}(x)|^{2}\,d\mu_{2}(x)\lesssim r^{\frac{2d}{p}-(\alpha+d-1)(\frac{1}{2}+\frac{1}{p})+O(\varepsilon_{0 })}\sum_{T\in\mathbb{W}_{\lambda}}\|\widehat{M_{T}\mu_{1}}\|_{L^{2}(d\sigma_{ r})}^{2}\] \[\lesssim r^{-\frac{\alpha d}{d+1}+\varepsilon}r^{-(d-1)}\int|\widehat{ \mu}_{1}|^{2}\psi_{r}\,d\xi,\] where \(p=2(d+1)/(d-1)\) and we have used orthogonality and chosen \(\beta\ll\varepsilon_{0}\ll\varepsilon\). The proof of Lemma 5.1 and hence Proposition 2.2 is complete. ## 6. Proof of Theorem 1.3 We will prove Theorem 1.3 following the approach in [15]. We shall use the following criterion to determine the Hausdorff dimension of pinned distance sets. **Lemma 6.1**.: _[_15_, Lemma 3.1]_ _Given a compact set \(E\subset\mathbb{R}^{d}\), \(x\in\mathbb{R}^{d}\) and a probability measure \(\mu_{E}\) on \(E\). 
Suppose there exist \(\tau\in(0,1]\), \(K\in\mathbb{Z}_{+}\), \(\beta>0\) such that_ \[\mu_{E}(\{y:|y-x|\in D_{k}\})<2^{-k\beta}\] _for any_ \[D_{k}=\bigcup_{j=1}^{M}I_{j},\] _where \(k>K\), \(M\leq 2^{k\tau}\) are arbitrary integers and each \(I_{j}\) is an arbitrary interval of length \(\approx 2^{-k}\). Then_ \[\dim_{H}(\Delta_{x}(E))\geq\tau.\] The next proposition, which can be viewed as a discretized variant of Theorem 1.3, is a key step in its proof. **Proposition 6.2**.: _Let \(d\geq 2,1\leq k\leq d-1\), \(k-1<\alpha\leq k\), and \(\tau<\min(f(\alpha),1)\). There exists \(\beta>0\) depending on \(\tau,\alpha,k\) such that the following holds for sufficiently small \(\delta<\delta_{0}(\tau,\alpha,k)\). Let \(\mu_{1},\mu_{2}\) be \(\alpha\)-dimensional measures with \(\sim 1\) separation and constant \(C_{\alpha}\) supported on \(E_{1},E_{2}\) respectively. Then there exists a set \(F_{\delta}\subset E_{2}\) with \(\mu_{2}(F_{\delta})\lesssim\delta^{\beta^{2}}\) such that for all \(x\in E_{2}\setminus F_{\delta}\), there exists a set \(W(x)\) that is contained within some \((2\delta^{\beta^{2}},k)\)-plate such that_ \[\mu_{1}(\{y:|x-y|\in J\}\setminus W(x))\leq\delta^{\beta^{2}/2},\] _where \(J\) is any union of \(\leq\delta^{-\tau}\) many intervals each of length \(\sim\delta\)._ Next we prove Proposition 6.2. The proof reproduces the argument in [15, Section 4] with some minor simplifications. Proof.: First, instead of working with \(\mu_{1}\), we will use a mollified version that removes the high frequency contributions. Let \(\phi\in C_{0}^{\infty}(\mathbb{R}^{d})\) be supported on \(B(0,1)\) and satisfy \(\phi\geq 0\), \(\int\phi=1\), and \(\phi\geq 1\) on \(B(0,\frac{1}{2})\). This \(\phi\) will be fixed for the rest of the proof (in particular, it does not depend on \(\delta\), and subsequent implicit constants may depend on \(\phi\)). Let \(\phi_{\delta}(\cdot)=\delta^{-d}\phi(\delta^{-1}\cdot)\) and \(\mu_{1}^{\delta}=\mu_{1}*\phi_{\delta}\). The crucial point is that \(\mu_{1}^{\delta}\) is supported in a \(\delta\)-neighborhood of the support of \(\mu_{1}\) and in fact serves as a good approximation for \(\mu_{1}\) down to scale \(\delta\), but \(\widehat{\mu_{1}^{\delta}}\) is rapidly decaying at frequencies much larger than \(\delta^{-1}\). Fix a small \(\varepsilon>0\). We apply Proposition 2.1 with \(R_{0}=\delta^{-\beta}\) and the measure \(\mu_{1}^{\delta}\), which is still an \(\alpha\)-dimensional measure with constant comparable to \(C_{\alpha}\) (independent of \(\delta\)). (We make \(\delta\) sufficiently small to ensure that \(R_{0}\) is sufficiently large.) Then there is a subset \(E_{2}^{\prime}\subset E_{2}\) so that \(\mu_{2}(E_{2}^{\prime})\geq 1-\delta^{\beta^{2}}\) and for each \(x\in E_{2}^{\prime}\), there exists a set \(G(x)\subset\mathbb{R}^{d}\) where \(B^{d}(0,10)\setminus G(x)\) is contained within some \((\delta^{\beta^{2}},k)\)-plate \(H(x)\) such that \[\|d_{*}^{x}(\mu_{1}^{\delta}|_{G(x)})-d_{*}^{x}(\mu_{1,g}^{\delta,x})\|_{L^{1 }}\leq\delta^{\beta^{2}}.\] We will define \(W(x)=H(x)^{(\delta)}\), which satisfies the required condition on \(W(x)\). 
Let \[\mathcal{J}_{\delta}^{\tau}=\left\{\bigcup_{j=1}^{M}I_{j}:M\leq\delta^{-\tau},\text{ each }I_{j}\text{ is an open interval of length }\sim\delta\right\}.\] Let \(F^{\prime}\) be the set of points \(x\in E_{2}^{\prime}\) such that \[\sup_{J\in\mathcal{J}_{\delta}^{\tau}}\int_{J^{(\delta)}}d_{*}^{x}(\mu_{1}^{ \delta}|_{G(x)})(t)\,dt\geq\delta^{\beta^{2}/2}.\] Now, define \(F_{\delta}:=F^{\prime}\cup(E_{2}\setminus E_{2}^{\prime})\). Then for any \(x\in E_{2}\setminus F_{\delta}=E_{2}^{\prime}\setminus F^{\prime}\), we can claim the following. **Claim.** For all \(x\in E_{2}^{\prime}\setminus F^{\prime}\) and \(J\in\mathcal{J}_{\delta}^{\tau}\), we have \[\mu_{1}(\{y:|x-y|\in J\}\setminus W(x))\leq\delta^{\beta^{2}/2}.\] _Proof of Claim._ Note that if \(y\notin W(x)\), then \(B(y,\delta)\subset G(x)\). For \(x\in E_{2}^{\prime}\setminus F^{\prime}\) and \(J\in\mathcal{J}_{\delta}^{\tau}\), we have \[\delta^{\beta^{2}/2} \geq\int_{|x-z|\in J^{(\delta)}}\mu_{1}^{\delta}|_{G(x)}(z)\,dz\] \[=\delta^{-d}\iint_{|x-z|\in J^{(\delta)},z\in G(x)}\phi(\delta^{- 1}(z-y))d\mu_{1}(y)dz\] \[\geq\delta^{-d}\iint_{|x-y|\in J,|y-z|\leq\delta,y\notin W(x)} \phi(\delta^{-1}(z-y))d\mu_{1}(y)dz\] \[\geq\int_{|x-y|\in J,y\notin W(x)}d\mu_{1}(y)\int_{B(0,1)}\phi(u) \,du\] \[=\mu_{1}(\{y:|x-y|\in J\}\setminus W(x)).\qed\] Recall that \(\mu_{2}(E_{2}\setminus E_{2}^{\prime})\leq\delta^{\beta^{2}}\). So it remains to show \(\mu_{2}(F^{\prime})\lesssim\delta^{\beta^{2}}\) (assuming a good choice of \(\beta,\varepsilon\)). For \(x\in F^{\prime}\), we have \[\sup_{J\in\mathcal{J}_{\delta}^{\tau}}\int_{J^{(\delta)}} \lvert d_{*}^{x}(\mu_{1,g}^{\delta,x})(t)\rvert\,dt\] \[\geq\sup_{J\in\mathcal{J}_{\delta}^{\tau}}\int_{J^{(\delta)}}d_{* }^{x}(\mu_{1}^{\delta}|_{G(x)})(t)\,dt-\lVert d_{*}^{x}(\mu_{1}^{\delta}|_{G( x)})-d_{*}^{x}(\mu_{1,g}^{\delta,x})\rVert_{L^{1}}\] \[\geq\delta^{\beta^{2}/2}-\delta^{\beta^{2}}\geq\delta^{\beta^{2}}.\] Then by Cauchy-Schwarz, we have for \(x\in F^{\prime}\), \[\left(\sup_{J\in\mathcal{J}_{\delta}^{\tau}}|J^{(\delta)}|^{\frac{1}{2}}\right) \left(\int|d_{*}^{x}(\mu_{1,g}^{\delta,x})(t)|^{2}dt\right)^{\frac{1}{2}}\geq \sup_{J\in\mathcal{J}_{\delta}^{\tau}}\int_{J^{(\delta)}}|d_{*}^{x}(\mu_{1,g} ^{\delta,x})(t)|\,dt\geq\delta^{\beta^{2}}.\] For \(J\in\mathcal{J}_{\delta}^{\tau}\), \(J\) and \(J^{(\delta)}\) can both be covered by \(\lesssim\delta^{-\tau}\) many intervals each of length \(\sim\delta\), so \(\sup_{J\in\mathcal{J}_{\delta}^{\tau}}|J^{(\delta)}|^{1/2}\lesssim\delta^{(1- \tau)/2}\). 
Thus, for \(x\in F^{\prime}\), \[\lVert d_{*}^{x}(\mu_{1,g}^{\delta,x})\rVert_{L^{2}}^{2}\geq\delta^{2\beta^{2 }-(1-\tau)}.\] Integrate over \(F^{\prime}\) and apply Proposition 2.2 to get \[\delta^{2\beta^{2}-(1-\tau)}\mu_{2}(F^{\prime}) \leq\int\|d_{*}^{x}(\mu_{1,g}^{\delta,x})\|_{L^{2}}^{2}d\mu_{2}(x)\] \[\leq\int|\widehat{\mu_{1}^{\delta}}(\xi)|^{2}|\xi|^{-\frac{\alpha d }{d+1}+\varepsilon}\,d\xi+\delta^{-d\beta}+\text{RapDec}(\delta)\] \[\leq\int|\widehat{\mu_{1}}(\xi)|^{2}|\widehat{\phi}(\delta\xi)|^{ 2}|\xi|^{-\frac{\alpha d}{d+1}+\varepsilon}\,d\xi+\delta^{-d\beta}+\text{RapDec} (\delta)\] \[\leq\int_{|\xi|\leq\delta^{-1-\beta}}|\widehat{\mu_{1}}(\xi)|^{2 }|\xi|^{-\frac{\alpha d}{d+1}+\varepsilon}\,d\xi+\delta^{-d\beta}+\text{RapDec} (\delta)\,.\] Since \(\tau<1\), the condition \(|\xi|\leq\delta^{-1-\beta}\), which implies \(|\xi|^{1-\beta}\leq\delta^{-(1+\beta)(1-\beta)}\leq\delta^{-1}\), gives us \[\delta^{-3\beta^{2}+(1-\tau)}\leq|\xi|^{(3\beta^{2}-(1-\tau))(1-\beta)}=|\xi| ^{-(1-\tau)+O(\beta)}.\] Thus, \[\mu_{2}(F^{\prime})\] \[\leq \delta^{-2\beta^{2}+(1-\tau)}\int_{|\xi|\leq\delta^{-1-\beta}}| \widehat{\mu_{1}}(\xi)|^{2}|\xi|^{-\frac{\alpha d}{d+1}+\varepsilon}\,d\xi+ \delta^{-d\beta-2\beta^{2}+(1-\tau)}+\text{RapDec}(\delta)\] \[\leq \delta^{\beta^{2}}\int|\widehat{\mu_{1}}(\xi)|^{2}|\xi|^{-\frac{ \alpha d}{d+1}+\varepsilon-(1-\tau)+O(\beta)}\,d\xi+\delta^{-d\beta-2\beta^{2} +(1-\tau)}+\text{RapDec}(\delta)\,.\] Finally, since \(\tau<f(\alpha)=\alpha\cdot\frac{2d+1}{d+1}-(d-1)\) and \(\tau<1\), we may choose small \(\varepsilon\) and \(\beta\) such that \[-d\beta-2\beta^{2}+(1-\tau)>\beta^{2},\] \[-\frac{\alpha d}{d+1}+\varepsilon-(1-\tau)+O(\beta)<-d+\alpha.\] This guarantees the energy integral \[\int|\widehat{\mu_{1}}(\xi)|^{2}|\xi|^{-\frac{\alpha d}{d+1}+\varepsilon-(1- \tau)+O(\beta)}\,d\xi\] to be finite (in fact, \(\lesssim C_{\mu}\)) and so \(\mu_{2}(F^{\prime})\lesssim\delta^{\beta^{2}}\). We will also need a result of Shmerkin to prove Theorem 1.3. **Theorem 6.3**.: _[_25_, Theorem 6.3 and Theorem B.1]_ _Fix \(1\leq k\leq d-1\), \(c>0\). Given \(\kappa_{1},\kappa_{2}>0\), there is \(\gamma>0\) (depending continuously on \(\kappa_{1},\kappa_{2}\)) such that the following holds._ _Let \(\mu,\nu\) be probability measures on \(B^{d}(0,1)\) satisfying decay conditions_ \[\mu(V) \leq C_{\mu}r^{\kappa_{1}},\] \[\nu(V) \leq C_{\nu}r^{\kappa_{2}},\] _for any \((r,k-1)\)-plate \(V\), and \(0<r\leq 1\). Suppose \(\nu\) gives zero mass to every \(k\)-dimensional affine plane. Then for all \(x\) in a set \(E\) of \(\mu\)-measure \(\geq 1-c\) there is a set \(K(x)\) with \(\nu(K(x))\geq 1-c\) such that_ \[\nu(W\cap K(x))\leq r^{\gamma}, \tag{6.2}\] _for any \(r\in(0,r_{0}]\) and any \((r,k)\)-plate \(W\) passing through \(x\), where \(r_{0}>0\) depends only on \(d,\mu,C_{\nu},\kappa_{2},c\)._ _Finally, the set \(\{(x,y):x\in E,y\in K(x)\}\) is compact._ We now prove the following theorem that implies the second claim of Theorem 1.3 (that \(\sup_{x\in E}\dim_{H}(\Delta_{x}(E))\geq\min(f(\alpha),1)\)). (Specifically, apply Theorem 6.4 with \(\alpha-\varepsilon\) for any \(\varepsilon>0\).) The first and third claims then follow by applying the second claim of Theorem 1.3 to the set \(\{x\in E:\dim_{H}(\Delta_{x}(E))\leq\min(f(\alpha),1)-\varepsilon\}\) and taking a sequence of \(\varepsilon_{n}\to 0\). **Theorem 6.4**.: _Let \(0<\alpha\leq d-1\). 
Suppose \(E_{1},E_{2}\subset B^{d}(0,1)\) are separated by \(\sim 1\), and each of them has positive \(\alpha\)-dimensional Hausdorff measure. Then there exists \(x\in E_{1}\cup E_{2}\) with \(\dim_{H}(\Delta_{x}(E_{1}))\geq\min(f(\alpha),1)\)._ Proof.: Let \(1\leq k\leq d-1\), \(k-1<\alpha\leq k\). Let \(\mu_{1},\mu_{2}\) be \(\alpha\)-dimensional measures supported on \(E_{1},E_{2}\) respectively. Suppose \(\mu_{1}\) gives nonzero mass to some \(k\)-dimensional affine plane \(H\). We have three possible cases: * If \(k\geq 3\), then \(\alpha>k-1\geq\frac{k+1}{2}\), so by [21], there exists \(x\in E_{1}\) such that \(|\Delta_{x}(E_{1})|>0\). * If \(k=2\), then by [15, Theorem 1.1] we have \(\dim_{H}(\Delta_{x}(E_{1}))\geq\min(f(\alpha),1)\) for some \(x\in E_{1}\). * If \(k=1\), then for all \(x\in E_{1}\cap H\), we have \(\dim_{H}(\Delta_{x}(E_{1}))\geq\dim_{H}(E_{1}\cap H)\geq\alpha>f(\alpha)=\min( f(\alpha),1)\). Now assume \(\mu_{1}\) gives zero mass to every \(k\)-dimensional affine plane. Also, note that \(\mu_{1}\) and \(\mu_{2}\) satisfy decay conditions \[\mu_{i}(V)\lesssim C_{\mu_{i}}r^{\alpha-(k-1)},\quad i=1,2,\] for any \((r,k-1)\)-plate \(V\), and \(0<r\leq 1\). Then we can use Theorem 6.3 to find \(r_{0},\gamma>0\) and a set \(E_{2}^{\prime}\subset E_{2}\) with \(\mu_{2}(E_{2}^{\prime})\geq\frac{1}{2}\) such that for any \(x\in E_{2}^{\prime}\), there exists \(K(x)\subset E_{1}\) with \(\mu_{1}(K(x))\geq\frac{1}{2}\) such that for any \((r,k)\)-plate \(H\) containing \(x\) with \(r\leq r_{0}\), \[\mu_{1}(K(x)\cap H)\leq r^{\gamma}.\] Additionally, the set \(\{(x,y):x\in E_{2}^{\prime},y\in K(x)\}\) is compact, which in particular means that \(K(x)\) is compact for all \(x\in E_{2}^{\prime}\). Fix \(0<\tau<\min(f(\alpha),1)\). We apply Proposition 6.2 at all sufficiently small dyadic scales \(\delta\). By the Borel-Cantelli lemma, a.e. \(x\in E_{2}^{\prime}\) lies in only finitely many of the \(F_{\delta}\). For such \(x\), we have for all sufficiently small \(\delta>0\) and any \(J\) which is a union of \(\leq\delta^{-\tau}\) many intervals each of length \(\sim\delta\), \[\mu_{1}(\{y:|x-y| \in J\}\cap K(x))\] \[\leq\mu_{1}(\{y:|x-y|\in J\}\setminus W(x))+\mu_{1}(K(x)\cap W(x))\] \[\lesssim\delta^{\beta^{2}/2}+\delta^{\gamma\beta^{2}}.\] Hence, by Lemma 6.1 applied to the restricted measure \(\mu_{1}|_{K(x)}\), we see that \(\dim_{H}(\Delta_{x}(E_{1}))\geq\tau\) for a.e. \(x\in E_{2}^{\prime}\). Taking a sequence of \(\tau\)'s tending to \(\min(f(\alpha),1)\), we see that \(\dim_{H}(\Delta_{x}(E_{1}))\geq\min(f(\alpha),1)\) for a.e. \(x\in E_{2}^{\prime}\). ## 7. Other norms and connections with the Erdos distance problem Theorem 1.2 also extends to more general norms. For a symmetric convex body \(K\) in \(\mathbb{R}^{d}\), let \(\|\cdot\|_{K}\) be the norm with unit ball \(K\). **Theorem 7.1**.: _Let \(d\geq 3\). Let \(K\) be a symmetric convex body in \(\mathbb{R}^{d}\) whose boundary is \(C^{\infty}\) smooth and has strictly positive curvature. Let \(E\subset\mathbb{R}^{d}\) be a compact set. Suppose that \(\dim_{H}(E)>\frac{d}{2}+\frac{1}{4}-\frac{1}{8d+4}\). Then there is a point \(x\in E\) such that the pinned distance set \(\Delta_{K,x}(E)\) has positive Lebesgue measure, where_ \[\Delta_{K,x}(E):=\{\|x-y\|_{K}:\,y\in E\}.\] The argument for this generalization is similar to that in [10]. 
Indeed, the definition of bad tubes and heavy plates depends only on the geometry of \(\mathbb{R}^{d}\) and not the specific norm involved (note that the \(r\)-neighborhood of a set \(A\) is still defined via the Euclidean metric, not the new norm's metric). The change of norm only affects the conversion from geometry to analysis, as manifested in Lemma 4.6(a) and our use of Liu's \(L^{2}\)-identity [14, Theorem 1.9] in the proof of Proposition 2.2. These considerations were already carried out in [10] (see also [5]); we omit the details. As discussed in [10, 5], one can go from Falconer-type results to Erdos-type results. **Definition 7.2**.: _Let \(P\) be a set of \(N\) points contained in \([0,1]^{d}\). Define the measure_ \[d\mu_{P}^{s}(x)=N^{-1}\cdot N^{\frac{d}{s}}\cdot\sum_{p\in P}\chi_{B}(N^{\frac{ 1}{s}}(x-p))\,dx, \tag{7.3}\] _where \(\chi_{B}\) is the indicator function of the ball of radius \(1\) centered at the origin. We say that \(P\) is \(s\)-adaptable if there exists \(C\) independent of \(N\) such that_ \[I_{s}(\mu_{P}^{s})=\int\int\left|x-y\right|^{-s}d\mu_{P}^{s}(x)\,d\mu_{P}^{s}(y) \leq C. \tag{7.4}\] It is not difficult to check that if the points in the set \(P\) are separated by distance \(cN^{-1/s}\), then (7.4) is equivalent to the condition \[\frac{1}{N^{2}}\sum_{p\neq p^{\prime}}\left|p-p^{\prime}\right|^{-s}\leq C, \tag{7.5}\] where the exact value of \(C\) may be different from line to line. In dimension \(d\), it is also easy to check that if the distance between any two points of \(P\) is \(\gtrsim N^{-1/d}\), then (7.5) holds for any \(s\in[0,d)\), and hence \(P\) is \(s\)-adaptable. Using the same argument as in [5], from Theorem 7.1 we get the following Erdos-type result. **Proposition 7.3**.: _Let \(d\geq 3\). Let \(K\) be a symmetric convex body in \(\mathbb{R}^{d}\) whose boundary is \(C^{\infty}\) smooth and has strictly positive curvature. Let \(P\) be a set of \(N\) points contained in \([0,1]^{d}\)._ _(a). If the distance between any two points of \(P\) is \(\gtrsim N^{-1/d}\), then there exists \(x\in P\) such that_ \[\left|\Delta_{K,x}(P)\right|\gtrapprox N^{\left(\frac{d}{2}+\frac{1}{4}-\frac{1}{8d+4}\right)^{-1}}=N^{ \frac{2d+1}{d(d+1)}}.\] _(b). More generally, if \(P\) is \(s_{n}\)-adaptable for a decreasing sequence \((s_{n})_{n=1}^{\infty}\) converging to \(\frac{d}{2}+\frac{1}{4}-\frac{1}{8d+4}\), then there exists \(x\in P\) such that_ \[\left|\Delta_{K,x}(P)\right|\gtrapprox N^{\frac{2d+1}{d(d+1)}}.\]
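For completeness, we record the elementary algebra behind the two equivalent forms of the exponent in Proposition 7.3: \[\frac{d}{2}+\frac{1}{4}-\frac{1}{8d+4}=\frac{2d(2d+1)+(2d+1)-1}{4(2d+1)}=\frac{4d^{2}+4d}{4(2d+1)}=\frac{d(d+1)}{2d+1},\] so that \(N^{\left(\frac{d}{2}+\frac{1}{4}-\frac{1}{8d+4}\right)^{-1}}=N^{\frac{2d+1}{d(d+1)}}\), i.e. the Erdos-type exponent is exactly the reciprocal of the dimensional threshold appearing in Theorem 7.1.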
2309.04117
XENONnT and LUX-ZEPLIN constraints on DSNB-boosted dark matter
We consider a scenario in which dark matter particles are accelerated to semi-relativistic velocities through their scattering with the Diffuse Supernova Neutrino Background. Such a subdominant, but more energetic dark matter component can be then detected via its scattering on the electrons and nucleons inside direct detection experiments. This opens up the possibility to probe the sub-GeV mass range, a region of parameter space that is usually not accessible at such facilities. We analyze current data from the XENONnT and LUX-ZEPLIN experiments and we obtain novel constraints on the scattering cross sections of sub-GeV boosted dark matter with both nucleons and electrons. We also highlight the importance of carefully taking into account Earth's attenuation effects as well as the finite nuclear size into the analysis. By comparing our results to other existing constraints, we show that these effects lead to improved and more robust constraints.
Valentina De Romeri, Anirban Majumdar, Dimitrios K. Papoulias, Rahul Srivastava
2023-09-08T04:39:12Z
http://arxiv.org/abs/2309.04117v3
# XENONnT and LUX-ZEPLIN constraints on DSNB-boosted dark matter ###### Abstract We consider a scenario in which dark matter particles are accelerated to semi-relativistic velocities through their scattering with the Diffuse Supernova Neutrino Background. Such a subdominant, but more energetic dark matter component can be then detected via its scattering on the electrons and nucleons inside direct detection experiments. This opens up the possibility to probe the sub-GeV mass range, a region of parameter space that is usually not accessible at such facilities. We analyze current data from the XENONnT and LUX-ZEPLIN experiments and we obtain novel constraints on the scattering cross sections of sub-GeV boosted dark matter with both nucleons and electrons. We also highlight the importance of carefully taking into account Earth's attenuation effects as well as the finite nuclear size into the analysis. By comparing our results to other existing constraints, we show that these effects lead to improved and more robust constraints. DSNB, neutrinos, boosted dark matter, direct detection ## I Introduction It is estimated that 85% of the matter content of the Universe is in the form of a hypothetical kind of matter, dubbed dark matter (DM) [1]. One of the biggest mysteries in contemporary physics and astronomy is to understand its microscopic nature. However, since DM does not interact with photons and interacts very "weakly" with ordinary matter, it proves challenging to detect. On the other hand, DM gravitational effects on visible matter allow us to infer its existence despite its elusiveness. One of the most compelling solutions to the DM puzzle assumes it to be in the form of some unknown particle [2], thus calling for an extension of the Standard Model (SM). Several strategies, including direct and indirect detection experiments and collider searches, have been developed to try to detect it [3; 4]. Although a conclusive finding of DM has not been achieved yet, these searches have imposed very tight constraints on its potential properties. As part of the continuous effort to understand this enigmatic component, new experiments and observations are being carried out. The possibility that DM has been produced thermally in the early Universe and that its abundance is determined by thermal freeze-out has motivated numerous large direct detection (DD) experiments, which aim at observing the scattering of a DM particle off a target in a deep underground detector. These experiments have seen steady, decades-long progress, which has brought them into the multi-ton era [5]. The current most sensitive constraints in the high-mass regime include those set by liquid xenon (LXe) experiments like LUX-ZEPLIN (LZ) [6; 7], XENONnT [8; 9], XENON1T [10], PandaX-II [11] and LUX [12], together with measurements on liquid argon (LAr) detectors like DEAP-3600 [13] and DarkSide-50 [14] and the solid-state cryogenic detector of SuperCDMS [15]. We are interested in the results recently released by two DD experiments, XENONnT [8] and LZ [7]. Both experiments use state-of-the-art LXe detectors that aim at observing low-energy electron and nuclear recoils induced by DM scattering. Being one of the most sensitive DM DD experiments at present, XENONnT [16; 17], installed at the Gran Sasso National Laboratories in Italy, is the upgrade phase of XENON1T [10]. 
Thanks to its larger active target mass, superior photon detection mechanism, and extremely low background, XENONnT is an order of magnitude more sensitive to weakly-interacting DM particles than its predecessor. The recently released XENONnT data correspond to a total exposure of 1.16 tonne\(\times\)years [8]. The LZ experiment [18], located at the Sanford Underground Research Facility in South Dakota, is a detector centered on a dual-phase time projection chamber, also filled with LXe. The recently available LZ data correspond to an exposure of 5.5 tonne\(\times\)60 days [7]. The LZ collaboration has reported results from a blind search for DM particles and established the current strongest constraint for masses above 9 GeV, testing a cross section as small as \(6\times 10^{-48}\) cm\({}^{2}\) at a DM mass of 30 GeV. Both XENONnT and LZ, like most other DD experiments, reach their best sensitivity for electroweak-scale DM with masses around 10-100 GeV. Below the GeV scale, their sensitivity drops dramatically, as the electron and nuclear recoil energy becomes smaller and eventually falls below the detector threshold. Normally, recoil events in the LZ experiment cannot be observed for non-relativistic sub-GeV DM traveling at velocities \(v\sim 10^{-3}\). However, an energetic sub-GeV DM particle may generate a substantial signal. One possibility that has been put forward to explore sub-GeV DM is that of boosted DM (BDM). Such a BDM would contribute as a subdominant component of the total DM flux, but would nonetheless enhance the mass reach of DD experiments, allowing them to explore the sub-GeV range. This idea was first proposed for DM boosted through scattering with energetic galactic cosmic rays [19; 20] and has been extensively discussed in the literature; see for instance [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. More recently, the possibility that DM is boosted through its scattering with neutrinos has also been envisaged, either considering cosmic neutrinos [36], solar neutrinos [37], neutrinos from primordial black holes evaporation [38; 39; 40; 41], supernova neutrinos [42; 43] or the diffuse supernova neutrino background (DSNB) [44; 45]. Other possibilities leading to BDM include blazar-boosted DM [46], boosted DM from phantom dark energy [47], models with semi-annihilating DM [48; 49] or models with a multi-component DM sector [50; 51]. In this work, we investigate the possibility that the DM in the Milky Way halo is boosted to semi-relativistic velocities, via its scattering on the DSNB [52; 53]. The DSNB is a cumulative and isotropic flux of MeV neutrinos of all flavors produced from core-collapse supernovae explosions along the whole history of the Universe. While not yet observed, the DSNB is an irreducible background, expected to be within the reach of near-future experiments. Even though DSNB neutrinos are less energetic than cosmic rays, it seems reasonable to consider possible interactions of the local DM with this isotropic neutrino background. By employing the latest XENONnT and LZ data releases [7; 8], we derive stringent constraints on both DM-electron and DM-nucleon scattering cross sections in the sub-GeV range, thus providing complementary results to the standard analyses offered by the two collaborations [7; 9]. We pay special attention to the Earth's attenuation effects, which, as we will show, play an important role in the region of interest of the parameter space. 
Additionally, we also take into account nuclear effects which further improve the sensitivity and robustness of our analysis. DSNB-boosted DM had previously been considered in Ref. [44] as a possible explanation of an excess of electron recoil events in the low-energy region observed by XENON1T [54], which has since disappeared. Ref. [45] also set limits on DSNB-boosted DM scattering off electrons using XENON1T and Super-Kamiokande data. Here we improve upon these previous results by presenting for the first time constraints on DSNB-boosted DM, from the most recent XENONnT and LZ data, for both nuclear and electron scattering. The remainder of this paper is organized as follows. Section II provides a discussion on theoretical predictions for the DSNB flux. Sec. III explains how sub-GeV non-relativistic DM particles in the Milky Way halo can attain semi-relativistic speeds due to interactions with DSNB neutrinos, and highlights the importance of Earth's attenuation effects as well as the nuclear form factors. In Sec. IV, we delve into the simulation of the DSNB-boosted DM-induced signal predicted for the XENONnT and LZ detectors. Our results are presented in Sec. V, while we finally provide our concluding remarks in Sec. VI. ## II Theoretical estimate of the DSNB flux Ever since the first star-formation events, the Universe has been filled with an isotropic flux of MeV-energy neutrinos and antineutrinos of all flavors, produced by the core-collapse explosions of massive stars throughout the history of the Universe. The theoretical prediction for the differential DSNB flux, per neutrino flavor \(\alpha\), can be estimated as [52; 55; 56; 57] \[\frac{d\Phi_{\nu_{\alpha}}^{\rm DSNB}}{dE_{\nu}}=\int_{0}^{\rm z_{\rm max}}d{ \rm z}\frac{R_{\rm CCSN}({\rm z})}{H({\rm z})}\,\mathscr{F}_{\nu_{\alpha}}(E_ {\nu_{\alpha}}^{s})|_{E_{\nu}^{s}=E_{\nu}(1+{\rm z})}\, \tag{1}\] \(E_{\nu}^{s}\) being the neutrino energy at the source, which is related to the observed energy by \(E_{\nu}^{s}=E_{\nu}(1+{\rm z})\) due to the cosmological redshift. The integral is performed over the redshift parameter, z, and we take the maximum redshift at which star-formation occurs as \({\rm z_{\rm max}}\sim 6\). Moreover, \(H({\rm z})\) is the Hubble function determined from the Friedmann equation as \[H({\rm z})=H_{0}\sqrt{\Omega_{M}(1+{\rm z})^{3}+\Omega_{\Lambda}(1+{\rm z})^{ 3(1+w)}+(1-\Omega_{M}-\Omega_{\Lambda})(1+{\rm z})^{2}}\,, \tag{2}\] where \(H_{0}=67.45\ {\rm km\ s^{-1}Mpc^{-1}}\) is the Hubble constant [1; 58], \(\Omega_{M}=0.315\pm 0.007\) and \(\Omega_{\Lambda}=0.685\pm 0.007\) denote the matter and vacuum contributions to the present-Universe energy density, while the best current measurement for the equation-of-state parameter for the dark energy is \(w=-1.028\pm 0.031\)[1]. The DSNB flux further depends upon the rate of Core-Collapse Supernovae (CCSN), which reads [56] \[R_{\rm CCSN}({\rm z})=\dot{\rho}_{*}({\rm z})\frac{\int_{8}^{50}\psi(M)dM}{ \int_{0.1}^{100}M\psi(M)dM}\,, \tag{3}\] where \(\psi(M)\) is the initial mass function (IMF) of stars, indicating the star density within a certain mass range. For our analysis we have assumed the IMF to be a power-law distribution, \(\psi(M)\propto M^{-2.35}\) according to [59]. The redshift evolution of the co-moving cosmic star-formation rate, \(\dot{\rho}_{*}({\rm z})\), can be modelled as [55; 60] \[\dot{\rho}_{*}({\rm z})=\dot{\rho}_{0}\left[(1+{\rm z})^{-10a}+\left(\frac{1+{ \rm z}}{B}\right)^{-10b}+\left(\frac{1+{\rm z}}{C}\right)^{-10c}\right]^{-0.1}\,, \tag{4}\] where the overall normalization factor is \(\dot{\rho}_{0}=0.0178^{+0.0035}_{-0.0036}\ {\rm M_{\odot}yr^{-1}Mpc^{-3}}\)[56]. The constants \(B\) and \(C\) are expressed as [55; 60]: \[B=(1+{\rm z}_{1})^{1-\frac{a}{b}}\,, \tag{5a}\] \[C=(1+{\rm z}_{1})^{\frac{b-a}{c}}(1+{\rm z}_{2})^{1-\frac{b}{c}}\,, \tag{5b}\] where \(\mathrm{z}_{1}=1\) and \(\mathrm{z}_{2}=4\) represent the redshift breaks, while \(a\), \(b\) and \(c\) denote the logarithmic slopes for the low, intermediate, and high redshift ranges. An analytical fit to data from different astronomical surveys [55; 60] gives \[\left\{a,b,c\right\}=\left\{3.4\pm 0.2,-0.3\pm 0.2,-3.5\pm 1\right\}.\] Finally, a non-degenerate Fermi-Dirac distribution is used to parametrize the flavor-dependent neutrino spectra released by a CCSN event [52; 56] \[\mathscr{F}_{\nu_{\alpha}}(E_{\nu_{\alpha}})=\frac{E_{\nu}^{\mathrm{tot}}}{6} \frac{120}{7\pi^{4}}\frac{E_{\nu_{\alpha}}^{2}}{T_{\nu_{\alpha}}^{4}}\frac{1} {1+e^{E_{\nu_{\alpha}}/T_{\nu_{\alpha}}}}\,, \tag{6}\] where \(E_{\nu}^{\mathrm{tot}}=3\times 10^{53}\mathrm{erg}\)1 represents the total amount of energy released as neutrinos [52], and \(T_{\nu_{\alpha}}\) denotes the temperature of each flavor of neutrinos. In our present study, we consider \(T_{\nu_{e}}=6.6\ \mathrm{MeV},\ T_{\bar{\nu}_{e}}=7\ \mathrm{MeV},\ T_{\nu_{x}}=10\ \mathrm{MeV}\) (\(\nu_{x}\) denotes either \(\nu_{\mu}\) or \(\nu_{\tau}\) or their antiparticles), satisfying the upper limit extracted from Super-Kamiokande [61]. Footnote 1: It is assumed that the total released energy, \(E_{\nu}^{\mathrm{tot}}\), is equally distributed among the 6 neutrino flavors, since during CCSN the neutrino emission mostly occurs in the cooling phase, which persists for about 10 s after the bounce. We show in Fig. 1 the predicted DSNB fluxes, for the different neutrino flavors, as a function of the neutrino energy. In the following calculations we will assume an uncertainty of 40% in the normalization of the DSNB spectra, estimated from uncertainties in the cosmic star-formation rate [55]. This uncertainty on the DSNB fluxes is illustrated by the shaded bands in Fig. 1. Figure 1: Predicted DSNB fluxes for various neutrino flavors (\(\nu_{x}\) denotes either \(\nu_{\mu}\) or \(\nu_{\tau}\) or their antiparticles), as a function of the neutrino energy, estimated from Eq. (1). The bands illustrate the 40% normalization uncertainty of the DSNB spectra [55].
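For concreteness, the flux estimate of Eqs. (1)–(6) is straightforward to evaluate numerically. The following minimal Python sketch is our own illustrative implementation (not code from the analysis): the parameter values are those quoted above, and the overall unit conversions of the flux normalization are omitted for brevity.

```python
import numpy as np
from scipy.integrate import quad

# Inputs quoted in the text (units are mixed on purpose; the overall
# flux normalization / unit conversion is omitted in this sketch)
H0 = 67.45                               # km/s/Mpc
OMEGA_M, OMEGA_L, W = 0.315, 0.685, -1.028
RHO0 = 0.0178                            # M_sun / yr / Mpc^3
A_SL, B_SL, C_SL = 3.4, -0.3, -3.5       # logarithmic slopes a, b, c
Z1, Z2 = 1.0, 4.0                        # redshift breaks
B = (1 + Z1) ** (1 - A_SL / B_SL)                                        # Eq. (5a)
C = (1 + Z1) ** ((B_SL - A_SL) / C_SL) * (1 + Z2) ** (1 - B_SL / C_SL)   # Eq. (5b)

def hubble(z):
    """Hubble function H(z), Eq. (2)."""
    return H0 * np.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L * (1 + z) ** (3 * (1 + W))
                        + (1 - OMEGA_M - OMEGA_L) * (1 + z) ** 2)

def sfr(z):
    """Co-moving cosmic star-formation rate, Eq. (4)."""
    return RHO0 * ((1 + z) ** (-10 * A_SL) + ((1 + z) / B) ** (-10 * B_SL)
                   + ((1 + z) / C) ** (-10 * C_SL)) ** (-0.1)

# IMF mass-integral ratio of Eq. (3) for the power law psi(M) ~ M^(-2.35)
imf_ratio = (quad(lambda M: M ** (-2.35), 8, 50)[0]
             / quad(lambda M: M ** (-1.35), 0.1, 100)[0])

def ccsn_rate(z):
    """Core-collapse supernova rate, Eq. (3)."""
    return sfr(z) * imf_ratio

def spectrum(E, T, E_tot=3e53):
    """Fermi-Dirac CCSN spectrum per flavor, Eq. (6); E and T in MeV."""
    return (E_tot / 6) * (120 / (7 * np.pi ** 4)) * E ** 2 / T ** 4 / (1 + np.exp(E / T))

def dsnb_flux(E_nu, T_alpha, zmax=6.0):
    """Differential DSNB flux of Eq. (1): the spectrum is evaluated at the
    redshifted source energy E_nu * (1 + z)."""
    integrand = lambda z: ccsn_rate(z) / hubble(z) * spectrum(E_nu * (1 + z), T_alpha)
    return quad(integrand, 0.0, zmax)[0]
```

With the temperatures quoted above, a flavor-summed flux shape is then, e.g., `dsnb_flux(E, 6.6) + dsnb_flux(E, 7.0) + 4 * dsnb_flux(E, 10.0)`.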
The redshift evolution of the co-moving cosmic star-formation rate, \(\dot{\rho}_{*}({\rm z})\), can be modelled as [55; 60] \[\dot{\rho}_{*}({\rm z})=\dot{\rho}_{0}\left[(1+{\rm z})^{-10a}+\left(\frac{1+{ \rm z}}{B}\right)^{-10b}+\left(\frac{1+{\rm z}}{C}\right)^{-10c}\right]^{-0.1}\,, \tag{4}\] where the overall normalization factor is \(\dot{\rho}_{0}=0.0178^{+0.0035}_{-0.0036}\ {\rm M_{\odot}yr^{-1}Mpc^{-3}}\)[56]. The constants \(B\), and \(C\) are expressed as [55; 60]: \[B=(1+{\rm z}_{1})^{1-\frac{a}{b}}\,, \tag{5a}\] \[C=(1+{\rm z}_{1})^{\frac{b-a}{c}}(1+{\rm z}_{2})^{1-\frac{b}{c}}\,, \tag{5b}\] where \(\mathrm{z}_{1}=1\), and \(\mathrm{z}_{2}=4\) represent the redshift breaks, while \(a\), \(b\) and \(c\) denote the logarithmic slopes for the low, intermediate, and high redshift ranges. An analytical fit to data from different astronomical surveys [55; 60] gives \[\left\{a,b,c\right\}=\left\{3.4\pm 0.2,-0.3\pm 0.2,-3.5\pm 1\right\}.\] Finally, a non-degenerate Fermi-Dirac distribution is used to parametrize the flavor-dependent neutrino spectra released by a CCSN event [52; 56] \[\mathscr{F}_{\nu_{\alpha}}(E_{\nu_{\alpha}})=\frac{E_{\nu}^{\mathrm{tot}}}{6} \frac{120}{7\pi^{4}}\frac{E_{\nu_{\alpha}}^{2}}{T_{\nu_{\alpha}}^{4}}\frac{1} {1+e^{E_{\nu_{\alpha}}/T_{\nu_{\alpha}}}}\,, \tag{6}\] where \(E_{\nu}^{\mathrm{tot}}=3\times 10^{53}\mathrm{erg}\)1 represents the total amount of energy released as neutrinos [52], and \(T_{\nu_{\alpha}}\) denotes the temperature of each flavor of neutrinos. In our present study, we consider \(T_{\nu_{e}}=6.6\ \mathrm{MeV},\ T_{\bar{\nu}_{e}}=7\ \mathrm{MeV},\ T_{\nu_{x}}=10\ \mathrm{MeV}\) (\(\nu_{x}\) denotes either \(\nu_{\mu}\) or \(\nu_{\tau}\) or their antiparticles), satisfying the upper limit extracted from Super-Kamiokande [61]. Footnote 1: It is assumed that the total released energy, \(E_{\nu}^{\mathrm{tot}}\), is equally distributed among the 6 neutrinos flavors, since during CCSN the neutrino emission mostly occurs in the cooling phase, which persists for about 10 s after the bounce. We show in Fig. 1 the predicted DSNB fluxes, for the different neutrino flavors, as a function of the neutrino energy. In the following calculations we will assume an uncertainty of 40% in the normalization of the DSNB spectra, estimated from uncertainties in the cosmic star-formation rate [55]. This uncertainty on the DSNB fluxes is illustrated by the shaded bands in Fig. 1. ## III The DSNB-Boosted Dark Matter Flux In this section, we discuss how the DM particles in the Milky Way halo get boosted to considerably greater velocities due to their scattering with DSNB neutrinos. We remain agnostic of the specific DM model and for sake of uniformity in comparing the final results we assume the DM to be made of one particle species \(\chi\) that scatters with neutrinos and electrons (\(\sigma_{\nu\chi}=\sigma_{\chi e}\)) or with Figure 1: Predicted DSNB fluxes for various neutrino flavors (\(\nu_{x}\) denotes either \(\nu_{\mu}\) or \(\nu_{\tau}\) or their antiparticles), as a function of the neutrino energy, estimated from Eq. (1). The bands illustrate a 40% error in the normalization uncertainty of the DSNB spectra [55]. neutrinos and nucleons (\(\sigma_{\nu\chi}=\sigma_{\chi n}\)) with the same cross section, as the benchmark for our analysis. These assumptions can be naturally realized in flavor-dependent gauged \(U(1)\) extensions such as \(U(1)_{B_{i}-3L_{i}}\), \(i\) being generation index or \(U(1)_{B-3L_{e}}\) models [62; 63; 64]. 
Furthermore, scenarios deviating from this assumption can be easily accounted for by using the product \(\sqrt{\sigma_{\nu\chi}\sigma_{\chi e}}\) or \(\sqrt{\sigma_{\nu\chi}\sigma_{\chi n}}\) as applicable; see Sec. V for further discussion. Before entering into the details, it is worth stressing that the initial DM galactic escape velocity is irrelevant [19], as the scattering between \(\chi\) and DSNB neutrinos accelerates the DM to significantly higher velocities. The DSNB-boosted DM differential flux, induced by its scattering with the DSNB given in Eq. (1), can be estimated as [44] \[\frac{d\Phi_{\chi}}{dT_{\chi}}=D_{\rm halo}\int_{E_{\nu}^{\rm min}}^{E_{\nu}^{ \rm max}}dE_{\nu}\frac{1}{m_{\chi}}\frac{d\sigma_{\nu\chi}}{dT_{\chi}}\frac{d \Phi_{\nu}^{\rm DSNB}}{dE_{\nu}}\,, \tag{7}\] where \(T_{\chi}\) is the energy transferred to \(\chi\) and \(\frac{d\Phi_{\nu}^{\rm DSNB}}{dE_{\nu}}\) is the sum over all neutrino flavors of the DSNB flux given in Eq. (1). The neutrino-DM scattering cross section can be cast in the form \[\frac{d\sigma_{\nu\chi}}{dT_{\chi}}=\frac{\sigma_{\nu\chi}}{T_{\chi}^{\rm max} (E_{\nu})}\Theta\left[T_{\chi}^{\rm max}(E_{\nu})-T_{\chi}\right]\,, \tag{8}\] where \(m_{\chi}\) denotes the DM mass, while \(\sigma_{\nu\chi}\) controls the strength of the interaction. The maximum energy transfer to which the DM can be boosted for a given neutrino energy \(E_{\nu}\) is dictated by the kinematics of the process and is incorporated in the Heaviside step function 2: \(T_{\chi}^{\rm max}(E_{\nu})=E_{\nu}^{2}\Big{/}\Big{(}E_{\nu}+\frac{m_{\chi}}{ 2}\Big{)}\). The maximum neutrino energy in our numerical calculations is taken to be \(E_{\nu}^{\rm max}=100\) MeV, while the lower integration limit in Eq. (7) can be obtained by inverting the expression for \(T_{\chi}^{\rm max}\), which gives the minimum neutrino energy required to boost the DM, i.e. \(E_{\nu}^{\rm min}=\left[T_{\chi}+\sqrt{T_{\chi}^{2}+2m_{\chi}T_{\chi}}\ \right] \Big{/}2\). Footnote 2: Throughout our study, we have taken neutrinos as massless since corrections due to non-zero neutrino masses are very small. The \(D\)-factor (\(D_{\rm halo}\)) in Eq. (7) encodes the DM density distribution within our galactic halo, and it is expressed as the integral of the density profile along the line of sight (l.o.s.) \(\ell\) and over the solid angle \(\Omega\): \[D_{\rm halo}=\int_{\Delta\Omega}\frac{d\Omega}{4\pi}\int_{0}^{\ell_{\rm max}} \rho_{\rm MW}[r(\ell,\psi)]d\ell. \tag{9}\] Here, we assume a Navarro-Frenk-White (NFW) profile3, defined as [66] Footnote 3: The simulated events are found to be largely independent of the DM density profile. We have checked that using a cored isothermal DM density profile [65], \(D_{\rm halo}\) changes by less than 1%. \[\rho_{\rm MW}(r)=\rho_{\odot}\left[\frac{r}{r_{\odot}}\right]^{-1}\left[\frac {1+\frac{r_{\odot}}{r_{s}}}{1+\frac{r}{r_{s}}}\right]^{2}\,, \tag{10}\] where the scale radius is \(r_{s}=20\) kpc and the local DM density is \(\rho_{\odot}=0.4\ {\rm GeV\ cm^{-3}}\). The galactocentric distance reads \[r(l,\psi)=\sqrt{r_{\odot}^{2}-2lr_{\odot}\cos\psi+l^{2}}\,, \tag{11}\] with \(r_{\odot}=8.5\) kpc being the distance between the Earth and the galactic centre and \(\psi\) the angle of view defining the l.o.s. The upper limit of the l.o.s. integral is given by \(\ell_{\rm max}=\sqrt{R^{2}-r_{\odot}^{2}\sin^{2}\psi}+r_{\odot}\cos\psi\), with the galactic halo virial radius taken to be \(R=200\) kpc. Given these values, we obtain \(D_{\rm halo}=2.22\times 10^{25}\) MeV cm\({}^{-2}\) over the whole galactic halo. 
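To illustrate how Eqs. (7)–(9) combine in practice, here is a minimal sketch (again our own naming, not code from the analysis); it takes the flavor-summed DSNB flux, e.g. the `dsnb_flux` sketch above, as input:

```python
import numpy as np
from scipy.integrate import quad

D_HALO = 2.22e25   # MeV cm^-2, the value quoted above for the full halo

def boosted_dm_flux(T_chi, m_chi, sigma_nu_chi, dsnb_flux_total, E_nu_max=100.0):
    """Differential DSNB-boosted DM flux, Eqs. (7)-(8).
    Energies in MeV, sigma_nu_chi in cm^2; dsnb_flux_total(E_nu) is the
    flavor-summed DSNB flux of Eq. (1)."""
    # Smallest neutrino energy able to transfer T_chi (inverse of T_chi^max)
    E_nu_min = 0.5 * (T_chi + np.sqrt(T_chi ** 2 + 2 * m_chi * T_chi))

    def integrand(E_nu):
        T_chi_max = E_nu ** 2 / (E_nu + m_chi / 2)   # kinematic endpoint
        # The Heaviside step of Eq. (8) holds automatically for E_nu >= E_nu_min
        return sigma_nu_chi / T_chi_max * dsnb_flux_total(E_nu) / m_chi

    return D_HALO * quad(integrand, E_nu_min, E_nu_max)[0]
```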
Given these values, we hence obtain \(D_{\rm halo}=2.22\times 10^{25}\) MeV cm\({}^{-2}\) over the whole galactic halo.

### Attenuation effects

In this subsection, we will focus our attention on the modifications expected to occur in the energy profile of the DSNB-boosted DM flux during its propagation through the atmosphere and the Earth [67; 68; 69; 70; 19; 26]. For sufficiently large interaction cross sections, \(d\sigma_{\chi i}/dT_{i}\), the DM particles may lose a significant amount of energy due to their scattering on nuclei (\(i=\mathcal{N}\)) or electrons (\(i=e\)), resulting in a sizeable attenuation of the DM flux before reaching the detector. This effect can be accounted for via the energy loss equation [19; 26]

\[\frac{dT_{\chi}^{z}}{dz}=-n_{i}\int_{0}^{T_{i}^{\text{max}}(T_{\chi}^{z})}\frac{d\sigma_{\chi i}}{dT_{i}}T_{i}\,dT_{i}\,, \tag{12}\]

where \(T_{i}\) denotes the energy lost by the boosted DM particle in a collision and \(n_{i}\) is the number density of the nucleus species or of electrons. Here, \(z\) denotes the distance travelled from the location of the scattering point (inside the atmosphere or the Earth) to the detector. In the most general case, Eq. (12) relates the initial energy at the top of the atmosphere (\(z=0\)), \(T_{\chi}^{0}\), with the average kinetic energy, \(T_{\chi}^{z}\), after travelling a distance \(z\) before reaching the underground detector. In our analysis, we neglect the impact of atmospheric attenuation as it is expected to be negligible compared to Earth's attenuation [26], for the cross sections under consideration. Hence, we take \(z=0\) at the Earth's surface. Then, the distance \(z\) can be expressed as (see Appendix A for more details)

\[z=-(R_{E}-h_{d})\cos\theta_{z}+\sqrt{R_{E}^{2}-(R_{E}-h_{d})^{2}\sin^{2}\theta_{z}}\,, \tag{13}\]

where \(R_{E}\) stands for the radius of the Earth, \(\theta_{z}\) refers to the detector's zenith angle and \(h_{d}\) indicates the depth of the detector's location from the Earth's surface, at the point where the zenith angle is zero. Moreover, for the sake of simplicity, we have adopted a mean electron density \(n_{e}\) of Earth's most abundant elements between the surface and depth \(z\), \(n_{e}=8\times 10^{23}\) cm\({}^{-3}\) [20]. In the case of attenuation due to \(\chi\) scattering on nuclei, we have determined the nuclear number density at depth \(z\) through a weighted average of the most abundant elements found in the Earth's crust, mantle, and core, yielding \(n_{\mathcal{N}}=3.44\times 10^{22}\) cm\({}^{-3}\) and \(A\approx 33.3\) (for details see Appendix B and Refs. [71; 72; 73; 67]).

The differential cross section for DM-electron or DM-nucleus scattering takes the form

\[\frac{d\sigma_{\chi i}}{dT_{i}}=\frac{\sigma_{\chi i}}{T_{i}^{\text{max}}(T_{\chi})}\,,\quad i=\mathcal{N}\text{ or }e\,, \tag{14}\]

where the maximum recoil energy that can be lost by \(\chi\) during the attenuation process is obtained from the kinematics of the process and reads

\[T_{i}^{\text{max}}(T_{\chi})=\frac{2m_{i}T_{\chi}(T_{\chi}+2m_{\chi})}{(m_{\chi}+m_{i})^{2}+2m_{i}T_{\chi}}\,, \tag{15}\]

with \(m_{i}\) indicating the nuclear (\(i=\mathcal{N}\)) or electron (\(i=e\)) mass. The solution of Eq. (12) gives the DM energy as a function of the distance and the initial DM energy, i.e. \(T_{\chi}^{z}\equiv T_{\chi}^{z}(T_{\chi}^{0},z)\), with \(z\) depending on the zenith angle and the detector depth as indicated in Eq. (13).
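Note that for the constant differential cross section of Eq. (14), the collision integral in Eq. (12) evaluates to \(\sigma_{\chi i}\,T_{i}^{\max}(T_{\chi}^{z})/2\), so the energy-loss equation becomes a simple ODE in \(z\). A minimal sketch for the \(\chi-\mathcal{N}\) case (our own illustration, assuming \(F=1\); the form-factor-suppressed case discussed below requires the full momentum dependence):

```python
# Minimal sketch: integrates the energy-loss equation, Eq. (12), for DM-nucleus
# scattering with dsigma/dT = sigma / T_max, i.e. the F = 1 limit of Eq. (20).
import numpy as np
from scipy.integrate import solve_ivp

m_chi = 300.0                     # DM mass [MeV]
m_n, A = 939.0, 33.3              # nucleon mass [MeV], mean mass number
m_N = A * 931.5                   # mean nucleus mass [MeV]
n_N = 3.44e22                     # nuclear number density [cm^-3]
sigma_chi_n = 1e-29               # DM-nucleon cross section [cm^2]

# Coherent SI enhancement of Eq. (20) at F = 1: (mu_N / mu_n)^2 A^2 sigma_chi_n.
mu = lambda m1, m2: m1 * m2 / (m1 + m2)
sigma_chi_N = (mu(m_chi, m_N) / mu(m_chi, m_n)) ** 2 * A ** 2 * sigma_chi_n

def T_max(T_chi, m_i):
    """Maximum energy lost in a single collision, Eq. (15)."""
    return 2 * m_i * T_chi * (T_chi + 2 * m_chi) / ((m_chi + m_i) ** 2 + 2 * m_i * T_chi)

def dTdz(z, T):
    """Mean energy-loss rate of Eq. (12): -n sigma T_max / 2 for constant dsigma/dT."""
    return [-n_N * sigma_chi_N * T_max(T[0], m_N) / 2]

sol = solve_ivp(dTdz, (0.0, 1.4e5), [50.0], rtol=1e-8)   # z = 1.4 km, T_chi^0 = 50 MeV
print(f"T_chi at 1.4 km depth: {sol.y[0, -1]:.3f} MeV")
```

With these inputs, a 50 MeV particle loses most of its energy within roughly the first kilometer, consistent with the \(\chi-\mathcal{N}\) behavior discussed in Appendix C.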
The resulting attenuated DM flux reaching the detector after averaging over angles (footnote 4), \(d\Phi_{\chi}^{z}/dT_{\chi}^{z}\), is given by the expression

Footnote 4: Here, we only consider the angle-averaged DM flux as most of the current DM direct detection experiments do not have directionality capabilities.

\[\frac{d\Phi_{\chi}}{dT_{\chi}^{z}}=\int d\Omega\left.\frac{d^{2}\Phi_{\chi}}{dT_{\chi}d\Omega}\right|_{T_{\chi}^{0}}\frac{dT_{\chi}^{0}}{dT_{\chi}^{z}}\,, \tag{16}\]

where \(\Omega\) is the solid angle.

#### iii.1.1 Scattering with electrons

For the case of DM-electron scattering we have \(\sigma_{\chi i}=\sigma_{\chi e}\) in Eq. (14). In this case Eq. (12) can be solved analytically. The solution for \(T_{\chi}^{z}\) at a given depth \(z\) can be expressed in terms of the DM energy at the surface, \(T_{\chi}^{0}\), as

\[T_{\chi}^{z}\approx\frac{T_{\chi}^{0}e^{-z/l_{E}}}{1+\frac{T_{\chi}^{0}}{2m_{\chi}}\left(1-e^{-z/l_{E}}\right)}\,, \tag{17}\]

where \(l_{E}\) represents the mean free path for energy loss, given by \(l_{E}^{-1}=n_{e}\sigma_{\chi e}\frac{2m_{e}m_{\chi}}{(m_{e}+m_{\chi})^{2}}\). By inverting Eq. (17) we obtain the expression for \(T_{\chi}^{0}\) as a function of \(T_{\chi}^{z}\) and \(z\), which reads

\[T_{\chi}^{0}\approx\frac{2m_{\chi}T_{\chi}^{z}e^{z/l_{E}}}{2m_{\chi}+T_{\chi}^{z}\left(1-e^{z/l_{E}}\right)}\,. \tag{18}\]

As a consequence, the attenuated DM flux given in Eq. (16) that eventually reaches the detector can be simplified as follows

\[\frac{d\Phi_{\chi}}{dT_{\chi}^{z}}\approx\int d\Omega\ \frac{d^{2}\Phi_{\chi}}{dT_{\chi}d\Omega}\bigg|_{T_{\chi}^{0}}\frac{4m_{\chi}^{2}e^{z/l_{E}}}{\left[2m_{\chi}+T_{\chi}^{z}\left(1-e^{z/l_{E}}\right)\right]^{2}}\,. \tag{19}\]

Before closing this discussion let us stress that, as discussed before, our analysis is done for the benchmark \(\sigma_{\nu\chi}=\sigma_{\chi e}\). Note that the bounds that we will eventually obtain in Sec. V will mainly depend on the product \(\sqrt{\sigma_{\nu\chi}\sigma_{\chi e}}\), which simplifies to \(\sigma_{\chi e}\) under the assumption \(\sigma_{\nu\chi}=\sigma_{\chi e}\), plus corrections due to the dependence of \(l_{E}\) (see Eq. 17) on \(\sigma_{\chi e}\). We will discuss this in more detail in Sec. V.

#### iii.1.2 Scattering with nuclei: Effect of the finite nuclear size

For the case of DM-nucleus scattering we take \(\sigma_{\chi i}=\sigma_{\chi\mathcal{N}}^{\text{SI}}\) in Eq. (14), where the spin-independent (SI) DM-nucleus elastic scattering cross section is expressed as [74; 75] (footnote 5)

Footnote 5: For simplicity we consider a spin-conserving scenario where DM-proton and DM-neutron effective couplings are equal (\(f_{p}/f_{n}\approx 1\)).

\[\sigma_{\chi\mathcal{N}}^{\text{SI}}(\mathfrak{q}^{2})=\frac{\mu_{\mathcal{N}}^{2}}{\mu_{n}^{2}}A^{2}\sigma_{\chi n}F^{2}(\mathfrak{q}^{2})\,, \tag{20}\]

where \(A\) denotes the atomic mass number of the target nuclei, \(\mathfrak{q}=\sqrt{2m_{\mathcal{N}}T_{\mathcal{N}}}\) stands for the three-momentum transfer, \(\sigma_{\chi n}\) is the DM-nucleon SI cross section and \(\mu_{\mathcal{N}}\) (\(\mu_{n}\)) represents the DM-nucleus (DM-nucleon) reduced mass. We again take the benchmark \(\sigma_{\chi\nu}=\sigma_{\chi n}\) for our analysis, but, as mentioned above, our bounds are mainly dependent on \(\sqrt{\sigma_{\nu\chi}\sigma_{\chi n}}\) (see Sec. V).
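Equation (20) involves the nuclear form factor \(F(\mathfrak{q}^{2})\), discussed next. For reference, the following sketch implements a Helm-type form factor in the common Lewin-Smith parameterization; the parameter values (\(s=0.9\) fm, \(a=0.52\) fm, \(c=1.23A^{1/3}-0.60\) fm) are conventional choices and may differ in detail from those of Ref. [76].

```python
# Minimal sketch: Helm nuclear form factor F(q), Lewin-Smith parameterization.
import numpy as np

def helm_form_factor(q_MeV, A):
    """Helm form factor for momentum transfer q [MeV] and mass number A."""
    hbarc = 197.327                           # MeV fm
    q = q_MeV / hbarc                         # convert to fm^-1
    s = 0.9                                   # surface thickness [fm]
    c = 1.23 * A ** (1 / 3) - 0.60            # effective radius parameter [fm]
    R1 = np.sqrt(c ** 2 + 7 / 3 * np.pi ** 2 * 0.52 ** 2 - 5 * s ** 2)
    x = q * R1
    j1 = (np.sin(x) - x * np.cos(x)) / x ** 2   # spherical Bessel function j1
    # F -> 1 in the long-wavelength limit x -> 0 (guarded here).
    return np.where(x > 1e-6, 3 * j1 / x, 1.0) * np.exp(-(q * s) ** 2 / 2)

# Example: suppression for a xenon target (A = 131) at T_N = 50 keV, with
# q = sqrt(2 m_N T_N) as below Eq. (20).
m_N = 131 * 931.5                             # MeV
q = np.sqrt(2 * m_N * 0.05)                   # MeV
print(helm_form_factor(q, 131) ** 2)          # F^2 < 1: finite-size suppression
```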
The SI cross section \(\sigma_{\chi\mathcal{N}}^{\text{SI}}\) is a momentum-dependent quantity due to the presence of the nuclear form factor, \(F(\mathfrak{q}^{2})\), which accounts for the finite nuclear size and has been parameterized by a Helm-type effective form factor [76]. Notice that the energy dependence of the SI cross section prevents us from obtaining an analytical solution for Eq. (12), unlike the case of DM-electron scattering discussed above, hence we need to solve Eq. (12) numerically.

The fact that the DM travels through the Earth to reach the detectors leads to attenuation of the flux. In Fig. 2 we present the angle-averaged DSNB-boosted DM flux for the unattenuated and attenuated cases. The results are plotted for the benchmark parameters \(m_{\chi}=300\,\mathrm{MeV}\) and \(\sigma_{\chi\nu}=10^{-29}\,\mathrm{cm}^{2}\), assuming a depth of \(h_{d}=1.4\) km which corresponds to the underground location of XENONnT. Note that the results remain essentially unchanged for the case of LZ (\(h_{d}=1.47\) km). The solid blue line in Fig. 2 corresponds to the unattenuated flux. The dashed lines show the attenuated fluxes corresponding to attenuation due to DM-nucleus (orange, dashed) and DM-electron (green, dashed) scattering as a function of the DM energy. For the case of \(\chi-\mathcal{N}\) scattering, the effect of the finite nuclear size is illustrated by comparing the resulting fluxes for two cases: i) by incorporating the Helm form factor in the calculation (orange, dashed) and ii) by assuming \(F=1\), i.e. completely ignoring nuclear physics (orange, dash-dotted). As can be seen, Earth's attenuation effects shift the peak of the DSNB-boosted DM flux towards lower energies and reduce it by up to a factor of 2 (3.5) for \(\chi-e\) (\(\chi-\mathcal{N}\)) scattering. Furthermore, the high-energy endpoint of the differential DSNB-boosted DM flux spectra exhibits a faster decline when the finite nuclear size is neglected, as opposed to the case where the finite nuclear size effects are taken into account. The Earth's attenuation and nuclear size effects play a crucial role in the results presented in the remainder of the work (see also Appendix C).

Before closing this discussion let us stress that in the present analysis, the calculated attenuated flux does not take into account additional effects related to the direction of DSNB-boosted DM particles after each scattering process, nor the possibility of multiple scatterings. These effects only become relevant when the DM mass and energy are significantly lower than the mass of the nucleus [26; 67] and are important for probing diurnal effects [77].

Figure 2: The angle-averaged DSNB-boosted DM flux distribution as a function of the DM energy for \(m_{\chi}=300\,\mathrm{MeV}\), \(\sigma_{\chi\nu}=10^{-29}\,\mathrm{cm}^{2}\) and a detector depth of \(h_{d}=1.4\) km for the benchmarks \(\sigma_{\chi\nu}=\sigma_{\chi e}\) and \(\sigma_{\chi\nu}=\sigma_{\chi n}\). The unattenuated flux is shown by the solid blue line. The attenuated DM flux for the case of DM scattering with electrons (nuclei, including a Helm-type form factor) inside the Earth is displayed with a green (orange) dashed line. The effect of the nuclear form factor is illustrated by comparing the results assuming a Helm-type form factor (dashed line) and \(F=1\) (dash-dotted line). See main text for more details.
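For the electron channel in Fig. 2 (green dashed), the attenuation instead follows the closed-form solution of Eqs. (17) and (18). A minimal numerical sketch with a round-trip consistency check (illustrative values only):

```python
# Minimal sketch: closed-form DM-electron attenuation, Eqs. (17)-(18).
import numpy as np

m_e, m_chi = 0.511, 300.0            # masses [MeV]
n_e, sigma_e = 8e23, 1e-29           # electron density [cm^-3], cross section [cm^2]

# Mean free path for energy loss (definition below Eq. (17)).
l_E = 1.0 / (n_e * sigma_e * 2 * m_e * m_chi / (m_e + m_chi) ** 2)

def T_z(T0, z):
    """DM energy after a distance z [cm], Eq. (17)."""
    f = np.exp(-z / l_E)
    return T0 * f / (1 + T0 / (2 * m_chi) * (1 - f))

def T_0(Tz, z):
    """Surface energy recovered from the underground energy, Eq. (18)."""
    g = np.exp(z / l_E)
    return 2 * m_chi * Tz * g / (2 * m_chi + Tz * (1 - g))

z = 1.4e5                            # detector depth, 1.4 km in cm
print(l_E / 1e5, "km mean free path") # O(100 km): weak chi-e attenuation at 1.4 km
print(T_z(50.0, z), T_0(T_z(50.0, z), z))   # second value recovers 50 MeV
```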
## IV Dark matter signal at underground detectors

After reaching the underground detector, the DSNB-boosted DM can scatter off both the electrons and nuclei of the target material, thus inducing both electronic and nuclear recoils. The differential event rate with respect to the recoil energy \(T_{i}\) can be written as [44]

\[\frac{dR}{dT_{i}}=t_{\rm run}N_{\rm target}^{i}\mathcal{A}\int dT_{\chi}^{z}\frac{d\Phi_{\chi}}{dT_{\chi}^{z}}\frac{\sigma_{\chi i}}{T_{i}^{\rm max}(T_{\chi}^{z})}\Theta[T_{i}^{\rm max}(T_{\chi}^{z})-T_{i}]\,, \tag{21}\]

where \(t_{\rm run}\) and \(N_{\rm target}^{i}\) denote the exposure time and the number of targets of the detector, respectively, while \(\mathcal{A}\) represents the detection efficiency provided by each experiment. At this point we should clarify that the detection efficiency is provided either in terms of the true, \(\mathcal{A}(T_{i})\), or reconstructed, \(\mathcal{A}(T_{i}^{\rm reco})\), recoil energy, hence its explicit dependence has been dropped in Eq. (21) to avoid confusion. Regarding DM-electron scattering, our calculations incorporate the detection efficiency provided by LZ [6] in terms of true recoil energy, \(\mathcal{A}(T_{e})\), while for the case of XENONnT we consider the efficiency provided in Ref. [8] in terms of reconstructed recoil energy, \(\mathcal{A}(T_{e}^{\rm reco})\). Regarding DM-nucleus scattering, we account for the detection efficiency \(\mathcal{A}(T_{\mathcal{N}})\) provided in terms of true nuclear recoil energy as reported by both LZ [7] and XENONnT [9]. In what follows, we will use the recent data released by the XENONnT and LZ collaborations to put constraints on the DM mass and DM-electron/nucleon cross sections.

We first focus on DM-electron scattering, i.e. we calculate the differential event spectrum \(dR/dT_{e}\) given in Eq. (21) for \(i=e\). In this case, \(\chi\) particles scatter off electrons in the underground detector with a cross section \(\sigma_{\chi e}\). The angle-averaged DSNB-boosted DM flux, accounting for the attenuation effects (see Sec. III.1.1), is given in Eq. (19). Since very low energy scatterings occur, our calculations take into consideration atomic binding effects, which lead to a slight cross section suppression at very low recoil energies. To this end, the number of target electrons in Eq. (21) is expressed as \(N_{\rm target}^{e}=\frac{m_{\rm det}N_{A}}{M_{r}}\times Z_{\rm eff}(T_{e})\), where \(m_{\rm det}\), \(M_{r}\) and \(N_{A}\) represent the detector mass, the molar mass of the target material and Avogadro's number, respectively. The recoil energy-dependent quantity \(Z_{\rm eff}(T_{e})\) denotes the effective charge of the atomic nucleus that is seen by DM for a given energy deposition \(T_{e}\). The latter can be approximated by a series of step functions that depend on the single-particle binding energy of the \(i\)th electron, following the Hartree-Fock calculations of Ref. [78].

Turning our attention to the case of DM scattering with nuclei, we calculate the corresponding differential event spectrum \(dR/dT_{\mathcal{N}}\) that follows from Eq. (21) for \(i=\mathcal{N}\). For the sake of clarity let us note that in this case, although our calculated event rates refer to DM-nucleus scattering, our results will always be expressed in terms of the fundamental DM-nucleon cross section \(\sigma_{\chi n}\), instead of the SI DM-nucleus cross section \(\sigma_{\chi\mathcal{N}}^{\rm SI}\) [see e.g. Eq. (20)].
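The structure of Eq. (21) is straightforward to sketch. The toy example below (our own illustration) uses a placeholder power-law flux and unit efficiency and \(Z_{\rm eff}\); it only demonstrates how the Heaviside condition sets the effective lower limit of the \(T_{\chi}^{z}\) integration.

```python
# Minimal sketch: the rate integral of Eq. (21) for DM-electron scattering.
# `flux` is a placeholder, NOT the attenuated spectrum of Eq. (19).
import numpy as np
from scipy.integrate import quad

m_e, m_chi = 0.511, 300.0             # MeV
sigma_chi_e = 1e-33                   # illustrative cross section [cm^2]

def T_max(T_chi, m_i):
    """Maximum recoil energy for a chi of energy T_chi, Eq. (15)."""
    return 2 * m_i * T_chi * (T_chi + 2 * m_chi) / ((m_chi + m_i) ** 2 + 2 * m_i * T_chi)

def flux(T_chi):
    """Placeholder boosted-DM spectrum [cm^-2 s^-1 MeV^-1] (power-law toy)."""
    return 1e-6 * T_chi ** -2

def dR_dTe(T_e, n_targets, t_run):
    """Differential rate of Eq. (21): integrate over the chi energies able to
    produce a recoil T_e, i.e. those satisfying T_max(T_chi) > T_e."""
    integrand = lambda T: (flux(T) * sigma_chi_e / T_max(T, m_e)
                           if T_max(T, m_e) > T_e else 0.0)
    rate, _ = quad(integrand, 1e-3, 100.0, limit=200)
    return t_run * n_targets * rate

print(dR_dTe(5e-3, 1e29, 3.15e7))     # events / MeV for a one-year toy exposure
```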
For nuclear recoils, the angle-averaged attenuated boosted DM flux has been computed numerically as discussed in Sec. III.1.2. Since both the LZ and XENONnT collaborations have reported their measured data in terms of electron-recoil spectra, we convert our calculated nuclear recoil spectrum \(dR/dT_{\mathcal{N}}\) into an "electron-equivalent" recoil spectrum, according to the expression

\[\frac{dR}{dT_{e}}=\frac{dR}{dT_{\mathcal{N}}}\frac{1}{\mathcal{Q}_{f}(T_{\mathcal{N}})+T_{\mathcal{N}}\frac{d\mathcal{Q}_{f}}{dT_{\mathcal{N}}}}\,, \tag{22}\]

where the quenching factor, \(\mathcal{Q}_{f}(T_{\mathcal{N}})=T_{e}/T_{\mathcal{N}}\), quantifies the energy loss to heat in the aftermath of a DM-nucleus scattering event. In the present analysis we adopt the standard Lindhard quenching factor [79]. Notice also that for DM-nucleus scattering the effective charge \(Z_{\rm eff}\) is irrelevant and hence we take \(Z_{\rm eff}=1\).

In the final step we intend to simulate the expected boosted DM signal at the LZ and XENONnT detectors with high reliability. To this end we evaluate the reconstructed spectra by assuming a Gaussian smearing function \(\mathcal{G}(T_{e}^{\rm reco},T_{e})\) as

\[\frac{dR}{dT_{e}^{\rm reco}}=\int_{0}^{T_{e}^{\rm max}}\frac{dR}{dT_{e}}(T_{e})\mathcal{G}(T_{e}^{\rm reco},T_{e})\,dT_{e}\,. \tag{23}\]

By following the above method we have verified that our predictions are in excellent agreement with those reported by XENONnT and LZ for both electron and nuclear recoils. First, based on previous work [80], we have calculated the elastic neutrino-electron scattering spectra and found that a total of \(\sim\)30 events are expected for LZ and 76 events (300 events in the full region [1, 140] keV\({}_{\rm ee}\)) for XENONnT, in agreement with Refs. [7; 8]. Second, regarding nuclear recoils, we have calculated the expected coherent elastic neutrino-nucleus scattering (CE\(\nu\)NS) events induced by \({}^{8}\)B neutrinos and found 0.16 events for LZ and 0.24 events for XENONnT, in agreement with Refs. [7; 9], respectively.

Following these prescriptions, we show in Fig. 3 the simulated recoil spectra as a function of the electron-equivalent ionization energy, expected at both the LZ and XENONnT detectors. The black points depict the experimental data together with the error bars, when available, as provided by the collaborations. The blue histograms indicate the background events, also given by the collaborations. The red and green histograms represent the sum of the background and of the simulated number of events, assuming \(m_{\chi}=300\) MeV and two different values of the \(\chi\)-nucleon or \(\chi\)-electron scattering cross sections, as indicated in the legend. In the case of the LZ detector, the light brown histogram further represents the \({}^{37}\)Ar background, which originates from cosmogenic activation of the xenon prior to underground deployment, producing short-lived \({}^{37}\)Ar that decayed during the first run [7].

Figure 3: Simulated signal (colored histograms) and experimental data (black points with error bars) as a function of the electron-equivalent ionization energy, for the LZ (left) and XENONnT (right) experiments. The DSNB-boosted DM events have been computed assuming \(m_{\chi}=300\) MeV and different values of scattering cross sections, see the legend. The blue and light brown histograms refer to the background events as provided by the collaborations.
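The conversion of Eq. (22) relies on the Lindhard quenching factor, and Eq. (23) is a Gaussian convolution; both are easy to sketch. In the snippet below (our own illustration), the Lindhard \(k\) for xenon and the resolution model \(\sigma(T_e)\) are standard-but-assumed parameterizations, not the exact response functions used in the fits.

```python
# Minimal sketch: Lindhard quenching for xenon and Gaussian smearing, Eq. (23).
import numpy as np

Z, A = 54, 131                                     # xenon

def lindhard_Q(T_N):
    """Lindhard quenching factor Q_f = T_e / T_N for a nuclear recoil T_N [keV]."""
    k = 0.133 * Z ** (2 / 3) / np.sqrt(A)          # Lindhard's k (~0.17 for Xe)
    eps = 11.5 * T_N * Z ** (-7 / 3)               # reduced energy
    g = 3 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps
    return k * g / (1 + k * g)

def smear(T_reco, T_grid, spec, sigma_of_T):
    """Reconstructed spectrum at T_reco, Eq. (23): Gaussian convolution of the
    true spectrum evaluated on a uniform grid."""
    s = sigma_of_T(T_grid)
    gauss = np.exp(-0.5 * ((T_reco - T_grid) / s) ** 2) / (np.sqrt(2 * np.pi) * s)
    return np.sum(spec * gauss) * (T_grid[1] - T_grid[0])

T_grid = np.linspace(1.0, 140.0, 2000)             # keV_ee
toy_spec = np.exp(-T_grid / 30.0)                  # placeholder true dR/dT_e
sigma_model = lambda T: 0.31 * np.sqrt(T) + 0.0037 * T   # assumed resolution [keV]
print(lindhard_Q(50.0), smear(10.0, T_grid, toy_spec, sigma_model))
```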
### LZ analysis

For the analysis of LZ data, we have performed a spectral analysis using the following Poissonian \(\chi^{2}\) function [81]

\[\chi^{2}(\overrightarrow{\mathcal{S}};\alpha,\beta,\delta)=2\sum_{i=1}^{51}\Biggl[R^{i}_{\rm pred}(\overrightarrow{\mathcal{S}};\alpha,\beta,\delta)-R^{i}_{\rm exp}+R^{i}_{\rm exp}\ln\left(\frac{R^{i}_{\rm exp}}{R^{i}_{\rm pred}(\overrightarrow{\mathcal{S}};\alpha,\beta,\delta)}\right)\Biggr]+\left(\frac{\alpha}{\sigma_{\alpha}}\right)^{2}+\left(\frac{\beta}{\sigma_{\beta}}\right)^{2}+\left(\frac{\delta}{\sigma_{\delta}}\right)^{2}\,, \tag{24}\]

where \(R^{i}_{\rm exp}\) denotes the experimental differential events in the \(i\)th recoil energy bin, as reported in Ref. [7], while the predicted differential events contain the DSNB-boosted DM signal, as well as all background components: \(R^{i}_{\rm pred}(\overrightarrow{\mathcal{S}};\alpha,\beta,\delta)=(1+\alpha)R^{i}_{\rm bkg}+(1+\beta)R^{i}_{\rm DSNB-BDM}(\overrightarrow{\mathcal{S}})+(1+\delta)R^{i}_{{}^{37}{\rm Ar}}\). It is worth noting that, in accordance with Ref. [82], the \(R_{\rm bkg}\) spectrum is calculated by eliminating the \({}^{37}\)Ar contributions from the total background provided in Ref. [7]. The nuisance parameters \(\{\alpha,\beta,\delta\}\) are introduced to incorporate the uncertainty on the background, neutrino flux distribution (footnote 8) and \({}^{37}\)Ar components, with \(\sigma_{\alpha}=13\%\), \(\sigma_{\beta}=40\%\) [55] and \(\sigma_{\delta}=100\%\). For each new physics parameter belonging to \(\overrightarrow{\mathcal{S}}\) (i.e. \(m_{\chi}\) or \(\sigma_{\chi i}\)), we have marginalized the \(\chi^{2}\) function over all nuisance parameters.

Footnote 8: The uncertainty on the DSNB flux primarily arises from the uncertainty in the star-formation rate, mentioned in Eq. (4).

### XENONnT analysis

The following Gaussian \(\chi^{2}\) function is used for the analysis of XENONnT data [81]

\[\chi^{2}(\overrightarrow{\mathcal{S}};\beta)=\sum_{i=1}^{30}\left(\frac{R^{i}_{\rm pred}(\overrightarrow{\mathcal{S}};\beta)-R^{i}_{\rm exp}}{\sigma^{i}}\right)^{2}+\left(\frac{\beta}{\sigma_{\beta}}\right)^{2}\,. \tag{25}\]

Here, \(R^{i}_{\rm pred}(\overrightarrow{\mathcal{S}};\beta)=(1+\beta)\,R^{i}_{\rm DSNB-BDM}(\overrightarrow{\mathcal{S}})+B^{i}_{0}\), with \(B_{0}\) denoting the simulated background mentioned in Ref. [8]. The rest of the details are similar to the LZ analysis.

## V Results

We present in Fig. 4 the 90% C.L. exclusion regions on the DSNB-boosted DM, in the planes \((m_{\chi},\sigma_{\chi e})\) and \((m_{\chi},\sigma_{\chi n})\). We consider the case of DM scattering off electrons (left panel) and nuclei (right panel), and show both constraints obtained using LZ (blue) and XENONnT (red) experimental data. For comparison purposes, we also show limits from other dedicated experiments and studies. In the \(\chi\)-electron scattering channel (left panel), we consider results from DD experiments (gray contour), mainly SENSEI [83], DAMIC [84], EDELWEISS [85], SuperCDMS [86], DarkSide-50 [87], XENON1T [88] and PandaX-II [89]. Additionally, we include constraints from solar reflection [90] (dark yellow) and from Super-Kamiokande [31] (green). Similarly, for the \(\chi\)-nucleus scattering channel (right panel), we take into account results from various experiments and studies.
These include again DD experiments (black region) [91, 92, 93, 94, 95, 96, 97, 98, 99], together with constraints from the Super-Kamiokande experiment [100] (green) and the XQC rocket experiment [101] (light pink). We further depict cosmological bounds in cyan (note that they are obtained as a 95% C.L. exclusion region). Among them, the strongest bounds come from Milky-Way satellite galaxies, but they also include cosmic-microwave-background anisotropy measurements and Lyman-\(\alpha\) forest constraints [102; 103; 104; 105; 106; 107]. We also represent the Big Bang Nucleosynthesis (BBN) limit on the mass of real scalar and Dirac fermion DM [108]. By considering the limits from these experiments, we establish a comprehensive picture of the constraints on the DSNB-boosted DM parameter space.

Finally, we compare our results with recent bounds on sub-GeV cosmic-ray boosted DM (CRDM), also derived using LZ data [30] (light yellow). While many references in the literature have addressed cosmic-ray boosted DM, Ref. [30] allows for a direct comparison of our results given that we both analyze the same LZ data set. As can be seen, our constraints obtained assuming a boost from DSNB neutrinos rule out a larger region of the parameter space. This is understood since the local interstellar population of cosmic rays is about one order of magnitude less intense compared to the flux of DSNB neutrinos, though the latter peaks at lower (\(\sim 10\) MeV) energies. Notice also that Ref. [30] ignored nuclear-physics corrections, which are rather important for a CRDM-based analysis where a larger momentum transfer is involved (compared to our DSNB-based analysis). The correct inclusion of such effects would drastically modify the CRDM region shown in the plot. Although not shown here, nuclear effects in CRDM studies have been considered by incorporating Helm-type nuclear form factors, for example, in the analysis of the XENON1T excess data in Refs. [26; 29]. At this point we should note that, given that the initial CRDM flux peaks beyond 100 MeV and extends up to GeV energies, a large momentum transfer is involved in the process and it cannot be realistically accounted for through the inclusion of nuclear form factors. For an appropriate treatment of nuclear structure at such large momentum transfer see Ref. [28].

While a significant part of our constraints lies in a region of parameter space already probed by other searches, these results highlight the complementarity and significance of the LZ and XENONnT data in probing the sub-GeV DM parameter space. Also, it is worth mentioning that both experiments have only just started taking data and we are using their very first data sets, obtained with exposure times of only a few months; even so, the resulting bounds are already competitive with existing limits. As the statistics of these experiments increase, their data will play a much more important role in constraining the DSNB-boosted DM parameter space. Moreover, and as mentioned in Sec. III, let us recall that we present the limits on \(\sigma_{\chi e}\) and \(\sigma_{\chi n}\) under the assumption of \(\sigma_{\chi\nu}=\sigma_{\chi e}\) and \(\sigma_{\chi\nu}=\sigma_{\chi n}\), respectively.

Figure 4: 90% C.L. exclusion regions in the DSNB-boosted DM parameter space, obtained for scattering off electrons (left panel) and nuclei (right). We show results obtained with LZ (blue) and XENONnT (red) data. For comparison purposes, existing limits from other studies are also shown (see main text for details).
However, note that the lower limit of our closed regions is basically dependent only on \(\sqrt{\sigma_{\chi e}\sigma_{\chi\nu}}\) and \(\sqrt{\sigma_{\chi n}\sigma_{\chi\nu}}\), respectively, so it can be easily recast into alternative scenarios in which the magnitude of the two cross sections is different. The upper limit of our closed contours, though, has a stronger dependence on the attenuation effects and therefore depends on a more complicated combination of \(\sigma_{\chi e,n}\) and \(\sigma_{\chi\nu}\).

As can be noticed, our exclusion regions have a closed shape, due to the inclusion of attenuation effects (see the discussion below). Large scattering cross sections, i.e. \(\sigma_{\chi e}\gtrsim 2\times 10^{-28}\) cm\({}^{2}\) (\(\sigma_{\chi n}\gtrsim 8\times 10^{-28}\) cm\({}^{2}\)) for DM scattering off electrons (nucleons) and \(m_{\chi}=0.1\) MeV, result in a strong attenuation during the propagation of the DM particles through the Earth and are therefore disfavored.

To understand the shape of our bounds in more detail, we now explore the implications of considering attenuation effects and adopting a realistic nuclear form factor. We examine how these factors influence the exclusion limits on the scattering cross section and DM mass. The energy loss experienced by the DSNB-boosted DM due to its scattering with the Earth's material has a significant impact on the derived exclusion limits. Figure 5 illustrates the consequences of considering (blue, closed contour) or neglecting (red exclusion line) attenuation effects for \(\chi-e\) scattering, while Fig. 6 depicts the same for \(\chi-\mathcal{N}\) scattering. Clearly, attenuation effects impose an upper bound on the exclusion region. Above a certain scattering cross section, energy loss becomes substantial such that the DSNB-boosted DM particles cannot be detected because of severe attenuation. This fact confirms the necessity of properly accounting for Earth's attenuation effects to ensure accurate and robust constraints on the DSNB-boosted DM parameters. Furthermore, for the case of \(\chi-\mathcal{N}\) scattering, considering a realistic nuclear form factor [see e.g. Eq. (20)] introduces visible effects, as illustrated in Fig. 6. The inclusion of the Helm form factor effectively modifies the energy-loss dynamics compared to the \(F=1\) scenario. Indeed, the finite nuclear size reduces the differential DM-nucleus cross section given in Eq. (14), thus leading to a decrease in the energy-loss rate \(dT_{\chi}^{z}/dz\) when a larger momentum transfer is involved, i.e. for the high-energy tail of the DSNB-boosted DM flux (for an illustration see Fig. 10 in Appendix C). This modification results in a shift of the upper bound of the exclusion region, allowing for slightly higher \(\sigma_{\chi n}\) values before energy loss renders particles undetectable.

Figure 5: 90% C.L. exclusion limits on \(\sigma_{\chi e}\) and \(m_{\chi}\), obtained including (blue closed contour) or neglecting (red line) attenuation effects due to DM scattering off Earth's elements before reaching the detectors. Left panel shows bounds obtained with XENONnT data, while right panel refers to LZ data.

Such an exploration of the interplay between attenuation effects and nuclear-physics considerations leads to a more comprehensive and robust understanding of the complex dynamics governing DSNB-boosted DM scatterings.
These insights emphasize the significance of accounting for the latter effects in the accurate interpretation of experimental results, particularly regarding the implications of \(\chi\)-\(\mathcal{N}\) scattering.

## VI Conclusions

In this work we have revisited the possibility that sub-GeV DM is boosted to semi-relativistic velocities through collisions with the DSNB. Such a very energetic component of the total DM flux, while subdominant, would be detectable at DM DD experiments, thus amplifying their experimental reach. We have analyzed the most recent data from two cutting-edge DM experiments, LZ and XENONnT, and we have obtained stringent constraints on the DSNB-boosted DM parameter space. For the first time, we have considered both electron and nuclear scatterings and obtained bounds on the relevant cross sections and DM mass. These new bounds extend the reach of typical DM DD searches to even lower DM mass ranges, and they are consistent with other searches for cosmic-ray boosted DM. In this regard, we have illustrated that, due to the higher intensity of the DSNB flux in comparison to cosmic rays, the former allows us to exclude a larger part of the available parameter space. We further point out that in obtaining our results for DM-nuclei scattering we reliably account for corrections due to the finite nuclear size by incorporating a Helm-type nuclear form factor. Our results hence complement other existing searches for sub-GeV DM. Most of all, they show that even with their very first and limited exposure time data sets, the low-threshold XENONnT and LZ experiments dominate the terrestrial limits on DM-nucleus scattering at very low DM masses, with good complementarity to neutrino experiments like Super-Kamiokande and cosmological observations. Finally, we have highlighted the importance of including Earth's attenuation effects in the analysis. In particular, we have demonstrated that they have a strong impact on the upper bound of our derived exclusion regions, disfavoring large DM scattering cross sections, namely \(\sigma_{\chi e}\gtrsim 2\times 10^{-28}\) cm\({}^{2}\) and \(\sigma_{\chi n}\gtrsim 8\times 10^{-28}\) cm\({}^{2}\) for \(m_{\chi}=0.1\) MeV. In summary, our current analysis, by taking into account the Earth's attenuation and finite nuclear size effects, provides accurate and robust constraints on the parameter space of low-mass DSNB-boosted DM, using the first data sets of LZ and XENONnT.

Figure 6: 90% C.L. exclusion limits on \(\sigma_{\chi n}\) and \(m_{\chi}\), obtained including (blue closed contour) or neglecting (red line) attenuation effects due to DM scattering off Earth's elements before reaching the detectors. Left panel shows bounds obtained with XENONnT data, while right panel refers to LZ data. Additionally, the impact of the nuclear form factor is shown, with solid lines representing the Helm form factor, and dashed lines obtained assuming \(F=1\) for both attenuated and unattenuated scenarios.

###### Acknowledgements.

We acknowledge the use of the high-performance computing facilities offered by the Bhaskara Cluster at IISER Bhopal, which significantly facilitated the completion of this research. VDR acknowledges financial support by the CIDEXG/2022/20 grant (project "D'AMAGAT") funded by Generalitat Valenciana and by the Spanish grant PID2020-113775GB-I00 (MCIN/AEI/10.13039/501100011033). AM is grateful for the invaluable financial support provided by the Prime Minister Research Fellowship (PMRF), sponsored by the Government of India (PMRF ID: 0401970).
The work of DKP was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the "3rd Call for H.F.R.I. Research Projects to support Post-Doctoral Researchers" (Project Number: 7036). The work of RS has been supported by the SERB, Government of India grant SRG/2020/002303.

## Appendix A Geometry of an underground detector's location

In the context of DM detection, the geometry of the detector's underground position plays a pivotal role in assessing the anticipated signal rate and characteristics of the search. We explore the configuration of a detector located at a depth \(h_{d}\) below the Earth's surface, as depicted in Fig. 7 (footnote 9). Our primary concern is to determine the distance \(z\) that corresponds to the distance traveled by DM particles between a specific impact point on the Earth's surface (\(I\)) and the detector (\(D\)). This distance depends on the zenith angle, \(\theta_{z}\), representing the angle between the vertical direction and the line connecting the detector to the chosen point on the Earth's surface.

Footnote 9: Credit of the underlying 3D world map: Apple Maps v 3.0.

Figure 7: Geometry of the underground detector's location at a depth \(h_{d}\) below Earth's surface, and distance \(z\) traveled by DM before detection.

For the purpose of computing the distance \(z\) we employ the law of cosines within the triangle \(\triangle\)ODI (see Fig. 7), leading to the following expression:

\[\begin{split} R_{E}^{2}&=z^{2}+(R_{E}-h_{d})^{2}-2z(R_{E}-h_{d})\cos{(\pi-\theta_{z})}\\ \Rightarrow z&=-(R_{E}-h_{d})\cos{\theta_{z}}+\sqrt{R_{E}^{2}-(R_{E}-h_{d})^{2}\sin^{2}{\theta_{z}}}\,.\end{split} \tag{A1}\]

By means of this expression, we can precisely determine the distance \(z\) that characterizes the spatial relationship between the position of the detector and the Earth's surface point of interest for a given zenith angle. Understanding this geometric configuration is crucial for a more realistic simulation of the interactions between DM particles and the detector and in predicting potential signals in DM experiments.

## Appendix B Geophysical properties of Earth

We model the Earth's interior as a sphere of constant electron and nuclear densities (\(n_{e}=8\times 10^{23}\text{ cm}^{-3}\) and \(n_{\mathcal{N}}=3.44\times 10^{22}\text{ cm}^{-3}\)), based on the abundances of the main elements as shown in Table 1.

## Appendix C Energy loss experienced by the DSNB-boosted DM due to scattering inside Earth

In this Appendix we investigate the dependence of the DSNB-boosted DM's underground kinetic energy (here denoted as \(T_{\chi}^{z}\)) on the distance (\(z\)) and on the initial value of the DM kinetic energy at Earth's surface (\(T_{\chi}^{0}\)). Such considerations provide essential insights into the impact of attenuation effects within Earth's materials. The left panel of Fig. 8 illustrates \(T_{\chi}^{z}\) as a function of \(z\), derived by solving Eq. (12), while considering different initial values of \(T_{\chi}^{0}\). This analysis is performed separately for \(\chi-e\) (dashed curves) and \(\chi-\mathcal{N}\) (solid curves) scattering scenarios.
\begin{table}
\begin{tabular}{c c c c}
\hline \hline
**Element** & **Mass Number (A)** & **Relative Abundance (\%)** & \(\mathbf{n}_{\mathcal{N}}\) **(cm\({}^{-3}\))** \\
\hline
Fe & 55.845 & 32.1 & \(6.11\times 10^{22}\) \\
O & 15.999 & 30.1 & \(3.45\times 10^{22}\) \\
Si & 28.086 & 15.1 & \(1.77\times 10^{22}\) \\
Mg & 24.305 & 13.9 & \(1.17\times 10^{22}\) \\
S & 32.065 & 2.9 & \(2.33\times 10^{21}\) \\
Ca & 40.078 & 1.5 & \(7.94\times 10^{20}\) \\
Al & 26.982 & 1.4 & \(1.09\times 10^{21}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Properties of the most abundant elements in the Earth's geosphere. The table presents the main elements found within the Earth's crust, mantle, and core [71; 72]. For each element, the respective mass number (\(A\)), relative abundance, and nuclear number density (\(n_{\mathcal{N}}\)) are provided.

As can be seen, the energy of the DSNB-boosted DM undergoes a rapid decrease for distances larger than \(\sim 1\) km for \(\chi-\mathcal{N}\) scattering, while in the case of \(\chi-e\) scattering, this substantial energy reduction takes place at distances beyond \(\sim 100\) km. In the right panel of Fig. 8, we present \(T_{\chi}^{z}\) as a function of the initial kinetic energy \(T_{\chi}^{0}\), computed at a fixed depth \(z=1.4\) km (typical of DD experiments like XENONnT and LZ), i.e. corresponding to the special case \(\theta_{z}=0\) for which \(z=h_{d}\). This plot offers a direct comparison of the energy evolution assuming different initial conditions. The intricate interplay between these parameters is further visualized in the contour plot displayed in Fig. 9, which depicts the variation of \(T_{\chi}^{z}\) across the \((z,T_{\chi}^{0})\) plane. In all cases, we adopt a representative DSNB-boosted DM particle mass \(m_{\chi}=300\) MeV and assume a cross section \(\sigma_{\chi e}=\sigma_{\chi n}=10^{-29}\) cm\({}^{2}\).

Figure 8: Left panel: \(T_{\chi}^{z}\) as a function of \(z\) for fixed \(T_{\chi}^{0}\). Right panel: \(T_{\chi}^{z}\) versus \(T_{\chi}^{0}\) at fixed \(z=1.4\) km. We fix \(m_{\chi}=300\) MeV and \(\sigma_{\chi e}(\sigma_{\chi n})=10^{-29}\) cm\({}^{2}\). Solid lines correspond to DM-nuclei scattering, while dashed lines represent DM-electron scattering.

For the case of \(\chi-\mathcal{N}\) scattering, incorporating finite nuclear size effects through the nuclear form factor becomes particularly relevant. Figure 10 shows the impact of nuclear effects by considering two distinct scenarios in the calculation of the final kinetic energy of DM particles reaching the detector: (i) including a Helm-type nuclear form factor and (ii) completely neglecting nuclear effects, i.e. \(F=1\). Notably, the effect driven by nuclear physics becomes evident around \(z=0.1\) km. In particular, for high-energy DSNB-boosted DM particles, the disparity between the two scenarios becomes substantial. All in all, incorporating nuclear physics effects through the Helm form factor leads to a reduction of the total cross section, resulting in a mitigated energy loss. Remarkably, DSNB-boosted DM particles with kinetic energies exceeding 100 MeV undergo such a marginal energy loss that their energy remains nearly constant at distances \(z\lesssim 100\) km.
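As a final cross-check of the geometry in Appendix A, the chord length of Eq. (13) can be validated in a few lines (our own sketch), reproducing the two limiting cases \(\theta_z=0\) and \(\theta_z=\pi\):

```python
# Minimal sketch: path length z(theta_z) from Eq. (13) for a detector at depth h_d.
import numpy as np

R_E = 6371.0                                   # Earth radius [km]

def path_length(theta_z, h_d):
    """Distance from the surface impact point to the detector, Eq. (13)."""
    r_d = R_E - h_d
    return -r_d * np.cos(theta_z) + np.sqrt(R_E ** 2 - r_d ** 2 * np.sin(theta_z) ** 2)

print(path_length(0.0, 1.4))                   # = h_d: arrival straight from above
print(path_length(np.pi, 1.4))                 # = 2 R_E - h_d: through the whole Earth
```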
2309.03678
Fully Onboard SLAM for Distributed Mapping with a Swarm of Nano-Drones
The use of Unmanned Aerial Vehicles (UAVs) is rapidly increasing in applications ranging from surveillance and first-aid missions to industrial automation involving cooperation with other machines or humans. To maximize area coverage and reduce mission latency, swarms of collaborating drones have become a significant research direction. However, this approach requires open challenges in positioning, mapping, and communications to be addressed. This work describes a distributed mapping system based on a swarm of nano-UAVs, characterized by a limited payload of 35 g and tightly constrained on-board sensing and computing capabilities. Each nano-UAV is equipped with four 64-pixel depth sensors that measure the relative distance to obstacles in four directions. The proposed system merges the information from the swarm and generates a coherent grid map without relying on any external infrastructure. The data fusion is performed using the iterative closest point algorithm and a graph-based simultaneous localization and mapping algorithm, running entirely on-board the UAV's low-power ARM Cortex-M microcontroller with just 192 kB of SRAM memory. Field results gathered in three different mazes from a swarm of up to 4 nano-UAVs prove a mapping accuracy of 12 cm and demonstrate that the mapping time is inversely proportional to the number of agents. The proposed framework scales linearly in terms of communication bandwidth and on-board computational complexity, supporting communication between up to 20 nano-UAVs and mapping of areas up to 180 m^2 with the chosen configuration requiring only 50 kB of memory.
Carl Friess, Vlad Niculescu, Tommaso Polonelli, Michele Magno, Luca Benini
2023-09-07T12:40:06Z
http://arxiv.org/abs/2309.03678v1
# Fully Onboard SLAM for Distributed Mapping with a Swarm of Nano-Drones

###### Abstract

The use of Unmanned Aerial Vehicles (UAVs) is rapidly increasing in applications ranging from surveillance and first-aid missions to industrial automation involving cooperation with other machines or humans. To maximize area coverage and reduce mission latency, swarms of collaborating drones have become a significant research direction. However, this approach requires open challenges in positioning, mapping, and communications to be addressed. This work describes a distributed mapping system based on a swarm of nano-UAVs, characterized by a limited payload of \(35\,\mathrm{g}\) and tightly constrained on-board sensing and computing capabilities. Each nano-UAV is equipped with four 64-pixel depth sensors that measure the relative distance to obstacles in four directions. The proposed system merges the information from the swarm and generates a coherent grid map without relying on any external infrastructure. The data fusion is performed using the iterative closest point algorithm and a graph-based simultaneous localization and mapping algorithm, running entirely on-board the UAV's low-power ARM Cortex-M microcontroller with just \(192\,\mathrm{kB}\) of SRAM memory. Field results gathered in three different mazes from a swarm of up to 4 nano-UAVs prove a mapping accuracy of \(12\,\mathrm{cm}\) and demonstrate that the mapping time is inversely proportional to the number of agents. The proposed framework scales linearly in terms of communication bandwidth and on-board computational complexity, supporting communication between up to 20 nano-UAVs and mapping of areas up to \(180\,\mathrm{m}^{2}\) with the chosen configuration requiring only \(50\,\mathrm{kB}\) of memory.

Mapping, SLAM, Swarm, UAVs, ICP

[https://youtube.com/watch?v=c9hajp_43aw](https://youtube.com/watch?v=c9hajp_43aw)

## I Introduction

Unmanned Aerial Vehicles (UAVs) have emerged as attractive solutions for several applications that require high maneuverability and scalability, such as distributed inspection and surveillance [1]. In particular, nano-UAVs [2] have proven to be safe to operate near people due to their reduced weight, i.e., below \(50\,\mathrm{g}\), which makes them an excellent choice for navigating indoor or cramped environments [3]. Furthermore, they are agile and small enough to fit in the palm of a hand, and more importantly, their cost-effective hardware facilitates a swarm formation. The swarm allows for a decreased latency and a higher probability of reaching the mission objective due to the intrinsic redundancy of having more than one drone [4, 5]. Finding gas leaks [6], localizing survivors in mines [7], detecting the source of radiation in nuclear plants, or enabling machine-to-machine cooperation with IoT devices [8] are only a few examples where nano-UAVs, whether as a single agent or in a swarm formation, can be employed. In this context, specific conditions must be met to reach a full and robust level of autonomy; for instance, the UAV has to perceive the environment and compute its next movements to enable optimal mission planning [9]. Thus, sensing, mapping, local/global communication, and on-board processing are all essential components in supporting autonomous task completion. In addition, a swarm agent needs to estimate its spatial location among other UAVs to infer the optimal mission strategy [4]. Enabling optimal planning requires good knowledge of the map of the environment [9].
Mapping is one of the main components necessary for achieving efficient autonomous robot control, as it enables advanced obstacle avoidance and the computation of optimal mission trajectories. Simultaneous Localization and Mapping (SLAM) is a class of mapping algorithms that can actively correct for both odometry and mapping errors [3]. In particular, graph-based SLAM is widely used due to its high accuracy and ability to store the whole robot trajectory [10, 11]. Today it is a State-of-the-Art (SoA) reference for performance comparisons [12]. The algorithm models the drone poses as graph nodes, the odometry measurements as edges, and consists of two main elements: the loop-closure detection, which identifies if the drone has returned to a previously visited location, and the graph optimization, which creates an additional edge as the result of the loop closure, optimizing the previously added poses. Moreover, since mapping requires accurate sensing capabilities that can measure the depth of the environment, LiDARs and stereo cameras [12] are among the most popular approaches that support SLAM on UAVs today. Even though the SLAM algorithm paired with LiDARs is widely used [13, 14], it requires computationally intensive and memory-hungry processes, which are not feasible on nano-UAVs due to their limited payload and constrained resources [2]. As an alternative, miniaturized, low-resolution, and energy-efficient ToF sensors weighing only \(42\,\mathrm{mg}\) have been released on the market in the last few years, opening new applications in the field of nano-UAVs [15]. Consequently, recent works have exploited these sensors and enabled SLAM with nano-UAVs. However, due to the low sensor resolution (i.e., one depth pixel per sensor), their systems are only capable of mapping simple-geometry environments (e.g., long corridors), often relying on external infrastructure to outsource the intensive SLAM computation [3]. Unlike previous works, we take advantage of the novel VL53L5CX 8\(\times\)8 ToF matrix sensor, which provides a 64-pixel depth map and offers a Field of View (FoV) of \(45^{\circ}\) [15]. By mounting four such sensors on the nano-UAV (i.e., front, rear, left, right), we achieve a cumulative FoV of \(180^{\circ}\). Spinning the drone by \(45^{\circ}\) results in an FoV of \(360^{\circ}\) and an angular resolution of \(5.6^{\circ}\), which provides superior loop-closure detection performance compared to previous depth-based solutions for nano-UAVs and micro-robots [3, 16]. By projecting the depth measurements acquired over a short time frame into the coordinate system of the drone, we obtain a small map that we call a _scan_. When the drone revisits a location, it acquires another scan and determines the rigid-body transformation w.r.t. the scan acquired when it first visited that location. This particular type of loop-closure detection is called _scan-matching_. Furthermore, in this work, we implement and use an optimized version of the Iterative Closest Point algorithm (ICP), a SoA scan-matching method that works with an arbitrary environment geometry. Another challenge in our scenario is the ability of the system to cope with large environments such as warehouses or large buildings. Existing works that implement SLAM on nano-UAVs off-load the computation to a remote base station via radio communication [3, 16], which is not always possible to set up and reduces the autonomy of the nano-UAVs in the swarm.
Moreover, the communication introduces a power consumption overhead which translates into reduced flight time. Furthermore, it bounds the operating area of the drone to the range of the radio communication, which is within tens of meters indoors due to the presence of walls and interference. To mitigate these effects, our mapping system only relies on on-board computational resources. In addition, to increase the mapping capabilities and enhance scalability, we extend the mapping system to a swarm of nano-UAVs. This way, each nano-UAV in the swarm is mapping a different environment region, and a customized version of SLAM is used to fuse their information and generate a comprehensive map.

This paper proposes a precise system for distributed indoor mapping with a swarm of nano-UAVs exploiting novel and low-power depth sensors. The entire computation runs on-board the nano-UAVs, without relying on any external computer. Our contributions can be summarized as follows:

* an optimized implementation and evaluation of the ICP algorithm. It runs entirely on-board a resource-constrained microprocessor in about \(500\,\mathrm{ms}\) for the input size used in our evaluation, with an expected translation accuracy of about \(3\,\mathrm{cm}\). Due to the combination with the 8\(\times\)8 ToF matrix sensors, this is the first work that enables ICP onboard nano-UAVs.
* a distributed and autonomous exploration policy with obstacle avoidance that allows multiple drones to explore an unknown environment with different flight paths and moving obstacles.
* the computationally lightweight integration between ICP and SLAM, and its extension to a swarm of drones. Furthermore, we design and implement a reliable radio communication protocol that orchestrates how the drones in the swarm exchange poses and scans with each other.
* an extensive in-field evaluation that demonstrates the mapping capabilities of our system with a swarm of four nano-UAVs. We prove how our optimized SLAM, in combination with ICP, corrects the odometry errors by up to 55% and aligns the world frames of individual drones to generate coherent maps. To our knowledge, this is the first work that enables on-board mapping with a swarm of nano-UAVs.

## II Related Work

Standard-size UAVs differ from Micro-Aerial Vehicles (MAVs) and nano-UAVs in terms of size, weight, total power consumption, and on-board processing capabilities. The latter two are directly linked, as the budget for on-board electronics, including sensing and processing, is about \(1/10\) of the total motor power [2]. Today, most of the new advancements in the SoA regarding robotic perception and mapping have been demonstrated on standard-size UAVs and MAVs, which feature a power budget between 50 and \(100\,\mathrm{W}\) and a total mass \(\geq\)\(1\,\mathrm{kg}\) [17]. Hence, they feature powerful on-board computing platforms, often equipped with GPUs and several gigabytes of memory [17]. On the other hand, nano-UAVs weigh less than \(50\,\mathrm{g}\) with a total power budget in the order of 5-\(10\,\mathrm{W}\), of which only \(500\,\mathrm{mW}\) to \(1\,\mathrm{W}\) remains for the sensors and MicroController Units (MCUs) [2]. Moreover, low-power MCUs mostly support a limited amount of memory, in general between \(100\,\mathrm{kB}\) and \(500\,\mathrm{kB}\), a stringent limitation for visual-based perception and mapping.
Previous works on MAVs and UAVs have commonly relied on miniature, conventional \(360^{\circ}\) LiDAR sensors [18] or depth stereo cameras [17] for mapping purposes. In particular, [19] integrated single-layer LiDAR sensors with inertial measurement units for indoor mapping tasks. The platform used is the commercial DJI Phantom 3 drone with an additional desktop-class Intel i5 processor required on-board. The LiDAR sensor used is \(62\,\mathrm{mm}\times 62\,\mathrm{mm}\times 87.5\,\mathrm{mm}\) in size and weighs \(210\,\mathrm{g}\), while nominally consuming \(8.4\,\mathrm{W}\) of power. Using a similar configuration, [20] integrates a multi-layer LiDAR sensor to allow 3D mapping of indoor environments while also relying on a desktop-class Intel i7 processor. Although the LiDAR sensor in this case only consumes \(8\,\mathrm{W}\) of power, its footprint is larger (\(103.3\,\mathrm{mm}\times 103.3\,\mathrm{mm}\times 71.7\,\mathrm{mm}\)) and it weighs \(509\,\mathrm{g}\). On the other hand, [21] leverages an RGBD camera combined with a particle filter for navigating obstructed and visually degraded shipboard environments. The platform used is \(58\,\mathrm{cm}\times 58\,\mathrm{cm}\times 32\,\mathrm{cm}\) in size, carries more than \(500\,\mathrm{g}\) of instrumentation, and operates on a high-performance octa-core ARM processor. SoA mapping strategies are also investigated in the UAV field, as reported in Table I, in terms of sensors, mapping accuracy, swarm size, and power consumed by the computing platforms. In [22], Causa _et al._ propose a cooperative mapping strategy based on LiDAR and GNSS, relying on a standard-size UAV of \(3.6\,\mathrm{kg}\) and off-board processing. In [23], the authors bring the intelligence fully on-board, relying on a power-hungry (i.e., \(30\,\mathrm{W}\)) Nvidia Xavier and a VLP-16 LiDAR. Also, in [14], the mapping algorithm and on-board processing are entrusted to a Jetson TX2 featuring a multi-core CPU and a GPU, as well as \(8\,\mathrm{GB}\) of memory. Chang _et al._ propose a robust multi-robot SLAM system designed to support robot swarms [13]; however, the results are validated offline on an Intel i7-8750H processor. Although these approaches provide good mapping capabilities, in the range of 5 to \(20\,\mathrm{cm}\), these solutions involve large and heavy sensors that require power-intensive processing. Unfortunately, application processors, such as the Nvidia Xavier, are too power-intensive for nano-UAV applications and have a total weight that is orders of magnitude higher than is acceptable. Thus, it is clear that standard SoA approaches for navigation and mapping are not computationally optimized and cannot be applied in the nano-UAV field, which is the scope of this paper. However, in recent years, lightweight alternative sensing solutions, which are more appropriate for nano-UAV platforms, have become available [15]. Simple ToF ranging sensors facing forward, backward, left, and right on a Crazyflie drone have been used to implement mapping [24]. Although this work implements basic obstacle avoidance, manual piloting from a ground station is required, and no method for compensating odometry drift is implemented. Using the same hardware configuration, [7] implements obstacle avoidance for source seeking using deep reinforcement learning. However, since this method does not map the environment, the path is often suboptimal.
Mapping using these simple ranging sensors is possible, but the low resolution of the acquired data means that longer flight times are required to approach the mapping fidelity of traditional drone-based systems, while still suffering from odometry drift. Mapping methods can be improved by applying SLAM algorithms that are used to correct pose estimation errors. The sub-field of SLAM problems known as "SLAM with sparse sensing" [25] explores more challenging applications where robots receive data points with a lower frequency and accuracy, as is the case for sensing solutions suitable for nano-UAVs [26]. For instance, [27] proposes to solve this problem by extracting line features as landmarks and applying a Rao-Blackwellized Particle Filter (RBPF) [28]. However, particle filters are not a favorable approach for larger maps, since large numbers of particles are required to maintain accuracy [29]. With this in mind, [3] demonstrates the efficient offline use of graph-based SLAM on sparse data with a novel front-end for scan matching. Although this method uses an Intel i7 desktop-class processor, it was also evaluated using data collected from a Crazyflie drone using single-zone ToF sensors. Following up on [24], the method presented in [16] shows a similar approach to applying SLAM to compensate for odometry drift, but the method is still applied offline. The approach is further limited by the single-zone sensors, which are mounted on a larger drone (\(182\times 158\times 56\,\mathrm{mm}\)) based on the Crazyflie Bolt. Works such as [30] used ranging sensors to implement SLAM on-board differential wheeled robot platforms using embedded application processors. Furthermore, [31] uses an application processor and cameras to implement visual SLAM on-board a wheeled consumer robot platform, while [32] discusses visual SLAM algorithm optimizations to better exploit the resources of an OMAP processor. On the other hand, [33] shows an MCU-based solution using a particle filter-based approach [27] that relies on extracting line features. Similarly, a different method called orthogonal SLAM is proposed in [34] and executed on an MCU on-board a drone in [35]. However, this method assumes that the extracted line features are always orthogonal. In contrast, our approach makes use of a fully on-board, highly optimized implementation of graph-based SLAM [10] with scan matching using ICP [36], adapted for a novel sensing solution on a \(34.8\,\mathrm{g}\) nano-UAV. Although visual and sparse-distance SLAM mapping is a well-known topic in the robotics community, the computational load and the related processing latency are still a concern in the UAV field [9]. Moreover, the current research trend pushes for collaborative mapping among a group of agents [37], often composed of a heterogeneous set of flying platforms and sensing elements [17]. As shown by Table I, only a few works in the literature propose a distributed mapping solution based on a UAV swarm that is successfully implemented in a real experiment [14], in particular without relying on any external infrastructure [13, 17]. In the nano-UAV field, lightweight methodologies and field tests demonstrating mapping capabilities are scarce [3, 7].
Thus, this paper proposes the first study, implementation, and field results enabling fully on-board and distributed mapping for the nano-UAV ecosystem. We demonstrate the possibility of supporting up to 20 agents relying only on a \(34.8\,\mathrm{g}\) hardware platform for sensing, processing, and communication. The achieved accuracy is aligned with the SoA for MAVs and standard-size UAVs, with an average error on the order of \(12\,\mathrm{cm}\). The proposed system, including the lightweight perception framework implemented in this paper, paves the way for enhancing the autonomous capabilities of nano-UAVs by improving optimal path planning and multi-agent collaboration.

\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline \hline
Work & On-board processing & Sensor & ToF Pixels & Map accuracy & Field test & Multi-robot Swarm & Power Consumption & System Weight \\
\hline \hline
\multicolumn{9}{c}{Nano-UAV and MAV} \\
\hline
**This work** & **Yes (Cortex-M4)** & \(4\times\) **ToF 64-pixel** & **256** & **12 cm** & **Yes (4 drones)** & **Yes (up to 20 drones)** & - & **34.8 g** \\
[24] & No & \(4\times\) ToF VL53L1x & 4 & 10-20 cm & Yes & No & - & \(401\,\mathrm{g}\) \\
[7] & Yes (Cortex-M4) & \(4\times\) ToF VL53L1x & 4 & - & Yes & No & \(240\,\mathrm{mW}\) & \(31.7\,\mathrm{g}\) \\
[3] & No (Intel i7 station) & \(4\times\) ToF VL53L1x & 4 & 5-\(15\,\mathrm{cm}\) & No & No & - & - \\
[16] & No & \(4\times\) ToF VL53L1x & 4 & 4.7 cm & Yes & No & - & \(401\,\mathrm{g}\) \\
\hline \hline
[22] & No & LiDAR & - & 5-20 cm & No & Yes & - & \(3.6\,\mathrm{kg}\) \\
[23] & Yes (Xavier) & VLP-16 LiDAR & - & 2.14 m & Yes & No & \(30\,\mathrm{W}\) & \(>\)\(2\,\mathrm{kg}\) \\
[14] & Yes (Jetson TX2) & RPLiDAR & - & - & Yes & Yes (up to 50) & \(7.5\,\mathrm{W}\) & \(1.8\,\mathrm{kg}\) \\
[13] & No (Intel i7 station) & LiDAR & - & 15-20 cm & No & Yes & - & - \\
[17] & Yes (Jetson TX2) & Intel RealSense D435 & - & - & Yes & Yes & \(7.5\,\mathrm{W}\) & \(1.3\,\mathrm{kg}\) \\
\hline \hline
\end{tabular}
\end{table}
TABLE I: System and performance comparison between this paper and the State-of-the-Art (SoA) works present in the literature. On-board processing, sensing elements, mapping accuracy, and system setups are compared.

## III System setup

### _Crazyflie Nano-UAV_

The Crazyflie 2.1 is an open nano-UAV platform from Bitcraze commonly used in research. Its mainboard also acts as the airframe and includes an Inertial Measurement Unit (IMU), a barometer, a radio (Nordic nRF51822), and an STM32F405 processor that handles sensor readout, state estimation, and real-time control. The drone features extension headers that can be used to add commercially available decks (i.e., plug-in boards) to improve its sensing capabilities. In this work, the drone was equipped with the commercial Flow-deck v2, featuring a downward-facing optical flow camera and a single-zone ToF ranging sensor, which improve velocity and height measurements through sensor fusion by the on-board Extended Kalman Filter (EKF). The Crazyflie was further equipped with a custom deck designed and presented in this work featuring four VL53L5CX ToF ranging sensors. The total weight at take-off is \(34.8\,\mathrm{g}\), including all the hardware used for the scope of this paper.

### _Custom Quad ToF Deck_

The VL53L5CX is a multi-zone 64-pixel ToF sensor featuring an extremely lightweight package, with a total weight of \(42\,\mathrm{mg}\). Its performance in the field of nano-UAVs was characterized in [15].
The maximum ranging frequency at the full resolution of 8\(\times\)8 pixels is \(15\,\mathrm{Hz}\), and the field of view (FoV) is \(45^{\circ}\). Moreover, the VL53L5CX provides a pixel validity matrix paired with the 64-pixel measurement matrix, which automatically flags noisy or out-of-range measurements. To enable the use of multi-zone ranging sensors with the Crazyflie, we created a custom deck designed specifically for the VL53L5CX ToF sensor. It can be used simultaneously with the Flow-deck v2 and incorporates four VL53L5CX sensors that face forward, back, left, and right, allowing for the detection of obstacles in all directions. Although lower than in conventional LiDARs, the 8\(\times\)8 resolution is enough to enable accurate scan matching. Unlike LiDARs, the novel multi-zone ToF sensor offers an excellent trade-off between resolution, accuracy, weight, and power consumption for use on-board nano-UAVs. Moreover, we incorporate a \(4\,\mathrm{MB}\) SPI flash storage device to store the acquired measurements. The final design of the customized deck weighs only \(5.1\,\mathrm{g}\).

## IV Algorithms

This section provides the theoretical background of the lightweight algorithms we implement on-board and explains how they fit together to solve the mapping problem. Generating an accurate map requires good accuracy of the drone's position estimate, which can be impacted by odometry drift, leading to warped maps. We show how we leverage and combine the information from multiple ToF multi-zone sensors to achieve scan matching using the ICP algorithm. While ICP can correct the odometry errors in re-visited locations, we show how it can be combined with SLAM to correct the whole past trajectory and therefore enable accurate mapping. Moreover, ICP is also used to combine data collected from different nano-UAVs in the swarm.

### _Scan Frames and Scans_

In this work, we tackle the mapping problem in 2D, and therefore localization and mapping are performed in one plane. Since each ToF sensor provides an 8\(\times\)8 distance matrix, we need to reduce it to an 8-element array that is compatible with our 2D scenario. Hence, we derive a single measurement from each column of the ToF matrix. To this end, we only consider the center four pixels of each column, discard any invalid pixels, and take the median of the remaining values. Should none of the pixels be valid, the entire column is discarded. This occurs, for example, when there is no obstacle within the \(4\,\mathrm{m}\) range of the sensor.

Let \(\mathbf{x}_{k}=(x_{k},y_{k},\psi_{k})\) be the state of the drone (i.e., its _pose_) at timestamp \(k\), expressed in the world coordinate frame. Furthermore, let \(i\in\{1,2,3,4\}\) denote the index among sensors and \(j\in\{1,2,\ldots,8\}\) the index among the zones of each sensor. Thus, \(d_{k}^{ij}\) represents the distance the \(i\)-th sensor provides for the \(j\)-th zone. Equation 1 provides the function that projects a distance measurement \(d_{k}^{ij}\) acquired at pose \(\mathbf{x}_{k}\) into the world coordinate frame. Figure 1 provides a graphical representation of the variables in Equation 1, where the world frame and the drone's body frame are represented in black and green, respectively. The heading angle \(\psi_{k}\) represents the rotation between the drone's body frame and the world frame, \(\beta_{i}\in\{0^{\circ},90^{\circ},180^{\circ},270^{\circ}\}\) represents the fixed rotation of each sensor w.r.t. the drone's body frame, and \(\mathbf{R}\) is the 2D rotation matrix.
Furthermore, \(o_{x}^{i}\) and \(o_{y}^{i}\) represent the offset of ToF sensor \(i\) w.r.t. the drone's center \(O\), expressed in the body frame of the drone. \(d_{k}^{ij}\) is the projection of a distance measurement on the \(OX\) axis of the ToF sensor's coordinate frame (marked in blue in Figure 1a), and the sensor provides it directly. The y-coordinate of the measurement in the same coordinate frame is calculated as \(\tan(\theta_{j})\cdot d_{k}^{ij}\), where \(\theta_{j}\) is the angle of the zone.

\[\mathbf{s}(\mathbf{x}_{k},d_{k}^{ij})=\begin{pmatrix}x_{k}\\ y_{k}\end{pmatrix}+\mathbf{R}_{(\psi_{k}+\beta_{i})}\begin{pmatrix}d_{k}^{ij}+o_{x}^{i}\\ \tan(\theta_{j})\cdot d_{k}^{ij}+o_{y}^{i}\end{pmatrix} \tag{1}\]

We define the _scan frame_ as the set containing the 2D projections of all distance measurements for a particular timestamp \(k\). The size of a scan frame is, at most, 32 points - 4 sensors \(\times\) 8 zones - as some distance measurements might be invalid and, therefore, not included in the scan frame. A scan frame is thus obtained by applying Equation 1 to each of the 32 measurements. The projection of the distance measurements is required for the scan-matching algorithm. However, the 32 points in a scan frame are still too sparse to compute accurate scan matching. We solve this problem by stacking 15 scan frames in a set that we call a _scan_. Specifically, once the nano-UAV decides to acquire a scan, it appends a new scan frame with a frequency of \(7.5\,\mathrm{Hz}\) until it reaches the count of 15. The four ToF sensors have a cumulative FoV of \(180^{\circ}\) - \(45^{\circ}\) per sensor. To maximize the scan coverage, the drone also rotates in place by \(45^{\circ}\) on the yaw axis while acquiring the scan, resulting in a \(360^{\circ}\) coverage. In conclusion, we can define a scan as \(\mathbf{S}_{k}=\{\mathbf{s}(\mathbf{x}_{\tilde{k}},d_{\tilde{k}}^{ij})\mid i\leq 4;\;j\leq 8;\;k\leq\tilde{k}<k+15;\;i,j,\tilde{k}\in\mathbb{N}^{\ast}\}\), where each scan \(\mathbf{S}_{k}\) has an associated pose \(\mathbf{x}_{k}\), acquired right before the scan acquisition starts. The resulting size of a scan is, at most, 480 points. This setting was empirically selected based on the trade-off between the need to have sufficient points for ICP-based scan matching to produce accurate solutions and the memory footprint.
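For concreteness, the following minimal Python sketch (not the flight code) mirrors the pipeline above: the median-of-center-pixels column reduction and the world-frame projection of Equation 1. The zone angles and sensor mounting offsets below are illustrative placeholders, not calibrated values from the paper.

```python
# Sketch of the scan-frame construction; assumed values are marked as such.
import numpy as np

ZONE_ANGLES = np.deg2rad(np.linspace(-19.7, 19.7, 8))  # theta_j: assumed split of the 45-deg FoV
SENSOR_YAW = np.deg2rad([0.0, 90.0, 180.0, 270.0])     # beta_i for the four sensors

def reduce_column(col_mm, col_valid):
    """Reduce one 8-pixel ToF column to a single distance in meters, or None."""
    center = col_mm[2:6][col_valid[2:6]]               # keep only the valid center pixels
    return None if center.size == 0 else float(np.median(center)) / 1000.0

def project(pose, d, i, j, offsets):
    """Equation 1: project distance d from sensor i, zone j into the world frame."""
    x, y, psi = pose
    a = psi + SENSOR_YAW[i]
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    local = np.array([d + offsets[i][0], np.tan(ZONE_ANGLES[j]) * d + offsets[i][1]])
    return np.array([x, y]) + rot @ local

OFFSETS = [(0.02, 0.0)] * 4   # assumed 2 cm radial mounting offset per sensor
pt = project((0.0, 0.0, 0.0), d=1.5, i=0, j=3, offsets=OFFSETS)
```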
### _Iterative Closest Point_

Scan matching is the process of aligning two scans such that overlapping features line up correctly. The result of scan matching is the optimal rotation and translation that, applied to one scan, make its features overlap with those of the other scan. Since each scan has an associated pose, the transformation between two scans is the same as the transformation between their associated poses. This is critical for correcting errors in odometry estimation that lead to the misalignment of poses acquired at different times or locations. We choose the ICP algorithm for scan matching and describe a formulation of ICP optimized for a practical implementation on-board nano-UAVs.

Fig. 1: Illustrations of the four ToF sensors (Figure 1a) and the pose graphs (Figures 1b and 1c).

We define two scans \(P=\{\mathbf{p}_{1},\dots,\mathbf{p}_{N}\}\) and \(Q=\{\mathbf{q}_{1},\dots,\mathbf{q}_{M}\}\) and omit the time index in the notation for simplicity. Scan matching can be formulated as a least-squares problem, and finding the optimal transformation between \(P\) and \(Q\) is equivalent to solving the optimization problem in Equation 2 [36]. However, this formulation assumes known data associations - i.e., which point in scan \(Q\) corresponds to each point in scan \(P\). Under this assumption, solving Equation 2 leads to the optimal solution without requiring an initial guess. In the ideal case, when the scans are identical, applying \(\mathbf{R}^{\ast}\) to each point of scan \(P\) and then adding the translation \(\mathbf{t}^{\ast}\) should result in a perfect overlap with scan \(Q\), and the same holds for the poses \(\mathbf{x}_{P}\) and \(\mathbf{x}_{Q}\) associated with the two scans.

\[\mathbf{R}^{\ast},\mathbf{t}^{\ast}=\operatorname*{arg\,min}_{\mathbf{R},\mathbf{t}}\sum_{i}\|\bm{q}_{i}-(\mathbf{R}\mathbf{p}_{i}+\mathbf{t})\|^{2} \tag{2}\]

However, no prior knowledge of the correspondences is available in real-world applications, and a heuristic is required. A common method for establishing the correspondence of a given point in one scan is to find the closest point in the other scan in terms of Euclidean distance [36]. Once the correspondences are calculated, Equation 2 determines the optimal transformation. The iterative alternation between calculating the correspondences and computing the optimal transformation, until the mean Euclidean distance between the correspondence pairs reaches a minimum, is called ICP. Its accuracy is mainly impacted by how precisely the correspondences are determined.
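To make the alternation concrete, the following is a compact numpy sketch of the 2D ICP loop just described: nearest-neighbor correspondences followed by the closed-form solution of Equation 2 via SVD. It is an illustrative re-implementation, not the on-board code; the iteration limit and convergence threshold are arbitrary.

```python
# Minimal 2D ICP sketch: alternate correspondences and the Equation 2 solution.
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form solution of Equation 2 for paired points P[i] <-> Q[i]."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)                 # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, q_bar - R @ p_bar

def icp(P, Q, iters=30, tol=1e-6):
    """Align scan P to scan Q; returns rotation R and translation t."""
    R, t = np.eye(2), np.zeros(2)
    prev_err = np.inf
    for _ in range(iters):
        moved = P @ R.T + t
        # nearest-neighbor correspondences (the O(n^2) search noted in Sec. VI-D)
        d2 = ((moved[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)
        err = np.sqrt(d2[np.arange(len(P)), nn]).mean()
        if prev_err - err < tol:
            break
        prev_err = err
        R, t = best_rigid_transform(P, Q[nn])
    return R, t
```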
### _Simultaneous Localization and Mapping_

Producing an accurate map requires precise robot position estimation. In most indoor scenarios (including ours), the position and heading of the drone are computed by integrating velocity and angular velocity measurements, respectively. However, these measurements are affected by sensor noise, and integrating noisy data over time results in drift. We employ SLAM to correct the errors in the trajectory of the robot caused by imprecise odometry measurements; the approach is equally valid with absolute range-based positioning systems [8]. We employ the algorithmic approach to graph-based SLAM introduced in [10] and implement an optimized version that allows the algorithm to run on-board resource-constrained nano-UAVs. Within this approach, the UAV's poses \(\mathbf{X}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\}\) are modeled as graph nodes, and the odometry measurements \(\mathbf{z}_{ij}\) as graph edges. An odometry measurement \(\mathbf{z}_{ij}=(\Delta_{x},\Delta_{y},\Delta_{\psi})\) is expressed in the coordinate frame of \(\mathbf{x}_{i}\) and represents the relative transformation between poses \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\). The work in [10] formulates SLAM as a least-squares optimization problem that determines the optimal poses given the pose \(\mathbf{x}_{0}\) and a set of edges \(\mathbf{z}_{ij}\). The optimization problem is given by Equation 3, where the number of terms in the sum is equal to the number of edges in the directed graph and \(\Omega\) is the diagonal information matrix, which weighs the importance of each term in the sum.

\[\mathbf{e}_{ij}=\mathbf{z}_{ij}-\hat{\mathbf{z}}_{ij}(\mathbf{x}_{i},\mathbf{x}_{j})\]
\[\mathbf{X}^{*}=\operatorname*{arg\,min}_{\mathbf{X}}\sum_{i,j}\mathbf{e}_{ij}^{T}\Omega\mathbf{e}_{ij} \tag{3}\]

Figure 1b shows a simple pose graph example that is nonetheless representative of real-world scenarios. Since every node is connected to the previous one by an odometry measurement, at least one path exists between any two given nodes. The optimization solution given by [10] to the problem in Equation 3 is based on the iterative Gauss-Newton method, and an initial guess is required for the poses, which we obtain from the nano-UAV's odometry measurements. Since the drone directly obtains the odometry measurements \(\mathbf{z}_{01},\mathbf{z}_{12},\ldots,\mathbf{z}_{N-1,N}\) from the state estimator, it is straightforward to compute the initial guess of the poses \(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\) by forward integration w.r.t. \(\mathbf{x}_{0}\). This is the best guess available, and it is quite close to the real values as long as the odometry drift stays in bounds - typically below \(0.5\,\mathrm{m}\). At this point, solving the optimization problem in Equation 3 for the measurements \(\mathbf{z}_{ij}\) would lead to no change in the poses, because the pose values after forward integration are already in agreement with the measurements. Assuming that poses \(\mathbf{x}_{N}\) and \(\mathbf{x}_{0}\) are close enough in terms of Euclidean distance, one can use the ICP algorithm introduced in Section IV-B to generate a direct relation between the two poses - which we call a _virtual edge_ or a _constraint_. The virtual edges take part in the optimization process just like the other edges, as additional terms in the sum of Equation 3. Moreover, since the scans are very accurate, so is the result of the ICP, and, therefore, the virtual edge measurements are more precise than the edges associated with odometry measurements. Consequently, the information matrix \(\Omega\) associated with the virtual edges takes the value \(20\mathbf{I}\), while the information matrix for the odometry edges is \(\mathbf{I}\); these values were determined empirically.

So far, we have presented how we integrate the graph-based algorithm from [10] with ICP to optimize a single drone's trajectory (i.e., its poses). In the following, we show how we apply SLAM when using multiple drones. Figure 1c illustrates the pose graphs of two drones following two independent trajectories. For the sake of readability, we omit the notation for the edge measurements and use the superscript in the pose notation to indicate which drone the pose is associated with. Within the SLAM formulation introduced so far, the two trajectories would result in two disconnected graphs. However, knowing the locations where the trajectories intersect, the drones can use ICP to determine how closely located poses (e.g., \(\mathbf{x}_{0}^{0}\) and \(\mathbf{x}_{0}^{1}\)) relate and therefore create a connection between the two graphs for each intersection point. This approach enables both loop closure and the alignment of the trajectories of multiple drones in a common coordinate system. While the scenario in Figure 1c only refers to two drones, the approach scales to any number of drones in the swarm. Putting together the pose graphs associated with multiple drones results in a larger graph that we call _the global graph_. Since the size of the resulting global graph is the sum of all subgraph sizes, there is no difference in computation and memory requirements between a swarm of \(m\) drones with \(N\) poses each and a single drone with \(m\cdot N\) poses. Since the graph optimization is only executed on a single drone at a time, the main limitation of the system is the maximum number of poses that the SLAM algorithm can optimize within an acceptable time frame. However, this limitation could be further relaxed by dividing the global graph. For example, in the situation depicted in Figure 1c, optimizing the graph \(\mathbf{x}_{0}^{1,0},\mathbf{x}_{1}^{1,0},\ldots,\mathbf{x}_{C}^{1,0}\) and then using the optimized value of \(\mathbf{x}_{C}^{1,0}\) as a constraint for optimizing \(\mathbf{x}_{C}^{1,0},\mathbf{x}_{C+1}^{1,0},\ldots,\mathbf{x}_{N}^{1,0}\) would yield results very similar to optimizing the whole global graph at once.
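To illustrate Equation 3 and the role of the \(\Omega\) weights, the following is a deliberately simplified sketch: headings are ignored so that poses are plain 2D positions and every edge prediction is \(\hat{\mathbf{z}}_{ij}=\mathbf{x}_{j}-\mathbf{x}_{i}\), which makes the problem linear and solvable in one least-squares step. The real system optimizes \((x,y,\psi)\) with Gauss-Newton as in [10]; only the weights (1 for odometry, 20 for ICP virtual edges) follow the values reported above.

```python
# Simplified, position-only pose-graph optimization (an assumption of this
# sketch; the on-board solver is the nonlinear Gauss-Newton method of [10]).
import numpy as np

def optimize_positions(n, edges, x0=np.zeros(2)):
    """edges: list of (i, j, z_ij, omega). Pose 0 is fixed at x0."""
    rows, rhs, w = [], [], []
    for i, j, z, omega in edges:
        a = np.zeros(n)
        if i > 0: a[i] = -1.0
        if j > 0: a[j] = 1.0
        shift = (x0 if i == 0 else 0) - (x0 if j == 0 else 0)  # fixed pose moved to rhs
        rows.append(a); rhs.append(z + shift); w.append(np.sqrt(omega))
    A = np.array(rows) * np.array(w)[:, None]
    b = np.array(rhs) * np.array(w)[:, None]
    X = np.linalg.lstsq(A, b, rcond=None)[0]
    X[0] = x0
    return X

# Three odometry steps around a square plus one ICP "virtual edge" (weight 20)
# closing the loop back to pose 0; the drifted odometry gets pulled into shape.
edges = [(0, 1, np.array([1.0, 0.0]), 1.0),
         (1, 2, np.array([0.0, 1.1]), 1.0),    # slightly drifted odometry
         (2, 3, np.array([-1.0, 0.0]), 1.0),
         (3, 0, np.array([0.0, -1.0]), 20.0)]  # loop-closure constraint
print(optimize_positions(4, edges))
```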
### _Autonomous Exploration_

Given that the environment is unknown, an autonomous exploration policy is needed to guide the nano-UAVs through it. The policy is deliberately designed to choose different paths for each drone, assuming sufficient distinct paths exist. To simplify the evaluation, we assume an environment consisting of perpendicular walls and equal-width corridors; however, our mapping system can work in environments with arbitrary geometry with minor adjustments to the exploration strategy. The autonomous exploration policy is based on following walls or corridors and relies on the ToF sensor information to avoid obstacles. More in detail, it computes the minimum distance from the ToF matrix in each direction and uses this information to make decisions. The motion commands are separated along two axes: primary and secondary. The primary axis is the direction the drone aims to explore; the secondary axis is perpendicular to it, and motion along this axis should be minimal.

The motion along the _secondary axis_ is commanded so that the drone maintains a constant distance from the walls. A proportional velocity controller with a gain of \(v_{stab}=2\,\mathrm{s}^{-1}\) determines the velocity set point based on the distances to the walls on either side - i.e., \(d_{L}\) and \(d_{R}\). When the drone detects that it is located in a corridor (i.e., walls on both sides), it attempts to center itself by targeting a velocity \(v_{sec}=(d_{L}-0.5\cdot(d_{L}+d_{R}))\cdot v_{stab}\). Otherwise, if there is a wall within \(1\,\mathrm{m}\) of either side of the drone, it attempts to hold a target distance of \(d_{wall}=0.5\,\mathrm{m}\) to the wall by applying \(v_{sec}=(d_{wall}-d_{L,R})\cdot v_{stab}\). These values are chosen assuming that the width of corridors is approximately \(1\,\mathrm{m}\), but the parameters can be adapted to different environments. Wall-following is effective for scan matching, since walls and corners typically provide feature-rich scans while avoiding the frequent out-of-range measurements associated with large open spaces.

The _primary axis_ describes the direction of exploration. Its control is based on waypoints: the navigation policy continuously seeks the next waypoint, which is always located about \(1\,\mathrm{m}\) away from the previous one in the direction of motion. At each waypoint, the drone adds a new pose to its graph and acquires and stores a new scan. Equation 4 describes the proportional velocity control along the primary axis, where \(v_{exp}\) is the nominal exploration velocity, \(d_{w}\) is the remaining distance to the next waypoint, and \(d_{slow}=0.75\,\mathrm{m}\) determines the distance to a waypoint at which the drone starts slowing down. Additionally, increases in velocity are limited to an acceleration of \(a_{exp}=0.5\,\mathrm{m/s^{2}}\) to smooth the drone's motion. When the drone faces an obstacle in the direction of exploration before reaching the next waypoint, it immediately chooses a new direction for exploration - left or right, depending on the predefined priority. The primary and secondary axes are then switched, and exploration continues. If the only option is to go back the way the drone came (i.e., a dead-end), it lands and finishes exploring.

\[v_{pri}(d_{w})=\begin{cases}(\frac{d_{w}}{d_{slow}}+0.1)v_{exp}&\text{if }d_{w}<d_{slow}\\ v_{exp}&\text{otherwise}\end{cases} \tag{4}\]
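A small sketch of the two velocity controllers just described, using the constants reported in this section (\(v_{stab}=2\,\mathrm{s}^{-1}\), \(d_{wall}=0.5\,\mathrm{m}\), \(d_{slow}=0.75\,\mathrm{m}\), \(a_{exp}=0.5\,\mathrm{m/s^{2}}\)). The sign convention for the single-wall case is an assumption, since the paper only gives the magnitude law.

```python
# Illustrative exploration-controller sketch; not the on-board implementation.
V_STAB, D_WALL, D_SLOW, A_EXP = 2.0, 0.5, 0.75, 0.5

def v_secondary(d_left, d_right):
    """Wall-distance keeping along the secondary axis."""
    if d_left < 1.0 and d_right < 1.0:                 # corridor: center between walls
        return (d_left - 0.5 * (d_left + d_right)) * V_STAB
    if d_left < 1.0:                                   # single wall on the left (sign assumed)
        return (D_WALL - d_left) * V_STAB
    if d_right < 1.0:                                  # single wall on the right (sign assumed)
        return -(D_WALL - d_right) * V_STAB
    return 0.0

def v_primary(d_w, v_exp):
    """Equation 4: slow down when approaching the next waypoint."""
    return (d_w / D_SLOW + 0.1) * v_exp if d_w < D_SLOW else v_exp

def rate_limit(v_cmd, v_prev, dt):
    """Limit increases in speed to a_exp, smoothing the drone's motion."""
    return min(v_cmd, v_prev + A_EXP * dt)
```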
Given that we are interested in operating a swarm of drones that, in most situations, start from the same location, it would be sub-optimal if all drones had identical flight paths. At the same time, for testing purposes, the individual drones' trajectories should be predictable and repeatable. To customize the exploration policy, we choose a different initial heading for each drone, which changes how the primary axis is initially selected. Furthermore, each drone in the swarm can be given a different steering priority (i.e., left or right), which dictates which direction to take when facing a forward obstacle.

## V Swarm Coordination

This section discusses the methodology used to enable swarm mapping and inter-drone radio communication. In particular, it introduces the communication protocol and the types of messages the drones exchange with each other. Furthermore, it explains how the drones coordinate the mapping mission within the swarm and how they exchange poses and scans to create a map. Lastly, a scalability study is provided.

### _Communication: Physical Layer_

The physical layer is provided by the Crazyflie P2P interface enabled through its on-board radio transceiver. The P2P API provides basic packet-based communication primitives with best-effort delivery. No routing is implemented in this layer, meaning all packets are broadcast without any form of acknowledgment. Further, there is neither media access control nor flow control over the radio link. Since this layer provides completely unreliable packet delivery, all link-, network-, and transport-layer functionality must be implemented on top of it. Given the limited capabilities of nano-UAVs, it is desirable to leverage the existing hardware already available on-board rather than adding more complex communication transceivers that would result in more weight and additional power consumption. Therefore, we rely on the available P2P communication and develop a robust and scalable communication protocol, which we present later in this section. Since we also need to capture data from the swarm on a base-station computer (for monitoring and evaluation purposes only), we configure an additional _bridge drone_ that remains grounded but relays all received packets to the computer over the built-in USB link.

### _Communication: Transport Layer_

On top of the physical layer, we designed a transport-layer protocol to provide reliable message-based networking with broadcast and unicast capabilities. The basic principles are derived from ALOHAnet, but rather than implementing a TCP/IP stack, the protocol merges the link, network, and transport layers to provide a much simpler communication flow. In order to simplify the protocol, we make some assumptions about the communication and impose some limitations. Most importantly, each transmitter may only attempt to transmit a new packet once all intended recipients have acknowledged the previous packet. This significantly reduces the complexity of the receiver and provides rudimentary flow control. Further, we assume that all drones are within communication range of each other, which allows us to forego a mesh-networking scheme that would be out of the scope of this work.
We also assume that all transmissions are message-based rather than stream-based. We recognize that in our use case, most messages are short and that large messages are transmitted infrequently and are not broadcast. The physical layer packets carry at most 60 bytes of user data, requiring messages to be split into individual packets and reassembled at the receiver. Given the first limitation we imposed above, reassembly is trivial. We use the 16-bit wide packet header of the following structure to implement the protocol: 4-bit _source_ and _destination_ field, _acknowledge_ bit, _end_ bit, _tag_ field, and _sequence number_ field. The source and destination refer to the addresses of the transmitter and receiver drone. A destination field of address 0xF indicates a broadcast, while the source field must not contain the broadcast address. The sequence number of each packet is an increasing unsigned integer that is used to deduplicate packets that have been retransmitted. When receiving a packet from another drone, the receiver responds with an empty packet with the acknowledge flag set. Once the sender receives the acknowledgment (ACK) packet from all targeted drones, it may send another message. Message boundaries are indicated by setting the end flag on the last packet of a message. The application layer uses the tag field to help identify the type of message received. No length field is included in the header since a lower layer header already provides this information. ### _Communication: Message types_ In the application layer, we distinguish between four different types of messages that are each identified using the tag field in the protocol header. Firstly, there is the _pose update message_ (PUM) - 16 bytes: this message is used to synchronize the pose graph between drones. Each message contains a pose data structure and is broadcast to the entire swarm. The same message type is used both to register new poses and update existing poses. Since each node is identified by a globally unique identifier, these two scenarios can be distinguished by checking whether the pose identifiers already exist. The next message type is the _ToF scan request_ (TSR) - 4 bytes: this is a unicast message that is sent by the main drone to request the transfer of a ToF scan. The body of this message is a pose identifier associated with the scan that is being requested. Paired with the ToF scan request, we introduce the _ToF scan response_ (SR) - 1146 bytes: this message is sent in response to a ToF Scan Request message. The body contains the 2D points in the requested scan. Lastly, we have the _control messages_ - 16 bytes: this class of messages contains control flow instructions, such as the takeoff or landing commands sent by the base station through the bridge drone. ### _Mission Coordination_ Even if identical in terms of hardware, the drones can be classified by their functionality. Firstly, we have the bridge drone that we introduced above. Then, we have the flying drones in the swarm that explore the environment and acquire scans and poses. Among the flying drones in the swarm, one drone is elected as a leading _main drone_. Although all drones contribute to the mapping task, the main drone collects relevant scans, performs the ICP and SLAM computation to optimize the poses of the global graph, and propagates the results back to the swarm. 
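The header layout above can be packed into a single 16-bit word. The paper fixes the source/destination fields at 4 bits and the ack/end flags at 1 bit each; how the remaining 6 bits are split between the tag and the sequence number is not stated, so the 2/4 split below is an assumption of this sketch, as are the example tag values.

```python
# Hypothetical packing of the 16-bit transport header (tag/seq widths assumed).
def pack_header(src, dst, ack, end, tag, seq):
    assert 0 <= src < 15 and 0 <= dst <= 15        # 0xF is reserved for broadcast dst
    return (src << 12) | (dst << 8) | (int(ack) << 7) | (int(end) << 6) \
           | ((tag & 0x3) << 4) | (seq & 0xF)

def unpack_header(h):
    return {"src": (h >> 12) & 0xF, "dst": (h >> 8) & 0xF,
            "ack": bool(h & 0x80), "end": bool(h & 0x40),
            "tag": (h >> 4) & 0x3, "seq": h & 0xF}

# e.g., drone 2 broadcasting the last packet of a pose update (tag 0 assumed):
h = pack_header(src=2, dst=0xF, ack=False, end=True, tag=0, seq=7)
assert unpack_header(h)["dst"] == 0xF
```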
### _Mission Coordination_

Even if identical in terms of hardware, the drones can be classified by their functionality. Firstly, there is the bridge drone introduced above. Then, there are the flying drones in the swarm that explore the environment and acquire scans and poses. Among the flying drones, one is elected as the leading _main drone_. Although all drones contribute to the mapping task, the main drone collects the relevant scans, performs the ICP and SLAM computation to optimize the poses of the global graph, and propagates the results back to the swarm.

Despite applying a centralized approach to execute the SLAM algorithm, the architecture does not, in fact, require the main drone to store all of the data acquired by the other drones. Memory-intensive data, such as the individual ToF scans captured by other drones, can be loaded from the swarm on demand. This is important to ensure that the approach can scale with the size of the swarm, given the tight memory constraints of individual drones and the bandwidth constraints of the entire swarm, and to increase the overall swarm robustness by avoiding single points of failure. The pose graph is the only data shared and synchronized between all drones in the swarm; it is lightweight and scales linearly with the total length of all flight paths in the swarm. Given that the pose graph is shared between all drones and all other data can be loaded on demand, the architecture provides fault tolerance for the main drone: should the main drone fail, any of the remaining drones is able to assume its role.

Before the mapping mission starts, all drones are placed in known positions. When the bridge drone broadcasts the take-off message, all drones start exploring the environment according to the exploration policy introduced in Section IV-D. Whenever a drone reaches a waypoint (i.e., every \(1\,\mathrm{m}\)), it adds a new pose to the graph and then broadcasts it to the swarm. Furthermore, a new scan is acquired and stored in the external flash memory. For every new pose, the main drone checks whether it is within proximity of another pose, indicating that there should be sufficient overlap between the corresponding ToF scans to perform scan matching. Should this be the case, the main drone requests the relevant scan data from the appropriate drone, executes ICP, and adds a virtual edge to the pose graph. At the end of the mission, the main drone executes the SLAM algorithm and broadcasts the corrected pose information back to the swarm. The pose graph is optimized at the end of the mission to simplify the evaluation, but due to the relatively low execution time of SLAM (i.e., several seconds) w.r.t. the mission time (i.e., several minutes), the optimization can also happen multiple times during the mission. After SLAM is run, the optimized poses are used to correct the scans, and the map is obtained by merging all scans together.

### _Scalability_

We now propose a performance and scalability analysis of the presented communication protocol. In particular, we consider the required bandwidth for broadcast communications w.r.t. the number of swarm agents, normalized by the pose-to-pose distance \(d\) (\(1\,\mathrm{m}\) in our work). Hence, we define the required swarm bandwidth \(B_{s}(N,d)\) as a function of the number of agents \(N\) and the pose-to-pose distance \(d\), as shown in Equation 5. As introduced earlier in this section, _PUM_, _TSR_, and _SR_ are 16 bytes, 4 bytes, and 1146 bytes, respectively. Furthermore, _ACK_ is 2 bytes, since an acknowledgment requires sending a complete header without any payload. \(P_{sm}\) defines the probability that two drones fly through the same location - i.e., the probability of requiring scan matching. In general, a term \(\chi_{upd}\cdot(PUM_{tot}+f_{map}\cdot(ACK\cdot N))\) should also be added to \(B_{s}(N,d)\) in Equation 5, representing the overhead of broadcasting the map from the main drone to the swarm.
However, \(\chi_{upd}\) defines the map update rate - i.e., how often the map is broadcast to all drones after running SLAM - and it is zero in our work, since we only run SLAM once at the end of the mission. \(f_{map}\) and \(f_{scan}\) are the packet fragmentation ratios - i.e., the minimum integer numbers of 60-byte packets needed to send a map and a scan, respectively. Since the size of a scan is 1146 bytes, \(f_{scan}=20\), while \(f_{map}\) depends on the environment but is bounded by the size of the flash.

\[B_{s}(N,d)=\underbrace{\frac{N}{d}\cdot(PUM+ACK\cdot(N-1))}_{pose\ broadcast}+\underbrace{\frac{N}{d}\cdot P_{sm}\cdot(TSR+SR+f_{scan}\cdot ACK)}_{scan\ broadcast} \tag{5}\]

Considering the practical experiments reported in Section VI with 2 or 4 drones, the required bandwidth when acquiring a new pose every \(1\,\mathrm{m}\) is \(4.05\,\mathrm{kbit/s}\) and \(8.24\,\mathrm{kbit/s}\), respectively. Notably, the bandwidth requirement scales approximately linearly. Considering the Crazyflie P2P interface, the maximum measured bitrate in our swarm is \(64.1\,\mathrm{kbit/s}\), which would be able to support a swarm composed of 20 agents. However, if a \(6\,\mathrm{Mbit/s}\) UWB link were used instead [8] - as commonly done on nano-UAV platforms - the maximum swarm size could reach 500 with \(d=1\,\mathrm{m}\).
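As a small sketch of the traffic model in Equation 5, the function below uses the message sizes given above (PUM 16 B, TSR 4 B, SR 1146 B, ACK 2 B, \(f_{scan}=20\)). Equation 5 is normalized by the pose-to-pose distance \(d\), so the sketch returns traffic per meter of flight; converting to the kbit/s figures quoted in the text additionally requires the flight speed, and \(P_{sm}\) is a free parameter, so the printed numbers are illustrative only.

```python
# Hedged sketch of Equation 5; P_sm and the per-meter normalization are
# assumptions of this sketch, not calibrated values from the paper.
PUM, TSR, SR, ACK, F_SCAN = 16, 4, 1146, 2, 20  # message sizes in bytes

def traffic_kbit_per_meter(n, d=1.0, p_sm=0.1):
    pose_bcast = (n / d) * (PUM + ACK * (n - 1))            # pose broadcast term
    scan_bcast = (n / d) * p_sm * (TSR + SR + F_SCAN * ACK) # scan transfer term
    return 8.0 * (pose_bcast + scan_bcast) / 1000.0

for n in (2, 4, 20):
    print(f"{n} drones: {traffic_kbit_per_meter(n):.2f} kbit/m")
```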
## VI Results

In this section, we provide a quantitative analysis of ICP and SLAM in terms of scan-matching accuracy, trajectory correction, and mapping accuracy. All experiments are performed in our testing arena, equipped with a Vicon Vero 2.2 motion-capture system (mocap) for ground-truth measurements of the drones' poses. To assess the localization and mapping capabilities of our swarm, we build several mazes out of \(100\,\mathrm{cm}\times 80\,\mathrm{cm}\) chipboard panels.

### _ICP Results_

In the following, we evaluate the scan-matching accuracy in terms of rotation error and translation error. To this end, we place a drone in two close-by locations and acquire a scan in each of them. Then we execute ICP on-board and compute the relative transformation between the poses associated with the two scans. Furthermore, with the aid of the mocap, we also compute the ground-truth transformation, since the mocap precisely measures the position and heading at the two poses. To calculate the error, we use a method similar to the one proposed by Pomerleau et al. [38]. We express the ground-truth transformation and the one provided by ICP in homogeneous coordinates, denoted \(\boldsymbol{T}_{GT}\) and \(\boldsymbol{T}_{ICP}\). Since homogeneous coordinates unify rotation and translation in a single transformation, we calculate the ICP error using Equation 6. Specifically, the translation error is calculated as \(e_{t}=\|\Delta\boldsymbol{t}\|\), where \(\Delta\boldsymbol{t}\) comprises the translation errors on both the \(X\) and \(Y\) axes. The rotation error is retrieved from the rotation matrix as \(e_{R}=\cos^{-1}(\Delta\boldsymbol{R}_{00})\).

\[\Delta\boldsymbol{T}=\begin{bmatrix}\Delta\boldsymbol{R}&\Delta\boldsymbol{t}\\ \boldsymbol{0}^{\top}&1\end{bmatrix}=\boldsymbol{T}_{ICP}\boldsymbol{T}_{GT}^{-1} \tag{6}\]

Figure 2 presents the in-field results for two scenarios, where the scans are acquired in a corridor intersection (Figure 2a) and in a corner (Figure 2b). In both cases, the 2D points of each scan are represented in blue and orange, respectively. Green represents the scan created by applying the ICP transformation to scan \(X\), which in both cases approximately matches scan \(Y\). In the first experiment, depicted in Figure 2a, the ICP algorithm achieves a highly accurate solution with a translation error of only \(0.7\,\mathrm{cm}\) and a rotation error of \(0.98^{\circ}\). In contrast, the translation error in the second experiment (Figure 2b) is slightly higher (i.e., \(3.2\,\mathrm{cm}\)), while the rotation error is \(0.56^{\circ}\). The increased translation error is most likely due to the poorer structure of the scene, such as the smaller number of corners in the scan. Furthermore, in the bottom part of Figure 2b one can notice some artifacts: since the scan generation is based on the drone's state estimate, errors in this estimate can lead to outliers in the scan. Acquiring a scan requires the drone to spin around its z-axis, and the state estimate, which mainly relies on the on-board optical flow sensor, is not very stable during spinning.

Fig. 2: Translation and rotation error for the on-board computed ICP solutions using empirical ToF scans.

### _SLAM Results_

We recall that generating accurate maps requires accurate trajectory estimation, as the drone's trajectory represents the foundation for projecting the distance measurements into the world frame, as shown in Section IV-A. Therefore, we proceed by assessing the performance of SLAM in correcting the trajectory errors, and then generate the map and evaluate its accuracy. The experiments in this section involve only one drone.

To evaluate the performance of the SLAM algorithm in correcting the trajectory, we check how close the estimated poses are to the ground-truth poses provided by the mocap. Given a set of estimated poses \(\boldsymbol{x}_{1}\ldots\boldsymbol{x}_{n}\), the corresponding set of ground-truth poses is \(\boldsymbol{x}_{1}^{GT}\ldots\boldsymbol{x}_{n}^{GT}\). We use a reduced representation of the poses, each consisting only of the x and y coordinates. The pose estimation root-mean-squared error (RMSE) of the optimized poses w.r.t. the ground truth is computed as in Equation 7.

\[RMSE_{poses}=\sqrt{\frac{\sum_{i=1}^{n}\|\boldsymbol{x}_{i}-\boldsymbol{x}_{i}^{GT}\|^{2}}{n}} \tag{7}\]

To evaluate the pose-correction performance, we proceed with an experiment where the drone autonomously flies in a simple maze of square shape, with another square obstacle in the middle, as shown in blue in Figures 3b and 3c. The drone starts flying from the bottom-left corner of the square and completes the loop three times. It flies at a constant height of \(0.6\,\mathrm{m}\) with an exploration speed of \(v_{exp}=0.8\,\mathrm{m/s}\), acquiring a scan at each waypoint. Whenever it revisits a waypoint, it uses ICP scan matching to create SLAM constraints w.r.t. the scans acquired in the first trajectory loop. After flying the three loops, the poses are optimized by running SLAM.

Figure 3a shows the three trajectories: before SLAM (i.e., unoptimized), after SLAM (i.e., optimized), and the ground truth. The markers indicate the locations of the poses, while the lines interpolate between poses. The solid line indicates the unoptimized trajectory - i.e., as determined by the internal state estimator - while the dashed line represents the ground truth measured by the mocap. Furthermore, the dotted line shows the optimized flight path, which is closer to the ground truth. Specifically, the RMSE of the unoptimized and optimized poses is \(34.6\,\mathrm{cm}\) and \(19.8\,\mathrm{cm}\), respectively.
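Equation 7 reduces to a one-liner; a sketch is shown below, with the estimated and ground-truth poses given as \(n\times 2\) arrays of (x, y) positions.

```python
# Pose estimation RMSE (Equation 7) for position-only pose representations.
import numpy as np

def pose_rmse(est, gt):
    """est, gt: arrays of shape (n, 2) holding (x, y) per pose."""
    return np.sqrt((np.linalg.norm(est - gt, axis=1) ** 2).mean())
```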
We further evaluate the accuracy of the map, which is generated by putting together all the scans. Firstly, we propose a metric that quantifies the mapping accuracy. We define walls as line segments spanning between two endpoints. If a point projects onto the segment, we define the distance of the point to the wall as the shortest distance to the line through the endpoints; otherwise, we define it as the distance to the closest endpoint. Given a set of walls \(W\) and a set of 2D points in the map \(\mathbf{p}_{1}\cdots\mathbf{p}_{n}\), we define the mapping RMSE as in Equation 8. We apply this metric to the map generated with the unoptimized poses and show the result in Figure 3b, which leads to a mapping RMSE of \(25.1\,\mathrm{cm}\). Re-projecting the scans using the optimized poses results in the corrected map shown in Figure 3c. The mapping RMSE of the corrected map is \(16\,\mathrm{cm}\), proving that applying the correction based on ICP and SLAM improves the mapping accuracy by about 35%. Also this time we observe some artifacts, particularly at the corners of the walls, which are caused by inaccurate state estimation during the yaw rotation of a scan. Note, however, that the map appears scaled by some constant factor. This is due to the drone's state estimator, which consistently overestimates distances. This problem could be mitigated by performing odometry calibration, which is out of scope here and left for future work.

\[RMSE_{map}=\sqrt{\frac{\sum_{i=1}^{n}(\min_{\forall w\in W}dist(w,\mathbf{p}_{i}))^{2}}{n}} \tag{8}\]

The experiment above was run with an exploration velocity \(v_{exp}=0.8\,\mathrm{m/s}\). If we run the same experiment again but with \(v_{exp}=0.2\,\mathrm{m/s}\), we obtain a pose estimation RMSE of \(20.4\,\mathrm{cm}\) and \(14.5\,\mathrm{cm}\) for the unoptimized and optimized poses, respectively. Note that even without any SLAM optimization, the pose estimation RMSE is smaller at the lower velocity, indicating that the odometry drift is strongly related to the drone's velocity.
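The point-to-wall distance underlying Equation 8 is the standard point-to-segment distance; a short sketch follows.

```python
# Mapping RMSE (Equation 8) with point-to-segment wall distance.
import numpy as np

def dist_to_wall(p, a, b):
    """Distance from point p to the wall spanning endpoints a and b."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)  # clamp projection onto segment
    return np.linalg.norm(p - (a + t * ab))

def map_rmse(points, walls):
    """points: (n, 2) array of map points; walls: list of (a, b) endpoint pairs."""
    d = [min(dist_to_wall(p, a, b) for a, b in walls) for p in points]
    return np.sqrt(np.mean(np.square(d)))
```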
### _Mapping Results_

So far, in this section, we evaluated the performance of ICP and SLAM, and we proved the benefit of applying SLAM for correcting the trajectory and generating a map with one drone. In the following, we focus on distributed mapping and evaluate the mapping capabilities when a swarm of drones is employed. To prove the generalization capabilities of our system, we perform three experiments, each consisting of mapping a different maze. We recall that the swarm-based experiments imply having one main drone, which collects the poses and scans from the other drones, runs the optimization, and then sends out the corrected poses. We proved in Section VI-B that the exploration velocity impacts the odometry accuracy. However, since a very low velocity would result in a large mapping time, we set \(v_{exp}=0.4\,\mathrm{m/s}\) for the main drone and a larger velocity \(v_{exp}=0.8\,\mathrm{m/s}\) for the other drones as a trade-off between mapping accuracy and time.

#### VI-C1 First Experiment

The scenario of the first experiment is described in Figure 4a. The green and red markings denote the takeoff and landing locations of all drones, connected by the expected flight path of each drone (dashed lines). The crosses mark the expected locations of the poses acquired during autonomous exploration. The unoptimized and optimized trajectories are shown in Figure 4b along with the ground-truth poses for both drones - shown in blue and orange. Indeed, the pose estimation RMSE for the main drone is \(15.9\,\mathrm{cm}\), and almost 31% higher for the other drone (i.e., \(20.8\,\mathrm{cm}\)). The SLAM optimization is able to improve the situation significantly, reducing the overall pose estimation RMSE for both drones by about 25%. Inspecting Figure 4b once again, we can see that the optimized poses (orange dotted line) of drone 1 are now close to the ground truth (orange dashed line). Similarly, comparing the generated map in Figure 4c to Figure 4d shows a clear improvement, with the mapping RMSE of the point cloud improving by 19%, from \(14.4\,\mathrm{cm}\) to \(11.6\,\mathrm{cm}\). The blue lines indicate the ground truth of where the walls are located.

\begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{3}{*}{Drones} & \multicolumn{4}{c}{Second experiment} \\ \cline{2-5} & \multicolumn{2}{c}{Pose estimation RMSE} & \multicolumn{2}{c}{Mapping RMSE} \\ & no-SLAM & SLAM & no-SLAM & SLAM \\ \hline 2 & \(45.5\,\mathrm{cm}\) & \(20.6\,\mathrm{cm}\) & \(30.9\,\mathrm{cm}\) & \(13.1\,\mathrm{cm}\) \\ 4 & \(28.6\,\mathrm{cm}\) & \(16.5\,\mathrm{cm}\) & \(22.5\,\mathrm{cm}\) & \(14.5\,\mathrm{cm}\) \\ \hline \multicolumn{5}{c}{Third experiment} \\ \hline 2 & \(34.2\,\mathrm{cm}\) & \(20.1\,\mathrm{cm}\) & \(25.3\,\mathrm{cm}\) & \(15.9\,\mathrm{cm}\) \\ 4 & \(21.8\,\mathrm{cm}\) & \(15.1\,\mathrm{cm}\) & \(17.9\,\mathrm{cm}\) & \(12.7\,\mathrm{cm}\) \\ \hline \hline \end{tabular} \end{table} TABLE II: Pose estimation and mapping accuracy for the second and third experiments.

Fig. 3: Mapping a square maze with a single nano-UAV.

Fig. 4: First experiment: mapping experiment with a swarm of two nano-UAVs.

#### VI-C2 Second Experiment

In the second experiment, we demonstrate the system's scalability as a swarm, evaluating not only the trajectory and mapping accuracy but also how the mapping time changes with the number of drones. To this end, we map the same layout twice, using swarms of two and four drones. The layout, as well as the intended flight paths, are visualized in Figure 5a for two drones and in Figure 5b for the scenario with four drones. The left half of Table II lists the accuracy of the pose estimations and mapping with and without SLAM optimization for each swarm configuration. In general, the accuracy of pose estimations and mapping improves with more drones due to the shorter flight path per drone required to map the same area. On average, the SLAM optimization approximately halves the pose estimation error. The reduction in the mapping RMSE when SLAM is employed is 58% and 36% for the situations with two and four drones, respectively. While the RMSE of the corrected map is about the same for two and four drones, the mapping time is heavily reduced, from 5 min 28 s with two drones to 2 min 38 s with four drones. While keeping the same takeoff and landing location, we can cover the same area in less than half the time with four drones compared to two. This result motivates the need for a drone swarm and proves the scalability potential. In the first experiment, we provided the generated map in a dense representation (i.e., Figure 4).
Now we use the approach presented in [39] to convert the dense map into an occupancy grid map with a resolution of \(10\,\mathrm{cm}\), shown in Figures 6a and 6b. We choose this representation as it is more meaningful for enabling future functionalities such as path planning.

#### VI-C3 Third Experiment

In this experiment, the objective is the same as in the second one, but the maze layout differs. The layout and the drones' trajectories are presented in Figure 5c and Figure 5d for two drones and four drones, respectively. For this experiment, the maze layout is slightly more complicated, to better approximate the diverse shapes of indoor environments. In this layout, it is not possible to use the same takeoff and landing locations for both swarm configurations. Instead, the drones in the two-drone swarm return to their starting point, while the drones in the four-drone swarm finish their flight paths in a different location. The right half of Table II presents the pose estimation and mapping RMSE. We observe an improvement in the pose estimation of 41% for the two-drone case and 31% for the four-drone case when using the SLAM optimization, while the mapping RMSE is reduced by 37% and 29%, respectively. Due to the particularly long flight paths of drones 1 and 3 in the four-drone swarm, this configuration only achieves a 20% mission-time advantage in this layout, as the mission times are 3 min 7 s and 2 min 30 s for the cases with two and four drones, respectively. We further remark that the accuracy figures when using SLAM optimization are similar to those observed in the previous experiment, as seen in Table II. As for the previous experiment, we provide the resulting occupancy grid maps, shown in Figures 6c and 6d.

### _Performance_

We evaluated the computation time required to run ICP and SLAM. Since execution speed is critical for real-time systems, we analyze how different settings of these algorithms impact the execution time. For the ICP algorithm, the main parameter one could change is the maximum number of 2D points in a scan. While too many points would lead to an unnecessarily large execution time, too few points might result in poor scan-matching accuracy. Figure 7a shows how the execution time varies with the size of the scan (with a step of 32), resulting in a purely quadratic dependency. This is expected, because the dominant computation in ICP is finding the correspondences - for each point in the first scan, finding the closest point in the second scan - which is implemented as a double for loop. Fitting the empirical measurements of Figure 7a with the quadratic function \(0.002\cdot x^{2}+0.0135\cdot x-3.38\) results in a match of over 99%. While all execution times reported in Figure 7a are below \(1\,\mathrm{s}\) and meet the real-time requirements, we choose a scan size of 480 as, according to our experiments, it is the smallest size that does not result in algorithmic divergence - i.e., finding a wrong solution. Memory is not a strong limitation, as the size of a scan is \(8\cdot n_{points}\) bytes - 8 because two floats are necessary to store one 2D point. Running ICP requires having two scans in RAM, and thus a scan size of 480 results in a RAM consumption of 7680 bytes, which is still much smaller than the total of \(50\,\mathrm{kB}\) available RAM.
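A quick sanity check of the reported quadratic timing model (interpreted here as milliseconds, consistent with the "below 1 s" statement) and of the stated RAM footprint at the chosen scan size of 480:

```python
# Evaluate the fitted ICP timing model and the two-scan RAM footprint.
def icp_time_ms(n_points):
    return 0.002 * n_points**2 + 0.0135 * n_points - 3.38

print(icp_time_ms(480))   # ~463.9 ms, consistent with "below 1 s"
print(2 * 8 * 480)        # 7680 bytes for two scans of 480 2D float points
```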
Next, we evaluate the execution time of SLAM as a function of both the number of poses and the number of constraints, shown in Figure 7b. We vary the number of constraints in the range \(2^{0}\) - \(2^{5}\), which is representative of the number of loop closures typically required in a real-world scenario. The number of poses is swept in the range 16 - 176, with a step of 32; 176 is the largest number of poses that can accommodate 32 constraints without overflowing the available RAM of \(50\,\mathrm{kB}\). The minimum execution time is \(98\,\mathrm{ms}\) for 16 poses with one constraint, and the maximum is \(7882\,\mathrm{ms}\) for the combination of 176 poses and 32 constraints. While quite large, even the maximum execution time of almost \(8\,\mathrm{s}\) is still feasible for a real-world mission. The only drawback is that the drones have to hover and wait while SLAM is running, which would increase the mission time by \(8\,\mathrm{s}\) every time SLAM is executed during the mission.

### _Discussion_

We now present the limitations of our system as well as possible improvements. The main goal of our work was to demonstrate that the novel multi-zone ToF sensor can enable new applications for nano-drones, which can generate accurate maps fully on-board thanks to our optimized implementations of ICP and SLAM. A notable limitation is the maximum mappable area that our system supports. As mentioned earlier in this work, this is mainly limited by the maximum number of poses that the SLAM algorithm can optimize at a time, as well as by the inter-pose distance. With the available on-board RAM of \(50\,\mathrm{kB}\), our system can optimize about 180 poses, which for a \(1\,\mathrm{m}\) inter-pose distance corresponds to an area of about \(180\,\mathrm{m}^{2}\). However, the \(1\,\mathrm{m}\) distance for adding a new pose was selected because our testing area is small. Since the range of the multi-zone ToF sensor is at least \(2\,\mathrm{m}\) with less than 5% invalid pixels [15], the inter-pose distance could also be increased to \(2\,\mathrm{m}\), which would result in a maximum mappable area four times as large, i.e., about \(720\,\mathrm{m}^{2}\). A more effective way to cope with larger pose graphs is using more on-board RAM to optimize large graphs fast enough to meet the real-time requirements. Emerging low-power embedded MCUs, such as multi-core RISC-V SoCs, are perfect candidates for this task, as they have a power consumption within hundreds of \(\mathrm{mW}\), and their parallel capabilities could heavily reduce the execution time of ICP and SLAM.

Another limitation of our system is the resolution of the ToF sensor. While more capable than the other available solutions in its weight and size class, the multi-zone ToF sensor still has a poorer range and resolution than the larger and more power-hungry LiDARs. As shown in Figure 1a, each ToF zone represents a triangle in 2D (or a cone in 3D), whose base widens with distance, resulting in higher uncertainty in the obstacle location. Therefore, flying the drone closer (i.e., within \(2\,\mathrm{m}\)) to walls or objects is ideal for accuracy-critical processes. Due to the larger resolution of LiDARs, their zones are narrower, which translates into a higher accuracy that stays in bounds even at large distances.

Fig. 5: The maze layouts for the second (a and b) and third (c and d) experiments.

Fig. 6: The binary occupancy grid maps for the second (a and b) and third (c and d) experiments.
Moreover, since LiDARs have a higher measurement range than the \(4\,\mathrm{m}\) of the multi-zone ToF sensor, they require the drones to fly less when mapping larger rooms.

## VII Conclusion

This paper presented a complete system enabling mapping on a swarm of nano-UAVs based on novel, low-power multi-zone ToF sensors. The main contributions include a novel lightweight sensor module that provides a \(360^{\circ}\) depth map with an accuracy comparable with SoA LiDARs. Moreover, ICP and SLAM are optimized to run on resource-constrained MCUs with a total available memory of only \(50\,\mathrm{kB}\), supporting and combining sparse measurements from up to 20 swarm agents. Using a different wireless physical link, e.g., the UWB protocol commonly used for indoor localization, the swarm could potentially be extended to hundreds of nano-UAVs. We also demonstrated, with three field experiments, that a collaborative group of robots can accurately map an environment and decrease the mission latency linearly w.r.t. the number of employed swarm agents. The swarm redundancy, and its improved robustness, are ensured by a system-level strategy specifically designed for this work. Indeed, all the collected poses are broadcast to the whole swarm; thus, the _main drone_ can be selected at will or easily replaced by another nano-UAV without losing any information - the swarm data is distributed across all its agents. Despite its extremely lightweight setup (\(34.8\,\mathrm{g}\)), the system proposed in this paper features a mapping accuracy comparable with the SoA contributions designed for MAVs and standard-size UAVs. We showed how our swarm-based solution reduces the pose estimation and mapping errors by about half while flying at a speed of \(0.8\,\mathrm{m/s}\). The system proposed in this paper paves the way for more autonomous micro-robots characterized by ultra-constrained hardware, _de facto_ enabling a system technology not present in the nano-UAV field so far. Indeed, with a complete environmental map, advanced navigation solutions can be developed to enhance flight autonomy with advanced path planning and mission strategies.

## Acknowledgments

This work has been supported in part by the "TinyTrainer" project, which receives funding from the Swiss National Science Foundation under grant number 207913.
2302.00003
The Power of External Memory in Increasing Predictive Model Capacity
One way of introducing sparsity into deep networks is by attaching an external table of parameters that is sparsely looked up at different layers of the network. By storing the bulk of the parameters in the external table, one can increase the capacity of the model without necessarily increasing the inference time. Two crucial questions in this setting are then: what is the lookup function for accessing the table and how are the contents of the table consumed? Prominent methods for accessing the table include 1) using words/wordpieces token-ids as table indices, 2) LSH hashing the token vector in each layer into a table of buckets, and 3) learnable softmax style routing to a table entry. The ways to consume the contents include adding/concatenating to input representation, and using the contents as expert networks that specialize to different inputs. In this work, we conduct rigorous experimental evaluations of existing ideas and their combinations. We also introduce a new method, alternating updates, that enables access to an increased token dimension without increasing the computation time, and demonstrate its effectiveness in language modeling.
Cenk Baykal, Dylan J Cutler, Nishanth Dikkala, Nikhil Ghosh, Rina Panigrahy, Xin Wang
2023-01-31T00:29:39Z
http://arxiv.org/abs/2302.00003v1
# The Power of External Memory in Increasing Predictive Model Capacity ###### Abstract One way of introducing sparsity into deep networks is by attaching an external table of parameters that is sparsely looked up at different layers of the network. By storing the bulk of the parameters in the external table, one can increase the capacity of the model without necessarily increasing the inference time. Two crucial questions in this setting are then: what is the lookup function for accessing the table and how are the contents of the table consumed? Prominent methods for accessing the table include 1) using words/wordpieces token-ids as table indices, 2) LSH hashing the token vector in each layer into a table of buckets, and 3) learnable softmax style routing to a table entry. The ways to consume the contents include adding/concatenating to input representation, and using the contents as expert networks that specialize to different inputs. In this work, we conduct rigorous experimental evaluations of existing ideas and their combinations. We also introduce a new method, alternating updates, that enables access to an increased token dimension without increasing the computation time, and demonstrate its effectiveness in language modeling. ## 1 Introduction Contemporary machine learning models have been remarkably successful in many different domains ranging from natural language (Chowdhery et al., 2022; Hoffmann et al., 2022) to computer vision (Yu et al., 2022; Riquelme et al., 2021). However, these successes have come in part through sheer scale. A vast amount of empirical studies justify the conventional wisdom that bigger (models and data sets) is better (Hernandez et al., 2021; Kaplan et al., 2020). Accordingly, state-of-the-art models often contain billions of parameters and are trained for weeks on enormously large data sets using thousands of AI accelerators. Their immense size leads to prohibitive compute and energy costs (Patterson et al., 2021) and prevents their deployment to resource or compute-constrained applications (e.g., autonomous driving) (Liebenwein et al., 2021). _Sparsely-activated networks_, such as Mixture-of-Expert (MoE) models (Shazeer et al., 2017), have the potential to alleviate these costs and enable efficient scalability of modern models. The main idea is to partition a network's or each layer's parameters into a table (of experts), where each entry (expert) of the table corresponds to a small subset of disjoint parameters that can be acted on by the input. During training and inference, a given input to the network is _routed_ to a small subset of entries (parameters) to compute the output. As a result, the computation cost remains small relative to the total number of network parameters. By storing the bulk of the parameters in externally accessed tables, we obtain models with significantly higher capacity with only a relatively small increase in computation time. Designing effective sparsely-activated models hinges on two essential components: (i) the expert lookup (routing) function and (ii) the logic for consuming the contents of the table entries. Examples of lookup functions include using Token-ID lookups similar to (Roller et al., 2021), Locality Sensitive Hashing (LSH) lookup of input token embeddings (Panigrahy et al., 2021), and trainable softmax based lookup as in sparse expert models (Fedus et al., 2022; Lepikhin et al., 2020). There are also different ways of consuming the accessed entries. 
For example, one could view the accessed entries as additional parameters for input representation, which can be added or concatenated with the layer's output to form the augmented output. Alternatively, one could interpret the table entry as input-dependent function parameters, which parameterize an _expert_ function whose output is combined with the _main_ expert output; here, the _main_ expert is one that the input is always acted upon (see Fig. 1). Overall, there exists a research gap in evaluating and comparing combinations of such ideas to generate the most efficient and high-performing sparse models. In this work, we extensively evaluate several choices for the lookup function and the consumption logic and their combinations. We find Token-ID lookup to be more effective in the large-number-of-experts case. From a theoretical perspective, we provide insights into popular lookup (routing) functions and why some might perform better than others. In addition, inspired by the observation that transformer models [23] benefit from increased representation dimension, we use memory parameters as additional parameters for input representation, and introduce a novel method called _Alternating Updates_. This method widens the representation without increasing transformer computation time by working on a part of the representation at each layer. In particular, our contributions are: 1. We extensively study and empirically evaluate various lookup functions including softmax, LSH, and Token-ID lookup in the partial experts setting. 2. We show the theoretical connections between various lookup functions and LSH variants. We also establish the power of consuming disjoint embedding tables at latter layers of the network. 3. We introduce and evaluate the method of _Alternating Updates_ that enables increased token dimension with little additional computation cost. ## 2 Related Work Prior work is rich with a diverse set of techniques to increase the efficiency of contemporary transformer models (see [14] for a survey). In this paper, we focus on lookup-based sparse models due to their state-of-the-art performance on various standard benchmarks [13] and favorable theoretical properties [15, 1]. Recent works have introduced extremely large, yet scalable models with the use of conditional routing of inputs to a learnable subset of parameters. Notably, the Sparse Mixture of Experts (SMoE) [20, 11, 12] family of models use a learned softmax probability distribution to conditionally direct the computation to _experts_, i.e., subsets of network parameters. By routing the computation to a small subset of parameters on an input-dependent basis, SMoE leads to higher capacity models with a relatively small and controllable increase in computation. Switch Transformers [13] show that routing to a single expert on an input-dependent basis reduces computation and outperforms prior SMoE approaches on language tasks. Follow up work on SMoE include those that improve the load balancing of experts [11, 12], use reinforcement learning to learn the routing function [10], and leverage smooth top-\(k\) expert selection [11] (see [13] for a survey). Other choices for the routing function include non-learnable ones such as Locality Sensitivity Hashing (LSH) [15] which generally maps similar inputs to the same expert, Hash Layers that use token-based hashing [14], and language-specific deterministic routing [15]. 
Residual Mixture of Experts [23] separates the expert weights into input-independent and input-dependent components, similar to our partial-expert setting. Conditionally accessing _external memory_ is another related approach to vastly increase model capacity at the cost of a relatively small increase in computation [11, 12]. For example, Memorizing Transformers [23], Memformer [23], and Product key memory [12] leverage dynamic memory to encode and retrieve relevant information. Additional works include those that retrieve from an immensely large untrainable corpus, such as Wikipedia in REALM [17], or a 2-trillion-token database in RETRO [1].

## 3 Lookup Functions

In this section, we formalize the augmentation of a layer \(L\) with external memory and provide an overview of the various lookup functions that we cover in this paper. The memory-augmented layer is shown as Alg. 1. Since we are primarily interested in Transformer-like architectures, we can focus on the action of the layer on a single token. Abstractly, we consider a layer \(L\) to be a function from a vector space \(\mathcal{X}\) to itself. For example, \(L\) can be the self-attention layer of a transformer and \(\mathcal{X}\) the space of embedded token vectors \(\mathbb{R}^{d_{\text{emb}}}\). We let \(\mathcal{F}\) be the set of all functions from \(\mathcal{X}\) to \(\mathcal{X}\). The external memory for the layer consists of a lookup function \(q\) and a memory table \(T\). Given the previous layer output \(x\in\mathcal{X}\) and the index of the original token in the vocabulary (i.e., the token-id; see Sec. 3.2), which we denote \(\mathrm{id}\), the look-up function \(q\) computes a set of indices \(\mathcal{T}=q(x,\mathrm{id})\). Typically the look-up function either uses only \(\mathrm{id}\), as in Token-ID lookup (see Sec. 3.2), or only \(x\), as in Softmax lookup (see Sec. 3.1). The indices \(i\in\mathcal{T}\) are then used to index the table \(T\) and retrieve experts \(f_{i}=T(i)\) in \(\mathcal{F}\). We will consider experts \(f_{i}\) that are either two-layer fully connected networks \(f(x)=V\phi(U^{T}x)\), where \(\phi\) is the ReLU activation, or simply a constant function \(f(x)=b\). For a \(d\)-dimensional input, the matrices \(V,U\in\mathbb{R}^{d\times\mathrm{rank}}\), where \(\mathrm{rank}\) is a configurable parameter that specifies the width of the expert, and consequently the computation time of routing to each partial expert. The augmented layer computation outputs \(L(x)+\sum_{i\in\mathcal{T}}w_{i}(x)f_{i}(x)\) for some weighting functions \(w_{i}\), instead of the normal output \(L(x)\). In principle, one could consider alternate ways of combining \(L(x)\) and \(\sum_{i}w_{i}(x)f_{i}(x)\); however, in this work we only consider addition since it is simple and preserves dimensions. We consider different choices of the lookup function \(q\) and memory tables \(T\) as follows.

Figure 1: The standard Mixture of Experts model (left) routes the inputs to one or more of \(n\) experts based on a routing function. Mixture of Partial Experts (right) always routes the input to the main expert and additionally routes the input to one or more partial experts; the output is a function of the main expert's and partial experts' outputs.
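To make the augmented layer computation concrete before Algorithm 1 states it formally, here is a minimal NumPy sketch; the dimensions, the single looked-up expert, and the unit weight are illustrative placeholders of our own, not the paper's implementation.

```
import numpy as np

# Sketch of the augmented layer output L(x) + sum_i w_i(x) f_i(x), with
# rank-limited partial experts f_i(x) = V_i relu(U_i^T x). Illustrative only.
d, rank, n_experts = 16, 4, 8
rng = np.random.default_rng(0)
U = rng.normal(size=(n_experts, d, rank))   # per-expert first-layer weights
V = rng.normal(size=(n_experts, d, rank))   # per-expert second-layer weights

def partial_expert(i: int, x: np.ndarray) -> np.ndarray:
    # two-layer expert: f_i(x) = V_i relu(U_i^T x)
    return V[i] @ np.maximum(U[i].T @ x, 0.0)

def augmented_layer(L, x: np.ndarray, expert_ids, weights) -> np.ndarray:
    # main path L(x) plus a weighted sum over the looked-up partial experts
    return L(x) + sum(w * partial_expert(i, x) for i, w in zip(expert_ids, weights))

x = rng.normal(size=d)
y = augmented_layer(lambda v: np.tanh(v), x, expert_ids=[3], weights=[1.0])
print(y.shape)  # (16,)
```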
```
Require: layer \(L\in\mathcal{F}\), previous layer output \(x\in\mathcal{X}\), \(\mathrm{id}\in\mathbb{N}\), look-up function \(q:\mathcal{X}\times\mathbb{N}\rightarrow[n]\), memory table \(T:[n]\rightarrow\mathcal{F}\)
  Table index \(i=q(x,\mathrm{id})\)
  Adjustment function \(f=T(i)\)
Output: \(L(x)+f(x)\)
```
**Algorithm 1** Memory Augmented Layer

### MoE-style Softmax Lookup

The MoE layer routes an input token \(x\) to \(k\) of \(n\) experts, where each expert is itself a parametrized subnetwork (e.g., a fully-connected layer). Following [10], we let \(\{E_{i}(\cdot)\}_{i\in[n]}\) and \(E_{i}(x)\) denote the set of experts and the output of routing the input token \(x\) to expert \(i\), respectively. For an input token \(x\), a learnable weight matrix \(W\) is applied to obtain the logits \(h(x)=Wx\). The lookup probabilities are computed by taking the softmax of \(h(x)\):

\[p_{i}(x)=\frac{\exp(h_{i}(x))}{\sum_{j\in[n]}\exp(h_{j}(x))}\quad\forall i\in[n].\]

The token \(x\) is routed to the expert(s) \(\mathcal{T}\subset[n]\) with the top-\(k\) probabilities \(p(x)\). Since this operation is not differentiable, the output \(y\) is computed as a probability-weighted combination of the experts' outputs to enable gradients to propagate back to the router parameters, i.e., \(y=\sum_{i\in\mathcal{T}}p_{i}(x)E_{i}(x)\) [12].

### Token-ID Lookup

In a transformer, the computation can be viewed as repeatedly transforming the embedding vector of a token within the initial embedding space. For example, the token "tiger" may be embedded initially as \(x_{0}\in\mathbb{R}^{d_{\mathrm{emb}}}\) and then transformed successively into \(x_{1},x_{2},\ldots\), where each \(x_{i}\in\mathbb{R}^{d_{\mathrm{emb}}}\). In Token-ID lookup, for each \(x_{i}\) the look-up function \(q\) (see Alg. 1) simply returns the index of the input token (e.g., "tiger") in the vocabulary and ignores the previous layer output. Note that in this case the table size \(n\) of each layer is equal to the size of the vocabulary.

### Locality Sensitive Hashing (LSH) Lookup

Locality Sensitive Hashing [1] is a popular variant of hashing that tends to hash similar inputs to the same bucket with higher probability and dissimilar inputs to different buckets; it is widely used for approximate nearest neighbor search [1, 14]. There are several variants of LSH, but for LSH lookup we follow prior work [1, 12] and consider the _hyperplane-based LSH_. At a high level, this approach partitions the input space into a grid-like structure using randomly oriented, equispaced hyperplanes. Each such region is considered a bucket (expert) of the hash table. See Sec. 4 and the supplementary material for details.
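For concreteness, the sketches below show one way the three lookup functions of this section could be realized; the shapes, the single random hyperplane, and the modular bucketing are illustrative assumptions of ours rather than the configurations used in the experiments.

```
import numpy as np

d, n_buckets, k = 16, 8, 2
rng = np.random.default_rng(0)

def token_id_lookup(token_id: int) -> list[int]:
    # Sec. 3.2: the table index is just the token's vocabulary id
    return [token_id]

W = rng.normal(size=(n_buckets, d))  # trainable router weights
def softmax_topk_lookup(x: np.ndarray) -> tuple[list[int], np.ndarray]:
    # Sec. 3.1: route to the top-k entries of softmax(Wx); the returned
    # probabilities weight the expert outputs so gradients reach W
    h = W @ x
    p = np.exp(h - h.max())
    p /= p.sum()
    top = list(np.argsort(p)[-k:])
    return top, p[top]

a = rng.normal(size=d)  # normal vector of the random hyperplane family
def hyperplane_lsh_lookup(x: np.ndarray, width: float = 1.0) -> list[int]:
    # Sec. 3.3: equispaced parallel hyperplanes slice R^d into slabs (buckets)
    return [int(np.floor(a @ x / width)) % n_buckets]
```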
## 4 Theoretical Arguments

In this section, we analyze and provide unifying theoretical insights into popular lookup functions (Sec. 4.1) and demonstrate the theoretical advantage of using embedding table lookups at higher layers (Sec. 4.2).

### Lookup functions as variants of LSH and their efficiency

Here, we show that under simplifying assumptions, softmax routing and wordpiece routing can be viewed as Spherical LSH and Min-hash LSH, respectively. This interpretation will imply that, with practical configurations, Token-ID is more parameter efficient than Softmax, which is more efficient than hyperplane-LSH lookup.

**Preliminaries** We consider a Locality Sensitive Hashing (LSH) that maps an input to one of \(n\) buckets. We let \(r_{1}\) and \(r_{2}\), with \(r_{2}>r_{1}>0\), denote the thresholds for nearby points and far-away points, respectively. For \(x,y\in\mathbb{R}^{d}\), we say \(x\) and \(y\) are nearby if \(\|x-y\|_{2}\leq r_{1}\) and they are far-away if \(\|x-y\|_{2}\geq r_{2}\), where \(\|x\|_{2}\) is the \(2\)-norm of the vector \(x\). Let \(c=r_{2}/r_{1}>1\) denote the distance gap as a ratio. Let

\[p_{1}\leq\mathrm{Pr}(h(x)=h(y):\|x-y\|_{2}\leq r_{1}),\qquad p_{2}\geq\mathrm{Pr}(h(x)=h(y):\|x-y\|_{2}\geq r_{2})\]

denote lower and upper bounds on the collision probability of nearby points and far-away points, respectively. Notably, with \(n\) buckets the probability that two nearby points hash to the same bucket is \(n^{-\rho}\), where \(\rho=\frac{\log(1/p_{1})}{\log(1/p_{2})}\) (Andoni and Razenshteyn, 2015).

**Efficiency** Let us consider two sentences \(s_{1},s_{2}\) of the same length \(l\) that have an \(f\) fraction of wordpieces in common. Assume for simplicity that the embedding vector for each wordpiece is a random unit vector in \(\mathbb{R}^{d}\). We summarize LSH variants and the collision probability of nearby points in this setting. We are interested in the _efficiency_, i.e., the collision probability for the set of experts corresponding to two similar sentences. For a fixed-size table of \(n\) experts, the higher the collision probability for two similar sentences, the more efficient the LSH lookup is in terms of routing similar tokens to similar buckets. The full details of the LSH variants and proofs are in the supplementary.

1. **Hyperplane LSH (Datar et al., 2004)**: this variant divides \(\mathbb{R}^{d}\) into buckets using randomly oriented, parallel, equispaced hyperplanes. Hyperplane LSH has the property that \(\rho=\mathcal{O}(1/c)\). Computations (see supplementary) yield \(c=1/\sqrt{1-f}\), which implies a collision probability of \(n^{-\mathcal{O}(\sqrt{1-f})}\).
2. **Spherical LSH (Andoni and Indyk, 2008)**: here, we use a random set of points to divide \(\mathbb{R}^{d}\) into Voronoi regions, each representing a different bucket. This method has a better \(\rho\) value of \(\mathcal{O}(1/c^{2})\). Assuming that the Softmax lookup matrix \(W\) (see Sec. 3) is uniform, the Softmax lookup corresponds to Spherical LSH (Andoni et al., 2015). This yields \(\rho=\mathcal{O}(1-f)\) and a collision probability of \(n^{-\mathcal{O}(1-f)}\).
3. **Min-hash (Broder et al., 1998)**: this approach is used for hashing sets so that similar sets get hashed to the same bucket. Using the Jaccard similarity measure for comparing sets means that the fraction of experts that match up for the two sentences \(s_{1},s_{2}\) is \(f\). This also means that Token-ID lookup can be viewed as Min-hash LSH.

The above properties imply the following theorem.

**Theorem 1**.: _In the setting of the above simplifying assumptions, we have the following:_

1. _Softmax and Token-ID lookup can be viewed as Spherical LSH and Min-hash LSH, respectively._
2. _The probability that a random token in two sentences of equal length that overlap in an \(f\) fraction of the wordpieces gets routed to the same expert is \(n^{-\mathcal{O}\left(\sqrt{1-f}\right)}\), \(n^{-\mathcal{O}(1-f)}\), and \(f\) for hyperplane LSH, Softmax, and Token-ID lookup, respectively._
3. _For large \(n\) and a small fraction \(f\), in terms of routing to the same expert, the efficacy order of the different lookup methods is Token-ID \(\geq\) Softmax \(\geq\) hyperplane LSH._
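To get a feel for Theorem 1, the following toy calculation (our own illustration, with the hidden \(\mathcal{O}(\cdot)\) constants crudely set to 1) compares the three collision probabilities for a table of \(n=1024\) experts:

```
import math

n = 1024  # table size (number of experts)
for f in (0.5, 0.9, 0.99):
    hyperplane = n ** -math.sqrt(1 - f)  # n^{-O(sqrt(1-f))}, constant set to 1
    softmax = n ** -(1 - f)              # n^{-O(1-f)}, constant set to 1
    token_id = f                         # Min-hash / Token-ID collision prob.
    print(f"f={f:.2f}  hyperplane={hyperplane:.3g}  "
          f"softmax={softmax:.3g}  token-id={token_id:.3g}")
# The ordering token-id >= softmax >= hyperplane emerges, matching part 3.
```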
### Advantage of embedding lookups at higher layers

Note that Token-ID routing need not be a per-layer routing operation, since it is merely a function of the token ID, which can be computed once in the input layer. One way of incorporating the result of this lookup is to simply feed the output of the lookup into the input for the first layer. This implementation delegates the work of passing this information on to the higher layers to the network itself. Alternatively, the output of the lookup in the input layer can be partitioned so that _different parts of the lookup output feed into the different layers of the network_. This partitioning and feeding of the embedding lookup directly to the layers can be viewed as separate embedding lookups in those layers. The theorem below establishes that the latter implementation, with embedding lookups at higher layers, enables a more efficient architecture of lower width and fewer parameters than the former one.

**Theorem 2**.: _There exists a class of natural learning problems where embedding lookups of categorical features at upper layers in addition to the input layer gives a more efficient architecture compared to an architecture that feeds the embedding lookup output only to the input layer._

Proof Sketch.: The main idea is to consider two architectures that implement the embedding lookup in the two distinct ways and an input \((u,q)\) with ground truth score \(\langle\Psi(u),\Phi(q)\rangle\), where \(\Psi(u)\) maps \(u\) to a \(d\)-dimensional feature vector and \(\Phi(q)\) is a non-linear transformation of \(q\) that can be implemented by a deep network of width \(d\). The first architecture combines the lookup \(\Psi(u)\) with \(q\) (by a weighted sum) and feeds it into the network as input; the second architecture additionally feeds the embedding output of \(u\) to all layers of the network instead of only the lowest one. The second architecture can store \(\Psi(u)\) in the table and feed it directly to the output layer (which produces \(\Phi(q)\)) to obtain the result \(\langle\Psi(u),\Phi(q)\rangle\) using width \(d\). On the other hand, for the first architecture, the entropy of the information carried up the layers is at least \(2d\) (assuming \(u\) and \(q\) are random and uncorrelated), and so the width of the network needs to be at least \(2d\).

## 5 External memory with Alternating Updates

In this section, we introduce the method of _Alternating Updates_, an approach to enable an increased token dimension with little additional computation cost.

### Background

Instead of viewing external memory parameters as input-dependent function parameters (or "experts"), we can view them as additional parameters for the input representation. To consume these additional parameters, we can project and add them to the original representation vector. Alternatively, we can use them to widen the representation vector, as we do in Alternating Updates. This ties in well with the observation that language models benefit from wider model dimensions: for example, as model sizes scale up, the model dimension grows from 512 (small) to 768 (base) and 1024 (large, 3B, and 11B) in T5 models [11], and from 4096 (8B) to 8192 (64B) and 18432 (540B) in PaLM models [13]. As the model dimension increases, both the representation dimension and the transformer layer dimension increase. However, the two dimensions account for different capacities of the model: wider representations store more information about the input, while wider transformer layers give more processing power. They also differ substantially in computation cost: widening the representation increases computation minimally, while widening transformer layers increases computation cost quadratically. A natural question is how to incorporate wider representations while maintaining smaller transformer layers.
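A back-of-the-envelope calculation makes this cost asymmetry concrete; the dimensions below are our own illustrative choices, not from the paper:

```
# Rough FLOP comparison: widening a d x d layer to Kd x Kd multiplies its cost
# by K^2, whereas keeping a d-wide layer and adding AltUp-style
# predict/correct steps adds only O(K^2 d) work.

def matvec_flops(width: int) -> int:
    # multiply-accumulate count of one width x width matrix-vector product
    return 2 * width * width

d, K = 1024, 2
print("d-wide layer:   ", matvec_flops(d))        # 2,097,152
print("Kd-wide layer:  ", matvec_flops(K * d))    # 8,388,608 (K^2 = 4x)
# Simplified predict/correct (Alg. 3 below): K^2 + K trainable scalars acting
# on d-dimensional sub-blocks -> roughly 2 * (K^2 + K) * d extra FLOPs.
print("AltUp overhead: ", 2 * (K * K + K) * d)    # 12,288, comparatively tiny
```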
### A Predict-Compute-Correct Algorithm

We propose to keep a wide representation vector, perform computation with a sub-block, and estimate the updated representation using a Predict-Compute-Correct algorithm, as illustrated in Figure 2. Taking this view, external memory is now added by increasing the token embedding's dimensionality: suppose the original embedding dimension is \(d\in\mathbb{N}\); it is increased to \(d+e\in\mathbb{N}\), which introduces \(Ve\) additional parameters, where \(V\) is the vocabulary size. While \(e\) can be any nonnegative integer, we first discuss our algorithm in the simpler case in which \(e\) is a multiple of \(d\), i.e., \(e=(K-1)d\), where \(K\in\mathbb{N}\) and \(K>1\). More specifically, let the input embedding vector be \(Kd\)-dimensional, where \(K,d\in\mathbb{N}\). Our algorithm keeps the dimension of the representation vector at every layer at \(Kd\) while using layers of width \(d\) to transform the representation vector. Denote the representation vector at layer \(i\) by \(x_{i}=\operatorname{concat}(x_{i}^{1},x_{i}^{2},\ldots,x_{i}^{K})\), where \(x_{i}^{j}\in\mathbb{R}^{d}\), \(j=1,2,\ldots,K\), are contiguous sub-blocks of \(x_{i}\) and \(\operatorname{concat}\) is the concatenation operation. Denote layer \(i\)'s transformation function by \(L_{i}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\). The representation vector \(x_{i+1}\) at layer \(i+1\) is obtained in three steps:

1. **Prediction**: predict the representation vector at the next layer with a trainable linear map: \(\hat{x}_{i+1}=P_{i}x_{i}\), where \(P_{i}\in\mathbb{R}^{Kd\times Kd}\);
2. **Computation**: select a sub-block \(x_{i}^{j^{*}}\) and update this block with \(L_{i}\): \(\tilde{x}_{i+1}^{j^{*}}=L_{i}(x_{i}^{j^{*}})\) (the selection of \(j^{*}\) is discussed in the next section); more than one sub-block can be selected if needed;
3. **Correction**: correct the prediction with the computation result: \(x_{i+1}=\hat{x}_{i+1}+G_{i}(\tilde{x}_{i+1}^{j^{*}}-\hat{x}_{i+1}^{j^{*}})\), where \(G_{i}\in\mathbb{R}^{Kd\times d}\) is a trainable matrix.

When there is no ambiguity about the layer index, we drop the subscript \(i\) and denote \(x_{old}:=x_{i}\) and \(x_{new}:=x_{i+1}\). The three steps are summarized in Algorithm 2.

```
Require: representation vector \(x_{old}=\operatorname{concat}(x_{old}^{1},x_{old}^{2},\ldots,x_{old}^{K})\), where \(x_{old}^{j}\in\mathbb{R}^{d},j=1,2,\ldots,K\) are contiguous sub-blocks of \(x_{old}\)
Output: updated representation vector \(x_{new}\)
  Prediction: predict the updated representation vector with a trainable linear map: \(\hat{x}=Px_{old}\), where \(P\in\mathbb{R}^{Kd\times Kd}\) is a trainable matrix
  Computation: select a sub-block \(x_{old}^{j^{*}}\) and update this block with \(L\): \(\tilde{x}^{j^{*}}=L(x_{old}^{j^{*}})\)
  Correction: correct the prediction with the computation result: \(x_{new}=\hat{x}+G(\tilde{x}^{j^{*}}-\hat{x}^{j^{*}})\), where \(G\in\mathbb{R}^{Kd\times d}\) is a trainable matrix
```
**Algorithm 2** Predict-Compute-Correct algorithm
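As a companion to Algorithm 2, here is a minimal NumPy sketch; the random initialization, the toy layer \(L\), and the alternating schedule are our own illustrative assumptions (in a real model \(P\), \(G\), and \(L\) would be trained):

```
import numpy as np

K, d = 2, 4
rng = np.random.default_rng(0)
P = rng.normal(size=(K * d, K * d))   # trainable predictor
G = rng.normal(size=(K * d, d))       # trainable corrector (gain)

def layer_L(block: np.ndarray) -> np.ndarray:
    # stand-in for a width-d transformer layer acting on one sub-block
    return np.tanh(block)

def predict_compute_correct(x_old: np.ndarray, j_star: int) -> np.ndarray:
    x_hat = P @ x_old                                   # 1) predict all K blocks
    block = x_old[j_star * d:(j_star + 1) * d]
    x_tilde = layer_L(block)                            # 2) compute one block
    residual = x_tilde - x_hat[j_star * d:(j_star + 1) * d]
    return x_hat + G @ residual                         # 3) correct the prediction

x = rng.normal(size=K * d)
for i in range(6):                  # "alternating" selection: j* = i mod K
    x = predict_compute_correct(x, j_star=i % K)
print(x.shape)                      # (8,): the representation stays Kd-wide
```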
This Predict-Compute-Correct algorithm is inspired by the Kalman filter algorithm [15]. Cast in the Kalman filtering framework, the prediction step utilizes a simple linear dynamic model, the computation step is viewed as a form of "measurement", and the correction step performs a weighted sum of the prediction and the "measurement" through the gain matrix \(G\). Note that the prediction and measurement noises, which are important components of the Kalman filter, are not modeled in the above algorithm. Estimation of noises and covariance updates are left for future work. To further reduce the computation cost of the prediction and correction steps, we impose a particular block matrix structure on \(P\) and \(G\). Let \(P=(p_{i,j}I_{d\times d})_{i,j\in[K]}\) and \(G=(g_{i}I_{d\times d})_{i\in[K]}\), where \(p_{i,j},g_{i}\in\mathbb{R}\) are scalars and \(I_{d\times d}\in\mathbb{R}^{d\times d}\) is the identity matrix. Note that this amounts to treating each sub-block \(x^{i}\) as an atomic quantity. The simplified Predict-Compute-Correct algorithm is summarized in Algorithm 3.

Figure 2: Updating a wide representation vector: (a) wide transformer layers scale quadratically with the representation dimension; (b) the Predict-Compute-Correct algorithm uses a narrow transformer layer along with a lightweight predictor and corrector to update a wide representation vector.

```
Require: representation vector \(x_{old}=\operatorname{concat}(x^{1}_{old},x^{2}_{old},\ldots,x^{K}_{old})\), where \(x^{j}_{old}\in\mathbb{R}^{d},j=1,2,\ldots,K\) are contiguous sub-blocks of \(x_{old}\)
Output: updated representation vector \(x_{new}\)
  Prediction: predict the updated representation vector with a trainable linear map: \(\hat{x}^{i}=\sum_{j=1}^{K}p_{i,j}x^{j}_{old}\) for \(i=1,2,\ldots,K\), where \(p_{i,j}\in\mathbb{R}\) are trainable scalars
  Computation: select a sub-block \(x^{j^{*}}_{old}\) and update this block with \(L\): \(\tilde{x}^{j^{*}}=L(x^{j^{*}}_{old})\)
  Correction: correct the prediction with the computation result: \(x^{i}_{new}=\hat{x}^{i}+g_{i}(\tilde{x}^{j^{*}}-\hat{x}^{j^{*}})\) for \(i=1,2,\ldots,K\), where \(g_{i}\in\mathbb{R}\) are trainable scalars
```
**Algorithm 3** Simplified Predict-Compute-Correct algorithm

In the simplified algorithm, the prediction and correction steps involve only vector addition and scalar-vector multiplication, which incur \(O(d)\) computation cost, much less than the \(O(d^{2})\) cost of the transformer layer \(L\).

### Selection of sub-blocks

The selection of sub-blocks for the computation step is not specified in Algorithms 2 and 3. We consider two simple, deterministic selection methods in this paper and leave more sophisticated methods for future work.

1. **Same**: choose the same sub-block for all the layers in a neural network;
2. **Alternating**: for a sequence of layers, alternate through the sub-blocks; that is, with zero-based indexing, sub-block \(i \bmod K\) is selected for the computation step of layer \(i\).

Algorithm 3 with alternating selection is referred to as **Alternating Updates (AltUp)** in the following sections. We compare the two selection methods empirically in Section 6.2 and find that the "alternating" method is better.

### Extension to non-integer multiples

In the above description of the Predict-Compute-Correct algorithms, we assumed that the augmented dimension \(e\) is a multiple of the original embedding dimension \(d\).
For the more general case, when \(e\) is not a multiple of \(d\), we add a divide-and-project step before we apply Algorithm 2 or Algorithm 3: choose an integer factor \((K-1)\) of \(e\), divide the augmented vectors into \((K-1)\) sub-blocks, and project each sub-block to \(d\) dimensions. Here, \(K\) becomes another hyper-parameter of the algorithm.

## 6 Results

### Setting

We performed all of our experiments using T5-model architectures [14] of varying sizes (small, base, and large), which we pretrained on the C4 dataset for 500,000 steps with a batch size of \(256\). The pretrained models were then finetuned on either the GLUE [20], SuperGLUE [20], or SQuAD [11] benchmark tasks for a further 50,000 steps with a batch size of \(256\). The pretraining task is to predict corrupted text spans, and the finetuning tasks are re-cast as text-generation tasks. We report both pretraining and finetuning metrics: for pretraining, we report span prediction accuracy on a held-out validation set, and for finetuning, we follow the same recipe as the T5 models; see [14] for more details. The full experimental set-up is detailed in the appendix.

### Memory consumption methods

In this section, we present empirical results comparing different memory consumption methods, especially the Predict-Compute-Correct algorithm (Algorithm 3). In all the subsequent experiments, the augmented memory is implemented as additional embedding tables at the bottom layer of the model, and the token-ID lookup is performed only once, which results in a very small computation cost. We explore different memory consumption methods, model sizes, and memory sizes. We first fix the augmented memory parameters to be one extra embedding table (corresponding to \(K=2\) in Algorithm 3) and the lookup mechanism to be token-ID lookup, and compare different memory consumption methods. In Table 1, we compare the summation method (Sum), in which additional embedding vectors are added to the token representation vector, Algorithm 3 with same block selection (SameUp), and Algorithm 3 with alternating block selection (AltUp), all on top of the T5 version 1.1 base model (B). We note that all three methods bring improvements in both pretraining and finetuning, and AltUp is the most effective one. While pretraining accuracies for all three memory consumption methods are similar, differences in finetuning metrics are large, with Alternating Updates achieving roughly twice the gains of the other two methods. Similar behaviors are observed for small- and large-sized T5 models; see the appendix for details. For the second set of experiments, we explore Alternating Updates with increasing model sizes. We compare three model sizes with the T5 version 1.1 architecture: small (S), base (B), and large (L). The base and large models follow the same model configurations as in the T5 paper, while the small model is shallower than in the T5 paper [14] to cover a larger range of model sizes (\(4\) encoder/decoder layers instead of \(8\)). For models with Alternating Updates, we set \(K=2\), corresponding to doubling the embedding dimension. Full details of the model configurations are available in the appendix. Table 2 shows a comparison of pretraining and finetuning metrics for the baseline models and the corresponding models with Alternating Updates. Note that gains in pretraining accuracy show diminishing returns as model size grows, whereas gains in finetuning metrics do not seem to diminish.
We plan to experiment with even larger models to see if this trend remains valid. Table 3 documents the parameter count and training speed comparison. Note that Alternating Updates increases the embedding parameters while leaving the non-embedding parameters roughly the same. Since the transformer computation is not changed by Alternating Updates, we also observe a very small impact on training speed. Finally, we present the model quality with different memory sizes. Table 4 contains model performance for base-sized models with Alternating Updates with \(K=2\) and \(4\). We observe a monotonically increasing trend in pretraining accuracy as memory size increases. On finetuning tasks, we observe that some tasks (SuperGLUE) continue to benefit from more memory, while other tasks (GLUE and SQuAD) do not. We hypothesize that this might be attributed to the nature of the tasks: while some tasks require more knowledge about the input and can be improved with a wider input representation, other tasks depend more on function-approximation capacity, which is not increased by a wider input representation. We observe similar behaviors for small- and large-sized T5 models; see the appendix for details.

### Varying table size and rank for Softmax Lookup

Here we investigate the effect of varying the table size (number of experts) and rank on the popular Softmax lookup mechanism used by state-of-the-art MoE models. We evaluate the performance of Softmax lookup in the partial-expert setting on T5X Small pretraining with ranks \(\{0,4,16,64,128\}\) and buckets (experts) \(\{8,32,64,128,512\}\). The results are shown in Fig. 3. We observe that, generally, increasing the width (rank) of the partial expert leads to increased performance for virtually all table sizes, with the notable exception of the largest table size (512).

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & \begin{tabular}{c} **Pretrain** \\ **accuracy** \\ \end{tabular} & \begin{tabular}{c} **Finetune** \\ **GLUE** \\ \end{tabular} & \begin{tabular}{c} **Finetune** \\ **SG** \\ \end{tabular} & \begin{tabular}{c} **Finetune** \\ **SQuAD (EM/F1)** \\ \end{tabular} \\ \hline S & \(61.21\) & \(75.83\) & \(59.28\) & \(76.44/84.97\) \\ S + AltUp & \(\mathbf{61.86}\) & \(\mathbf{76.82}\) & \(\mathbf{59.60}\) & \(\mathbf{77.51}/\mathbf{85.79}\) \\ \hline B & \(66.42\) & \(84.25\) & \(73.56\) & \(83.78/91.19\) \\ B + AltUp & \(\mathbf{66.96}\) & \(\mathbf{85.32}\) & \(\mathbf{75.80}\) & \(\mathbf{85.24}/\mathbf{92.36}\) \\ \hline L & \(69.13\) & \(87.23\) & \(81.21\) & \(86.77/93.56\) \\ L + AltUp & \(\mathbf{69.32}\) & \(\mathbf{88.20}\) & \(\mathbf{82.75}\) & \(\mathbf{87.81}/\mathbf{94.29}\) \\ \hline \hline \end{tabular} \end{table} Table 2: T5 version 1.1 models augmented with Alternating Updates: both pretraining and finetuning metrics are improved. We observe diminishing returns in pretraining accuracy gains, but no diminishing returns in finetuning metrics.
\begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & \begin{tabular}{c} **Pretrain** \\ **accuracy** \\ \end{tabular} & \begin{tabular}{c} **Finetune** \\ **GLUE** \\ \end{tabular} & \begin{tabular}{c} **Finetune** \\ **SG** \\ \end{tabular} & \begin{tabular}{c} **Finetune** \\ **SQuAD (EM/F1)** \\ \end{tabular} \\ \hline B & \(66.42\) & \(84.25\) & \(73.56\) & \(83.78/91.19\) \\ B + AltUp & \(66.82\) & \(84.06\) & \(74.15\) & \(84.41/91.76\) \\ B + AltUp & \(\mathbf{66.96}\) & \(\mathbf{85.32}\) & \(\mathbf{75.80}\) & \(\mathbf{85.24}/\mathbf{92.36}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of memory consumption methods: summation (Sum), Predict-Compute-Correct with "same" block selection (SameUp), and Predict-Compute-Correct with "alternating" block selection (AltUp).

Table 3: Model size and train speed comparison: T5.1.1 small (S), base (B), and large (L) models are compared. Embedding parameters include the input embedding table parameters (shared between encoder and decoder) and the output embedding table. Non-embedding parameters include all the transformer blocks. Train speed is measured by the number of examples per second per core.

Interestingly, increasing the number of buckets (table size) does not lead to strict increases in performance. In fact, the best-performing configuration uses the intermediate table size of \(64\) buckets with the maximum rank tested (\(128\)). A stronger trend also holds: increasing the number of buckets up to 64 leads to monotonic increases in performance (see the red, blue, and green curves of Fig. 3), after which we see a monotonic decrease in performance with a higher number of buckets (see the purple and orange curves). This is in agreement with previous observations of softmax-style MoE routing [20], where increasing the number of experts beyond a certain point was found to degrade performance.

### Comparisons of Lookup Functions

We now evaluate the performance of the various lookup functions subject to a constraint on the number of additional parameters introduced to the model. In particular, rank-0 Token-ID introduces roughly \(2^{15}\) additional parameters, so we experiment with various combinations of rank (\(\{0,2,4,\ldots,128,256\}\)) and number of buckets (\(\{2^{3},2^{4},\ldots,2^{15}\}\)) for Softmax and LSH lookup that satisfy the criterion of adding roughly \(2^{15}\) (32,768) parameters1 to the model.

Footnote 1: The number of additional parameters is computed as \(\max\{2\cdot\mathrm{rank},1\}\times\mathrm{buckets}\).
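To illustrate the budget constraint behind this footnote, the following small script (our own sketch, not the paper's tooling) enumerates (rank, buckets) pairs whose added parameter count matches \(2^{15}\):

```
# Enumerate configurations adding max{2*rank, 1} * buckets ~= 2**15 parameters.
BUDGET = 2**15  # 32,768 additional parameters

ranks = [0, 2, 4, 8, 16, 32, 64, 128, 256]
buckets = [2**k for k in range(3, 16)]  # 2^3 ... 2^15

def added_params(rank: int, n_buckets: int) -> int:
    return max(2 * rank, 1) * n_buckets

for r in ranks:
    for b in buckets:
        if added_params(r, b) == BUDGET:
            print(f"rank={r:>3}  buckets={b:>6}  params={added_params(r, b)}")
# e.g., rank=0/buckets=32768, rank=2/buckets=8192, ..., rank=256/buckets=64
```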
We refer the reader to the supplementary for details on the computation. Table 5 depicts the results of the evaluations on T5 Small and Base models. We see that Token-ID achieves the highest pretrain accuracy on both T5 models and significantly outperforms the baselines. Softmax and LSH lookup come in second and third place, respectively. Note that even though the configuration of rank 0 and \(2^{15}\) buckets was evaluated for both Softmax and LSH lookups, they performed poorly in comparison to Token-ID lookup. This result precisely aligns with Theorem 1, which states that for a fixed-size table, Token-ID is more efficient than Softmax, which is in turn more efficient than (hyperplane) LSH lookup.

## 7 Conclusion

In this paper, we study various lookup (routing) functions for sparsely activated memory modules and memory consumption methods. We empirically evaluate different lookup strategies, noting in particular the effectiveness of Token-ID lookup in the large-number-of-experts setting. We provide theoretical insights to support this experimental observation by studying the different lookup functions through the lens of Locality Sensitive Hashing. In addition, we introduce a novel method, _Alternating Updates_, to increase representation width with little additional computation cost. Specifically, _Alternating Updates_ utilizes lightweight prediction and correction steps to update a wider representation vector without increasing the transformer layer's computation cost. As a result, we achieve strong performance improvements on language modeling and language understanding benchmarks. We envision that theoretical insights on lookup functions and the Alternating Updates algorithm can serve as valuable components for designing high-performing memory-augmented models.
2309.06700
Existence of weak solutions to borderline double-phase problems with logarithmic convection term
In this study, we devote our attention to the question of clarifying the existence of a weak solution to a class of quasilinear double-phase elliptic equations with logarithmic convection terms, under appropriate assumptions on the data. The proof is based on the surjectivity theorem for pseudo-monotone operators, modular function spaces, and embedding theorems in generalized Orlicz spaces. Our approach in this paper can be extended naturally to a larger class of unbalanced double-phase problems with logarithmic perturbation and gradient dependence on the right-hand sides.
Minh-Phuong Tran, Thanh-Nhan Nguyen
2023-09-13T03:51:21Z
http://arxiv.org/abs/2309.06700v2
# Existence of weak solutions to borderline double-phase problems with logarithmic convection term

###### Abstract

In this study, we are concerned with a class of quasilinear elliptic equations driven by degenerate double-phase operators with different structures:

\[-\mathrm{div}\left(|\nabla u|^{p-2}\nabla u+|\nabla u|^{p-2}\log(e+|\nabla u|)\nabla u\right)=F(x,u,\nabla u).\]

We prove the existence of solutions in the weak sense under appropriate assumptions on the data. The proof is based on the surjectivity theorem for pseudo-monotone operators, and the existence theorem in this paper can be extended to a larger class of quasilinear elliptic equations/systems with gradient dependence on the right-hand sides.

Keywords: Borderline double-phase problems; Weak solutions; Existence result; Logarithmic convection term; Orlicz-Sobolev spaces.

## 1 Introduction

In this paper, we are concerned with the existence result for the following double-phase problem of the type

\[\begin{cases}-\mathrm{div}\left(\mathcal{A}(\nabla u)\right)&=\ F(x,u,\nabla u),\quad\text{ in }\Omega,\\ \quad\quad\quad\quad u&=\ 0,\quad\quad\quad\quad\quad\text{ on }\partial\Omega,\end{cases} \tag{1.1}\]

where \(\Omega\subset\mathbb{R}^{n}\) is an open bounded domain (\(n\geq 2\)) with smooth boundary \(\partial\Omega\), and in this case, the vector field \(\mathcal{A}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is given by

\[\mathcal{A}(\xi)=|\xi|^{p-2}\xi+|\xi|^{p-2}\log(e+|\xi|)\xi,\quad\xi\in\mathbb{R}^{n}. \tag{1.2}\]

Here, we restrict our study to the most interesting case \(1<p<n\), and the right-hand side of (1.1) is a gradient-dependent perturbation (convection term). We assume that \(F:\Omega\times\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}\) is a Caratheodory function (that is, \(x\mapsto F(x,t,\xi)\) is measurable for every \((t,\xi)\in\mathbb{R}\times\mathbb{R}^{n}\), and \((t,\xi)\mapsto F(x,t,\xi)\) is continuous for a.e. \(x\in\Omega\)). The equation appearing in (1.1) is related to the Euler-Lagrange equation of the energy functional

\[\omega\mapsto\int_{\Omega}g(\nabla\omega)dx, \tag{1.3}\]

where the integrand \(g\) is a combination of two different phases of elliptic behaviors: standard polynomial growth and the logarithmic perturbation of a functional with \((p,q)\)-growth with respect to the gradient. The study of the functional displayed in (1.3) belongs to the realm of integral functionals with non-standard growth conditions, according to Marcellini's pioneering contributions in [29, 30], where the energy density satisfies

\[|\xi|^{p}\lesssim g(\xi)\lesssim|\xi|^{q}+1,\qquad 1<p\leq q. \tag{1.4}\]

The theory of double-phase functionals traces a long way back to the works of Zhikov in [42, 43], who first described a feature of strongly anisotropic materials with two hardening exponents \(p\) and \(q\) in the context of homogenization theory and nonlinear elasticity. In such models, the integrand \(g\) changes its ellipticity rate according to the point, i.e., \(g\equiv g(x,\xi)\). In particular, \(g(x,\xi)=|\xi|^{p}+a(x)|\xi|^{q}\), where the coefficient \(a(\cdot)\) dictates the geometry of a composite of two different materials. For this model, there have been extensive mathematical investigations pertaining to the existence of minimizers. In a related context, various existence results have been established by multiple authors in different directions.
To mention a few: an existence result proved by Perera and Squassina [35] using Morse theory (critical groups); existence and multiplicity results established by Liu and Dai [28] with variational methods; existence proofs by Gasinski and Winkert [20] using the surjectivity of pseudo-monotone operators; the isotropic and anisotropic double-phase problems studied by Radulescu in [36]; Zhang and Radulescu [41] with the tools of critical point theory in generalized Orlicz-Sobolev spaces; and the list of references is far from complete. Besides, many notable works on the regularity of minimizers of such models have been developed in recent years. To explore more, we refer to the studies of Colombo and Mingione in [13, 14], Baroni, Colombo, Mingione [4, 5], Byun and Oh in [10], Beck-Mingione [6], De Filippis-Mingione [15, 16, 17], and the excellent survey paper [33] for more references. To our knowledge, the energy functional of type (1.3) mixing both polynomial and logarithmic perturbations was first considered in [3, 4, 32], in the direction of the regularity of minimizers. In these papers, the authors mentioned the borderline case of double-polynomial-phase functionals

\[\omega\mapsto\mathcal{G}(\omega,\Omega):=\int_{\Omega}\left(a(x)|\nabla\omega|^{p}+\log(e+|\nabla\omega|)|\nabla\omega|^{p}\right)dx, \tag{1.5}\]

where the non-negative function \(a(\cdot)\) is assumed to be bounded. So far as we know, there is not much previous work regarding the existence and/or uniqueness of solutions to the corresponding equations related to such energy functionals, especially when the right-hand side is a convection term (with gradient dependence). Our goal here is to address the question of the existence of at least one weak solution to the corresponding Euler-Lagrange equation of the functional \(\mathcal{G}\) shown in (1.5). Inspired by the recent works on existence theory for quasilinear double-phase problems with \((p,q)\)-Laplacian and convection terms mentioned above, we believe that it would be very interesting to treat the borderline case of the double-phase problem (1.1). Furthermore, for simplicity, we focus on the functional \(\mathcal{G}\) in (1.5) with \(a(\cdot)\equiv 1\), i.e., the double-phase integrand \(g\) depends only on the gradient of the solution. It should be mentioned that the case when the dependence on \(x\) produces the gap, that is, \(\mathcal{A}(x,\xi)=a(x)|\xi|^{p-2}\xi+|\xi|^{p-2}\log(e+|\xi|)\xi\), is also important, and this will be investigated in a forthcoming paper. The main existence result in this paper is based on the surjectivity theorem for pseudo-monotone operators in [11]. To the best of our knowledge, the theory of pseudo-monotone operators is a useful tool in proving existence theorems for quasilinear elliptic and parabolic partial differential equations. The key idea underlying this existence theory, which goes back to Browder [9] and Minty [34], is to study the properties of monotone operators. Later, the existence theorems were extended to a more general class of quasilinear equations by Hartman and Stampacchia in [22]. Since several equations involve operators that are not monotone, Brezis in [8] introduced a vast class of so-called pseudo-monotone operators, thereby extending Browder's existence theorem. This class of operators became an important research topic that has been developed for the solvability of a wide class of quasilinear equations.
Therefore, the proof of such an existence result exploits an argument similar to that in [11] and somewhat enhances previous results in the same direction [20, 28]. The main novelty of this work lies in the model: our problem is driven by a nonhomogeneous differential operator mixing standard polynomial and logarithmic growths. Furthermore, it is worth mentioning that the reaction term is allowed to depend on both the solution and its gradient (the so-called convection term). These features make our main result somewhat interesting. This paper is organized as follows. Section 2 is devoted to notation, definitions, and the necessary background from the theory of double-phase problems and function spaces. In this section, we also sketch the main ideas and ingredients employed in the proofs of our result and point out some technical difficulties. Section 3 is the most substantial; it contains the statement of the main theorem and its proof.

## 2 Preliminary materials

Let us begin with some notation and definitions used throughout this paper. Firstly, notice that we will follow the usual convention of denoting by \(C\) a general positive constant which will not necessarily be the same at different occurrences and which can also change from line to line. Peculiar dependencies on parameters will be emphasized in parentheses when needed, i.e., \(C=C(n,p)\) means that \(C\) depends on \(n,p\). In the sequel, let \(\Omega\) be an open bounded domain in \(\mathbb{R}^{n}\) with smooth boundary \(\partial\Omega\), \(n\geq 2\). We denote by \(|E|\) the finite \(n\)-dimensional Lebesgue measure of a certain measurable set \(E\subset\mathbb{R}^{n}\), and for a measurable map \(h:E\to\mathbb{R}\), we shall denote by

\[\fint_{E}h(x)dx=\frac{1}{|E|}\int_{E}h(x)dx\]

its mean integral on \(E\). For any \(p>1\), the notation \(p^{\prime}\) stands for the Holder conjugate exponent of \(p\), i.e., \(p^{\prime}=\frac{p}{p-1}\). Further, we shall make use of the star notation \(p^{*}\) for the Sobolev critical exponent of \(p\):

\[p^{*}=\begin{cases}\frac{np}{n-p},&\text{if}\ \ p<n,\\ +\infty,&\text{if}\ \ p\geq n.\end{cases}\]

For the convenience of the reader, we recall some basic definitions and properties related to Young functions and generalized Orlicz-Sobolev spaces as follows. For a quite comprehensive account, the interested reader might start by referring to [23] and the bibliography therein.

**Definition 2.1** (Young function): _We say that \(G:[0,\infty)\to[0,\infty)\) is a Young function if \(G\) is non-decreasing, convex, and satisfies the following conditions:_

\[G(0)=0,\ \lim_{t\to\infty}G(t)=\infty,\ \lim_{t\to 0^{+}}\frac{G(t)}{t}=0,\ \lim_{t\to\infty}\frac{G(t)}{t}=\infty.\]

_Then, the complementary Young function \(G^{*}\) to \(G\) is defined by_

\[G^{*}(t)=\sup\{ts-G(s):\ s\geq 0\},\quad t\geq 0.\]
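For orientation, we record the classical example of a complementary pair, added here purely for illustration: for \(1<p<\infty\) and \(p^{\prime}=\frac{p}{p-1}\),

\[G(t)=\frac{t^{p}}{p},\qquad G^{*}(t)=\sup_{s\geq 0}\{ts-G(s)\}=\frac{t^{p^{\prime}}}{p^{\prime}},\qquad st\leq\frac{s^{p}}{p}+\frac{t^{p^{\prime}}}{p^{\prime}}\quad\text{for all }s,t\geq 0,\]

which recovers Young's inequality. Moreover, \(G(2t)=2^{p}G(t)\), so \(G\) satisfies the \(\Delta_{2}\) condition below with \(\tau_{1}=2^{p}\), and one checks similarly that \(G\) satisfies the \(\nabla_{2}\) condition (take \(\tau_{2}=2^{1/(p-1)}\)).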
**Definition 2.2** (\(\Delta_{2}\) and \(\nabla_{2}\)-conditions): _We say that the Young function \(G\) satisfies the \(\Delta_{2}\) condition if there exists \(\tau_{1}>1\) such that_

\[G(2t)\leq\tau_{1}G(t),\ \text{ for all }t\geq 0.\]

_In this case, we write \(G\in\Delta_{2}\). We say that the Young function \(G\) satisfies the \(\nabla_{2}\) condition, denoted by \(G\in\nabla_{2}\), if there exists \(\tau_{2}>1\) such that_

\[G(t)\leq\frac{1}{2\tau_{2}}G(\tau_{2}t),\ \text{ for all }t\geq 0.\]

_It is worth noting that if \(G\) satisfies both the \(\Delta_{2}\) and \(\nabla_{2}\)-conditions, we shall write \(G\in\Delta_{2}\cap\nabla_{2}\)._

In what follows, we state a useful lemma for later use in the proofs; interested readers might consult [23] for further details.

**Lemma 2.3**: _Let \(G\) be a Young function._

* \(G^{*}\) _is also a Young function and_ \((G^{*})^{*}=G\)_._
* \(G\in\nabla_{2}\Leftrightarrow G^{*}\in\Delta_{2}\)_._
* _If_ \(G\in\Delta_{2}\)_, then there exist constants_ \(1<\nu_{1}\leq\nu_{2}\) _and_ \(C>1\)_, independent of_ \(\sigma\) _and_ \(t\)_, such that_ \[C^{-1}\min\{\sigma^{\nu_{1}},\sigma^{\nu_{2}}\}G(t)\leq G(\sigma t)\leq C\max\{\sigma^{\nu_{1}},\sigma^{\nu_{2}}\}G(t),\quad\forall t,\sigma\geq 0.\]
* _If_ \(G\in\Delta_{2}\cap\nabla_{2}\)_, then_ \[G^{*}\left(\frac{G(t)}{t}\right)\leq G(t)\leq G^{*}\left(\frac{2G(t)}{t}\right),\quad\text{ for all }t>0.\] (2.1) _Moreover, for every_ \(\varepsilon\in(0,1)\)_, there exists_ \(C>0\) _such that_ \[st\leq\varepsilon G(s)+CG^{*}(t),\quad\text{ for all }s,t\geq 0.\]
* _If_ \(G\in\Delta_{2}\cap\nabla_{2}\)_, then for every_ \(\varepsilon\in(0,1)\)_, there exists_ \(C>0\) _such that_ \[s\frac{G(t)}{t}\leq\varepsilon G(s)+CG(t),\quad\text{ for all }s\geq 0,t>0.\] (2.2)

Regarding this study, we also collect some more or less known definitions of Orlicz and Orlicz-Sobolev spaces (cf. [25]).

**Definition 2.4** (Orlicz spaces): _Let \(\Omega\) be a bounded open subset in \(\mathbb{R}^{n}\) and \(G:[0,\infty)\to[0,\infty)\) be a Young function. The Orlicz class \(O^{G}(\Omega)\) is defined to be the set of all measurable functions \(f:\Omega\to\mathbb{R}\) satisfying_

\[\int_{\Omega}G(|f(x)|)dx<\infty.\]

_The Orlicz space \(L^{G}(\Omega)\) is the smallest linear space containing \(O^{G}(\Omega)\), endowed with the following Luxemburg norm_

\[\|f\|_{L^{G}(\Omega)}=\inf\left\{\sigma>0:\ \fint_{\Omega}G\left(\frac{|f(x)|}{\sigma}\right)dx\leq 1\right\}.\]

**Definition 2.5** (Orlicz-Sobolev spaces): _Let \(G:[0,\infty)\to[0,\infty)\) be a Young function. We denote by \(W^{1,G}(\Omega)\) the Orlicz-Sobolev space, which is the set of all measurable functions \(f\in L^{G}(\Omega)\) such that \(|\nabla f|\in L^{G}(\Omega)\). In this space, we consider the norm_

\[\|f\|_{W^{1,G}(\Omega)}=\|f\|_{L^{G}(\Omega)}+\|\nabla f\|_{L^{G}(\Omega)},\]

_where we denote \(\|\nabla f\|_{L^{G}(\Omega)}=\||\nabla f|\|_{L^{G}(\Omega)}\). Moreover, we denote by \(W^{1,G}_{0}(\Omega)\) the closure of \(C^{\infty}_{0}(\Omega)\) in \(W^{1,G}(\Omega)\)._

**Lemma 2.6**: _Let \(G\in\Delta_{2}\cap\nabla_{2}\) be a Young function._

* _Then_ \(L^{G}(\Omega)=O^{G}(\Omega)\) _and_ \((L^{G}(\Omega),\|\cdot\|_{L^{G}(\Omega)})\) _is a reflexive Banach space._
* _If_ \(f\in L^{G}(\Omega)\) _and_ \(g\in L^{G^{*}}(\Omega)\)_, then_ \(fg\in L^{1}(\Omega)\) _and there holds_ \[\fint_{\Omega}|fg|dx\leq 2\|f\|_{L^{G}(\Omega)}\|g\|_{L^{G^{*}}(\Omega)}.\] (2.3)

The next part provides definitions and some necessary auxiliary results for the Orlicz-Zygmund and Orlicz-Zygmund-Sobolev spaces, which will be employed later on. We refer to [7, 24, 37] for more details concerning these spaces.
**Definition 2.7** (Orlicz-Zygmund spaces \(L^{p}\log^{\alpha}L(\Omega)\)): _Let \(\alpha>0\) and \(1\leq p<\infty\). The Orlicz-Zygmund space \(L^{p}\log^{\alpha}L(\Omega)\) is defined by_

\[L^{p}\log^{\alpha}L(\Omega)=\left\{f\in L^{1}(\Omega):\ \fint_{\Omega}|f(x)|^{p}\log^{\alpha}(e+|f(x)|)dx<\infty\right\}, \tag{2.4}\]

_with the following finite Luxemburg norm_

\[\|f\|_{L^{p}\log^{\alpha}L(\Omega)}=\inf\left\{\sigma>0:\ \fint_{\Omega}\frac{|f(x)|^{p}}{\sigma^{p}}\log^{\alpha}\left(e+\frac{|f(x)|}{\sigma}\right)dx\leq 1\right\}. \tag{2.5}\]

**Remark 2.8**:

* _If_ \(\alpha=0\)_, then the space_ \(L^{p}\log^{\alpha}L(\Omega)\) _reduces to the classical Lebesgue space_ \(L^{p}(\Omega)\)_, whose norm is simply denoted by_ \(\|\cdot\|_{p}\)_._
* _If_ \(p=1\) _and_ \(\alpha>0\)_, we are concerned with the Zygmund class. In what follows, when_ \(p=1\) _or_ \(\alpha=1\)_, we shall write_ \(L\log^{\alpha}L(\Omega)\) _or_ \(L^{p}\log L(\Omega)\)_, respectively, for simplicity._
* _Let_ \(f\in L^{p}\log^{\alpha}L(\Omega)\) _be such that_ \[\|f\|_{p}:=\left(\fint_{\Omega}|f(x)|^{p}dx\right)^{\frac{1}{p}}>0;\] _we shall denote by_ \[[f]_{L^{p}\log^{\alpha}L(\Omega)}:=\left(\fint_{\Omega}|f(x)|^{p}\log^{\alpha}\left(e+\frac{|f(x)|}{\|f\|_{p}}\right)dx\right)^{1/p}\] (2.6) _the modular function of_ \(f\)_._

**Remark 2.9**: _Within the setting of Orlicz-Zygmund spaces, we observe that it is not easy to work with the Luxemburg norm of the Orlicz-Zygmund space \(L^{p}\log^{\alpha}L(\Omega)\) in (2.5). However, the interesting point here is that one can obtain the equivalence between the Luxemburg norm \(\|\cdot\|_{L^{p}\log^{\alpha}L(\Omega)}\) in (2.5) and the modular function \([\,\cdot\,]_{L^{p}\log^{\alpha}L(\Omega)}\) in (2.6). The following lemma reveals the relationship between the norm and the modular function of \(f\)._

**Lemma 2.10**: _For every \(f\in L^{p}\log^{\alpha}L(\Omega)\) with \(p\geq 1\) and \(\alpha>0\), one has_

\[\|f\|_{L^{p}\log^{\alpha}L(\Omega)}\leq[f]_{L^{p}\log^{\alpha}L(\Omega)}\leq\left[2^{\alpha}+\left(\frac{2\alpha}{ep}\right)^{\alpha}\right]\|f\|_{L^{p}\log^{\alpha}L(\Omega)}. \tag{2.7}\]

_For \(q>p\), there exists a constant \(C=C(p,q,\alpha)>0\) such that_

\[[f]_{L^{p}\log^{\alpha}L(\Omega)}=\left(\fint_{\Omega}|f(x)|^{p}\log^{\alpha}\left(e+\frac{|f(x)|}{\|f\|_{p}}\right)dx\right)^{1/p}\leq C\left(\fint_{\Omega}|f(x)|^{q}dx\right)^{\frac{1}{q}}, \tag{2.8}\]

_for every \(f\in L^{q}(\Omega)\). In particular, there holds_

\[L^{q}(\Omega)\subset L^{p}\log^{\alpha}L(\Omega)\subset L^{p}(\Omega). \tag{2.9}\]

**Proof.** Let \(f\in L^{p}\log^{\alpha}L(\Omega)\) be such that \(\|f\|_{p}>0\), and set \(\sigma=\|f\|_{L^{p}\log^{\alpha}L(\Omega)}\).
It is easy to see that

\[\sigma=\left[\fint_{\Omega}|f(x)|^{p}\log^{\alpha}\left(e+\frac{|f(x)|}{\sigma}\right)dx\right]^{\frac{1}{p}}\geq\|f\|_{p}, \tag{2.10}\]

which implies

\[\sigma\leq\left[\fint_{\Omega}|f(x)|^{p}\log^{\alpha}\left(e+\frac{|f(x)|}{\|f\|_{p}}\right)dx\right]^{\frac{1}{p}}=[f]_{L^{p}\log^{\alpha}L(\Omega)}.\]

Applying the following fundamental inequality

\[\log^{\alpha}(e+st)\leq 2^{\alpha-1}\big{[}\log^{\alpha}(e+s)+\log^{\alpha}t\big{]},\quad s\geq 0,\ t\geq 1, \tag{2.11}\]

one obtains that

\[\begin{split}[f]^{p}_{L^{p}\log^{\alpha}L(\Omega)}&=\fint_{\Omega}|f(x)|^{p}\log^{\alpha}\left(e+\frac{|f(x)|}{\|f\|_{p}}\right)dx\\ &\leq 2^{\alpha-1}\left[\fint_{\Omega}|f(x)|^{p}\log^{\alpha}\left(e+\frac{|f(x)|}{\sigma}\right)dx+\fint_{\Omega}|f(x)|^{p}\log^{\alpha}\left(\frac{\sigma}{\|f\|_{p}}\right)dx\right].\end{split} \tag{2.12}\]

At this stage, we are allowed to make use of the following inequality

\[\log^{\alpha}t\leq\left(\frac{\alpha}{ep}\right)^{\alpha}t^{p},\quad\text{for all }t\geq 1, \tag{2.13}\]

and it follows from (2.12) that

\[[f]^{p}_{L^{p}\log^{\alpha}L(\Omega)}\leq 2^{\alpha-1}\left[\sigma^{p}+\left(\frac{\alpha}{ep}\right)^{\alpha}\fint_{\Omega}|f(x)|^{p}\frac{\sigma^{p}}{\|f\|_{p}^{p}}dx\right]\leq 2^{\alpha p}\left[1+\left(\frac{\alpha}{ep}\right)^{\alpha}\right]^{p}\sigma^{p};\]

therefore, we invoke this estimate to conclude (2.7). We also send the reader to [24] for considerations and references concerning the case \(\alpha=1\). In the next step, we prove (2.8). Note that in [2], the authors dealt with the case \(p=1\); so, let us now prove it for the general case \(p\geq 1\). Using the preceding inequality (2.13), we arrive at

\[\log^{\alpha}(e+t)\leq\left(\frac{\alpha}{ep}\right)^{\alpha}(e+t)^{p}\leq\left(\frac{\alpha}{ep}\right)^{\alpha}2^{p-1}e^{p}\left(1+t^{p}\right),\]

for all \(t\geq 0\) and \(\alpha>0\). Combining Holder's inequality with this, there holds

\[\fint_{\Omega}|f(x)|^{p}\log^{\alpha}\left(e+\frac{|f(x)|}{\|f\|_{p}}\right)dx\leq C(p,q,\alpha)\left(\fint_{\Omega}|f(x)|^{q}dx\right)^{\frac{p}{q}}\left(1+\fint_{\Omega}\frac{|f(x)|^{p}}{\|f\|_{p}^{p}}dx\right)^{1-\frac{p}{q}}\leq C(p,q,\alpha)\left(\fint_{\Omega}|f(x)|^{q}dx\right)^{\frac{p}{q}},\]

which yields (2.8). Finally, the last relation (2.9) can be obtained directly from (2.10) and (2.8). The proof is complete.

**Remark 2.11**: _Note at this point that there exist two positive constants \(C_{1}(\gamma),C_{2}(\gamma)\) such that the estimate_

\[C_{1}(\gamma)\log(e+t)\leq\log(e+t^{\gamma})\leq C_{2}(\gamma)\log(e+t)\]

_holds for every \(t\geq 0\) and \(\gamma>0\); hence_

\[f\in L^{q}\log L(\Omega)\Leftrightarrow|f|^{\frac{q}{p}}\in L^{p}\log L(\Omega),\]

_for all \(p,q\geq 1\)._

The results we are interested in here can be obtained by an argument analogous to the above. Given \(p>1\), let us consider the Orlicz-Zygmund space \(\mathbb{L}^{p\log}(\Omega)\) defined by

\[\mathbb{L}^{p\log}(\Omega)=\left\{f\in L^{1}(\Omega):\ \fint_{\Omega}\left(|f(x)|^{p}+|f(x)|^{p}\log(e+|f(x)|)\right)dx<\infty\right\}, \tag{2.14}\]

with the following Luxemburg norm

\[\|f\|_{\mathbb{L}^{p\log}(\Omega)}=\inf\left\{\sigma>0:\ \fint_{\Omega}\left(\frac{|f(x)|^{p}}{\sigma^{p}}+\frac{|f(x)|^{p}}{\sigma^{p}}\log\left(e+\frac{|f(x)|}{\sigma}\right)\right)dx\leq 1\right\}. \tag{2.15}\]
Assume that \(f\in\mathbb{L}^{p\log}(\Omega)\) satisfies \(\|f\|_{p}>0\); we shall denote by

\[[f]_{\mathbb{L}^{p\log}(\Omega)}:=\left[\fint_{\Omega}\left(|f(x)|^{p}+|f(x)|^{p}\log\left(e+\frac{|f(x)|}{\|f\|_{p}}\right)\right)dx\right]^{1/p} \tag{2.16}\]

the modular function of \(f\). In a similar fashion to Lemma 2.10, one can prove that the following relation between the Luxemburg norm \(\|\cdot\|_{\mathbb{L}^{p\log}(\Omega)}\) in (2.15) and the modular function \([\,\cdot\,]_{\mathbb{L}^{p\log}(\Omega)}\) in (2.16) holds (Lemma 2.12 below).

**Lemma 2.12**: _For every \(f\in\mathbb{L}^{p\log}(\Omega)\) with \(p>1\), one has_

\[\|f\|_{\mathbb{L}^{p\log}(\Omega)}\leq[f]_{\mathbb{L}^{p\log}(\Omega)}\leq 4\|f\|_{\mathbb{L}^{p\log}(\Omega)}. \tag{2.17}\]

**Definition 2.13** (Orlicz-Zygmund-Sobolev spaces): _The Orlicz-Zygmund-Sobolev space, denoted by \(W^{1,p\log}(\Omega)\), is the set of all measurable functions \(f\in\mathbb{L}^{p\log}(\Omega)\) such that \(|\nabla f|\in\mathbb{L}^{p\log}(\Omega)\). This function space is equipped with the norm_

\[\|f\|_{W^{1,p\log}(\Omega)}=\|f\|_{\mathbb{L}^{p\log}(\Omega)}+\|\nabla f\|_{\mathbb{L}^{p\log}(\Omega)},\]

_where we denote \(\|\nabla f\|_{\mathbb{L}^{p\log}(\Omega)}=\||\nabla f|\|_{\mathbb{L}^{p\log}(\Omega)}\). Moreover, we denote by \(W^{1,p\log}_{0}(\Omega)\) the closure of \(C^{\infty}_{0}(\Omega)\) in \(W^{1,p\log}(\Omega)\)._

For ease of notation, we also write

\[\mathbb{W}:=W^{1,p\log}(\Omega)\quad\text{ and }\quad\mathbb{W}_{0}=W^{1,p\log}_{0}(\Omega).\]

Here, in \(\mathbb{W}_{0}\), we consider the following norm:

\[\|f\|_{\mathbb{W}_{0}}=\|\nabla f\|_{\mathbb{L}^{p\log}(\Omega)},\quad f\in\mathbb{W}_{0}. \tag{2.18}\]

**Remark 2.14**: _We emphasize here that the Orlicz-Zygmund spaces \(L^{p}\log L(\Omega)\) and \(\mathbb{L}^{p\log}(\Omega)\) in (2.4) and (2.14) can be defined as the Orlicz spaces associated to the Young functions \(\varphi_{p},H_{p}:\mathbb{R}^{+}\to\mathbb{R}^{+}\) given by_

\[\varphi_{p}(t)=t^{p}\log(e+t),\quad t\geq 0, \tag{2.19}\]

_and_

\[H_{p}(t)=t^{p}+t^{p}\log(e+t),\quad t\geq 0, \tag{2.20}\]

_respectively. Similarly, \(\mathbb{W}\) and \(\mathbb{W}_{0}\) can be defined as the Orlicz-Sobolev spaces \(W^{1,H_{p}}(\Omega)\) and \(W^{1,H_{p}}_{0}(\Omega)\), respectively. In this way, since \(\varphi_{p},H_{p}\in\Delta_{2}\cap\nabla_{2}\) for \(p>1\), all the spaces \(L^{p}\log L(\Omega)\), \(\mathbb{L}^{p\log}(\Omega)\), \(\mathbb{W}\), and \(\mathbb{W}_{0}\) are reflexive Banach spaces._

At this stage, we apply [12, Theorems 1 and 3] to obtain the following interesting compact embedding result.

**Lemma 2.15**: _For every \(1<q\leq p^{*}:=\frac{np}{n-p}\), the embedding_

\[\mathbb{W}_{0}\hookrightarrow L^{q}\log L(\Omega) \tag{2.21}\]

_is compact. Moreover, there exists a constant \(C=C(p,q)>0\) such that_

\[\|f\|_{L^{q}\log L(\Omega)}\leq C\|\nabla f\|_{\mathbb{L}^{p\log}(\Omega)}. \tag{2.22}\]

_As a consequence, we obtain_

\[\|f\|_{\mathbb{W}_{0}}\leq\|f\|_{\mathbb{W}}\leq C\|f\|_{\mathbb{W}_{0}}. \tag{2.23}\]
**Proof.** For every Young function \(H\) belonging to \(\Delta_{2}\cap\nabla_{2}\), the complementary function \(H^{*}\) to \(H\) is defined by

\[H^{*}(t)=\sup\{ts-H(s):\ s\geq 0\},\quad t\geq 0.\]

Let us now introduce the following Young function

\[A_{n,H}(t)=\int_{0}^{t}s^{\frac{n}{n-1}-1}\left[\Phi_{n,H}^{-1}(s^{\frac{n}{n-1}})\right]^{\frac{n}{n-1}}ds,\]

where \(\Phi_{n,H}^{-1}\) is the right-continuous generalized inverse of \(\Phi_{n,H}\), defined by

\[\Phi_{n,H}(t)=\int_{0}^{t}\frac{H^{*}(s)}{s^{1+\frac{n}{n-1}}}ds,\ \ \text{and}\ \ \Phi_{n,H}^{-1}(t)=\inf\{s\geq 0:\ \Phi_{n,H}(s)>t\},\]

for \(t\geq 0\). Thanks to [12, Theorem 3], we infer that

\[W_{0}^{1,H}(\Omega)\hookrightarrow L^{A_{n,H}}(\Omega)\]

is a continuous embedding. Moreover, if \(B\in\Delta_{2}\cap\nabla_{2}\) is a Young function that grows more slowly than \(A_{n,H}\), then the following embedding

\[W_{0}^{1,H}(\Omega)\hookrightarrow L^{B}(\Omega)\]

is compact. The proof of the embedding (2.21) is complete once we apply the above result with \(H=H_{p}\) and \(B=\varphi_{q}\), given in (2.20) and (2.19), respectively. In fact, the most important point is to verify that \(B\) grows more slowly than \(A_{n,H}\) in the following sense:

\[\lim_{t\to\infty}\frac{A_{n,H}(t)}{B(t)}=\infty. \tag{2.24}\]

Indeed, let us consider \(G=\varphi_{p}\). Since \(H(t)\geq G(t)\) for every \(t\geq 0\), one has \(H^{*}(t)\leq G^{*}(t)\). This leads to \(\Phi_{n,H}(t)\leq\Phi_{n,G}(t)\), and then \(\Phi_{n,H}^{-1}(t)\geq\Phi_{n,G}^{-1}(t)\) for each \(t\geq 0\). It follows that \(A_{n,H}(t)\geq A_{n,G}(t)\) for all \(t\geq 0\). On the other hand (see [12, Example 1]), it is well known that

\[A_{n,G}(t)=Ct^{p^{*}}\log^{\frac{p^{*}}{p}}(e+t),\quad t\geq 0.\]

Therefore, we have

\[\lim_{t\to\infty}\frac{A_{n,G}(t)}{B(t)}=C\lim_{t\to\infty}t^{p^{*}-q}\log^{\frac{p^{*}}{p}-1}(e+t)=\infty,\quad\text{ for all }q\in(1,p^{*}],\]

which guarantees (2.24). Hence, we conclude that \(B\) grows more slowly than \(A_{n,H}\). We remark here that we may only infer that the embedding

\[W_{0}^{1,H}(\Omega)\hookrightarrow L^{p^{*}}\log^{\frac{p^{*}}{p}}L(\Omega)\]

is continuous. Moreover, applying [12, Theorem 1] again, we obtain inequality (2.22). In particular, applying this embedding result with \(q=p\), there exists a constant \(C>0\) such that

\[\|f\|_{L^{p}\log L(\Omega)}\leq C\|\nabla f\|_{\mathbb{L}^{p\log}(\Omega)}. \tag{2.25}\]

On the other hand, thanks to inequalities (2.17) in Lemma 2.12 and (2.7) in Lemma 2.10, one gets that

\[\|f\|_{\mathbb{L}^{p\log}(\Omega)}\leq[f]_{\mathbb{L}^{p\log}(\Omega)}=\left[\fint_{\Omega}\left(|f(x)|^{p}+|f(x)|^{p}\log\left(e+\frac{|f(x)|}{\|f\|_{p}}\right)\right)dx\right]^{1/p}\leq 2^{\frac{1}{p}}\left[\fint_{\Omega}|f(x)|^{p}\log\left(e+\frac{|f(x)|}{\|f\|_{p}}\right)dx\right]^{1/p}=2^{\frac{1}{p}}[f]_{L^{p}\log L(\Omega)}\leq 8\|f\|_{L^{p}\log L(\Omega)}. \tag{2.26}\]

Combining the two estimates (2.25) and (2.26), we have

\[\|f\|_{\mathbb{W}_{0}}=\|\nabla f\|_{\mathbb{L}^{p\log}(\Omega)}\leq\|f\|_{\mathbb{W}}=\|f\|_{\mathbb{L}^{p\log}(\Omega)}+\|\nabla f\|_{\mathbb{L}^{p\log}(\Omega)}\leq C\|f\|_{\mathbb{W}_{0}}.\]

The proof is complete.

Before closing this section, we briefly recall some definitions that will be required later in the proof of our main results.

**Definition 2.16**: _Let \(\mathbb{W}\) be a reflexive Banach space endowed with the norm \(\|\cdot\|_{\mathbb{W}}\)._
Before closing this section, we briefly recall some definitions that will be required later for the proof of our main results.

**Definition 2.16**: _Let \(\mathbb{W}\) be a reflexive Banach space endowed with the norm \(\|\cdot\|_{\mathbb{W}}\). We denote by \(\mathbb{W}^{*}\) the topological dual space of \(\mathbb{W}\) and by \(\langle\cdot,\cdot\rangle\) the duality pairing between \(\mathbb{W}\) and \(\mathbb{W}^{*}\). Norm convergence is denoted by \(\rightarrow\) and weak convergence by \(\rightharpoonup\). Consider a nonlinear map \(A:\mathbb{W}\rightarrow\mathbb{W}^{*}\); then_ 1. \(A\) _is called bounded if it maps bounded sets to bounded sets;_ 2. \(A\) _is said to be coercive if_ \[\lim_{\|u\|_{\mathbb{W}}\to\infty}\frac{\langle Au,u\rangle}{\|u\|_{\mathbb{W}}}=+\infty;\] 3. \(A\) _is called pseudo-monotone if, whenever_ \(u_{n}\rightharpoonup u\) _in_ \(\mathbb{W}\) _and_ \(\limsup_{n\rightarrow+\infty}\left\langle Au_{n},u_{n}-u\right\rangle\leq 0\)_, it follows that_ \[\left\langle Au,u-w\right\rangle\leq\liminf_{n\rightarrow\infty}\left\langle Au_{n},u_{n}-w\right\rangle,\quad\forall w\in\mathbb{W};\] 4. \(A\) _is said to satisfy the_ \((S_{+})\)_-property if_ \(u_{n}\rightharpoonup u\) _in_ \(\mathbb{W}\) _and_ \(\limsup_{n\rightarrow+\infty}\left\langle Au_{n},u_{n}-u\right\rangle\leq 0\) _imply_ \(u_{n}\to u\) _in_ \(\mathbb{W}\)_._

## 3 Statement of existence theorem

In this section, we present the main result, whose proof will be carried out in the last section. As already alluded to in the introduction, we focus our attention in this paper on the existence of at least one weak solution to (1.1). The solution of problem (1.1) is understood in the weak sense: a weak solution to (1.1) is a map \(u\in\mathbb{W}_{0}\) that satisfies the weak formulation \[\int_{\Omega}\mathcal{A}(\nabla u)\cdot\nabla vdx=\int_{\Omega}F(x,u,\nabla u)vdx, \tag{3.1}\] for every test function \(v\in\mathbb{W}_{0}\), where \(\mathbb{W}_{0}\) is the Orlicz-Zygmund-Sobolev space introduced in Definition 2.13. Before going into details, let us briefly discuss the eigenvalue problem of the \(p\)-Laplace operator for \(1<p<\infty\) on a bounded domain \(\Omega\) with homogeneous Dirichlet boundary condition: \[\begin{cases}-\text{div}(|\nabla u|^{p-2}\nabla u)&=\ \lambda|u|^{p-2}u,\quad\text{ in }\Omega,\\ \ u&=\ 0,\quad\quad\quad\quad\quad\text{ on }\partial\Omega.\end{cases} \tag{3.2}\] In several situations we are interested in the first eigenvalue, say \(\lambda_{1,p}\), of problem (3.2). The first eigenvalue has some well-known properties: it is positive, simple, and isolated (see, e.g., [26]). In addition, the inequality \[\int_{\Omega}|u(x)|^{p}dx\leq\frac{1}{\lambda_{1,p}}\int_{\Omega}|\nabla u(x)|^{p}dx, \tag{3.3}\] holds for every \(u\in W_{0}^{1,p}(\Omega)\). We refer the reader to [27, 21] for further references.
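As a quick numerical illustration of inequality (3.3) (our own sketch, not part of the paper's argument), one may check it for \(p=2\) on the model domain \(\Omega=(0,1)\), where the first Dirichlet eigenvalue is \(\lambda_{1,2}=\pi^{2}\) with eigenfunction \(\sin(\pi x)\); the test functions below are assumed choices.

```python
import numpy as np

# Check of (3.3) with p = 2 on Omega = (0, 1): lambda_{1,2} = pi^2.
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
lam = np.pi ** 2

def trapz(f):
    """Trapezoidal rule on the uniform grid x."""
    return dx * (0.5 * (f[0] + f[-1]) + f[1:-1].sum())

for u, du in [
    (np.sin(np.pi * x), np.pi * np.cos(np.pi * x)),   # first eigenfunction
    (x * (1.0 - x), 1.0 - 2.0 * x),                   # generic test function
]:
    lhs = trapz(u ** 2)             # integral of |u|^p
    rhs = trapz(du ** 2) / lam      # (1/lambda_{1,p}) integral of |grad u|^p
    print(f"lhs = {lhs:.6f} <= rhs = {rhs:.6f}")
# The eigenfunction attains equality; x(1-x) satisfies the strict inequality.
```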
With all the preparations at hand, we can state the main existence result of this paper.

**Theorem 3.1**: _Let \(1<p<\infty\), \(n\geq 2\) and let \(\lambda_{1,p}\) be the first eigenvalue of problem (3.2). Assume that the parameters satisfy_ \[\mu_{1},\mu_{2}>0,\quad 0<\mu_{3}<\lambda_{1,p},\quad\ 0<\mu_{4}<1,\quad\text{ and }1<q\leq\frac{np}{n-p}. \tag{3.4}\] _Assume further that the nonlinearity on the right-hand side of (1.1), \(F:\Omega\times\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}\), is a Carathéodory function and that there exist two functions \(g\in(L^{q}\log L(\Omega))^{*}\) and \(h\in L^{1}(\Omega)\) satisfying the following conditions_ \[\begin{cases}|F(x,t,\xi)|\leq g(x)+\mu_{1}|t|^{q-1}\log(e+|t|)+\mu_{2}|\xi|^{\frac{p(q-1)}{q}}\log\left(e+|\xi|^{\frac{q}{p}}\right),\\ F(x,t,\xi)t\leq h(x)+\mu_{3}|t|^{p}+\mu_{4}|\xi|^{p}\log(e+|\xi|),\end{cases} \tag{3.5}\] _for a.e. \(x\in\Omega\), for all \(t\in\mathbb{R}\) and \(\xi\in\mathbb{R}^{n}\). Then, equation (1.1) admits at least one weak solution \(u\in\mathbb{W}_{0}\)._

**Remark 3.2**: _Theorem 3.1 can be extended in a natural way to the study of a class of unbalanced double-phase problems with variable exponent, i.e., when \(p\equiv p(\cdot)\), especially when we deal with obstacle models (models that combine a double-phase operator with an obstacle constraint). As far as we know, models with variable exponents have been the object of intensive interest in the applied sciences: for instance, non-Newtonian fluids that change their viscosity in the presence of an electromagnetic field, image segmentation, etc. Generalizing the existence result to this case is therefore significantly more involved and will be addressed in the future._

_Also, in analogy with quasilinear elliptic equations/systems driven by double-phase operators involving two distinct power-hardening exponents \(p\) and \(q\), of the type_ \[-\mathrm{div}(|\nabla u|^{p-2}\nabla u+a(x)|\nabla u|^{q-2}\nabla u)=F(x,u,\nabla u),\quad\mbox{in}\ \ \Omega,\] _treated recently by many authors [19, 20, 28, 31, 40], it would be very interesting to deal with the existence of solutions for problem (1.1), or the Euler-Lagrange equation of the functional in (1.5), under different types of boundary conditions (Dirichlet, Neumann, and mixed)._

**Remark 3.3**: _From a mathematical point of view, not only has the existence of weak solutions/minimizers of certain functionals received special attention, but the regularity theory has also attracted much interest recently. The regularity of non-uniformly elliptic equations with \((p,q)\)-growth has been widely investigated; we mention, for example, some famous results in regularity theory as in [1, 2, 3, 4, 5, 6, 10, 13, 14, 15, 16, 17, 29, 30, 32, 38, 39] and the references therein. Since the regularity for borderline double-phase problems is not yet completely settled, it is natural to establish regularity properties for minimizers/solutions to variational integrals and related equations._

**Remark 3.4**: _It should be noted that, in the general context of functionals with double-phase structure and related equations whose integrand grows almost linearly with respect to the gradient, existence results are a much more significant challenge. In a notable work, De Filippis and Mingione [18] dealt with an initial regularity result for the minimizers of functionals of "nearly linear" type_ \[\omega\mapsto\mathcal{I}(\omega,\Omega):=\int_{\Omega}\left(a(x)|\nabla\omega|\log(1+|\nabla\omega|)+b(x)|\nabla\omega|^{q}\right)dx.\] _This model combines the classical nearly linear growth with \(q\)-power growth. Many open questions on both the existence and regularity of such problems remain ahead of us.
We expect that the same result can be extended to solutions of nonlinear problems in such a borderline case._

## 4 Proof of main result

This section sets forth the proof of Theorem 3.1 with all the preceding results at hand. The proof of our theorem is based on the following surjectivity theorem for pseudo-monotone operators; for more details, we refer to [11, Theorem 2.99].

**Theorem 4.1**: _Let \(\mathbb{W}\) be a real reflexive Banach space. Assume that \(\mathcal{G}:\mathbb{W}\to\mathbb{W}^{*}\) is a bounded, pseudo-monotone and coercive operator. Then the equation \(\mathcal{G}(u)=0\) admits at least one solution in \(\mathbb{W}\)._

Although the key idea of the proof follows a similar path to the proof for double-phase problems with \((p,q)\)-growth in [20], our problem (1.1) (known as the borderline case of a double-phase problem) is difficult to handle by its very nature: due to the logarithmic growth, the technique is not the same as in [20]. In a natural way, in view of (3.1), the operator \({\cal G}\) mentioned in Theorem 4.1 will be defined by \[\langle{\cal G}(u),v\rangle=\int_{\Omega}\left[{\cal A}(\nabla u)\cdot\nabla v-F(x,u,\nabla u)v\right]dx,\quad v\in\mathbb{W}_{0}. \tag{4.1}\] The first difficulty comes from the well-definedness of the operator \({\cal G}\), which is obtained via a compact embedding of \(\mathbb{W}_{0}\) into another reflexive Banach space. Of course, if we embed \(\mathbb{W}_{0}\) into Lebesgue spaces, then the assumptions on the convection term \(F\) may not be optimal (see assumption (3.5)). To obtain a better condition on \(F\), we rely on a compact embedding into the Orlicz-Zygmund space \(L^{q}\log L(\Omega)\), which, to the best of our knowledge, is optimal. This embedding result can be obtained from a sharp theorem proved by Cianchi in [12]. Furthermore, when we work with the Luxemburg norm of the Orlicz-Zygmund space \(L^{q}\log L(\Omega)\), we often have to consider an additional integral term related to the modular function; the proof of the coercivity of \({\cal G}\) defined in (4.1) therefore becomes more delicate. In order to obtain the pseudo-monotonicity of \({\cal G}\), one needs to show the \((S_{+})\) condition for the operator related to the term on the left-hand side of (3.1) that involves the logarithmic perturbation. In this paper, we use a method very different from the ones in [28] and [20]. At this stage, let us consider the following double-phase operator \({\cal L}:\mathbb{W}_{0}\rightarrow\mathbb{W}_{0}^{\star}\) defined by \[\langle{\cal L}(u),v\rangle=\int_{\Omega}{\cal A}(\nabla u)\cdot\nabla vdx, \tag{4.2}\] for every \(u,v\in\mathbb{W}_{0}\). The next lemma ensures some important properties of the operator \({\cal L}\) that play a crucial role in the main proof of existence.

**Lemma 4.2**: _The operator \({\cal L}\) defined in (4.2) is continuous, bounded, monotone, and it satisfies the \((S)_{+}\) condition._

**Proof.** It is easy to see that \({\cal L}\) is continuous. Let us now show that \({\cal L}\) is bounded. Indeed, let us set \(\tilde{u}=u/\|u\|_{\mathbb{W}_{0}}\) and \(\tilde{v}=v/\|v\|_{\mathbb{W}_{0}}\).
One has \[|\langle{\cal L}\left(\tilde{u}\right),\tilde{v}\rangle|=\left|\int_{\Omega}\left[|\nabla\tilde{u}|^{p-2}\,\nabla\tilde{u}\cdot\nabla\tilde{v}+|\nabla\tilde{u}|^{p-2}\log\left(e+|\nabla\tilde{u}|\right)\nabla\tilde{u}\cdot\nabla\tilde{v}\right]dx\right|\leq\int_{\Omega}|\nabla\tilde{u}|^{p-1}\,|\nabla\tilde{v}|\,dx+\int_{\Omega}|\nabla\tilde{u}|^{p-1}\log\left(e+|\nabla\tilde{u}|\right)|\nabla\tilde{v}|\,dx. \tag{4.3}\] Young's inequality gives us \[\int_{\Omega}|\nabla\tilde{u}|^{p-1}\,|\nabla\tilde{v}|\,dx\leq C\int_{\Omega}|\nabla\tilde{u}|^{p}\,dx+C\int_{\Omega}|\nabla\tilde{v}|^{p}\,dx. \tag{4.4}\] Thanks to the modified form of Young's inequality in (2.2), there exists a constant \(C>0\) such that \[\int_{\Omega}|\nabla\tilde{u}|^{p-1}\log\left(e+|\nabla\tilde{u}|\right)|\nabla\tilde{v}|\,dx\leq C\int_{\Omega}|\nabla\tilde{u}|^{p}\log\left(e+|\nabla\tilde{u}|\right)dx+C\int_{\Omega}|\nabla\tilde{v}|^{p}\log\left(e+|\nabla\tilde{v}|\right)dx. \tag{4.5}\] Substituting (4.5) and (4.4) into (4.3), one obtains that \[|\langle\mathcal{L}\left(\tilde{u}\right),\tilde{v}\rangle|\leq C\int_{\Omega}|\nabla\tilde{u}|^{p}+|\nabla\tilde{u}|^{p}\log\left(e+|\nabla\tilde{u}|\right)dx+C\int_{\Omega}|\nabla\tilde{v}|^{p}+|\nabla\tilde{v}|^{p}\log\left(e+|\nabla\tilde{v}|\right)dx\leq C\left(\|\tilde{u}\|_{\mathbb{W}_{0}}+\|\tilde{v}\|_{\mathbb{W}_{0}}\right)=C. \tag{4.6}\] From (4.6), we conclude that \[\|\mathcal{L}(u)\|_{\mathbb{W}_{0}^{*}}=\sup_{\|v\|_{\mathbb{W}_{0}}\leq 1}|\langle\mathcal{L}(u),v\rangle|\leq C\|u\|_{\mathbb{W}_{0}}.\] The fact that the operator \(\mathcal{L}\) is monotone comes from the ellipticity condition of the vector field \(\mathcal{A}\). Indeed, let us define two auxiliary vector fields \(V_{p},V_{p\log}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) by \[V_{p}(\xi):=|\xi|^{\frac{p-2}{2}}\xi,\ V_{p\log}(\xi):=\left(p|\xi|^{p-2}\log(e+|\xi|)+\frac{|\xi|^{p-1}}{e+|\xi|}\right)^{\frac{1}{2}}\xi,\] whenever \(\xi\in\mathbb{R}^{n}\). These auxiliary maps will be very useful in our proofs later. With their aid, there exists a constant \(C>0\) such that \[\left(\mathcal{A}(\xi_{1})-\mathcal{A}(\xi_{2})\right)\cdot(\xi_{1}-\xi_{2})\geq C\left[|V_{p}(\xi_{1})-V_{p}(\xi_{2})|^{2}+|V_{p\log}(\xi_{1})-V_{p\log}(\xi_{2})|^{2}\right], \tag{4.7}\] for every \(\xi_{1},\xi_{2}\in\mathbb{R}^{n}\). We refer the reader to [4, Section 3.2] for details concerning the proof of (4.7). Then, it is readily verified that \[\langle\mathcal{L}(u_{1})-\mathcal{L}(u_{2}),u_{1}-u_{2}\rangle=\int_{\Omega}\left(\mathcal{A}(\nabla u_{1})-\mathcal{A}(\nabla u_{2})\right)\cdot(\nabla u_{1}-\nabla u_{2})\,dx\geq C\int_{\Omega}\left[|V_{p}(\nabla u_{1})-V_{p}(\nabla u_{2})|^{2}+|V_{p\log}(\nabla u_{1})-V_{p\log}(\nabla u_{2})|^{2}\right]dx,\] which implies that \(\mathcal{L}\) is monotone. In the next step, we prove that \(\mathcal{L}\) satisfies the \((S)_{+}\) condition. Assume that \(u_{k}\rightharpoonup u\) in \(\mathbb{W}_{0}\) and \(\limsup_{k\to\infty}\langle\mathcal{L}(u_{k}),u_{k}-u\rangle\leq 0\); we need to show that \(u_{k}\to u\) in \(\mathbb{W}_{0}\).
Firstly, it is obvious that \[\lim_{k\to\infty}\langle\mathcal{L}(u),u_{k}-u\rangle=0,\] which implies \[\limsup_{k\to\infty}\langle\mathcal{L}(u_{k})-\mathcal{L}(u),u_{k}-u\rangle\leq 0.\] Combining this with the fact that \(\mathcal{L}\) is monotone, it allows us to conclude that \[\lim_{k\to\infty}\langle\mathcal{L}(u_{k})-\mathcal{L}(u),u_{k}-u\rangle=0.\] In other words, we have \[\lim_{k\to\infty}\fint_{\Omega}\left(\mathcal{A}(\nabla u_{k})-\mathcal{A}(\nabla u)\right)\cdot\left(\nabla u_{k}-\nabla u\right)dx=0. \tag{4.8}\] It follows from [4, Section 3.2] that there exists a constant \(C>0\) such that \[\fint_{\Omega}\left(\mathcal{A}(\nabla u_{k})-\mathcal{A}(\nabla u)\right)\cdot\left(\nabla u_{k}-\nabla u\right)dx\geq C\fint_{\Omega}|\nabla u_{k}-\nabla u|^{2}(|\nabla u_{k}|^{2}+|\nabla u|^{2})^{\frac{p-2}{2}}dx+C\fint_{\Omega}|\nabla u_{k}-\nabla u|^{2}(|\nabla u_{k}|^{2}+|\nabla u|^{2})^{\frac{p-2}{2}}\log(e+|\nabla u_{k}|+|\nabla u|)dx. \tag{4.9}\] On the other hand, for every \(\varepsilon>0\), Young's inequality yields a constant \(C_{\varepsilon}=C_{\varepsilon}(p,\varepsilon)>0\) such that \[\fint_{\Omega}|\nabla u_{k}-\nabla u|^{p}dx\leq\varepsilon\fint_{\Omega}|\nabla u|^{p}dx+C_{\varepsilon}\fint_{\Omega}|\nabla u_{k}-\nabla u|^{2}(|\nabla u_{k}|^{2}+|\nabla u|^{2})^{\frac{p-2}{2}}dx. \tag{4.10}\] For the detailed proof of the comparison estimate (4.10), we refer the reader to our previous work [39, Lemma 3.2]. The same technique yields \[\fint_{\Omega}|\nabla u_{k}-\nabla u|^{p}\log(e+|\nabla u_{k}-\nabla u|)dx\leq\varepsilon\left(\fint_{\Omega}|\nabla u|^{p}\log(e+|\nabla u|)dx+\fint_{\Omega}|\nabla u_{k}|^{p}\log(e+|\nabla u_{k}|)dx\right)+C_{\varepsilon}\fint_{\Omega}|\nabla u_{k}-\nabla u|^{2}(|\nabla u_{k}|^{2}+|\nabla u|^{2})^{\frac{p-2}{2}}\log(e+|\nabla u_{k}|+|\nabla u|)dx. \tag{4.11}\] Indeed, to obtain (4.11), it is worth noting that the inequality \[|\nabla u_{k}-\nabla u|^{p}\log(e+|\nabla u_{k}-\nabla u|)\leq C|\nabla u_{k}-\nabla u|^{2}(|\nabla u_{k}|^{2}+|\nabla u|^{2})^{\frac{p-2}{2}}\log(e+|\nabla u_{k}|+|\nabla u|)\] holds whenever \(p\geq 2\). When \(1<p<2\), for every \(\varepsilon>0\) Young's inequality gives \[|\nabla u_{k}-\nabla u|^{p}\log(e+|\nabla u_{k}-\nabla u|)=(|\nabla u_{k}|^{2}+|\nabla u|^{2})^{\frac{p(2-p)}{4}}\left((|\nabla u_{k}|^{2}+|\nabla u|^{2})^{\frac{p-2}{2}}|\nabla u_{k}-\nabla u|^{2}\right)^{\frac{p}{2}}\log(e+|\nabla u_{k}-\nabla u|)\leq\varepsilon(|\nabla u_{k}|+|\nabla u|)^{p}\log(e+|\nabla u_{k}|+|\nabla u|)+C_{\varepsilon}|\nabla u_{k}-\nabla u|^{2}(|\nabla u_{k}|^{2}+|\nabla u|^{2})^{\frac{p-2}{2}}\log(e+|\nabla u_{k}|+|\nabla u|)\leq C\varepsilon\left(|\nabla u_{k}|^{p}\log(e+|\nabla u_{k}|)+|\nabla u|^{p}\log(e+|\nabla u|)\right)+C_{\varepsilon}|\nabla u_{k}-\nabla u|^{2}(|\nabla u_{k}|^{2}+|\nabla u|^{2})^{\frac{p-2}{2}}\log(e+|\nabla u_{k}|+|\nabla u|),\] which leads to (4.11).
Moreover, by using the inequality \[|\nabla u_{k}|^{p}\log(e+|\nabla u_{k}|)\leq C\big{[}|\nabla u|^{p}\log(e+|\nabla u|)+|\nabla u-\nabla u_{k}|^{p}\log(e+|\nabla u-\nabla u_{k}|)\big{]},\] we can deduce from (4.11) that \[\fint_{\Omega}|\nabla u_{k}-\nabla u|^{p}\log(e+|\nabla u_{k}-\nabla u|)dx\leq\varepsilon\fint_{\Omega}|\nabla u|^{p}\log(e+|\nabla u|)dx+C_{\varepsilon}\fint_{\Omega}|\nabla u_{k}-\nabla u|^{2}(|\nabla u_{k}|^{2}+|\nabla u|^{2})^{\frac{p-2}{2}}\log(e+|\nabla u_{k}|+|\nabla u|)dx. \tag{4.12}\] Combining the estimates (4.9), (4.10) and (4.12), one obtains that \[\fint_{\Omega}|\nabla u_{k}-\nabla u|^{p}+|\nabla u_{k}-\nabla u|^{p}\log(e+|\nabla u_{k}-\nabla u|)dx\leq\varepsilon\fint_{\Omega}|\nabla u|^{p}+|\nabla u|^{p}\log(e+|\nabla u|)dx+C_{\varepsilon}\fint_{\Omega}\left(\mathcal{A}(\nabla u_{k})-\mathcal{A}(\nabla u)\right)\cdot\left(\nabla u_{k}-\nabla u\right)dx, \tag{4.13}\] for all \(\varepsilon>0\). For every \(\delta\in(0,1)\), let us choose \(\varepsilon>0\) in (4.13) such that \[\varepsilon\left(1+\fint_{\Omega}|\nabla u|^{p}+|\nabla u|^{p}\log(e+|\nabla u|)dx\right)<\delta^{p}.\] By (4.8), there exists \(k_{0}\in\mathbb{N}\) such that \[\fint_{\Omega}\left(\mathcal{A}(\nabla u_{k})-\mathcal{A}(\nabla u)\right)\cdot\left(\nabla u_{k}-\nabla u\right)dx\leq\varepsilon C_{\varepsilon}^{-1},\quad\forall k\geq k_{0}.\] Therefore, it follows from (4.13) that \[\fint_{\Omega}|\nabla u_{k}-\nabla u|^{p}+|\nabla u_{k}-\nabla u|^{p}\log(e+|\nabla u_{k}-\nabla u|)dx\leq\varepsilon\left(1+\fint_{\Omega}|\nabla u|^{p}+|\nabla u|^{p}\log(e+|\nabla u|)dx\right)<\delta^{p}, \tag{4.14}\] for all \(k\geq k_{0}\). In what follows, when no confusion arises, we shall always consider \(k\geq k_{0}\). From (4.14), we have \[0<\|\nabla u_{k}-\nabla u\|_{p}<\delta<1,\] and, at this stage, applying (2.11) with \(\alpha=1\), we arrive at \[\fint_{\Omega}|\nabla u_{k}-\nabla u|^{p}\log\left(e+\frac{|\nabla u_{k}-\nabla u|}{\|\nabla u_{k}-\nabla u\|_{p}}\right)dx\leq\fint_{\Omega}|\nabla u_{k}-\nabla u|^{p}\log(e+|\nabla u_{k}-\nabla u|)dx+\fint_{\Omega}|\nabla u_{k}-\nabla u|^{p}\log\left(\frac{1}{\|\nabla u_{k}-\nabla u\|_{p}}\right)dx. \tag{4.15}\] Further, by (2.13) with \(\alpha=p=1\), we readily obtain \[\fint_{\Omega}|\nabla u_{k}-\nabla u|^{p}\log\left(\frac{1}{\|\nabla u_{k}-\nabla u\|_{p}}\right)dx\leq\frac{1}{e}\fint_{\Omega}|\nabla u_{k}-\nabla u|^{p}\frac{1}{\|\nabla u_{k}-\nabla u\|_{p}}dx\leq\frac{1}{e}\delta^{p-1},\] and, due to (4.15), this gives \[\fint_{\Omega}|\nabla u_{k}-\nabla u|^{p}\log\left(e+\frac{|\nabla u_{k}-\nabla u|}{\|\nabla u_{k}-\nabla u\|_{p}}\right)dx\leq\fint_{\Omega}|\nabla u_{k}-\nabla u|^{p}\log(e+|\nabla u_{k}-\nabla u|)dx+\frac{1}{e}\delta^{p-1}. \tag{4.16}\] Moreover, from the definition (2.18) of the norm in \(\mathbb{W}_{0}\) and by virtue of inequality (2.17) of Lemma 2.12, we have \[\|u_{k}-u\|_{\mathbb{W}_{0}}=\|\nabla u_{k}-\nabla u\|_{\mathbb{L}^{p\log}(\Omega)}\leq\left[\fint_{\Omega}|\nabla u_{k}-\nabla u|^{p}+|\nabla u_{k}-\nabla u|^{p}\log\left(e+\frac{|\nabla u_{k}-\nabla u|}{\|\nabla u_{k}-\nabla u\|_{p}}\right)dx\right]^{\frac{1}{p}}. \tag{4.17}\] Substituting (4.14) and (4.16) into (4.17), we infer that \[\|u_{k}-u\|_{\mathbb{W}_{0}}\leq\left[\delta^{p}+\frac{1}{e}\delta^{p-1}\right]^{\frac{1}{p}}.\] Since \(\delta\in(0,1)\) is arbitrary, we conclude that \[\lim_{k\to\infty}\|u_{k}-u\|_{\mathbb{W}_{0}}=0,\] that is, \(u_{k}\to u\) in \(\mathbb{W}_{0}\).
The proof is complete.

With the previous lemma at hand, we are now in a position to complete the proof of Theorem 3.1, the main existence result of this study.

**Proof of Theorem 3.1.** Let \(I^{*}:(L^{q}\log L(\Omega))^{*}\to\mathbb{W}_{0}^{*}\) be the adjoint operator of the compact embedding \(I:\mathbb{W}_{0}\to L^{q}\log L(\Omega)\). Let \[\tilde{\mathcal{N}}_{F}:\mathbb{W}_{0}\subset L^{q}\log L(\Omega)\to(L^{q}\log L(\Omega))^{*}\] be the Nemytskij operator associated with \(F\), and set \(\mathcal{N}_{F}=I^{*}\circ\tilde{\mathcal{N}}_{F}\). According to assumption (3.5)\({}_{1}\), we arrive at \[|F(x,u,\nabla u)|\leq g(x)+\mu_{1}|u|^{q-1}\log(e+|u|)+\mu_{2}|\nabla u|^{\frac{p(q-1)}{q}}\log\left(e+|\nabla u|^{\frac{q}{p}}\right).\] Thanks to inequality (2.1) in Lemma 2.3, if the Young function \(G\in\Delta_{2}\cap\nabla_{2}\) and \(u\in L^{G}(\Omega)\), then \(\frac{G(|u|)}{|u|}\in(L^{G}(\Omega))^{*}=L^{G^{*}}(\Omega)\). Combining this with the facts that \[u\in L^{q}\log L(\Omega)=L^{\varphi_{q}}(\Omega),\text{ and }|\nabla u|^{\frac{q}{p}}\in L^{q}\log L(\Omega)\ \ (\text{see Remark 2.14}),\] it leads to \[\frac{\varphi_{q}(|u|)}{|u|}\in(L^{q}\log L(\Omega))^{*},\text{ and }\frac{\varphi_{q}(|\nabla u|^{\frac{q}{p}})}{|\nabla u|^{\frac{q}{p}}}\in(L^{q}\log L(\Omega))^{*}.\] From this reasoning, we conclude that the operator \(\mathcal{N}_{F}\) maps \(\mathbb{W}_{0}\) into \(\mathbb{W}_{0}^{*}\) by Lemma 2.6. Next, let us introduce a new operator \(\mathcal{G}:\mathbb{W}_{0}\to\mathbb{W}_{0}^{*}\) defined by \[\mathcal{G}(u)=\mathcal{L}(u)-\mathcal{N}_{F}(u),\quad u\in\mathbb{W}_{0}, \tag{4.18}\] and it is clear that this operator maps bounded sets into bounded sets. At this stage, let us show that \(\mathcal{G}\) is pseudo-monotone; that is, \[u_{k}\rightharpoonup u\text{ in }\mathbb{W}_{0}\text{ \ and \ }\limsup_{k\to\infty}\langle\mathcal{G}(u_{k}),u_{k}-u\rangle\leq 0, \tag{4.19}\] imply \(u_{k}\to u\) in \(\mathbb{W}_{0}\). Assume that the sequence \((u_{k})\) satisfies (4.19). With Lemma 2.15 at hand, the embedding \(\mathbb{W}_{0}\hookrightarrow L^{q}\log L(\Omega)\) is compact, which leads to \(u_{k}\to u\) in \(L^{q}\log L(\Omega)\). On the other hand, by assumption (3.5)\({}_{1}\) and Hölder's inequality (2.3) in Lemma 2.6, we infer that \[\left|\int_{\Omega}F(x,u_{k},\nabla u_{k})(u_{k}-u)dx\right|\leq\int_{\Omega}g(x)|u_{k}-u|dx+\mu_{1}\int_{\Omega}|u_{k}|^{q-1}\log(e+|u_{k}|)|u_{k}-u|dx+\mu_{2}\int_{\Omega}|\nabla u_{k}|^{\frac{p}{q^{\prime}}}\log\left(e+|\nabla u_{k}|^{\frac{q}{p}}\right)|u_{k}-u|dx\leq T_{k}\|u_{k}-u\|_{L^{q}\log L(\Omega)}, \tag{4.20}\] where the bounded sequence \((T_{k})\) is given by \[T_{k}=\|g\|_{(L^{q}\log L(\Omega))^{*}}+\left\|\frac{\varphi_{q}(u_{k})}{|u_{k}|}\right\|_{(L^{q}\log L(\Omega))^{*}}+\left\|\frac{\varphi_{q}(|\nabla u_{k}|^{p/q})}{|\nabla u_{k}|^{p/q}}\right\|_{(L^{q}\log L(\Omega))^{*}}.\] Passing to the limit \(k\to\infty\) in (4.20) gives \[\lim_{k\to\infty}\int_{\Omega}F(x,u_{k},\nabla u_{k})(u_{k}-u)dx=0,\] and taking (4.19) into account, this ensures that \[\limsup_{k\to\infty}\langle\mathcal{L}(u_{k}),u_{k}-u\rangle=\limsup_{k\to\infty}\langle\mathcal{G}(u_{k}),u_{k}-u\rangle\leq 0.\] Invoking Lemma 4.2, we conclude that \(u_{k}\to u\) in \(\mathbb{W}_{0}\).
Hence, one gets that \(\mathcal{G}(u_{k})\to\mathcal{G}(u)\) in \(\mathbb{W}_{0}^{*}\) by the continuity of \(\mathcal{G}\), and therefore \(\mathcal{G}\) is pseudo-monotone. The last step is devoted to showing that \(\mathcal{G}\) is coercive, which means \[\lim_{\|u\|_{\mathbb{W}_{0}}\to\infty}\frac{\langle\mathcal{G}(u),u\rangle}{\|u\|_{\mathbb{W}_{0}}}=\infty. \tag{4.21}\] For every \(u\in\mathbb{W}_{0}\), let us write \(\langle\mathcal{G}(u),u\rangle\) as follows: \[\langle\mathcal{G}(u),u\rangle=|\Omega|\left(\fint_{\Omega}|\nabla u|^{p}+|\nabla u|^{p}\log(e+|\nabla u|)dx-\fint_{\Omega}F(x,u,\nabla u)udx\right). \tag{4.22}\] Combining assumption (3.5)\({}_{2}\) with inequality (3.3) gives \[\fint_{\Omega}F(x,u,\nabla u)udx\leq\fint_{\Omega}h(x)dx+\mu_{3}\fint_{\Omega}|u|^{p}dx+\mu_{4}\fint_{\Omega}|\nabla u|^{p}\log(e+|\nabla u|)dx\leq\|h\|_{1}+\frac{\mu_{3}}{\lambda_{1,p}}\fint_{\Omega}|\nabla u|^{p}dx+\mu_{4}\fint_{\Omega}|\nabla u|^{p}\log(e+|\nabla u|)dx. \tag{4.23}\] Then, substituting (4.23) into (4.22) leads to \[\langle\mathcal{G}(u),u\rangle\geq|\Omega|\left[\left(1-\frac{\mu_{3}}{\lambda_{1,p}}\right)\fint_{\Omega}|\nabla u|^{p}dx+(1-\mu_{4})\fint_{\Omega}|\nabla u|^{p}\log(e+|\nabla u|)dx-\|h\|_{1}\right]. \tag{4.24}\] On the other hand, it is obvious that \[\fint_{\Omega}|\nabla u|^{p}\log\left(e+\frac{|\nabla u|}{\|\nabla u\|_{p}}\right)dx\leq\fint_{\Omega}|\nabla u|^{p}\log(e+|\nabla u|)dx\] whenever \(\|\nabla u\|_{p}\geq 1\). In the other case, \(0<\|\nabla u\|_{p}<1\), thanks to (2.11) and (2.13), there holds \[\fint_{\Omega}|\nabla u|^{p}\log\left(e+\frac{|\nabla u|}{\|\nabla u\|_{p}}\right)dx\leq\fint_{\Omega}|\nabla u|^{p}\log(e+|\nabla u|)dx+\fint_{\Omega}|\nabla u|^{p}\log(\|\nabla u\|_{p}^{-1})dx\leq\fint_{\Omega}|\nabla u|^{p}\log(e+|\nabla u|)dx+\frac{1}{ep}. \tag{4.25}\] For this reason, inequality (4.25) holds whenever \(\|\nabla u\|_{p}>0\). Combining (4.24) and (4.25), one obtains that \[\langle\mathcal{G}(u),u\rangle\geq|\Omega|\left[\left(1-\frac{\mu_{3}}{\lambda_{1,p}}\right)\fint_{\Omega}|\nabla u|^{p}dx+(1-\mu_{4})\fint_{\Omega}|\nabla u|^{p}\log\left(e+\frac{|\nabla u|}{\|\nabla u\|_{p}}\right)dx-\frac{1-\mu_{4}}{ep}-\|h\|_{1}\right]\geq|\Omega|\left[\mu_{0}[\nabla u]_{\mathbb{L}^{p\log}(\Omega)}^{p}-\frac{1-\mu_{4}}{ep}-\|h\|_{1}\right]\geq|\Omega|\left[\mu_{0}\|u\|_{\mathbb{W}_{0}}^{p}-\frac{1-\mu_{4}}{ep}-\|h\|_{1}\right], \tag{4.26}\] where \(\mu_{0}\) is given by \[\mu_{0}=\min\left\{1-\frac{\mu_{3}}{\lambda_{1,p}};1-\mu_{4}\right\}.\] Assumption (3.4) guarantees that the constant \(\mu_{0}\) is positive. Since \(p>1\) and \(\mu_{0}>0\), we may conclude from (4.26) that (4.21) holds; this means that the operator \(\mathcal{G}\) is coercive. At this point, by the previous steps, all hypotheses of Theorem 4.1 hold true for \(\mathbb{W}=\mathbb{W}_{0}\) and the operator \(\mathcal{G}\) defined in (4.18); hence there exists at least one \(u\in\mathbb{W}_{0}\) such that \(\mathcal{G}(u)=0\). In conclusion, problem (1.1) admits at least one weak solution in \(\mathbb{W}_{0}\). The proof is now complete.

## Acknowledgement

This research is funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED), Grant Number: 101.02-2021.17.
2309.16513
On modeling airborne infection risk
Airborne infection risk analysis is usually performed for enclosed spaces where susceptible individuals are exposed to infectious airborne respiratory droplets by inhalation. It is usually based on exponential, dose-response models of which a widely used variant is the Wells-Riley (WR) model. We revisit this infection-risk estimate and extend it to the population level. We use an epidemiological model where the mode of pathogen transmission, either airborne or contact, is explicitly considered. We illustrate the link between epidemiological models and the WR model. We argue that airborne infection quanta are, up to an overall density, airborne infectious respiratory droplets modified by a parameter that depends on biological properties of the pathogen, physical properties of the droplet, and behavioural parameters of the individual. We calculate the time-dependent risk of being infected during the epidemic for two scenarios. We show how the epidemic infection risk depends on the viral latent period and the event time, the time at which infection occurs. The infection risk follows the dynamics of the infected population. As the latency period decreases, the infection risk increases. The longer a susceptible is present in the epidemic, the higher its risk of infection for equal exposure time to the mode of transmission.
Yannis Drossinos, Nikolaos I. Stilianakis
2023-09-28T15:19:39Z
http://arxiv.org/abs/2309.16513v2
# On modeling airborne infection risk

###### Abstract

Airborne infection risk analysis is usually performed for enclosed spaces where susceptible individuals are exposed to infectious airborne respiratory droplets by inhalation. It is usually based on exponential, dose-response models of which a widely used variant is the Wells-Riley (WR) model. We employ a population-based Susceptible-Exposed-Droplet-Infected-Recovered (SEDIR) model to revisit the infection-risk estimate at the population level during an epidemic. We demonstrate the link between epidemiological models and the WR model, including its Gammaitoni-Nucci (GN) generalization. This connection shows how infection quanta are related to the number of infectious airborne droplets. For long latent periods, the SEDIR model reduces to the GN model with parameters that depend on biological properties of the pathogen (size-dependent pathogen droplet concentration, infection probability of a deposited infectious droplet), physical droplet properties (lung-deposition probability), and individual behavioral properties (exposure time). In two scenarios we calculate the probability of infection during the epidemic. The WR and GN limits of the SEDIR model reproduce accurately the SEDIR-calculated infection risk.

infectious diseases, SARS-CoV-2 transmission, aerosol, airborne, infection risk, Wells-Riley infection risk model, Gammaitoni-Nucci infection risk model

## I Introduction

The determination of the risk of infection during an epidemic is an important quantitative indicator that, among others, influences decisions of public health authorities on intervention strategies and their implementation, including vaccine administration. It contributes, also, to individual decisions on accepting recommended social distancing, implementing proper wearing of face masks, and adhering to mobility restrictions. Estimates of the risk associated with airborne respiratory-pathogen infection have become numerous since the beginning of the coronavirus disease 2019 (COVID-19) pandemic. Most airborne infection-risk analyses during the COVID-19 pandemic concentrated on risk calculations in small, enclosed spaces within which susceptible individuals are exposed by inhalation to infectious airborne respiratory droplets for a brief period. For example, the probability of infection due to the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been estimated in numerous micro-environments, such as in an office as a function of number of occupants and their exposure time, in a pharmacy, a supermarket, a restaurant, a post office and a bank [1]; in a hospital room, conference room and auditorium [2]; in shared indoor space [3]; in public spaces like a shopping mall corridor [4] or small shops [5]; in a ski cabin [6]; in a university office [7]. The majority of these risk analyses were based on the exponential, dose-response Wells-Riley (WR) model or its variants; see, for example, the recent generalization to poly-pathogen aerosols and the validity of the Poisson-distribution assumption [8] or the use of a double Poisson model [9]. The Wells-Riley [10; 11; 12] model is a deterministic exposure model, based on the probabilistic airborne infection model proposed by Wells [13]. Wells introduced the quantum of airborne infection as a discrete entity of the infectious dose that would give, according to the Poisson distribution, a 63.21% probability of infection [8], or, in modern terminology, the Infectious Dose ID\({}_{63.21}\).
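The quantum definition can be made concrete with a short numerical check (our own illustration, not from the paper): under the Poisson dose-response assumption, inhaling an average of \(n\) quanta infects with probability \(P=1-\exp(-n)\), so \(n=1\) corresponds to ID\({}_{63.21}\).

```python
import math

# Poisson dose-response: P = 1 - exp(-n) for an inhaled average of n quanta.
for n in (0.5, 1.0, 2.0):
    print(f"n = {n}: P = {1.0 - math.exp(-n):.4f}")
# n = 1.0 gives P = 0.6321, i.e., the 63.21% infection probability of one quantum.
```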
Riley et al. [10], expanding on Riley (1974) [14] and using Wells' quantum of infection, introduced the average number of quanta inhaled during an individual's exposure to an airborne pathogen into an exponential dose-response model to obtain a model for the probability of airborne infection in an indoor environment. They assumed that the micro-environment is homogeneous, so that infection quanta are uniformly distributed (well-mixed approximation), and that both the quantum concentration and the ventilation conditions (outdoor air supply) were at steady state. The resulting steady-state model is commonly referred to as the Wells-Riley model. Moreover, they took \(I\), the number of infectors, to be constant during exposure, but not the number of susceptibles \(S\), assuming that the viral latent period, the time from infection to becoming infectious, is much longer than the exposure time, namely the time interval during which individuals are exposed to the pathogen. An important generalization of the WR model was proposed by Gammaitoni and Nucci [15] (GN). They removed the steady-state quantum-concentration assumption to generalize the airborne infection-risk model to time-dependent quanta concentrations within the confined environment. One of the characteristics of the WR model is that it uses input from aerosol dynamics, e.g., estimates of the generation rate of the quanta of infection and of their removal rate via, e.g., gravitational settling or indoor-air ventilation, to estimate viral transmissibility. Human behavior, however, is naively modeled by the lumped parameter of exposure time. The model, being an individual-level model and in contrast to compartmental epidemiological models, does not consider the total population \(N\). Instead, the enclosed-space volume \(V\) determines the spatial scale. This is required to render the exponent in the risk expression an intensive variable, that is, a density which is independent of the scale of the system. Infection-risk estimates for larger (including closed or semi-closed) populations and at longer, but still intermediate, spatial and temporal scales than those investigated by micro-environmental models are equally important. Envisioned intermediate spatial scales are those encountered in, e.g., hospitals, prisons, cruise and military ships, boarding schools, nursing homes, and military camps. Mesoscopic epidemiological models address these scales. The Susceptible-Droplet-Infected-Recovered (SDIR) model [4; 7] is one such model. It has two distinguishing features: it retains the structure of compartmental epidemiological models, and it incorporates explicitly the dynamics of the pathogen-carrying agent. In the case of SARS-CoV-2, where the pathogen-carrying agents are the infectious respiratory droplets, the SDIR model retains the necessary information on the dynamics of the infectious droplets, in addition to incorporating biological aspects of the virus and behavioral aspects of the individuals. Contrary to micro-environmental models, the SDIR model is a population-level model. Macroscopic models, on the other hand, address much larger populations and much longer temporal and spatial scales, for example country-wide and province scales [18; 19; 20] or regional scales [21]. At such scales, the models do not consider explicitly micro-environmental dynamics.
Instead, the intricate dynamics of the respiratory droplets and other micro-environmental dynamics are implicitly incorporated via effective transmission rates or parameters, via a procedure akin to coarse-grained descriptions of physical systems [4]. Noakes et al. [12] presented an early attempt to reconcile the WR expression with a standard Susceptible-Infected-Recovered (SIR) compartmental epidemiological model. Their derivation was reconsidered and amplified by Bazant and Bush [22], who included explicitly the exposed population compartment and considered both short and long latent periods, as we do in this work. We use an extended version of the SDIR model to revisit the derivation and to estimate what we shall refer to as the epidemic airborne infection risk, the infection risk during an epidemic. In doing so, we elucidate and establish firmly the connection between compartmental epidemiological models and micro-environmental risk models, like the Wells-Riley model and its Gammaitoni-Nucci generalization, and the relevance of respiratory droplet dynamics. One of the essential observations is that neither the GN nor the WR model considers the time-dependent changes of the infected population.

## II Infection probability in compartmental epidemiological models

The epidemic infection risk \(P(t_{0},\delta t;\langle\tau_{\rm exp}\rangle)\) is the probability of infection at time \(t_{0}\) from the beginning of an epidemic within a prediction interval \(\delta t\). Expressed in terms of the number of susceptible individuals \(S\), it is their relative change [12; 23] in the period \([t_{0},t_{0}+\delta t]\), \[P(t_{0},\delta t;\langle\tau_{\rm exp}\rangle)=\frac{S(t_{0};\langle\tau_{\rm exp}\rangle)-S(t_{0}+\delta t;\langle\tau_{\rm exp}\rangle)}{S(t_{0};\langle\tau_{\rm exp}\rangle)}. \tag{1}\] The decreasing time series \(S(t)\) depends on the average daily exposure time \(\langle\tau_{\rm exp}\rangle\). This dependence may be explicit, as in the SDIR model, or implicit via, e.g., the daily number of contacts between susceptible and infected individuals in the standard SIR model. The time from the start of the epidemic \(t_{0}\) determines the initial conditions: for example, in a Susceptible-Exposed-Infected-Recovered (SEIR) model it specifies the initial number of infected \(I(t_{0})=I_{0}\), of susceptibles \(S_{0}\), and of exposed \(E_{0}\). Additionally, in models that include respiratory-droplet dynamics, \(t_{0}\) specifies the initial number of airborne \(D_{0}\) and settled \(C_{0}\) droplets. The probability of infection may also be expressed in terms of Cases(\(t_{0}\)), the number of new infectious cases at time \(t_{0}\), since Cases(\(t_{0}+\delta t;\langle\tau_{\rm exp}\rangle)=S(t_{0};\langle\tau_{\rm exp}\rangle)-S(t_{0}+\delta t;\langle\tau_{\rm exp}\rangle)\). Bazant and Bush [22] used the secondary attack rate, new cases relative to \(S_{0}\), to obtain the infection probability in a form as reported here. Given Eq. (1), any epidemiological model that calculates \(S(t)\) can be used to calculate the probability of infection. It provides the connection between epidemiological compartmental models and infection-risk models.
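As a minimal sketch (the function name and the toy susceptible series below are our own, hypothetical constructs), Eq. (1) can be evaluated directly from any computed time series \(S(t)\):

```python
import numpy as np

def epidemic_infection_risk(S, t0, dt):
    """Relative drop of susceptibles over the prediction interval, Eq. (1)."""
    return (S[t0] - S[t0 + dt]) / S[t0]

# Toy, exponentially decaying susceptible series on a daily grid (illustrative):
S = 1000.0 * np.exp(-0.02 * np.arange(200))
print(epidemic_infection_risk(S, t0=40, dt=7))   # ~ 1 - exp(-0.02*7) = 0.131
```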
## III Droplet Models

### Susceptible-Droplet-Infected-Recovered (SDIR) model

The SDIR model of infectious disease transmission via infectious respiratory droplets [16] extends the standard SIR model by coupling the population dynamics of susceptibles and infected to the dynamics of a population of infectious respiratory droplets. It is a population epidemiological model in that the population under consideration is divided into compartments and individuals can move between the compartments. Its particularity is that it adds the population of airborne and settled infectious respiratory droplets. As the temporal and spatial scales associated with infectious droplets are relatively short (effective removal time scales of infectious droplets are less than a day), the SDIR model is a mesoscopic model [17]. It provides a natural extension of micro-environmental models in that it considers intermediate scales where the properties and the dynamics of the pathogen-carrying droplets are explicitly calculated and incorporated in the model. According to the initial formulation, respiratory droplets are partitioned into two compartments: airborne (\(D\)) and settled (\(C\)). As in the standard SIR model, individuals are divided into three population compartments: susceptibles \(S\), infected \(I\), and recovered \(R\). Infection does not occur via direct \(I\leftrightarrow S\) interaction: instead, this interaction is mediated by the infectious droplets, be they airborne or settled. The model allows for a distribution of droplets characterized by their diameter, be it pre- or post-evaporation [24; 25].

### Susceptible-Exposed-Droplet-Infected-Recovered (SEDIR) model

Like other respiratory viruses, the SARS-CoV-2 virus exhibits a latent period. During the latent period \(\tau_{\rm lat}\) exposed individuals are infected but not infectious. Accordingly, we generalize the SDIR model by adding an exposed population compartment \(E\). We thus introduce the latent period \(\tau_{\rm lat}=1/\sigma\), a time scale to be contrasted with the average time a susceptible individual is exposed to the pathogen, \(\langle\tau_{\rm exp}\rangle\), which, as we will argue, is embedded in the transmission rates \(\beta\). As previously mentioned, Bazant and Bush [22] also included the exposed population to connect the SEIR model to the WR model. They, as we do herein, also considered cases of short and long latent periods. The SEDIR model is defined by the following set of coupled ordinary differential equations (ODEs) \[\frac{dS}{dt} = -\sum_{i=1}^{i=i_{\rm max}}\Big{(}\frac{\beta_{i}^{d}}{N}D_{i}S+\frac{\beta_{i}^{c}}{N}C_{i}S\Big{)}, \tag{2a}\] \[\frac{dE}{dt} = -\frac{dS}{dt}-\sigma E,\] (2b) \[\frac{dI}{dt} = \sigma E-\mu_{I}I,\] (2c) \[\frac{dD_{i}}{dt} = \kappa_{i}^{d}I-\alpha_{i}^{d}D_{i},\quad\mbox{for}\quad i=1,2\ldots i_{\rm max},\] (2d) \[\frac{dC_{i}}{dt} = \kappa_{i}^{c}D_{i}-\alpha_{i}^{c}C_{i},\quad\mbox{for}\quad i=1,2\ldots i_{\rm max}. \tag{2e}\] We do not show the equation for the recovered compartment \(R\) since the total population \(S+E+I+R=N\) is constant, representing a closed population. A schematic diagram of the model is shown in Fig. 1. The number of infectious airborne droplets of post-evaporation diameter \(d_{i}^{\rm post}\) is denoted by \(D_{i}\) (number), and that of settled droplets by \(C_{i}\) (number); cf. the Supporting Information Appendix for a discussion of droplet evaporation and the associated droplet diameters. The number of droplet classes is \(i_{\rm max}\). The rate of transition from the exposed compartment \(E\) to the infected compartment \(I\) is denoted by \(\sigma\), whose inverse is the virus latent period \(\tau_{\rm lat}=1/\sigma\). The infection recovery rate, the rate at which \(I\to R\), is \(\mu_{I}\).
Superscripts denote airborne (\(d\)) and settled (\(c\)) droplets, and the subscript \(i\) denotes the droplet class specified by the post-evaporation diameter \(d_{i}^{\rm post}\). The transmission rate per infectious, airborne respiratory droplet that has been inhaled and deposited in the respiratory tract of a susceptible is denoted by \(\beta_{i}^{d}\) (inverse time), whereas that of an infectious settled droplet transferred to facial membranes is denoted by \(\beta_{i}^{c}\) (inverse time). The airborne droplet generation rate per infected individual (by normal oro-nasal activities, e.g., speaking, laughing, breathing, or by violent expiratory events, e.g., sneezing, coughing) is \(\kappa_{i}^{d}\) (number/time), and the corresponding airborne droplet removal rate is \(\alpha_{i}^{d}\) (number/time), the latter including droplet removal by ventilation (if present). Settled droplets may be generated either via direct generation by an infected individual and deposition on facial mucous tissues or via deposition of airborne droplets. Direct deposition would introduce an additional generation term in Eq. (2e) proportional to the number of infected individuals, similar to the generation term in the airborne-droplets equation, Eq. (2d). In this version of the model we neglect this mechanism. Instead, settled droplets are generated via deposition of airborne droplets, and specifically solely by gravitational settling. Hence the generation rate is \(\kappa_{i}^{c}=\theta_{i}(d_{i}^{\rm post})\) (number/time), with \(\theta_{i}\) the gravitational settling rate in still air. The corresponding settled droplet removal rate is \(\alpha_{i}^{c}\) (number/time). We present explicit expressions for the transmission \(\beta_{i}^{c,d}\) and removal \(\alpha_{i}^{c,d}\) rates, along with justifications for our choices, in the Appendix. We remark that the transmission and removal rates are _derived_ quantities. In addition, both transmission rates depend (linearly, as we argue in the Appendix) on the average exposure time \(\langle\tau_{\rm exp}\rangle\), i.e., \(\beta=\beta(\langle\tau_{\rm exp}\rangle)\). The SDIR basic reproduction number is [16; 17] \[R_{0}^{\rm SDIR}=\sum_{i=1}^{i=i_{\rm max}}\Big{(}\frac{\beta_{i}^{d}\kappa_{i}^{d}}{\alpha_{i}^{d}\mu_{I}}+\frac{\beta_{i}^{c}\kappa_{i}^{c}}{\alpha_{i}^{c}\mu_{I}}\Big{)}. \tag{3}\] Equation (3) also gives the SEDIR basic reproduction number; see, for example, Ref. [26].
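A minimal numerical sketch of the SEDIR system, Eqs. (2), restricted to a single airborne droplet class and no settled droplets, is shown below. All parameter values are illustrative placeholders of our own choosing, not the calibrated values of the Appendix.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 1000.0          # total population
beta_d = 2e-2       # transmission rate per airborne infectious droplet (1/day)
kappa_d = 500.0     # droplet generation rate per infected individual (1/day)
alpha_d = 24.0      # airborne droplet removal rate (1/day)
sigma = 1.0 / 6.0   # inverse latent period, tau_lat = 6 days
mu_I = 0.2          # infection recovery rate (1/day)

def sedir(t, y):
    S, E, I, D = y
    dS = -beta_d * D * S / N        # Eq. (2a), single droplet class
    dE = -dS - sigma * E            # Eq. (2b)
    dI = sigma * E - mu_I * I       # Eq. (2c)
    dD = kappa_d * I - alpha_d * D  # Eq. (2d)
    return [dS, dE, dI, dD]

t_eval = np.arange(0.0, 201.0, 1.0)
sol = solve_ivp(sedir, (0.0, 200.0), [N - 1.0, 0.0, 1.0, 0.0], t_eval=t_eval)
S = sol.y[0]
print("R0 =", beta_d * kappa_d / (alpha_d * mu_I))  # Eq. (3), airborne term only
print("risk:", (S[40] - S[47]) / S[40])             # Eq. (1), t0 = 40 d, dt = 7 d
```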
### Gammaitoni-Nucci (GN) limit

We limit the droplet classes in the SEDIR model to a single airborne droplet class \(D_{1}\), as the original GN model considered only one droplet diameter. It is easily shown, for example by integrating the linear ODE for the infected population, Eq. (2c), that if \(\sigma\delta t\ll 1\) (latent period much greater than the time infected individuals generate infectious droplets within the enclosed space) and \(\mu_{I}\delta t\ll 1\) (infectiousness period much greater than the prediction time \(\delta t\)), then \(dI/dt|_{t_{0}}=0\): the number of infected is constant at \(t_{0}\), and denoted by \(I_{0}\). If, in addition, we disregard the equation for the exposed population, Eq. (2b), which is irrelevant over the prediction time for the evolution of the infection (the number of \(E\) increases, but not that of \(I\)), the SEDIR model reduces to \[\frac{dS}{dt} = -\frac{\beta_{1}^{d}}{N}D_{1}S, \tag{4a}\] \[\frac{dD_{1}}{dt} = \kappa_{1}^{d}I_{0}-\alpha_{1}^{d}D_{1}. \tag{4b}\]

Figure 1: Schematic diagram of the Susceptible-Exposed-Droplet-Infected-Recovered (SEDIR) model (based on a figure of Ref. [17]). Droplet compartments are denoted by \(D_{i}\), airborne droplets, and \(C_{i}\), settled droplets. Superscripts \((d,c)\) denote (airborne, settled) droplets, the subscript \(i\) refers to droplets with post-evaporation diameter \(d_{i}^{\rm post}\). Infection transmission rates are denoted by \(\beta_{i}^{d,c}\), droplet generation rates by \(\kappa_{i}^{d,c}\), and removal rates by \(\alpha_{i}^{d,c}\). The latent period is \(\tau_{\rm lat}=1/\sigma\) and the infection recovery rate \(\mu_{I}\).

The system of Eqs. (4a,4b) can be compared to the GN equations (5) for the rate of change of the number of susceptibles and the total number of quanta of infection \(Q\) in the enclosed space, which, expressed in our notation, read \[\frac{dS}{dt} = -\frac{B}{V}QS, \tag{5a}\] \[\frac{dQ}{dt} = qI_{0}-\lambda_{\rm air}Q, \tag{5b}\] where \(q\) is the quantum generation rate per infectious individual (quanta/sec), see also [12], \(B\) is the pulmonary ventilation rate (also referred to as the breathing rate, m\({}^{3}\)/sec), and \(V\) is the space volume (m\({}^{3}\)). We explicitly denote the number of infected individuals during exposure as \(I_{0}\), Eq. (5b), to stress that their number is constant. The parameter \(\lambda_{\rm air}\) that determines the quantum-removal rate is the ventilation rate in air exchanges per hour (referred to as the disinfection rate, expressed as the number of effective, or equivalent, air exchanges, in the original reference [15]). Since the initial formulation of the model, the quanta removal rate has been expanded to include the rate of pathogen inactivation, droplet surface deposition, inactivation due to UV irradiation, filter penetration, mask efficiency, etc. (see also the droplet removal rates \(\alpha_{1}^{d}\) used in this work and summarized in the Appendix). The analytical solution of Eq. (5b) is \[Q(t)=\frac{qI_{0}}{\lambda_{\rm air}}+\left(Q_{0}-\frac{qI_{0}}{\lambda_{\rm air}}\right)\exp(-\lambda_{\rm air}\delta t), \tag{6}\] where \(Q_{0}\) is the initial (at time \(t=t_{0}\)) total number of infection quanta in the enclosed space. The comparison of Eqs. (4) and (5) provides insights on the differences and formal similarities of the SEDIR and GN models. Let the number of quanta be proportional to the number of infectious respiratory droplets, \(Q=\xi D_{1}\), and the transmission rate proportional to the breathing rate, \(\beta_{1}^{d}=B\tilde{\beta}_{1}^{d}\), as argued in the Appendix. Moreover, consider indoor-air ventilation the only droplet or quantum removal process, \(\alpha_{1}^{d}=\lambda_{\rm air}\). Their substitution into Eqs. (4), and a mapping of the resulting equations to Eqs. (5), determines the conversion factor \(\xi\) to be \[\xi=\frac{\beta_{1}^{d}}{B}\frac{V}{N}\equiv\tilde{\beta}_{1}^{d}\rho_{\rm scale}, \tag{7}\] where the last equation defines the scaling density \(\rho_{\rm scale}=V/N\). Hence, in this model infection quanta, up to an overall scaling factor, are infectious respiratory droplets modified by \(\tilde{\beta}_{1}^{d}\), a parameter that includes the probability of infection of a lung-deposited pathogen, the number of pathogens in a droplet, the lung-deposition probability, and the average exposure time, cf. the Appendix. The combination of these factors converts the infectious airborne droplets to infection quanta.
Their generation rate \(q\) is similarly related to the respiratory droplet generation rate via \(q=\kappa_{1}^{d}\xi\). The mapping of the two models also manifests the different inherent scales: the extensive variable, namely the variable that scales linearly with the size of the system, is the volume of the enclosed space in the GN model, whereas it becomes the total population \(N\) in the SEDIR model. The scaling factor \(\rho_{\rm scale}\) implements the transition from a microscopic model, which depends on the enclosed-space volume \(V\), to a mesoscopic epidemiological model, which depends on the total population \(N\). This scaling is reminiscent of the scaling proposed in Ref. [18] to transition from an ODE to a PDE (Partial Differential Equations) epidemiological model. Care should be exercised in interpreting \(\rho_{\rm scale}\): if \(V\) is taken to refer to a mesoscopic volume, then the GN model is essentially extended to much greater scales. If \(N\) is taken to be the number of occupants in an enclosed micro-environment, the SEDIR model is restricted to smaller scales; however, in that case it may not be considered a proper compartmental epidemiological model. These considerations have important repercussions on the choice of model parameters and prediction times in micro- or mesoscale models. The GN limit of the SEDIR-calculated infection risk may be calculated by solving Eqs. (4) to obtain \(S(t)\) and subsequently the infection risk according to Eq. (1). In fact, the droplet equation Eq. (4b) may be solved analytically to obtain an equation formally identical to Eq. (6). Obviously, given an analytical solution of Eq. (4b), the susceptibles equation Eq. (4a) may also be integrated. In the numerical simulations we used the analytical solutions.
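A sketch of this GN-limit calculation is given below (the helper names and parameter values are our own illustrative assumptions, reusing the hypothetical constants from the SEDIR sketch above): the droplet equation (4b) is solved in closed form, the analogue of Eq. (6), and its exact time integral is inserted into Eq. (4a) to obtain \(S(t)\).

```python
import numpy as np

N, S0, I0, D0 = 1000.0, 999.0, 1.0, 0.0
beta_d, kappa_d, alpha_d = 2e-2, 500.0, 24.0
Dss = kappa_d * I0 / alpha_d                 # steady-state droplet level

def D1(t):
    """Analogue of Eq. (6): airborne infectious droplets for constant I0."""
    return Dss + (D0 - Dss) * np.exp(-alpha_d * t)

def S_GN(t):
    """Integrate Eq. (4a) exactly using the closed-form D1(t)."""
    int_D1 = Dss * t + (D0 - Dss) * (1.0 - np.exp(-alpha_d * t)) / alpha_d
    return S0 * np.exp(-beta_d * int_D1 / N)

dt = 7.0                                     # prediction interval (days)
print("GN infection risk:", 1.0 - S_GN(dt) / S0)   # Eq. (1) in the GN limit
```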
### Wells-Riley (WR) limit

Reference [27] considered analytically the very common limit where the duration of the infectiousness of an infected individual, \(T_{I}=1/\mu_{I}\), is significantly longer than the lifespan of the airborne pathogen, \(T_{p}=1/\alpha_{1}^{d}\), i.e., when \(\rho_{1}\equiv\mu_{I}/\alpha_{1}^{d}=T_{p}/T_{I}\ll 1\). For appropriately chosen non-dimensional variables [27] the quasi steady-state limit is defined as \(\rho_{1}d\tilde{D}_{1}/d\tilde{t}=0\), which implies \(\tilde{D}_{1,{\rm qss}}=\tilde{I}\), or in terms of the original variables \(D_{1,{\rm qss}}=(\kappa_{1}^{d}/\alpha_{1}^{d})I\). Note that the quasi steady-state condition does _not_ imply that the number of infected individuals is constant, \(dI/dt|_{\rm qss}\neq 0\); that is, \(I_{\rm qss}\) is _time dependent_. The substitution of the quasi steady-state \((I,D_{1})\) relationship in the original equations, Eqs. (2), gives the quasi steady-state limit of the SEDIR model, \[\frac{dS_{\rm qss}}{dt} = -\frac{\beta_{1}^{d}\kappa_{1}^{d}}{\alpha_{1}^{d}N}I_{\rm qss}S_{\rm qss}, \tag{8a}\] \[\frac{dE_{\rm qss}}{dt} = \frac{\beta_{1}^{d}\kappa_{1}^{d}}{\alpha_{1}^{d}N}I_{\rm qss}S_{\rm qss}-\sigma E_{\rm qss},\] (8b) \[\frac{dI_{\rm qss}}{dt} = \sigma E_{\rm qss}-\mu_{I}I_{\rm qss},\] (8c) \[\mbox{with}\quad D_{\rm qss}(t) = \frac{\kappa_{1}^{d}}{\alpha_{1}^{d}}I_{\rm qss}(t). \tag{8d}\] In the quasi steady-state limit the dependence on the number of infectious droplets \(D_{1}(t)\) disappears. As before, in the previously considered double limit \(\sigma\delta t,\mu_{I}\delta t\ll 1\), we can neglect the equation for the exposed population, Eq. (8b), and take the number of infected individuals constant, \(I_{\rm qss}=I_{0}\). The model equations become \[\frac{dS_{\rm qss}}{dt} = -\frac{\beta_{1}^{d}\kappa_{1}^{d}}{\alpha_{1}^{d}N}I_{0}S_{\rm qss}, \tag{9a}\] \[I_{\rm qss} = I_{0},\quad\mbox{and}\quad D_{\rm qss}=\frac{\kappa_{1}^{d}}{\alpha_{1}^{d}}I_{0}. \tag{9b}\] The analytical solution of Eq. (9a) leads directly to the WR limit of the SEDIR model as follows \[P_{\rm WR}^{\rm SEDIR}(t_{0},\delta t;\langle\tau_{\rm exp}\rangle)=1-\exp\Big{(}-\frac{\beta_{1}^{d}\kappa_{1}^{d}}{\alpha_{1}^{d}N}I_{0}\delta t\Big{)}. \tag{10}\] For completeness, we also present the WR equation as usually written, \[P_{\rm WR}(\delta t)=1-\exp\Big{(}-\frac{Bq}{\lambda_{\rm air}V}I_{0}\delta t\Big{)}, \tag{11}\] where the variables were defined after Eq. (5). Hence, the WR equation is obtained from the quasi steady-state SEDIR equations in the triple limit of latent period and infectiousness period longer than the time scale of observation and \(\rho_{1}\ll 1\). As before, if we let \(\kappa_{1}^{d}=q/\xi\) and \(\alpha_{1}^{d}=\lambda_{\rm air}\) in the SEDIR expression \(P_{\rm WR}^{\rm SEDIR}\), Eq. (10), we obtain the WR infection probability \(P_{\rm WR}\), Eq. (11). Of course, the WR limit of the GN model may be easily obtained by setting \(dQ/dt=0\) in Eq. (5b). The steady-state quantum concentration then becomes \(Q_{ss}=qI_{0}/\lambda_{\rm air}\), leading via Eq. (5a) to the number of susceptibles \[S(t)=S_{0}\exp\Big{(}-\frac{Bq}{\lambda_{\rm air}V}I_{0}\delta t\Big{)}, \tag{12}\] and, thus, to the WR risk-model expression Eq. (11). However, the alternative derivation for the WR limit we presented in terms of the quasi steady-state solution of the SEDIR model specifies under which conditions this limit is valid, instead of arbitrarily setting \(dQ/dt=0\).
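Eq. (10) is a one-line computation; a sketch follows, assuming the same hypothetical parameter values used in the earlier sketches. It reproduces the GN value above closely because \(\alpha_{1}^{d}\delta t\gg 1\) for these values.

```python
import math

def P_WR_SEDIR(beta_d, kappa_d, alpha_d, N, I0, dt):
    """WR limit of the SEDIR model, Eq. (10)."""
    return 1.0 - math.exp(-beta_d * kappa_d * I0 * dt / (alpha_d * N))

print(P_WR_SEDIR(beta_d=2e-2, kappa_d=500.0, alpha_d=24.0,
                 N=1000.0, I0=1.0, dt=7.0))   # ~ the GN-limit value above
```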
## IV Numerical results

We performed numerical simulations of the SEDIR model, Eqs. (2), to investigate the effect of the prediction interval \(\delta t\) and the latent period \(\tau_{\rm lat}\) on the epidemic risk. We also investigate numerically and analytically the validity of the GN, Eqs. (4), and WR, Eq. (10), approximations to the SEDIR-model predictions of the calculated risk. For the simulations we used parameters related to the COVID-19 pandemic, e.g., individual behavior characteristics in addition to physico-chemical and biological properties of the SARS-CoV-2 virus, e.g., viral load. We note, though, that we do not attempt to reproduce a COVID-19 scenario: in our attempt to present the minimal model that reduces to the GN or WR models, we do not consider the asymptomatic stage of the disease. We used two airborne droplet classes of post-evaporation diameter \(d_{i}^{\rm post}=2.05,82.13\)\(\mu\)m (\(i=1,2\)). As generally accepted [28], the pathogen concentration was taken to be droplet-size dependent. We opted to limit the airborne droplet classes to two and not to simulate settled droplets to make the interpretation of our results easier: either restriction may be easily removed. The evaporation factor [17], \(d_{i}^{\rm post}=z_{\rm evap}d_{i}^{\rm pre}\), was set to \(z_{\rm evap}=0.40\). Airborne droplet generation rates were taken to correspond to speaking. A complete list of model parameters is presented in the Appendix. Individual behavior determines a number of model parameters. We considered the contact rate, the number of susceptible-infected individual encounters, to be \(c=18\) per day [29]. The duration of an encounter of a susceptible with an infectious droplet, i.e., the breathing time during a \(S\leftrightarrow I\) encounter, was taken to depend on the droplet size: \(\tau_{d_{1}}=25\) min and \(\tau_{d_{2}}=1\) min. Thus, the average exposure time of a single susceptible is \(c\times(\tau_{d_{1}}+\tau_{d_{2}})=7.8\) hours per day. Figure 2 summarizes the main results of four simulations to determine the epidemic airborne infection risk. We used two latent periods, \(\tau_{\rm lat}=0.1\) days (short) and \(\tau_{\rm lat}=6.0\) days (long), along with a short (\(\delta t=1.0\) day) and a long (\(\delta t=7\) days) prediction interval. The left panel shows the calculated infection probabilities for each scenario. Two groups of curves may be identified: for the short latent period the infection probability peaks at about \(t_{\rm peak}\approx 43\) days, whereas for the long latent period the peak occurs at \(t_{\rm peak}\approx 96\) days. Within each group of curves, the infection risk increases with increasing prediction interval, as would have been expected. The qualitative behavior of the infection risk may be understood by considering the dynamics of the epidemic, described by the time-dependent numbers of \(S,E,I\) and \(R\) shown in the right panel of Fig. 2. The four curves on the left (filled symbols) correspond to the short latent period, whereas those on the right (no symbols) to the long latent period. We also present the maximum number of infected individuals for each epidemic. We note that, as expected, the infection risk follows the time-dependent behavior of the infected individuals. For the short-latent-period epidemic, the number of exposed individuals is very small, not discernible on the figure, whereas for the long-latent-period epidemic the number of exposed individuals is comparable to the number of infected. In fact, before the \(I\) maximum, \(E>I\), whereas afterwards \(I>E\). Even though not discernible, the number of exposed individuals \(E\) peaks earlier than the number of infected \(I\).

Figure 2: Left: Epidemic infection probability according to the SEDIR model. Curves were calculated for two prediction intervals (\(\delta t=1,7\) days) and two latent periods (\(\tau_{\rm lat}=0.1,6\) days). Two airborne-droplet classes were considered (\(d_{i}^{\rm post}=2.05,82.13\)\(\mu\)m, \(i=1,2\)), susceptible-infectious droplet encounters per day were taken to be \(c=18\), and the exposure time for each \(S\leftrightarrow D_{i}\) (\(i=1,2\)) encounter was \(\tau_{d_{1}}=25\) min and \(\tau_{d_{2}}=1\) min, leading to a total daily susceptible-infectious droplet average exposure time of \(\langle\tau_{\rm exp}\rangle=7.8\) hours (per day). The ventilation rate was taken to be \(\lambda_{\rm air}=0.2\) air exchanges per hour, a typical value for an Italian building [1]. Total population \(N=1000\). Right: Corresponding dynamics of the two epidemics. The left curves (filled symbols) correspond to the short latent period, \(\tau_{\rm lat}=0.10\) days, with \(I\) peaking at \(t\approx 43\) days, and no discernible exposed population. The right curves (lines, no symbols) show the epidemic for the long latent period, \(\tau_{\rm lat}=6\) days, with \(I\) peaking at \(t\approx 96\) days, and an appreciable exposed population.

The validity of the GN and WR approximations is investigated numerically in Fig. 3. Four groups of curves are shown, each corresponding to the ordered pair \((\delta t,\tau_{\rm lat})\). For each pair choice, we plot the SEDIR infection risk \(P^{\rm SEDIR}\) as determined via the numerical solutions of Eqs. (2) (filled blue diamonds), \(P^{\rm SEDIR}_{\rm GN}\) as described in Section III.3 (square, unfilled symbols), and \(P^{\rm SEDIR}_{\rm WR}\) as calculated via Eq. (10) (cross, continuous line). Two observations are in order. For the epidemics considered, the GN and WR limits are identical. Whether the two limits would differ depends on the airborne droplet removal rate \(\alpha_{1}^{d}\) (and hence on the dimensionless parameter \(\rho_{1}=\mu_{I}/\alpha_{1}^{d}\)). We restrict the analysis to a single airborne droplet class, for simplicity: the arguments are easily generalized.
The importance of the removal rate is apparent from the analytical solution of the droplet equation Eq. (4b), which, as noted before, is formally identical to that of the quantum concentration, Eq. (6): its time-dependent part, which determines the difference between the steady-state and non-steady-state models, vanishes as \(\alpha_{1}^{d}\delta t\gg 1\), a condition satisfied for all cases considered. The same observation holds for the GN-WR comparison, whereby the time-dependent part of the quantum concentration in Eq. (6) vanishes as the droplet removal term tends to infinity. If ventilation is the dominant aerosol removal process, for \(\lambda_{\rm air}\delta t\gg 1\) the two models become identical. Hence, for high ventilation rates the difference between the steady-state and non-steady-state quantum concentration models decreases or even vanishes. The opposite limit \(\alpha_{1}^{d}\delta t\ll 1\) is a bit more subtle, as it depends, additionally, on the number of infectious droplets \(D_{1}(t_{0})\) or infection quanta \(Q_{0}\) at the beginning of the infection-risk calculation. The other observation is that for the short prediction interval \(\delta t=1\) day all three calculations predict the same infection risk, irrespective of the viral latent period. The calculations start to differ for the long prediction interval, the difference decreasing as the latent period increases, i.e., as \(\sigma=1/\tau_{\rm lat}\to 0\). This is expected, as both the GN and WR models assume that the latent period is much longer than the exposure time. Hence, the number of susceptibles may decrease due to infection, but the number of infected remains constant since infected susceptibles move to the infected/non-infectious compartment of the exposed. If we analyze the time-dependence of the infection risk, we note that as the number of infected \(I\) increases, i.e., before the maximum of the number of infected, \(P^{\rm SEDIR}>P^{\rm SEDIR}_{\rm GN,WR}\), whereas the opposite holds when \(dI/dt<0\). Again this is expected, as the GN model considers that \(I=I_{0}\) is constant, whereas the SEDIR does not: when \(I\) increases, more infectious droplets are generated than predicted for a constant \(I_{0}\), leading to a larger infection probability, and vice versa. In an attempt to investigate the GN-WR difference we considered an extreme case of the SEDIR model by shortening the model time scales from days to hours. The calculated infection risk, not shown, behaved as described in the previous paragraphs, confirming the initial estimate that even for short prediction times, e.g., \(\delta t=12\) hours, the condition \(\alpha_{1}^{d}\delta t\gg 1\) remained valid.
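The condition can be verified directly with the parameter values used here; a short sketch, with the removal rate assembled from Table 1 (see the Appendix sketches) and illustrative values in the WR expression:

```python
import numpy as np

# Check of the condition alpha_1 * dt >> 1 under which the GN and WR
# limits coincide; alpha_1 is assembled from Table 1 via Eq. (A5).
alpha_1 = 29.5                                # per day
for dt in [0.5, 1.0, 7.0]:                    # 12 h, 1 day, 7 days
    print(f"dt = {dt:3.1f} d -> alpha_1*dt = {alpha_1 * dt:6.1f}")

# WR limit of the SEDIR model, Eq. (10), for illustrative values:
beta_1, kappa_1, N, I0, dt = 1.06e-4, 76_773.0, 1000, 1, 7.0
print(1 - np.exp(-beta_1 * kappa_1 / (alpha_1 * N) * I0 * dt))
```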
We note that the GN and WR models may, however, differ in micro-environmental simulations if the necessary conditions, e.g., \(\alpha_{1}^{d}\delta t\ll 1\), are satisfied. Figure 3: Epidemic risk calculated via the SEDIR model and its GN and WR limits, Eq. (10). The four ordered pairs associated with each graph triplet are specified by \((\delta t,\tau_{\rm lat})\). For all the simulations the GN and WR limits were identical: they differed from the SEDIR model predictions only for the long prediction interval (\(\delta t=7\) days). See the main text for the explanation. ## V Discussion We presented a model to calculate epidemic infection risk due to infectious respiratory pathogens, be they airborne or settled. The model, which is based on the compartmental epidemiological SEDIR model, may be considered an epidemiological generalization of the GN [15] model of airborne infection in an enclosed space. It is valid for an arbitrary virus latent period, in contrast to the GN model, which assumes a long latent period and hence neglects changes in the number of infected individuals during exposure to the pathogen. In addition, the SEDIR, being an epidemiological model, provides a connection between SIR-like epidemiological models and infection-risk models based on Wells-Riley (WR)-like models [10]. We emphasize the importance of system scales, since both the GN and WR models, as initially conceived, are individual-level models that describe infection risk in enclosed micro-environments. SIR-like models are population-level models: in particular, the SEDIR model is a mesoscopic model that includes explicitly the dynamics of the pathogen-carrying agents, i.e., the infectious respiratory droplets in the case of airborne infections such as COVID-19 or influenza. We argued that for long virus latent periods the SEDIR reduces to a set of equations that are reminiscent of the GN equations. Their mapping identified infection quanta as infectious respiratory particles modified by a scaling density and, more importantly, by a combination of parameters that include biological properties of the pathogen (size-dependent pathogen droplet concentration, probability of infection due to a deposited infectious droplet), physical properties (lung-deposition probability), and behavioral properties (exposure time). We noted that the SEDIR model, being an epidemiological model, depends on the total population \(N\), whereas both the WR and GN models consider much smaller scales in terms of the enclosed volume \(V\). We identified the scaling density as the factor to transition from one class of models to the other, and we discussed how this density allows an extension of these micro-environmental models. We performed numerical simulations of two scenarios for an epidemic specified by a short and a long virus latent period and driven by two classes of airborne infectious droplets. Model parameters were based on properties of the SARS-CoV-2 virus, even though we do not claim to model specifically the SARS-CoV-2 transmission dynamics with all of its characteristics. However, the SARS-CoV-2 transmission dynamics reflect those of a range of airborne infections such as influenza. We used the dynamics of the epidemic, specifically the time series of the number of susceptible individuals, to calculate the probability of infection during the epidemic, what we referred to as the epidemic airborne risk.
We found that the WR and GN limits of the SEDIR model accurately reproduced the infection risk as determined from the numerical solution of the model. Differences arose for large prediction intervals (\(\delta t=7\) days), increasing with decreasing virus latent period. We remark that the WR and GN limiting forms of the epidemic infection risk were almost identical for all our simulations. This is a consequence of the droplet removal rate \(\alpha_{1}^{d}\) being much greater than the inverse prediction interval, i.e., \(\alpha_{1}^{d}\delta t\gg 1\). In fact, this is a general result suggesting that with increasing droplet removal rates, for example via an increased ventilation rate, the WR-calculated airborne infection risk with a steady-state quantum concentration provides an excellent approximation to the GN-calculated infection risk with non-steady-state quantum concentrations. The comparative analysis presented here bridges the gap and provides the missing links in the mathematical relationship between individual infection-risk models and associated population-based models. The corresponding insights allow for a more nuanced epidemiological interpretation of infectious disease outbreaks. ###### Acknowledgements. YD would like to thank the PEACoG (Physical Epidemiology Amherst Covid Group) members for their many insightful discussions and helpful comments over the last years. We thank Marguerite Robinson for discussions during the initial stages of our work, and Vladimir M. Veliov for comments on the connection between the SEDIR and WR models. The views expressed are purely those of the authors and may not in any circumstances be regarded as stating an official position of the European Commission. ## Data Availability Statement All data supporting the findings of this study are available within the paper and its Supplementary Information. ## Author Contributions Y.D. designed research, performed research, analyzed the results and wrote the article; N.I.S. designed research, analyzed the results, and wrote the article. ## Competing interest The authors declare no competing interests. ## Funding The research was performed with institutional support only. ## References * [1] G. Buonanno, L. Stabile, and L. Morawska, Environ. Int. **141**, 105794 (2020), [https://doi.org/10.1016/j.envint.2020.105794](https://doi.org/10.1016/j.envint.2020.105794). * [2] G. Buonanno, L. Morawska, and L. Stabile, Environ. Int. **145**, 106112 (2020), [https://doi.org/10.1016/j.envint.2020.106112](https://doi.org/10.1016/j.envint.2020.106112). * [3] Z. Peng, A. Pineda Rojas, E. Kropff, W. Bahnfleth, G. Buonanno, S. Dancer, J. Kurnitski, M. Li, M. Loomans, L. Marr, L. Morawska, C. Nazaroff, C. Noakes, X. Querol, C. Sekhar, R. Tellier, L. Greenhagh, L. Bourouiba, A. Boerstra, J. Tang, S. Miller, and J. Jimenez, Environ. Sci. Technol. **56**, 1125 (2022), [https://doi.org/10.1021/acs.est.1c06531](https://doi.org/10.1021/acs.est.1c06531). * [4] F. Poydenot, I. Abdourahamane, E. Caplain, S. Der, J. Haiech, A. Jallon, I. Khoutami, A. Loucif, E. Marinov, and B. Andreotti, PNAS Nexus **1**, 1 (2022), [https://doi.org/10.1093/pnasnexus/pgac223](https://doi.org/10.1093/pnasnexus/pgac223). * [5] B. Jones, P. Sharpe, C. Iddon, E. Abigail Hathway, C. Noakes, and S. Fitzgerald, Build. Environ. **191**, 107617 (2021), [https://doi.org/10.1016/j.buildenv.2021.107617](https://doi.org/10.1016/j.buildenv.2021.107617). * [6] A. Henriques, N. Mounet, L.
Aleixo, P. Elson, J. Devine, G. Azzopardi, M. Andreini, M. Rognlien, N. Tarocco, and N. Tang, Interface Focus **12**, 20210076 (2022), [https://doi.org/10.1098/rsfs.2021.0076](https://doi.org/10.1098/rsfs.2021.0076). * [7] H. Tang, Z. Pan, and C. Li, Build. Environ. **217**, 109067 (2022), [https://doi.org/10.1016/j.buildenv.2022.109067](https://doi.org/10.1016/j.buildenv.2022.109067). * [8] F. Nordsiek, E. Bodenschatz, and G. Bagheri, PLoS ONE **16**, e0248004 (2021), [https://doi.org/10.1371/journal.pone.0248004](https://doi.org/10.1371/journal.pone.0248004). * [9] S. Anand, J. Krishan, B. Sreekanth, and Y. Mayya, Sci. Reports **12**, 14164 (2022), [https://doi.org/10.1038/s41598-022-17693-z](https://doi.org/10.1038/s41598-022-17693-z). * [10] E. C. Riley, G. Murphy, and R. Riley, Am. J. Epidemiol. **107**, 421 (1978), [https://doi.org/10.1093/oxfordjournals.aje.a112560](https://doi.org/10.1093/oxfordjournals.aje.a112560). * [11] S. N. Rudnick and D. Milton, Indoor Air **13**, 237 (2003), [https://doi.org/10.1034/j.1600-0668.2003.00189.x](https://doi.org/10.1034/j.1600-0668.2003.00189.x). * [12] C. J. Noakes, C. Beggs, P. Sleigh, and K. Kerr, Epidemiol. Infect. **134**, 1082 (2006), [https://doi.org/10.1017/S0950268806005875](https://doi.org/10.1017/S0950268806005875). * [13] W. Wells, _Airborne contagion and air hygiene: An ecological study of droplet infections_ (Harvard University Press, Cambridge, MA, 1955). * [14] R. Riley, Am. J. Med. **57**, 466 (1974). * [15] L. Gammaitoni and M. Nucci, Emerg. Infect. Dis. **3**, 335-342. (1997), [https://doi.org/10.3201/eid0303.970310](https://doi.org/10.3201/eid0303.970310). * [16] N. Stilianakis and Y. Drossinos, J. R. Soc. Interface **7**, 1355 (2010), [https://doi.org/10.1098/rsif.2010.0026](https://doi.org/10.1098/rsif.2010.0026). * [17] Y. Drossinos, J. Reid, W. Hugentobler, and N. Stilianakis, Aerosol Sci. Technol. **56**, 777 (2022), [https://doi.org/10.1080/02786826.2022.2120729](https://doi.org/10.1080/02786826.2022.2120729). * [18] P. G. Kevrekidis, J. Cuevas-Maraver, Y. Drossinos, Z. Rapti, and G. Kevrekidis, Phys. Rev. E **104**, 024412 (2021), [https://doi.org/10.1103/PhysRevE.104.024412](https://doi.org/10.1103/PhysRevE.104.024412). * [19] J. Cuevas-Maraver, P. Kevrekidis, Q. Chen, G. Kevrekidis, V. Villalobos-Daniel, Z. Rapti, and Y. Drossinos, Math. Biosci. **336**, 108590 (2021), [https://doi.org/10.1016/j.mbs.2021.108590](https://doi.org/10.1016/j.mbs.2021.108590). * [20] I. Kioutsioukis and N. Stilianakis, Int. J. Environ. Res. Public Health **18**, 1660 (2021), [https://doi.org/10.3390/ijerph18041660](https://doi.org/10.3390/ijerph18041660). * [21] Z. Rapti, J. Cuevas-Maraver, E. Kontou, S. Liu, Y. Drossinos, P. Kevrekidis, M. Barmann, Q.-Y. Chen, and G. Kevrekidis, Bull. Math. Biol. **85**, 54 (2023), [https://doi.org/10.1007/s11538-023-01152-5](https://doi.org/10.1007/s11538-023-01152-5). * [22] M. Bazant and J. Bush, Proc. Natl. Acad. Sci. U.S.A. **118**, e2018995118 (2021), [https://doi.org/10.1073/pnas2018995118](https://doi.org/10.1073/pnas2018995118). * [23] C. Beggs, C. Noakes, P. Sleigh, L. Fletcher, and K. Siddiqi, J. Tuberc. Lung Dis. **7**, 1015 (2003). * [24] Y. Drossinos and N. Stilianakis, Aerosol Sci. Technol. **54**, 639 (2020), [https://doi.org/10.1080/02786826.2020.1751055](https://doi.org/10.1080/02786826.2020.1751055). * [25] Y. Drossinos, T. Weber, and N. Stilianakis, Health Sci. Rep. **4**, e275 (2022), [https://doi.org/10.1002/hsr2.275](https://doi.org/10.1002/hsr2.275). * [26] P. 
van den Driessche, Infect. Dis. Model. **2**, 288-303 (2017), [http://doi.org/10.1016/j.idm.2017.06.002](http://doi.org/10.1016/j.idm.2017.06.002). * [27] M. Robinson, Y. Drossinos, and N. Stilianakis, Epidemics **5**, 111 (2013). * [28] J. Santarpia, V. Herrera, D. Rivera, S. Ratnesar-Shumate, S. Reid, D. Ackerman, P. Denton, J. Martens, Y. Fang, N. Conoa, M. Callahan, J. Lawler, D. Brett-Major, and J. Lowe, J. Expo. Sci. Environ. Epidemiol. **32**, 706 (2022), [https://doi.org/10.1038/s41370-021-00376-8](https://doi.org/10.1038/s41370-021-00376-8). * [29] V. Sypsa, S. Roussos, D. Paraskevis, T. Lytras, S. Tsiodras, and A. Hatzakis, Emerging Infect. Dis. **27**, 452 (2021), [https://doi.org/10.3201/eid2702.203412](https://doi.org/10.3201/eid2702.203412). ## Appendix A Supporting Information ### Susceptible-Exposed-Droplet-Infected-Recovered (SEDIR) model parameters The droplet population compartments \(D_{i}\), number of airborne droplets, and \(C_{i}\), number of settled droplets, are identified by the droplet diameter. Respiratory droplets are generated in the respiratory tract under conditions of 100% relative humidity and approximately \(37^{\circ}\)C. Upon expulsion, they equilibrate quickly to the local temperature and relative humidity conditions by water evaporation. As evaporation is a molecular process, droplet shrinking occurs very rapidly, see, for example, Refs. [1, 2, 3], and the droplet diameter after equilibration is the droplet diameter most often experimentally accessible. We refer to the droplet diameter at generation as the pre-evaporation diameter, \(d_{i}^{\rm pre}\), and that after equilibration as the post-evaporation diameter, \(d_{i}^{\rm post}\). Their ratio defines the evaporation factor [4] \(\zeta_{\rm evap}\) \[d_{i}^{\rm post}=\zeta_{\rm evap}d_{i}^{\rm pre}. \tag{A1}\] The pre-evaporation diameter, via \(\rho_{p}\), the pathogen concentration at the location of droplet generation, e.g., the oral region, determines the number of pathogens \(N_{\rm path}^{(i)}\) within a \(d_{i}^{\rm pre}\) droplet, \[N_{\rm path}^{(i)}=\rho_{p}(d_{i}^{\rm pre})\times\frac{\pi}{6}\big{(}d_{i}^{\rm pre}\big{)}^{3}=\rho_{p}(d_{i}^{\rm pre})\times\frac{\pi}{6}\big{(}d_{i}^{\rm post}/\zeta_{\rm evap}\big{)}^{3}. \tag{A2}\] The post-evaporation diameter determines the physical properties of the droplet, like the removal rate \(\lambda_{\rm dep}\), via gravitational settling or other surface-deposition processes, and droplet transport processes. We also consider that it determines their lung-deposition probability \(q_{d_{i}}\). These observations confirm the importance of the evaporation factor \(\zeta_{\rm evap}\), a factor that depends strongly on the ambient relative humidity. Not only does it determine \(N_{\rm path}^{(i)}\) and the droplet deposition and transport properties, it also influences viral infectivity, and eventually the viral inactivation rate \(\mu_{d,c}\), in that changes in the droplet diameter lead to changes of the concentration of the within-droplet species [4]. These concentration changes may have important consequences since, for example, increased concentrations of salts, proteins, organics, and acids may damage the pathogen and modify its infectivity [5, 6].
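For illustration, Eq. (A2) can be evaluated for the small droplet class with the Table 1 values; a minimal sketch:

```python
import numpy as np

# Pathogens per droplet, Eq. (A2), for the d_1 class: Table 1 gives a
# pathogen concentration of 7e7 viral copies/cm^3 at generation, a
# post-evaporation diameter of 2.05 um, and zeta_evap = 0.40.
rho_p = 7.0e7                   # viral copies per cm^3
d_post = 2.05e-4                # cm (2.05 um)
zeta_evap = 0.40
d_pre = d_post / zeta_evap      # pre-evaporation diameter, 5.125 um
n_path = rho_p * np.pi / 6 * d_pre**3
print(f"N_path ~ {n_path:.1e} pathogens per droplet")   # ~ 5e-3
```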
### Transmission rates The infection transmission rates depend on numerous parameters that may be categorized as biological, behavioral, or physical. In Ref. [7] we showed that the transmission rate associated with a \(d_{i}^{\rm post}\) droplet, be it airborne \(\beta_{i}^{d}\) or settled \(\beta_{i}^{c}\), may be expressed as \[\beta_{i}^{d} = c\tau_{d_{i}}\times\frac{B}{V_{cl}}q_{d_{i}}\times p_{d}\times\rho_{p}^{(i)}(d_{i})\times\frac{\pi}{6}\big{(}d_{i}^{\rm post}/\zeta_{\rm evap}\big{)}^{3}\times\epsilon_{i}^{d},\quad i=1,\ldots,i_{\rm max},\quad\mbox{airborne droplets}, \tag{A3a}\] \[\beta_{i}^{c} = c\tau_{c_{i}}\times\eta_{c}q_{c_{i}}\times p_{c}\times\rho_{p}^{(i)}(d_{i})\times\frac{\pi}{6}\big{(}d_{i}^{\rm post}/\zeta_{\rm evap}\big{)}^{3}\times\epsilon_{i}^{c},\quad i=1,\ldots,i_{\rm max},\quad\mbox{settled droplets}, \tag{A3b}\] where \(i_{\rm max}\) is the total number of droplet compartments as specified by their post-evaporation diameter. In the main text, we also argued that the breathing rate \(B\) (m\({}^{3}\)/day) may be factored out in Eq. (A3a) to define \[\tilde{\beta}_{i}^{d}\equiv\frac{\beta_{i}^{d}}{B}=c\tau_{d_{i}}\times\frac{1}{V_{cl}}q_{d_{i}}\times p_{d}\times\rho_{p}^{(i)}(d_{i})\times\frac{\pi}{6}\big{(}d_{i}^{\rm post}/\zeta_{\rm evap}\big{)}^{3}\times\epsilon_{i}^{d}, \tag{A4}\] a parameter that converts infectious respiratory droplets to infection quanta. The parameters in Eqs. (A3) that depend on biological properties are: the pathogen concentration \(\rho_{p}^{(i)}\) at the generation location of droplet \(d_{i}^{\rm pre}\) (number per volume), which we take to be droplet-size dependent; the probability of infection \(p_{d}\) due to a lung-deposited airborne droplet, per pathogen (dimensionless); and the probability of infection \(p_{c}\) due to a settled droplet that has been transferred from a surface to a susceptible individual's facial membranes, per pathogen. The breathing rate \(B\) may also be considered a biological parameter, but we prefer to consider it a physical parameter (see Table 1). Lastly, the infection recovery rate \(\mu_{I}\) (number per day), not present in Eqs. (A3), is also a biologically determined parameter. We consider the lung-deposition probability \(q_{d_{i}}\) of a \(d_{i}^{\rm post}\) droplet to be a physically determined parameter. The characteristic personal-cloud volume, the volume surrounding an individual, is denoted by \(V_{cl}\). Recently, Xenakis (2023) [8] referred to the personal-cloud volume as the "breathing zone volume, i.e., the air volume surrounding a susceptible occupant and determining their epidemiological status". The transmission-rate parameters that depend on an individual's behavior include the individual-infectious person average contact rate \(c\) (number per day), and the transfer rate of settled droplets to facial mucus membranes \(\eta_{c}\) (number per day). During each infectious-susceptible encounter, the susceptible individual is exposed to airborne infectious droplets for a droplet-dependent breathing time \(\tau_{d_{i}}\) (days), and to settled infectious droplets for the duration of a hands-face exposure time \(\tau_{c_{i}}\) (days). The combination of these average exposure times per contact leads to an average total exposure time to infectious droplets per day of \(\langle\tau_{\rm exp}\rangle=c\times\sum_{i=1}^{i_{\rm max}}(\tau_{d_{i}}+\tau_{c_{i}})\). The parameters \(\epsilon_{i}^{d,c}\) include other effects that could modify the transmission rates and were not initially considered in Ref. [7]. For example, the filtration efficiency of personal protective equipment or face masks is an important factor that should be included in \(\epsilon_{i}^{d}\).
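Equation (A3a) can likewise be assembled from the Table 1 values; a sketch assuming \(\epsilon_{1}^{d}=1\), for which no value is tabulated:

```python
import numpy as np

# Airborne transmission rate of the d_1 class, Eq. (A3a), and the
# droplets-to-quanta conversion factor of Eq. (A4); epsilon_1^d = 1
# is an assumption.
c, tau_d1 = 18.0, 25 / (24 * 60)     # contacts/day, breathing time (days)
B, V_cl = 12.0, 8.0                  # m^3/day, m^3
q_d1, p_d = 0.88, 0.052              # deposition and infection probabilities
rho_p, d_post, zeta = 7.0e7, 2.05e-4, 0.40   # #/cm^3, cm, evaporation factor

n_path = rho_p * np.pi / 6 * (d_post / zeta) ** 3    # Eq. (A2)
beta_1 = c * tau_d1 * (B / V_cl) * q_d1 * p_d * n_path
beta_tilde_1 = beta_1 / B                            # Eq. (A4)
print(f"beta_1 ~ {beta_1:.2e} per droplet per day")  # ~1.1e-4
```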
### Removal rates The droplet removal rates are _effective_ removal rates of infectious droplets in that they include virus inactivation in addition to more traditional removal rates like surface deposition or removal induced by indoor air ventilation. The removal rates of airborne \(\alpha_{i}^{d}\) and settled \(\alpha_{i}^{c}\) droplets of post-evaporation diameter \(d_{i}^{\rm post}\) are \[\alpha_{i}^{d} = \big{(}1+c\tau_{d_{i}}\big{)}\frac{B}{V_{cl}}q_{d_{i}}+\mu_{d}+\lambda_{\rm dep}^{i}(d_{i}^{\rm post})+\lambda_{\rm air}+\phi_{i}^{d},\quad i=1,\ldots,i_{\rm max},\quad\mbox{airborne droplets}, \tag{A5}\] \[\alpha_{i}^{c} = \big{(}1+c\tau_{c_{i}}\big{)}\eta_{c}q_{c_{i}}+\mu_{c}+\phi_{i}^{c},\quad i=1,\ldots,i_{\rm max},\quad\mbox{settled droplets}. \tag{A6}\] Similarly to the infection transmission rates, droplet removal mechanisms may be associated with behavioral, biological, or physical processes. The first term in both Eqs. (A5) and (A6) is a self-removal term: in the case of airborne droplets it models removal by inhalation by the susceptible (shown to be negligible for influenza-related parameters [9]); in the case of settled droplets it models self-transfer of a deposited droplet to facial membranes. The viral inactivation rate in airborne droplets is denoted by \(\mu_{d}\) (number per day), and that of settled droplets by \(\mu_{c}\) (number per day). They are determined by the properties of the virus under ambient conditions, and hence are a strong function of the relative humidity [4]. The ventilation rate is denoted by \(\lambda_{\rm air}\) (number of air exchanges per day), whereas the surface deposition rate is denoted by \(\lambda_{\rm dep}\) (number of droplets per day). In our simulations we considered that the only physical process that leads to droplet deposition is gravitational settling, \(\lambda_{\rm dep}^{i}=\theta(d_{i}^{\rm post})\). The parameters \(\phi_{i}^{d,c}\) denote any other process that might induce particle removal: for example, UV radiation would be an additional viral inactivation mechanism that would modify \(\mu_{d,c}\). Another possible inactivation mechanism would be indoor spraying of nonhazardous levels of an acid, e.g., nitric acid, to decrease droplet pH [5], or spraying a basic solution to render indoor micro-environmental conditions basic [6]. ### Droplet generation rates Normal oro-nasal activities, like breathing, talking, laughing, singing, and more violent expiratory events, like sneezing and coughing, produce a distribution of respiratory droplet sizes. As we try to retain features of the spread of SARS-CoV-2, we opted to limit the estimate of the droplet generation rates to normal oro-nasal activities. In addition, we neglect super-spreaders, or super-emitters [10]. The generation rates we analyzed are based on measurements reported by Johnson et al. (2011) [11]; see, also, de Oliveira et al. (2021) [3]. We used the first two distributions [11], B (bronchiolar droplet generation mode) and L (laryngeal droplet generation mode), to determine the concentration-weighted droplet diameter \(d_{1}^{\rm post}\). Their emission rate was determined from the reported data for Cn\({}_{i}\), the droplet number concentration (number of droplets per cm\({}^{3}\)).
The droplet concentration was converted to droplet number per second via the flow rate of the Aerodynamic Particle Sizer (APS) of 5 l/min. The emitted respiratory droplets per second were converted to the number of expelled droplets per day by assuming 1.5 hours of speaking per day (hence the explicit 1.5 in Table 1). Since the APS measures aerosol particles in the size range \(0.50\leq d_{p}\leq 20\) \(\mu\)m, we decided to use the data of Ref. [11] only for the smaller diameter \(d_{1}^{\rm post}\). The emission rate of the \(d_{2}^{\rm post}\) droplets was based on the data of Loudon and Roberts (1967) [12], as described in Ref. [7], and preserving the total volume of the expelled oral fluid. ### Other parameters All simulation parameters, along with the associated references, are reported in Table 1. We note that observations [20] and simulations [4] suggest the importance of the ventilation rate. We chose to use a characteristic value for typical Italian buildings as reported in Ref. [16], namely \(\lambda_{\rm air}=0.2\) air exchanges per hour. The evaporation factor \(\zeta_{\rm evap}\) was chosen to be 0.40, an intermediate value between the recent estimate [18] of 0.20 and our initial estimate [7] of 0.50. The viral inactivation rate in airborne droplets was based on the early measurements of van Doremalen et al. (2020) [15]. It is frequently quoted [16; 21] as the removal rate in terms of the viral half-life \(t_{1/2}\) as \(\lambda_{\rm inact}=\ln(2)/t_{1/2}\). \begin{table} \begin{tabular}{c c c c} Parameter & Description & Estimate & Reference \\ \hline Biological Parameters & & & \\ \(\rho_{p}^{(1)}\) & pathogen & \(7.0\times 10^{7}\)\(\#/\)cm\({}^{3}\) & Stadnytskyi et al. (2020) [13] \\ & concentration (\(d_{1}^{\rm post}\)) & (viral copies /cm\({}^{3}\)) & \\ \(\rho_{p}^{(2)}\) & pathogen & \(3.50\times 10^{6}\)\(\#/\)cm\({}^{3}\) & _ibid_. \\ & concentration (\(d_{2}^{\rm post}\)) & (viral copies /cm\({}^{3}\)) & \\ \(\mu_{I}\) & infection & \(1/6=0.1667\) & Kevrekidis et al. (2021) [14] \\ & recovery rate & (per day) & \\ \(\mu_{d}\) & inactivation & 15.13 & van Doremalen et al. (2020) [15], \\ & rate (airborne) & (per day) & Buonanno et al. (2020) [16] \\ \(p_{d}\) & probability of & 0.052 & Drossinos and Stilianakis (2010) [7] \\ & infection (airborne) & (-) & \\ \(1/\sigma\) & latent period & 0.1 or 6 days & Scenario parameter \\ Behavioural Parameters & & & \\ \(c\) & contact rate & 18 \(\#\)/day & Sypsa et al. (2021) [17] \\ & per day & & \\ \(\tau_{d_{1}}\) & characteristic breathing & 25 min & Based on \\ & time (\(d_{1}^{\rm post}\)) & & Drossinos and Stilianakis (2010) [7] \\ \(\tau_{d_{2}}\) & characteristic breathing & 1 min & _ibid_. \\ & time (\(d_{2}^{\rm post}\)) & & \\ Physical and physiological parameters & & & \\ \(d_{1}^{\rm post}\) & small-droplet diameter & 2.05 \(\mu\)m & Johnson et al. (2011) [11] \\ & speaking & & \\ \(d_{2}^{\rm post}\) & large-droplet diameter & 82.13 \(\mu\)m & Loudon and Roberts (1967) [12] \\ & speaking & & \\ \(\zeta_{\rm evap}\) & evaporation factor & 0.40 (-) & Lieber et al. (2021) [18] \\ \(B\) & breathing rate & 12 m\({}^{3}\)/day & Drossinos and Housiadas (2006) [19] \\ \(V_{cl}\) & volume personal cloud & 8 m\({}^{3}\) & Drossinos and Stilianakis (2010) [7] \\ \(q_{d_{1}}\) & inhaled-droplet & 0.88 (-) & Drossinos and Housiadas (2006) [19] \\ & deposition probability (\(d_{1}^{\rm post}\)) & & \\ \(q_{d_{2}}\) & inhaled-droplet & 1.00 (-) & _ibid_.
\\ & deposition probability (\(d_{2}^{\rm post}\)) & & \\ \(\kappa_{1}^{d}\) & airborne droplet generation rate & \(1.5\times 51,182=\) & Johnson et al. (2011) [11] \\ & speaking (droplets/day)(\(d_{1}^{\rm post}\)) & \(76,773\)\(\#\)/day & \\ \(\kappa_{2}^{d}\) & airborne droplet generation rate & \(0.8\times 47,160=\) & Loudon and Roberts (1967) [12] \\ & speaking (droplets/day) (\(d_{2}^{\rm post}\)) & \(37,728\)\(\#\)/day & \\ \(\lambda_{\rm dep}^{1}=\theta_{1}\) & airborne droplet deposition rate & 7.8 \(\#\)/day & Drossinos and Housiadas (2006) [19] \\ & still-air gravitational settling (\(d_{1}^{\rm post}\)) & & \\ \(\lambda_{\rm dep}^{2}=\theta_{2}\) & settled droplet generation rate & \(10,558\)\(\#\)/day & _ibid_. \\ & still-air gravitational settling (\(d_{2}^{\rm post}\)) & & \\ \(\lambda_{\rm air}\) & air exchange rate (AER) & 4.8 exchanges/day & Typical value for Italian buildings \\ & & & Buonanno et al. (2020) [16] \\ Infection-risk parameter & & & \\ \(\delta t\) & prediction time & 1 or 7 days & estimate \\ \end{tabular} \end{table} Table 1: Simulation parameters: two airborne droplet classes.
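As a quick sanity check, the derived entries of Table 1 and the effective removal rate of the small airborne droplet class, Eq. (A5), can be recomputed directly from the tabulated values; \(\phi_{1}^{d}\) is set to zero since no value is tabulated for it:

```python
# Recomputing derived Table 1 entries and the d_1 removal rate, Eq. (A5).
c, tau_d1 = 18.0, 25 / (24 * 60)     # contacts/day, breathing time (days)
B, V_cl, q_d1 = 12.0, 8.0, 0.88      # m^3/day, m^3, dimensionless
mu_d, lam_dep, lam_air = 15.13, 7.8, 0.2 * 24    # all per day

alpha_1 = (1 + c * tau_d1) * (B / V_cl) * q_d1 + mu_d + lam_dep + lam_air
print(f"alpha_1 ~ {alpha_1:.1f} per day")   # ~29.5; inactivation dominates

print(1 / 6)            # mu_I = 0.1667 per day
print(1.5 * 51_182)     # kappa_1 = 76,773 droplets/day (speaking, d_1)
print(0.8 * 47_160)     # kappa_2 = 37,728 droplets/day (speaking, d_2)
```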
2308.00058
Multimode optomechanics with a two-dimensional optomechanical crystal
Chip-scale multimode optomechanical systems have unique benefits for sensing, metrology and quantum technologies relative to their single-mode counterparts. Slot-mode optomechanical crystals enable sideband resolution and large optomechanical couplings of a single optical cavity to two microwave-frequency mechanical modes. Still, previous implementations have been limited to nanobeam geometries, whose effective quantum cooperativity at ultralow temperatures is limited by their low thermal conductance. In this work, we design and experimentally demonstrate a two-dimensional mechanical-optical-mechanical (MOM) platform that dispersively couples a slow-light slot-guided photonic-crystal waveguide mode and two slow-sound $\sim 7$ GHz phononic wire modes localized in physically distinct regions. We first demonstrate optomechanical interactions in long waveguide sections, unveiling acoustic group velocities below 800 m/s, and then move on to mode-gap adiabatic heterostructure cavities with a tailored mechanical frequency difference. Through optomechanical spectroscopy, we demonstrate optical quality factors $Q \sim 10^5$, vacuum optomechanical coupling rates, $g_o/2\pi$, of 1.5 MHz and dynamical backaction effects beyond the single-mode picture. At larger power and adequate laser-cavity detuning, we demonstrate regenerative optomechanical oscillations involving a single mechanical mode, extending to both mechanical modes through modulation of the input laser drive at their frequency difference. This work constitutes an important advance towards engineering MOM systems with nearly degenerate mechanical modes as part of hybrid multipartite quantum systems.
Guilhem Madiot, Marcus Albrechtsen, Clivia M. Sotomayor-Torres, Søren Stobbe, Guillermo Arregui
2023-07-31T18:27:55Z
http://arxiv.org/abs/2308.00058v1
# Multimode optomechanics with a two-dimensional optomechanical crystal ###### Abstract Chip-scale multimode optomechanical systems have unique benefits for sensing, metrology and quantum technologies relative to their single-mode counterparts. Slot-mode optomechanical crystals enable sideband resolution and large optomechanical couplings of a single optical cavity to two microwave-frequency mechanical modes. Still, previous implementations have been limited to nanobeam geometries, whose effective quantum cooperativity at ultralow temperatures is limited by their low thermal conductance. In this work, we design and experimentally demonstrate a two-dimensional mechanical-optical-mechanical (MOM) platform that dispersively couples a slow-light slot-guided photonic-crystal waveguide mode and two slow-sound \(\sim 7\) GHz phononic wire modes localized in physically distinct regions. We first demonstrate optomechanical interactions in long waveguide sections, unveiling acoustic group velocities below 800 m/s, and then move on to mode-gap adiabatic heterostructure cavities with a tailored mechanical frequency difference. Through optomechanical spectroscopy, we demonstrate optical quality factors \(Q\sim 10^{5}\), vacuum optomechanical coupling rates, \(g_{o}/2\pi\), of 1.5 MHz and dynamical backaction effects beyond the single-mode picture. At larger power and adequate laser-cavity detuning, we demonstrate regenerative optomechanical oscillations involving a single mechanical mode, extending to both mechanical modes through modulation of the input laser drive at their frequency difference. This work constitutes an important advance towards engineering MOM systems with nearly degenerate mechanical modes as part of hybrid multipartite quantum systems. ## I Introduction The study of the interaction between an electromagnetic and a mechanical resonator in a cavity-optomechanical system has led to scientific and technological advances in quantum physics, nonlinear optics, and condensed matter physics [1; 2; 3]. However, the canonical theoretical description of the one-to-one interaction [1] fails to describe certain phenomena observed in realistic devices which naturally host multiple optical and mechanical modes [4], i.e., multimode optomechanical systems, which can cause quantum decoherence when the collective interaction is uncontrolled [5; 6]. In the usual experimental setting with a single laser drive, undriven optical modes are usually unimportant. However, the individual parametric optomechanical couplings of the various mechanical modes to the driven optical mode lead to an effective coupling between them. The case of two mechanical modes, i.e., mechanical-optical-mechanical (MOM) systems, has witnessed particular attention due to its potential impact in the quantum regime, e.g. to probe decoherence processes [7; 8; 9], to introduce nonreciprocity [10], or to enhance robustness to thermal noise via mode squeezing [11]. In addition, its linearized Hamiltonian description is particularly suited for the study of exceptional points [12; 13; 14], while strongly-driven MOM devices exhibit collective nonlinear dynamics, including mode competition [15; 16], synchronization [17; 18], and bistability control [19]. 
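The linearized description referred to here is not written out in the text; it derives from the standard dispersive Hamiltonian of a MOM system (canonical in cavity optomechanics [1], up to the sign convention for the coupling):

```latex
% One optical mode a dispersively coupled to two mechanical modes b_1, b_2.
H/\hbar = \omega_{\mathrm{c}}\,a^{\dagger}a
        + \sum_{j=1,2}\Omega_{j}\,b_{j}^{\dagger}b_{j}
        + \sum_{j=1,2} g_{\mathrm{o},j}\,a^{\dagger}a\left(b_{j}+b_{j}^{\dagger}\right)
```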
At the frontier between these two fields of study -- namely the linear and nonlinear regimes -- lies the so far largely unexplored paradigm of nonlinear non-Hermitian physics, where phenomena like topological mode transfer or unidirectional phonon emission could be realized based on high-frequency coherent phonon self-oscillations [20; 21; 22]. The optically mediated coupling between mechanical modes in MOM systems scales inversely with their frequency difference and becomes symmetrical only when their respective vacuum optomechanical coupling rates to the common optical cavity mode, \(g_{o,1}\) and \(g_{o,2}\), are identical. This has fostered research on physical implementations that exhibit (nearly) degenerate mechanical modes, where the modes coherently mix into optomechanically dark and bright dressed states [23] and can transfer energy efficiently [24]. These include Fabry-Perot microcavities with two identical membranes [25], membrane-in-the-middle cavities with symmetry-enforced (high-order) mechanical mode degeneracies [24], or parallel evanescently coupled optical resonators. The latter category exploits the strong dependence of the frequencies of the resulting optical supermodes on the distance between the resonators and embraces double-disk microcavities [26], bilayer photonic-crystal cavities [27], and photonic-crystal zipper cavities [28]. Interestingly, the electromagnetic boundary conditions across material interfaces lead to a strong local field enhancement, i.e., a slot-mode effect, when such distances are deep sub-wavelength and the field polarization is adequate [29]. Such an effect is non-resonant; therefore, the formation of supermodes is not required. This has allowed integrated MOM devices with large optomechanical couplings by using triple nanobeam geometries separated by tens of nanometers, each supporting one of the excitations [30]. These quasi-one-dimensional (1D) MOM slot-mode optomechanical crystals (OMCs) can display microwave-frequency phononic-crystal cavity modes and reach the sideband-resolved regime, which has recently allowed the observation of mechanical exceptional points [31]. However, their geometries may be unsuitable to study emergent macroscopic quantum phenomena in millikelvin MOM systems because of inefficient thermalization [32]. In the presence of residual absorption, the limited heat dissipation pathways of 1D OMCs lead to a phononic hot bath which can destroy the prepared quantum states. In addition, the poor stiffness of nanobeams makes them prone to surface-force-induced collapses during and after fabrication [33], limiting how narrow the gap between them can be, i.e., how large \(g_{\mathrm{o}}\) can be [34; 30], and potentially requiring stress-release management [35; 30]. Two-dimensional (2D) optomechanical structures can sustain even larger optical quality factors [36; 37], large \(g_{\mathrm{o}}\) to long-lived hypersonic mechanical modes [32; 38], and have the additional benefits of enhanced heat dissipation [32; 39] and convenient stiffness. Nonetheless, to the best of our knowledge, no experimental work has focused so far on MOM experiments in 2D OMCs. Here we propose a novel waveguide and cavity optomechanics platform that enables the coupling of a slot-guided optical mode to two independent, nearly degenerate, microwave-frequency mechanical modes.
By building mode-gap adiabatic heterostructure cavities [40], we demonstrate a sideband-resolved system with \(g_{\mathrm{o}}\) as high as \(1.5\,\mathrm{MHz}\) between C-band telecom photons in a cavity with a \(Q\sim 10^{5}\) and two \(\sim 7\,\mathrm{GHz}\) acoustic resonators. Passive control over the frequency difference of the latter two via a geometrical parameter is achieved, which may enable tailored MOM systems adaptable to specific experimental requirements. By performing wavelength- and power-dependent optomechanical spectroscopy of a device with mechanical modes only differing in frequency by 6 MHz, we provide evidence of multimode dynamical backaction in good agreement with the linearized optomechanical equations of motion of a MOM system. Finally, we demonstrate simultaneous self-oscillatory dynamics of two mechanical resonators stimulated by an intermodulation tone, consistent with recent demonstrations using two mechanical modes of a single nanobeam OMCC [41]. Figure 1: **Multimode optomechanical crystal waveguides (OMCWs).** (a) Scanning electron micrograph of an OMCW that couples two phononic waveguide modes — one on each side of the air slot — to a slot-guided optical mode. Insets: cross-section of the slot (top) and a cleaved shamrock. (b) Optical (left) and mechanical (right) dispersion diagrams of the structure in (a). The Bloch modes of interest are represented at \(k=\pi/a_{\mathrm{x}}\) and \(q=0\). (c) Optical transmission spectrum of a mirror-terminated OMCW (\(L=350a_{x}\)) through a tapered fiber loop. (d) Radio-frequency (RF) spectrum measured with a laser drive on an optical mode at \(\lambda=1543\,\mathrm{nm}\), and comparison of the simulated (line) and experimentally reconstructed (blue dots) acoustic group velocity. ## II Slot-mode multimode optomechanical crystal waveguides The geometry of the multimode OMC waveguides (OMCWs) we explore here consists of a line defect waveguide composed of two rows of circular holes with (optionally) different radii \(R_{1}\) and \(R_{2}\), a slot of width \(s\), and two triangular lattices of shamrock-shaped [42; 43] holes around them, with the shamrocks facing each other (Fig. 1(a)). The structures are fabricated in \(220\,\mathrm{nm}\) silicon-on-insulator (SOI) and the \(2\,\mu\mathrm{m}\) buried oxide is undercut to release the suspended structures with hydrofluoric vapour-phase etching. The patterns are defined in chemically semi-amplified resist with electron-beam lithography [29] and etched into the silicon using dry-etching based on a modified CORE process [44; 45]. The fabrication process is tailored to yield an excellent design-to-realized pattern fidelity, including smooth and vertical sidewalls (Fig. 1(a), insets) [46]. The presence of the slot mechanically decouples the two membrane sides, making the system a MOM optomechanical waveguide with a geometry-controlled mechanical frequency difference provided by \(\Delta R=R_{1}-R_{2}\). The employed shamrock crystal enables large near-infrared electromagnetic [43] and GHz mechanical band gaps [47], and a similar geometry has been exploited for the in-situ generation of coherent acoustic phonons using Anderson-localized optical modes resulting from residual roughness in the etched sidewalls [45]. However, the quality factors, \(Q\), observed in Ref. [45] were limited by the multimode optical dispersion of the slot-guided mode and were much below those measured in a slot photonic-crystal waveguide solely based on circular holes [48].
The geometry we propose leverages the best features of both shamrock-shaped and circular holes as it exhibits single-mode dispersions with zero group velocity, i.e., simultaneous slow light and sound, at the mechanical and optical Bloch wavevectors where forward-type intra-modal Brillouin interactions are phase-matched [49; 50], respectively at \(q=0\) and \(k=\pi/a_{\mathrm{x}}\) (Fig. 1(b)). In addition, the vector parities of the Bloch optical (\(y\)- and \(z\)-symmetric \(\mathbf{E}\) field) and mechanical (\(z\)-symmetric) modes make them optomechanically bright. The electric field amplitude of the optical mode, \(|\mathbf{E}(\mathbf{r})|\), and the displacement amplitude, \(|\mathbf{u}(\mathbf{r})|\), and deformation profile of the mechanical mode of interest are shown in Fig. 1(b). The former exhibits subwavelength light confinement in the etched air slot, making the band edge frequency very sensitive to slot-width variations [29; 30], while the latter is reminiscent of the in-plane breathing mode of a nanobeam waveguide, where one of the lateral free boundary conditions is replaced by a full-gap phononic crystal [51]. The independent breathing motion of the two phononic waveguides (the mechanical mode is only represented for the bottom mechanical waveguide in Fig. 1(b)) strongly modulates the slot width, which leads to large unit-cell vacuum optomechanical coupling rates, \(g_{\mathrm{o,cell}}/2\pi\), of \(4.8\) MHz, between the optical and the two mechanical Bloch modes. Departure from mechanical-mode degeneracy when the radii difference is \(\Delta R\neq 0\) has a negligible effect on the respective values of \(g_{\mathrm{o,cell}}/2\pi\) (see Supplementary Section 1). We probe the optical and mechanical properties of the OMCWs using a tunable diode laser connected to an optical fiber circuit that leads to a tapered fiber loop placed in contact with the slot waveguide. We terminate long OMCWs with short, 32-unit-cell waveguide segments within which the horizontal pitch is expanded from \(a_{\mathrm{x,1}}=484\) nm to \(a_{\mathrm{x,2}}=510\) nm. These sections behave simultaneously as optical and acoustic mirrors forming standing waves in the central waveguide region [45] (see Supplementary Section 1 for dispersion diagrams as a function of \(a_{\mathrm{x}}\)). Sharp spectral dips in the transmitted optical signal (Fig. 1(c)) evidence evanescent coupling to resonant optical modes of the waveguide. We observe three spectral regions with distinct features: First, a region above \(\lambda\sim 1580\,\mathrm{nm}\) made of Fabry-Perot optical modes with large on-resonance coupling fraction and a free spectral range (FSR) determined by the group index, \(n_{\mathrm{g}}\), and the length, \(L\), of the waveguide. Second, the wavelength region \(1560\,\mathrm{nm}<\lambda<1580\,\mathrm{nm}\), within which the coupling fraction is also large, but the resonant wavelengths are seemingly random. We attribute these resonances also to Fabry-Perot modes whose wavelengths are perturbed by the presence and exact position of the loop (see next section for a discussion of the dispersive effects of the loop on a single resonant optical mode). Third, a region close to the band edge (\(\lambda<1560\,\mathrm{nm}\)) where strong slow-light-induced backscattering from sidewall roughness leads to Anderson-localized modes at random spectro-spatial locations, leading to a random FSR and strong mode-to-mode fluctuations in the coupling fraction [45; 48].
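The optical free spectral range mentioned above, and the acoustic analysis that follows, rest on the same standing-wave relation for a mirror-terminated waveguide of length \(L\); a sketch with illustrative values (neither \(n_{\mathrm{g}}\) nor the mechanical mode spacing is quoted in the text):

```python
# Fabry-Perot spacings in a mirror-terminated waveguide of length L:
# optically FSR = lambda^2 / (2 n_g L), acoustically FSR = v_g / (2 L).
a_x1 = 484e-9                      # m, horizontal pitch
L = 350 * a_x1                     # m, waveguide length (L = 350 a_x)

lam, n_g = 1590e-9, 10             # m; n_g is an assumed group index
fsr_opt = lam**2 / (2 * n_g * L)
print(f"optical FSR ~ {fsr_opt * 1e9:.2f} nm")

fsr_mech = 2.4e6                   # Hz, assumed spacing of RF peaks
v_g = 2 * L * fsr_mech             # inverting FSR = v_g / (2 L)
print(f"acoustic v_g ~ {v_g:.0f} m/s")   # the slow-sound scale of Fig. 1(d)
```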
We characterize the mechanical properties of the OMCWs by moderate-power, blue-detuned driving of optical modes in the first and second regions. We detect the thermal motion of the Fabry-Perot standing mechanical modes using a fast photoreceiver and an electronic spectrum analyzer. A characteristic radio-frequency (RF) spectrum for an Anderson-localized optical mode at \(\lambda=1543\,\mathrm{nm}\) is shown in Fig. 1(d). For the particular case shown (\(\Delta R=0\)), no degeneracy-lifting between the two independent phononic waveguide modes is observed. The spectrum is composed of many overlapping and regularly spaced Lorentzian-shaped mechanical resonances with estimated linewidths, \(\Gamma\), in the range 2-5 MHz and a transduction amplitude envelope that grows as the mechanical band edge at \(7.2\,\mathrm{GHz}\) is approached. We attribute this overall shape to a relaxed sinc-like phase-matching condition between an optical mode with \(k\)-components dominated by \(k=\pi/a_{\mathrm{x,1}}\) and mechanical modes in the vicinity of \(q=0\) for forward-type Brillouin scattering interactions in a finite waveguide [50]. This remains true even if the optical mode employed is Anderson-localized because its \(k\)-space representation is often centered around the wavevector of the underlying waveguide at that frequency [52], i.e. \(k\sim\pi/a_{\mathrm{x,1}}\). While Fig. 1(d) only exhibits peaks within the single-mode regime, RF spectra obtained by driving other optical modes also reveal peaks in the multi-mode propagation regime, in which the FSR is ill-defined due to inter-modal mechanical mixing. Using five additional RF spectra (see Supplementary Section 3), we reconstruct the acoustic group velocity, \(v_{\mathrm{g}}\), of the top and bottom phononic waveguides for \(\Delta R=0\). The reconstructed \(v_{\mathrm{g}}\) is shown at the bottom of Fig. 1(d) and compared to the simulated \(v_{\mathrm{g}}\). The simulation curve has been rigidly offset by only \(-90\,\mathrm{MHz}\) (\(\sim 1\,\%\) of \(\Omega\)) to account for potential systematic errors on the SEM-extracted contour of the geometric features, illustrating the good agreement between simulations and measurements. We measure slow propagation of \(\sim 7\) GHz acoustic waves down to a group velocity below 800 m/s, constituting a 7-fold reduction relative to the transverse speed of sound in bulk silicon. ## III Slot-mode multimode optomechanical crystal cavities The efficient coupling of a multitude of Fabry-Perot mechanical modes and Anderson-localized optical modes in the mirror-enclosed 2D OMCWs presented above provides an interesting platform to explore collective effects in multimode optomechanics, such as light squeezing [4]. However, some of the strongest optomechanical effects beyond the canonical one-to-one interaction occur in the MOM configuration. To explore such a setting, we engineer a MOM OMC cavity (OMCC) that couples two mechanical resonators and a high-\(Q\) optical mode based on the waveguide modes of Fig. 1 and the respective partial band gaps above their band-edge frequencies. We adiabatically tune the horizontal pitch, \(a_{\mathrm{x}}\), along the waveguide axis from a central defect unit cell (\(a_{\mathrm{x,1}}\) = 484 nm) to mirror unit cells (\(a_{\mathrm{x,2}}\) = 510 nm) on both sides.
The full defect region is formed by \(N_{\mathrm{c}}\) = 15 unit cells, and additional invariant sections made of \(N_{\mathrm{m}}\) = 32 mirror unit cells are included at the edges of the defect to prevent in-plane losses. Fig. 2(a) shows the amplitude of the electric field, \(|\mathbf{E}|\), of the cavity mode for \(\Delta R=0\), whose theoretical resonant wavelength and quality factor are \(\lambda_{\mathrm{o}}\) = 1556 nm and \(Q_{\mathrm{i}}\) = 1.05 \(\times 10^{8}\). The deformation profile and displacement amplitude, \(|\mathbf{u}|\), of the two mechanical modes are shown in Fig. 2(b), and both have mechanical frequency \(\Omega_{1,2}/2\pi\) = 7.15 GHz in the degenerate case. The frequency difference between the two mechanically decoupled cavity modes is controlled by decreasing \(R_{2}\) relative to \(R_{1}\), which in turn slightly redshifts the resonant wavelength of the optical cavity mode. More details on the cavity-optomechanical figures of merit are provided in Supplementary Section 2. We characterize the optical and mechanical properties of the MOM OMCCs using the same fiber-loop evanescent technique as in Fig. 1. To account for the strong perturbative effect of the fiber loop on the resonant wavelength and losses of the optical cavity mode and infer their unperturbed parameters, i.e., in the absence of the loop, we systematically study the optical response as a function of the loop position. Figure 2(c) shows the optical transmission spectra across the cavity resonance for 24 different loop positions, \(x\), with the loop in contact with the sample and approximately aligned to the slot axis (Fig. 2(c) top inset). The position is extracted via analysis of microscope images acquired with a 100\(\times\) objective imaging the probed structure from above. By moving the sample under the loop while in contact, the loop slides along the slot and changes its overlap with the optical cavity mode. We extract the resonant wavelength, \(\lambda_{\mathrm{o}}\), and the extrinsic, \(\kappa_{\mathrm{e}}/2\pi\), and intrinsic, \(\kappa_{\mathrm{i}}/2\pi\), decay rates of the cavity mode for each loop position by fitting the cavity resonance with a Lorentzian line-shape (Fig. 2(c) bottom inset) and using that the on-resonance transmission is given by \(T_{0}=(1-\kappa_{\mathrm{e}}/\kappa_{\mathrm{t}})^{2}\), with \(\kappa_{\mathrm{t}}=\kappa_{\mathrm{e}}+\kappa_{\mathrm{i}}\). Figure 2(d) shows the extracted \(\lambda_{\mathrm{o}}\) along with a theoretical prediction obtained from a convolution between the calculated \(|\mathbf{E}|^{2}\) and a Gaussian envelope representing the loop [29]. The Gaussian has a standard deviation, \(\sigma=0.99\,\mu\mathrm{m}\), identified with a least-squares optimization. When the loop is centered on top of the cavity, the dispersive perturbation is maximal, shifting the wavelength by as much as \(\sim 15\,\mathrm{nm}\), i.e., \(1\,\%\) of the cavity wavelength. We identify the center position, \(x=0\), as the position causing the largest redshift of the resonance wavelength. The evolution of the total, extrinsic and intrinsic decay rates as a function of the loop position is shown in Fig. 2(e). Several interesting properties can be observed. First, \(\kappa_{\mathrm{i}}\) changes considerably with the loop position, which implies that the loop not only loads the cavity but also adds additional undetected loss pathways.
Second, we observe a decrease of both \(\kappa_{\mathrm{i}}\) and \(\kappa_{\mathrm{e}}\) when the loop is centered on the cavity. We hypothesize that this is due to the additional symmetry of this configuration. Third, when the center of the loop is far from the geometric cavity center, the behaviour of \(\kappa_{\mathrm{i}}\) may indicate that the value is still unconverged, contrary to the wavelength. This indicates that the employed technique may not allow the measurement of the unperturbed optical linewidth. Therefore, we identify the unperturbed cavity parameters to be \(\lambda_{\mathrm{o}}=1562\) nm and \(\kappa_{\mathrm{i}}/2\pi<2\,\mathrm{GHz}\) (for the device with \(R_{2}=193\,\mathrm{nm}\) considered here). We systematically characterize the mechanical cavity modes for OMCCs with different values of \(R_{2}\) as in Fig. 1(d), with the fiber loop positioned to achieve a minimal perturbation and critical coupling to the optical cavity mode, i.e., \(T_{0}=1/2\). The value of \(R_{2}\) is nominally reduced in \(1\,\mathrm{nm}\) steps from \(R_{1}=R_{2}=194\,\mathrm{nm}\) to \(R_{2}=184\,\mathrm{nm}\). Figure 2(f) shows the RF spectra with Lorentzian fits to both mechanical modes. Following the prediction of the finite-element simulations (see Supplementary Sections 1 and 2), we identify the lower (higher) frequency mechanical mode, fitted in dark (light) blue, as that of the top (bottom) membrane. This probably holds true except for the smaller values of \(\Delta R\), where the effect of disorder-induced dispersion might overcome the as-designed deterministic frequency difference. We note that the relative amplitude of the transduced signals is determined by the exact loop position transverse to the slot axis, which determines to what extent the modes are dampened, and by the presence of dynamical backaction effects, which may occur preferentially for one of the modes at a fixed detuning [15]. Despite this, the signal-to-noise ratio (SNR) remains sufficient to extract the central frequencies accurately. Figure 2(g) shows the extracted frequencies as well as linear fits predicting \(\Omega_{1}(\Delta R)/2\pi=7.052(2)\,\mathrm{GHz}+0.5(4)\Delta R\) MHz/nm and \(\Omega_{2}(\Delta R)/2\pi=7.055(3)\,\mathrm{GHz}+11.8(5)\Delta R\) MHz/nm. The non-zero slope found for \(\Omega_{1}\) likely originates from short-range proximity effects [29, 46], i.e., the smaller value for \(R_{2}\) leads to a smaller effective electron-beam dose on the other membrane side, which is not accounted for with standard long-range proximity effect correction [47]. Meanwhile, the slope of \(\Omega_{2}\) agrees well with the simulation prediction (10.19(7) MHz/nm). Note that the mechanical frequencies are approximately 90 MHz lower than the simulated values, which is in very good agreement with the theory-experiment offset found for the acoustic waveguide band of Fig. 1(d).
Figure 2(h) shows \(\delta\Omega(\Delta R)/2\pi\), illustrating its linear dependence with an intercept \(\delta\Omega(0)/2\pi=(3\pm 4)\,\)MHz. This non-zero intercept corroborates that we consistently observe a separation of a few MHz between the nominally degenerate mechanical modes, which is caused by inherent fabrication imperfections. The values of \(\kappa_{\text{t}}\) under the measurement conditions of Fig. 2(f) are all in reasonable agreement with those reported in Fig. 2(e), i.e., we observe no pronounced effect of \(\Delta R\) on the optical \(Q\) (as expected from simulations, see Supplementary Figure S3), and therefore the MOM OMCCs we demonstrate are all in the sideband-resolved regime (\(\kappa_{\text{t}}<\Omega_{1,2}\)). In addition, the values of \(g_{\text{o},1}/2\pi\) and \(g_{\text{o},2}/2\pi\) are measured to be in the range of 1.2-1.5 MHz, which is in excellent agreement with simulations (see Supplementary Sections 2 and 4 for details on the simulated and measured \(g_{\text{o},1}\) and \(g_{\text{o},2}\)). In the next section, we explore the implications of the reported cavity-optomechanical figures of merit and the MOM nature of the system on the detuning- and power-dependence of the dynamical backaction effects on a laser-driven device with nearly degenerate mechanical modes. Figure 2: **Optomechanical spectroscopy of a 2D slot-mode mechanical-optical-mechanical (MOM) system.** (a) Electric field amplitude, \(|\mathbf{E}|\), of the optical cavity mode and (b) displacement amplitude, \(|\mathbf{u}|\), and deformation profiles of the two mechanical cavity modes. (c) Normalized transmission spectrum at different positions along the waveguide axis, \(x\). The spectra are offset for clarity and the red curves are Lorentzian fits to the resonances. Insets: Schematic of the configuration with the loop on top of the cavity (top) and a close-up of a normalized spectrum (bottom). (d) Resonant wavelength as a function of the loop position, extracted from the fits in (c) and through a convolution of the calculated electric field intensity, \(|\mathbf{E}|^{2}\), with a Gaussian (\(\sigma=0.99\,\mu\mathrm{m}\)). (e) Total, external, and intrinsic decay rates as a function of the loop position. (f) Normalized radiofrequency (RF) spectra measured for devices with decreasing circular hole radius \(R_{2}\). The spectra are offset for clarity. Blue solid lines are Lorentzian fits to the individual peaks. The inset shows a characteristic power spectral density of, and fit to, a single mechanical peak. (g) Extracted mechanical frequencies of the two plates (dots), linear fit (solid line) and confidence interval of the fit (shaded background). (h) Same as (g) for their frequency difference. ## IV Optomechanical spectroscopy of a mechanical-optical-mechanical system We consider a structure with \(R_{1}=194\) nm and \(R_{2}=193\) nm and place the loop in the position indicated previously, leading to loss rates of \(\kappa_{\text{i}}/2\pi\approx 3.5\) GHz and \(\kappa_{\text{e}}/2\pi\approx 0.76\) GHz, and a resonance wavelength (at low power) of \(\lambda_{\text{o}}=1559.35\) nm. Using a laser power of \(P_{\text{in}}=131\)\(\mu\)W, we step-scan the laser wavelength, \(\lambda\), from the blue-detuned side of the optical resonance and measure the RF spectrum. In Fig. 3(a), we show the resulting power spectral density as a colormap, highlighting the two mechanical resonances separated by approximately 6 MHz. To track the mechanical frequencies and linewidths as a function of \(\lambda\), we fit each spectrum with a sum of two Lorentzians. Examples of the fitted spectra are presented in Fig. 3(b), with experimental data extracted from the above map as indicated by the white dashed lines. The same procedure is applied for \(P_{\text{in}}=87\), \(131\), \(175\) and \(219\)\(\mu\)W, and the extracted parameters are summarized in Fig. 3(c-e) (left column).
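The loading rates quoted above follow from the fitted total linewidth and the on-resonance transmission via the relation of Section III; a minimal sketch, assuming the undercoupled branch of the quadratic:

```python
import numpy as np

# T0 = (1 - kappa_e/kappa_t)^2  =>  kappa_e = kappa_t * (1 - sqrt(T0))
# on the undercoupled branch (kappa_e < kappa_t / 2).
def split_losses(kappa_t, T0):
    kappa_e = kappa_t * (1 - np.sqrt(T0))
    return kappa_e, kappa_t - kappa_e      # (extrinsic, intrinsic)

# With kappa_t/2pi = 4.26 GHz, T0 ~ 0.675 reproduces the Section IV
# values of kappa_e/2pi ~ 0.76 GHz and kappa_i/2pi ~ 3.5 GHz.
print(split_losses(4.26, 0.675))
```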
For increasing input power, the mechanical resonances experience an increasing displacement around their natural value (Fig. 3(c)) due to optomechanical dynamical backaction. Meanwhile, the mechanical damping rates Figure 3: **Dynamical back-action in the nearly degenerate case.** (a) Measured radiofrequency (RF) spectra as a function of the laser wavelength \(\lambda\) for a laser power \(P_{\text{in}}=131\)\(\mu\)W. The dashed lines indicate the value of \(\lambda\) for the spectra in (b). (b) Individual RF spectra (black dots) fitted with a double-Lorentzian function (grey solid lines) to extract the mechanical frequencies and damping rates. The curves are vertically offset for clarity. (c-e) Experimental (left) and theoretical (right) evolution of (c) the mechanical frequencies \(\Omega_{1}\) and \(\Omega_{2}\), and the mechanical damping rates (d) \(\Gamma_{1}\) and (e) \(\Gamma_{2}\). These parameters are plotted at four different powers (see legend with \(P_{\text{in}}\) indicated in units of \(\mu\)W) and as a function of \(\lambda\) or the laser-cavity detuning, \(\delta\lambda\), respectively, for experiment and theory. The blue dashed lines indicate the wavelength (or detuning) at which the minimum of the damping rates occur for \(P_{\text{in}}=219\)\(\mu\)W. tend to decrease down to a minimum, as expected in the blue-detuned regime. For sufficiently high input power (here at \(P_{\text{in}}=219\)\(\mu\)W), the damping rate saturates around \(\sim 10\) kHz, which indicates that the mode is self-oscillating [1]. In Fig. 3(c-e) (right column), we qualitatively compare the extracted parameters as a function of laser wavelength to the values predicted by the linearized optomechanical equations of motion of a MOM system [13, 53]. In addition to the optical parameters given above, the model uses \(\Omega_{1,2}/2\pi=7.061\) GHz \(\pm 3.05\) MHz, \(\Gamma_{1}/2\pi=\Gamma_{2}/2\pi=3.2\) MHz, \(g_{\text{o},1}/2\pi=1.25\) MHz and \(g_{\text{o},2}/2\pi=1.5\) MHz. Details on the theoretical model are given in Supplementary Section 6. We note that the model ignores the observed residual-absorption-mediated thermal non-linearities, so the horizontal axis in the theoretical plots of Fig. 3(c-e), which use the laser detuning \(\delta\lambda=\lambda-\lambda_{0}\), cannot be directly mapped to the horizontal axis of the experimental plots as the true detuning scales non-linearly with the laser wavelength due to the thermo-optic drag, i.e. \(\lambda_{0}=f(\lambda)\). Nevertheless, the latter produces a nearly asymptotic decrease of \(\delta\lambda\) towards zero (see Supplementary Section 5), followed by a sudden jump in the red-detuned regime (\(\delta\lambda>0\)). Therefore, theory and experiment can be qualitatively compared by using a logarithmic scale for the horizontal axis of the theoretical plots. We observe that the model captures the overall evolution of the mechanical frequencies and damping rates. In particular, the damping rates reach their minimum at different values of \(\lambda\) (or \(\delta\lambda\)), as highlighted with the dashed lines for the case \(P_{\text{in}}=219\)\(\mu\)W. This feature, which does not emerge in a model with two independent optomechanical oscillators (see Supplementary Section 5), suggests that the mechanical modes start hybridizing despite their non-identical mechanical frequencies. The same applies to any asymmetrical feature in the relative evolution of the parameters of both mechanical modes. 
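To make the double-Lorentzian fitting procedure concrete, here is a minimal, self-contained sketch; the spectrum is synthetic (two peaks roughly 6 MHz apart), standing in for the measured power spectral densities.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, gamma, amp):
    """Single Lorentzian with center f0, FWHM gamma, peak amplitude amp."""
    return amp * (gamma / 2) ** 2 / ((f - f0) ** 2 + (gamma / 2) ** 2)

def double_lorentzian(f, f1, g1, a1, f2, g2, a2, offset):
    return lorentzian(f, f1, g1, a1) + lorentzian(f, f2, g2, a2) + offset

# Synthetic RF spectrum around 7.06 GHz (frequency axis in MHz).
rng = np.random.default_rng(1)
f = np.linspace(7050.0, 7075.0, 1000)
psd = double_lorentzian(f, 7058.0, 3.0, 1.0, 7064.0, 3.0, 0.7, 0.05)
psd += rng.normal(0, 0.02, f.size)

p0 = [7058, 3, 1, 7064, 3, 0.7, 0]          # initial parameter guesses
popt, _ = curve_fit(double_lorentzian, f, psd, p0=p0)
print(f"Omega_1/2pi = {popt[0]:.2f} MHz, Gamma_1/2pi = {popt[1]:.2f} MHz")
print(f"Omega_2/2pi = {popt[3]:.2f} MHz, Gamma_2/2pi = {popt[4]:.2f} MHz")
```

Repeating this fit for each spectrum in the wavelength scan yields the frequency and linewidth traces plotted in Fig. 3(c-e).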
We note that below \(\lambda\approx 1559.4\) nm, the SNR of the transduced mechanical resonances is very low, which prevents determining the frequencies and damping rates. Furthermore, above \(\lambda\approx 1559.7\) nm, the properties of the modes stabilize, and their transduction slowly decreases until the laser exits the thermo-optic resonance.

## V Stimulated two-mode optomechanical amplification via intermodulation

The simultaneous parametric amplification of two thermally excited mechanical oscillators coupled to a common optical mode is typically prevented by mode competition [15] leading to anomalous cooling [54], except for the case of two mechanical resonators of disparate frequency [53, 30]. Nevertheless, such two-mode amplification can be stimulated by modulating the input laser intensity at the intermodal frequency [41]. To demonstrate such a feature in the investigated MOM OMCC, we focus on a device with \(R_{1}=194\) nm and \(R_{2}=190\) nm, which results in a frequency difference of \(\delta\Omega/2\pi\approx 38\) MHz. We fix the optical loading conditions as before, with blue-detuned driving at a power \(P_{\text{in}}=90\)\(\mu\)W, considerably below the self-oscillation threshold. We then apply direct intensity modulation of the laser with modulation depth \(d\) and frequency \(\Omega_{\text{VNA}}\). We report in Fig. 4(a) the recorded RF spectrum for \(d=30\%\) while step-scanning the modulation frequency. We observe the two mechanical modes at \(\Omega_{1}/2\pi=6.965\) GHz and \(\Omega_{2}/2\pi=7.003\) GHz with respective linewidths --when the modulation is off-- \(\Gamma_{1}/2\pi=2.417\) MHz and \(\Gamma_{2}/2\pi=1.234\) MHz. Note that the mechanical frequencies are slightly lower than those reported in Fig. 2(f-h) because the device we analyze here used a lithography mask with exposed (void) features shrunk by 5 nm uniformly. Each mechanical peak is surrounded by a pair of sidebands at a distance of \(\pm\Omega_{\text{VNA}}\). When the high-frequency (low-frequency) sideband of the peak at \(\Omega_{1}\) (\(\Omega_{2}\)) crosses the mode at \(\Omega_{2}\) (\(\Omega_{1}\)), i.e., when \(\Omega_{\text{VNA}}=\delta\Omega\), the amplitude of the mechanical peaks increases significantly, far beyond (by at least 20 dB) the value given by the sum of the thermal transduction and the sideband peak.

Figure 4: **Simultaneous Floquet mechanical lasing of two nearly degenerate mechanical modes.** (a) Measured RF spectra as a function of the modulation frequency, \(\Omega_{\text{VNA}}\). (b) Individual RF spectra extracted from (a) and vertically offset for clarity. (c) Measured RF peak amplitude as a function of the modulation depth and modulation frequency for mechanical modes 1 (left) and 2 (right). The green lines are the theoretical amplification threshold obtained using \(\Delta=1.024\overline{\Omega}\) (with \(\overline{\Omega}=6.984\) GHz) and the model in Ref. [41].

In Fig. 4(b), we show three spectra extracted from the upper map, corresponding to \(\Omega_{\rm VNA}<\delta\Omega\), in which case the two modes are thermally excited (bottom), \(\Omega_{\rm VNA}\approx\delta\Omega\), where the modes amplify by nearly 40 dB (middle), and \(\Omega_{\rm VNA}>\delta\Omega\), for which the amplitudes of the modes reduce down to that of the sub-threshold regime (top).
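To spell out the sideband-crossing condition numerically, the short sketch below sweeps a hypothetical modulation frequency and flags when a sideband of one mode overlaps the other mode within its linewidth, using the frequencies and linewidths quoted above; the sweep grid is illustrative.

```python
import numpy as np

# Values quoted in the text, all expressed in MHz for convenience.
omega1, omega2 = 6965.0, 7003.0        # Omega_1/2pi, Omega_2/2pi
gamma1, gamma2 = 2.417, 1.234          # Gamma_1/2pi, Gamma_2/2pi (modulation off)
delta_omega = omega2 - omega1          # intermodal frequency, ~38 MHz

for f_vna in np.arange(30.0, 46.0, 2.0):      # swept modulation frequency (MHz)
    # Upper sideband of mode 1 or lower sideband of mode 2 hits the other mode:
    overlap = (abs(omega1 + f_vna - omega2) < gamma2 or
               abs(omega2 - f_vna - omega1) < gamma1)
    tag = "<-- sidebands cross the modes" if overlap else ""
    print(f"Omega_VNA = {f_vna:4.1f} MHz (Omega_VNA - dOmega = "
          f"{f_vna - delta_omega:+5.1f} MHz) {tag}")
```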
In order to identify the dynamical range enabling stimulated multimode lasing to occur, we record the transduced amplitudes of both mechanical modes as a function of both modulation parameters \(d\) and \(\Omega_{\rm VNA}\). Figure 4(c) represents the SNR evaluated with respect to the mode amplitude in the absence of modulation, i.e., in its unaltered thermal regime. For each mode, we plot the lasing threshold calculated from the theory in Ref. [41] using \(g_{\rm o,1}/2\pi=g_{\rm o,2}/2\pi=1.5\) MHz, \(\lambda_{0}=1579.895\) nm, \(\kappa_{\rm i}=2.074\) GHz, and \(\kappa_{\rm e}=0.664\) GHz. The theory is calculated using the optical detuning \(\Delta\) as a fitting parameter, since the presence of the thermo-optic effect prevents its independent determination. However, we note that the asymmetry of the amplification area about \(\Omega_{\rm VNA}\) is finely determined by the laser detuning \(\Delta\), and is symmetric when the optomechanical amplification is optimal, i.e., at \(\Delta=\overline{\Omega}\), where \(\overline{\Omega}=6.984\) GHz is the mean mechanical frequency. This allows an unambiguous fit using \(\Delta=1.024\overline{\Omega}\), and leads to a qualitative agreement of the theory with the experimental findings. Note that the second mechanical mode reaches the lasing threshold at a lower modulation depth because of its significantly lower damping rate (\(\Gamma_{2}<\Gamma_{1}\)).

## VI Conclusion and outlook

In summary, we have demonstrated a 2D OMC design that can operate as a sideband-resolved multimode MOM cavity-optomechanical system with two \(\sim 7\) GHz mechanical modes at a close spectral distance, \(\delta\Omega_{1,2}\ll\Omega_{1,2}\), and a high-\(Q\) optical cavity (\(Q\sim 10^{5}\)). The measured optomechanical coupling rates, reaching \(g_{o}/2\pi\sim 1.5\) MHz, are amongst the largest values observed in OMCCs with microwave-frequency mechanical modes, enabling low-power self-oscillations of the individual mechanical modes and simultaneous lasing upon modulation of the laser drive at their frequency difference. The latter is shown to be controlled, starting from the nominally degenerate case, by slightly breaking a structural symmetry, which does not significantly degrade the aforementioned properties over the 120 MHz passive tuning range. By performing a wavelength- and power-dependent spectroscopic analysis of a structure with nominally degenerate mechanical modes, we observe features of multimode optomechanical backaction. The combination of stochastic and deterministic deviations of the fabricated structure from the nominal design introduces a built-in frequency difference that we observe to be bounded from below at around 4 MHz, preventing stronger optically-induced hybridization and the exploration of exceptional points. We foresee that introducing thermal tuning elements on both sides of the waveguide [55], e.g., metallic bonding pads, will enable independent and low-cross-talk tuning of the mechanical frequencies, which will allow the proposed system to reach the degenerate case and explore the physics across the frequency-crossing. Given the investigated limitations imposed by the physical presence of the fiber loop on the optical and mechanical losses, we expect further improvement by incorporating butt-coupled [32] or side-coupled input/output waveguides [56].
Another avenue required to leverage the improved heat dissipation of the 2D OMCs is the passivation, via termination chemistry [57] or encapsulation layers [58], of the slot-sidewall defect states that lead to the thermo-optical bistability we observe, which generally prevents us from performing red-detuned optomechanical cooling. In addition, the prospect of self-assembling gaps well below the limits of top-down nanofabrication [59] may allow reaching \(g_{o}\) rates about four times larger than the ones we already report. Finally, the strong transduction of Fabry-Perot mechanical modes we observe on the mirror-terminated waveguides and the demonstrated slow-down of sound to \(\sim 700\) m/s find applications in injection-locked optomechanical oscillators [60], on-chip phonon networks for information processing [51], or phononic quantum memories [61]. Conversely, the sensitivity of slow sound to fabrication imperfections [62] and large ensemble measurements on mirror-terminated waveguides may allow the observation of spectral features resulting from Anderson localization of hypersonic acoustic waves [63].

###### Acknowledgements.
The authors acknowledge Karl Pelka for useful discussion. G.M. and C.M.S.T. acknowledge the support from the project LEIT funded by the European Research Council, H2020 Grant Agreement No. 885689. ICN2 is supported by the Severo Ochoa program from the Spanish Research Agency (AEI, Grant No. SEV-2017-0706) and by the CERCA Programme/Generalitat de Catalunya. M.A. and S.S. gratefully acknowledge funding from the Villum Foundation Young Investigator Program (Grant No. 13170). S.S. additionally acknowledges funding from the Danish National Research Foundation (Grant No. DNRF147-NanoPhoton), the Innovation Fund Denmark (Grant No. 0175-00022-NEXUS), the Independent Research Fund Denmark (Grant No. 0135-00315-VAFL), and the European Research Council (Grant. No. 101045396 - SPOTLIGHT). G.A. acknowledges financial support from the European Union's Horizon 2021 research and innovation programme under the Marie Sklodowska-Curie Action (Grant No. 101067606 - TOPEX).

## Appendix A Two-dimensional optomechanical crystal waveguides: band structures

In the waveguides we explore, shown in Fig. 1(a) in the main text, we vary several geometrical parameters. The most relevant ones are the horizontal pitch, \(a_{x}\), and the radius of the circular hole in one of the membrane sides, \(R_{2}\). The rest of the parameters are fixed, such as the vertical pitch, \(a_{y}\) = 484 nm, or the different parameters defining the shamrock-shaped holes. We reconstruct the underlying band structures of the relevant waveguide unit cells (both for the waveguides themselves and for the heterostructure cavities) by extracting the contour of the fixed features from analysis of scanning electron microscopy (SEM) images. Figures 5(a) and (b) show a high-magnification SEM image of a shamrock-shaped hole and a circular hole along with the outline (in green) extracted after averaging that of many holes of the same shape. The geometry of the shamrock is kept as extracted, while the circle is found to have a radius \(R_{1,\text{fab}}\) = 196.5 nm. This evidences a slight overgrowth of the exposed and etched areas compared to the lithography mask (\(R_{1,\text{mask}}\) = 194 nm). The fabricated samples also include varying slot widths, \(s\), but we mainly focus here on MOM devices with extracted \(s\) of 50 nm.
Using the extracted geometric features and the centroid positions given by the triangular lattice parameters (\(a_{x}\) and \(a_{y}\)), we calculate the dispersion diagrams using a commercial finite-element solver (COMSOL Multiphysics). Note that for the mechanics, we consider the anisotropy of the silicon stiffness tensor and its particular orientation with respect to the waveguide coordinate system [38, 47]. Figures 5(c) and (d) show the optical and mechanical band diagrams for the case where \(R_{1}=R_{2}\), \(a_{y}\) = 484 nm and varying \(a_{x}\). We see that, as expected, the frequencies of the optical and mechanical bands of interest decrease as \(a_{x}\) grows, which we use to build a rectangular confinement potential in the mirror-terminated long waveguides or a smooth potential well in the mode-gap adiabatic heterostructure cavities. We also provide in Fig. 5(e) the unit cell vacuum optomechanical coupling rate, \(g_{\text{o,cell}}/2\pi\), between the Bloch modes at \(q=0\) and \(k=\pi/a_{x}\), respectively, for the varying \(a_{x}\). We observe nearly no variation of \(g_{\text{o,cell}}/2\pi\), as a moving boundary contribution dominates it. The latter is mainly determined by the slot width, \(s\), which is kept fixed. Figures 5(f) and (g) depict the effect of reducing \(R_{2}\) on the optical and mechanical band structure for the case \(a_{x}=a_{y}=484\) nm. We see that reducing \(R_{2}\) relative to \(R_{1}\) lowers the air-filling fraction and thus raises the effective refractive index of the mode, leading to a redshift of the optical band. However, we note that the band edge of the mode is always well within the telecom C-band. In the case of the mechanical band structures, a non-zero \(\Delta R=R_{1}-R_{2}\) breaks the \(y\)-symmetry and lifts the degeneracy between the frequencies of the modes on the two membrane sides. While the bands on one side are independent of \(R_{2}\), the frequency of the mechanical mode on the other membrane side increases as \(R_{2}\) is reduced. In the following section, we see how this effect is used in the case of the MOM system to control the frequency difference between the two cavity phonon modes.

Figure 5: **Optical and mechanical dispersion diagrams of the fabricated structures.** (a) Scanning electron microscopy (SEM) image of a representative shamrock-shaped hole and extracted average outline (green line). (b) Same as (a) for a circular hole. (c) Optical and (d) mechanical band structures for the waveguides with \(R_{1}=R_{2}\), \(a_{y}\) = 484 nm and varying \(a_{x}\). (e) The coupling rate \(g_{\text{o,cell}}\) between the Bloch modes at \(q=0\) and \(k=\pi/a_{x}\), respectively, for the varying \(a_{x}\). (f-h) Same as (c-e) for waveguides with \(a_{x}=a_{y}=484\) nm, \(R_{1}\) = 196.5 nm and varying \(R_{2}\).

## Appendix B Two-dimensional optomechanical crystal cavities: cavity details

The mode-gap adiabatic heterostructure cavities are built by leveraging the band-structure properties depicted in Figs. 5(c) and (d).
The horizontal pitch of the waveguide, \(a_{x}\), is changed adiabatically from \(a_{1}=484\) nm in the central unit cell to \(a_{N_{c}}=510\) nm at the edges of the defect region, which comprises \(N_{c}=8\) unit cells, following a cubic interpolation
\[a_{i}=\left\lfloor a_{N_{c}}\left(1-\left(1-\frac{a_{1}}{a_{N_{c}}}\right)\left(2\left(\frac{i}{N_{c}}\right)^{3}-3\left(\frac{i}{N_{c}}\right)^{2}+1\right)\right)\right\rfloor\]
where \(i\in[0,N_{c}]\) and the pitch \(a_{i}\) is rounded to integer values to conform with requirements for the lithographic mask/process. Such adiabatic tapering of \(a_{x}\) is fixed for all the cavities we explore. Beyond the defect region, two mirror sections composed of \(N_{\text{m}}\) unit cells with pitch \(a_{N_{c}}\) are included. To investigate the intrinsic frequencies and losses of both the optical and mechanical cavity modes, we explore the behaviour of the different cavity parameters as a function of \(N_{\text{m}}\) (Fig. 6). Note that the simulated cavities use the geometry of the electron-beam lithography mask instead of the outlines of the fabricated holes, as in Fig. 5, because the smaller mesh elements required to capture the SEM-extracted contours would otherwise lead to prohibitive computational memory requirements. We see that nearly all cavity-optomechanical figures of merit except for the radiation-limited mechanical quality factor, \(Q_{\text{m}}\), are converged --with some numerical fluctuations but no trend-- for \(N_{\text{m}}>10\). For the saturation of \(Q_{\text{m}}\), 32 mirror unit cells are necessary. This large number is linked to the low effective mass associated with the dispersion at the band edge, which results in a low mirror strength [64]. In addition, we observe that the value of \(Q_{\text{m}}\) saturates, which implies that the size of the phononic-crystal cladding transverse to the waveguide axis is the ultimate limiting factor in our simulation setting. Our fabricated cavities use a cladding twice as wide as the simulations; therefore, the radiation-limited \(Q_{\text{m}}\), i.e., at very low temperatures, can be expected to be much larger. Similar to Fig. 6, we investigate the role of varying \(R_{2}\) on the MOM system. In this case, the two mechanical modes are not degenerate anymore. The resulting cavity-optomechanical figures of merit for the case \(N_{\text{m}}\) = 16 are shown in Fig. 7. We observe the following:

1. the coupling rates, \(g_{\text{o},1}\) and \(g_{\text{o},2}\), which only coincide for the case \(R_{1}=R_{2}=194\) nm, depend very weakly on the value of \(R_{2}\);
2. the resonant wavelength \(\lambda_{\text{o}}\) redshifts by 1.63 nm per 1 nm decrease in \(R_{2}\);
3. the optical \(Q\) is not clearly affected by \(R_{2}\);
4. the mechanical frequency \(\Omega_{\text{m},2}/2\pi\) increases by 10.19 MHz per 1 nm decrease in \(R_{2}\);
5. the mechanical quality factor grows steadily as \(R_{2}\) increases.

In connection with the discussion of the dependence of \(Q_{\text{m}}\) on \(N_{\text{m}}\), we attribute the behaviour with \(R_{2}\) to the increase of the effective mass as \(R_{2}\) decreases (see Fig. 5(g)).

## Appendix C Group velocity reconstruction

In Fig. 1(d) in the main text, we compare the simulated group velocity, \(v_{\text{g}}\), of the mechanical mode of interest with that extracted via the free spectral range (FSR) of the transduced peaks in a waveguide of length \(L=350a_{x}\).
Although a single radiofrequency (RF) spectrum is provided in the main text, the reconstruction uses more RF spectra--albeit with poorer transduction amplitudes--to either confirm the modes observed in the spectrum given in the main text or to detect the presence of additional peaks that are difficult to identify in that single RF spectrum. Figure 8 reproduces the spectra used and also evidences that in the region where the acoustic waveguide is multimoded, the peaks cannot be easily resolved, and there is no well-defined FSR [51]. The persistent presence of the peak at the highest frequency might indicate that it is a tightly-localized acoustic mode, i.e., an Anderson-localized acoustic mode, but confirming such a hypothesis is beyond the scope of this work.

Figure 6: **Cavity-optomechanical figures of merit as a function of the number of mirror unit cells.** Vacuum optomechanical coupling rates, \(g_{\text{o}}\), optical wavelength, \(\lambda_{\text{o}}\), optical quality factor, \(Q_{\text{o}}\), mechanical frequencies, \(\Omega_{\text{m}}\), and mechanical quality factors, \(Q_{\text{m}}\), as a function of \(N_{m}\). The mechanical and optomechanical figures of merit are the same for both degenerate mechanical modes.

Figure 7: **Cavity-optomechanical figures of merit as a function of the radius of the circular holes in the bottom membrane, \(R_{2}\).** Vacuum optomechanical coupling rates, \(g_{\text{o}}\), optical wavelength, \(\lambda_{\text{o}}\), optical quality factor, \(Q_{\text{o}}\), mechanical frequencies, \(\Omega_{\text{m}}\), and mechanical quality factors, \(Q_{\text{m}}\), as a function of \(R_{2}\).

## Appendix D Measurement of the vacuum optomechanical coupling rate

We measure the vacuum optomechanical coupling rate, \(g_{0}\), of the two mechanical modes of the MOM system following the method of Gorodetsky _et al._ [65]. The thermally excited displacement of the two mechanical modes is probed. Their power spectral density is compared to that transduced by the optical cavity for a phase-modulation tone induced by an electro-optic modulator (EOM) driven by a vector network analyzer (VNA) at \(\omega_{\mathrm{VNA}}=7.01\,\mathrm{GHz}\) and fixed power \(-10\,\mathrm{dBm}\). This power corresponds to a modulation voltage \(V_{\mathrm{VNA}}=0.0707\,\mathrm{V}\) relative to the voltage required for a full phase-shift, i.e., \(V_{\pi}(7\,\mathrm{GHz})=5.29\,\mathrm{V}\). The corresponding modulation depth is \(d=\pi\frac{V_{\mathrm{VNA}}}{V_{\pi}}\). The two vacuum optomechanical coupling rates, \(g_{o,1}\) and \(g_{o,2}\), are evaluated as [65]
\[\frac{g_{o,i}}{2\pi}=\frac{d\times\omega_{\mathrm{VNA}}}{4\sqrt{n}}\sqrt{\frac{A(\Omega_{\mathrm{m,i}})\Gamma_{\mathrm{m,i}}}{A(\omega_{\mathrm{VNA}})\gamma_{\mathrm{mod}}}}. \tag{1}\]
Here, \(A(\Omega_{\mathrm{m,i}})\) and \(A(\omega_{\mathrm{VNA}})\) are the measured amplitudes of the power spectral density at the mechanical peak and at the modulation peak, respectively. \(\Gamma_{\mathrm{m,i}}\) and \(\gamma_{\mathrm{mod}}\) are the mechanical and modulation peak widths, respectively provided by the Lorentzian and Gaussian fits. \(n=k_{\mathrm{B}}T/(h\Omega_{i})\) is the phonon thermal occupancy, with \(k_{\mathrm{B}}\), \(T\) and \(h\) the Boltzmann constant, the temperature and Planck's constant, respectively. Figure 9 shows the measured power spectral density, as well as two independent Lorentzian fits to the mechanical modes and a Gaussian fit to the VNA tone for a MOM system with \(R_{2}=192\) nm.
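Eq. (1) transcribes directly into a small helper function; the numbers below (fit amplitudes, widths, temperature) are assumed placeholders rather than the values of Table 1, and the thermal occupancy is computed as \(k_{\mathrm{B}}T/(h f_{\mathrm{m}})\) with \(f_{\mathrm{m}}\) the mechanical frequency in Hz.

```python
import numpy as np

kB, h = 1.380649e-23, 6.62607015e-34    # J/K, J*s

def g0_from_calibration(d, f_vna, A_mech, Gamma_mech, A_mod, gamma_mod,
                        f_mech, T=300.0):
    """g0/2pi (Hz) from the phase-modulation calibration of Eq. (1).

    All frequencies and linewidths are ordinary frequencies in Hz; the
    phonon occupancy is n = kB*T / (h*f_mech).
    """
    n = kB * T / (h * f_mech)
    return (d * f_vna / (4 * np.sqrt(n))) * np.sqrt(
        (A_mech * Gamma_mech) / (A_mod * gamma_mod))

d = np.pi * 0.0707 / 5.29               # modulation depth from V_VNA / V_pi
# Illustrative fit outputs (assumed), not the measured values:
g0 = g0_from_calibration(d=d, f_vna=7.01e9, A_mech=1.0, Gamma_mech=3.2e6,
                         A_mod=50.0, gamma_mod=1.0e5, f_mech=7.06e9)
print(f"g0/2pi = {g0/1e6:.2f} MHz")
```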
Table 1 summarizes the results obtained from the fits and the evaluation of Eq. (1).

Figure 8: **Radiofrequency spectra measured using different Anderson-localized modes on a waveguide made of 350 unit cells.** For reference, the simulated group velocity, \(v_{\mathrm{g}}\), shifted down by 90 MHz, is included as the top panel.

Figure 9: **Measurement of the vacuum optomechanical coupling, g\({}_{0}\).** The green lines represent Gaussian (Lorentzian) fits to the VNA (opto-mechanical) mode(s). The inset shows a magnification of the phase-modulation peak and the corresponding fit to a Gaussian.

## Appendix E Effective laser detuning in a thermo-optic cavity

Due to the thermo-optic nonlinearity, the laser wavelength \(\lambda\) does not straightforwardly control the laser detuning to the cavity, \(\Delta\). In the linear case, the optical cavity can be modelled using coupled mode theory, and the transmission through the fiber loop is given by
\[T=\frac{\Delta^{2}+\kappa_{i}^{2}/4}{\Delta^{2}+\kappa_{t}^{2}/4} \tag{10}\]
with \(\kappa_{i}\) and \(\kappa_{e}\) the internal and external decay rates and \(\kappa_{t}=\kappa_{i}+\kappa_{e}\). The laser-cavity detuning, \(\Delta\), is given by \(\Delta=\omega-\omega_{c}\), with \(\omega\) the laser frequency and \(\omega_{c}\) the cavity resonance frequency. In the presence of a thermo-optic nonlinearity, \(\beta\), the cavity resonance is power-dependent and reads \(\omega_{c}=\omega_{0}+\beta E\), with \(E\) the intracavity energy and \(\omega_{0}\) the cavity natural frequency (at zero input power). The value of \(\beta\) depends on several optical and thermal properties of the host material and is also affected by the optical field profile [66]. The independent extraction of such parameters is cumbersome and subject to numerous sources of uncertainty. On the contrary, it is possible to evaluate the detuning from the measured transmission \(T\) using Eq. 10,
\[|\Delta|=\sqrt{\frac{1}{4}\frac{\kappa_{t}^{2}T-\kappa_{i}^{2}}{1-T}} \tag{11}\]
This enables us to obtain the detuning as a function of, e.g., the laser wavelength \(\lambda\) and establish a correspondence between the control parameter (\(\lambda\)) and the physically relevant quantity (\(\Delta\)). For several measurements of the transmission spectrum \(T(\lambda)\) at increasing laser power (see Fig. 10, top), we show the corresponding laser detuning \(\Delta\) deduced from Eq. 11 (bottom). The data are plotted as a function of \(\lambda-\lambda_{0}\), with \(\lambda_{0}=2\pi c/\omega_{0}\) the natural cavity resonance wavelength evaluated to be \(\lambda_{0}\sim 1559.35\) nm. In the absence of thermo-optic nonlinearity, the detuning would scale linearly with the laser wavelength in the case of the nearly-resonant excitations used here: \(\Delta\approx\frac{2\pi c(\lambda_{0}-\lambda)}{\lambda_{0}^{2}}\). Instead, we observe with this representation how \(\Delta\) scales asymptotically with \(\lambda\). We fit the data with a power law \(\Delta(\lambda)=A+B(\lambda-\lambda_{0})^{C}\) (green lines). The fits are rather satisfactory, which justifies the use of semi-logarithmic scales in Fig. 3(c) of the main text for comparing the experiment to theory. This method assumes that the cavity decay rates remain unchanged with the input power, which remains an approximation due to the emergence of two-photon absorption at higher power that tends to increase the intrinsic loss rate \(\kappa_{i}\).
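Eq. (11) is a one-line computation once \(\kappa_{i}\) and \(\kappa_{t}\) are known; the sketch below applies it to a synthetic transmission trace (generated from Eq. (10) itself) and then fits the stated power law, so all numbers are assumed for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

kappa_i, kappa_e = 3.5, 0.76            # GHz, values quoted in the main text
kappa_t = kappa_i + kappa_e

def detuning_from_T(T):
    """|Delta| in GHz from the normalized transmission, per Eq. (11)."""
    return np.sqrt(0.25 * (kappa_t**2 * T - kappa_i**2) / (1.0 - T))

# Synthetic blue-detuned branch: wavelength offsets (nm) and transmissions
# consistent with Eq. (10) and an assumed power-law detuning behaviour.
dlam = np.linspace(-0.30, -0.01, 40)                 # lambda - lambda_0
delta_true = 12.0 * (-dlam) ** 0.8                   # assumed, GHz
T = (delta_true**2 + kappa_i**2 / 4) / (delta_true**2 + kappa_t**2 / 4)

delta = detuning_from_T(T)

def powerlaw(x, A, B, C):
    return A + B * x**C

popt, _ = curve_fit(powerlaw, -dlam, delta, p0=[0.0, 10.0, 1.0])
print("fitted (A, B, C):", popt)
```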
However, we note an irregularity observed in the transmission curves that becomes stronger when the power is increased. This "kink" typically indicates optomechanical self-oscillations, which occur when the laser reaches the blue sideband at \(\Delta=+\overline{\Omega}\), where \(\overline{\Omega}=\frac{1}{2}(\Omega_{1}+\Omega_{2})\). Here, it appears in agreement with the determined laser detuning, as shown with the dashed lines at \(\Delta/\overline{\Omega}\approx+1\), where we use \(\overline{\Omega}/2\pi=7.060\) GHz. It is also in agreement with the measurements presented in Fig. 3(d-e) of the manuscript, where at least one mechanical mode becomes very close to its lasing threshold \(\Gamma_{i}=0\) for input powers above 131 \(\mu\)W.

Figure 10: **Extraction of the laser-cavity detuning as a function of the laser wavelength.** For increasing optical power launched to the cavity, we measure the normalized transmission (top) from which we infer the laser detuning \(\Delta\) as a function of the laser wavelength \(\lambda\). Blue-sideband excitation occurs at \(\Delta=+\overline{\Omega}\), as indicated with the dashed lines. The evaluated detuning is fitted with a power law (green lines).

## Appendix F Linearized optomechanical susceptibility

We consider a Mechanical-Optical-Mechanical (MOM) configuration, i.e., two mechanically independent mechanical modes with displacements \(x_{1}\) and \(x_{2}\), coupled to a single optical mode with amplitude \(a\) via their respective optomechanical coupling rates \(g_{\text{o},1}\) and \(g_{\text{o},2}\). The classical equations of motion of this system are given by
\[\begin{aligned}\dot{a}&=\Big(j\big(\Delta+g_{\text{o},1}x_{1}+g_{\text{o},2}x_{2}\big)-\frac{\kappa_{t}}{2}\Big)a+\sqrt{\frac{\kappa_{e}}{2}}\,s_{\text{in}}\\ \ddot{x}_{1}&=-\Gamma_{\text{m},1}\dot{x}_{1}-\Omega_{\text{m},1}^{2}x_{1}+2\Omega_{\text{m},1}g_{\text{o},1}|a|^{2}\\ \ddot{x}_{2}&=-\Gamma_{\text{m},2}\dot{x}_{2}-\Omega_{\text{m},2}^{2}x_{2}+2\Omega_{\text{m},2}g_{\text{o},2}|a|^{2}\end{aligned} \tag{12}\]
where \(\Delta=\omega_{\ell}-\omega_{0}\) is the detuning between the laser frequency \(\omega_{\ell}=2\pi c/\lambda\) and the cavity resonance frequency \(\omega_{0}\), and \(s_{\text{in}}\) is the input laser field amplitude, such that \(P_{\text{in}}=\hbar\omega_{\ell}|s_{\text{in}}|^{2}\) is the laser incident power and \(\overline{n}_{\text{cav}}=|a|^{2}\) the cavity photon number. Note that the displacements are normalized by the respective zero-point fluctuations of the mechanical modes such that the mechanical masses can be eliminated. Each mechanical resonator has a natural mechanical susceptibility \(\chi_{\text{m},\text{i}}^{-1}(\omega)=\big(\Omega_{\text{m},\text{i}}^{2}-\omega^{2}\big)-j\omega\Gamma_{\text{m},\text{i}}\), which is perturbed by radiation pressure forces. The perturbation is described by a self-coupling term [1] \(\Sigma_{i}(\omega)=2\Omega_{\text{m},\text{i}}g_{\text{o},\text{i}}^{2}\beta(\omega)\), where \(\beta(\omega)=\frac{\overline{n}_{\text{cav}}}{(\Delta+\omega)+j\kappa_{t}/2}+\frac{\overline{n}_{\text{cav}}}{(\Delta-\omega)-j\kappa_{t}/2}\). The presence of a second mechanical mode leads to a deviation in the effective susceptibility of each mechanical oscillator. Applying a linearization to Eq. 12, we obtain:
\[\begin{aligned}\big[\chi_{\text{eff},1}^{-1}(\omega)+\Sigma_{1}(\omega)\big]x_{1}&=2\Omega_{\text{m},1}g_{\text{o},1}g_{\text{o},2}\beta(\omega)x_{2}\\ \big[\chi_{\text{eff},2}^{-1}(\omega)+\Sigma_{2}(\omega)\big]x_{2}&=2\Omega_{\text{m},2}g_{\text{o},1}g_{\text{o},2}\beta(\omega)x_{1}\end{aligned} \tag{13}\]
This expression highlights that the coupling between the mechanical modes scales with \(g_{1}g_{2}\), which is expected since they do not interact directly but through the optical field. Note that taking \(\Gamma_{1}\approx\Gamma_{2}\) provides the Hamiltonian formulation presented in Ref. [13]. Solving Eq. 13 provides an expression for the effective mechanical coupling between the mechanical modes:
\[K_{i}(\omega)=-\frac{\Sigma_{i}(\omega)\Sigma_{j}(\omega)}{\chi_{\rm m,j}^{-1}(\omega)+\Sigma_{j}(\omega)} \tag{14}\]
Therefore, we can write the total mechanical susceptibility of mode \(i\) as follows:
\[\chi_{\rm eff,i}^{-1}(\omega)=\chi_{\rm m,i}^{-1}(\omega)+\Sigma_{i}(\omega)+K_{i}(\omega) \tag{15}\]
From Eq. 15, we evaluate the effective mechanical frequencies and damping rates using \(\Omega_{\rm i}=\Omega_{\rm m,i}+\mathrm{Re}[\Sigma_{i}(\omega)+K_{i}(\omega)]/(2\Omega_{i})\) and \(\Gamma_{\rm i}=-\mathrm{Im}[\chi_{\rm eff,i}^{-1}]/\Omega_{i}\), respectively. These are the theoretical quantities plotted as a function of \(\lambda\) and \(P_{\rm in}\) in Fig. 3 in the main text.

In Fig. 11, we compare the theoretical evolutions of \(\Omega_{1}\), \(\Omega_{2}\), \(\Gamma_{1}\), and \(\Gamma_{2}\) as a function of the detuning (\(\delta\lambda\), expressed in terms of wavelength). We compare the situation where the mechanical oscillators do not interact through the optical mode (i.e., by imposing \(K_{i}=0\)) with the MOM configuration (with \(K_{i}\) given by Eq. 14). We note a significant change in the evolution of the linewidths. In particular, accounting for the coupling enables the recovery of two features: 1) the mode linewidths reach a minimum at different spectral locations (indicated by dashed lines), which is not the case when considering single optomechanical cavities, and 2) the decrease of \(\Gamma_{1}\) is more pronounced, enabling the mode to pass its lasing threshold (\(\Gamma_{1}=0\)) at the power \(P_{\rm in}=219\ \mu\)W, while the linewidth of the other mode reduces as much, but within a shorter spectral range. In summary, in the absence of coupling the frequency and linewidth of modes 1 and 2 evolve identically, up to a scaling factor and an offset. However, accounting for the full MOM mechanical susceptibility reveals that the modes follow different spectral evolutions (i.e., along the \(x\)-axis).

## Appendix G Stimulated multimode lasing: methodology

Here we provide more details on the theoretical curves included in Figure 4(c) in the main text. We present, first, the methodology used to determine the lasing threshold from the measurement of the mechanical peak amplitudes as a function of the modulation depth, \(d\), and frequency, \(\omega_{\rm VNA}\). Secondly, we detail how the extracted thresholds are fitted using the laser detuning \(\Delta\) as a fitting parameter and the theory described in Ref. [41]. The amplitude modulation of the input laser leads to sidebands of the mechanical modes in the radiofrequency spectrum [67]. When \(\delta\omega\equiv\omega_{\rm VNA}-(\Omega_{2}-\Omega_{1})<\Gamma_{1,2}\), a sideband overlaps with the mechanical modes. This situation complicates the fitting of the mechanical modes with a Lorentzian lineshape, and the mechanical linewidth cannot be accurately determined.
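Before turning to the threshold determination, a self-contained numerical transcription of the Appendix F expressions (Eqs. 14-15) may be useful; the parameter values follow the main text where quoted, while the intracavity photon number and detuning are assumed for illustration.

```python
import numpy as np

twopi = 2 * np.pi
Om = twopi * np.array([7.061e9 - 3.05e6, 7.061e9 + 3.05e6])  # Omega_m,1/2 (rad/s)
Gm = twopi * np.array([3.2e6, 3.2e6])                        # Gamma_m,1/2 (rad/s)
g = twopi * np.array([1.25e6, 1.5e6])                        # g_o,1/2 (rad/s)
kt = twopi * 4.26e9                                          # kappa_t = kappa_i + kappa_e
n_cav = 1e3                                                  # assumed photon number
Delta = Om.mean()                                            # assumed blue detuning

def beta(w):
    return (n_cav / ((Delta + w) + 1j * kt / 2)
            + n_cav / ((Delta - w) - 1j * kt / 2))

def Sigma(i, w):                       # self-coupling term of mode i
    return 2 * Om[i] * g[i] ** 2 * beta(w)

def chi_m_inv(i, w):                   # bare inverse mechanical susceptibility
    return (Om[i] ** 2 - w ** 2) - 1j * w * Gm[i]

def chi_eff_inv(i, w):                 # Eq. (15), with K_i from Eq. (14)
    j = 1 - i
    K = -Sigma(i, w) * Sigma(j, w) / (chi_m_inv(j, w) + Sigma(j, w))
    return chi_m_inv(i, w) + Sigma(i, w) + K

for i in (0, 1):
    w = Om[i]
    shift = np.real(chi_eff_inv(i, w) - chi_m_inv(i, w)) / (2 * Om[i])
    Gamma_eff = -np.imag(chi_eff_inv(i, w)) / Om[i]
    print(f"mode {i + 1}: dOmega/2pi = {shift / twopi / 1e3:.1f} kHz, "
          f"Gamma_eff/2pi = {Gamma_eff / twopi / 1e6:.3f} MHz")
```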
In order to determine the lasing thresholds, given by \(\Gamma_{i}=0\) by definition, we use the fact that the peak amplitude grows quickly at the threshold. For each map presented in Figure 4(c) in the main text, i.e., for each mechanical mode, we plot the statistics of the peak amplitude within the full map (see Fig. 12(a)). The statistics display a minimum (dashed lines) that we interpret as the value of the oscillator amplitude that indicates the threshold. For each mode, we then determine the positions in the parameter space, \(\{\omega_{\rm VNA}^{\rm th},d^{\rm th}\}\), where the amplitude passes this threshold. This provides the data points in Fig. 12(b). Now, we theoretically determine the threshold condition in the same parameter space. For a given mode \(i\), it corresponds to the ensemble of pairs \(\{\omega_{\rm VNA},d\}\) fulfilling the condition \(\Gamma_{i}=0\). The model in Ref. [41] leads to
\[\Gamma_{i}=-\frac{1}{\Omega_{i}}\mathrm{Im}\Big[\chi_{\rm m,i}^{-1}+\Sigma_{i}(\Omega_{\rm m,i})-\frac{2\Omega_{i}\sigma^{2}(\Omega_{i})}{\delta\omega+j\frac{\Gamma_{\rm m,i}}{2}+\sigma^{\prime}(\Omega_{i})}\Big]\]
with
\[\begin{aligned}\sigma(\omega)&=g_{\rm o,1}g_{\rm o,2}\overline{n}_{\rm cav}\,d\,\beta(\omega)\\ \sigma^{\prime}(\omega)&=g_{\rm o,1}g_{\rm o,2}\overline{n}_{\rm cav}\,d^{2}\beta(\omega)\end{aligned}\]
and where \(\chi_{\rm m,i}\), \(\Sigma_{i}(\omega)\) and \(\beta(\omega)\) are defined in the previous section. The fit is performed using \(\Delta\) as the only fitting parameter, as it can be noted that both \(\beta(\omega)\) and \(\overline{n}_{\rm cav}\) depend on \(\Delta\). The thresholds for modes 1 and 2 are fitted at once (see red lines in Fig. 12(b)) and provide \(\Delta=1.024\times\overline{\Omega}\) using the mean mechanical frequency \(\overline{\Omega}=6.984\) GHz. The residual is shown in the inset.

Figure 11: **The role of the optically-induced mechanical coupling in the observed dynamical back-action effects.** Comparison of the theory curves presented in Fig. 3 of the main text when accounting for a MOM with effective coupling (right) compared to a MOM with no optically-mediated coupling between the two mechanical modes (left). The mechanical frequencies and linewidths are shown for different input laser powers, indicated in units of \(\mu\)W.

Figure 12: **Lasing threshold determination and fitting.** a. Distribution of the signal-to-noise ratio of mechanical modes 1 (left) and 2 (right) over the colormaps presented in Fig. 4 of the manuscript. The minimum of each distribution is interpreted as the lasing threshold amplitude. b. Lasing threshold of each mode (1: left and 2: right) plotted in the parameter space (black dots) and simultaneously fitted with the theoretical model from Ref. [41] (red lines). The residuals (inset) yield a minimum at \(\Delta/\overline{\Omega}=1.024\).
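As a closing illustration, the threshold condition \(\Gamma_{i}=0\) can be scanned numerically; the sketch below evaluates the modulated damping rate of mode 1 on resonance with the intermodal frequency. The intracavity photon number, the overall normalization of \(\beta\) versus \(\overline{n}_{\rm cav}\) (taken verbatim from the formulas above), and the resulting absolute scale are assumptions, so the output is qualitative only.

```python
import numpy as np

twopi = 2 * np.pi
Om = twopi * np.array([6.965e9, 7.003e9])   # Omega_1, Omega_2 (rad/s)
Gm = twopi * np.array([2.417e6, 1.234e6])   # Gamma_1, Gamma_2, modulation off
g = twopi * np.array([1.5e6, 1.5e6])        # g_o,1 = g_o,2 as used in the fit
kt = twopi * 2.738e9                        # kappa_i + kappa_e quoted above
n_cav = 1e3                                 # assumed intracavity photon number
Delta = 1.024 * Om.mean()                   # fitted laser detuning

def beta(w):
    return (n_cav / ((Delta + w) + 1j * kt / 2)
            + n_cav / ((Delta - w) - 1j * kt / 2))

def Gamma_mod(i, w_vna, d):
    """Damping rate of mode i under intensity modulation (formula above)."""
    dw = w_vna - (Om[1] - Om[0])
    chi_inv = -1j * Om[i] * Gm[i]           # chi_m,i^-1 evaluated on resonance
    Sigma_i = 2 * Om[i] * g[i] ** 2 * beta(Om[i])
    sigma = g[0] * g[1] * n_cav * d * beta(Om[i])
    sigma_p = g[0] * g[1] * n_cav * d ** 2 * beta(Om[i])
    term = 2 * Om[i] * sigma ** 2 / (dw + 1j * Gm[i] / 2 + sigma_p)
    return -np.imag(chi_inv + Sigma_i - term) / Om[i]

# Scan the modulation depth at the intermodal frequency and report the
# first d where Gamma_1 crosses zero (the stimulated-lasing threshold).
w_vna = Om[1] - Om[0]
d_grid = np.linspace(0.0, 0.5, 51)
crossing = next((d for d in d_grid if Gamma_mod(0, w_vna, d) <= 0), None)
print("mode-1 threshold d:", crossing, "(None: no crossing in scanned range)")
```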
2309.16275
UPB @ ACTI: Detecting Conspiracies using fine tuned Sentence Transformers
Conspiracy theories have become a prominent and concerning aspect of online discourse, posing challenges to information integrity and societal trust. As such, we address conspiracy theory detection as proposed by the ACTI @ EVALITA 2023 shared task. The combination of pre-trained sentence Transformer models and data augmentation techniques enabled us to secure first place in the final leaderboard of both sub-tasks. Our methodology attained F1 scores of 85.71% in the binary classification and 91.23% for the fine-grained conspiracy topic classification, surpassing other competing systems.
Andrei Paraschiv, Mihai Dascalu
2023-09-28T09:17:20Z
http://arxiv.org/abs/2309.16275v1
# UPB @ ACTI: Detecting Conspiracies using fine tuned Sentence Transformers ###### Abstract Conspiracy theories have become a prominent and concerning aspect of online discourse, posing challenges to information integrity and societal trust. As such, we address conspiracy theory detection as proposed by the ACTI @ EVALITA 2023 shared task. The combination of pre-trained sentence Transformer models and data augmentation techniques enabled us to secure first place in the final leaderboard of both sub-tasks. Our methodology attained F1 scores of 85.71% in the binary classification and 91.23% for the fine-grained conspiracy topic classification, surpassing other competing systems. ## 1 Introduction Conspiracy theories distort the shared understanding of reality and erode trust in crucial democratic institutions. By substituting reliable, evidence-based information with dubious, implausible, or blatantly false claims, these theories foster a climate of disagreement regarding facts and give undue weight to personal opinions and anecdotal evidence over established facts and scientifically validated theories. Aaronovitch (2010) defines conspiracy theories as 'the attribution of deliberate agency to something more likely to be accidental or unintended; therefore, it is the unnecessary assumption of conspiracy when other explanations are more probable.' Due to the rapid spread of information across the internet, coupled with the alarming speed at which false information can proliferate (Vosoughi et al., 2018), we find ourselves amidst what some have dubbed a "golden age" of conspiracy theories (Hanley et al., 2023). Being a distinct form of misinformation, conspiracy theories exhibit unique characteristics. Brotherton et al. (2013) identified five key attributes commonly found in modern conspiracy theories: government malfeasance, extraterrestrial cover-up, malevolent global conspiracies, personal well-being, and information control. While embracing conspiracy theories can give individuals a sense of reclaiming power or accessing hidden knowledge, these beliefs can sometimes have negative and dangerous consequences. One recent example is the violent insurrection on the US Capitol on 6 January 2021 driven by conspiracy theories surrounding QAnon and election fraud (Seitz, 2021). Additionally, these theories can serve as powerful tools in the hands of nefarious groups, politicians, or state actors who exploit susceptible communities, manipulating them into taking or endorsing actions that can result in significant and dramatic social repercussions (Audureau, 2023; Yablokov, 2022). Building upon the importance of addressing conspiracy theories, efforts have been made to research and develop automated methods for detecting conspiratorial content on various platforms and languages. For instance, as part of the EVALITA 2023 workshop, the organizers of the ACTI shared task introduced a novel approach: the automatic identification of conspiratorial content in Italian language Telegram messages. This initiative aimed to enhance our ability to quickly recognize and respond to conspiracy theories, enabling the promotion of critical thinking and media literacy by providing reliable sources and encouraging evidence-based discourse. Leveraging such advancements can effectively limit the influence of conspiracy theories while fostering a more informed and resilient society. This paper presents our contribution to the ACTI @ EVALITA 2023 shared task. 
We focused on employing the power of pretrained Italian language sentence Transformers. To further enhance the performance and address potential biases, we employed Large Language Models (LLMs) to augment the training data, resulting in a more balanced and comprehensive training set. This combination of leveraging pre-trained models and data augmentation techniques formed the foundation of our methodology, enabling us to achieve first place in the final leaderboard of both sub-tasks with F1 scores of 85.71% and 91.23%, respectively.

## 2 Related Work

Recently, online platforms have often banned--entirely deactivated--communities that breached their increasingly comprehensive guidelines. In 2020 alone, Reddit banned around 2,000 subreddits (the name a community receives on the platform) associated with hate speech. Similarly, Facebook banned 1,500 pages and groups related to the QAnon conspiracy theory Collins and Zadrozny (2020). While these decisions are met with enthusiasm [e.g., see Anti-Defamation League (2020)], the efficacy of "deplatforming" these online communities has been questioned Zuckerman and Rajendra-Nicolucci (2021); Russo et al. (2023). When mainstream platforms ban entire communities for their offensive rhetoric, users often migrate to alternative _fringe platforms_, sometimes created exclusively to host the banned community Dewey (2016); Russo et al. (2023). Banning, in that context, would not only strengthen the infrastructure hosting these fringe platforms Zuckerman and Rajendra-Nicolucci (2021) but allow these communities to become more toxic elsewhere Horta Ribeiro et al. (2021). In order to improve the efficacy of such moderation policies, identifying and tracking the propagation of problematic content like conspiracy theories is crucial. For example, the Zika virus outbreak in 2016, coupled with the influence of social networks and the declaration of a public health emergency by the WHO, showed the harm that the dissemination of conspiracy theories can generate Ghenai and Mejova (2017); Wood (2018). The COVID-19 pandemic had a profound impact, emphasizing the dangers associated with the proliferation of conspiracy theories. These theories encompassed a wide range of topics, including the virus's origin, its spread, the role of 5G networks, and the efficacy and safety of vaccines. With COVID-related lockdowns in place, people became more reliant on social networking platforms such as Twitter, Facebook, and Instagram, which increased their exposure to disinformation and conspiracy theories. MediaEval 2020 Pogorelov et al. (2020) focused on a 5G and COVID-19 conspiracy tweets dataset, proposing two shared tasks to address this issue. The first task involved detecting conspiracies based on textual information, while the second task focused on structure-based detection utilizing the retweet graph. Various systems were proposed to tackle these tasks, employing different approaches such as methods relying on Support Vector Machine (SVM) Moosleitner et al. (2020), BERT Malakhov et al. (2020), and GNN Paraschiv et al. (2021). In their study, Tyagi and Carley (2021) employed an SVM to classify the stance of Twitter users towards climate change conspiracies. Their findings revealed that individuals who expressed disbelief in climate change tend to share a significantly higher number of other types of conspiracy-related messages compared to those who believe in climate change. Furthermore, Amin et al.
(2022) manually labeled 598 Facebook comments as Covid-19 vaccine conspiracy or neutral and used a BERT-based model in conjunction with Google Perspective API to classify these messages, providing valuable insights into the prevalence of vaccine conspiracy theories on social media platforms. Tunstall et al. (2022) presented a new approach based on Sentence Transformers Reimers and Gurevych (2019), called SetFit, that focused on data-efficient fine-tuning of sentence embeddings, particularly for binary labels. The training of SetFit follows a two-step process. First, it fine-tunes the sentence embeddings in a contrastive manner. This step helps optimize the embeddings for the specific classification task. Subsequently, a classification head is trained using the finetuned sentence embeddings, enabling effective classification on the training labels. Their approach aimed to enhance the efficiency and performance of fine-tuning sentence embeddings in scenarios with limited data. The efficacy and power of Sentence-Transformers have been shown in multiple tasks spanning from text generation Amin-Nejad et al. (2020); Russo et al. (2020) to sentence classification tasks Hong et al. (2023); Piao (2021); Russo et al. (2022). These models capture the semantic and contextual information of sentences or paragraphs, enabling nuanced representations of textual data. Leveraging such models, Bates and Gurevych (2023) used SetFit to propose LAGONN, a hate speech and toxic messages classification framework for content moderation.

## 3 Method

### Task Description

The ACTI @ EVALITA 2023 organizers put forth two sub-tasks for participants to address. The first sub-task PeppeRusso (2023) involved binary classification, where participants were provided with a dataset consisting of 1,842 training samples and 460 test samples. The objective was to classify messages as either conspiratorial or non-conspiratorial. The second sub-task PeppeRusso (2023) focused on fine-grained conspiracy topic classification. Participants were required to classify messages into one of four specific conspiracy topic classes: Covid, QAnon, Flat-Earth, or Russia-conspiracy. A training set of 810 records was provided for this sub-task, while the evaluation test set contained 300 samples. Table 1 shows the class distribution for both sub-tasks. The macro F1 score was adopted as a criterion to evaluate the two sub-tasks. During the competition, 30% of the test dataset was immediately evaluated on the Public Leaderboard, giving participants an initial indication of their model's performance. However, the final evaluation was conducted on the remaining 70% of private entries. These final evaluation scores were then used to compile the Private Leaderboard made public after the conclusion of the competition.

### Sentence Transformer and Data Augmentation

We considered an Italian language Sentence Transformer model for our submissions and trained it contrastively with SetFit1 as described by Tunstall et al. (2022). Since the training dataset is highly imbalanced between the conspiratorial classes (see Table 1), we integrated a data augmentation step in our classification pipeline, as seen in Figure 1. Footnote 1: [https://github.com/huggingface/setfit](https://github.com/huggingface/setfit) In the data augmentation step, we used an LLM to create paraphrases for our training data using the prompt "riformulare questo testo: [_comment_text_]" ("rephrase this text") and different seeds to create variations of the answers.
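As a sketch of this augmentation step, the snippet below paraphrases an Italian comment with the publicly available mT5 paraphraser mentioned in the next paragraph; the generation arguments are plausible defaults rather than the authors' exact settings, and the model card should be checked in case the checkpoint expects a task prefix.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "aiknowyou/mt5-base-it-paraphraser"   # Italian paraphrase model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def paraphrase(text: str, n: int = 3) -> list[str]:
    """Return n sampled paraphrases of an Italian comment; sampling with a
    high temperature encourages diverse rewrites (t=0.9, as in the paper)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    outputs = model.generate(**inputs, do_sample=True, temperature=0.9,
                             num_return_sequences=n, max_new_tokens=128)
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

print(paraphrase("Il vaccino contiene microchip per controllarci."))
```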
In our experiments, we used "text-davinci-003" from the GPT-3 family2 and the mT5 model finetuned on Italian language paraphrases3. We set a high temperature (t=0.9) for the LLMs to ensure diverse text generation. The distribution for the augmented dataset is shown in Table 2. Footnote 2: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models) Footnote 3: [https://huggingface.co/aiknowyou/mt5-base-it-paraphraser](https://huggingface.co/aiknowyou/mt5-base-it-paraphraser) Footnote 4: [https://huggingface.co/models](https://huggingface.co/models) Sentence-Transformers are pretrained Transformer models finetuned in a Siamese network, such that semantically similar sentences or paragraphs are projected near each other in the embedding space; in contrast, the distance in the embedding space is maximized for sentence pairs that are different. In our experiments, we used several Italian pretrained Sentence Transformers from the Huggingface Hub4, as mentioned in Table 3. The first step in the SetFit training process involves generating positive and negative triplets. Positive triplets consist of sentences from the same class, while negative triplets contain sentences from different classes. The training data is expanded by including positive and negative triplets, providing a more comprehensive and diverse training set. The Sentence Transformer captures the contextual and semantic information of the messages, providing a powerful feature representation. In the second step, a fully connected classification head is trained on top of the Sentence-Transformer to distinguish between the available classes.

\begin{table} \begin{tabular}{|l|l|l|} \hline & **Classes** & **Count** \\ \hline Sub-Task A & Non Conspiratorial & 917 \\ & Conspiratorial & 925 \\ \hline Sub-Task B & Covid & 435 \\ & QAnon & 242 \\ & Flat-Earth & 76 \\ & Russian & 57 \\ \hline \end{tabular} \end{table} Table 1: ACTI Dataset distribution for the training sets on Sub-task A and B.

\begin{table} \begin{tabular}{|l|l|l|} \hline & **Classes** & **Count** \\ \hline Sub-Task A & Non Conspiratorial & 1,822 \\ & Conspiratorial & 2,524 \\ \hline Sub-Task B & Covid & 779 \\ & QAnon & 672 \\ & Flat-Earth & 362 \\ & Russian & 322 \\ \hline \end{tabular} \end{table} Table 2: Class distribution on the augmented training sets used for Sub-task A and B.

Figure 1: End-to-End training Pipeline.

## 4 Results

Besides experimenting with different pre-trained models, as shown in Table 3, we also performed grid search tuning with several key hyper-parameters, namely the number of iterations, the learning rate, and the number of epochs for training. The number of iterations determined the quantity of generated triplets during training. By adjusting this parameter, we controlled the training data's size, potentially influencing the model's ability to generalize and capture important patterns. We set the maximum sequence length for the tokenizer to 512 for all of our experiments. We withheld 20% of the training data to evaluate the performance of the trained models during development. The best-performing model differed between the sub-tasks. The best-performing model in the binary classification sub-task was based on "efederici/sentence-BERTino". This model was trained on the "text-davinci-003" augmented dataset for 1 epoch. We used 5 iterations and a learning rate of 1e-05.
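For reference, a minimal sketch of this fine-tuning recipe using the 2023-era `setfit` API is shown below; the toy dataset and its column names are placeholders, while the checkpoint and hyper-parameters follow the values just quoted.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Schematic training data; real runs use the ACTI training set plus the
# LLM paraphrases described above (1 = conspiratorial, 0 = not).
train_ds = Dataset.from_dict({
    "text": ["messaggio cospirativo ...", "messaggio neutro ..."],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("efederici/sentence-BERTino")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,
    num_iterations=5,        # controls how many contrastive pairs are generated
    num_epochs=1,
    learning_rate=1e-5,
)
trainer.train()

preds = model.predict(["il governo nasconde la verita sul vaccino"])
print(preds)
```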
In contrast, the larger "nickprock/sentence-bert-base-italian-xxl-uncased" model performed best for the fine-grained conspiracy topic classification sub-task. We trained this model on the same dataset for 1 epoch. The learning rate used was 1e-05, and the number of iterations was set to 10. This model yielded the best results in both Leaderboards (see Table 4). We conducted an ablation study after the competition ended to assess the impact of data augmentation. We trained the best-performing models under different conditions: a) using the original training data with 20% reserved for development evaluation, b) considering the entire original training data, and c) employing the dataset that was augmented with the mT5 paraphrasing LLM. The results in Tables 5 and 6 show the importance of the augmentation step. In the case of sub-task A, the additional data substantially influenced both the Public and Private test results. The augmented dataset led to significant improvements in performance. However, we see a decline in the Private Leaderboard for the fine-grained task results as the amount of data increased, despite the Public Leaderboard performance remaining the same. This performance decline could be attributed to an unusual distribution difference between the Public and Private test rows. Furthermore, the quality of the paraphrases used in the augmentation process played a crucial role in both sub-tasks. The poor performance achieved by the mT5 model suggests that the quality of the generated paraphrases has a notable impact on the overall model performance. Similarly, a drastic decrease in performance was observed for the second sub-task private leaderboard, suggesting the questionable quality of the paraphrases.

\begin{table} \begin{tabular}{|l|l|} \hline **Model** & **Embedding Size** \\ \hline efederici/sentence-BERTino & 768 \\ efederici/sentence-bert-base & 768 \\ efederici/sentence-BERTino-3-64 & 64 \\ efederici/mmarco-sentence-BERTino & 768 \\ efederici/sentence-it5-base & 512 \\ efederici/sentence-it5-small & 512 \\ nickprock/sentence-bert-base-italian-uncased & 768 \\ nickprock/sentence-bert-base-italian-xxl-uncased & 768 \\ aiknowyou/aiky-sentence-bertino & 768 \\ \hline \end{tabular} \end{table} Table 3: Sentence-Transformer models considered in our experiments.

\begin{table} \begin{tabular}{|l|l|l|} \hline & **Public Leaderboard** & **Private Leaderboard** \\ \hline Sub-Task A & 85.36\% & 85.71\% \\ \hline Sub-Task B & 87.62\% & 91.23\% \\ \hline \end{tabular} \end{table} Table 4: Best model performance on the Public and Private leaderboard.

\begin{table} \begin{tabular}{|l|l|l|} \hline & **Public Leaderboard** & **Private Leaderboard** \\ \hline No augmented data & 75.80\% & 81.29\% \\ \hline No augmented data with development set & 79.36\% & 83.83\% \\ \hline mT5 augmented Data & 78.15\% & 82.25\% \\ \hline \end{tabular} \end{table} Table 5: Ablation Study for Sub-Task A.

\begin{table} \begin{tabular}{|l|l|l|} \hline & **Public Leaderboard** & **Private Leaderboard** \\ \hline No augmented data & 83.32\% & 93.67\% \\ \hline No augmented data with development set & 83.39\% & 89.67\% \\ \hline mT5 augmented Data & 83.24\% & 87.07\% \\ \hline \end{tabular} \end{table} Table 6: Ablation Study for Sub-Task B.

## 5 Conclusion

In this paper, we described our approach addressing the two sub-tasks in the ACTI @ EVALITA 2023 competition.
The challenge focuses on automatically detecting conspiratorial Telegram messages and on classifying them into four conspiracy topics: Covid, QAnon, Flat-Earth, and Russian conspiracies. By applying text augmentation techniques and training Sentence Transformers with contrastive learning, we developed robust classifiers. Our best models achieved first place in the Private Leaderboard on both tasks with F1 scores of 85.712% in the binary classification and 91.225% for the fine-grained conspiracy topic classification. This paper contributes to the growing body of research on conspiracy theory detection and emphasizes the effectiveness of leveraging pre-trained models and data augmentation techniques. Our results demonstrate the potential of these approaches in addressing the challenges posed by conspiracy theories and their propagation on online platforms.

## Acknowledgement

This work was supported by a grant of the Ministry of Research, Innovation, and Digitalization, project CloudPrecis, Contract 344/390020/06.09.2021, MySMIS code: 124812, within POC.
2309.06066
From inhomogeneous random digraphs to random graphs with fixed arc counts
Consider a random graph model with $n$ vertices where each vertex has a vertex-type drawn from some discrete distribution. Suppose that the number of arcs to be placed between each pair of vertex-types is known, and that each arc is placed uniformly at random without replacement between one of the vertex-pairs with matching types. In this paper, we will show that under certain conditions this random graph model is equivalent to the well-studied inhomogeneous random digraph model. We will use this equivalence in three applications. First, we will apply the equivalence on some well known random graph models (the Erd\H{o}s-R\'enyi model, the stochastic block model, and the Chung-Lu model) to showcase what their equivalent counterparts with fixed arcs look like. Secondly, we will extend this equivalence to a practical model for inferring cell-cell interactions to showcase how theoretical knowledge about inhomogeneous random digraphs can be transferred to a modeling context. Thirdly, we will show how our model induces a natural fast algorithm to generate inhomogeneous random digraphs.
Mike van Santvoort, Pim van der Hoorn
2023-09-12T09:06:40Z
http://arxiv.org/abs/2309.06066v1
# From inhomogeneous random digraphs to random graphs with fixed arc counts

###### Abstract

Consider a random graph model with \(n\) vertices where each vertex has a vertex-type drawn from some discrete distribution. Suppose that the number of arcs to be placed between each pair of vertex-types is known, and that each arc is placed uniformly at random without replacement between one of the vertex-pairs with matching types. In this paper, we will show that under certain conditions this random graph model is equivalent to the well-studied inhomogeneous random digraph model. We will use this equivalence in three applications. First, we will apply the equivalence to some well-known random graph models (the Erdős–Rényi model, the stochastic block model, and the Chung-Lu model) to showcase what their equivalent counterparts with fixed arcs look like. Secondly, we will extend this equivalence to a practical model for inferring cell-cell interactions to showcase how theoretical knowledge about inhomogeneous random digraphs can be transferred to a modelling context. Thirdly, we will show how our model induces a natural fast algorithm to generate inhomogeneous random digraphs.

Keywords: Inhomogeneous random digraphs · Model equivalence · Cell-cell interaction model · Random graph algorithm

## 1 Introduction

Random graph models are becoming a more and more standardised tool to analyse real-world networks. Their usefulness shines brightest in situations where data about the real world is difficult to obtain, or entirely unavailable. In such situations, random graphs are used to mimic real-world networks, so hypotheses can still be explored and tested without the need for much data (see [9] for a review on modelling with random graphs). Moreover, random graphs are often defined through a process, making them generally efficient and simple to generate and analyse numerically (see e.g. [18]). Some examples of the wide range of applications include using the stochastic block model (see [17]) as a null-model to predict missing links in a graph (see [13]), or using configuration-like models (see [22]) to mimic and test the spread of a disease during an epidemic in [3].

One special random graph model that has been used extensively in both theoretical explorations and practical applications is the _inhomogeneous random graph model_. In their seminal paper [4] on the topic, Bollobás et al. managed to theoretically prove many relevant properties of the model. Subsequently, these theoretical properties could be exploited in practical applications of the model. For example, they could be used in [20, 13] to facilitate link prediction, or in [11] to benchmark community detection. Recently, theoretical knowledge on inhomogeneous random graphs has been extended to a directed version of the model (called _inhomogeneous random digraphs_) in [5], opening the door to more modelling opportunities.

While [5] provides results for many key properties of graphs, these cannot always be directly used in modelling efforts, since the models that are used in practice are often similar to inhomogeneous random digraphs, but not exactly equivalent. For example, in [23] a model is used to infer cell-cell interactions that is "close" to the model in [5], with the exception that it fixes the number of arcs beforehand and includes connection rules stipulating where arcs can go. Because the model in [23] deviates from the inhomogeneous random graph setting, results from inhomogeneous random digraphs cannot be readily applied to this model.
Therefore, the asymptotic behaviour of the resulting network remains an open question, meaning properties had to be studied numerically using costly Monte-Carlo simulations.

In this paper, we will bridge the gap between theoretical knowledge from inhomogeneous random digraphs and a class of random graph models that are used in practice. We call this "new" class of models _arc assigned random digraphs_. These arc assigned random digraphs differ from inhomogeneous random digraphs in the sense that their number of arcs is fixed, while the positions of these arcs are still assigned at random. In essence, the relation between inhomogeneous random digraphs and arc assigned random digraphs is a generalisation of the relation between the classical Erdős–Rényi model and the Gilbert model as outlined in [16] (Section 1.4). We will provide this generalisation as the main result of our paper (Theorem 2.5). The definitions of the models and the main result will be given in Section 2.

We illustrate our main result by showing equivalence between some well-known random graph models. Additionally, we will extend our main result to include the model in [23]. This will constitute an extra major result (Theorem 3.7) that shows that the main result can be extended to classes of random graph models that do not directly fall into the arc assigned random digraph class. Finally, we will show that the arc assigned random digraph model provides a natural algorithm to generate inhomogeneous random digraphs. This algorithm will be linear in the number of operations it has to execute, and will be conceptually simpler than the canonical linear algorithm to generate inhomogeneous random digraphs in [14]. These applications will be given in Section 3. The remainder of the paper is concerned with proving our results. In Section 4 we will give the heuristics and machinery behind the proofs of the main theorems. Once these have been outlined, we will also execute the proofs of the main theorems. The proofs of the technical lemmas needed along the way are deferred to Section 5.

## 2 Main results

As mentioned in the introduction, we will generalise the equivalence that exists between the Erdős–Rényi and Gilbert random graph models. In this generalisation, we will prove equivalence between two models that assign types to vertices. The difference between the two models will express itself in the way they assign arcs to vertices. In this section, we aim to formalise the two equivalent models and give the conditions under which equivalence holds. Finally, we will discuss these conditions and give examples that show how the equivalence fails when the assumptions are not met.

### 2.1 Inhomogeneous random digraphs

The first model in the equivalence is the _inhomogeneous random digraph model_. It constructs a graph \(G_{n}\) by first defining the vertex set \([n]:=\{1,2,\ldots,n\}\), and assigning each vertex \(v\in[n]\) a type \(T_{v}\). This type is an independent sample from some overarching distribution \(T\) that takes values in some set \(\mathcal{S}\). We call \(T\) the _type distribution_ of the model and \(\mathcal{S}\) the _type space_. After all vertices are assigned a type, the vertex set of \(G_{n}\) is completed, and the arc set can be generated. This is done by fixing two functions \(\kappa:\mathcal{S}\times\mathcal{S}\to\mathbb{R}_{0}^{+}\) and \(\varphi_{n}:\mathcal{S}\times\mathcal{S}\to\mathbb{R}_{0}^{+}\). We call \(\kappa\) the _kernel_ of the model, and \(\varphi_{n}\) the _perturbation function_ of the model.
For each pair of vertices \(v\) and \(w\) such that \(v\neq w\) the arc \((v,w)\) becomes part of the arc set with probability \[\left(\frac{\kappa(T_{v},T_{w})(1+\varphi_{n}(T_{v},T_{w}))}{n}\right)\wedge 1, \tag{1}\] independent from the other arcs. Here, we defined \(x\wedge y:=\min\{x,y\}\) and equivalently \(x\lor y:=\max\{x,y\}\). Once the arc set has been generated, the construction of \(G_{n}\) is completed. By defining the arc probabilities through (1), it is implicitly expected that the behaviour of the inhomogeneous random digraph model is captured through \(\kappa\) and \(T\) alone. It is generally assumed that \(\varphi_{n}\) converges to zero in some manner as the number of vertices tends to infinity (see e.g. [5] Assumption 3.1 or [4] Definition 2.9). Since it is notationally inconvenient to always explicitly separate the kernel and the perturbation function, we will write \(\kappa_{n}:=\kappa\cdot(1+\varphi_{n})\). In the same spirit, we will also abbreviate the inhomogeneous random digraph model by \(\mathtt{IRD}_{n}(T,\kappa_{n})\). This description of the inhomogeneous random digraph model is flexible, and admits much freedom in the choice of kernel, perturbation function and type distribution. Therefore, many classical random graph models fall within this framework (see Section 3.1 or Example 2.1 in [5]). In this paper we shall restrict ourselves to discrete type spaces.

**Assumption 2.1** (Discrete type spaces).: _The type distribution \(T\) takes values in some set \(\mathcal{S}\subseteq\mathbb{N}\) and satisfies \(\mathbb{E}[T^{\delta}]<\infty\) for some \(\delta>0\)._

Notation. We set \(q_{t}:=\mathbb{P}(T=t)\) and we denote by \(N_{t}\) the number of vertices with type \(t\in\mathcal{S}\).

**Remark**.: In principle, Assumption 2.1 is stated slightly restrictively for convenience. Practically, it only stipulates that the type distribution is discrete. In case the type distribution takes values in a countable set unequal to the natural numbers, then we can still relabel the types in such a way that Assumption 2.1 is satisfied.

### 2.2 Arc assigned random digraphs

In \(\mathtt{IRD}_{n}(T,\kappa_{n})\) it is determined for each vertex-pair separately whether an arc is drawn between them. For the second model in the equivalence, which we will call the _arc assigned random digraph model_, this arc placement procedure changes. To construct a graph \(G_{n}\) we still initially define the vertex set \([n]\), and assign each vertex \(v\in[n]\) a type \(T_{v}\) that is drawn independently from some type distribution \(T\) that takes values in \(\mathcal{S}\). However, now the number of arcs to be placed between two vertex-types is fixed. To make this formal, we define a function \(\Lambda_{n}:\mathcal{S}\times\mathcal{S}\to\mathbb{N}\). We call \(\Lambda_{n}\) the _arc-count function_. The value of \(\Lambda_{n}(t,s)\) for a tuple of vertex-_types_ \((t,s)\in\mathcal{S}\times\mathcal{S}\) encodes the number of arcs that will be placed in \(G_{n}\) from a vertex with type \(t\) to a vertex with type \(s\). The assignment of arcs to vertex pairs takes place after vertices have been assigned a type. Given \(\Lambda_{n}\), the arc assignment procedure is executed in the following steps:

1. Fix a tuple of vertex-types \((t,s)\in\mathcal{S}\times\mathcal{S}\) to which no arcs have been assigned yet, and define for each type the set \[V_{t}:=\{v\in[n]:T_{v}=t\}\,.\] This set encodes all vertices of type \(t\), and note it is deterministic after assigning types to vertices.
2.
Choose \(\Lambda_{n}(t,s)\) arcs uniformly at random without replacement from the set \[(V_{t}\times V_{s})\backslash\{(v,v):v\in[n]\}.\] If the size of this set is smaller than \(\Lambda_{n}(t,s)\), simply take the entire set. Add all arcs to the arc set of \(G_{n}\).
3. Repeat steps 1 and 2 until all vertex-type tuples have been considered.

We will denote the realisation of this model by \(\mathtt{ARD}_{n}(T,\Lambda_{n})\). Note for the arc assigned random digraph model that independence exists in the arc-assignment procedure between vertex-type pairs (i.e., between separate executions of steps 1 and 2). However, for each fixed pair of vertex-types there is dependence between the chosen arc locations, since these are drawn uniformly at random _without replacement_.

**Remark**.: A well-known special case of the arc assigned random digraph model is obtained when we take \(T=1\) (i.e., we assign all vertices the type 1) and \(\Lambda_{n}(1,1)=N\) for some fixed \(N\in\mathbb{N}\). This turns out to be the directed equivalent of the original Erdős–Rényi random graph model as described in [10]. For this model it is known that it is equivalent to the so-called Gilbert model1 (see [12]), which can be seen as \(\mathtt{IRD}_{n}(T,\kappa_{n})\) with \(\kappa_{n}(1,1)=\lambda\) for some \(\lambda\in\mathbb{R}_{0}^{+}\). Specifically, in Section 1.4 of [16] it is argued that equivalence holds whenever \(N\approx\lambda n\). We will generalise this argument to show equivalence between \(\mathtt{IRD}_{n}(T,\kappa_{n})\) and \(\mathtt{ARD}_{n}(T,\Lambda_{n})\). This is the main result of the next section. Footnote 1: Often, this model is referred to as the Erdős–Rényi model.
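For concreteness, the following minimal Python sketch (ours, purely illustrative and not part of the formal development) implements the arc-assignment procedure above. The explicit enumeration of candidate arc positions is for clarity only; an efficient implementation would sample without replacement directly, cf. the constant-time samplers referenced in Section 3.3.

```python
# Illustrative sketch of ARD_n(T, Lambda_n); the helper names are ours.
import random
from collections import defaultdict

def sample_ard(n, sample_type, arc_count):
    """sample_type() draws one vertex-type; arc_count(t, s) plays the role of Lambda_n."""
    types = [sample_type() for _ in range(n)]       # i.i.d. vertex-types
    vertices_of = defaultdict(list)                 # V_t for every observed type t
    for v, t in enumerate(types):
        vertices_of[t].append(v)
    arcs = set()
    for t in vertices_of:                           # step 1: fix a type tuple (t, s)
        for s in vertices_of:
            # step 2: candidate positions (V_t x V_s), excluding self-loops
            positions = [(v, w) for v in vertices_of[t]
                         for w in vertices_of[s] if v != w]
            k = min(arc_count(t, s), len(positions))
            arcs.update(random.sample(positions, k))  # uniform, without replacement
    return types, arcs                              # step 3 is the double loop above

# Example with Lambda_n(t, s) = floor(kappa(t, s) * q_t * q_s * n), as in Theorem 2.5:
q = {1: 0.7, 2: 0.3}
kappa = lambda t, s: 2.0
n = 1000
types, arcs = sample_ard(
    n,
    sample_type=lambda: random.choices(list(q), weights=list(q.values()))[0],
    arc_count=lambda t, s: int(kappa(t, s) * q[t] * q[s] * n),
)
```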
### 2.3 Equivalence between the two models

To link \(\mathtt{IRD}_{n}(T,\kappa_{n})\) and \(\mathtt{ARD}_{n}(T,\Lambda_{n})\) we need to describe \(\Lambda_{n}\) in terms of \(\kappa_{n}\) in a clever way. A natural choice would be choosing \(\Lambda_{n}\) to be the expected number of arcs between fixed vertex-types in \(\mathtt{IRD}\). Although natural, we will see this choice can cause issues for vertex-types that are rare given a fixed value of \(n\). We will call these vertex-types _unstable_.

**Definition 2.2** (Unstable vertex-types).: _Fix a number \(n\in\mathbb{N}\), and a tolerance level \(\tau\in(0,1)\). Define the number_ \[u_{n}^{\uparrow}(\tau):=\inf\{t:q_{s}<n^{-1+\tau}\text{ for all }s\geq t\}.\] _We say a vertex-type \(t\in\mathcal{S}\) is unstable at tolerance \(\tau\) if \(t\geq u_{n}^{\uparrow}(\tau)\). If a vertex-type is not unstable at tolerance \(\tau\) we call it stable at tolerance \(\tau\). Instability for vertices can be defined analogously through their assigned types._

We will see later (in Sections 2.4 and 4.1) that unstable vertex-types may cause issues in the link between \(\mathtt{IRD}\) and \(\mathtt{ARD}\). Thus, we need to make an assumption on the behaviour of the kernel \(\kappa\) at unstable vertex-types. Simply put, we will assume that the kernel is relatively small when a vertex-type is unstable, so that the influence of these types on the output of our models is negligible.

**Assumption 2.3** (Kernel bound).: _Fix an \(n\in\mathbb{N}\) large and two vertex-types \(t,s\in\mathcal{S}\) of which at least one is unstable at a tolerance \(\tau\in(0,1)\). We assume there exist two constants \(\alpha,C>0\) with \(\alpha\in(1/2-\tau/2,1/2)\) such that_ \[\kappa(t,s)\leq\frac{Cn^{-1/2+\alpha}}{\sqrt{q_{t}q_{s}}}. \tag{2}\]

**Remark**.: The values of \(q_{t}\) and \(q_{s}\) in (2) implicitly depend on \(n\) and \(\tau\), since we require at least one of \(t\) and \(s\) to be unstable. In Definition 2.2 we can see that stability is an \(n\)- and \(\tau\)-dependent property. For finite type-spaces we have that \(\min_{t\in\mathcal{S}}q_{t}>0\). Hence, if \(n\to\infty\), then there will be a point at which \(\min_{t\in\mathcal{S}}q_{t}>n^{-1+\tau}\) for any \(\tau\in(0,1)\). Thus, when the type-space is finite we automatically have that Assumption 2.3 is satisfied.

In the link between IRD and ARD we will be showing that probabilities of certain events are asymptotically the same. Inspired by Section 1.4 in [16], we will show that _monotone_ events can be translated between our two models. Note that in our models the definition of monotonicity will be slightly different from the definition in [16]. This is because we consider slightly different graphs: our graphs are directed and our vertices have types (i.e., are marked). Because we consider marked graphs, we need to be careful how we define and interpret sub-graphs. Given two marked graphs \(G_{1}\) and \(G_{2}\), we will say \(G_{1}\) is a sub-graph of \(G_{2}\) (and write \(G_{1}\subseteq G_{2}\)) whenever:

1. The vertices of \(G_{1}\) have the same marks as the vertices of \(G_{2}\).
2. The arcs of \(G_{2}\) include the arcs of \(G_{1}\).

Under this interpretation of sub-graphs we can define what monotonicity means for our models.

**Definition 2.4** (Monotone events).: _Let \(G_{1}\) and \(G_{2}\) be two graphs such that \(G_{1}\subseteq G_{2}\). We say a collection \(\mathcal{Q}_{n}\) of events is increasing if \(G_{1}\in\mathcal{Q}_{n}\) implies \(G_{2}\in\mathcal{Q}_{n}\). Similarly, we say \(\mathcal{Q}_{n}\) is decreasing if \(G_{2}\in\mathcal{Q}_{n}\) implies \(G_{1}\in\mathcal{Q}_{n}\). Finally, we say \(\mathcal{Q}_{n}\) is monotone if it is either increasing or decreasing._

We can now formulate the main result of the paper. Heuristically, it tells us that a monotone graph property is true for the ARD model whenever it is true for a "range" of IRD models.

**Theorem 2.5** (IRD to ARD).: _Fix three numbers \(\alpha,\tau,C>0\), a vertex-type distribution \(T\) and a kernel \(\kappa\) such that Assumptions 2.1 and 2.3 are satisfied. Suppose for some monotone event \(\mathcal{Q}_{n}\) there exists a number \(p\in[0,1]\) such that_ \[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa^{\prime}_{n})\in\mathcal{Q}_{n})\to p,\] _as \(n\to\infty\) for all sequences \(\kappa^{\prime}_{n}\) that satisfy the inequality_ \[|\kappa^{\prime}_{n}(t,s)-\kappa(t,s)|\leq\frac{Cn^{-1/2+\alpha}}{\sqrt{q_{t}q_{s}}}. \tag{3}\] _Then, for \(\Lambda_{n}\) satisfying_ \[\Lambda_{n}(t,s)=\begin{cases}\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor,&\text{if $t$ and $s$ are stable,}\\ 0,&\text{else,}\end{cases}\] _we have_ \[\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})\to p.\]

### 2.4 Discussion of the conditions

Theorem 2.5 leans on two assumptions: the event \(\mathcal{Q}_{n}\) must be monotone, and the kernel inequality (2) should be satisfied. Following [16], the need for monotonicity can be seen by considering the event that the graph contains an exact number of arcs. The need for (2) can be seen by looking at an event that only involves unstable vertices. We will discuss both examples below. Basically, a mismatch between IRD and ARD occurs whenever the random number of arcs between given vertex-types in IRD varies too much when compared to the fixed number of arcs in ARD.
There is "less randomness" in ARD than in IRD, thus problems occur if we lean heavily on the randomness IRD has, but ARD does not. Monotonicity.Let \(\mathcal{Q}_{n}\) be the event that the graph contains exactly \(n\) arcs, and note this event is not monotone. We consider ARD\({}_{n}(1,n)\). In other words, our model has one vertex-type and \(n\) arcs will be placed between the vertices. Note that this event is non-monotone. Moreover, for this \(\mathcal{Q}_{n}\) it will be difficult to relate IRD and ARD, since the number of arcs in ARD is fixed, while in IRD it is random. Thus, \(\mathcal{Q}_{n}\) will be true with probability \(1\) for ARD if we take \(n\) edges as input, while for IRD the probability of \(\mathcal{Q}_{n}\) will always be strictly less than one. To see this more rigorously, note first that \(\mathbb{P}(\mathtt{ARD}_{n}(1,n)\in\mathcal{Q}_{n})=1\) for all \(n\). However, when we look at \(\mathtt{IRD}_{n}(1,1+\varepsilon_{n})\), i.e. the \(\mathtt{IRD}\) model with kernel \(1+\varepsilon_{n}\) for some \(\varepsilon_{n}\to 0\), then we notice that the number of arcs follows a Binomial distribution with \(n^{2}\) trials and success probability \((1+\varepsilon_{n})/n\). Thus, the median of this distribution for large \(n\) is given by either \(n\), \(n+1\) or \(n-1\), implying that e.g. \(\mathbb{P}(\mathtt{IRD}_{n}(1,1+\varepsilon_{n})\in\mathcal{Q}_{n})<3/4\) (since the probability of being smaller or equal to the median is roughly \(1/2\)). **Remark.** Note the \(\mathtt{IRD}\) model in the above example is equivalent to the Gilbert model. Principally, if the kernel does not vary in \(\mathtt{IRD}\), then the vertex-type serve no distinguishing purpose. However, the vertex-types do play distinguishing roles in \(\mathtt{ARD}\) due to us fixing a (different) number of arcs per vertex-type pair. Hence, the above example also highlights how in certain cases the extra randomness in \(\mathtt{IRD}\) makes the overarching model simpler. **Remark.** As argued in [16] (Remark 1.14), monotonicity is a sufficient condition in Theorem 2.5, but not necessary. One could impose alternative restrictions stipulating how \(\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n})\) behaves for choices of \(\Lambda_{n}^{\prime}\) "near" \(\Lambda_{n}\). This, however, would make the statement and proof of Theorem 2.5 more technical, so we refrained from making this generalisation. Kernel inequality.To explain the need for (2) we will consider \(\mathtt{IRD}(T,\kappa)\) with \(q_{t}=C/t^{3}\) and \(\kappa(t,s)=C^{-2}\). We will consider \(\mathcal{Q}_{n}\) to be the event that there are no arcs from a vertex of type \(1\) to a vertex of type \(\lceil\sqrt[3]{n}\rceil\). Note that vertex-type \(\lceil\sqrt[3]{n}\rceil\) is unstable, because the sequence \((q_{t})_{t\geq 1}\) is decreasing and for \(n\) large \[q_{\lceil\sqrt[3]{n}\rceil}=\frac{C}{\lceil\sqrt[3]{n}\rceil^{3}}<\frac{C}{n} \leq\frac{1}{n^{1-\tau}},\] for any \(\tau\in(0,1)\). We will consider the sub-sequence of cubic values of \(n\). We will derive a lower and upper bound on the probability that \(\mathcal{Q}_{n}\) occurs in the \(\mathtt{IRD}\) model. We start with the lower-bound. In this specific \(\mathtt{IRD}\) model, we see that the number of vertices with type \(\sqrt[3]{n}\) is given by \(N_{\sqrt[3]{n}}\sim\mathtt{Bin}(n,C/n)\). Thus, as \(n\to\infty\) we see that \(N_{\sqrt[3]{n}}\to\mathtt{Poi}(C)\). 
Particularly, this means that \[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa)\in\mathcal{Q}_{n}) \geq\mathbb{P}(N_{\sqrt[3]{n}}=0)+\mathbb{P}(\mathtt{IRD}_{n}(T, \kappa)\in\mathcal{Q}_{n}\mid N_{\sqrt[3]{n}}=1)\mathbb{P}(N_{\sqrt[3]{n}}=1) =\left(\exp(-C)+\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa)\in\mathcal{Q}_{n}\mid N_{\sqrt[3]{n}}=1)C\exp(-C)\right)(1+o(1)).\] Now, observe that conditional on \(\{N_{\sqrt[3]{n}}=1\}\) the number of vertices with type \(1\) is given by \(N_{1}\sim\mathtt{Bin}(n-1,C/(1-1/n))\). In particular, we have that \(N_{1}<n\). Thus, we can bound \[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa)\in\mathcal{Q}_{n}\mid N_{\sqrt[3]{n}}=1 )>\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa)\in\mathcal{Q}_{n}\mid N_{\sqrt[3]{n}}= 1,N_{1}=n).\] If we denote by \(A_{n}\) the number of arcs from vertices of type 1 to vertices of type \(\sqrt[3]{n}\), then we have that \(A_{n}\sim\mathtt{Bin}(N_{1}N_{\sqrt[3]{n}},C^{-2}/n)\). Thus, knowing that \(N_{1}=n\) and \(N_{\sqrt[3]{n}}=1\) entails \(A_{n}\sim\mathtt{Bin}(n,C^{-2}/n)\). Note that this binomial distribution converges to \(\mathtt{Poi}(C^{-2})\), thus we find \[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa)\in\mathcal{Q}_{n}\mid N_{\sqrt[3]{n}}=1, N_{1}=n)\geq\mathbb{P}(\mathtt{Bin}(n,C^{-2}/n)=0)=\exp(-C^{-2})(1+o(1)).\] All in all, this shows for \(\mathtt{IRD}\) that we have the following lower bound: \[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa)\in\mathcal{Q}_{n})\geq\left(\exp(-C)+C \exp(-C^{-2}-C)\right)(1+o(1)). \tag{4}\] We can derive an upper bound using the same arguments. First note that \[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa)\in\mathcal{Q}_{n})\leq 1-\mathbb{P}( \mathtt{IRD}_{n}(T,\kappa)\not\in\mathcal{Q}_{n}\mid N_{\sqrt[3]{n}}=1)\mathbb{ P}(N_{\sqrt[3]{n}}=1).\] We again observe that conditional on \(\{N_{\sqrt[3]{n}}=1\}\) the number of vertices with type \(1\) is given by \(N_{1}\sim\mathtt{Bin}(n-1,C/(1-1/n))\). In particular, the median of this distribution is given approximately by \(nC\), meaning for \(\varepsilon>0\) small we have that \(\mathbb{P}(N_{1}>\varepsilon n)>1/2\). Thus, we can bound \[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa)\not\in\mathcal{Q}_{n}\mid N_{\sqrt[3]{n}} =1)\geq\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa)\not\in\mathcal{Q}_{n}\mid N_{ \sqrt[3]{n}}=1,N_{1}>\varepsilon n)/2.\] In this case, knowing that \(N_{1}>\varepsilon n\) and \(N_{\sqrt[3]{n}}=1\) entails \(A_{n}\succeq\mathtt{Bin}(\varepsilon n,C^{-2}/n)\). Because this binomial converges to a \(\mathtt{Poi}(\varepsilon C^{-2})\) random variable, we have that \[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa)\not\in\mathcal{Q}_{n}\mid N_{\sqrt[3]{n}}=1,N_{1}>\varepsilon n)/2\geq\mathbb{P}(\mathtt{Bin}(\varepsilon n,C^{-2}/n)>0)/2 =\left(1-\exp(-\varepsilon C^{-2})\right)(1+o(1))/2.\] Hence, we find the upper bound \[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa)\in\mathcal{Q}_{n})\leq 1-C\exp(-C)\cdot\frac{ \left(1-\exp(-\varepsilon C^{-2})\right)(1+o(1))}{2}. \tag{5}\] Now we compare (4) and (5) to the possible intuitive choices of inputs in ARD. If we look at the ARD model with the expected number of arcs between fixed vertex-types from IRD as its input, then we would consider \(\texttt{ARD}_{n}(T,\Lambda^{\prime}_{n})\) with \(\Lambda^{\prime}_{n}(t,s)=\lfloor n/(t^{3}s^{3})\rfloor\). Particularly, we have that \(\Lambda^{\prime}_{n}(1,\sqrt[3]{n})=1\). Thus, if there is a vertex with type \(\sqrt[3]{n}\), then we will also have at least one arc from a vertex with type \(1\) to one with type \(\sqrt[3]{n}\).
In other words, using (4): \[\lim_{n\to\infty}\mathbb{P}(\texttt{ARD}_{n}(T,\Lambda^{\prime}_{n})\in \mathcal{Q}_{n})=\exp(-C)<\exp(-C)+C\exp(-C^{-2}-C)\leq\liminf_{n\to\infty} \mathbb{P}(\texttt{IRD}_{n}(T,\kappa)\in\mathcal{Q}_{n}).\] Alternatively, we can look at the ARD model from Theorem 2.5, which has \(\Lambda_{n}(1,\sqrt[3]{n})=0\). In particular, this means using (5) that \[\lim_{n\to\infty}\mathbb{P}(\texttt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n })=1>1-C\exp(-C)\cdot\frac{\left(1-\exp(-\varepsilon C^{-2})\right)}{2}\geq \limsup_{n\to\infty}\mathbb{P}(\texttt{IRD}_{n}(T,\kappa)\in\mathcal{Q}_{n}).\] Thus, we see that the probability of \(\mathcal{Q}_{n}\) occurring in IRD and ARD differs significantly. This makes it difficult to link the models. The mismatch occurs because the occurrence of \(\mathcal{Q}_{n}\) is heavily influenced by both vertex-type assignment and arc generation in IRD, while it is only influenced by vertex-type generation in ARD. If it is possible to link the models, then the arc generation in IRD cannot have a large influence on the final probability.

## 3 Applications

In this section we will show some of the consequences of Theorem 2.5. Specifically, we will focus on four aspects. Firstly, we will highlight the consequences of our result for some "classical" random graph models. Secondly, we will show that the connection between IRD and ARD in Theorem 2.5 can be adapted to forge a connection between IRD and models that fall outside of the ARD class. As a running example, we will investigate a recent model to infer cell-cell interaction networks given in [23]. Thirdly, we will show how our result can be used to compute properties of ARD based on calculations in IRD. Fourthly, we will explain how our results provide an intuitive linear-time algorithm to generate random graphs.

### 3.1 Classical random graph models

#### 3.1.1 Directed Erdős–Rényi and Gilbert model

We recall that the directed counterpart of the classical Gilbert model in [12] can be seen as \(\texttt{IRD}_{n}(1,\lambda)\) for some \(\lambda>0\), while the directed counterpart of the Erdős–Rényi model in [10] can be seen as \(\texttt{ARD}_{n}(1,m)\) for some \(m\in\mathbb{N}\). It is important to observe in this simple setting that \(\mathcal{S}=\{1\}\). In particular, this means that vertex-type \(1\) is always stable according to Definition 2.2, irrespective of the choice of \(\tau\). Hence, the parameter \(\tau\) plays no role, and \(\alpha\) can be chosen as close to zero as we like without invalidating Assumption 2.3. In fact, since \(\alpha\) can be chosen arbitrarily close to zero, we may replace the fixed \(\alpha\) in (3) with a sequence \(\alpha_{n}\) that converges to zero arbitrarily slowly. This will make the condition (3) stronger, meaning there will be fewer sequences that satisfy it. This, however, is good news in light of Theorem 2.5, because it means the \(\texttt{IRD}_{n}(\kappa^{\prime}_{n})\) probability has to converge for fewer sequences \(\kappa^{\prime}_{n}\). In light of this discussion, writing \(\texttt{Gil}_{n}(\lambda)\) for the (directed) Gilbert model and \(\texttt{ER}_{n}(m)\) for the (directed) Erdős–Rényi model, the consequence of Theorem 2.5 for these models is as follows.

**Corollary 3.1** (From Erdős–Rényi to Gilbert).: _Fix a constant \(C>0\), a decreasing sequence \(\alpha_{n}\in(0,1/2)\) and a model parameter \(\lambda>0\)._
_Suppose for some monotone event \(\mathcal{Q}_{n}\) there exists a \(p\in[0,1]\) such that_ \[\mathbb{P}(\texttt{Gil}_{n}(\lambda_{n})\in\mathcal{Q}_{n})\to p,\] _as \(n\to\infty\) for all sequences \(\lambda_{n}\) satisfying_ \[|\lambda_{n}-\lambda|\leq Cn^{-1/2+\alpha_{n}}.\] _Then, also_ \[\mathbb{P}(\texttt{ER}_{n}(\lfloor\lambda n\rfloor)\in\mathcal{Q}_{n})\to p.\]

Corollary 3.1 can be identified as being almost equivalent to Proposition 1.13 in [16]. The only difference is that they can choose \(\alpha_{n}=0\), while for us it is converging arbitrarily slowly to zero. The slightly stronger assumption for us emerges because we use it in the more general cases to overcome the extra randomness introduced by \(T\).

#### 3.1.2 Stochastic block model

The stochastic block model, as described in e.g. [1], is a random graph model classically used to investigate community detection. The model creates graphs with vertex-set \([n]\), and each vertex is given a type from the set \(\mathcal{S}=\{1,2,\ldots,r\}\) for some fixed \(r>1\). Then, an arc between two vertices \(v,w\in[n]\) with types \(T_{v}\) and \(T_{w}\) is drawn with some fixed probability \(\pi(T_{v},T_{w})\), independently of the other arcs. Hence, the connection procedure is fully defined when vertex-types and the function \(\pi:\mathcal{S}\times\mathcal{S}\rightarrow(0,1)\) are known. We denote this model by \(\mathtt{SBM}_{n,r}(T,\pi)\). Although the stochastic block model shows many similarities with \(\mathtt{IRD}\), there are still two major differences from our context. Firstly, in \(\mathtt{IRD}\) the connection probabilities scale with \(1/n\), while in the stochastic block model \(\pi\) does not have to decline as \(n\rightarrow\infty\). Thus, we will assume in the stochastic block model there exists a kernel \(\kappa\) such that for two vertex-types \(t,s\in\mathcal{S}\) we have \(\pi(t,s)=\kappa(t,s)/n\). Here, division by \(n\) is needed to make the graph sparse. Secondly, the stochastic block model usually fixes deterministic vertex-types (see e.g. [17]), while we draw them randomly from some type distribution \(T\). The goal of these deterministic types is to ensure each vertex-type \(t\in\mathcal{S}\) covers approximately a proportion \(q_{t}\) of vertices. In light of this goal, it is not too dissimilar to give each vertex \(v\in[n]\) a _random_ type \(T_{v}\) with \(\mathbb{P}(T_{v}=t)=q_{t}\), since due to the law of large numbers (or more precisely Lemma 4.4) we have that \(N_{t}/n\approx q_{t}\) for all \(t\in\mathcal{S}\) when \(n\) is large. Concluding, we can see the stochastic block model as \(\mathtt{IRD}_{n}(T,n\pi)\) where \(\kappa=n\pi\) is some kernel, and \(T\) is a discrete type-distribution taking values in \([r]\) for some fixed \(r>0\). The \(\mathtt{ARD}\) "equivalent" to the stochastic block model is also sometimes seen in the literature (see [19]). It is called the microcanonical stochastic block model, and instead of fixing a connection probability function \(\pi\) it simply fixes a list \(e_{n}:\mathcal{S}\times\mathcal{S}\rightarrow\mathbb{N}\) encoding the number of arcs that need to be placed between vertices of two given types. We denote this model by \(\mathtt{MSBM}_{n,r}(T,e_{n})\) and we see that it is equivalent to \(\mathtt{ARD}_{n}(T,e_{n})\). When the number of vertex-types is finite, recall that Assumption 2.3 is automatically satisfied. Similar to the discussion in Section 3.1.1, this means \(\tau\) plays no role and \(\alpha\) can be chosen however we fancy.
Additionally, since \(q^{i}:=\inf_{t}\{q_{t}\}>0\), we can bound \(\sqrt{q_{t}q_{s}}\geq q^{i}\) in (3) and merge it with the constant \(C\) to simplify the condition that all sequences \(\kappa^{\prime}_{n}\) must satisfy in Theorem 2.5. Thus, the consequence of our main result for the stochastic block model can be formulated as follows.

**Corollary 3.2** (Fixing arcs in the stochastic block model).: _Fix a constant \(C>0\), a decreasing sequence \(\alpha_{n}\in(0,1/2)\), and a type distribution \(T\) with \([r]\) as its support for some \(r>1\). Furthermore, fix a probability function \(\pi_{n}=\kappa/n\) for some \(n\)-independent function \(\kappa\). Suppose for some monotone event \(\mathcal{Q}_{n}\) there exists a number \(p\in[0,1]\) such that_ \[\mathbb{P}(\mathtt{SBM}_{n,r}(T,\pi^{\prime}_{n})\in\mathcal{Q}_{n})\to p,\] _as \(n\rightarrow\infty\) for all sequences \(\pi^{\prime}_{n}\) satisfying_ \[|\pi^{\prime}_{n}(t,s)-\pi_{n}(t,s)|\leq Cn^{-3/2+\alpha_{n}}.\] _Then, also for \(e_{n}(t,s)=\lfloor\pi_{n}(t,s)q_{t}q_{s}n^{2}\rfloor\) we have that_ \[\mathbb{P}(\mathtt{MSBM}_{n,r}(T,e_{n})\in\mathcal{Q}_{n})\to p.\]

#### 3.1.3 Chung-Lu model

In the Chung-Lu model, for which the undirected equivalent is described in [7], each vertex \(v\in[n]\) is given a weight \(w_{v}>0\). Given the weights \(w_{v}\) and \(w_{u}\) of two vertices \(v,u\in[n]\) and the sum of all weights \(\ell_{n}\), the probability that an arc is drawn from \(v\) to \(u\) is given by \(w_{v}w_{u}/\ell_{n}\), independent of the other arcs. This is again close to the setting of \(\mathtt{IRD}\), but we will make the slight modification (that is often done; see e.g. Chapter 6 of [15]) that weights are drawn independently from a weight distribution \(W\). Then, we find ourselves in our setting. Specifically, the Chung-Lu model can now be seen as \(\mathtt{IRD}_{n}(W,\kappa_{n})\) with \[\kappa_{n}(t,s)=\frac{nts}{\sum_{v\in[n]}W_{v}}.\] Assuming the first moment of \(W\) is finite, we can explicitly identify the kernel and perturbation function of the Chung-Lu model. Setting \(\overline{W}\) to be the empirical mean of the weights we find \[\kappa_{n}(t,s)/n=\frac{ts/\mathbb{E}[W]}{n}\cdot\frac{\mathbb{E}[W]}{\overline {W}}.\] Hence, for the kernel we find \(\kappa(t,s)=ts/\mathbb{E}[W]\) and for the perturbation function we find \(\varphi_{n}(t,s)=\mathbb{E}[W]/\overline{W}-1\). Since \(W\) might have an infinite support, it is not immediately clear that Assumption 2.3 is satisfied. Luckily, only a mild additional assumption on \(W\) is needed. See Section 5.1 for the proof.

**Proposition 3.3** (Assumption satisfaction for Chung-Lu).: _Suppose there exists an \(\varepsilon>0\) for which \(\mathbb{E}[W^{1+\varepsilon}]<\infty\). Then, there exists a tolerance \(\tau\in(0,1)\) for which Assumption 2.3 is satisfied._

In light of Proposition 3.3, the consequence of Theorem 2.5 for the Chung-Lu model becomes:

**Corollary 3.4** (Fixing arcs in the Chung-Lu model).: _Consider the Chung-Lu model with i.i.d. weights drawn from \(W\), and assume that \(\mathbb{E}[W^{1+\varepsilon}]<\infty\) for some \(\varepsilon>0\). Fix an \(\alpha>1/2-\varepsilon/(4+3\varepsilon)\)._
_Suppose for some monotone event \(\mathcal{Q}_{n}\) there exists a number \(p\in[0,1]\) such that_ \[\mathbb{P}(\mathtt{IRD}_{n}(W,\kappa_{n}^{\prime})\in\mathcal{Q}_{n})\to p,\] _as \(n\to\infty\) for all sequences \(\kappa_{n}^{\prime}\) satisfying_ \[\left|\kappa_{n}^{\prime}(t,s)-\frac{ts}{\mathbb{E}[W]}\right|\leq\frac{Cn^{-1 /2+\alpha}}{\sqrt{q_{t}q_{s}}}.\] _Then, we also have for \(\Lambda_{n}(t,s)=\lfloor tsq_{t}q_{s}n/\mathbb{E}[W]\rfloor\) (when \(t\) and \(s\) are stable; \(\Lambda_{n}(t,s)=0\) otherwise) that_ \[\mathbb{P}(\mathtt{ARD}_{n}(W,\Lambda_{n})\in\mathcal{Q}_{n})\to p.\]

**Remark**.: The Chung-Lu model we considered in this section is directed, because the IRD and ARD models are directed. However, due to the symmetry of the kernel \(\kappa_{n}\), the model of this section is closely related to the undirected Chung-Lu model. To find a "true" generalisation of the Chung-Lu model in the directed case, we would need to specify two weights per vertex. One of these is used for the possible out-arcs, while the other is used for the possible in-arcs. In this setting, the state space would become \(\mathcal{S}=\mathbb{N}^{2}\). Despite notational inconveniences, results equivalent to Proposition 3.3 and Corollary 3.4 would still apply in this further generalisation.

### 3.2 A model for cell-cell interactions

Section 3.1 showcased some direct consequences of Theorem 2.5. Of course, many real-life network models will not fall into the category of ARD. Thus, this section will show how the main result can be adapted to fit an applied setting that goes beyond the ARD class. Specifically, we will show equivalence with the model in [23] to infer cell-cell interaction networks, and use this equivalence to study the existence of the giant strongly connected component.

#### 3.2.1 Model definition and its relation to the biological context

When tracking the state and severity of diseases, it is important to infer how cells communicate with one another (see e.g. [2]). There are many different strategies to accomplish this feat. Some look at a network of cell-types and infer how it evolves over time (like [25]), while others focus more on the proteins involved in the communication (like [6]). A recent model presented in [23] takes yet another approach, and infers cellular communication through a directed random graph model where vertices represent cells, vertex-types represent cell-types, and arcs represent protein pairs (ligands and receptors). Mathematically, the model needs the following inputs:

1. A vertex-type distribution \(T\) supported on some discrete set \(\mathcal{S}\).
2. A colour-distribution \(C=(C^{\text{out}},C^{\text{in}})\) for the arcs supported on some discrete set \(\mathcal{C}^{\text{out}}\times\mathcal{C}^{\text{in}}\).
3. An indicator function \(I:\mathcal{S}\times\mathcal{C}^{\text{out}}\to\{0,1\}\).
4. An indicator function \(J:\mathcal{S}\times\mathcal{C}^{\text{in}}\to\{0,1\}\).

It turns out that in practice these four pieces of input are relatively easy to extract from patients [26, 8]. Biologically, (1) measures how often certain cell-types are present in a tissue, (2) measures how often certain protein pairs are present in a tissue, and finally (3)-(4) indicate which proteins "could belong to" a given cell-type. Using these mathematical objects, we generate a realisation of the model with vertex set \([n]\), containing \(\lfloor\mu n\rfloor\) arcs for some \(\mu>0\), with the following algorithm.

1.
For each vertex \(v\in[n]\), assign it a type \(T_{v}\) drawn from \(T\) independently of the other vertices.
2. For each arc number \(a\in[\lfloor\mu n\rfloor]\), sample an arc-colour pair \(C_{a}=(C^{\text{out}}_{a},C^{\text{in}}_{a})\) drawn from \(C\) independently of the other arc-colour pairs.
3. For each arc number \(a\in[\lfloor\mu n\rfloor]\) choose one vertex \(v\) uniformly from the set \(\{v\in[n]:I(T_{v},C^{\text{out}}_{a})=1\}\).
4. Then, independently from Step 3, choose one vertex \(w\) uniformly from the set \(\{w\in[n]:J(T_{w},C^{\text{in}}_{a})=1\}\).
5. Add arc \((v,w)\) to the directed graph.

We denote the realisation of this model by \(\mathtt{CCI}_{n,\mu}(T,C,I,J)\). Although \(\mathtt{CCI}\) might seem to fall in the ARD class, it is completely distinct from it. Note in \(\mathtt{CCI}\) it is a priori unclear between which vertex-types a given arc will be drawn. Moreover, even if we were to reveal the arc-colours, then still by virtue of \(I\) and \(J\) it is possible that an arc is drawn between multiple combinations of vertex-types, while in ARD this would be impossible. Thus, it is impossible to directly apply Theorem 2.5 to relate \(\mathtt{CCI}\) to \(\mathtt{IRD}\).

#### 3.2.2 Identifying the model as an inhomogeneous random digraph

To link \(\mathtt{CCI}\) with \(\mathtt{IRD}\), we follow Assumption 2.1 and require that \(\mathcal{S},\mathcal{C}^{\text{out}},\mathcal{C}^{\text{in}}\subseteq\mathbb{N}\). Biologically, this assumption is not far-fetched, since in reality the number of protein-types and cell-types in your body is finite. To formulate the kernel in \(\mathtt{IRD}\) belonging to \(\mathtt{CCI}\) we first set \(p_{ij}:=\mathbb{P}(C^{\text{out}}=i,C^{\text{in}}=j)\) and recall that \(q_{k}=\mathbb{P}(T=k)\). We define \[\lambda_{i}=\sum_{k=1}^{\infty}q_{k}I(k,i), \tag{6a}\] \[\varrho_{j}=\sum_{k=1}^{\infty}q_{k}J(k,j). \tag{6b}\] Note that \(\lambda_{i}\) can be interpreted as the (asymptotic) proportion of vertices that an arc with out-colour \(i\) can connect to. Biologically, it is the fraction of cells that can secrete ligand-type \(i\). Similarly, \(\varrho_{j}\) can be interpreted as the fraction of vertices that an arc with in-colour \(j\) can connect to. It is the fraction of cells that can express receptor-type \(j\). Under these definitions, we will show that the \(\mathtt{IRD}\)-kernel belonging to \(\mathtt{CCI}\) for two fixed vertex-types \(t,s\in\mathcal{S}\) is given by \[\kappa(t,s)=\mu\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\frac{p_{ij}\cdot I(t,i)J (s,j)}{\lambda_{i}\cdot\varrho_{j}}. \tag{7}\] By rewriting (7) as \[\frac{\kappa(t,s)}{n}=\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\frac{n\mu p_{ij} \cdot I(t,i)J(s,j)}{n\lambda_{i}\cdot n\varrho_{j}},\] we can interpret \(\kappa(t,s)/n\) as the expected number of arcs that connect a specific vertex of type \(t\) to another specific vertex of type \(s\). This is because \(n\mu p_{ij}\) is the expected number of arcs with colour \((i,j)\), \(\lambda_{i}n\) is the expected number of vertices that can accept out-colour \(i\), and \(\varrho_{j}n\) is the expected number of vertices that can accept in-colour \(j\). Hence, for a given arc with colour \((i,j)\) the probability of it being placed between two specific vertices with type \(t\) and \(s\), respectively, is \(1/(n^{2}\lambda_{i}\varrho_{j})\) due to Steps 3-5 of the \(\mathtt{CCI}\)-generation algorithm.
Since there are \(n\mu p_{ij}\) arcs with colour \((i,j)\) that will be placed, we conclude that in expectation \((n\mu p_{ij})/(n^{2}\lambda_{i}\varrho_{j})\) of them shall be placed between the two fixed vertices. Summing over all possible arc-colours yields the expression for \(\kappa(t,s)/n\). The indicators ensure arc-colours only contribute if vertex-types \(t\) and \(s\) can actually accept them.

To prove a connection between \(\mathtt{IRD}\) and \(\mathtt{CCI}\), we will make a chain of two links: one link from \(\mathtt{CCI}\) to ARD, and another from ARD to \(\mathtt{IRD}\). The second link is facilitated by Theorem 2.5; the first will require a new proof. To make the first link formal, some assumptions on \(\mathtt{CCI}\) are required.

**Assumption 3.5** (\(\mathtt{CCI}\) assumptions).: _For \(\mathtt{CCI}_{n,\mu}(T,C,I,J)\) we assume the following:_

1. \(\mathbb{E}[T^{1+\varepsilon}]<\infty\) _for some_ \(\varepsilon>0\)_._
2. \(\inf_{i}\{\lambda_{i}:\lambda_{i}>0\}>0\) _and_ \(\inf_{j}\{\varrho_{j}:\varrho_{j}>0\}>0\)_._

The first assumption ensures that there are not too many unstable vertices in \(\mathtt{CCI}\) (cf. Definition 2.2). The second implicitly stipulates that there cannot be any arc-colours that occur with a relatively large probability, but connect to relatively few vertices. Note the second assumption is satisfied when e.g. there are only a finite number of protein-types. We also need to make a technical assumption on \(\mathcal{Q}_{n}\). Since Theorem 2.5 sets \(\Lambda_{n}(t,s)=0\) if \(t\) or \(s\) is unstable, we need to ensure \(\mathtt{CCI}\) probabilities do not change when arcs from and to unstable vertices are disregarded.

**Assumption 3.6** (Arcs to and from unstable vertices influence nothing).: _Denote by \(\mathtt{CCI}^{-}_{n,\mu}(T,C,I,J)\) the cell-cell interaction model after removing all arcs to and from unstable vertices at some tolerance \(\tau\in(0,1)\). We assume the event \(\mathcal{Q}_{n}\) is such that_ \[\mathbb{P}(\mathtt{CCI}_{n,\mu}(T,C,I,J)\in\mathcal{Q}_{n})=\mathbb{P}(\mathtt{ CCI}_{n,\mu}^{-}(T,C,I,J)\in\mathcal{Q}_{n})+o(1).\]

Note that in all biologically relevant cases Assumption 3.6 is automatically satisfied: the number of cell-types is finite, so unstable vertex-types will not exist. However, we will show in the case of the giant strongly connected component that one can validate Assumption 3.6 even for an infinite number of vertex-types. We can now formulate the main result of this section.

**Theorem 3.7** (CCI to IRD).: _Consider \(\texttt{CCI}_{n,\mu}(T,C,I,J)\) such that Assumption 3.5 is satisfied, and let \(\kappa\) be as in (7). Define a constant \(\alpha\) such that_ \[\frac{3}{8}<\alpha<\frac{1}{2}. \tag{8}\] _Suppose \(\mathcal{Q}_{n}\) is a monotone event that satisfies Assumption 3.6 for some tolerance \(\tau>1-2\alpha\). If there exists a number \(p\in[0,1]\) such that_ \[\mathbb{P}(\texttt{IRD}_{n}(T,\kappa_{n}^{\prime})\in\mathcal{Q}_{n})\to p,\] _as \(n\to\infty\) for all sequences \(\kappa_{n}^{\prime}\) that satisfy the inequality_ \[|\kappa_{n}^{\prime}(t,s)-\kappa(t,s)|\leq\frac{3n^{-1/2+\alpha}}{\sqrt{q_{t} q_{s}}}, \tag{9}\] _then we also have that_ \[\mathbb{P}(\texttt{CCI}_{n,\mu}(T,C,I,J)\in\mathcal{Q}_{n})\to p.\]

**Remark**.: The condition (8) might seem a bit arbitrary. In the proof of Theorem 3.7, we will see that this requirement on \(\alpha\) ensures there is a stability tolerance \(\tau\) (cf. Definition 2.2) for which the link between CCI and ARD can be made. We stress that (8) is sufficient and might not be necessary.
Moreover, note we have taken \(C=3\) in (9) when compared to (3). We will also see this is sufficient and not necessary.

**Remark**.: As we will see in the proof of Theorem 3.7, condition II of Assumption 3.5 is not strictly necessary. It can be replaced with an explicit assumption on the distribution of \(C\) together with some global growth restriction on \(\kappa(t,s)\) as \(t,s\to\infty\). However, to make the proofs less technical, we opted to consider only condition II. In biological contexts, the number of protein- and cell-types is finite, so that Assumption 3.5 is automatically satisfied.

#### 3.2.3 Using the main result to find the size of its giant strongly connected component

One of the current mathematical downsides of the model in [23] is its heavy reliance on Monte-Carlo simulations to attain results. Theorem 3.7 can help here, since it shows that monotone properties of IRD can be translated to CCI. Thus, since IRD is already well studied in e.g. [5], we can use existing literature to quickly compute asymptotic properties of CCI without the need for any Monte-Carlo simulation. We will show this idea by computing the asymptotic size of the largest strongly connected component (SCC) in CCI. We start by defining what an SCC is.

**Definition 3.8** (SCC).: _A strongly connected component of a directed graph \(G=([n],E)\) is a subset of vertices \(V\subseteq[n]\) such that for every pair \(v,w\in V\) there exists a path from \(v\) to \(w\) and back over the arcs in \(E\). Specifically, we disregard vertex-types._

We note that an expression for the asymptotic size of the largest strongly connected component already exists for IRD ([5], Theorem 3.9). Hence, we seek to apply Theorem 3.7 to translate this result into the language of CCI. To do this, we first need to show that all sequences of kernels \(\kappa_{n}^{\prime}\) that adhere to (9) satisfy some regularity conditions (Assumption 3.1 in [5]). The proof will be given in Section 5.1.

**Proposition 3.9** (Regularity).: _Let \(\kappa_{n}^{\prime}\) be a sequence adhering to (9). Define in \(\texttt{IRD}_{n}(T,\kappa_{n}^{\prime})\) the conditional probability measure \(\mathbb{P}_{n}(\cdot)=\mathbb{P}(\cdot\mid(T_{v})_{v\in[n]})\). Then, the following four conditions are satisfied:_

* _There exists a Borel probability measure_ \(\nu\) _on_ \(\mathbb{N}\) _such that for all_ \(V\subseteq\mathbb{N}\) _we have in probability under_ \(\mathbb{P}\) _that_ \[\frac{1}{n}\sum_{v=1}^{n}\mathbb{1}\{T_{v}\in V\}\to\nu(V).\]
* \(\kappa\) _is continuous and non-negative almost everywhere on_ \(\mathbb{N}^{2}\)_._
* \(\varphi_{n}(t,s):=(\kappa_{n}^{\prime}(t,s)-\kappa(t,s))/\kappa(t,s)\) _is continuous on_ \(\mathbb{N}^{2}\) _and converges to zero_ \(\mathbb{P}_{n}\)_-a.s. for any_ \(t,s\in\mathbb{N}\)_._
* _For the following limits we have that_ \[\lim_{n\to\infty}\frac{1}{n^{2}}\mathbb{E}\left[\sum_{v=1}^{n}\sum_{w=1}^{n} \kappa(T_{v},T_{w})\right]=\lim_{n\to\infty}\frac{1}{n^{2}}\mathbb{E}\left[ \sum_{v=1}^{n}\sum_{w\neq v}\kappa_{n}^{\prime}(T_{v},T_{w})\right]=\sum_{t=1} ^{\infty}\sum_{s=1}^{\infty}\kappa(t,s)q_{t}q_{s}<\infty.\]

Proposition 3.9 shows that Theorem 3.9 of [5] can be adapted to CCI. That theorem additionally assumes that \(\kappa\) should be irreducible (cf. Definition 3.7 in [5]). Translated into the language of CCI this can be formulated as follows.
**Assumption 3.10** (Irreducibility).: _Let \(\widehat{G}=(\mathbb{N},\widehat{E})\) be a directed graph with arc-set_ \[\widehat{E}:=\{(t,s)\in\mathbb{N}^{2}:\kappa(t,s)>0\}.\] _We assume that \(\widehat{G}\) is strongly connected._

We can now compute the asymptotic size of the largest strongly connected component in CCI. Since it is not a direct consequence of Theorem 3.7, we will provide its proof in Section 5.1.

**Proposition 3.11** (Largest SCC in CCI).: _Let \(|\mathcal{C}_{\max}|\) be the size of the largest strongly connected component in CCI, and assume CCI satisfies Assumption 3.10. Denote by \(\pi_{x}^{-}\) the largest fixed points to the system of equations_ \[1-\pi_{x}^{-}(\kappa)=\exp\left\{-\sum_{t=1}^{\infty}\kappa(x,t)q_{t}\pi_{t}^{ -}(\kappa)\right\},\qquad x\in\mathbb{N}, \tag{10}\] _and by \(\pi_{x}^{+}\) the largest fixed points to the system of equations_ \[1-\pi_{x}^{+}(\kappa)=\exp\left\{-\sum_{t=1}^{\infty}\kappa(t,x)q_{t}\pi_{t}^{ +}(\kappa)\right\},\qquad x\in\mathbb{N}. \tag{11}\] _Then, we have that \(|\mathcal{C}_{\max}|/n\to\alpha\) in probability, where_ \[\alpha=\sum_{x=1}^{\infty}\pi_{x}^{+}\pi_{x}^{-}q_{x}.\]

**Remark.** The values \(\pi_{x}^{\pm}\) can be recognised as survival probabilities of a multi-type branching process in which the number of children with type \(t\) born from a parent with type \(x\) is Poisson distributed with parameter \(\kappa(x,t)q_{t}\) for \(\pi_{x}^{-}\) or \(\kappa(t,x)q_{t}\) for \(\pi_{x}^{+}\).

In Figure 1 we compare the result of Proposition 3.11 to the normalised size of the largest SCC one would obtain numerically, through Tarjan's algorithm (see [21]), from realisations of the cell-cell interaction model. In these numerical experiments, we consider \(\texttt{CCI}_{n,\mu}(T,C,I,J)\) with \(n=10000\), varying \(\mu\), and input distributions/indicator functions summarised by the following vectors/matrices: \[\mathbf{q}=\begin{bmatrix}0.1\\ 0.15\\ 0.25\\ 0.5\end{bmatrix},\quad\mathbf{P}=\begin{bmatrix}0.2&0.2\\ 0&0.1\\ 0.5&0\end{bmatrix},\quad I=\begin{bmatrix}0&1&1\\ 1&0&1\\ 1&1&0\\ 0&1&0\end{bmatrix},\quad J=\begin{bmatrix}1&0\\ 0&1\\ 1&1\\ 0&1\end{bmatrix}. \tag{12}\] Here, entry \(k\) in the vector \(\mathbf{q}\) indicates \(\mathbb{P}(T=k)\). Similarly, entry \((i,j)\) in the matrix \(\mathbf{P}\) indicates \(\mathbb{P}(C=(i,j))\). In the numerical experiments we applied a Monte-Carlo approach: 1000 instances of each \(\texttt{CCI}_{n,\mu}(T,C,I,J)\) were generated, their largest SCC sizes recorded, and averaged to obtain the numerical largest SCC size. We have also plotted the two-sided 95% confidence bounds. We observe that Proposition 3.11 accurately matches the numerical results.

Figure 1: _The size of the largest SCC computed through both Proposition 3.11 and 1000 instances of \(\texttt{CCI}_{n,\mu}(T,C,I,J)\). Input parameters for the model are given in (12), and the two-sided 95% (Monte-Carlo) confidence bounds are plotted to highlight the deviation in numerical largest SCC sizes._

### 3.3 An algorithm to generate inhomogeneous random digraphs

A final application of Theorem 2.5 is that it provides a simple algorithm to generate realisations of IRD. If one were to naively generate instances of an IRD, first a list of vertex-types would be generated. Thereafter, we would iterate over all vertex pairs \(v,w\in[n]\), look up their types \(T_{v},T_{w}\in\mathcal{S}\), and finally realise a Bernoulli random variable with parameter \(\kappa(T_{v},T_{w})/n\) to determine whether arc \((v,w)\) will be placed in the graph. This approach requires \(\mathcal{O}(n^{2})\) operations to generate a realisation of IRD. The main issue with the naive approach to generating IRDs is the fact that IRDs are sparse. Hence, a lot of time is spent on arcs that in the end will not be part of the realisation.
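As an illustration (ours, not part of the paper's formal development), the following minimal sketch implements this naive generator; in the sparse regime almost all of the \(n^{2}\) Bernoulli trials fail, which is exactly the wasted work described above.

```python
# Illustrative sketch of the naive O(n^2) IRD generator; names are ours.
import random

def naive_ird(n, sample_type, kappa):
    """One Bernoulli trial per ordered vertex pair, with probability min(kappa/n, 1), cf. (1)."""
    types = [sample_type() for _ in range(n)]
    arcs = []
    for v in range(n):
        for w in range(n):
            if v != w and random.random() < min(kappa(types[v], types[w]) / n, 1.0):
                arcs.append((v, w))
    return types, arcs
```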
Ideally, one would like to skip arcs that will not make it into the final realisation, to speed up the process. Our ARD model provides such a method, since it fixes the number of arcs upfront. By generating the arcs in one of the initial model steps, we remove the need to consider vertex pairs that ultimately have no arc placed between them. Moreover, through Theorem 2.5 we know how to pick this model such that realisations are equivalent to a given IRD model. Our algorithm to generate ARD (and hence IRD) is given in Section 2.2. It should be noted that techniques have already been devised to considerably speed up graph generation of inhomogeneous random graphs (see e.g. [18, 14]) by skipping arcs that will not make it into the final realisation. However, while the algorithm for the Chung-Lu model [18] is exact, for general inhomogeneous graphs the algorithm outputs a graph that is only asymptotically equivalent to the original model [14]. Moreover, the general algorithm requires a reasonable amount of computation to be implemented, as it needs to compute definite integrals involving \(\kappa\) and solve related root problems. On the other hand, using the ARD approach only requires the formula for \(\kappa\). It is therefore much easier to implement and needs fewer computations to run. If we compare our algorithm to [18] in the Chung-Lu case (Section 3.1.3), we see that it performs similarly. To create the graph, we would have to execute \[n\sum_{t=1}^{\infty}\sum_{s=1}^{\infty}\kappa(t,s)q_{t}q_{s}=\mathbb{E}[W]n\] samples without replacement from some set. Since these can be executed in constant time (see e.g. [24]) we see that only \(\mathcal{O}(\mathbb{E}[W]n)\) random samples are needed. This is the same condition as given in [18]. We do not see the extra factor \(1/2\), since our Chung-Lu model is directed.

## 4 Strategy, tools, and proofs of the main results

Here we will present the main strategy and tools to prove both Theorems 2.5 and 3.7. For both theorems we will first delve into the heuristics behind the proof, before outlining all the technical tools needed to provide the proof. The proofs of these technical tools are postponed until Section 5. Each proof section will end with a proof of its respective theorem based on the technical tools.

### 4.1 Heuristics behind Theorem 2.5

To see how a parallel between an IRD and an ARD can be drawn, it is important to observe that an ARD is an IRD in which the number of arcs per pair of vertex-types has been revealed. Indeed, if we know that e.g. \(m\) arcs will be drawn from vertices of type \(1\) to vertices of type \(2\) in IRD, then the only information missing is the location where the \(m\) arcs will appear. Since the appearance probability is the same for each possible location (in IRD it depends only on the vertex-types involved), arcs shall be assigned through a uniform choice between the possible locations. This is a uniform choice without replacement, since each location can be chosen only once. Thus, we find ourselves in the setting of ARD. To make this observation formal, we will introduce the concept of an _arc-to-vertex-type function_.
**Definition 4.1** (Arc-to-vertex-type function).: _In \(\texttt{IRD}_{n}(T,\kappa_{n})\) we define the random arc-to-vertex-type function \(A_{n}:\mathcal{S}\times\mathcal{S}\to\mathbb{N}\) as the function that counts the number of arcs placed between two vertex-types. Specifically, if we denote the arc-set of \(\texttt{IRD}_{n}(T,\kappa_{n})\) by \(\mathcal{A}_{n}\), then for fixed \(t,s\in\mathcal{S}\) the arc-to-vertex-type function is defined as_ \[A_{n}(t,s):=\left|\{(v,w)\in\mathcal{A}_{n}:(T_{v},T_{w})=(t,s)\}\right|.\]

In light of Definition 4.1, the previous observation communicates that the law of \(\texttt{IRD}_{n}(T,\kappa_{n})\) conditioned on \(A_{n}\) is equal to the law of \(\texttt{ARD}_{n}(T,\Lambda_{n})\) with \(\Lambda_{n}=A_{n}\). The main difference between IRD and ARD now shows through the fact that \(A_{n}\) is a _random_ function, while \(\Lambda_{n}\) is not. If we seek to show that IRD and ARD are equivalent, then our hope is that the random function \(A_{n}\) becomes more and more deterministic as \(n\to\infty\). This deterministic limit will then provide the proper scaling of \(\Lambda_{n}\).

To find the correct scaling for \(\Lambda_{n}\) we can be inspired by the law of large numbers. For a fixed type \(t\) we know for \(n\) large that \(N_{t}\approx q_{t}n\). When we fix a second vertex-type \(s\in\mathcal{S}\), then we similarly have that \(N_{s}\approx q_{s}n\). Putting both together, this shows that approximately \(q_{t}q_{s}n^{2}\) arc-generation attempts in \(\texttt{IRD}_{n}(T,\kappa_{n})\) will be made from a vertex of type \(t\) to a vertex of type \(s\). Since all these generation attempts have an independent success probability of \(\kappa_{n}(t,s)/n\), we expect that \(A_{n}(t,s)\approx q_{t}q_{s}\kappa_{n}(t,s)n\) for large \(n\). Thus, if there is an equivalence between IRD and ARD, we should choose \(\Lambda_{n}(t,s)\approx q_{t}q_{s}\kappa_{n}(t,s)n\).

Although the heuristic based on the law of large numbers gives a good hint at the choice of \(\Lambda_{n}\), it hides some of the intricacies. First and foremost, it does not reveal how the variance in \(A_{n}\) plays a role. As \(n\) tends to infinity, the probability that \(A_{n}\) equals \(\Lambda_{n}\) for any choice of \(\Lambda_{n}\) will become negligibly small. Thus, if any property needs to be translated from IRD to ARD, its validity in \(\texttt{IRD}_{n}(T,\kappa_{n})\) for one fixed \(\kappa_{n}\) is never enough. Instead, we will require that the property is true in \(\texttt{IRD}_{n}(T,\kappa_{n}^{\prime})\) for all \(\kappa_{n}^{\prime}\) that fall in a "range" from some smallest \(\kappa_{n}^{-}\) to some largest \(\kappa_{n}^{+}\). This range must be chosen such that almost all inputs \(\kappa_{n}\) in \(\texttt{IRD}_{n}(T,\kappa_{n})\) that could produce the event \(\{A_{n}=\Lambda_{n}\}\) with high probability fall between \(\kappa_{n}^{-}\) and \(\kappa_{n}^{+}\). See Figure 2 for an illustration.

Secondly, the heuristic based on the law of large numbers assumes that the asymptotic concentration is true for all vertex-types simultaneously. However, for fixed \(n\) there will always be some vertex-types for which concentration has not kicked in yet. For example, if we dynamically chose the vertex-type \(t_{n}\in\mathcal{S}\) such that \(q_{t_{n}}\approx 1/n\), then for the number of vertices \(N_{t_{n}}\) with type \(t_{n}\) we roughly have that \(N_{t_{n}}\sim\texttt{Poi}(1)\). This is more variable than the deterministic \(N_{t_{n}}=q_{t_{n}}n\approx 1\) we expect.
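The following small simulation (ours, purely illustrative) makes this variability concrete: for \(q_{t}\approx 1/n\) the standard deviation of \(N_{t}\) is of the same order as its mean, whereas for a type with \(q_{t}\gg 1/n\) the relative fluctuations are negligible.

```python
# Illustrative check of the concentration heuristic; not from the paper.
import random

def count_stats(n, q, trials=1000):
    """Empirical mean and standard deviation of N_t when P(T_v = t) = q."""
    counts = [sum(random.random() < q for _ in range(n)) for _ in range(trials)]
    mean = sum(counts) / trials
    std = (sum((c - mean) ** 2 for c in counts) / trials) ** 0.5
    return mean, std

n = 10_000
print(count_stats(n, 1 / n))       # rare type:   mean ~ 1,   std ~ 1 (same order as mean)
print(count_stats(n, n ** -0.5))   # common type: mean ~ 100, std ~ 10 (negligible vs mean)
```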
Thus, it is useful to distinguish between the vertex-types for which concentration based on the law of large numbers has started to kick in, and the vertex-types that might still behave more erratically. This is why we explicitly defined stability of vertex-types (cf. Definition 2.2).

**Remark**.: In Definition 2.2 we need \(q_{t}\gg 1/n\) to ensure concentration has started to kick in, explaining the need for the parameter \(\tau\in(0,1)\). Moreover, keeping \(\tau\) as a parameter will allow us to control how "fast" concentration occurs. Often, the specific choice of \(\tau\) does not matter or is clear from context, in which case we will omit it.

Unstable vertex-types should not influence the probability of a certain property being true in IRD too much. If they do, relating IRD to ARD will be impossible. Thus, when setting the aforementioned range of kernels in IRD, we need to ensure that for unstable vertex-types the value zero also falls in the range. Practically, this will entail that the property's probability in IRD stays the same irrespective of the inclusion of unstable vertex-types. Formally, this translates into Assumption 2.3.

Figure 2: _The idea behind the main result. When we seek to show convergence in \(\texttt{ARD}(T,\Lambda_{n})\) for fixed \(\Lambda_{n}\), then in IRD the same property needs to be true for a range of \(\kappa_{n}\). This range is chosen such that with high probability \(\kappa_{n}\in[\kappa_{n}^{-},\kappa_{n}^{+}]\) given the arc-to-vertex-type function equal to \(\Lambda_{n}\)._

The strategy to prove Theorem 2.5 will exploit all previous considerations. It will employ four steps, coinciding with the proof steps in Section 4.2 (each of Steps I and II below splits into an a-part and a b-part there). The main ideas behind them are given below:

Step I.We fix two kernels \(\kappa_{n}^{+}\) and \(\kappa_{n}^{-}\) with the property that no matter the choice of \(\kappa_{n}^{\prime}\) in (3) we have for stable vertex-types \(t,s\in\mathcal{S}\) that \(\kappa_{n}^{-}(t,s)\leq\kappa_{n}^{\prime}(t,s)\leq\kappa_{n}^{+}(t,s)\). By using the law of total probability to condition on the realisation of \(A_{n}(t,s)\) (cf. Definition 4.1), we can "decompose" probabilities of \(\texttt{IRD}_{n}(T,\kappa_{n}^{\pm})\) into probabilities of \(\texttt{ARD}_{n}(T,\Lambda_{n}^{\prime})\) (for different values of \(\Lambda_{n}^{\prime}\)) and probabilities of realisations of \(A_{n}(t,s)\). We use monotonicity of \(\mathcal{Q}_{n}\) to transform these decomposed probabilities into upper and lower bounds involving \(\texttt{ARD}_{n}(T,\Lambda_{n})\) with \(\Lambda_{n}(t,s)=\lfloor q_{t}q_{s}n\kappa(t,s)\rfloor\).

Step II.The upper and lower bounds we get from Step I will involve error probabilities of \(A_{n}(t,s)\) for the kernels \(\kappa_{n}^{\pm}(t,s)\). We note for e.g. \(\kappa_{n}^{+}\) that \(A_{n}(t,s)\sim\texttt{Bin}(N_{t}N_{s},\kappa_{n}^{+}(t,s)/n)\). For stable vertex-types, we can use the concentration of \(N_{t}\) and \(N_{s}\) that has started to kick in to show that \(A_{n}(t,s)\approx\texttt{Bin}(q_{t}q_{s}n^{2},\kappa_{n}^{+}(t,s)/n)\), and use this to conclude that the error probabilities converge to zero. For unstable vertex-types, we will use the fact that their inclusion does not alter the final result of the calculations (cf. Assumption 2.3) to remove error terms involving them.

### Proof of Theorem 2.5

As outlined in the strategy, an important characteristic of the proof is the distinction between stable and unstable vertex-types.
Not all vertex-types are equally important to us, but as \(n\to\infty\) we want all vertex-types to start playing a role. Therefore, it is important to characterise approximately how many stable vertex-types there are. This is covered by the following lemma.

**Lemma 4.2** (Amount of stable vertex-types).: _Recall Definition 2.2 and suppose that \(\mathbb{E}[T^{\delta}]<\infty\) for some \(\delta>0\). Then, for \(n\) large we have_

\[u_{n}^{\uparrow}(\tau)\leq\left\lceil n^{(1-\tau)/(1+\delta)}\right\rceil.\]

A helpful property of stable vertex-types, as highlighted in the heuristics, is that they concentrate around their mean. To make this precise, we will define what this means for pairs of stable vertex-types. We will call pairs of vertex-types that have started their concentration _well-concentrated_.

**Definition 4.3** (Well-concentrated vertex-types).: _Given two vertex-types \(t,s\in\mathcal{S}\) we say that they are well-concentrated around their mean if the following event occurs:_

\[\mathcal{V}_{ts}:=\{|N_{t}-q_{t}n|\leq\log(n)\sqrt{q_{t}n}\}\cap\{|N_{s}-q_{s}n|\leq\log(n)\sqrt{q_{s}n}\}\,.\]

We will see below that \(\mathcal{V}_{ts}\) occurs with high probability for every pair of vertex-types. This highlights what we meant in the heuristics with "concentration starting to kick in" when a vertex-type is stable. For stable vertex-types we have that \(\log(n)\sqrt{q_{t}n}\ll q_{t}n\), while for unstable vertex-types we have that \(\log(n)\sqrt{q_{t}n}\gg q_{t}n\). In other words, unstable vertex-types are expected to exhibit great variability, in the sense that their count can be significantly far away from their mean.

**Lemma 4.4** (Vertices are well-concentrated).: _Fix two vertex-types \(t,s\in\mathcal{S}\). Then, for \(n\) large we have that \(\mathbb{P}(\neg\mathcal{V}_{ts})\leq 2\exp(-\log(n)^{2}/2)\)._

When vertex-types are well-concentrated and stable, the approximation \(N_{t}N_{s}\approx n^{2}q_{t}q_{s}\) is valid. Thus, if we fix a kernel \(\kappa_{n}^{\prime}\), then we know in \(\texttt{IRD}_{n}(T,\kappa_{n}^{\prime})\) that \(A_{n}(t,s)\sim\texttt{Bin}(N_{t}N_{s},\kappa_{n}^{\prime}(t,s)/n)\approx\texttt{Bin}(n^{2}q_{t}q_{s},\kappa_{n}^{\prime}(t,s)/n)\). This will in turn allow us to show that \(A_{n}(t,s)\approx nq_{t}q_{s}\kappa_{n}^{\prime}(t,s)\), meaning that the heuristics based on averages are valid. We can translate this into the following two lemmas:

**Lemma 4.5** (Undershoot mixed binomials).: _Fix two stable vertex-types \(t,s\in\mathcal{S}\) at tolerance \(\tau\in(0,1)\) and set \(\Lambda_{n}(t,s)=\lfloor nq_{t}q_{s}\kappa(t,s)\rfloor\) for some kernel \(\kappa\). Let \(\kappa_{n}^{\prime}\) be a different function, and suppose for it there exists an \(\alpha\in(1/2-\tau/2,1/2)\) and a constant \(C>0\) such that_

\[\kappa_{n}^{\prime}(t,s)\geq\kappa(t,s)+\frac{Cn^{-1/2+\alpha}}{\sqrt{q_{t}q_{s}}}.\]

_Then, we have for \(n\) large that_

\[\mathbb{P}(A_{n}(t,s)<\Lambda_{n}(t,s))\leq 2\exp(-\log(n)^{2}/2).\]

**Lemma 4.6** (Overshoot mixed binomials).: _Fix two stable vertex-types \(t,s\in\mathcal{S}\) at tolerance \(\tau\in(0,1)\) and set \(\Lambda_{n}(t,s)=\lfloor nq_{t}q_{s}\kappa(t,s)\rfloor\) for some kernel \(\kappa\)._
_Let \(\kappa_{n}^{\prime}\) be a different function, and suppose for it there exists an \(\alpha\in(1/2-\tau/2,1/2)\) and a constant \(C>0\) such that_

\[\kappa_{n}^{\prime}(t,s)\leq\kappa(t,s)-\frac{Cn^{-1/2+\alpha}}{\sqrt{q_{t}q_{s}}}.\]

_Then, we have for \(n\) large that_

\[\mathbb{P}(A_{n}(t,s)>\Lambda_{n}(t,s))\leq 2\exp(-\log(n)^{2}/2).\]

The final missing piece of the puzzle is the influence of monotone events. As argued in the heuristics, monotonicity of \(\mathcal{Q}_{n}\) will provide upper and lower bounds on probabilities involving \(\mathtt{ARD}\). Thus, we need a result that shows these events are indeed able to give us such bounds. The following lemma tackles this.

**Lemma 4.7** (Monotonicity in \(\mathtt{ARD}\)).: _Let \(\mathcal{Q}_{n}\) be a monotone event and let \(\Lambda_{n}\) and \(\Lambda_{n}^{\prime}\) be two functions such that for all \(t,s\in\mathcal{S}\) we have that \(\Lambda_{n}(t,s)\leq\Lambda_{n}^{\prime}(t,s)\). Then, we have the following:_

1. _If_ \(\mathcal{Q}_{n}\) _is increasing, then_ \(\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})\leq\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n})\)_._
2. _If_ \(\mathcal{Q}_{n}\) _is decreasing, then_ \(\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})\geq\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n})\)_._

We are now in a position to prove Theorem 2.5. We will only give the proof for \(\mathcal{Q}_{n}\) that are increasing; the proof for decreasing events is analogous. The proofs of all the lemmas above are postponed until Section 5.2.

Proof of Theorem 2.5.: Let \(\mathcal{Q}_{n}\) be some increasing event, and define two kernels \(\kappa_{n}^{\pm}\) such that for \(t,s\in\mathcal{S}\) we have

\[\kappa_{n}^{\pm}(t,s)=\begin{cases}\max\left\{\kappa(t,s)\pm Cn^{-1/2+\alpha}/\sqrt{q_{t}q_{s}},0\right\},&\text{if $t$ and $s$ are stable},\\ 0,&\text{else}.\end{cases}\]

Recall that the target arc-count function \(\Lambda_{n}\) is given by

\[\Lambda_{n}(t,s)=\begin{cases}\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor,&\text{if $t$ and $s$ are stable},\\ 0,&\text{else}.\end{cases}\]

The proof will consist of four steps.

1. We show that \(\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})\leq\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{+})\in\mathcal{Q}_{n})+\xi_{n}\) using Lemma 4.7. Here, \(\xi_{n}\) is an error term involving mixed-binomial deviations.
2. We show that \(\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{-})\in\mathcal{Q}_{n})\leq\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})+\zeta_{n}\) again using Lemma 4.7. Here, \(\zeta_{n}\) is another error term involving mixed-binomial deviations.
3. We show that \(\xi_{n}\to 0\) using Assumption 2.3, Lemma 4.2 and Lemma 4.5.
4. We show that \(\zeta_{n}\to 0\) using Assumption 2.3, Lemma 4.2, and Lemma 4.6.
Together, Steps Ia and IIa show, with the convergence assumption on \(\mathtt{IRD}_{n}(T,\kappa_{n}^{+})\) and Assumption 2.3 on \(\kappa\), that

\[\limsup_{n\to\infty}\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})\leq\limsup_{n\to\infty}\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{+})\in\mathcal{Q}_{n})+\limsup_{n\to\infty}\xi_{n}=p+0.\]

Similarly, Steps Ib and IIb show, with the convergence assumption on \(\mathtt{IRD}_{n}(T,\kappa_{n}^{-})\) and Assumption 2.3 on \(\kappa\), that

\[\liminf_{n\to\infty}\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{-})\in\mathcal{Q}_{n})\leq\liminf_{n\to\infty}\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})+\liminf_{n\to\infty}\zeta_{n},\]

so that

\[p\leq\liminf_{n\to\infty}\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})\leq\limsup_{n\to\infty}\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})\leq p.\]

This shows the desired result, provided Steps I-II hold. We will now show this.

Step Ia.We set \(A_{n}^{+}\) to be the arc-to-vertex-type function of \(\mathtt{IRD}_{n}(T,\kappa_{n}^{+})\). We use the law of total probability to integrate over all possible realisations of \(A_{n}^{+}\):

\[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{+})\in\mathcal{Q}_{n})=\sum_{\Lambda_{n}^{\prime}}\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{+})\in\mathcal{Q}_{n}\mid A_{n}^{+}=\Lambda_{n}^{\prime})\mathbb{P}\left(\bigcap_{t,s\in\mathcal{S}}\{A_{n}^{+}(t,s)=\Lambda_{n}^{\prime}(t,s)\}\right).\]

We now define the set \(\mathcal{L}_{n}^{+}:=\{\Lambda_{n}^{\prime}:\Lambda_{n}^{\prime}(t,s)\geq\Lambda_{n}(t,s)\text{ for all }t,s\in\mathcal{S}\}\), and bound the sum from below by only considering \(\Lambda_{n}^{\prime}\) that fall in the set \(\mathcal{L}_{n}^{+}\). We also use the fact that \(\mathtt{IRD}\) conditioned on \(A_{n}^{+}\) equals \(\mathtt{ARD}\). This yields

\[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{+})\in\mathcal{Q}_{n})\geq\sum_{\Lambda_{n}^{\prime}\in\mathcal{L}_{n}^{+}}\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n})\mathbb{P}\left(\bigcap_{t,s\in\mathcal{S}}\{A_{n}^{+}(t,s)=\Lambda_{n}^{\prime}(t,s)\}\right).\]

In \(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\), note that the value of \(\Lambda_{n}^{\prime}(t,s)\) does not matter when either \(t\) or \(s\) is a vertex-type that does not appear in the graph: these excess arcs are deleted in the end anyway when one generates \(\mathtt{ARD}\). Now, we can use the fact that \(\mathcal{Q}_{n}\) is increasing, together with Lemma 4.7, to further lower bound the first probability in the product by considering \(\mathtt{ARD}_{n}(T,\Lambda_{n})\) instead of \(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\):

\[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{+})\in\mathcal{Q}_{n}) \geq\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})\sum_{\Lambda_{n}^{\prime}\in\mathcal{L}_{n}^{+}}\mathbb{P}\left(\bigcap_{t,s\in\mathcal{S}}\{A_{n}^{+}(t,s)=\Lambda_{n}^{\prime}(t,s)\}\right),\]
\[=\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})\mathbb{P}\left(\bigcap_{t,s\in\mathcal{S}}\{A_{n}^{+}(t,s)\geq\Lambda_{n}(t,s)\}\right). \tag{13}\]

Note that (13) is the split we alluded to in the strategy. We will now continue to lower bound the probability involving the arc-to-vertex-type function.
To do this, we first rewrite this probability using the complement rule, de Morgan's laws, and the union bound:

\[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{+})\in\mathcal{Q}_{n})\geq\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})-\sum_{t=1}^{\infty}\sum_{s=1}^{\infty}\mathbb{P}\left(A_{n}^{+}(t,s)<\Lambda_{n}(t,s)\right).\]

Finally, moving the error terms to the other side of the inequality yields the result we sought to obtain from this step:

\[\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})\leq\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{+})\in\mathcal{Q}_{n})+\underbrace{\sum_{t=1}^{\infty}\sum_{s=1}^{\infty}\mathbb{P}\left(A_{n}^{+}(t,s)<\Lambda_{n}(t,s)\right)}_{\xi_{n}}. \tag{14}\]

Step Ib.Similar to Step Ia, we use the law of total probability to condition on all possible realisations of the arc-to-vertex-type function. For \(\mathtt{IRD}_{n}(T,\kappa_{n}^{-})\) we denote the arc-to-vertex-type function by \(A_{n}^{-}\). This yields

\[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{-})\in\mathcal{Q}_{n})=\sum_{\Lambda_{n}^{\prime}}\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{-})\in\mathcal{Q}_{n}\mid A_{n}^{-}=\Lambda_{n}^{\prime})\mathbb{P}\left(\bigcap_{t,s}\{A_{n}^{-}(t,s)=\Lambda_{n}^{\prime}(t,s)\}\right). \tag{15}\]

The idea of this proof step is now to inductively chisel away individual vertex-type pairs from (15) to find the desired bound. We will show the main approach by considering the induction base \(t=s=1\). In this base case, split up the sum into the part where \(\Lambda_{n}^{\prime}(1,1)\leq\Lambda_{n}(1,1)\) and the part where \(\Lambda_{n}^{\prime}(1,1)>\Lambda_{n}(1,1)\). In the computation below, we also apply the link between \(\mathtt{ARD}\) and \(\mathtt{IRD}\).

\[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{-})\in\mathcal{Q}_{n})=\sum_{\Lambda_{n}^{\prime}(1,1)\leq\Lambda_{n}(1,1)}\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n})\mathbb{P}\left(\bigcap_{t,s}\{A_{n}^{-}(t,s)=\Lambda_{n}^{\prime}(t,s)\}\right)\]
\[\quad+\sum_{\Lambda_{n}^{\prime}(1,1)>\Lambda_{n}(1,1)}\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n})\mathbb{P}\left(\bigcap_{t,s}\{A_{n}^{-}(t,s)=\Lambda_{n}^{\prime}(t,s)\}\right).\]

For the second sum, we can simply bound the \(\mathtt{ARD}\) probability by one to find

\[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{-})\in\mathcal{Q}_{n})\leq\sum_{\Lambda_{n}^{\prime}(1,1)\leq\Lambda_{n}(1,1)}\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n})\mathbb{P}\left(\bigcap_{t,s}\{A_{n}^{-}(t,s)=\Lambda_{n}^{\prime}(t,s)\}\right)\]
\[\quad+\mathbb{P}\left(A_{n}^{-}(1,1)>\Lambda_{n}(1,1)\right).\]

Next, we show the idea of the induction step by considering the case \(t=1\) and \(s=2\). Like before, we split up the sum into the part where \(\Lambda_{n}^{\prime}(1,2)\leq\Lambda_{n}(1,2)\) and the part where \(\Lambda_{n}^{\prime}(1,2)>\Lambda_{n}(1,2)\).
\[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{-})\in\mathcal{Q}_{n})\leq\sum_{\begin{subarray}{c}\Lambda_{n}^{\prime}(1,1)\leq\Lambda_{n}(1,1)\\ \Lambda_{n}^{\prime}(1,2)\leq\Lambda_{n}(1,2)\end{subarray}}\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n})\mathbb{P}\left(\bigcap_{t,s}\{A_{n}^{-}(t,s)=\Lambda_{n}^{\prime}(t,s)\}\right)\]
\[\quad+\sum_{\begin{subarray}{c}\Lambda_{n}^{\prime}(1,1)\leq\Lambda_{n}(1,1)\\ \Lambda_{n}^{\prime}(1,2)>\Lambda_{n}(1,2)\end{subarray}}\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n})\mathbb{P}\left(\bigcap_{t,s}\{A_{n}^{-}(t,s)=\Lambda_{n}^{\prime}(t,s)\}\right)\]
\[\quad+\mathbb{P}\left(A_{n}^{-}(1,1)>\Lambda_{n}(1,1)\right).\]

In the second sum, we add all the removed terms involving \(\Lambda_{n}^{\prime}(1,1)\) again, creating an upper bound. Then, as in the base case, we bound the \(\mathtt{ARD}\) probability by one to remove the sum. We find

\[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{-})\in\mathcal{Q}_{n})\leq\sum_{\begin{subarray}{c}\Lambda_{n}^{\prime}(1,1)\leq\Lambda_{n}(1,1)\\ \Lambda_{n}^{\prime}(1,2)\leq\Lambda_{n}(1,2)\end{subarray}}\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n})\mathbb{P}\left(\bigcap_{t,s}\{A_{n}^{-}(t,s)=\Lambda_{n}^{\prime}(t,s)\}\right)\]
\[\quad+\mathbb{P}(A_{n}^{-}(1,2)>\Lambda_{n}(1,2))+\mathbb{P}\left(A_{n}^{-}(1,1)>\Lambda_{n}(1,1)\right).\]

Similar to Step Ia, now define the set \(\mathcal{L}_{n}^{-}:=\{\Lambda_{n}^{\prime}:\Lambda_{n}^{\prime}(t,s)\leq\Lambda_{n}(t,s)\text{ for all }t,s\in\mathcal{S}\}\). We can repeat the previous argumentation for all pairs of vertex-types to find

\[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{-})\in\mathcal{Q}_{n})\leq\sum_{\Lambda_{n}^{\prime}\in\mathcal{L}_{n}^{-}}\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n})\mathbb{P}\left(\bigcap_{t,s}\{A_{n}^{-}(t,s)=\Lambda_{n}^{\prime}(t,s)\}\right)\]
\[\quad+\sum_{t=1}^{\infty}\sum_{s=1}^{\infty}\mathbb{P}\left(A_{n}^{-}(t,s)>\Lambda_{n}(t,s)\right).\]

Now, we apply Lemma 4.7 to upper bound the \(\mathtt{ARD}\) probability, noting for all \(\Lambda_{n}^{\prime}\in\mathcal{L}_{n}^{-}\) that \(\Lambda_{n}\) is greater than or equal to it for all vertex-types. After bounding, we take the \(\mathtt{ARD}\) probability out of the sum, and note that the resulting sum is bounded by one. We find the desired outcome of this step:

\[\mathbb{P}(\mathtt{IRD}_{n}(T,\kappa_{n}^{-})\in\mathcal{Q}_{n})\leq\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})+\underbrace{\sum_{t=1}^{\infty}\sum_{s=1}^{\infty}\mathbb{P}\left(A_{n}^{-}(t,s)>\Lambda_{n}(t,s)\right)}_{\zeta_{n}}.\]

Step IIa.To show that \(\xi_{n}\) converges to zero, we first recall that \(A_{n}^{+}(t,s)\geq 0\). Thus, through the definition of \(\Lambda_{n}(t,s)\) we have for every \(t>u_{n}^{\uparrow}\) or \(s>u_{n}^{\uparrow}\) that

\[\mathbb{P}\left(A_{n}^{+}(t,s)<\Lambda_{n}(t,s)\right)=\mathbb{P}\left(A_{n}^{+}(t,s)<0\right)=0.\]

Hence, \(\xi_{n}\) simplifies into

\[\xi_{n}=\sum_{t=1}^{u_{n}^{\uparrow}}\sum_{s=1}^{u_{n}^{\uparrow}}\mathbb{P}\left(A_{n}^{+}(t,s)<\Lambda_{n}(t,s)\right).\]

Thus, through Lemma 4.5 we have that

\[\xi_{n}\leq 2u_{n}^{\uparrow}(\tau)^{2}\exp(-\log(n)^{2}/2).\]

Now, by applying Lemma 4.2 we find for some \(\delta>0\) and \(n\) large that

\[2u_{n}^{\uparrow}(\tau)^{2}\exp(-\log(n)^{2}/2)\leq 8n^{2(1-\tau)/(1+\delta)}\exp(-\log(n)^{2}/2).\]

Thus, we find indeed that \(\xi_{n}\to 0\).
Step IIb.We note from our definition of \(\kappa_{n}^{-}\) that for an unstable vertex-type \(t\) or \(s\) we have that \(\kappa_{n}^{-}(t,s)=0\). This implies that \(A_{n}^{-}(t,s)=0\) as well, meaning

\[\mathbb{P}(A_{n}^{-}(t,s)>\Lambda_{n}(t,s))=\mathbb{P}(A_{n}^{-}(t,s)>0)=0.\]

Thus, we may write

\[\zeta_{n}=\sum_{t=1}^{u_{n}^{\uparrow}}\sum_{s=1}^{u_{n}^{\uparrow}}\mathbb{P}(A_{n}^{-}(t,s)>\Lambda_{n}(t,s)).\]

Similar to Step IIa, we apply the bounds from Lemmas 4.6 and 4.2 to obtain

\[\zeta_{n}\leq 8n^{2(1-\tau)/(1+\delta)}\exp(-\log(n)^{2}/2).\]

Hence, we also have that \(\zeta_{n}\to 0\). Taking Steps I and II together, we have found the desired result. **Q.E.D.**

**Remark**.: Note that Lemma 4.4 does not play a direct role in the proof of Theorem 2.5. This is because it is a prerequisite for the proofs of Lemmas 4.5 and 4.6.

**Remark**.: In the proof of Theorem 2.5 we saw that the difficulty of each step lies in a different place. For example, the strategy to derive the upper bound in Step Ia needed a more careful use of the law of total probability, while in Step Ib the difficulty lay in carefully cutting away the correct error terms after applying the law of total probability. If we were to also provide the explicit proof for decreasing \(\mathcal{Q}_{n}\), all difficulties would flip: the careful cutting would happen in Step Ia, while the careful application of the law of total probability would happen in Step Ib.

**Remark**.: Theorem 2.5 shows how a result from IRD can be translated to ARD. One can formulate a similar theorem that shows how results from ARD can be translated to IRD. For the Erdős–Rényi and Gilbert models, theorems for both directions of equivalence can be found in [16]. The proof for the translation from ARD to IRD would use techniques similar to those in the proof of Theorem 2.5. Like for the equivalent result in [16], we expect that monotonicity of \(\mathcal{Q}_{n}\) is no longer required when translating from ARD to IRD.

### Heuristics behind Theorem 3.7

We want to relate CCI to IRD using Theorem 2.5. However, recall that in CCI there is no knowledge of which arcs are assigned to which vertex-types. We only know something about the potential vertex-types each arc can be placed in between. Thus, if we would like to apply Theorem 2.5 to CCI, we need to reveal the vertex-types each arc in CCI is going to be placed in between. Then, we are back in the setting of ARD, meaning our main result can be applied. To this end, it is instructive to introduce a random function \(\bar{A}_{n}:\mathcal{S}\times\mathcal{S}\to\mathbb{N}\) that counts in CCI the number of arcs that are placed in between two given vertex-types. We will call this function the _vertex-type arc count function_.

**Definition 4.8** (Vertex-type arc count function).: _In \(\texttt{CCI}_{n,\mu}(T,C,I,J)=([n],E)\), after removing self-loops and multi-arcs, we define the function \(\bar{A}_{n}:\mathcal{S}\times\mathcal{S}\to\mathbb{N}\) for two fixed vertex-types \(t,s\in\mathcal{S}\) as_

\[\bar{A}_{n}(t,s):=\left|\{(v,w)\in E:T_{v}=t\text{ and }T_{w}=s\}\right|.\]

By noting that arcs are placed between vertices uniformly, and by noting that uniform distributions conditioned on a subset of their support are still uniform, we can conclude that CCI conditioned on \(\bar{A}_{n}\) is equivalent to ARD. Hence, the key to connecting CCI to IRD will be to first show that \(\bar{A}_{n}\) concentrates, and then to apply Theorem 2.5.
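As a concrete reading of Definition 4.8, the following sketch (names are ours) computes \(\bar{A}_{n}\) from a raw arc list by discarding self-loops and collapsing multi-arcs first:

```python
from collections import Counter

def vertex_type_arc_count(arcs, types):
    """Definition 4.8 in code: count arcs per ordered pair of vertex-types,
    after removing self-loops and collapsing multi-arcs into simple arcs."""
    simple = {(v, w) for (v, w) in arcs if v != w}
    return Counter((types[v], types[w]) for (v, w) in simple)

arcs = [(0, 1), (0, 1), (1, 1), (1, 2)]    # one multi-arc, one self-loop
types = {0: 1, 1: 1, 2: 2}                 # vertex -> vertex-type
print(vertex_type_arc_count(arcs, types))  # Counter({(1, 1): 1, (1, 2): 1})
```

Conditioned on these counts, the remaining randomness is a uniform placement without replacement, which is precisely the ARD setting discussed above.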
Similar to the heuristics in Section 4.1, we want to show that \(\bar{A}_{n}\) concentrates around its mean. Recall from Section 3.2.2 that \(\kappa(t,s)/n\), with \(\kappa\) given by (7), can be interpreted as the expected number of arcs that connect a specific vertex of type \(t\) with a specific vertex of type \(s\). Hence, by noting that there will be roughly \(nq_{t}\) vertices with type \(t\) and \(nq_{s}\) vertices with type \(s\), we can deduce that we expect \(\kappa(t,s)nq_{t}q_{s}\) arcs to be placed from vertices of type \(t\) to vertices of type \(s\). Thus, to apply Theorem 2.5 we will show that

\[\bar{A}_{n}(t,s)\approx\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\frac{np_{ij}\mu\cdot nq_{t}\cdot nq_{s}\cdot I(t,i)J(s,j)}{n\lambda_{i}\cdot n\varrho_{j}}=n\kappa(t,s)q_{t}q_{s}\approx\lfloor n\kappa(t,s)q_{t}q_{s}\rfloor. \tag{16}\]

To show the above concentration, though, there is one detail that deserves special attention. Looking at Definition 4.8, we see that \(\bar{A}_{n}\) counts the arcs between two vertex-types _after removing self-loops and multi-arcs_. This highlights another big difference between CCI and ARD that we will have to cope with: ARD will not produce self-loops and multi-arcs, while CCI might. The connection between the two can only be made once we remove these, and hence we will need to show that CCI after erasing self-loops and multi-arcs still approximately equals CCI before doing so.

Figure 3 showcases the structure of the proof based on the previous discussion. In essence, the new tools that we will develop create the link between CCI and ARD. In this link, the concept of well-concentrated vertices will be used slightly differently. This is needed, since we lose a big benefit in CCI that IRD and ARD both had: in those two models we could consider the graph generation process for each pair of vertex-types independently of the other pairs. This is why the concept of well-concentrated vertices was defined in terms of pairs of two fixed vertex-types. However, in CCI it is possible that one arc type could possibly connect to all vertex-types simultaneously. Thus, if vertices are well-concentrated in this setting, then a significantly large number of vertex-types must be close to their mean _at the same time_.

The strategy to prove Theorem 3.7 will have the following steps. These are mirrored in the actual proof in Section 4.4.

Step I.We show that the realisation of CCI restricted to _super_-stable vertex-types is similar to the ARD model restricted to _super_-stable vertex-types.

Step II.We show that the realisation of CCI restricted to stable vertex-types is similar to the ARD model restricted to stable vertex-types.

Step III.We show that every instance of ARD we have created has an input function \(\Lambda_{n}\) for which the corresponding form of \(\kappa_{n}\) in Theorem 2.5 adheres to (3).

### Proof of Theorem 3.7

To prove Theorem 3.7, we will often rely on a direct consequence of Assumption 3.5, namely that the kernel belonging to CCI is bounded. Although not strictly needed to derive our results (cf. the remark after Theorem 3.7), this fact will greatly simplify the proofs of the other techniques we use, since we can always just replace kernel values by a constant.

**Lemma 4.9**.: _Under Assumption 3.5 the kernel \(\kappa\) in (7) is bounded._

As alluded to in Section 4.3, we will need to refine our notion of being well-concentrated.
Specifically, we will require that well-concentration holds for all vertex-types simultaneously, up to some upper-bound vertex-type that scales with \(u_{n}^{\uparrow}\) (cf. Definition 2.2). Like in Definition 4.3, this expanded notion of well-concentration will be satisfied with high probability in our models, since for each individual vertex-type well-concentration fails with super-polynomially small probability (cf. Lemma 4.4), and since the number of stable vertex-types grows only polynomially (cf. Lemma 4.2).

Apart from upgrading our notion of concentration, it will also be helpful to upgrade our notion of stability. Specifically, we will define a class of vertex-types that adhere to stricter stability requirements, which we will need for some of our proofs. We will call these vertex-types _super stable_. Their definition is based on Definition 2.2.

**Definition 4.10** (Super stable vertices).: _Recall the definition of stable vertex-types (cf. Definition 2.2), and note particularly its influence on the tolerance \(\tau\). We say a vertex-type is super stable when \(\tau>1/2\)._

We will see that for super stable vertex-types there are relatively few self-loops and multi-arcs in CCI. Moreover, like for the previous concept of stability, we will show that the occurrence of vertex-types that are not super stable is rather rare. When analysing CCI, this fact will allow us to split up the graph into super-stable vertices, which exhibit nice properties, and other vertices that occur so rarely that they might as well be removed from the graph. The extra result on stability, stating that vertices which are not super stable occur only rarely, is given below.

**Lemma 4.11** (Probability of unstable vertex-types).: _Suppose that \(\mathbb{E}[T^{\delta}]<\infty\) for some \(\delta>0\). Then, for \(n\) large and all \(r\in(0,\delta)\) there exists a constant \(\widehat{C}_{r}>0\) such that_

\[\mathbb{P}(T>u_{n}^{\uparrow}(\tau))\leq\widehat{C}_{r}\cdot n^{\frac{(\tau-1)(\delta-r)}{(1+\delta)}}.\]

_In other words, vertex-types that are not stable at tolerance \(\tau\) occur with only vanishing probability._

With the concept of super stability we can control the number of self-loops (connections from a vertex to itself) and multi-arcs (more than one of the same directed connection between two vertices). We will show that both the number of self-loops and the number of multi-arcs within each pair of vertex-types is sub-linear. Specifically, if we consider super stable vertex-types at tolerance \(\tau>1/2\), then we will show that the number of self-loops and multi-arcs is bounded by \(n^{2(1-\tau)}\). Observe that this bound becomes trivial when \(\tau\leq 1/2\), since CCI has only \(\lfloor\mu n\rfloor\) arcs in total.

Figure 3: The heuristics behind the proof of Theorem 3.7. We want to identify CCI with given parameters as IRD, so we will have to (a) remove self-loops and multi-edges, and assign arcs to vertex-types. This brings us into the setting of ARD. Thereafter, we will have to (b) apply Theorem 2.5 to get into the setting of IRD.

**Lemma 4.12** (Number of self-loops).: _Fix a \(\tau\in(0,1)\). Denote by \(S_{t}\) the number of self-loops on vertices with type \(t\in\mathcal{S}\). We have for all \(r>0\) and \(\nu<\tau\) that_

\[\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau)}\{S_{t}>n^{1-\nu}\}\right)\leq 1/n^{r}.\]

**Remark**.: Note that for Lemma 4.12 super stability is not required.

**Lemma 4.13** (Number of multi-arcs).: _Fix a \(\tau\in(1/2,1)\)._
_Denote by \(M_{ts}\) the number of multi-arcs from vertices of type \(t\in\mathcal{S}\) towards vertices of type \(s\in\mathcal{S}\). We have for all \(r>0\) and \(\nu<2\tau-1\) that_

\[\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau)}\bigcup_{s=1}^{u_{n}^{\uparrow}(\tau)}\{M_{ts}>n^{1-\nu}\}\right)\leq 1/n^{r}.\]

With the quantification of self-loops and multi-arcs we can start the computation of \(\bar{A}_{n}\). This will happen in two steps. First, we will count the number of arcs between two fixed vertex-types, where we do not distinguish between "normal" arcs and "bad" multi-arcs/self-loops. We will do this by first estimating the probability that one arc gets assigned to two given vertex-types (for super stable and other vertex-types separately). Then, we will use these probabilities to estimate the total number of arcs placed between two given vertex-types. Hereto, we start by defining random variables that count the number of arcs that are placed between two vertex-types. Compare this with Definition 4.8, and note that below we do not remove self-loops and multi-edges.

**Definition 4.14** (Non-unique vertex-type arc count function).: _In \(\texttt{CCI}_{n,\mu}(T,C,I,J)=([n],E)\), before removing self-loops and multi-arcs, we define the function \(\bar{\bar{A}}_{n}:\mathcal{S}\times\mathcal{S}\to\mathbb{N}\) for two fixed vertex-types \(t,s\in\mathcal{S}\) as_

\[\bar{\bar{A}}_{n}(t,s):=\left|\{(v,w)\in E:T_{v}=t\text{ and }T_{w}=s\}\right|.\]

**Lemma 4.15** (Arc to vertex-type probability).: _Suppose \(t\) and \(s\) are two stable vertex-types at tolerance \(\tau\in(0,1)\), and assume that Assumption 3.5 is satisfied. Define \(\mathcal{A}_{ts}^{(a)}\) as the event that arc \(a\in[\mu n]\) is placed from a vertex with type \(t\) to a vertex with type \(s\). Then, there exists a constant \(\widehat{C}>0\) such that_

\[\left|\mathbb{P}(\mathcal{A}_{ts}^{(a)})-\frac{q_{t}q_{s}\kappa(t,s)}{\mu}\right|\leq\widehat{C}\log(n)n^{-\frac{\tau}{2}}.\]

**Lemma 4.16** (Arc to vertex-type probability - no stability).: _Suppose \(t,s\in\mathcal{S}\) are two vertex-types of which at least one is not stable at tolerance \(\tau\in(0,1)\). Assume that Assumption 3.5 is satisfied. Define \(\mathcal{A}_{ts}^{(a)}\) as the event that arc \(a\in[\mu n]\) is placed from a vertex with type \(t\) to a vertex with type \(s\). Then, there exists a constant \(\widehat{C}>0\) such that for any fixed \(r>0\) we have that_

\[\mathbb{P}(\mathcal{A}_{ts}^{(a)})\leq\widehat{C}\sqrt{q_{t}q_{s}}\left(\sqrt{q_{t}q_{s}}+\frac{\log(n)}{\sqrt{n}}+\frac{1}{n^{r}}\right).\]

Lemma 4.15 shows that the probability of arc placement between two stable vertex-types \(t,s\in\mathcal{S}\) is roughly \(q_{t}q_{s}\kappa(t,s)/\mu\). Since we seek to place \(\mu n\) arcs, we can derive from this that there will be roughly \(q_{t}q_{s}\kappa(t,s)n\) arcs from vertices of type \(t\) to vertices of type \(s\). This is the expected number we require in (16). Furthermore, Lemma 4.16 shows that the probability of arcs being placed in between some unstable vertices is small. Together with Lemma 4.9, this will show that these types of arcs do not contribute too much to the overall count. Together, we can use these considerations to show that \(\bar{\bar{A}}_{n}\) concentrates for all stable vertex-types.

**Lemma 4.17** (Non-unique arc to vertex-type count).: _Suppose that Assumption 3.5 is satisfied._
_Then, for all \(\alpha>3/8\) and \(C>0\) we have that_

\[\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau)}\bigcup_{s=1}^{u_{n}^{\uparrow}(\tau)}\left\{\left|\bar{\bar{A}}_{n}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\right|>Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\}\right)=o(1),\]

_for all \(\tau>1/2-\alpha\)._

Lemma 4.17 provides the final piece of the puzzle. Not only does the parameter \(\alpha\) that appears in the lemma's statement coincide with the parameter given in Theorem 3.7, but with the lemma we are now also in a position to prove Theorem 3.7. First, with Lemmas 4.12 and 4.13, we will show that \(\bar{\bar{A}}_{n}\) is close to \(\bar{A}_{n}\) (cf. Definition 4.8) for stable vertex-types. Thereafter, we apply this "closeness" together with Assumption 3.5, Assumption 3.6, and Theorem 2.5 to prove Theorem 3.7. We will use the same steps as outlined in Section 4.3.

Proof of Theorem 3.7.: Fix a constant \(\alpha\) such that \(3/8<\alpha<1/2\). Given the value of \(\alpha\), choose a super-stable tolerance \(\tau^{+}>1/2\) (cf. Definition 4.10) and a value \(\varepsilon\) close enough to zero such that

\[\frac{3}{4}<1-\frac{2+\varepsilon}{8+6\varepsilon}\leq\tau^{+}<2\alpha. \tag{17}\]

This is possible for two reasons. Firstly, the fraction in (17) converges to \(3/4=2\cdot 3/8<2\alpha\) for \(\varepsilon\to 0\). Secondly, we can take \(\varepsilon\to 0\), since the requirement \(\mathbb{E}[T^{1+\varepsilon}]<\infty\) for some \(\varepsilon>0\) in Assumption 3.5 implies that \(\mathbb{E}[T^{1+\varepsilon^{\prime}}]<\infty\) for all \(\varepsilon^{\prime}<\varepsilon\) as well. Additionally, choose a stable tolerance \(\tau>1-2\alpha\). We will move through the following steps to prove the result:

1. We will use Lemmas 4.12, 4.13 and 4.17 to show that
\[\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau^{+})}\bigcup_{s=1}^{u_{n}^{\uparrow}(\tau^{+})}\left\{\left|\bar{A}_{n}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\right|>n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\}\right)=o(1). \tag{18}\]
2. We will use Lemmas 4.9 and 4.17 to upgrade the result of Step I to
\[\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau)}\bigcup_{s=1}^{u_{n}^{\uparrow}(\tau)}\left\{\left|\bar{A}_{n}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\right|>n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\}\right)=o(1). \tag{19}\]
3. We will use the result of Step II together with Theorem 2.5 to show the desired result.

Step I.Fix two super-stable vertex-types \(t,s\in\mathcal{S}\). Denote by \(S_{t}\) the number of self-loops on vertices of type \(t\), and by \(M_{ts}\) the number of multi-arcs from a vertex of type \(t\) to a vertex of type \(s\). First of all, we note that

\[\left(\bar{\bar{A}}_{n}(t,s)-S_{t}\mathbb{1}_{\{t=s\}}-M_{ts}\right)\lor 0\leq\bar{A}_{n}(t,s)\leq\bar{\bar{A}}_{n}(t,s).\]

Using this, we bound the probability in (18) from above by

\[\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau^{+})}\bigcup_{s=1}^{u_{n}^{\uparrow}(\tau^{+})}\left\{\left|\bar{\bar{A}}_{n}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\right|>n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\}\right) \tag{20a}\]
\[+\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau^{+})}\bigcup_{s=1}^{u_{n}^{\uparrow}(\tau^{+})}\left\{\left|\bar{\bar{A}}_{n}(t,s)-S_{t}\mathbb{1}_{\{t=s\}}-M_{ts}-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\right|>n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\}\right). \tag{20b}\]

Now, we seek to apply Lemma 4.17 to both probabilities in the sum. To the first, we can apply it directly. For the second, we will have to deal with the inclusion of the \(S_{t}\) and \(M_{ts}\) random variables.
To do this, define the events \(\mathcal{S}_{ts}:=\{S_{t}\mathbb{1}_{\{t=s\}}\leq n^{1-\nu_{1}}\}\) for some \(\nu_{1}<\tau^{+}\) and \(\mathcal{M}_{ts}:=\{M_{ts}\leq n^{1-\nu_{2}}\}\) for some \(\nu_{2}<2\tau^{+}-1\). We can now intersect the second probability with these events and apply the union bound to find an upper bound for (20b):

\[\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau^{+})}\bigcup_{s=1}^{u_{n}^{\uparrow}(\tau^{+})}\left\{\left|\bar{\bar{A}}_{n}(t,s)-S_{t}\mathbb{1}_{\{t=s\}}-M_{ts}-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\right|>n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\}\cap\mathcal{S}_{ts}\cap\mathcal{M}_{ts}\right) \tag{21a}\]
\[+\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau^{+})}\neg\mathcal{S}_{tt}\right)+\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau^{+})}\bigcup_{s=1}^{u_{n}^{\uparrow}(\tau^{+})}\neg\mathcal{M}_{ts}\right). \tag{21b}\]

We will now show that (21a) converges to zero. To do this, first note that conditioned on the events \(\mathcal{S}_{ts}\) and \(\mathcal{M}_{ts}\) we may replace \(S_{t}\) by \(n^{1-\nu_{1}}\) and \(M_{ts}\) by \(n^{1-\nu_{2}}\) in order to create a larger probability. Doing this, applying the triangle inequality, and removing the events \(\mathcal{S}_{ts}\) and \(\mathcal{M}_{ts}\) yields the upper bound

\[\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau^{+})}\bigcup_{s=1}^{u_{n}^{\uparrow}(\tau^{+})}\left\{\left|\bar{\bar{A}}_{n}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\right|>n^{1/2+\alpha}\sqrt{q_{t}q_{s}}-n^{1-\nu_{1}}-n^{1-\nu_{2}}\right\}\right). \tag{22}\]

Now, we seek to show that \(n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\) dominates \(n^{1-\nu_{i}}\) for \(i\in\{1,2\}\). Recall that \(q_{t},q_{s}\geq n^{-1+\tau^{+}}\) (due to super-stability) and conclude that

\[n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\geq n^{\alpha+\tau^{+}-1/2}.\]

Thus, for domination we only need to show that \(\alpha+\tau^{+}-1/2>1-\nu_{i}\) for \(i\in\{1,2\}\). Moreover, since we can choose \(\nu_{1}\) and \(\nu_{2}\) arbitrarily close to \(\tau^{+}\) and \(2\tau^{+}-1\), respectively, we have domination once we show both that \(\alpha+\tau^{+}-1/2>1-\tau^{+}\) and \(\alpha+\tau^{+}-1/2>2-2\tau^{+}\). Both of these inequalities are true, since we picked \(\alpha>3/8\) and \(\tau^{+}>3/4\) in (17). Because \(n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\) dominates \(n^{1-\nu_{1}}\) and \(n^{1-\nu_{2}}\), we may further upper-bound (22) by

\[\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau^{+})}\bigcup_{s=1}^{u_{n}^{\uparrow}(\tau^{+})}\left\{\left|\bar{\bar{A}}_{n}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\right|>\frac{1}{2}n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\}\right).\]

Note this upper bound is \(o(1)\) due to Lemma 4.17. Thus, indeed (21a) converges to zero. We also immediately have that (21b) converges to zero from Lemmas 4.12 and 4.13. Hence, we can finally conclude that (20b) converges to zero; since Lemma 4.17 also shows that (20a) converges to zero, (18) is satisfied.

Step II.In this step we will bound the following probability:

\[\mathbb{P}\left(\bigcup_{\begin{subarray}{c}t,s\leq u_{n}^{\uparrow}(\tau)\\ (t\lor s)>u_{n}^{\uparrow}(\tau^{+})\end{subarray}}\left\{\left|\bar{A}_{n}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\right|>n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\}\right). \tag{23}\]

Note that the union inside (23) considers only the pairs of vertex-types \(t,s\in\mathcal{S}\) for which at least one of the two is not super-stable. This is the difference between the union in (19) and the one in (18). Thus, if (23) converges to zero, then (due to Step I) we also have that (19) converges to zero. To show that (23) converges to zero, we will first show that \(n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\) is greater than \(\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\).
To this end, it follows from Lemma 4.9 that there exists a constant \(\kappa^{\uparrow}\in\mathbb{R}\) such that

\[n^{1/2+\alpha}\sqrt{q_{t}q_{s}}-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\geq n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\left(1-\kappa^{\uparrow}n^{1/2-\alpha}\sqrt{q_{t}q_{s}}\right).\]

Because either vertex-type \(t\) or \(s\) is not super-stable, we know that

\[\kappa^{\uparrow}n^{1/2-\alpha}\sqrt{q_{t}q_{s}}\leq\kappa^{\uparrow}n^{\tau^{+}/2-\alpha}.\]

From (17) we find that \(\alpha>\tau^{+}/2\), implying for \(n\) large that \(\kappa^{\uparrow}n^{1/2-\alpha}\sqrt{q_{t}q_{s}}<1\). Hence, we may indeed conclude that

\[n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\geq\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor.\]

The consequence of this, due to the fact that \(\bar{A}_{n}\geq 0\), is that (23) equals

\[\mathbb{P}\left(\bigcup_{\begin{subarray}{c}t,s\leq u_{n}^{\uparrow}(\tau)\\ (t\lor s)>u_{n}^{\uparrow}(\tau^{+})\end{subarray}}\left\{\bar{A}_{n}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor>n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\}\right).\]

The idea is now to upper-bound this probability by recalling that \(\bar{\bar{A}}_{n}\geq\bar{A}_{n}\) and using Lemma 4.17 to show it converges to zero. We can apply Lemma 4.17, since \(\tau>1-2\alpha>1/2-\alpha\). Putting this plan into action yields

\[\mathbb{P}\left(\bigcup_{\begin{subarray}{c}t,s\leq u_{n}^{\uparrow}(\tau)\\ (t\lor s)>u_{n}^{\uparrow}(\tau^{+})\end{subarray}}\left\{\bar{A}_{n}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor>n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\}\right)\leq\mathbb{P}\left(\bigcup_{\begin{subarray}{c}t,s\leq u_{n}^{\uparrow}(\tau)\\ (t\lor s)>u_{n}^{\uparrow}(\tau^{+})\end{subarray}}\left\{\bar{\bar{A}}_{n}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor>n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\}\right),\]
\[\leq\mathbb{P}\left(\bigcup_{\begin{subarray}{c}t,s\leq u_{n}^{\uparrow}(\tau)\\ (t\lor s)>u_{n}^{\uparrow}(\tau^{+})\end{subarray}}\left\{\left|\bar{\bar{A}}_{n}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\right|>n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\}\right)=o(1).\]

Step III.We will now use Assumption 3.6 to remove the influence of unstable vertices. Then, we condition on the realisation of \(\bar{A}_{n}\) using the law of total probability to turn CCI into ARD. This yields

\[\mathbb{P}\left(\mathtt{CCI}_{n,\mu}\in\mathcal{Q}_{n}\right)=\mathbb{P}\left(\mathtt{CCI}_{n,\mu}^{-}\in\mathcal{Q}_{n}\right)+o(1),\]
\[=\sum_{\Lambda_{n}^{\prime}}\mathbb{P}\left(\mathtt{CCI}_{n,\mu}^{-}\in\mathcal{Q}_{n}\mid\bar{A}_{n}=\Lambda_{n}^{\prime}\right)\mathbb{P}\left(\bar{A}_{n}=\Lambda_{n}^{\prime}\right)+o(1),\]
\[=\sum_{\Lambda_{n}^{\prime}}\mathbb{P}\left(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n}\right)\mathbb{P}\left(\bar{A}_{n}=\Lambda_{n}^{\prime}\right)+o(1).\]

Note in the expression above that \(\mathbb{P}\left(\bar{A}_{n}=\Lambda_{n}^{\prime}\right)=0\) if there exists an unstable vertex-type \(t\) or \(s\) at tolerance \(\tau\) for which \(\Lambda_{n}^{\prime}(t,s)>0\). Hence, we can simply set \(\bar{A}_{n}(t,s)=0\) for these vertex-types.
Now, define the following set of "desirable" \(\bar{A}_{n}\) realisations:

\[\mathcal{L}_{n}=\left\{\Lambda_{n}^{\prime}:\left|\Lambda_{n}^{\prime}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\right|\leq n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\text{ for all }t,s\leq u_{n}^{\uparrow}(\tau)\right\}.\]

Splitting up the above sum into the values of \(\Lambda_{n}^{\prime}\) that fall within \(\mathcal{L}_{n}\) and the ones that do not yields

\[\mathbb{P}\left(\mathtt{CCI}_{n,\mu}\in\mathcal{Q}_{n}\right)\leq\sum_{\Lambda_{n}^{\prime}\in\mathcal{L}_{n}}\mathbb{P}\left(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n}\right)\mathbb{P}\left(\bar{A}_{n}=\Lambda_{n}^{\prime}\right)\]
\[\quad+\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau)}\bigcup_{s=1}^{u_{n}^{\uparrow}(\tau)}\left\{\left|\bar{A}_{n}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\right|>n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\}\right)+o(1).\]

We can find a similar lower bound by disregarding all the values of \(\Lambda_{n}^{\prime}\) that do not fall within \(\mathcal{L}_{n}\). Hence, by using the result of Step II, we now have upper and lower bounds on the desired probability in terms of ARD probabilities, given by

\[\sum_{\Lambda_{n}^{\prime}\in\mathcal{L}_{n}}\mathbb{P}\left(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n}\right)\mathbb{P}\left(\bar{A}_{n}=\Lambda_{n}^{\prime}\right)\leq\mathbb{P}\left(\mathtt{CCI}_{n,\mu}\in\mathcal{Q}_{n}\right)\leq\sum_{\Lambda_{n}^{\prime}\in\mathcal{L}_{n}}\mathbb{P}\left(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n}\right)\mathbb{P}\left(\bar{A}_{n}=\Lambda_{n}^{\prime}\right)+o(1).\]

We will end the proof by showing that

\[\sum_{\Lambda_{n}^{\prime}\in\mathcal{L}_{n}}\mathbb{P}\left(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n}\right)\mathbb{P}\left(\bar{A}_{n}=\Lambda_{n}^{\prime}\right)\to p. \tag{24}\]

To achieve this, we first note for all \(\Lambda_{n}^{\prime}\in\mathcal{L}_{n}\) that \(\Lambda_{n}^{\prime}(t,s)=0\) when either \(t\) or \(s\) is unstable, and that for all other \(t,s\leq u_{n}^{\uparrow}(\tau)\) we have

\[\Lambda_{n}^{-}(t,s):=\left\lfloor\kappa(t,s)q_{t}q_{s}n\right\rfloor-\left\lfloor n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\rfloor\leq\Lambda_{n}^{\prime}(t,s)\leq\left\lfloor\kappa(t,s)q_{t}q_{s}n\right\rfloor+\left\lfloor n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\rfloor=:\Lambda_{n}^{+}(t,s).\]

Here, the floors around \(n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\) are valid, since \(\Lambda_{n}^{\prime}(t,s)-\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\) must be an integer. We will now exploit monotonicity of \(\mathcal{Q}_{n}\) to take the ARD probability out of the sum in (24). We will assume without loss of generality that \(\mathcal{Q}_{n}\) is increasing; the decreasing case is analogous.
From Lemma 4.7 we know that

\[\mathbb{P}\left(\mathtt{ARD}_{n}(T,\Lambda_{n}^{-})\in\mathcal{Q}_{n}\right)\leq\mathbb{P}\left(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n}\right)\leq\mathbb{P}\left(\mathtt{ARD}_{n}(T,\Lambda_{n}^{+})\in\mathcal{Q}_{n}\right).\]

Thus, when substituting these bounds into (24), removing the ARD probabilities from the sum, and computing the remaining sum, we find

\[\mathbb{P}\left(\mathtt{ARD}_{n}(T,\Lambda_{n}^{-})\in\mathcal{Q}_{n}\right)\mathbb{P}\left(\bar{A}_{n}\in\mathcal{L}_{n}\right)\leq\sum_{\Lambda_{n}^{\prime}\in\mathcal{L}_{n}}\mathbb{P}\left(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n}\right)\mathbb{P}\left(\bar{A}_{n}=\Lambda_{n}^{\prime}\right) \tag{25a}\]
\[\leq\mathbb{P}\left(\mathtt{ARD}_{n}(T,\Lambda_{n}^{+})\in\mathcal{Q}_{n}\right)\mathbb{P}\left(\bar{A}_{n}\in\mathcal{L}_{n}\right). \tag{25b}\]

From the result of Step II we know that \(\mathbb{P}\left(\bar{A}_{n}\in\mathcal{L}_{n}\right)\to 1\). Thus, we now seek to invoke Theorem 2.5 to show that both ARD probabilities converge to \(p\). For this, we first focus on the lower bound in (25a) and notice that

\[\Lambda_{n}^{-}(t,s)=\left\lfloor\kappa(t,s)q_{t}q_{s}n\right\rfloor-\left\lfloor n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\rfloor\geq\left\lfloor\kappa(t,s)q_{t}q_{s}n-n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right\rfloor=\left\lfloor\left(\kappa(t,s)-\frac{n^{-1/2+\alpha}}{\sqrt{q_{t}q_{s}}}\right)q_{t}q_{s}n\right\rfloor.\]

Now, we consider the kernel \(\kappa_{n}^{-}(t,s)=\kappa(t,s)-n^{-1/2+\alpha}/\sqrt{q_{t}q_{s}}\). Note that our initial choice of \(\alpha\) and \(\tau\) implies that \(\alpha>1/2-\tau/2\). We also have that \(\alpha>\tau/2\), since \(\tau<1/2\) and \(\alpha>3/8\). All in all, with our choices of \(\alpha\) and \(\tau\), Assumption 2.3 is satisfied. This is because Lemma 4.9 tells us, for \(n\) large and at least one unstable vertex-type, that there exists a constant \(c>0\) such that

\[\kappa(t,s)\leq\frac{c\sqrt{q_{t}q_{s}}}{\sqrt{q_{t}q_{s}}}\leq\frac{cn^{-1/2+\tau/2}}{\sqrt{q_{t}q_{s}}}\leq\frac{cn^{-1/2+\alpha}}{\sqrt{q_{t}q_{s}}}.\]

Moreover, if we fix a kernel \(\kappa^{\prime}_{n}\) such that \(|\kappa^{\prime}_{n}(t,s)-\kappa^{-}_{n}(t,s)|\leq n^{-1/2+\alpha}/\sqrt{q_{t}q_{s}}\), then by the triangle inequality we particularly have that

\[|\kappa^{\prime}_{n}(t,s)-\kappa(t,s)|\leq\left|\kappa^{\prime}_{n}(t,s)-\kappa^{-}_{n}(t,s)\right|+\left|\kappa^{-}_{n}(t,s)-\kappa(t,s)\right|\leq 2n^{-1/2+\alpha}/\sqrt{q_{t}q_{s}}.\]

Through the assumption in (9) we may now conclude that \(\mathbb{P}(\texttt{IRD}_{n}(T,\kappa^{\prime}_{n})\in\mathcal{Q}_{n})\to p\). Hence, when we set \(\mathring{\Lambda}^{-}_{n}(t,s)=\lfloor\kappa^{-}_{n}(t,s)q_{t}q_{s}n\rfloor\), we indeed find for (25a), through an application of Theorem 2.5 (with \(C=1\)), that

\[p=\lim_{n\to\infty}\mathbb{P}\left(\texttt{ARD}_{n}(T,\mathring{\Lambda}^{-}_{n})\in\mathcal{Q}_{n}\right)\mathbb{P}\left(\bar{A}_{n}\in\mathcal{L}_{n}\right)\leq\liminf_{n\to\infty}\mathbb{P}\left(\texttt{ARD}_{n}(T,\Lambda^{-}_{n})\in\mathcal{Q}_{n}\right)\mathbb{P}\left(\bar{A}_{n}\in\mathcal{L}_{n}\right).\]

The approach to show that (25b) converges to \(p\) is the same.
We first note that

\[\Lambda^{+}_{n}(t,s)=\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor+\lfloor n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\rfloor\leq\lfloor\kappa(t,s)q_{t}q_{s}n+n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\rfloor=\left\lfloor\left(\kappa(t,s)+\frac{n^{-1/2+\alpha}}{\sqrt{q_{t}q_{s}}}\right)q_{t}q_{s}n\right\rfloor.\]

Now, we consider the kernel \(\kappa^{+}_{n}(t,s)=\kappa(t,s)+n^{-1/2+\alpha}/\sqrt{q_{t}q_{s}}\) and note that Assumption 2.3 is satisfied for this kernel due to Lemma 4.9 if we pick e.g. \(C=2\). Now, we fix an arbitrary kernel \(\kappa^{\prime}_{n}\) such that \(|\kappa^{\prime}_{n}(t,s)-\kappa^{+}_{n}(t,s)|\leq 2n^{-1/2+\alpha}/\sqrt{q_{t}q_{s}}\), and note by the triangle inequality that

\[|\kappa^{\prime}_{n}(t,s)-\kappa(t,s)|\leq\left|\kappa^{\prime}_{n}(t,s)-\kappa^{+}_{n}(t,s)\right|+\left|\kappa^{+}_{n}(t,s)-\kappa(t,s)\right|\leq 3n^{-1/2+\alpha}/\sqrt{q_{t}q_{s}}.\]

Through the assumption in (9) we may now conclude that \(\mathbb{P}(\texttt{IRD}_{n}(T,\kappa^{\prime}_{n})\in\mathcal{Q}_{n})\to p\). Hence, when we set \(\mathring{\Lambda}^{+}_{n}(t,s)=\lfloor\kappa^{+}_{n}(t,s)q_{t}q_{s}n\rfloor\), we indeed find for (25b), through an application of Theorem 2.5 (with \(C=2\)), that

\[\limsup_{n\to\infty}\mathbb{P}\left(\texttt{ARD}_{n}(T,\Lambda^{+}_{n})\in\mathcal{Q}_{n}\right)\mathbb{P}\left(\bar{A}_{n}\in\mathcal{L}_{n}\right)\leq\lim_{n\to\infty}\mathbb{P}\left(\texttt{ARD}_{n}(T,\mathring{\Lambda}^{+}_{n})\in\mathcal{Q}_{n}\right)\mathbb{P}\left(\bar{A}_{n}\in\mathcal{L}_{n}\right)=p.\]

We have now shown that the lower and upper bounds in (25a) and (25b) both converge to \(p\). Thus, through (24), we have indeed that \(\mathbb{P}(\texttt{CCI}_{n,\mu}(T,C,I,J)\in\mathcal{Q}_{n})\to p\). **Q.E.D.**

**Remark**.: Lemmas 4.11, 4.15 and 4.16 were not directly used in the proof of Theorem 3.7. They are all needed to prove the "main technical lemma" of this section: Lemma 4.17.

## 5 Proofs of propositions and lemmas

We end this paper by giving the proofs of all lemmas and propositions that were stated in the main text. We will start with the proofs of all propositions in Section 3, then give the proofs of all lemmas in Section 4.2, and finally give the proofs of all lemmas in Section 4.4.

### Proofs of propositions

Proof of Proposition 3.3.: When \(\mathbb{E}[W^{1+\varepsilon}]<\infty\), we know that \(\sum_{t=1}^{\infty}t^{1+\varepsilon}\cdot q_{t}<\infty\). This has the following two consequences:

1. There exists a \(t_{1}\) for which \((q_{t})_{t\geq t_{1}}\) is a decreasing sequence.
2. There exists a \(t_{2}\) for which \(q_{t}<t^{-2-\varepsilon}\) for all \(t\geq t_{2}\).

Define \(t^{\downarrow}:=\max\{t_{1},t_{2}\}\), fix some \(n\in\mathbb{N}\) large and two vertex-types \(t,s\in\mathbb{N}\). Suppose that \(t>t^{\downarrow}\); then from consequence 2. we have that \(q_{t}^{-1/(2+\varepsilon)}>t\). This yields the following lower bound:

\[q_{t}^{-1/2}=q_{t}^{-\frac{\varepsilon}{2(2+\varepsilon)}}\cdot q_{t}^{-\frac{1}{2+\varepsilon}}\geq q_{t}^{-\frac{\varepsilon}{2(2+\varepsilon)}}\cdot t. \tag{26}\]

Now, assume without loss of generality that vertex-type \(t\) is unstable at some unspecified tolerance \(\tau\). According to Definition 2.2, for this type \(t\) we must have \(q_{t}<n^{-1+\tau}\). Since this upper bound converges to zero, we must also have that \(u_{n}^{\uparrow}(\tau)\to\infty\) as \(n\to\infty\).
Hence, there exists an \(n\) large enough for which \(u_{n}^{\uparrow}>t^{\downarrow}\), implying (26) is satisfied for every unstable vertex-type. Now, focus on the possibly stable vertex-type \(s\). If it happens that \(s>t^{\downarrow}\), then (26) is valid for type \(s\) too. It is only possible that (26) is not valid for type \(s\) when \(s\leq t^{\downarrow}\). However, since \(t^{\downarrow}\) is independent of \(n\), we can choose a \(C>0\) large enough such that

\[Cq_{s}^{-1/2}\geq s/\mathbb{E}[W]\qquad\text{for all }s\leq t^{\downarrow}. \tag{27}\]

By noting in (26) that \(q_{t}^{-\frac{\varepsilon}{2(2+\varepsilon)}}\cdot t\geq t\), we may conclude that (27) holds for all \(s\in\mathbb{N}\). Now, it is time to lower-bound (2) in Assumption 2.3 with the aforementioned value of \(C\). We find

\[\frac{Cn^{\alpha-1/2}}{\sqrt{q_{t}q_{s}}}=\frac{C}{\sqrt{q_{s}}}\cdot\frac{n^{\alpha-1/2}}{\sqrt{q_{t}}}\geq\frac{ts}{\mathbb{E}[W]}\cdot n^{\alpha-1/2}\cdot q_{t}^{-\frac{\varepsilon}{2(2+\varepsilon)}}=\kappa(t,s)\cdot n^{\alpha-1/2}\cdot q_{t}^{-\frac{\varepsilon}{2(2+\varepsilon)}}.\]

Here, we used (26) and (27) at the inequality. We will end the proof by showing that \(n^{\alpha-1/2}\cdot q_{t}^{-\frac{\varepsilon}{2(2+\varepsilon)}}\geq 1\) for some choice of \(\tau,\alpha>0\), after which Assumption 2.3 will be satisfied. First, we use instability of \(t\) to bound

\[n^{\alpha-1/2}\cdot q_{t}^{-\frac{\varepsilon}{2(2+\varepsilon)}}\geq n^{\alpha-1/2}\cdot\big{(}n^{1-\tau}\big{)}^{\frac{\varepsilon}{2(2+\varepsilon)}}=n^{\frac{\varepsilon}{2(2+\varepsilon)}+\alpha-1/2-\frac{\tau\varepsilon}{2(2+\varepsilon)}}.\]

If an admissible pair \(\alpha,\tau>0\) exists, then from Assumption 2.3 it follows that \(\alpha-1/2>-\tau/2\). Now take any \(0<\tau<\varepsilon/(2+2\varepsilon)<1/2\). Then we have that

\[\frac{\varepsilon}{2(2+\varepsilon)}+\alpha-1/2-\frac{\varepsilon\tau}{2(2+\varepsilon)}\geq\frac{\varepsilon-\tau(2+2\varepsilon)}{2(2+\varepsilon)}>0,\]

which implies that \(n^{\alpha-1/2}\cdot q_{t}^{-\frac{\varepsilon}{2(2+\varepsilon)}}\geq 1\). **Q.E.D.**

Proof of Proposition 3.9.: As the probability measure \(\nu\) we can simply take the Borel measure that assigns the probabilities \(\nu(\{t\})=q_{t}\) for all \(t\in\mathbb{N}\). Then, by the weak law of large numbers, condition (a) is satisfied. Moreover, since \(\mathbb{N}\) is a discrete space, we also have that the continuity conditions in (b) and (c) are satisfied. Hence, we are left to show that

1. \(\varphi_{n}(t,s):=(\kappa^{\prime}_{n}(t,s)-\kappa(t,s))/\kappa(t,s)\to 0\) as \(n\to\infty\), and
2. \(\lim_{n\to\infty}\frac{1}{n^{2}}\mathbb{E}\left[\sum_{v=1}^{n}\sum_{w=1}^{n}\kappa(T_{v},T_{w})\right]=\lim_{n\to\infty}\frac{1}{n^{2}}\mathbb{E}\left[\sum_{v=1}^{n}\sum_{w\neq v}\kappa^{\prime}_{n}(T_{v},T_{w})\right]=\sum_{t=1}^{\infty}\sum_{s=1}^{\infty}\kappa(t,s)q_{t}q_{s}<\infty\).

Part I.First note from the definition of \(\kappa^{\prime}_{n}\) that for fixed \(t,s\in\mathbb{N}\) we have

\[-\frac{Cn^{\alpha}}{\kappa(t,s)\sqrt{q_{t}q_{s}}n}\leq\varphi_{n}(t,s)\leq\frac{Cn^{\alpha}}{\kappa(t,s)\sqrt{q_{t}q_{s}}n}. \tag{28}\]

Note that for fixed \(t,s\in\mathbb{N}\) the values of \(q_{t},q_{s}\) and \(\kappa(t,s)\) are deterministic, so the lower and upper bounds are deterministic as well. We only have a problem whenever \(\kappa(t,s)\), \(q_{t}\) or \(q_{s}\) is zero. But, if \(q_{t}\) or \(q_{s}\) is zero, then the vertex-type cannot exist, meaning we can remove it from the model.
Alternatively, when \(\kappa(t,s)=0\), we can simply take \(\varphi_{n}(t,s)=0\), since \(\varphi_{n}\) is part of a multiplicative factor in \(\mathtt{IRD}_{n}\) that is multiplied by \(\kappa\). In other words, if \(\kappa(t,s)=0\), the value of \(\varphi_{n}(t,s)\) does not matter. Finally, since \(\alpha<1/2\), these bounds both converge to zero, letting us conclude that \(\varphi_{n}(t,s)\to 0\) almost surely (in \(\mathbb{P}_{n}\)).

Part II - Rightmost sum.We first show that the last sum is finite. For this, we substitute the definition of \(\kappa\) and compute the result to show convergence:

\[\sum_{t=1}^{\infty}\sum_{s=1}^{\infty}\kappa(t,s)q_{t}q_{s}=\sum_{t=1}^{\infty}\sum_{s=1}^{\infty}\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\frac{\mu p_{ij}I(t,i)J(s,j)q_{t}q_{s}}{\lambda_{i}\varrho_{j}},\]
\[=\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\sum_{t=1}^{\infty}\sum_{s=1}^{\infty}\frac{\mu p_{ij}I(t,i)J(s,j)q_{t}q_{s}}{\lambda_{i}\varrho_{j}},\]
\[=\mu\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\lambda_{i}^{-1}\varrho_{j}^{-1}p_{ij}\sum_{t=1}^{\infty}I(t,i)q_{t}\sum_{s=1}^{\infty}J(s,j)q_{s},\]
\[=\mu\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\lambda_{i}^{-1}\varrho_{j}^{-1}p_{ij}\lambda_{i}\varrho_{j},\]
\[=\mu\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}p_{ij}=\mu<\infty.\]

Note that in the second line we swapped the order of summation. This is possible by Tonelli's theorem, since all terms in the sum are non-negative.

Part II - Leftmost limit.We will show that the leftmost limit in II equals \(\mu\) too. Using the fact that \((T_{v})_{v\geq 1}\) is an i.i.d. sequence, together with the law of total expectation, yields

\[\frac{1}{n^{2}}\mathbb{E}\left[\sum_{v=1}^{n}\sum_{w=1}^{n}\kappa(T_{v},T_{w})\right]=\frac{n(n-1)}{n^{2}}\cdot\mathbb{E}[\kappa(T_{1},T_{2})]+\frac{1}{n}\cdot\mathbb{E}[\kappa(T_{1},T_{1})],\]
\[=\frac{n(n-1)}{n^{2}}\cdot\sum_{t=1}^{\infty}\sum_{s=1}^{\infty}\kappa(t,s)q_{t}q_{s}+\frac{1}{n}\cdot\sum_{t=1}^{\infty}\kappa(t,t)q_{t},\]
\[=\mu\cdot\frac{n(n-1)}{n^{2}}+\frac{1}{n}\cdot\sum_{t=1}^{\infty}\kappa(t,t)q_{t}.\]

Now, Lemma 4.9 shows that \(\kappa\) is bounded. Hence, for some \(\widehat{C}>0\) we have

\[0\leq\frac{1}{n}\cdot\sum_{t=1}^{\infty}\kappa(t,t)q_{t}\leq\frac{\widehat{C}}{n}\sum_{t=1}^{\infty}q_{t}=\frac{\widehat{C}}{n}\to 0.\]

Thus, we indeed have that

\[\lim_{n\to\infty}\frac{1}{n^{2}}\mathbb{E}\left[\sum_{v=1}^{n}\sum_{w=1}^{n}\kappa(T_{v},T_{w})\right]=\lim_{n\to\infty}\left[\mu\cdot\frac{n(n-1)}{n^{2}}+\frac{1}{n}\cdot\sum_{t=1}^{\infty}\kappa(t,t)q_{t}\right]=\mu+0=\mu.\]

Part II - Middle limit.We show that the middle limit equals \(\mu\) too. We split up the expectation into a sum over the kernel. Using the same considerations as above, and writing \(\kappa^{\prime}_{n}=\kappa\cdot(1+\varphi_{n})\), we find

\[\frac{1}{n^{2}}\mathbb{E}\left[\sum_{v=1}^{n}\sum_{w\neq v}\kappa^{\prime}_{n}(T_{v},T_{w})\right]=\mu\cdot\frac{n^{2}-n}{n^{2}}+\mathbb{E}[\kappa(T_{1},T_{2})\varphi_{n}(T_{1},T_{2})]\cdot\frac{n^{2}-n}{n^{2}}.\]

We will now bound this expected value by using the law of total expectation and (28):

\[-Cn^{\alpha-1}\sum_{t=1}^{\infty}\sqrt{q_{t}}\sum_{s=1}^{\infty}\sqrt{q_{s}}\leq\mathbb{E}[\kappa(T_{1},T_{2})\varphi_{n}(T_{1},T_{2})]\leq\sum_{t=1}^{\infty}\sum_{s=1}^{\infty}\frac{Cn^{\alpha}q_{t}q_{s}}{\sqrt{q_{t}q_{s}}n}=Cn^{\alpha-1}\sum_{t=1}^{\infty}\sqrt{q_{t}}\sum_{s=1}^{\infty}\sqrt{q_{s}}. \tag{29}\]

Now, because we have assumed that \(\mathbb{E}[T^{1+\varepsilon}]<\infty\), we have for \(t\) large that \(q_{t}<t^{-2-\varepsilon}\).
Hence, \(\sqrt{q_{t}}<t^{-1-\varepsilon/2}\), meaning the sums in (29) are finite. Thus, for some \(\widetilde{C}>0\) we have that \[-\widetilde{C}n^{\alpha-1/2}\leq\mathbb{E}[\varphi_{n}(T_{1},T_{2})]\leq \widetilde{C}n^{\alpha-1/2}.\] Since \(\alpha<1/2\), both these terms converge to zero. All together, this shows that \[\lim_{n\to\infty}\frac{1}{n^{2}}\mathbb{E}\left[\sum_{v=1}^{n}\sum_{w\neq v} \kappa^{\prime}_{n}(T_{v},T_{w})\right]=\lim_{n\to\infty}\left[\mu\cdot\frac{n^ {2}-n}{n^{2}}+\mathbb{E}[\varphi_{n}(T_{v},T_{w})]\cdot\frac{n^{2}-n}{n^{2}} \right]=\mu\cdot 1+0\cdot 1=\mu.\] **Q.E.D.** Proof of Proposition 3.11.: Denote by \(\texttt{CCI}^{-}_{n,\mu}\) the version of \(\texttt{CCI}_{n,\mu}\) after removing all arcs from or to an unstable vertex. In this model, let \(\mathcal{C}^{-}_{(i)}\) denote the \(i\)-th largest strongly connected component. Finally, let \(A^{\uparrow}_{n}\) denote the number of unstable arcs. We split up the proof in the following steps. 1. We show that with high probability for some constant \(p\in[0,1)\) we have \(A^{\uparrow}_{n}\leq n^{p}\). 2. We show that \[\frac{1}{n}\left|\bigcup_{i=1}^{n^{p}}\mathcal{C}^{-}_{(i)}\right|\to\alpha,\] in probability. 3. We use the above two points to show that \(|\mathcal{C}_{\max}|/n\to\alpha\) in probability as well. Step I.Recall that the total number of arcs in \(\texttt{CCI}_{n,\mu}\) is \(\lfloor\mu n\rfloor\). Thus, we can write (cf. Definition 4.8) \[A^{\uparrow}_{n}=\lfloor\mu n\rfloor-\sum_{t=1}^{u^{\uparrow}_{n}}\sum_{s=1} ^{u^{\uparrow}_{n}}\bar{A}_{n}(t,s).\] Then, using (19) we can bound with high probability \[A^{\uparrow}_{n}\leq\mu n-\sum_{t=1}^{u^{\uparrow}_{n}}\sum_{s=1}^{u^{ \uparrow}_{n}}\left[\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor-n^{1/2+\alpha}\sqrt{ q_{t}q_{s}}\right]\leq\mu n-\sum_{t=1}^{u^{\uparrow}_{n}}\sum_{s=1}^{u^{ \uparrow}_{n}}\left[\kappa(t,s)q_{t}q_{s}n-1-n^{1/2+\alpha}\sqrt{q_{t}q_{s}} \right].\] Since \(\mathbb{E}[T^{1+\varepsilon}]<\infty\), we know that the sum over \(\sqrt{q_{t}}\)-terms converges. Hence, when we compute all the negative sums, we find that there exists a constant \(C>0\) such that \[A^{\uparrow}_{n}\leq\mu n-\sum_{t=1}^{u^{\uparrow}_{n}}\sum_{s=1}^{u^{ \uparrow}_{n}}\kappa(t,s)q_{t}q_{s}n+Cn^{1/2+\alpha}+(u^{\uparrow}_{n})^{2}.\] Using Lemma 4.2 can now conclude that \[A^{\uparrow}_{n}\leq\mu n-\sum_{t=1}^{u^{\uparrow}_{n}}\sum_{s=1}^{u^{ \uparrow}_{n}}\kappa(t,s)q_{t}q_{s}n+Cn^{1/2+\alpha}+n^{1-\tau}.\] We note that the remaining double sum (when summing over _all_ vertex-types) adds up to \(\mu n\). Thus, we can bound the first two terms in the current upper-bound on \(A^{\uparrow}_{n}\) to find \[A^{\uparrow}_{n}\leq\sum_{t=1}^{\infty}\sum_{s=u^{\uparrow}_{n}+1}^{\infty} \kappa(t,s)q_{t}q_{s}n+\sum_{t=u^{\uparrow}_{n}+1}^{\infty}\sum_{s=1}^{\infty }\kappa(t,s)q_{t}q_{s}n+Cn^{1/2+\alpha}+n^{1-\tau}.\] By applying Lemma 4.9 and the fact that the \(q_{t}\)-terms are probabilities, there exists a constant \(\kappa^{+}>0\) such that \[A^{\uparrow}_{n}\leq 2\kappa^{+}n\mathbb{P}(T>u^{\uparrow}_{n})+Cn^{1/2+ \alpha}+n^{1-\tau}.\] When we finally apply Lemma 4.11 we find that there exists an overarching constant \(\widehat{C}>0\) such that \[A^{\uparrow}_{n}\leq\widehat{C}\left(n^{1+\frac{(\tau-1)\varepsilon}{(2+ \varepsilon)}}+n^{1/2+\alpha}+n^{1-\tau}\right).\] Because \(\tau\in(0,1)\) and \(\alpha<1/2\), the result follows. 
Step II.Let \(\delta>0\) be an arbitrary constant and set \[S_{n}:=\left|\bigcup_{i=1}^{n^{p}}\mathcal{C}_{(i)}^{-}\right|.\] Consider the event \(\mathcal{Q}_{n}^{-}(\delta):=\{S_{n}>n\delta\}\). Note, if we were to add an extra arc to a graph \(G\), then it will either not change the sizes of its strongly connected components, or merge two strongly connected components into one. In both cases, the ordered list of strong connected component sizes will change such that the size of the \(i\)-th largest strongly connected component before the added edge is smaller than or equal to the size of the \(i\)-largest connected component after adding the edge. Thus, we may conclude that \(\mathcal{Q}_{n}^{-}(\delta)\) is increasing. Secondly, if we look at pairs of stable vertex-types in \(\mathtt{CCI}_{n,\mu}^{-}\), then we note that all concentration lemmas (like e.g. Lemma 4.17) are still true, since probabilities of arcs being assigned to these vertex-types do not change. Arcs are just thrown away if they happen to be assigned to unstable vertex-types. Thus, we can use the result of Theorem 3.7 to this slightly adapted model as well. Of course, Assumption 3.6 is trivially satisfied for this model. Thirdly, suppose \(\kappa_{n}^{\prime}\) is a function that adheres to (9). Denote by \(|\mathcal{C}_{(i)}^{\mathtt{IRD}}|\) the \(i\)-th largest strongly connected component in \(\mathtt{IRD}_{n}(T,\kappa_{n}^{\prime})\), and by \(S_{n}^{\mathtt{IRD}}\) its corresponding version of \(S_{n}\). Then, due to Proposition 3.9 we can apply Theorem 3.9 in [5] to conclude that \[|\mathcal{C}_{(1)}^{\mathtt{TRD}}|/n\to\sum_{x=1}^{\infty}\pi_{x}^{+}\pi_{x}^ {-}q_{x}=:\alpha, \tag{30}\] in probability, where \(q_{x}=\mathbb{P}(T=x)\) and \(\pi_{x}^{\pm}\) are defined through (10) and (11). Moreover, by applying Theorem 3.11 in [5] we may conclude that \(|\mathcal{C}_{(2)}^{\mathtt{IRD}}|\leq\log(n)^{2}\) with high probability, because otherwise it would be part of the giant. Thus, we may conclude that \[\left|\bigcup_{i=2}^{n^{p}}\mathcal{C}_{(i)}^{\mathtt{IRD}}\right|\leq n^{p} \log(n)^{2},\] with high probability. This means that indeed \(S_{n}^{\mathtt{IRD}}/n\to\alpha\) in probability too. Combining these three points, using Theorem 3.7, we can conclude that \(\mathbb{P}(S_{n}\leq\delta n)\to 0\) when \(\delta<\alpha\) and \(\mathbb{P}(S_{n}\leq\delta n)\to 1\) when \(\delta\geq\alpha\). Together, this means that \(S_{n}/n\to\alpha\) in distribution, allowing us to conclude that \(S_{n}/n\to\alpha\) in probability. Moreover, with the same argument we can also conclude from (30) that \(|\mathcal{C}_{(1)}^{-}|/n\to\alpha\) in probability. Step III.We note from Step I that at most \(n^{p}\) extra edges get added connecting to at least one unstable vertex in \(\mathtt{CCI}_{n,\mu}\) with high probability. Each of these arcs can do one of two things: 1. Add all unstable vertices to the largest strongly connected component in \(\mathtt{CCI}_{n,\mu}\). 2. (Indirectly) connect \(\mathcal{C}_{(i)}\) for some \(i>1\) to the largest strongly connected component in \(\mathtt{CCI}_{n,\mu}\). If we denote by \(N_{n}^{\uparrow}\) the number of unstable vertices, then the above two points show that with high probability we have that \(|\mathcal{C}_{(1)}^{-}|\leq|\mathcal{C}_{\max}|\leq S_{n}+N_{n}^{\uparrow}\). Now, we will show that \(N_{n}^{\uparrow}\) is sub-linear. 
For this, we recall from Lemma 4.11 and assumption 3.5 that \[\mathbb{P}(T>u_{n}^{\uparrow}(\tau))\leq n^{\frac{(\tau-1)\varepsilon}{2+ \varepsilon}}.\] Thus, we have that \[N_{n}^{\uparrow}\preceq\mathtt{Bin}\left(n,n^{\frac{(\tau-1)\varepsilon}{2+ \varepsilon}}\right).\] By Chebyshev's inequality it holds that with high probability \[N_{n}^{\uparrow}\leq n^{1+\frac{(\tau-1)\varepsilon}{2+\varepsilon}}+n^{3/4+ \frac{(\tau-1)\varepsilon}{4+2\varepsilon}}=o(n).\] Thus, we can conclude that \(N_{n}^{\uparrow}/n\to 0\) in probability. Together with the results of step II we now have that \(|\mathcal{C}_{(1)}^{-}|/n\to\alpha\) and \((S_{n}+N_{n}^{\uparrow})/n\to\alpha\) in probability, implying that also \(|\mathcal{C}_{\max}|/n\to\alpha\) in probability. **Q.E.D.** ### Proofs of lemmas for Theorem 2.5 Proof of Lemma 4.2.: Since \(\mathbb{E}[T^{\delta}]<\infty\) we have that \(\sum_{t=1}^{\infty}t^{\delta}q_{t}<\infty\). In particular, this means (for \(t\) large) that \(q_{t}\leq t^{-1-\delta}\). Moreover, since \(q_{t}\to 0\) as \(t\to\infty\) we must have that \(u_{n}^{\uparrow}\to\infty\) as \(n\to\infty\). Thus, we know (for \(n\) large) that \[u_{n}^{\uparrow}(\tau)\leq\widehat{u}_{n}^{\uparrow}(\tau):=\inf\{t:s^{-1- \delta}<n^{-1+\tau}\text{ for all }s\geq t\}.\] We can now calculate the value of \(\widehat{u}_{n}^{\uparrow}(\tau)\) to find the desired result. We have for all \(t\) that \[t^{-1-\delta}<n^{-1+\tau}\iff n^{(1-\tau)/(1+\alpha)}<t.\] Hence, we can conclude that \[u_{n}^{\uparrow}(\tau)\leq\widehat{u}_{n}^{\uparrow}(\tau)=\left\lceil n^{(1- \tau)/(1+\delta)}\right\rceil.\] Proof of Lemma 4.4.: Fix a pair \(t,s\in\mathcal{S}\). First note we can rewrite the event we are interested in as follows: \[\neg\mathcal{V}_{ts}=\{|N_{t}-q_{t}n|>\log(n)\sqrt{q_{t}n}\}\cup\{|N_{s}-q_{s }n|>\log(n)\sqrt{q_{s}n}\}.\] By applying the union bound, we can then bound \[\mathbb{P}(\neg\mathcal{V}_{ts})\leq\mathbb{P}(|N_{t}-q_{t}n|>\log(n)\sqrt{q _{t}n})+\mathbb{P}(|N_{s}-q_{s}n|>\log(n)\sqrt{q_{s}n}).\] We will now show the result for the first probability, the argument for the second probability will be analogous. We write \[\mathbb{P}(|N_{t}-q_{t}n|>\log(n)\sqrt{q_{t}n})=\mathbb{P}(N_{t}>q_{t}n+\log( n)\sqrt{q_{t}n})+\mathbb{P}(N_{t}<q_{t}n-\log(n)\sqrt{q_{t}n}). \tag{31}\] By noting that \(N_{t}\sim\texttt{Bin}(n,q_{t})\) we can apply the Chernoff bound (see [15] Theorem 2.21) on both probabilities to find \[\mathbb{P}(|N_{t}-q_{t}n|>\log(n)\sqrt{q_{t}n})\leq 2\exp(-\log(n)^{2}/2).\] Proof of Lemma 4.5.: Fix two stable vertex types \(t,s\in\mathcal{S}\) and a \(\kappa_{n}^{\prime}(t,s)\). We define \[\kappa_{n}^{+}(t,s)=\kappa(t,s)+Cn^{-1/2+\alpha}/\sqrt{q_{t}q_{s}}.\] Note that \(\kappa_{n}^{+}(t,s)\leq\kappa_{n}^{\prime}(t,s)\). Hence, by recalling Definition 4.1, we have the following stochastic bound. \[A_{n}(t,s)\succeq A_{n}^{+}(t,s)\sim\texttt{Bin}(N_{t}N_{s},\kappa_{n}^{+}(t, s)/n).\] From this stochastic bound we may conclude that \[\mathbb{P}(A_{n}(t,s)<\Lambda_{n}(t,s))\leq\mathbb{P}(A_{n}^{+}(t,s)<\Lambda_ {n}(t,s)).\] We will now show that the desired bound holds for \(A_{n}^{+}(t,s)\). We do this in the following steps: 1. We intersect the event \(\{A_{n}^{+}(t,s)<\Lambda_{n}(t,s)\}\) with \(\mathcal{V}_{ts}\) and use the law of total probability to transform the mixed-binomial probability into several binomial ones where \(\mathcal{V}_{ts}\) is satisfied. 2. We show that on these binomial probabilities the Chernoff bound may be applied. 3. 
We apply the Chernoff bound to achieve an upper-bound, and we show that this upper-bound converges to zero with the rate we require. Step I.Intersecting with \(\mathcal{V}_{ts}\) yields \[\mathbb{P}(A_{n}^{+}(t,s)<\Lambda_{n}(t,s))\leq\mathbb{P}(\{A_{n}^{+}(t,s)< \Lambda_{n}(t,s)\}\cap\mathcal{V}_{ts})+\mathbb{P}(\neg\mathcal{V}_{ts}).\] Applying Lemma 4.4 shows that \[\mathbb{P}(A_{n}^{+}(t,s)<\Lambda_{n}(t,s))\leq\mathbb{P}(\{A_{n}^{+}(t,s)< \Lambda_{n}(t,s)\}\cap\mathcal{V}_{ts})+2\exp(-\log(n)^{2}/2).\] Thus, the lemma is true when the leftover probability converges to zero with a faster rate than \(\exp(-\log(n)^{2}/2)\). We will now apply the law of total probability on this leftover probability, conditioning on the value of \(N_{t}\) and \(N_{s}\), to find \[\mathbb{P}(\{A_{n}^{+}(t,s)<\Lambda_{n}(t,s)\}\cap\mathcal{V}_{ts})=\mathbb{E }[\mathbb{P}(\{A_{n}^{+}(t,s)<\Lambda_{n}(t,s)\}\cap\mathcal{V}_{ts}\mid N_{t},N_{s})]. \tag{32}\] For the remainder of the proof we will focus on \(\mathbb{P}(\{A_{n}^{+}(t,s)<\Lambda_{n}(t,s)\}\cap\mathcal{V}_{ts}\mid N_{t},N_{s})\). Note this probability is zero when \(N_{t}\) and \(N_{s}\) are such that \(\mathcal{V}_{ts}\) does not hold. If \(\mathcal{V}_{ts}\) does hold we can remove the condition, and rewrite the probability as \[\mathbb{P}(A_{n}^{+}(t,s)-\mathbb{E}[A_{n}^{+}(t,s)\mid N_{t},N_{s}]<\Lambda_{n }(t,s)-\mathbb{E}[A_{n}^{+}(t,s)\mid N_{t},N_{s}]\mid N_{t},N_{s}).\] Since under the conditioning \(A_{n}^{+}(t,s)\) is a binomial random variable, we know that \(\mathbb{E}[A_{n}^{+}(t,s)\mid N_{t},N_{s}]=\kappa_{n}^{+}(t,s)N_{t}N_{s}/n\). Thus, by setting \(\theta(t,s)=\Lambda_{n}(t,s)-\kappa_{n}^{+}(t,s)N_{t}N_{s}/n\) we find that the probability we seek to control equals \[\mathbb{P}(A_{n}^{+}(t,s)-\mathbb{E}[A_{n}^{+}(t,s)\mid N_{t},N_{s}]<\theta(t,s)\mid N_{t},N_{s}). \tag{33}\] Step II.To apply the Chernoff bound (i.e., Theorem 2.21 in [15]) we need to show that \(\theta(t,s)\) is negative. First we bound \[\theta(t,s)=\lfloor nq_{t}q_{s}\kappa(t,s)\rfloor-\frac{\kappa_{n}^{+}(t,s)N_ {t}N_{s}}{n}\leq nq_{t}q_{s}\kappa(t,s)-\frac{\kappa(t,s)N_{t}N_{s}}{n}-\frac {Cn^{-1/2+\alpha}N_{t}N_{s}}{n\sqrt{q_{t}q_{s}}}.\] Recall that we only consider settings in which \(\mathcal{V}_{ts}\) is satisfied. Thus, we have that \(N_{t}\geq q_{t}n-\log(n)\sqrt{q_{t}n}\) and \(N_{s}\geq q_{s}n-\log(n)\sqrt{q_{s}n}\). Using these facts on the already existing upper-bound of \(\theta(t,s)\) yields the larger upper-bound \[\kappa(t,s)q_{t}q_{s}n-\frac{\kappa(t,s)(q_{t}n-\log(n)\sqrt{q_{t }n})(q_{s}n-\log(n)\sqrt{q_{s}n})}{n}-\frac{C(q_{t}n-\log(n)\sqrt{q_{t}n})(q_{ s}n-\log(n)\sqrt{q_{s}n})}{n^{3/2-\alpha}\sqrt{q_{t}q_{s}}},\] \[\leq\kappa(t,s)\log(n)n^{1/2}(\sqrt{q_{t}}+\sqrt{q_{s}})\sqrt{q_ {t}q_{s}}-Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}+C\log(n)n^{\alpha}(\sqrt{q_{t}}+ \sqrt{q_{s}}),\] \[=\sqrt{q_{t}q_{s}}\left(\kappa(t,s)\log(n)n^{1/2}(\sqrt{q_{t}}+ \sqrt{q_{s}})-Cn^{1/2+\alpha}+C\log(n)n^{\alpha}\cdot\frac{\sqrt{q_{t}}+\sqrt{ q_{s}}}{\sqrt{q_{t}q_{s}}}\right). \tag{34}\] In the final inequality we have removed all additional negative terms, and noted that the term \(\kappa(t,s)q_{t}q_{s}n\) cancels out. We now deal with the two remaining positive terms and show they are dominated by the negative term. We first deal with the last positive term. Since we have assumed \(t,s\leq u_{n}^{\uparrow}(\tau)\) we have that \(q_{t},q_{s}\geq n^{-1+\tau}\). 
Thus, we have that \[\frac{\sqrt{q_{t}}+\sqrt{q_{s}}}{\sqrt{q_{t}q_{s}}}\leq q_{t}^{-1/2}+q_{s}^{ -1/2}\leq 2n^{1/2-\tau/2}. \tag{35}\] If we consider the first positive term, we note from Assumption 2.3 that it implies \(\kappa(t,s)\leq 1/\sqrt{q_{t}q_{s}}\) for \(t,s\to\infty\). Hence, for \(n\) large we have through (35) that \[\kappa(t,s)(\sqrt{q_{t}}+\sqrt{q_{s}})\leq\frac{\sqrt{q_{t}}+\sqrt{q_{s}}}{ \sqrt{q_{t}q_{s}}}\leq 2n^{1/2-\tau/2}.\] Substituting this, together with (35), back into (34) yields (for \(n\to\infty\)) \[\theta(t,s)\leq\sqrt{q_{t}q_{s}}\left(\log(n)n^{1-\tau/2}-Cn^{1/2+\alpha}+C \log(n)n^{\alpha+1/2-\tau/2}\right).\] Recall from Assumption 2.3 that \(1/2-\tau/2<\alpha<1/2\). Thus, in particular we have that \(1/2+\alpha>1-\tau/2\), meaning it is the dominant term. In conclusion, we will have for some \(\widehat{C}>0\) that \[\theta(t,s)\leq-\widehat{C}n^{1/2+\alpha}\sqrt{q_{t}q_{s}}.\] This is negative, and hence the Chernoff bound can be applied. Step III.With the result from Step II we may now apply the Chernoff bound on (33) to find that \[\mathbb{P}(A_{n}^{+}(t,s)<\Lambda_{n}(t,s)\mid N_{t},N_{s})\leq\exp\left(- \frac{\widehat{C}^{2}n^{1+2\alpha}q_{t}q_{s}}{2\mathbb{E}[A_{n}^{+}(t,s)\mid N_ {t},N_{s}]}\right). \tag{36}\] We now need to find a useful upper-bound on \(\mathbb{E}[A_{n}^{+}(t,s)\mid N_{t},N_{s}]\) in order to show (36) converges to zero. To do this, we first recall that \(\mathcal{V}_{ts}\) is satisfied, and hence that we can use a bound similar to the one applied in Step II. This approach shows \(\mathbb{E}[A_{n}^{+}(t,s)\mid N_{t},N_{s}]\) can be upper-bounded by \[\frac{\kappa(t,s)(q_{t}n+\log(n)\sqrt{q_{t}n})(q_{s}n+\log(n)\sqrt{q_{s}n})}{ n}+\frac{C(q_{t}n+\log(n)\sqrt{q_{t}n})(q_{s}n+\log(n)\sqrt{q_{s}n})}{n^{3/2- \alpha}\sqrt{q_{t}q_{s}}}.\] Since \(t,s\leq u_{n}^{\uparrow}\), note that \(q_{t}n\geq\log(n)\sqrt{q_{t}n}\) and \(q_{s}n\geq\log(n)\sqrt{q_{s}n}\). Recall also that \(\kappa(t,s)\leq 1/\sqrt{q_{t}q_{s}}\) if Assumption 2.3 is satisfied. Using these facts, the previously derived upper-bound becomes \[\mathbb{E}[A_{n}^{+}(t,s)\mid N_{t},N_{s}]\leq 4\sqrt{q_{t},q_{s}}+4Cn^{1/2+ \alpha}\sqrt{q_{t}q_{s}}\leq 5Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}.\] We will now substitute this upper-bound into (36) and show it still converges to zero. In this computation we will use the fact that for the stable vertex-type \(t\) (at tolerance \(\tau\)) it is true that \(q_{t}\geq n^{-1+\tau}\). We find \[\mathbb{P}(A_{n}^{+}(t,s)<\Lambda_{n}(t,s)\mid N_{t},N_{s})\leq\exp\left(- \frac{\widehat{C}^{2}n^{1/2+\alpha}\sqrt{q_{t}q_{s}}}{10}\right)\leq\exp\left( -\frac{\widehat{C}^{2}n^{-1/2+\alpha+\tau}}{10}\right).\] By recalling from Assumption 2.3 that \(\alpha>1/2-\tau\), we find that there exists a number \(\nu>0\) such that \[\mathbb{P}(A_{n}^{+}(t,s)<\Lambda_{n}(t,s)\mid N_{t},N_{s})\leq\exp\left(-n^{ \nu}\right).\] Substituting this back into (32) and noting that this bound is uniform in \(N_{t}\) and \(N_{s}\) yields \[\mathbb{P}(\{A_{n}^{+}(t,s)<\Lambda_{n}(t,s)\}\cap\mathcal{V}_{t,s})=\exp \left(-n^{\nu}\right).\] Hence, for the original target probability from Step I we find \[\mathbb{P}(A_{n}^{+}(t,s)<\Lambda_{n}(t,s))\leq\exp\left(-n^{\nu}\right)+2 \exp(-\log(n)^{2}/2)\leq 2\exp(-\log(n)^{2}/2).\] Hence, indeed we find that the statement is true. **Q.E.D.** Proof of Lemma 4.6.: This proof is similar to the proof of Lemma 4.5, hence we will not provide the same amount of detail as we did in its proof. Instead, we will mainly focus on the differences. 
For two stable \(t,s\in\mathcal{S}\) we define \[\kappa_{n}^{-}(t,s):=(\kappa(t,s)-Cn^{-1/2+\alpha}/\sqrt{q_{t}q_{s}})\wedge 0.\] By recalling Definition 4.1 we then we have the stochastic bound \[A_{n}(t,s)\preceq A_{n}^{-}(t,s)\sim\texttt{Bin}(N_{t}N_{s},\kappa_{n}^{-}(t,s)/n),\] allowing us to conclude that \[\mathbb{P}(A_{n}(t,s)>\Lambda_{n}(t,s))\leq\mathbb{P}(A_{n}^{-}(t,s)>\Lambda_ {n}(t,s)).\] We will now apply same three steps as in the proof of Lemma 4.5. Step I.Intersecting with \(\mathcal{V}_{ts}\), applying Lemma 4.4 and applying the law of total probability yields \[\mathbb{P}(A_{n}^{-}(t,s)>\Lambda_{n}(t,s))\leq\mathbb{E}[\mathbb{P}(\{A_{n}^{ -}(t,s)>\Lambda_{n}(t,s)\}\cap\mathcal{V}_{ts}\mid N_{t},N_{s})]+2\exp(-\log( n)^{2}/2). \tag{37}\] We will now focus on \(\mathbb{P}(\{A_{n}^{-}(t,s)>\Lambda_{n}(t,s)\}\cap\mathcal{V}_{ts}\mid N_{t},N_{s})\), and note due to the intersection that we can assume \(\mathcal{V}_{ts}\) to be satisfied. Hence, similar to (33) we can write this probability as \[\mathbb{P}(A_{n}^{-}(t,s)-\mathbb{E}[A_{n}^{-}(t,s)\mid N_{t},N_{s}]>\theta(t,s)\mid N_{t},N_{s}),\] where \(\theta(t,s)=\Lambda_{n}(t,s)-\mathbb{E}[A_{n}^{-}(t,s)\mid N_{t},N_{s}]\). To apply the Chernoff bound (i.e., Theorem 2.21 in [15]), we need to show this parameter is positive. Step II.We first bound \(\theta(t,s)\) as \[\theta(t,s)=\lfloor nq_{t}q_{s}\kappa(t,s)\rfloor-\frac{\kappa^{-}(t,s)N_{t}N _{s}}{n}\geq nq_{t}q_{s}\kappa(t,s)-\frac{\kappa(t,s)N_{t},N_{s}}{n}+\frac{CN_ {t}N_{s}}{n^{3/2-\alpha}\sqrt{q_{t}q_{s}}}-1.\] Now, since \(\kappa_{n}^{-}(t,s)>0\) and since we can assume \(\mathcal{V}_{ts}\) to be satisfied, we note that we can create a further lower-bound by using \(N_{t}\leq q_{t}n+\log(n)\sqrt{q_{t}n}\) and \(N_{s}\leq q_{s}n+\log(n)\sqrt{q_{s}n}\). Substituting these bounds, and simplifying yields \[\theta(t,s)\geq\sqrt{q_{t}q_{s}}\left(-\kappa(t,s)\log(n)n^{1/2}(\sqrt{q_{t}} +\sqrt{q_{s}})-\log(n)^{2}+Cn^{1/2+\alpha}-\frac{1}{\sqrt{q_{t}q_{s}}}\right).\] Here, we have removed additional positive terms. We now want the sole positive term to dominate. We have already covered domination over the first negative term in the proof of Lemma 4.5. The argument is given after (35). We also trivially see that the positive term dominates the second negative term (the logarithm). To see domination over the third negative term, we use the fact that \(t\) and \(s\) are stable at tolerance \(\tau\) to conclude \(1/\sqrt{q_{t}q_{s}}\leq n^{1-\tau}\). By recalling from Assumption 2.3 that \(\alpha>1/2-\tau/2\) we find that \(n^{1/2+\alpha}>n^{1-\tau/2}\), which dominates \(n^{1-\tau}\). Hence, indeed we see there exists a constant \(\widehat{C}>0\) such that \[\theta(t,s)\geq\widehat{C}n^{1/2+\alpha}\sqrt{q_{t}q_{s}}.\] This is the positivity we required. Step III.Applying the Chernoff bound shows \[\mathbb{P}(A_{n}^{-}(t,s)>\Lambda_{n}(t,s)\mid N_{t},N_{s})\leq\exp\left(-\frac{ \widehat{C}^{2}n^{1+2\alpha}q_{i}q_{j}}{2\mathbb{E}[A_{n}^{-}(t,s)\mid N_{t},N_ {s}]}\right).\] We will now upper-bound the expectation inside the exponential function. Recall the definition of \(A_{n}^{+}(t,s)\) in the proof of Lemma 4.5 and note that \[\mathbb{E}[A_{n}^{-}(t,s)\mid N_{t},N_{s}]=\frac{\kappa_{n}^{-}(t,s)N_{t}N_{s} }{n}\leq\frac{\kappa_{n}^{+}(t,s)N_{t}N_{s}}{n}=\mathbb{E}[A_{n}^{+}(t,s)\mid N _{t},N_{s}].\] Substituting this into the Chernoff bound we just obtained unveils the bound in (36). 
From here, repeating the arguments of Step III in the proof of Lemma 4.5 shows there is a \(\nu>0\) such that \[\mathbb{P}(A_{n}^{-}(t,s)>\Lambda_{n}(t,s)\mid N_{t},N_{s})\leq\exp\left(-n^{ \nu}\right).\] Substituting this back into (37) yields the desired result. Proof of Lemma 4.7.: We will only show the increasing case, since the proof for the decreasing case is similar. Suppose \(\mathcal{Q}_{n}\) is increasing. We couple \(\mathtt{ARD}_{n}(T,\Lambda_{n})\) and \(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\) using the following procedure: 1. Generate the types of each vertex. 2. First, for each pair of types \(t,s\in\mathcal{S}\) choose \(\Lambda_{n}(t,s)\) vertex pairs where the first vertex has type \(t\) and the second \(s\) uniformly at random from all possible pairs without replacement. This is the realisation of \(\mathtt{ARD}_{n}(T,\Lambda_{n})\). 3. Then, for each pair of types \(t,s\in\mathcal{S}\) choose \(\Lambda_{n}^{\prime}(t,s)-\Lambda_{n}(t,s)\geq 0\) of the remaining vertex pairs where the first has type \(t\) and the second \(s\) uniformly at random without replacement. This provides the realisation of \(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\). Since \(\Lambda_{n}^{\prime}(t,s)-\Lambda_{n}(t,s)\geq 0\) we can see that step 2 and 3 in the above procedure are equivalent to choosing \(\Lambda_{n}^{\prime}(t,s)\) pairs of vertices where the first has type \(t\) and the second \(s\) uniformly at random without replacement. This is what is needed in Step 2 and 3 of \(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\) (cf. Section 2.2). Under this coupling we also have that \(\mathtt{ARD}_{n}(T,\Lambda_{n})\subset\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\). Thus, by virtue of \(\mathcal{Q}_{n}\) being an increasing event, we have that \(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n}\) implies \(\mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n}\), letting us conclude that \(\mathbb{P}(\mathtt{ARD}_{n}(T,\Lambda_{n})\in\mathcal{Q}_{n})\leq\mathbb{P}( \mathtt{ARD}_{n}(T,\Lambda_{n}^{\prime})\in\mathcal{Q}_{n})\). **Q.E.D.** ### Proofs of lemmas for Theorem 3.7 Proof of Lemma 4.9.: Set \(\lambda^{\downarrow}:=\inf_{i}\{\lambda_{i}:\lambda_{i}>0\}\) and similarly \(g^{\downarrow}:=\inf_{j}\{\varrho_{j}:\varrho_{j}>0\}\). From Assumption 3.5 we have that \(\lambda^{\downarrow},\varrho^{\downarrow}>0\). Substituting these into (7) gives us the upper bound \[\kappa(t,s)\leq\frac{\mu}{\lambda^{\downarrow}\varrho^{\downarrow}}\sum_{i=1} ^{\infty}\sum_{j=1}^{\infty}p_{ij}I(t,i)J(s,j).\] Finally, by noting that \(I\) and \(J\) are indicators, and that \((p_{ij})_{ij}\) is a probability mass function, we find an upper-bound that is uniform in \(t\) and \(s\), proving the claim. \[\kappa(t,s)\leq\frac{\mu}{\lambda^{\downarrow}\varrho^{\downarrow}}\sum_{i=1} ^{\infty}\sum_{j=1}^{\infty}p_{ij}=\frac{\mu}{\lambda^{\downarrow}\varrho^{ \downarrow}}.\] **Q.E.D.** Proof of Lemma 4.11.: Fix an arbitrary number \(r\in(0,\delta)\). Since \(\mathbb{E}[T^{\delta}]<\infty\) we have that \(q_{t}<t^{-1-\delta}\) (for all \(t\) sufficiently large), implying that \[q_{t}^{(1+r)/(1+\delta)}<t^{-1-r}. \tag{38}\] We will now split up the desired probability as \[\mathbb{P}(T>u_{n}^{\uparrow}(\tau))=\sum_{t=u_{n}^{\downarrow}}^{\infty}q_{t }=\sum_{t=u_{n}^{\downarrow}}^{\infty}q_{t}^{(\delta-r)/(1+\delta)}q_{t}^{(1+r) /(1+\delta)}.\] From Definition 2.2 we have that \(q_{t}<n^{-1+\tau}\). 
Using this fact yields \[\mathbb{P}(T>u_{n}^{\uparrow}(\tau))=\sum_{t=u_{n}^{\uparrow}}^{\infty}q_{t}^{( \delta-r)/(1+\delta)}q_{t}^{(1+r)/(1+\delta)}\leq n^{(\tau-1)(\delta-r)/(1+ \delta)}\sum_{t=u_{n}^{\downarrow}}^{\infty}q_{t}^{(1+r)/(1+\delta)}.\] Using (38) in the remaining sum yields the desired result (by noticing that the leftover sum converges). \[\mathbb{P}(T>u_{n}^{\uparrow}(\tau))\leq n^{(\tau-1)(\delta-r)/(1+\delta)} \sum_{t=1}^{\infty}q_{t}^{(1+r)/(1+\delta)}\leq n^{(\tau-1)(\delta-r)/(1+ \delta)}\sum_{t=1}^{\infty}\frac{1}{t^{1+r}}=\widehat{C}_{r}\cdot n^{\frac{( \tau-1)(\delta-r)}{1+\delta}}.\] **Q.E.D.** Proof of Lemma 4.12.: Fix a constant \(\nu<\tau\). First, apply the union bound to find that \[\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}(\tau)}\left\{S_{t}>n^{1-\nu} \right\}\right)\leq\sum_{t=1}^{u_{n}^{\uparrow}}\mathbb{P}(S_{t}>n^{1-\nu}). \tag{39}\] Now, conditioning on \(\mathcal{V}_{tt}\) (cf. Definition 4.3) and applying Lemma 4.4 shows that \[\mathbb{P}(S_{t}>n^{1-\nu})\leq\mathbb{P}(\{S_{t}>n^{1-\nu}\}\cap\mathcal{V}_{ tt})+2\exp\left(-\log(n)^{2}/2\right). \tag{40}\] Now, we apply the law of total probability on the remaining probability condition on the number of vertices with type \(t\). We find \(\mathbb{P}(S_{t}>n^{1-\nu})=\mathbb{E}[\mathbb{P}(\{S_{t}>n^{1-\nu}\}\cap \mathcal{V}_{tt}\mid N_{t})]\). Using this, we note that there will be \(N_{t}^{2}\) pairs of vertices with type \(t\) between which an arc can be placed and \(N_{t}\) of these will create a self-loop. Thus, for each arc the probability that it creates a self-loop within type \(t\) is upper bounded by \(N_{t}/N_{t}^{2}=1/N_{t}\). Thus, we have that \(S_{t}\preceq\texttt{Bin}(\lfloor\mu n\rfloor,1/N_{t})\). Furthermore, note that \(\mathcal{V}_{tt}\) either happens or not based on the value of \(N_{t}\). Specifically, it stipulates that \(N_{t}\geq q_{t}n-\log(n)\sqrt{q_{t}n}\). This means we can further stochastically bound \[S_{t}\preceq\texttt{Bin}\left(\lfloor\mu n\rfloor,\frac{1}{q_{t}n-\log(n) \sqrt{q_{t}n}}\right)=:B_{t}.\] Altogether, these arguments show that \[\mathbb{E}[\mathbb{P}(\{S_{t}>n^{1-\nu}\}\cap\mathcal{V}_{tt}\mid N_{t})] \leq\mathbb{P}(B_{t}>n^{1-\nu})\leq\mathbb{P}(|B_{t}-\mathbb{E}[B_{t}]|>n^{1- \nu}-\mathbb{E}[B_{t}]).\] We now seek to apply the Chernoff bound on this binomial probability. For this, we show that \(n^{1-\nu}-\mathbb{E}[B_{t}]>0\). A straightforward calculation first yields \[n^{1-\nu}-\mathbb{E}[B_{t}]=n^{1-\nu}-\frac{\lfloor\mu n\rfloor}{q_{t}-\log(n )\sqrt{q_{t}n}}\geq n^{1-\nu}-\frac{\mu n}{q_{t}-\log(n)\sqrt{q_{t}n}}.\] Now, note that \(t\leq u_{n}^{\uparrow}\) which implies that \(q_{t}\geq n^{-1+\tau}\), resulting in the observation that for \(n\) large \(q_{t}n>2\log(n)\sqrt{q_{t}n}\). Thus, this shows us that \[n^{1-\nu}-\mathbb{E}[B_{t}]\geq n^{1-\nu}-\frac{2\mu}{q_{t}}\geq n^{1-\nu}-2 \mu n^{1-\tau}.\] Now, since we have chosen \(\nu<\tau\) we indeed find that \(n^{1-\nu}-\mathbb{E}[B_{t}]>0\). 
Moreover, if we apply the Chernoff bound, then the choice \(\nu<\tau\) even ensures there exists a constant \(\widehat{C}>0\) such that \[\mathbb{P}(|B_{t}-\mathbb{E}[B_{t}]|>n^{1-\nu}-\mathbb{E}[B_{t}])\leq\exp \left(-\widehat{C}n^{1-\nu}\right).\] Substituting this together with (40) into (39) yields \[\mathbb{P}(S_{t}>n^{1-\nu})\leq u_{n}^{\uparrow}\exp\left(-\widehat{C}n^{1- \nu}\right)+2u_{n}^{\uparrow}\exp\left(-\log(n)^{2}/2\right).\] By noting from Lemma 4.2 that \(u_{n}^{\uparrow}\) is polynomial, we see that both terms in this sum decay super-polynomially to zero, verifying the statement of the lemma. **Q.E.D.** Proof of Lemma 4.13.: The proof is very similar to the proof of Lemma 4.12, so we will mainly highlight differences. Fix a number \(\nu<2\tau-1\). We first apply the union bound and intersect with \(\mathcal{V}_{ts}\) (cf. Definition 2.2) to find through Lemma 4.4 for any large number \(p\) that \[\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}}\bigcup_{s=1}^{u_{n}^{ \uparrow}}\{M_{ts}>n^{1-\nu}\}\right)\leq\sum_{t=1}^{u_{n}^{\uparrow}}\sum_{s =1}^{u_{n}^{\uparrow}}\mathbb{P}(\{M_{ts}>n^{1-\nu}\}\cap\mathcal{V}_{ts})+2(u _{n}^{\uparrow})^{2}\exp\left(-\log(n)^{2}/2\right). \tag{41}\] We now use the law of total probability to condition on the outcomes of both \(N_{t}\) and \(N_{s}\). If we know these values, then in our model the worst-case scenario for preventing multi-arcs would be the situation that \(\mu n-1\) arcs have already been placed between unique vertex-pairs with type \(t\) and \(s\), respectively. So, the probability for a new arc to form a multi-arc from a vertex with type \(t\) to a vertex with type \(s\) would be bounded by \(\frac{\mu n}{N_{t}N_{s}}\). Once again \(\mathcal{V}_{ts}\) stipulates that \(N_{t}\geq q_{t}n-\log(n)\sqrt{q_{t}n}\) and \(N_{s}\geq q_{s}n-\log(n)\sqrt{q_{s}n}\). Thus, together with the previously derived multi-arc probability upper-bound, we may conclude that \[M_{ts}\preceq\texttt{Bin}\left(\,\lfloor\mu n\rfloor,\frac{\mu n}{N_{t}N_{s}} \right)\preceq\texttt{Bin}\left(\,\lfloor\mu n\rfloor,\frac{\mu n}{(q_{t}n- \log(n)\sqrt{q_{t}n})(q_{s}n-\log(n)\sqrt{q_{s}n})}\right)=:B_{ts}.\] Hence, we find that \[\mathbb{E}[\mathbb{P}(\{M_{ts}>n^{1-\nu}\}\cap\mathcal{V}_{ts})\mid N_{t},N_{ s}]\leq\mathbb{P}(\left|B_{ts}-\mathbb{E}[B_{ts}]\right|>n^{1-\nu}-\mathbb{E}[B_{ ts}]).\] We now show that \(B_{ts}-\mathbb{E}[B_{ts}]>0\). Recall from the proof of Lemma 4.12 that \(q_{t}n\geq\sqrt{q_{t}n}\) and \(q_{s}n\geq\sqrt{q_{s}n}\) when \(t,s\leq u_{n}^{\uparrow}\). Moreover, we also have that \(q_{t},q_{s}\geq n^{-1+\tau}\). Hence, we may conclude for \(n\) large that \[n^{1-\nu}-\mathbb{E}[B_{ts}]\geq n^{1-\nu}-\frac{\mu^{2}n^{2}}{(q_{t}n-\log(n )\sqrt{q_{t}n})(q_{s}n-\log(n)\sqrt{q_{s}n})}\geq n^{1-\nu}-\frac{2\mu}{q_{t} q_{s}}\geq n^{1-\nu}-2\mu n^{1-(2\tau-1)}.\] Now, since we have assumed that \(\nu<2\tau-1\) we indeed find that \(n^{1-\nu}-\mathbb{E}[B_{ts}]>0\). 
When we apply the Chernoff bound, the inequality \(\nu<2\tau-1\) specifically shows that there exists a \(\widehat{C}>0\) such that \[\mathbb{P}(\left|B_{ts}-\mathbb{E}[B_{ts}]\right|>n^{1-\nu}-\mathbb{E}[B_{ts}] )\leq\exp\left(-\widehat{C}n^{1-\nu}\right).\] Substituting this back into (41) shows \[\mathbb{P}\left(\bigcup_{t=1}^{u_{n}^{\uparrow}}\bigcup_{s=1}^{u_{n}^{ \uparrow}}\{M_{ts}>n^{1-\nu}\}\right)\leq(u_{n}^{\uparrow})^{2}\exp\left(- \widehat{C}n^{1-\nu}\right)+2(u_{n}^{\uparrow})^{2}\exp\left(-\log(n)^{2}/2 \right).\] Like at the end of the proof of Lemma 4.12, the statement is true due to the fact that \(u_{n}^{\uparrow}\) is polynomial in size (cf. Lemma 4.2). **Q.E.D**, Proof of Lemma 4.15.: Let \(C_{a}\) denote the colour assigned to arc \(a\in[\,\lfloor\mu n\rfloor\), and denote by \((N_{k})_{k\in\mathcal{S}}\) the sequence that records for each vertex-type the amount of vertices of said type. Finally, set a dummy upper-bound (cf. Definition 2.2) \[M_{n}:=u_{n}^{\uparrow}\left(1-\frac{2+\varepsilon}{2(1+\varepsilon)}\right).\] This dummy value serves to ensure that \(N_{k}\) concentrates for as many vertex-types simultaneously as possible. Note from step 3-5 in the generation algorithm for \(\texttt{CCI}_{n,\mu}\) that \[\mathbb{P}\left(\mathcal{A}_{ts}^{(a)}\ \left|\ \bigcap_{k=1}^{M_{n}}\bigcap_{k^{ \prime}=1}^{M_{n}}\mathcal{V}_{kk^{\prime}},\mathcal{V}_{ts},C_{a}=(i,j),(N_{k })_{k\in\mathcal{S}}\right)=\frac{N_{t}N_{s}I(t,i)J(s,j)}{\left(\sum_{k\in \mathcal{S}}N_{k}I(k,i)\right)\left(\sum_{k^{\prime}\in\mathcal{S}}N_{k^{ \prime}}J(k^{\prime},j)\right)}, \tag{42}\] Where \(\mathcal{V}_{ts}\) is the event from Definition 4.3. Now, since the intersection of the \(\mathcal{V}_{kk^{\prime}}\) events and \(\mathcal{V}_{ts}\) occurs, we have for all types \(k\leq M_{n}\) and \(k\in\{t,s\}\) that \[nq_{k}-\log(n)\sqrt{nq_{k}}\leq N_{k}\leq nq_{k}+\log(n)\sqrt{nq_{k}}. \tag{43}\] In essence, the rest of the proof consists of using (43) to find bounds on (42) independent of \((N_{k})_{k\in\mathcal{S}}\), and then use the law of total probability on these bounds to achieve the desired result. More precisely, we shall derive the following upper and lower bound: \[\frac{N_{t}N_{s}I(t,i)J(s,j)}{\left(\sum_{k\in\mathcal{S}}N_{k}I(k,i)\right) \left(\sum_{k^{\prime}\in\mathcal{S}}N_{k^{\prime}}J(k^{\prime},j)\right)} \leq\frac{q_{t}q_{s}I(i,t)J(j,s)}{\lambda_{i}\varrho_{j}}+\hat{C}^{\uparrow} \log(n)n^{-\frac{\varepsilon}{2}}, \tag{44}\] and \[\frac{N_{t}N_{s}I(t,i)J(s,j)}{\left(\sum_{k\in\mathcal{S}}N_{k}I(k,i)\right)\left( \sum_{k^{\prime}\in\mathcal{S}}N_{k^{\prime}}J(k^{\prime},j)\right)}\geq\frac{q _{t}q_{s}I(i,t)J(j,s)}{\lambda_{i}\varrho_{j}}-\tilde{C}^{\downarrow}\log(n)n^{ -\frac{\kappa}{2}}, \tag{45}\] for some constants \(\tilde{C}^{\uparrow}\) and \(\tilde{C}^{\downarrow}\). We first finish the proof using these bounds. We use the law of total probability to write \[\mathbb{P}(\mathcal{A}^{(a)}_{ts})=\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}p_{ij }\mathbb{P}(\mathcal{A}^{(a)}_{ts}\mid C_{a}=(i,j)). 
\tag{46}\] Now, we condition on the intersection of \(\mathcal{V}_{kk^{\prime}}\) events and \(\mathcal{V}_{ts}\) to find that \[\mathbb{P}(\mathcal{A}^{(a)}_{ts}\mid C_{a}=(i,j)) =\mathbb{P}\left(\mathcal{A}^{(a)}_{ts}\mid C_{a}=(i,j),\bigcap_{ k=1}^{M_{n}}\bigcap_{k^{\prime}=1}^{M_{n}}\mathcal{V}_{kk^{\prime}},\mathcal{V}_{ ts}\right)\mathbb{P}\left(\bigcap_{k=1}^{M_{n}}\bigcap_{k^{\prime}=1}^{M_{n}} \mathcal{V}_{kk^{\prime}},\mathcal{V}_{ts}\ \Bigg{|}\ C_{a}=(i,j)\right),\] \[+\mathbb{P}\left(\mathcal{A}^{(a)}_{ts}\ \Bigg{|}\ C_{a}=(i,j), \bigcup_{k=1}^{M_{n}}\bigcup_{k^{\prime}=1}^{M_{n}}\neg\mathcal{V}_{kk^{ \prime}}\cup\neg\mathcal{V}_{ts}\right)\mathbb{P}\left(\bigcup_{k=1}^{M_{n}} \bigcup_{k^{\prime}=1}^{M_{n}}\neg\mathcal{V}_{kk^{\prime}}\cup\neg\mathcal{ V}_{ts}\ \Bigg{|}\ C_{a}=(i,j)\right).\] By noting from Step 1 and 2 from the \(\mathtt{CCI}_{n,\mu}\) generation algorithm that all \(\mathcal{V}_{kk^{\prime}}\) and \(C_{a}\) are independent this reduces into \[\mathbb{P}(\mathcal{A}^{(a)}_{ts}\mid C_{a}=(i,j)) =\mathbb{P}\left(\mathcal{A}^{(a)}_{ts}\ \Bigg{|}\ C_{a}=(i,j), \bigcap_{k=1}^{M_{n}}\bigcap_{k^{\prime}=1}^{M_{n}}\mathcal{V}_{kk^{\prime}},\mathcal{V}_{ts}\right)\mathbb{P}\left(\bigcap_{k=1}^{M_{n}}\bigcap_{k^{ \prime}=1}^{M_{n}}\mathcal{V}_{kk^{\prime}},\mathcal{V}_{ts}\right), \tag{47}\] \[+\mathbb{P}\left(\mathcal{A}^{(a)}_{ts}\ \Bigg{|}\ C_{a}=(i,j), \bigcup_{k=1}^{M_{n}}\bigcup_{k^{\prime}=1}^{M_{n}}\neg\mathcal{V}_{kk^{ \prime}}\cup\neg\mathcal{V}_{ts}\right)\mathbb{P}\left(\bigcup_{k=1}^{M_{n}} \bigcup_{k^{\prime}=1}^{M_{n}}\neg\mathcal{V}_{kk^{\prime}}\cup\neg\mathcal{ V}_{ts}\right).\] Now we use \((N_{k})_{k\in\mathcal{S}}\)-independent bounds (44) and (45) to create upper- and lower-bounds of (47). Specifically, it shows that \[\mathbb{P}(\mathcal{A}^{(a)}_{ts}\mid C_{a}=(i,j)) \leq\frac{q_{t}q_{s}I(i,t)J(j,s)}{\lambda_{i}\varrho_{j}}+\tilde{ C}^{\uparrow}\log(n)n^{-\frac{\kappa}{2}}+\mathbb{P}\left(\bigcup_{k=1}^{M_{n}} \bigcup_{k^{\prime}=1}^{M_{n}}\neg\mathcal{V}_{kk^{\prime}}\cup\neg\mathcal{ V}_{ts}\right),\ \text{and}\] \[\mathbb{P}(\mathcal{A}^{(a)}_{ts}\mid C_{a}=(i,j)) \geq\frac{q_{t}q_{s}I(i,t)J(j,s)}{\lambda_{i}\varrho_{j}}-\tilde{ C}^{\downarrow}\log(n)n^{-\frac{\kappa}{2}}-\mathbb{P}\left(\bigcup_{k=1}^{M_{n}} \bigcup_{k^{\prime}=1}^{M_{n}}\neg\mathcal{V}_{kk^{\prime}}\cup\neg\mathcal{ V}_{ts}\right).\] We use the union bound on the \(\mathcal{V}_{kk^{\prime}}\)-terms and use Lemma 4.4 to create further upper- and lower-bounds given by \[\mathbb{P}(\mathcal{A}^{(a)}_{ts}\mid C_{a}=(i,j)) \leq\frac{q_{t}q_{s}I(i,t)J(j,s)}{\lambda_{i}\varrho_{j}}+ \tilde{C}^{\uparrow}\log(n)n^{-\frac{\kappa}{2}}+2(M_{n}^{2}+1)\exp\left(-\log( n)^{2}/2\right),\ \text{and}\] \[\mathbb{P}(\mathcal{A}^{(a)}_{ts}\mid C_{a}=(i,j)) \geq\frac{q_{t}q_{s}I(i,t)J(j,s)}{\lambda_{i}\varrho_{j}}- \tilde{C}^{\downarrow}\log(n)n^{-\frac{\kappa}{2}}-2(M_{n}^{2}+1)\exp\left(-\log( n)^{2}/2\right).\] Using Lemma 4.2 to conclude that \(M_{n}\leq n\), we find that the final terms are not dominant. Substituting these bounds back into (46) yields \[\frac{q_{t}q_{s}\kappa(t,s)}{\mu}-\widehat{C}\log(n)n^{-\frac{\kappa}{2}}\leq \mathbb{P}(\mathcal{A}^{(a)}_{ts})\leq\frac{q_{t}q_{s}\kappa(t,s)}{\mu}+\widehat {C}\log(n)n^{-\frac{\kappa}{2}},\] for some \(\widehat{C}>0\). Together, these two bounds indeed show that \[\left|\mathbb{P}(\mathcal{A}^{(a)}_{ts})-\frac{q_{t}q_{s}\kappa(t,s)}{\mu} \right|\leq\widehat{C}\log(n)n^{-\frac{\kappa}{2}},\] which proves the main result. 
What is left now is to establish the upper and lower bound (44) and (45). The upper bound (44).Define the indicator \[\hat{I}_{k}:=\mathbb{1}\left\{q_{k}n-\log(n)\sqrt{nq_{k}}\geq 0\right\}.\] Substituting the upper-bound of (43) in the numerator of (42), and substituting the lower bound of (43) in the denominator of (42) yields the upper-bound of (42) given by \[\frac{(q_{t}n+\log(n)\sqrt{q_{t}n})(q_{s}n+\log(n)\sqrt{q_{s}n})I(i,t)J(j,s)}{ \left(\sum_{k\in\mathcal{S}}[q_{k}n-\log(n)\sqrt{q_{k}n}]I(i,k)\hat{I}_{k} \right)\left(\sum_{k^{\prime}\in\mathcal{S}}[q_{k^{\prime}}n-\log(n)\sqrt{q_{ k^{\prime}}n}]J(j,k^{\prime})\hat{I}_{k^{\prime}}\right)}.\] Here, the inclusion of the indicators \(\hat{I}_{k}\) is possible in the denominator, since we know \(N_{k}\geq 0\) for all \(k\in\mathcal{S}\). Now, we will extend this upper bound by only considering the first \(M_{n}\) terms in the sum. Note for these terms that \(\hat{I}_{k}=1\) (cf. Definition 2.2). Thus, we find the further upper-bound \[\frac{(q_{t}n+\log(n)\sqrt{q_{t}n})(q_{s}n+\log(n)\sqrt{q_{s}n})I(i,t)J(j,s)}{ \left(\sum_{k=1}^{M_{n}}[q_{k}n-\log(n)\sqrt{q_{k}n}]I(i,k)\right)\left(\sum_ {k^{\prime}=1}^{M_{n}}[q_{k^{\prime}}n-\log(n)\sqrt{q_{k^{\prime}}n}]J(j,k^{ \prime})\right)}.\] Next, we expand the products in the upper-bound and remove the additional positive term from the denominator to derive a further upper-bound. It is given by \[\frac{(q_{t}q_{s}n^{2}+(\sqrt{q_{t}}+\sqrt{q_{s}})n\log(n)\sqrt{q_{t}q_{s}n}+ n\log(n)^{2}\sqrt{q_{t}q_{s}})I(i,t)J(j,s)}{\sum_{k=1}^{M_{n}}\sum_{k^{\prime}=1}^{M_ {n}}q_{k}q_{k^{\prime}}n^{2}I(i,k)J(j,k^{\prime})-\sum_{k=1}^{M_{n}}\sum_{k^{ \prime}=1}^{M_{n}}n\log(n)\sqrt{q_{k^{\prime}}q_{k^{\prime}}n}(\sqrt{q_{k}}+ \sqrt{q_{k^{\prime}}})}.\] Now, we further bound the negative term in the denominator to attain an error sum that only depends on \(\sqrt{q_{k}q_{k^{\prime}}}\). We find \[\frac{(q_{t}q_{s}n^{2}+(\sqrt{q_{t}}+\sqrt{q_{s}})n\log(n)\sqrt{q_{t}q_{s}n}+ n\log(n)^{2}\sqrt{q_{t}q_{s}})I(i,t)J(j,s)}{\sum_{k=1}^{M_{n}}\sum_{k^{ \prime}=1}^{M_{n}}q_{k}q_{k^{\prime}}n^{2}I(i,k)J(j,k^{\prime})-n\log(n)\sqrt{ n}\sum_{k=1}^{M_{n}}\sqrt{q_{k}}\sum_{k^{\prime}=1}^{M_{n}}\sqrt{q_{k^{\prime}}}}. \tag{48}\] Since \(\mathbb{E}[T^{1+\varepsilon}]<\infty\) for some \(\varepsilon>0\) (cf. Assumption 3.5), it holds that \(q_{k}\leq 1/k^{2+\varepsilon}\) for \(k\) large. Thus, we have that \(\sum_{k=1}^{\infty}\sqrt{q_{k}}<\infty\). Using this in (48) we find there exists a constant \(\tilde{C}^{\prime}>0\) such that it is upper-bounded by \[\frac{(q_{t}q_{s}n^{2}+(\sqrt{q_{t}}+\sqrt{q_{s}})n\log(n)\sqrt{q_{t}q_{s}n}+ n\log(n)^{2}\sqrt{q_{t}q_{s}})I(i,t)J(j,s)}{\sum_{k=1}^{M_{n}}\sum_{k^{ \prime}=1}^{M_{n}}q_{k}q_{k^{\prime}}n^{2}I(i,k)J(j,k^{\prime})-\widehat{C}^{ \prime}\cdot n\log(n)\sqrt{n}}. \tag{49}\] Note that the double sum in (49) is close to \(\lambda_{i}\varrho_{j}\) (cf. (6a) and (6b)). We will now make these parameters visible in the denominator by adding and subtracting the remainders of the sum. We find for any \(r\in(0,1+\varepsilon)\) (cf. 
Assumption 3.5) and some corresponding \(\widehat{C}_{r}>0\) that \[\sum_{k=1}^{M_{n}}\sum_{k^{\prime}=1}^{M_{n}}q_{k}q_{k^{\prime}}n ^{2}I(i,k)J(j,k^{\prime}) \geq n^{2}\lambda_{i}\varrho_{j}-\sum_{k=1}^{\infty}\sum_{k^{\prime }=M_{n}+1}^{\infty}q_{k}q_{k^{\prime}}n^{2}-\sum_{k=u_{k}^{\pm}+1}^{\infty} \sum_{k^{\prime}=1}^{\infty}q_{k}q_{k^{\prime}}n^{2},\] \[\geq n^{2}\lambda_{i}\varrho_{j}-2n^{2}\mathbb{P}(T>M_{n}),\] \[\geq n^{2}\left(\lambda_{i}\varrho_{j}-2\widehat{C}_{r}\cdot n^{ -\frac{1}{2}+\frac{r}{2(1+\varepsilon)}}\right).\] We used Lemma 4.11 in the final line of this string of inequalities. Substituting this back into (49) yields \[\frac{(q_{t}q_{s}n^{2}+(\sqrt{q_{t}}+\sqrt{q_{s}})n\log(n)\sqrt{q_{t}q_{s}n}+ n\log(n)^{2}\sqrt{q_{t}q_{s}})I(i,t)J(j,s)}{n^{2}\left(\lambda_{i}\varrho_{j}-2 \widehat{C}_{r}\cdot n^{-\frac{1}{2}+\frac{r}{2(1+\varepsilon)}}-\widehat{C} ^{\prime}\cdot\log(n)n^{-\frac{1}{2}}\right)}.\] Next, we divide everything through by \(n^{2}\) and extract the factor \(q_{t}q_{s}I(i,t)J(j,s)/(\lambda_{i}\varrho_{j})\) to find the following upper-bound \[\frac{q_{t}q_{s}I(i,t)J(j,s)}{\lambda_{i}\varrho_{j}}\cdot\frac{1+(\sqrt{q_{t}} +\sqrt{q_{s}})n^{-1/2}\log(n)/\sqrt{q_{t}q_{s}}+n^{-1}\log(n)^{2}/\sqrt{q_{t}q_{ s}}}{1-\widehat{C}_{1}\cdot n^{-\frac{1}{2}+\frac{r}{2(1+\varepsilon)}}-\widehat{C}_{2} \cdot\log(n)n^{-\frac{1}{2}}}. \tag{50}\] Here, \(\widehat{C}_{1}=2\widehat{C}_{r}/(\lambda^{\downarrow}\varrho^{\downarrow})\) and \(\widehat{C}_{2}=\widehat{C}^{\prime}/(\lambda^{\downarrow}\varrho^{\downarrow})\) with \(\lambda^{\downarrow}:=\inf_{i}\{\lambda_{i}:\lambda_{i}>0\}\) and \(\varrho^{\downarrow}:=\inf_{j}\{\varrho_{j}:\varrho_{j}>0\}\). We recall that \(\lambda^{\downarrow},\varrho^{\downarrow}>0\) due to Assumption 3.5. By noting that all the \(n\)-dependent terms in the denominator (and numerator, since \(\tau>0\)) of (50) converge to zero, we can use the Taylor expansion of this error-fraction to conclude there exists a constant \(\tilde{C}>0\) such that it is bounded by \[1+\tilde{C}\left(\frac{\log(n)}{\sqrt{n}}\left(\frac{1}{\sqrt{q_{t}}}+\frac{1}{ \sqrt{q_{s}}}\right)+\frac{\log(n)^{2}}{n\sqrt{q_{t}q_{s}}}+n^{-\frac{1}{2}+ \frac{r}{2(1+\varepsilon)}}+\log(n)n^{-\frac{1}{2}}\right).\] Now, recall that \(t,s\leq u_{n}^{\uparrow}(\tau)\) and hence that \(q_{t},q_{s}\geq n^{-1+\tau}\). Thus to remove \(t\) and \(s\) dependence we further bound this expression by \[1+\tilde{C}\left(2\log(n)n^{-\frac{\tau}{2}}+\log(n)^{2}n^{-\tau}+n^{-\frac{1}{ 2}+\frac{\tau}{2(1+s)}}+\log(n)n^{-\frac{1}{2}}\right).\] From this expression we note that only the first term can be dominant if we choose \(\tau\) sufficiently close to zero. Thus, we find that there exists a constant \(\tilde{C}^{\uparrow}>0\) such that (49) is bounded from above by \[\frac{q_{t}q_{s}I(i,t)J(j,s)}{\lambda_{i}\varrho_{j}}+\tilde{C}^{\uparrow}\log (n)n^{-\frac{\tau}{2}},\] which proves (44). The lower bound (45).The proof is similar to the upper bound. The biggest difference is the way we bound the denominator in (42) using (43). For this we first note \(\sum_{k=1}^{\infty}N_{k}=n\). 
Thus, we can write \[\sum_{k=M_{n}+1}^{\infty}N_{k}I(i,k)\leq\sum_{k=M_{n}+1}^{\infty}N_{k}=n-\sum _{k=1}^{M_{n}}N_{k}=\sum_{k=1}^{\infty}q_{k}n-\sum_{k=1}^{M_{n}}N_{k}.\] Now, using the lower bound in (43) we can conclude that \[\sum_{k=M_{n}+1}^{\infty}N_{k}I(i,k) \leq\sum_{k=1}^{\infty}q_{k}n-\sum_{k=1}^{M_{n}}\left[q_{k}n- \log(n)\sqrt{q_{k}n}\right]=\sum_{k=1}^{\infty}q_{k}n+\log(n)\sqrt{n}\sum_{k=1 }^{n}\sqrt{q_{k}}-\sum_{k=1}^{M_{n}}q_{k}n,\] \[=\sum_{k=M_{n}+1}^{\infty}q_{k}n+\log(n)\sqrt{n}\sum_{k=1}^{n} \sqrt{q_{k}}=\mathbb{P}(T>M_{n})n+\widehat{C}^{\prime}\cdot\log(n)\sqrt{n}.\] Similarly, using the upper bound in (43) we have that \[\sum_{k=1}^{M_{n}}N_{k}I(i,k)\leq\sum_{k=1}^{M_{n}}q_{k}I(i,k)n+\widehat{C}^{ \prime}\cdot\log(n)\sqrt{n}.\] Now, using these bounds in the denominator of (42) and using the lower-bound of (43) yields the following lower-bound \[\frac{(q_{t}n-\log(n)\sqrt{n}q_{t})(q_{s}n-\log(n)\sqrt{n}q_{s})I(i,t)J(j,s)} {\left(\sum_{k=1}^{M_{n}}q_{k}nI(i,k)+2\widehat{C}^{\prime}\log(n)\sqrt{n}+n \mathbb{P}(T>M_{n})\right)\left(\sum_{k^{\prime}=1}^{M_{n}}q_{k^{\prime}}nJ(i,k^{\prime})+2\widehat{C}^{\prime}\log(n)\sqrt{n}+n\mathbb{P}(T>M_{n})\right)}.\] We expand the factors in this bound, remove the additional positive terms from the numerator, and only keep the dominant terms in the expansion of the denominator. This yields a lower-bound of (42) similar to (49) given by \[\frac{(q_{t}q_{s}n^{2}-(\sqrt{q_{t}}+\sqrt{q_{s}})n\log(n)\sqrt{n}q_{t}q_{s}) I(i,t)J(j,s)}{\sum_{k=1}^{M_{n}}\sum_{k^{\prime}=1}^{M_{n}}q_{k}q_{k^{\prime}}n^{2 }I(i,k)J(j,k^{\prime})+\widehat{C}^{\prime}\left(n\log(n)\sqrt{n}+n^{2} \mathbb{P}(T>M_{n}))\right)},\] where \(\widehat{C}^{\prime}>0\) is some constant. Now, we create a further lower bound by running the sum in the denominator up to infinity (revealing \(\lambda_{i}\varrho_{j}\)), and applying Lemma 4.11 for the remainder two terms. We find \[\frac{(q_{t}q_{s}n^{2}-(q_{t}+q_{s})n\log(n)\sqrt{n})I(i,t)J(j,s)}{n^{2}\left( \lambda_{i}\varrho_{j}+\widehat{C}_{r}\cdot n^{-\frac{1}{2}+\frac{\tau}{2(1+s )}}+\widehat{C}n^{-\frac{1}{2}}\log(n)\right)}.\] Now, we extract \(q_{t}q_{s}I(i,t)J(j,s)/(\lambda_{i}\varrho_{j})\) from the fraction to find a lower-bound similar to (50) given by \[\frac{q_{t}q_{s}I(i,t)J(j,s)}{\lambda_{i}\varrho_{j}}\cdot\frac{1-(\sqrt{q_{t }}+\sqrt{q_{s}})n^{-1/2}\log(n)/\sqrt{q_{t}q_{s}}}{1+\widehat{C}_{1}\cdot n^{ -\frac{1}{2}+\frac{\tau}{2(1+s)}}+\widehat{C}_{2}\cdot\log(n)n^{-\frac{1}{2} }},\] where \(\widehat{C}_{1},\widehat{C}_{2}>0\). Repeating the same arguments as in for the upper bound finally yields the desired lower-bound \[\frac{q_{t}q_{s}I(i,t)J(j,s)}{\lambda_{i}\varrho_{j}}-\tilde{C}^{\downarrow} \log(n)n^{-\frac{\tau}{2}},\] for some \(\tilde{C}^{\downarrow}>0\) and any \(r\in(0,1+\varepsilon)\). **Q.E.D.** Proof of Lemma 4.16.: The approach is similar to the proof of Lemma 4.15. However, this time we only focus on an upper-bound. First, we set \(M_{n}:=u_{n}^{\uparrow}(\nu)\) for some \(\nu\) close to zero, and write similar to (42) that \[\mathbb{P}\left(\mathcal{A}_{ts}^{(a)}\ \Bigg{|}\ \bigcap_{k=1}^{M_{n}}\bigcap_{k^{ \prime}=1}^{M_{n}}\mathcal{V}_{kk^{\prime}},\mathcal{V}_{ts},C_{a}=(i,j),(N_{k })_{k\in\mathcal{S}}\right)=\frac{N_{t}N_{s}I(t,i)J(s,j)}{\left(\sum_{k\in \mathcal{S}}N_{k}I(k,i)\right)\left(\sum_{k^{\prime}\in\mathcal{S}}N_{k^{ \prime}}J(k^{\prime},j)\right)}. 
\tag{51}\] Now, since both the intersection and \(\mathcal{V}_{ts}\) occurs, we have that \[nq_{k}-\log(n)\sqrt{q_{k}n}\leq N_{k}\leq nq_{k}+\log(n)\sqrt{q_{k}n}\ \ \text{for}\ k\leq M_{n}\ \text{and}\ k\in\{t,s\}. \tag{52}\] The rest of the proof consists of using (52) to find an upper-bound of (51) independent of \((N_{k})_{k\in\mathcal{S}}\), and then use the law of total probability on these bounds to achieve the desired result. We will proceed in the following steps: 1. Derive a desirable upper-bound on (51). 2. Using the law of total probability on the bounds to find the desired result. Step I.Repeat the argumentation for the upper bound in the proof of Lemma 4.15 until (50). We find the upper-bound \[\frac{q_{t}q_{s}I(i,t)J(j,s)}{\lambda_{i}\varrho_{j}}\cdot\frac{1+(q_{t}^{-1/2 }+q_{s}^{-1/2})n^{-1/2}\log(n)+q_{t}^{-1/2}q_{s}^{-1/2}n^{-1}\log(n)^{2}}{1- \widehat{C}_{1}\cdot n^{\frac{(\nu-1)(1+\varepsilon-\nu)}{2+\varepsilon}}- \widehat{C}_{2}\cdot\log(n)n^{-\frac{1}{2}}}.\] Note we cannot repeat the Taylor expansion argument from the proof of Lemma 4.15 here, since the two \(n\)-dependent terms in the numerator might diverge as \(n\to\infty\), due to the instability of either \(t\) or \(s\). Thus, we slightly rewrite this expression to make the Taylor expansion argument viable again. \[\frac{\sqrt{q_{t}q_{s}}I(i,t)J(j,s)}{\lambda_{i}\varrho_{j}}\cdot\frac{\sqrt{q _{t}q_{s}}+(\sqrt{q_{t}}+\sqrt{q_{s}})n^{-1/2}\log(n)+n^{-1}\log(n)^{2}}{1- \widehat{C}_{1}\cdot n^{\frac{(\nu-1)(1+\varepsilon-\nu)}{2+\varepsilon}}- \widehat{C}_{2}\cdot\log(n)n^{-\frac{1}{2}}}.\] Now we continue repeating the rest of the arguments for the upper bound in the proof of Lemma 4.15, but keep the error-term multiplicative. This yields for some \(\tilde{C}>0\) and any \(r\in(0,1+\varepsilon)\) that \[\frac{\sqrt{q_{t}q_{s}}I(i,t)J(j,s)}{\lambda_{i}\varrho_{j}}\left(\sqrt{q_{t}q _{s}}+\tilde{C}\left(\frac{(\sqrt{q_{t}}+\sqrt{q_{s}})\log(n)}{\sqrt{n}}+ \frac{\log(n)^{2}}{n}+n^{\frac{(\nu-1)(1+\varepsilon-\nu)}{2+\varepsilon}}+ \frac{\log(n)}{\sqrt{n}}\right)\right).\] Due to Assumption 3.5 we can bound \(1/(\lambda_{i}\varrho_{j})\) by a constant. Hence, if we take \(\nu\) and \(r\) close enough to zero, then there exists a constant \(\widehat{C}>0\) for which we can bound (51) from above by \[\widehat{C}^{\prime}\sqrt{q_{t}q_{s}}\left(\sqrt{q_{t}q_{s}}+\frac{\log(n)}{ \sqrt{n}}\right). \tag{53}\] Step II.We use the law of total probability to write \[\mathbb{P}(\mathcal{A}_{ts}^{(a)})=\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}p_{ ij}\mathbb{P}(\mathcal{A}_{ts}^{(a)}\mid C_{a}=(i,j)). 
\tag{54}\] Now, we condition on the intersection of \(\mathcal{V}_{kk^{\prime}}\) events and \(\mathcal{V}_{ts}\) to bound \[\mathbb{P}(\mathcal{A}_{ts}^{(a)}\mid C_{a}=(i,j))\leq\mathbb{P}\left( \mathcal{A}_{ts}^{(a)}\ \Bigg{|}\ C_{a}=(i,j),\bigcap_{k=1}^{u_{n}^{\uparrow}}\bigcap_{k^{\prime}=1}^{u _{n}^{\uparrow}}\mathcal{V}_{kk^{\prime}},\mathcal{V}_{ts}\right)+\sum_{k=1}^ {u_{n}^{\uparrow}}\sum_{k^{\prime}=1}^{u_{n}^{\uparrow}}\mathbb{P}(\neg \mathcal{V}_{kk^{\prime}})+\mathbb{P}(\neg\mathcal{V}_{ts}).\] Using the \((N_{k})_{k\in\mathcal{S}}\)-independent bound (53) together with Lemma 4.4 shows there exists a constant \(\widehat{C}>0\) such that \[\mathbb{P}(\mathcal{A}_{ts}^{(a)}\mid C_{a}=(i,j))\leq\widehat{C}\sqrt{q_{t}q _{s}}\left(\sqrt{q_{t}q_{s}}+\frac{\log(n)}{\sqrt{n}}\right)+2((u_{n}^{ \uparrow})^{2}+1)\exp\left(-\log(n)^{2}/2\right).\] Substituting this back into (54) and computing the sum shows the desired result, since the second term in the upper-bound is super-polynomial (cf. Lemma 4.2). **Q.E.D.** Proof of Lemma 4.17.: Fix a constant \(\alpha>3/8\). We fix two auxiliary constants \(\tau_{1}\in(2/3,2\alpha)\) and \(\tau_{2}\in(1/2-\alpha,1/2)\) and set \(\zeta_{n}:=u_{n}^{\dagger}(\tau_{1})\) and \(\xi_{n}:=u_{n}^{\dagger}(\tau_{2})\). Using these two constants, we will apply the union bound and split up the resulting bound on the target probability as follows: \[\sum_{t=1}^{\zeta_{n}}\sum_{s=1}^{\zeta_{n}}\mathbb{P}\left(\left| \bar{A}_{n}(t,s)-\left|\kappa(t,s)q_{t}q_{s}n\right|\right|>Cn^{1/2+\alpha} \sqrt{q_{t}q_{s}}\right)\] \[+\sum_{t=\zeta_{n}}^{\xi_{n}}\sum_{s=1}^{\xi_{n}}\mathbb{P}\left( \left|\bar{A}_{n}(t,s)-\left|\kappa(t,s)q_{t}q_{s}n\right|\right|>Cn^{1/2+ \alpha}\sqrt{q_{t}q_{s}}\right)\] \[+\sum_{t=1}^{\xi_{n}}\sum_{s=\zeta_{n}}^{\xi_{n}}\mathbb{P}\left( \left|\bar{A}_{n}(t,s)-\left|\kappa(t,s)q_{t}q_{s}n\right|\right|>Cn^{1/2+ \alpha}\sqrt{q_{t}q_{s}}\right).\] Our goal is now to show that each of these sums converge to zero in order to prove the claim. This is what we will do in the rest of the proof. First double sum.In essence, this computation will consist of an application of the Chernoff bound together with Lemma 4.15. We will first stochastically bound \(\bar{A}_{n}(t,s)\) in terms of binomial distributions. To do this, we write the probability inside this sum as \[\mathbb{P}\left(\bar{A}_{n}(t,s)>Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}+\left|\kappa (t,s)q_{t}q_{s}n\right|\right)+\mathbb{P}\left(\bar{A}_{n}(t,s)<\left|\kappa( t,s)q_{t}q_{s}n\right|-Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right). \tag{55}\] Denote by \(\mathcal{A}_{ts}^{(a)}\) the event that arc \(a\in[\mu n]\) gets placed from a vertex with type \(t\) to a vertex with type \(s\). We note conditional on \((N_{k})_{k\in\mathcal{S}}\) that \[\bar{A}_{n}(t,s)\sim\mathtt{Bin}\left(\left|\mu n\right|,\mathbb{P}(\mathcal{ A}_{ts}^{(a)})\right).\] Thus, using Lemma 4.15 we can stochastically bound this from above and below for some \(\widehat{C}>0\) by \[\underbrace{\mathtt{Bin}\left(\left|\mu n\right|,\frac{q_{t}q_{s}\kappa(t,s) }{\mu}-\widehat{C}\log(n)n^{-\frac{\nu}{2}}\right)}_{B_{n}^{-}(t,s)}\preceq \widetilde{A}_{n}(t,s)\preceq\underbrace{\mathtt{Bin}\left(\left|\mu n\right|, \frac{q_{t}q_{s}\kappa(t,s)}{\mu}+\widehat{C}\log(n)n^{-\frac{\nu}{2}}\right)} _{B_{n}^{+}(t,s)},\] where we use \(B_{n}^{-}(t,s)\) and \(B_{n}^{+}(t,s)\) to denote the random variables on, respectively, the left and right hand side. 
Using this in (55) allows us to bound the terms in the first sum as \[\mathbb{P}\left(B_{n}^{+}(t,s)>Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}+\left|\kappa( t,s)q_{t}q_{s}n\right|\right)+\mathbb{P}\left(B_{n}^{-}(t,s)<\left|\kappa(t,s)q_{t}q_{s }n\right|-Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right). \tag{56}\] We now apply the Chernoff bound on both these probabilities. We will only work out the first of the two, since the argument for the second is analogous. Like for all the previous applications of the Chernoff bound, we first sequentially bound \(Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}+\left|\kappa(t,s)q_{t}q_{s}n\right|-\mathbb{ E}[B_{n}^{+}(t,s)]\) from below to show that it is positive. First, a direct calculation of this expression and using \(\left|\mu n\right|\leq\mu n\) shows that it is bounded by \[Cn^{\frac{1}{2}+\alpha}\sqrt{q_{t}q_{s}}+\left|\kappa(t,s)q_{t}q_{s}n\right|- \kappa(t,s)q_{t}q_{s}n-\mu\widehat{C}\log(n)n^{1-\frac{\tau_{1}}{2}}\geq Cn^{ \frac{1}{2}+\alpha}\sqrt{q_{t}q_{s}}-1-\mu\widehat{C}\log(n)n^{1-\frac{\tau_{ 1}}{2}}.\] Using the fact that \(t,s\leq\zeta_{n}\) allows us to further lower-bound this expression by \[Cn^{-\frac{1}{2}+\alpha+\tau_{1}}-1-\mu\widehat{C}\log(n)n^{1-\frac{\tau_{1}} {2}}.\] Note that by our condition on \(\alpha\) and \(\tau_{1}\) it holds that \(3\tau_{1}+2\alpha>3\), which implies that \(\tau_{1}+\alpha-1/2>1-\tau_{1}/2\). Therefore, the first term in the expression above dominates. Thus, indeed there exists a \(\widetilde{C}>0\) such that for large enough \(n\) \[Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}+\left|\kappa(t,s)q_{t}q_{s}n\right|-\mathbb{ E}[B_{n}^{+}(t,s)]\geq\widetilde{C}n^{\alpha+\tau_{1}-\frac{1}{2}}>0.\] We can now apply the Chernoff bound, which shows the following bound for some constant \(C^{-}>0\): \[\mathbb{P}\left(B_{n}^{+}(t,s)>Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}+\left|\kappa( t,s)q_{t}q_{s}n\right|\right)\leq\exp\left(-C^{-}n^{\alpha+\tau_{1}-\frac{1}{2}} \right).\] We repeat the same arguments, and find a similar bound for the probability involving \(B_{n}^{-}(t,s)\) in (56). Then, we substitute both bounds in (55), revealing a uniform bound for all probabilities in the first sum. Thus, we see that there exists a \(C^{\pm}>0\) such that the first sum is bounded by \[2\zeta_{n}^{2}\cdot\exp\left(-C^{\pm}n^{\alpha+\tau_{1}-\frac{1}{2}}\right)\to 0,\] by Lemma 4.2. Second and third double sum.We will only give the argument for the second sum, because the argument for the third is the same. Similar to (55), we start by noting that each term in the second sum is equal to \[\mathbb{P}\left(\widetilde{A}_{n}(t,s)>Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}+\lfloor \kappa(t,s)q_{t}q_{s}n\rfloor\right)+\mathbb{P}\left(\widetilde{A}_{n}(t,s)< \lfloor\kappa(t,s)q_{t}q_{s}n\rfloor-Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}\right).\] We will proceed to show that the second probability is zero by showing that \(\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor-Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}\) is negative. To do this, we subsequently bound \[\lfloor\kappa(t,s)q_{t}q_{s}n\rfloor-Cn^{1/2+\alpha}\sqrt{q_{t}q_ {s}} \leq n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\left(\kappa(t,s)n^{1/2-\alpha }\sqrt{q_{t}q_{s}}-C\right),\] \[\leq n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\left(\kappa^{+}n^{\tau_{1}/ 2-\alpha}-C\right)\leq 0.\] Here, in the second inequality we have used that either \(t\) or \(s\) is larger than \(\zeta_{n}\) and that \(\kappa\) is bounded (Lemma 4.9). In the final inequality we used that \(\tau_{1}<2\alpha\). 
Thus, to bound the second (and third) sum, we only need to bound \[\mathbb{P}\left(\widetilde{A}_{n}(t,s)>Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}+ \lfloor\kappa(t,s)q_{t}q_{s}n\rfloor\right). \tag{57}\] Like in the previous step, we again have \[\bar{A}_{n}(t,s)\sim\texttt{Bin}\left(\lfloor\mu n\rfloor,\mathbb{P}(\mathcal{ A}_{ts}^{(a)})\right),\] which we can stochastically bound using Lemma 4.16 for some \(\widehat{C}>0\) and all fixed \(r>0\) as \[\bar{A}_{n}(t,s)\preceq\underbrace{\texttt{Bin}\left(\lfloor\mu n\rfloor, \widehat{C}\sqrt{q_{t}q_{s}}\left(\sqrt{q_{t}q_{s}}+\frac{\log(n)}{\sqrt{n}} \right)+\frac{\widehat{C}}{n^{r}}\right)}_{B_{n}^{+}(t,s)},\] where we now use \(B_{n}^{+}(t,s)\) to denote the random variable on the right hand side. This show we can bound (57) by \[\mathbb{P}\left(B_{n}^{+}(t,s)>Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}+\lfloor\kappa( t,s)q_{t}q_{s}n\rfloor\right).\] As always, we seek to apply the Chernoff bound, so we will show that indeed \(\theta_{ts}:=Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}+\lfloor\kappa(t,s)q_{t}q_{s}n \rfloor-\mathbb{E}[B_{n}^{+}(t,s)]\geq 0\). Substituting the expectation and bounding \(\lfloor\mu n\rfloor\leq\mu n\) yields \[\theta_{ts}=Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}+\lfloor\kappa(t,s)q_{t}q_{s}n \rfloor-\mu\widehat{C}\mathit{n}q_{t}q_{s}-\mu\widehat{C}\log(n)\sqrt{q_{t}q_{ s}n}-\mu\widehat{C}n^{1-r}.\] Since both \(t,s\leq\xi_{n}\) we have that the first term in this expression is larger that \(Cn^{-1/2+\alpha+\tau_{2}}\). Hence, by choosing \(r\) sufficiently large in the last term, we see that the first term dominates it. Moreover, we also see that the first term dominates the fourth. In principle, this means that the fourth and final term are insignificant. Therefore, we rewrite to obtain the following lower bound: \[\theta_{ts}\geq n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\left(C-\mu\widehat{C}n^{1/2- \alpha}\sqrt{q_{t}q_{s}}\right)-\mu\widehat{C}\log(n)\sqrt{q_{t}q_{s}n}-\mu \widehat{C}n^{1-r}.\] Without loss of generality, we know in these sums that \(q_{t}\leq n^{-1+\tau_{1}}\) (otherwise, this would be true for \(q_{s}\)). Hence, we can further bound \[\theta_{ts}\geq n^{1/2+\alpha}\sqrt{q_{t}q_{s}}\left(C-\mu\widehat{C}n^{\tau_ {1}/2-\alpha}\right)-\mu\widehat{C}\log(n)\sqrt{q_{t}q_{s}n}-\mu\widehat{C}n^{1 -r}.\] Again, since \(\tau_{1}<2\alpha\) we know from all previous arguments there exists a \(\widetilde{C}>0\) such that \[\theta_{ts}\geq\widetilde{C}n^{1/2+\alpha}\sqrt{q_{s}q_{t}}>0.\] This means we can apply the Chernoff bound. It shows for some \(C^{+}>0\) that \[\mathbb{P}\left(B_{n}^{+}(t,s)>Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}+\lfloor\kappa( t,s)q_{t}q_{s}n\rfloor\right)\leq\exp\left(-C^{+}n^{1/2+\alpha}\sqrt{q_{t}q_{s}} \right).\] Since \(t,s\leq\xi_{n}\), meaning \(q_{t},q_{s}\geq n^{-1+\tau_{2}}\), we can further bound this as \[\mathbb{P}\left(B_{n}^{+}(t,s)>Cn^{1/2+\alpha}\sqrt{q_{t}q_{s}}+\lfloor\kappa( t,s)q_{t}q_{s}n\rfloor\right)\leq\exp\left(-C^{+}n^{\alpha+\tau_{2}-1/2} \right).\] Since we have assumed that \(\alpha>1/2-\tau_{2}\), we have that \(\alpha+\tau_{2}-1/2>0\), showing that (57) converges exponentially to zero. Thus, for the second (and third) sum we find it is bounded by \[\xi_{n}^{2}\cdot\exp\left(-C^{+}n^{\alpha+\tau_{2}-1/2}\right)\to 0,\] by Lemma 4.2. Since all the sums converge to zero, we also have that the target probability converges to zero. **Q.E.D.** Acknowledgments.The research of Mike van Santvoort is funded by the Institute for Complex Molecular Systems (ICMS) at Eindhoven University of Technology.
2309.09355
Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties
We introduce the elEmBERT model for chemical classification tasks. It is based on deep learning techniques, such as a multilayer encoder architecture. We demonstrate the opportunities offered by our approach on sets of organic, inorganic and crystalline compounds. In particular, we developed and tested the model using the Matbench and Moleculenet benchmarks, which include crystal properties and drug design-related benchmarks. We also conduct an analysis of vector representations of chemical compounds, shedding light on the underlying patterns in structural data. Our model exhibits exceptional predictive capabilities and proves universally applicable to molecular and material datasets. For instance, on the Tox21 dataset, we achieved an average precision of 96%, surpassing the previously best result by 10%.
Shokirbek Shermukhamedov, Dilorom Mamurjonova, Michael Probst
2023-09-17T19:41:32Z
http://arxiv.org/abs/2309.09355v3
Structure to Property: Chemical Element Embeddings and a Deep Learning Approach for Accurate Prediction of Chemical Properties ###### Abstract The application of machine learning (ML) techniques in computational chemistry has led to significant advances in predicting molecular properties, accelerating drug discovery, and material design. ML models can extract hidden patterns and relationships from complex and large datasets, allowing for the prediction of various chemical properties with high accuracy. The use of such methods has enabled the discovery of molecules and materials that were previously difficult to identify. This paper introduces a new ML model based on deep learning techniques, such as a multilayer encoder and decoder architecture, for classification tasks. We demonstrate the opportunities offered by our approach by applying it to various types of input data, including organic and inorganic compounds. In particular, we developed and tested the model using the _Matbench_ and _Moleculenet_ benchmarks, which include crystal properties and drug design-related benchmarks. We also conduct a comprehensive analysis of vector representations of chemical compounds, shedding light on the underlying patterns in molecular data. The models used in this work exhibit a high degree of predictive power, underscoring the progress that can be made with refined machine learning when applied to molecular and material datasets. For instance, on the Tox21 dataset, we achieved an average accuracy of 96%, surpassing the previous best result by 10%. Our code is publicly available at [https://github.com/dmamur/element](https://github.com/dmamur/element). ## 1 Introduction Due to their effectiveness in fitting experimental data and predicting material properties, machine learning models have found extensive applications in research on batteries[1, 2], supercapacitors[3], thermoelectric[4] and photoelectric[5] devices, catalysts[6] and in drug design[7]. In a'second wave', deep learning models (DLMs) have exhibited remarkable potential in advancing the field of chemical applications. So-called Word2vec[8] DLMs have been used for processing chemical text data extracted from academic articles. By representing chemical formulas as embeddings or vectors, non-obvious connections between compounds and chemical properties can be discovered. For instance, the mat2vec[9] NLP model was able to predict materials with good thermoelectric properties, even when these materials and their properties were not explicitly named in the original papers. Other NLP-inspired models, such as Bag of Bonds[10], mol2vec[11], smile2vec[12], SPvec[13], have used unsupervised machine learning and have been applied to chemical compound classification tasks, achieving remarkable results. These models hold immense potential for accelerating the discovery and the design of materials with tailored properties. In this regard, the type of input data is crucial for ML models. In chemistry, this could be chemical text data, like in mat2vec, or structural data. Chemical texts make it possible to use reference information of a compound[14], such as weight, melting point, crystallization temperature, and element composition. These types of inputs can, in turn, be used by general deep learning models, with ELMO, BERT, and GPT-3 (or GPT-4) being the most famous examples. 
One of the most common types of input data used for ML-based approaches is structural representation, which provides valuable information about the atomic environment of a given material. However, text-based data does not normally capture important structural features, such as interatomic distances. Structural information is crucial for predicting material properties, as it is key to all pertinent physical and chemical characteristics. This can be understood in the same sense as the Born-Oppenheimer approximation, in short, states that atomic coordinates (and from them the potential energy) are all that is needed in chemistry. The challenge of linking structural information to material properties is commonly referred to as the "structure to property" task. Overcoming this challenge has the potential to greatly enhance our ability to predict and design novel materials with desired properties. Structure could be translated into property by graph neural networks (GNN) or high-dimensional neural networks (HDNN) formalisms. GNNs transform graphs of molecules (or compounds) into node and edge embeddings, which can then be used for state-of-the-art tasks[15, 16, 17, 18, 19, 20, 21]. HDNNs based on converting Cartesian coordinates of atoms to continuous representations use techniques like the smooth overlap of atomic positions (SOAP)[22], the many-body tensor representation (MBTR)[23], or the atomic centered symmetry functions (ACSF)[24] to achieve the same goal. Message passing neural networks (MPNN) are a subgroup of HDNN that use atomic positions and nuclear charges as input. Examples include SchNet[25] and PhysNet[26]. In these models, atomic embedding encodes the atomic identifier into vector arrays, which are first initialized randomly and optimized during training. Despite the increasing use of deep learning in computational chemistry, many aspects of NLP models have yet to be fully explored. One of them is the attention mechanism[27], which allows the model to focus on specific parts of the input data when making predictions. It works by assigning different levels of importance, or attention, to different elements in the input sequence. Additionally, the so-called transformer approach has not yet been fully utilized in chemistry. The transformer consists of two distinct components: an encoder responsible for processing the input data and a decoder responsible for generating task-related predictions. In this paper, we introduce a new deep learning model for chemical compounds that utilizes both of these approaches. Specifically, our model incorporates local attention layers to capture properties of local atomic environments and then utilizes a global attention layer to make weighted aggregations of these atomic environment vectors to create a global representation of the entire crystal structure. While the attention mechanism has been previously used in graph neural networks[28], this work introduces an atomic representation deep learning model that can be applied to a wide range of tasks. From its components, we call this model 'elEmBERT' (**e**lement **E**mbeddings and **B**idirectional **E**ncoder **R**epresentations from **T**ransformers). In summary, the main aspects of our work are: * We use a transformer mechanism for binary classification based on structural information. * Our model is flexible and can be easily adapted to different types of datasets. 
* Benchmarks show the state-of-the-art performance of our model for a variety of material property prediction problems, both involving organic and inorganic compounds. ## 2 Methods As input to the neural network (NN), we utilize atomic pair distribution functions (PDFs) and the atom types that compose the compounds. The PDF represents the probability of finding an atom inside a sphere with a radius \(r\) centered at a selected atom[29]. To prepare the training data, we calculate PDFs employing the ASE library[30] with a cutoff radius of 10A. The second input for the NN consists of element embedding vectors. To achieve this, all elements in all crystals are mapped to integers (typically using the nuclear number), creating an elemental vocabulary of size \(V_{size}\)=101. These embeddings are then passed to the BERT module. BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained deep learning model originally designed for natural language processing (NLP) tasks. It employs a bidirectional transformer encoder to capture word context in sentences, allowing it to generate accurate text representations. BERT employs masked language modeling (MLM), where some tokens in a sentence are masked or replaced with a [MASK] token, and the model is trained to predict the original word based on the surrounding context. Additionally, BERT uses next sentence prediction, training on pairs of sentences to predict whether the second sentence follows the first. Our model is illustrated in Fig. 1. It can use various combinations of embedding sizes, encoder-decoder layers, and attention heads. In chemical applications, the atomic composition of a compound can be equated to a sentence, with individual atoms serving as constituent tokens. Leveraging this analogy, we introduce four new tokens to the vocabulary: [MASK] for MLM, [UNK] for unseen tokens, [CLS] for classification, and [SEP] for separating two compounds. In a chemical compound, each element can exhibit different oxidation states or formal charges, indicating the relative electron loss or gain during chemical reactions. Considering the foundations of chemistry, it is evident that the elements composing them do not interact uniformly, but instead exhibit specific interactions with neighboring atoms. In the case of inorganic substances, such interactions can manifest themselves as ionic interactions represented by the oxidation state, while for Figure 1: Classification Model Architecture: The initial step involves computing the pair distribution function for each element based on atom positions within the chemical compound. This information is then passed through the PCAKM layer. Subsequently, the resulting subelements are converted into tokens, with additional tokens incorporated before input into the BERT module. The [CLS] token output vector from BERT is used for the classification task. organic substances, it may be covalent bonding. To create a universal criterion for understanding these interactions, we considered the number of electrons that can participate in a chemical reaction. Using this criterion, we categorized the elements in our training dataset into subelements based on the number of electrons in their outer shell or their oxidation states. However, it is important to note that information about the type of interaction between atoms in molecular structures is often missing, and existing algorithms can be prone to errors. 
In view of this, and recognizing that the length of a chemical bond carries information about the type of interaction, we used Principal Component Analysis (_PCA_) to reduce the dimensionality of the PDF vectors. We then employed the \(k\)-means algorithm to cluster the outputs and categorize elements into subelement classes. This is similar to what has often been done manually when developing classical force fields. Examples of such differentiation are presented in Fig. 2. We trained an individual model for each element in our dataset, resulting in a total of 192 models, including one _PCA_ and one \(k\)-means model for each element. The final dictionary size was \(V_{size}\)=565. In the following sections, we will present the results of prediction models with specific parameters, including an embedding size of 32, 2 attention heads, and 2 layers. We explored two model versions, V0 (where the _PCAKM_ block is omitted) and V1, as discussed previously. These calculations were carried out three times for each dataset, using the random seeds 12345, 67890, and 234567.

## 3 Results

We trained our elEmBERT model to perform various classification tasks. To do this, we used the [CLS] token and added an additional layer to the BERT module with the same number of neurons as there are classes in the dataset. Our first task involved using the Materials Project (MP) metallicity dataset to predict the metallicity of materials based on crystal structure information [31, 32]. Next, we employed a portion of the datasets gathered for the CegaNN model [33]. This led us to undertake a classification task known as the Liquid-Amorphous (LA) task, which revolves around distinguishing between liquid and amorphous phases of silicon (Si). The LA dataset comprises 2,400 Si structures, evenly divided between amorphous and liquid phases (50% each). Importantly, these Si structures lack symmetry and differ solely in terms of density and coordination number. In addition to these tasks, we evaluated the elEmBERT model's ability to classify material polymorphs across different dimensionalities, specifically clusters (0D), sheets (2D), and bulk structures (3D). Carbon, with its wide range of allotropes spanning these dimensionalities, served as an excellent system for assessing the efficiency of our network model in dimensionality classification (DIM task). The DIM dataset contained 1,827 configurations. Finally, we ventured into characterizing the space group of crystal structures, encompassing a total of 10,517 crystal structures distributed among eight distinct space groups (SG task) [34]. Expanding beyond inorganic material datasets, we incorporated organic compounds, which greatly outnumber their inorganic counterparts. This expansion encompasses an extended range of properties, including biochemical and pharmaceutical aspects. To rigorously validate our model, we turned to benchmark datasets from MoleculeNet [35], specifically BBBP (Blood-Brain Barrier Penetration), ClinTox (Clinical Toxicity), BACE (\(\beta\)-Secretase), SIDER (Side Effect Resource) [36], and Tox21. These datasets cover a diverse array of chemical compounds and provide a comprehensive assessment of our model's predictive performance for binary properties or activities associated with organic molecules. In this context, a positive instance signifies that a molecule possesses a specific property, while a negative instance indicates its absence. The MoleculeNet dataset primarily comprises organic molecules represented in SMILES format.
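A compact sketch of the structural preprocessing described in Section 2, in which per-atom PDFs are reduced with PCA and clustered with k-means to produce subelement tokens, might look as follows. The function names, bin count, PCA dimension, and per-element cluster counts are illustrative assumptions; only the 10 Å cutoff and the one-PCA-plus-one-k-means-per-element scheme come from the text.

```python
import numpy as np
from ase.io import read
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

CUTOFF, N_BINS = 10.0, 100  # 10 angstrom cutoff from the paper; bin count assumed

def atom_pdfs(structure_file):
    """One neighbour-distance histogram (a crude PDF) per atom in a structure."""
    atoms = read(structure_file)
    dists = atoms.get_all_distances(mic=True)  # minimum-image convention for crystals
    pdfs = []
    for row in dists:
        d = row[(row > 0) & (row <= CUTOFF)]  # drop self-distance, apply cutoff
        hist, _ = np.histogram(d, bins=N_BINS, range=(0.0, CUTOFF))
        pdfs.append(hist / max(hist.sum(), 1))  # normalise to a distribution
    return np.array(pdfs), atoms.get_chemical_symbols()

def fit_subelement_models(pdfs_by_element, n_components=2, n_clusters=4):
    """One PCA + one k-means per element, as in the paper (192 models in total)."""
    models = {}
    for element, vectors in pdfs_by_element.items():
        pca = PCA(n_components=n_components).fit(vectors)
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(pca.transform(vectors))
        models[element] = (pca, km)
    return models

def subelement_token(element, pdf_vector, models):
    """Map an atom to a subelement token such as 'O_2'."""
    pca, km = models[element]
    return f"{element}_{km.predict(pca.transform(pdf_vector[None, :]))[0]}"
```

For the MoleculeNet benchmarks, these tokens are produced only after 3D geometries have been generated from the SMILES strings, as described next.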
For analysis purposes, we converted these SMILES formulas into the standard XYZ format using the Open Babel software [37] and RDKit package [38]. To evaluate our model's performance, we employed the 'Receiver Operating Characteristic - Area Under the Curve' (ROC-AUC) metric, a common measure for assessing binary classification quality. ROC-AUC quantifies the model's ability to differentiate between positive and negative classes based on predicted probabilities. We divided the datasets into three subsets: the training set, the validation set, and the test set, with an 80:10:10 ratio. The ROC-AUC results reported in Table 1 are based on the test set. Figure 2: Two examples illustrating the division of elements into sub-elements based on their environment: a hypothetical organic compound (a) and LiCoO\({}_{6}\) (b) crystal with ID mp-27920. The numbers at the top right of elements correspond to subelements. These results serve as a reliable metric for evaluating prediction capabilities and the model's ability to generalize to new instances. Notably, the Tox21 and SIDER datasets encompass 12 and 27 individual tasks, respectively, each corresponding to specific toxicity predictions. Table 1 provides clear evidence that the accuracy of predictions improves as the number of subtypes increases, particularly for inorganic compounds. In the LA task, using single-element inputs, such as Si, results in only 50% accuracy, which is comparable to random guessing. However, incorporating sub-elements significantly enhances the performance, leading to an impressive ROC-AUC of 0.98. Our approach also demonstrates improved accuracy across other datasets. While further increasing the number of sub-elements has a relatively small impact, it still leads to higher accuracy. In the subsequent sections, we will delve into each dataset, from Matbench to Toxic21, and examine the elEmBERT-V1 model in more detail, providing comprehensive insights into the predictions. ### MP metallicity Figure 2(a) illustrates the confusion matrix and presents the performance of the elEmBERT-V1 model in classifying MP metallicity. In this task, the objective is to predict or estimate whether a material or chemical compound is a metal or not. The dataset for this task comprises 106,113 samples of training structures and 21,222 samples of test structures. Our trained model achieves a binary accuracy of approximately 0.91 and an AUC of 0.965 on the test set. In Figure 2(b), the t-SNE (t-distributed stochastic neighbor embedding) plot shows the embeddings of the entire reference dataset, categorized by labels, revealing a smooth differentiation among labels within the feature space. Figure 2(c) demonstrates how our model classifies the reference dataset. It is evident that the classification models create a clear separation in the feature space, in contrast to the diffuse boundary in the reference dataset. The primary errors are located at the boundary, where the model sometimes struggles to effectively capture the diffusive behavior. 
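For the organic benchmarks, a minimal sketch of the preprocessing and evaluation pipeline described above (SMILES to 3D coordinates, an 80:10:10 split, and ROC-AUC scoring) could read as follows; the RDKit route is shown, although the paper also uses Open Babel, and the seed value simply echoes those quoted earlier.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.metrics import roc_auc_score

def smiles_to_xyz(smiles, seed=12345):
    """Embed a SMILES string in 3D and return an XYZ block."""
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    AllChem.EmbedMolecule(mol, randomSeed=seed)  # 3D embedding with hydrogens
    return Chem.MolToXYZBlock(mol)

def split_80_10_10(n_samples, seed=12345):
    """Random train/validation/test index split in an 80:10:10 ratio."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    a, b = int(0.8 * n_samples), int(0.9 * n_samples)
    return idx[:a], idx[a:b], idx[b:]

# After training, on the held-out test set:
# auc = roc_auc_score(y_test, predicted_positive_probabilities)
```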
The metallicity prediction task highlights elEmBERT's remarkable capability to characterize these binary properties of crystals. The achieved accuracy surpasses the capabilities of previously published models, including those of GNNs.

\begin{table} \begin{tabular}{c|c c c} **Benchmark** & **V0** & **V1** & **BEST** \\ \hline **MP metallicity** & \(0.961\pm 0.001\) & \(\mathbf{0.965\pm 0.001}\) & \(0.950^{39}\) \\ **SG** & \(0.944\pm 0.003\) & \(0.968\pm 0.002\) & \(\mathbf{1}^{33}\) \\ **LA** & \(0.475\pm 0.014\) & \(0.980\pm 0.003\) & \(\mathbf{1}^{33}\) \\ **DIM** & \(0.893\pm 0.013\) & \(0.958\pm 0.003\) & \(\mathbf{1}^{33}\) \\ **BACE** & \(0.827\pm 0.005\) & \(0.856\pm 0.010\) & \(\mathbf{0.888}^{40}\) \\ **BBBP** & \(0.900\pm 0.020\) & \(0.905\pm 0.025\) & \(\mathbf{0.932}^{40}\) \\ **CLINTOX** & \(0.945\pm 0.011\) & \(\mathbf{0.951\pm 0.016}\) & \(0.948^{41}\) \\ **HIV** & \(0.978\pm 0.002\) & \(\mathbf{0.979\pm 0.003}\) & \(0.776^{42}\) \\ **SIDER** & \(\mathbf{0.778\pm 0.032}\) & \(0.777\pm 0.028\) & \(0.659^{40}\) \\ **TOX21** & \(\mathbf{0.961\pm 0.006}\) & \(0.958\pm 0.007\) & \(0.860^{41}\) \\ \end{tabular} \end{table} Table 1: Performance of different models applied to the datasets (Matbench to TOX21) used in this work. Bold font indicates the best performance, an underline represents the second-best performance, and the last column presents previous results obtained from other models. V0 represents models that use chemical element embeddings, while V1 uses subelement embeddings as input for the BERT module.

Figure 3: Confusion matrix (a) and visualization of [CLS] token embeddings for the MP metallicity dataset for the reference (b) and predicted (c) datasets: blue circles denote negative labels (not metal) and orange dots represent positive labels (metal).

### LA, DIM and SG

This section presents the results obtained from benchmarks conducted for the CegaNN model, beginning with the LA classification task. Figs. 4a and 4b show the embedding representation of the Si structures according to their labels, reduced with the t-SNE algorithm. Our model effectively segregates the structures into distinct clusters, with two clusters clearly corresponding to their respective classes. However, one cluster exhibits intermixing of structures, which hinders accurate recognition by the model. The confusion matrices shown in Fig. 4c-e provide insights into the performance of the elEmBERT-V1 model across the LA, DIM, and SG datasets. The model achieves a high accuracy of approximately 0.958 on the DIM task's test set and a slightly higher accuracy of 0.968 on the SG dataset. These confusion matrices illustrate the model's ability to identify and categorize each structure accurately. It is worth noting that the model faces challenges in distinguishing the bcc (229) structure from the others in the SG dataset. This challenge arises from the structural similarities between the bcc structure and others, resulting in identical geometrical representations unless the orientational order of the particles is considered. While the CegaNN model achieved 100% efficiency in this benchmark, our model does not reach this level of performance. Nonetheless, it demonstrates strengths in terms of versatility, speed, and simplicity for this benchmark as well.

### BACE

The BACE dataset consists of compounds classified as either active or inactive inhibitors of the \(\beta\)-secretase enzyme, which plays a crucial role in the production of amyloid-beta peptides associated with Alzheimer's disease.
This dataset contains a total of 1,513 compounds, including 681 positive instances, making it a valuable resource for developing and evaluating predictive models aimed at identifying potential BACE enzyme inhibitors [35]. Our elEmBERT-V1 model, trained on the BACE dataset, achieved a ROC-AUC value of 0.86 in classifying compounds as active or inactive inhibitors. The visualization of our model's predictions on the BACE dataset is presented in Fig. 5. Our model predicts the presence of two distinct clusters, with some infiltration of both labels within each other (Fig. 5c). However, it is important to note that the reference labels do not distribute uniformly across these visible clusters; instead, both labels intermix, leading to errors in both active (true) and inactive (false) predictions. Nevertheless, the attained AUC value of 0.856 closely approximates the best performance obtained from a GNN model. We believe that exploring alternative combinations of model parameters may further enhance these results.

Figure 4: Top row: Visualization of [CLS] Token Embeddings for the LA Dataset: a) reference labels and b) predicted labels. The embeddings are represented using blue circles for liquid phase labels and orange dots for amorphous labels. Bottom row: Confusion matrix analysis of the LA (c), Dim (d), and SG (e) datasets.

### BBBP

Next, we used the BBBP dataset, which comprises 2,039 chemical compounds annotated based on their ability to penetrate the blood-brain barrier. This dataset serves as a valuable resource for training and evaluating models aimed at predicting drug candidates' permeability through the blood-brain barrier [35]. Remarkably, our predictive model achieved a high ROC-AUC value of 0.905, the second-best value among the compared models. As before, we present Fig. 6, which includes the confusion matrix of the test set and t-SNE plots illustrating the feature representation of the labels. As can be seen, the model successfully separates compounds according to their labels. However, the primary source of errors again arises from the diffuse boundary region of the reference data, across which our model draws a sharp boundary.

### ClinTox

The ClinTox dataset is a valuable resource for studying the clinical toxicity profiles of chemical compounds. It provides data on two crucial toxicity endpoints: clinical trial toxicity and FDA approval status. Researchers use this dataset to develop predictive models and evaluate the safety profiles of compounds, aiding in the early identification of potentially toxic substances during the drug development process [35]. The ClinTox dataset contains 1,491 compounds. The ClinTox model achieves an impressive ROC-AUC of approximately 0.951 on the FDA approval task, as demonstrated in Fig. 7. Nevertheless, this dataset features only 94 negative instances, which leads to the confusion matrix showing zero predicted False values. A more detailed analysis of the t-SNE projections shows that our model identifies a region with the highest concentration of negative values, yielding accurate true predictions for all points within this limited area. Selecting the false instances reliably, however, calls for a more complex model. We believe that increasing the embedding size, together with the number of attention heads and encoder layers, may further enhance our results.

Figure 5: Classification of BACE data: a) Confusion matrix of predicted labels on the test set. b) t-SNE feature representation of the entire reference dataset according to their labels. c) Feature representation of the predicted labels.
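The t-SNE panels shown in these figures are straightforward to reproduce from stored [CLS]-token vectors. The sketch below uses scikit-learn; the array names and plotting choices are ours, not the paper's.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_cls_embeddings(cls_embeddings, labels):
    """2D t-SNE projection of [CLS] vectors, coloured by binary class label."""
    proj = TSNE(n_components=2, init="pca", random_state=0).fit_transform(cls_embeddings)
    labels = np.asarray(labels)
    for value, name in [(0, "negative"), (1, "positive")]:
        mask = labels == value
        plt.scatter(proj[mask, 0], proj[mask, 1], s=8, label=name)
    plt.legend()
    plt.show()
```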
Figure 6: Classification of BBBP data: a) Confusion matrix of predicted labels on the test set. b) t-SNE feature representation of the entire reference dataset according to their labels. c) Feature representation of the predicted labels. ## HIV The HIV dataset comprises diverse biomedical data related to the Human Immunodeficiency Virus (HIV), including clinical records, genetic sequences, drug resistance profiles, and more. Machine learning techniques are applied to this dataset for tasks such as predicting patient treatment responses, identifying drug resistance mutations, and understanding viral evolution patterns. Within the HIV dataset, there are approximately 41,000 distinct data structures, of which 1,443 are considered positive cases. For the task at hand, the achieved AUC score is an impressive 0.98. Notably, the primary source of erroneous predictions lies in the positive classification of negative compounds, as presented in the confusion matrix plot (Fig. 8a). Fig. 8b illustrates that some positive data points continue to mix with negative ones, contributing to these misclassifications. Furthermore, a diffuse region housing negative values (found in the lower-right region) also contributes to these inaccuracies. Despite the notable occurrence of incorrect positive predictions, it's worth emphasizing that our model demonstrates a robust capability to effectively categorize HIV compounds. Importantly, the highest AUC score achieved by other models remains considerably lower at 0.778, significantly lagging behind our results. ## SIDER The SIDER dataset serves as a comprehensive pharmacovigilance resource, containing structured information on drug-associated side effects. Curated from diverse sources such as clinical trials, regulatory reports, and medical literature, it offers a systematic compilation of adverse drug reactions associated with various pharmaceutical interventions. The SIDER dataset plays a vital role in assessing drug safety, understanding adverse reaction patterns, and informing clinical decision-making and drug development. It comprises 27 individual tasks, each requiring a corresponding model for fitting. The training results are presented in Table 2, where the average AUC value across all tasks is approximately 0.78. This notably exceeds the previous best-predicted value of 0.659 (over all tasks). The task with the lowest AUC value was the initial SIDER task, concerning Hepatobilary disorders. Figure 8: Classification of ClinTox data: a) Confusion matrix of predicted labels on the test set. b) t-SNE feature representation of the entire reference dataset according to their labels. c) Feature representation of the predicted labels. Figure 7: Classification of ClinTox FDA approval task: a) The confusion matrix of predicted labels on the test set. b) The t-SNE feature representation of the entire reference dataset according to their labels. c) The feature representation of the predicted labels. The Meta-MGMNN (MMGNN) model achieved a score of 0.763[43] for this task, compared to 0.635 in our model. However, in other tasks, our model demonstrates comparability or superiority. The average AUC value across all tasks for our model is higher than the reported value, which averaged the first six tasks in the MMGNN model. In Fig. 9, both the confusion matrix of the test set and t-SNE plots for the compound embeddings are illustrated. 
These visualizations reveal that labels within the feature space of the reference data are intermingled, necessitating a more intricate model than the one employed in this study for effective label separation. Nonetheless, the model proves better suited for other tasks, providing satisfactory results and improvements over prior predictions. ### Toxic21 The Tox21 dataset is a collection of chemical compounds evaluated for their toxicity against a panel of 12 different biological targets. With over 8,000 compounds, it serves as a valuable resource for predicting the toxicity and potential adverse effects of various chemical compounds. Our model, trained on the Tox21 dataset, demonstrated impressive performance, achieving an average AUC of 0.96 across all 12 toxicity prediction tasks[35]. The results of these individual tasks are presented in Table 3, enabling a comprehensive evaluation of the model's performance on each toxicity prediction within the Tox21 dataset. Comparing our results with those of the MMGNN model highlights the significant advantages of our approach. Fig. 10 shows the confusion matrix of the test set and the t-SNE projection representing the features of the sr-mmp task in the Tox21 dataset. As shown, our model predicts distinct patterns in the t-SNE projections, with each label value occupying a specific region (Fig. 10b). The molecular embedding visualizations are also available in the MMGNN model report for the sr-mmp task[43]. In contrast, our feature space exhibits more structure, with positive values being less dispersed across all compounds. Our model primarily has only a few points that are significantly distant from the positive value region. Both elEmBERT models successfully identify the boundary between these two classes and make predictions (Fig. 10c). Errors primarily arise from diffuse boundary regions and points located far from the true cluster. This observation holds true for all tasks within the Tox21 dataset. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c} **SIDER N** & **1** & **2** & **3** & **4** & **5** & **6** & **7** & **8** & **9** & **10** & **11** & **12** & **13** & **14** \\ **V0** & 0.626 & **0.756** & 0.972 & 0.735 & 0.843 & 0.736 & **0.958** & **0.846** & **0.775** & **0.712** & 0.748 & **0.930** & **0.802** & **0.859** \\ **V1** & 0.635 & 0.723 & **0.976** & 0.700 & **0.881** & 0.677 & 0.957 & 0.865 & 0.757 & 0.662 & **0.769** & 0.918 & 0.785 & 0.838 \\ **MMGNN** & **0.754** & 0.693 & 0.723 & **0.744** & 0.817 & **0.741** & - & - & - & - & - & - & - & - \\ **SIDER N** & **15** & **16** & **17** & **18** & **19** & **20** & **21** & **22** & **23** & **24** & **25** & **26** & **27** & **Ave** \\ **V0** & 0.841 & 0.675 & 0.898 & 0.812 & **0.792** & 0.761 & 0.843 & 0.750 & 0.909 & **0.669** & **0.758** & 0.952 & **0.798** & **0.778** \\ **V1** & **0.877** & **0.732** & **0.921** & **0.833** & 0.781 & **0.798** & **0.873** & **0.781** & **0.918** & 0.545 & 0.726 & **0.962** & 0.731 & 0.777 \\ **MMGNN** & - & - & - & - & - & - & - & - & - & - & - & - & - & 0.747 \\ \end{tabular} \end{table} Table 2: ROC-AUC performances of various models on the SIDER dataset. MMGNN denotes the prior top-performing results[43]. The last column presents the elEmBERT model’s average performance across all tasks. The Bold entries signify the highest performance, while underlined values indicate the second-best performance. Figure 9: Classification of SIDER-1 data: a) Confusion matrix of predicted labels on the test set. 
b) t-SNE feature representation of the entire reference dataset according to their labels. c) Feature representation of the predicted labels. The binary classification results of our model for organic compounds exemplify its exceptional efficiency in predicting the behavior of interactions between organic compounds and protein molecules. By accurately classifying these compounds, our model provides valuable insights into their potential effects and interactions within biological systems. This capability holds significant promise for drug discovery, as it enables the identification of organic compounds that have a high likelihood of binding to specific protein targets and exerting desired therapeutic effects. ## 4 Conclusions In conclusion, the deep learning model presented in this paper signifies a significant advancement in the application of machine learning to computational chemistry. By integrating the attention mechanism and a transformer-based approach, our model can capture both local and global properties of chemical compounds, enabling highly accurate predictions of chemical properties that outperform similar approaches. Our innovative combination of principal component analysis and k-means clustering for sub-elements accounts for the nuanced effects stemming from electronic structure, a fact confirmed through the analysis of numerous chemical databases. Our classification approach, which relies on compound embeddings, has substantially improved prediction accuracy compared to previously published scores. Additionally, t-SNE projections provide valuable insights into the classification mechanisms and can pinpoint sources of erroneous predictions. Beyond accurately predicting desired properties, we believe that our model has the potential to illuminate the underlying reasons behind structure/property relationships. ## Acknowledgements The work has partially been carried out within the framework of the EUROfusion Consortium and received funding from the Euratom research and training programme by Grant Agreement No. 101052200-EUROfusion. SS has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 847476. The views and opinions expressed herein do not necessarily reflect those of the European Commission. The computational results have been obtained using the HPC infrastructure LEO of the University of Innsbruck. ## Data availability All data used in this paper are publicly available and can be accessed from various sources. The structure files for the MP metallicity dataset are accessible at [https://matbench.materialsproject.org/](https://matbench.materialsproject.org/). The LA, SG, and DIM datasets are available at [https://github.com/sbanik2/CEGANN/tree/main/pretrained](https://github.com/sbanik2/CEGANN/tree/main/pretrained). The BACE, BBBP, Clintox, HIV, and SIDER datasets can be retrieved from [https://molecularnet.org/](https://molecularnet.org/). Structure files for the Tox21 dataset can be obtained from Figure 10: Classification of sr-mmp data from Tox21 dataset: a) Confusion matrix of predicted labels on the test set. b) t-SNE feature representation of the entire reference dataset according to their labels. c) Feature representation of the predicted labels. 
[https://tripod.nih.gov/tox21/challenge/data.jsp](https://tripod.nih.gov/tox21/challenge/data.jsp). The source code used in this study is available at [https://github.com/dmamur/element](https://github.com/dmamur/element), and detailed Python notebooks for replicating all calculations can be found on the corresponding GitHub page.

\begin{table} \begin{tabular}{c|c c c c c c c c c c c c c} **Model** & **nr-ahr** & **nr-ar-lbd** & **nr-ar** & **nr-aromatase** & **nr-er-lbd** & **nr-er** & **nr-ppar-g** & **sr-are** & **sr-atad5** & **sr-hse** & **sr-mmp** & **sr-p53** & **Ave** \\ \hline **V0** & 0.947 & **0.987** & **0.973** & **0.982** & 0.972 & 0.924 & **0.991** & 0.908 & 0.982 & 0.975 & 0.935 & **0.970** & **0.961** \\ **V1** & **0.953** & 0.981 & 0.972 & **0.982** & **0.976** & **0.930** & 0.989 & **0.911** & **0.984** & **0.976** & **0.941** & **0.970** & 0.958 \\ **MMGNN** & - & - & - & - & - & - & - & - & - & 0.748 & 0.804 & 0.790 & 0.781 \\ \end{tabular} \end{table} Table 3: ROC-AUC performances of different tasks from the Tox21 dataset. MMGNN denotes the prior top-performing results[43]. The last column presents the elEmBERT model's average performance across all tasks. Bold entries signify the highest performance, while underlined values indicate the second-best performance.
2309.12013
Electrostatic tuning of bilayer graphene edge modes
We study the effect of a local potential shift induced by a side electrode on the edge modes at the boundary between gapped and ungapped bilayer graphene. A potential shift close to the gapped-ungapped boundary causes the emergence of unprotected edge modes, propagating in both directions along the boundary. These counterpropagating edge modes allow edge backscattering, as opposed to the case of valley-momentum-locked edge modes. We then calculate the conductance of a bilayer graphene wire in presence of finger-gate electrodes, finding strong asymmetries with energy inversion and deviations from conductance quantization that can be understood with the gate-induced unprotected edge modes.
Hira Ali, Llorenç Serra
2023-09-21T12:31:53Z
http://arxiv.org/abs/2309.12013v1
# Electrostatic tuning of bilayer graphene edge modes ###### Abstract We study the effect of a local potential shift induced by a side electrode on the edge modes at the boundary between gapped and ungapped bilayer graphene. A potential shift close to the gapped-ungapped boundary causes the emergence of unprotected edge modes, propagating in both directions along the boundary. These counterpropagating edge modes allow edge backscattering, as opposed to the case of valley-momentum-locked edge modes. We then calculate the conductance of a bilayer graphene wire in presence of finger-gate electrodes, finding strong asymmetries with energy inversion and deviations from conductance quantization that can be understood with the gate-induced unprotected edge modes. ## I Introduction Bilayer graphene (BLG) allows a remarkable mechanism of electronic confinement by tuning the energy gap with electrostatic gates on the sides of the two graphene layers.[1; 2; 3; 4; 5; 6; 7; 8; 9; 10] Indeed, an interlayer electric field opens a gap in the spectrum, thus favouring electronic confinement to those regions with a vanishing (or small) interlayer field. Electrodes of carefully chosen shapes, designed with lithographic techniques, allow different types of BLG nanostructures such as open semi-infinite edges, quasi-1D wires (electron guides), and fully closed loops, rings and dots. For instance, the blue/red electrodes in Fig. 1a create an open BLG edge at \(y=0\), separating two half planes, gapped and ungapped, for electronic motion. Graphene nanostructures can also be made with etching techniques, removing parts of the graphene system, as opposed to the above mentioned electrostatic confinement of BLG. With etching, however, the specific atomic arrangement at the borders as well as the presence of undesired edge roughnesses or imperfections is usually relevant and methods to reduce or minimize them are generally desired.[11; 12; 13] The tuning of the electric gap in BLG using electric fields was demonstrated in early magnetotransport experiments with bulk BLG [14]. These were followed by a very intense research activity, as summarized, e.g., in the field reviews Refs. [1; 3]. More recently, experiments on electrostatic confinement in BLG nanostructures have been reported for dots [6; 7; 8; 9; 10] and 1D edges [15; 16; 17; 18; 12]. The hallmark of the latter are the observation of conductance quantization in quantum transport experiments. Ungapped BLG hosts bulk propagating electronic modes for any energy, with characteristic 2D wave numbers, \((k,q)\) in \((x,y)\) directions. States in translationally invariant edges or wires in only one direction (\(x\)) have a 1D wave number (\(k\)); while closed loops and dots possess a fully discrete electronic spectrum. In Ref. [19] it was shown that an open electrostatic edge in BLG is able to bind an edge mode with a characteristic valley-momentum locking; i.e., with opposite valleys propagating in opposite directions along the edge. Remarkably, the wave number \(k\) of this mode separates from the continuum band of bulk ungapped modes and yields characteristic transport signatures in BLG junctions.[19] In this work we further investigate the properties of electrostatic edge modes in BLG. In particular, we focus on the effect of a potential shift as induced by an additional side electrode. We consider an electrostatic edge defined by two side gates (red and blue in Fig.1a), and an additional gate creating the potential shift (green in Fig. 1a). 
We found that a lateral shift of the electrodes by a small distance \(l_{y}\) has a very relevant effect: it causes additional edge modes in the stripe of width \(l_{y}\), running in both directions along the edge. These counterpropagating modes therefore allow backscattering mediated by the edge modes alone, without the need to couple to bulk modes. In presence of disorder, or other inhomogeneities, an additional electrode will then strongly affect the conductance along the edge. Besides, the electrode also causes energy-inversion asymmetry, with different conductances for positive and negative energies (with zero energy being the Dirac-point reference energy).

Figure 1: a) Sketch showing the two bilayer graphene planes (gray), the pair of electrodes for asymmetric potential \(V_{a}\) (red and blue) and the electrode for symmetric potential \(V_{s}\) (green). A \(y\)-displacement of the symmetric and asymmetric electrodes is indicated by \(l_{y}\). b) Model symmetric \(V_{s}(y)\) and asymmetric \(V_{a}(y)\) potentials with a displacement \(l_{y}=200\) nm, \(V_{s}^{0}=2\) meV, \(V_{a}^{0}=20\) meV.

Subsequently, we use the results on the open edge to understand the effect of finger gate electrodes (FGE) across a BLG wire or guide. We consider the cases of an extended FGE covering both wire edges, or shorter FGE's affecting one or the two edges of the wire. We predict conspicuous energy asymmetries and conductance deviations from quantization that can be explained with the FGE-induced edge modes. Therefore, similarly to the case of semiconductor wires,[20] FGE's are a practical way to manipulate electronic transport in BLG electrostatic wires.

## II Theoretical model

Our analysis is based on a low-energy Hamiltonian for BLG in presence of electrostatic potentials.[1] We consider two types of potentials: a _symmetric_ potential \(V_{s}\), equal on the two layers, and an _asymmetric_ potential \(\pm V_{a}\), with opposite signs on the two layers. In this work we consider parameterized model functions for both potentials, as shown in Fig. 1b for the case of an open edge. These functions read \[V_{s/a}(y)=\frac{V_{s/a}^{0}}{1+e^{(y-y_{s/a})/s}}\;, \tag{1}\] where the parameters \(V_{s/a}^{0}\) and \(y_{s/a}\) are the asymptotic value and the position of the border for the symmetric/asymmetric potential, respectively. The parameter \(s\) is a small distance representing the smoothness of the potential steps. Examples of our model potentials can be seen in Fig. 1b. The low-energy effective Hamiltonian we will use in this work is built on an underlying tight-binding atomistic description of BLG. The electronic band structure of unbiased bulk BLG is characterized by gap closings at the six Dirac points in reciprocal space, three of them corresponding to the valley \(K_{+}\) and the other three to valley \(K_{-}\). Near those Dirac points, an expansion to the leading terms in electronic momenta yields an effective multiband continuum Hamiltonian. We refer the reader to Ref. [1] for details on the mathematical derivations and only stress here that we restrict to graphene layers in AB Bernal stacking. Adding model potentials of the type (1) to the resulting effective Hamiltonian describes the specific confinement mechanisms due to the electrostatic gates of this work. The potential difference \(2V_{a}\) between the two graphene layers opens an energy gap in the low-energy spectrum around the Dirac points, a gap that is modulated in space when the potential is position dependent.
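As a quick numerical illustration, the smoothed steps of Eq. (1) can be evaluated directly. The grid below and the placement of the symmetric step at \(y_{s}=l_{y}\) are our reading of Fig. 1b, while the amplitudes and the smoothness follow the values quoted in the figure captions.

```python
import numpy as np

def step_potential(y, v0, y0, s=7.5):
    """Smoothed potential step of Eq. (1); s (nm) sets the step smoothness."""
    return v0 / (1.0 + np.exp((y - y0) / s))

y = np.linspace(-400.0, 400.0, 801)         # nm
v_a = step_potential(y, v0=20.0, y0=0.0)    # meV, gap-opening (asymmetric) gates
v_s = step_potential(y, v0=2.0, y0=200.0)   # meV, side gate displaced by l_y
```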
The BLG low-energy Hamiltonian reads [1] \[H = v_{F}p_{x}\tau_{z}\sigma_{x}+v_{F}\,p_{y}\sigma_{y}+\frac{t}{2}\, \left(\,\lambda_{x}\sigma_{x}+\lambda_{y}\sigma_{y}\,\right) \tag{2}\] \[+ V_{s}(x,y)+V_{a}(x,y)\,\lambda_{z}\;,\] with the Fermi velocity \(\hbar v_{F}=660\,\)meV nm and the interlayer coupling \(t=380\,\)meV. In Eq. (2), \(\sigma_{x,y,z}\), \(\tau_{x,y,z}\) and \(\lambda_{x,y,z}\) are sets of Pauli matrices for sublattice, valley and layer degrees of freedom, respectively. This Hamiltonian is valley diagonal and it has been used to study quantum states in a variety of BLG nanostructures. As mentioned in Sec. I, the use of position-dependent potentials \(V_{s}(x,y)\) and \(V_{a}(x,y)\) allow modeling the effect of potential gates that create electrostatic borders. Notice that Eq. (2) is for general inhomogenous potentials \(V_{s}\) and \(V_{a}\) depending on both coordinates \((x,y)\). See below, however, for the restricted cases considered in this work of potentials which are uniform along \(x\) or piecewise-uniform along \(x\), with each uniform section of the type given in Eq. (1). We also stress that for the case of sign-changing \(V_{a}(x,y)\)'s, Hamiltonian (2) predicts the emergence of _topological_ modes near the sign-change border. The spectrum becomes gapless in presence of these modes since their energies \(E(k)\) cross from the negative to the positive energy sectors. These modes also show a characteristic valley-momentum locking and protection from bulk modes by an energy gap. [21; 22; 23; 24; 25; 26; 27] From a formal Condensed Matter topology approach, it has been pointed out that specific invariants for each valley \(N_{\tau}=\pm 1\) can be approximately defined in BLG.[28; 2] However, it has also been stressed that the bulk-boundary correspondence between those invariants and the edge modes is not general and may depend on the specific type of interface, such as in BLG-vacuum or BLG-BLG.[28] The latter type corresponding to the electrostatic boundaries considered in this work. Below, we will discuss a) the eigenstates of the Hamiltonian (2) for fully translational invariant BLG systems with both potentials \(V_{s}(y)\) and \(V_{a}(y)\); and b) the conductance through junctions of different BLG sections described by \(V_{s}(x,y)\) and \(V_{a}(x,y)\) having a piecewise-constant dependence on \(x\). They model the effect of a central FGE on a quantum wire (see device sketches in Figs. 3-5). In all cases our resolution method is based on a combination of spatial grid discretization and multiple component wave functions using complex-band-structure theory. More details of the method can be found in Sec. V. ## III Results and discussion #### iii.0.1 Single Edge Figure 2 shows the electron eigenenergies for the open BLG edge sketched in Fig. 1. The gray region in Fig. 2a is the continuum for bulk modes, given by the condition \[|k|<\frac{1}{\hbar v_{F}}\,\sqrt{|E|\,\left(\,|E|+t\,\right)}\;. \tag{3}\] See App. A for a derivation of this momentum restriction for bulk propagating states. The red line in Fig. 2a shows the edge mode in absence of symmetric potential \(V_{s}=0\). This mode spatially decays with the distance to the boundary (Fig. 2b) and it is characterized by valley-momentum locking; reversed valleys propagating in reversed directions in a similar way to the quantum spin Hall effect but replacing spin with valley. The edge mode becomes damped when it overlaps with the continuum of bulk BLG modes, indicated in gray colour in Fig. 2a. 
In this case the localized edge mode decays into bulk modes with the same \(E\) and \(k\), thus flying away from the edge. In Fig. 2a this overlap occurs in the region of vanishing \(E\) and \(k\) and, technically, it is not easily resolved by the numerical calculation. The modification induced by the potential shift of an additional gate with \(l_{y}=200\) nm is shown in Figs. 2cd. The discrete branch of states of Fig. 2a is now shifted upwards in energy, merging with the continuum for energies beyond a given maximum value. In addition, new branches of modes emerge at low and negative energies that are localized to the region of width \(l_{y}\) near the boundary. These modes propagate in both directions, as seen from the positive and negative slopes of the energy branches. The corresponding probability densities show substantial overlaps (Fig. 2d), suggesting the possibility of backscattering mediated by these edge modes in presence of inhomogeneities along the edge. Most remarkably, the additional side gate (and potential shift \(V_{s}\)) yields energy-inversion asymmetry in Fig. 2c, with edge-mode branches present only in the lower part of the energy diagram. We stress that the shift \(l_{y}\) in Fig. 1a is essential for the emergence of the additional branches of edge states, as well as for the energy asymmetry of the spectra.

Figure 2: a) Eigenenergies for the (open) single edge with \(V_{s}=0\). The gray region is the bulk continuum while the red line is the discrete branch of edge states. b) Spatial probability distributions for two selected wave numbers \(k\) indicated in panel a) by the corresponding labels. c,d) Similar results to a,b) but with \(V_{s}=2\) meV and \(l_{y}=200\) nm. Other parameters: \(V_{a}=20\) meV, \(s=7.5\) nm.

Figure 3: Conductance of a quantum wire in presence of a FGE across all the wire (a), and with a FGE covering only one edge (b). The sketch insets show the corresponding systems. The number of active modes in the asymptotic leads \(\mathcal{N}_{l}\) and center \(\mathcal{N}_{c}\) are also shown. Parameters: \(L_{y}=600\) nm, \(l_{y}=200\) nm, \(l_{x}=1\)\(\mu\)m, \(V_{s}^{0}=0.5\) meV.

#### ii.2.2 Quantum Wire Junctions

Having analyzed the gate-induced modifications of the open edge, we next consider the role of a FGE on an electrostatic quantum wire of width \(L_{y}\). More specifically, we calculate the total left-to-right transmission \(T\) (with conductance \(G=Te^{2}/h\)) using the complex-band-structure method for the double-junction system sketched in Figs. 3-5. Firstly, Fig. 3a shows that a FGE covering all the wire has a negligible effect on the wire conductance: transmission is perfect and the conductance simply reproduces the staircase function of the number of active modes. On the contrary, a FGE covering only one edge of the quantum wire (Fig. 3b) yields relevant modifications. \(G\) deviates from the plateau values, with conspicuous minima for energies \(E<V_{s}^{0}\). There is also a clear asymmetry with respect to energy inversion in Fig. 3b. For \(E>V_{s}^{0}\) the conductance is almost perfectly quantized, while for \(E<V_{s}^{0}\) it shows the mentioned deviations. The conductance non-quantization and asymmetry of Fig. 3b can be understood as effects of the edge modes induced by the FGE, as discussed above. Quasi-bound states, allowed by edge-mode backscattering at the interfaces, lead to conductance dips for specific (resonant) energies.
This mechanism is only present for \(E<V_{s}^{0}\), thereby explaining the asymmetry in conductance. The case of two FGE's, one on each edge of the wire, is presented in Fig. 4. We studied this configuration using the same shift \(V_{s}^{0}\) on the two FGE's (Fig. 4a), and with opposite signs of the shift \(\pm V_{s}^{0}\) on the two FGE's (Fig. 4b). The case of identical shifts is very similar to the preceding case with just a single FGE (Fig. 3b). However, with opposite signs the results change markedly: the conductance becomes symmetric with energy inversion again, and the deviations from quantization are enhanced. As a final case, we consider a topological inversion in the asymmetric potential \(V_{a}\),[29; 30; 31; 32; 33] with the red/blue electrodes being reversed on the two edges of the quantum wire (Fig. 5) and with two FGE's. The results in Fig. 5ab are very similar to those in Fig. 4ab but with a notable difference near zero energy. Namely, in the topological cases the conductance is perfectly quantized to \(4e^{2}/h\) in a small energy plateau around zero, while it vanishes in Figs. 3 and 4. This is explained by the gapless character of the topological wire, which hosts two valley-momentum-locked branches crossing zero energy. On the contrary, the nontopological (trivial) confinement in a finite-\(L_{y}\) wire is always characterized by a zero-energy gap due to the finite size. The energy-inversion symmetries of the different configurations of FGE electrodes considered in Figs. 4 and 5 are summarized in Table 1. Notice that this symmetry is fixed by the product of the signs of the FGE potentials on opposite edges of the wire, irrespective of the trivial or topological character of the wire confinement. This result illustrates how conductance measurements could be used to observe the tuning of the edge modes using FGE's.

Figure 4: Similar to Fig. 3 but with two FGE's with the same \(V_{s}^{0}\) (a), and with opposite \(V_{s}^{0}\) (b).

Figure 5: Similar to Figs. 3 and 4 but for a quantum wire with topological confinement; i.e., with reversed blue and red electrodes on the two wire edges, as shown in the inset sketches.

\begin{table} \begin{tabular}{c|c|c} \hline \hline Confinement & FGE1 x FGE2 & conductance \\ \hline Trivial & + & A \\ & - & S \\ \hline Topological & + & A \\ & - & S \\ \hline \hline \end{tabular} \end{table} Table 1: Symmetric/antisymmetric (S/A) character of the conductances in Figs. 4 and 5. The column FGE1xFGE2 indicates the sign product of the FGE potentials covering the two wire edges.

#### iii.3.3 Further discussion

All results presented above are for a single valley, \(K_{+}\). The corresponding results for the reversed valley \(K_{-}\) can be inferred by simply reverting \(k\to-k\) in Fig. 2ab for a single edge, while the results remain invariant in the cases of wires with FGE's of Figs. 3-5. Therefore, we do not find any valley polarization induced by a FGE in the quantum wire. An important underlying aspect, though beyond our present analysis, is the role of random imperfections and disorder in the device. While it is reasonable to assume that BLG itself is relatively free of such disorder effects, the additional processing required for the electrostatic electrodes could introduce random disorder. Therefore, this is a relevant aspect to consider in the future. We may expect, however, that the conductance asymmetry and the non-quantization induced by FGE's in a quantum wire would be enhanced in presence of random disorder.
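Before turning to the conclusions, we note that the bulk-continuum condition of Eq. (3) is easy to verify numerically. The following is a minimal sketch, assuming a layer-by-sublattice basis ordering of our own choosing and the parameter values quoted in Sec. II: it builds the single-valley \(4\times 4\) Bloch Hamiltonian of Eq. (2) at fixed \((k,q)\) and checks that, for ungapped BLG, the band closest to zero energy satisfies \(|E|(|E|+t)=\hbar^{2}v_{F}^{2}(k^{2}+q^{2})\).

```python
import numpy as np

HBAR_VF = 660.0  # meV nm (value quoted in Sec. II)
T = 380.0        # meV, interlayer coupling (value quoted in Sec. II)

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_bulk(k, q, vs=0.0, va=0.0, tau=+1):
    """Single-valley Bloch Hamiltonian of Eq. (2); basis = layer (x) sublattice."""
    h = HBAR_VF * (tau * k * np.kron(s0, sx) + q * np.kron(s0, sy))
    h += 0.5 * T * (np.kron(sx, sx) + np.kron(sy, sy))  # interlayer term
    h += vs * np.eye(4) + va * np.kron(sz, s0)          # gate potentials
    return h

k, q = 0.02, 0.01  # nm^-1, arbitrary test momenta
energies = np.linalg.eigvalsh(h_bulk(k, q))
e_low = np.abs(energies).min()  # magnitude of the band closest to zero
lhs, rhs = e_low * (e_low + T), HBAR_VF**2 * (k**2 + q**2)
print(lhs, rhs)  # the two numbers agree for V_s = V_a = 0
```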
## IV Conclusions

We have studied the role of an electrode creating a potential shift near an electrostatic BLG edge. We found that in presence of a displacement \(l_{y}\) between the BLG edge and the additional electrode, new modes emerge near the edge, in the region of width \(l_{y}\), that propagate in both directions. Furthermore, the valley-momentum-locked branch of the single edge is shifted in energy by the electrode and the spectrum becomes asymmetric with energy inversion. We also investigated the more practical case of a BLG quantum wire in presence of transverse FGE's. If a FGE covers the two edges of a quantum wire, the system's conductance is almost unchanged and remains nearly perfectly quantized. However, if the FGE covers only one edge, or there are two different FGE's covering the two edges, then the conductance displays strong deviations from quantization and asymmetries with respect to energy inversion. These changes are in good agreement with the modifications expected from the single-edge spectrum in presence of a displaced electrode. The energy-inversion symmetry in the quantum wire is restored with two FGE's having opposite potential signs on the two edges. In summary, our work shows that FGE's can be a practical way to manipulate the transport properties of BLG quantum wires by electrostatically tuning the electronic modes at the wire edges.

## V Methods

We solved the eigenproblem of the Hamiltonian (2) using finite-difference discretization and matrix diagonalization routines. With translational invariance, in the cases of a single edge and a quantum wire, a matrix diagonalization for each (real) wave number \(k\) yields the band structure \(E_{n}(k)\) as well as the corresponding eigenstates. An important aspect is the filtering of spurious modes emerging due to an artificial Fermion doubling of the physical eigenstates. In practice, the filtering is done by eliminating those states with large oscillations between neighbouring grid points, such that spatially averaging over a small neighbourhood strongly modifies the wave function. We found this simple technique to be quite effective and robust.[32] The transport problem for junctions of piecewise-homogeneous sections in the transport direction (\(x\)) was solved using the complex-band-structure approach discussed in Ref. [34]. Here, it is important to include complex wave numbers \(k\) in order to describe evanescent-state behavior in the proximity of the junction interfaces. The wave-function matching at the junction interfaces is transformed into a large set of linear equations whose solution determines the quantum transmissions \(T_{kk^{\prime}}\) and the corresponding Landauer conductance \(G=\frac{e^{2}}{h}\sum_{kk^{\prime}}T_{kk^{\prime}}\). We refer to Ref. [34] for more details of the complex-band-structure approach and to Refs. [19; 32] for its specific application to BLG structures.

## Appendix A Bulk continuum

For constant \(V_{s/a}\) potentials the eigenstates of the Hamiltonian (2) are plane waves, with momenta \((k,q)\) along \((x,y)\), \[\Psi\equiv\Phi_{\eta_{\sigma}\eta_{\tau}\eta_{\lambda}}\,e^{i(kx+qy)}\;, \tag{10}\] with \(\eta_{\sigma},\eta_{\tau},\eta_{\lambda}=1,2\) indicating the different spinorial components of the wave function. Assuming a given real \(k\), we can determine the corresponding \(q\)'s by transforming the eigenvalue equation as follows \[H\Psi=E\Psi\quad\Rightarrow\quad\sigma_{y}H\Psi=E\sigma_{y}\Psi\;. \tag{11}\] The purpose of the above transformation is that Eq.
## Appendix A Bulk continuum For constant \(V_{s/a}\) potentials the eigenstates of the Hamiltonian (2) are plane waves, with momenta \((k,q)\) along \((x,y)\), \[\Psi\equiv\Phi_{\eta_{\sigma}\eta_{\tau}\eta_{\lambda}}\,e^{i(kx+qy)}\;, \tag{10}\] with \(\eta_{\sigma},\eta_{\tau},\eta_{\lambda}=1,2\) indicating the different spinorial components of the wave function. Assuming a given real \(k\), we can determine the corresponding \(q\)'s by transforming the eigenvalue equation as follows \[H\Psi=E\Psi\quad\Rightarrow\quad\sigma_{y}H\Psi=E\sigma_{y}\Psi\;. \tag{11}\] The purpose of this transformation is that Eq. (11) can easily be rewritten as an eigenvalue equation for \(q\), \[\frac{1}{\hbar v_{F}}\left[E\,\sigma_{y}+i\,\hbar v_{F}\,k\,\sigma_{z}\tau_{z}+\frac{t}{2}\left(i\,\lambda_{x}\sigma_{z}-\lambda_{y}\right)-V_{s}\,\sigma_{y}-V_{a}\,\sigma_{y}\lambda_{z}\right]\Phi=q\,\Phi\;. \tag{12}\] After some algebra, the eigenvalues of Eq. (12), assuming free BLG for which \(V_{s/a}=0\), can be determined analytically by diagonalizing an algebraic matrix. The \(q\) eigenvalues read \[q=\pm\frac{1}{\hbar v_{F}}\sqrt{-\hbar^{2}v_{F}^{2}\,k^{2}+|E|\left(\,|E|\pm t\,\right)}\;. \tag{13}\] Notice that Eq. (13) already proves the existence of a critical value \(k_{c}=\sqrt{|E|\left(\,|E|+t\,\right)}/\hbar v_{F}\), as given in Eq. (3). Indeed, the propagating-mode condition requires \(q\) to be real which, from the square root in Eq. (13), requires \(|k|<k_{c}\). ###### Acknowledgements. We acknowledge support from Grant No. PDR2020-12 funded by GOIB; and from Grant No. MDM2017-0711 and Grant No. PID2020-117347GB-I00 funded by MCIN/AEI/10.13039/501100011033. H.A. was supported by the GOIB program "SOIB Recerca i Innovacio".
2309.14462
On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild"
Recent advances with self-supervised learning have allowed speech recognition systems to achieve state-of-the-art (SOTA) word error rates (WER) while requiring only a fraction of the labeled training data needed by their predecessors. Notwithstanding, while such models achieve SOTA performance in matched train/test conditions, their performance degrades substantially when tested in unseen conditions. To overcome this problem, strategies such as data augmentation and/or domain shift training have been explored. Available models, however, are still too large to be considered for edge speech applications on resource-constrained devices, thus model compression tools are needed. In this paper, we explore the effects that train/test mismatch conditions have on speech recognition accuracy based on compressed self-supervised speech models. In particular, we report on the effects that parameter quantization and model pruning have on speech recognition accuracy based on the so-called robust wav2vec 2.0 model under noisy, reverberant, and noise-plus-reverberation conditions.
Arthur Pimentel, Heitor Guimarães, Anderson R. Avila, Mehdi Rezagholizadeh, Tiago H. Falk
2023-09-25T18:54:16Z
http://arxiv.org/abs/2309.14462v1
On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild" ###### Abstract Recent advances with self-supervised learning have allowed speech recognition systems to achieve state-of-the-art (SOTA) word error rates (WER) while requiring only a fraction of the labeled training data needed by their predecessors. Notwithstanding, while such models achieve SOTA performance in matched train/test conditions, their performance degrades substantially when tested in unseen conditions. To overcome this problem, strategies such as data augmentation and/or domain shift training have been explored. Available models, however, are still too large to be considered for edge speech applications on resource-constrained devices, thus model compression tools are needed. In this paper, we explore the effects that train/test mismatch conditions have on speech recognition accuracy based on compressed self-supervised speech models. In particular, we report on the effects that parameter quantization and model pruning have on speech recognition accuracy based on the so-called robust wav2vec 2.0 model under noisy, reverberant, and noise-plus-reverberation conditions. ## 1 Introduction Large deep learning models have recently achieved great success on speech recognition tasks [1; 2]. These models, however, use a considerable amount of computational resources, which can be unfeasible for many edge applications. Edge applications focus on bringing computing as close to the source of data as possible in order to reduce latency and bandwidth use. This can be particularly important for speech recognition applications, where private and/or sensitive speaker data may need to be sent over the network to be processed remotely on large data processing clusters hosting very large and complex models. Bringing such large models to the edge can be challenging, as some edge devices may be resource-constrained with limited storage and processing capacity. Moreover, signals in edge applications are corrupted by several environmental factors, such as ambient noise and/or room reverberation, which are known to be detrimental to speech-based applications. As such, a more detailed study on the impact of model compression and inference efficiency for large speech recognition models is needed. We aim to fill this gap. More specifically, in this study our overarching goal is two-fold: (1) understand how well state-of-the-art (SOTA) speech recognition models behave under different model compression schemes, and (2) assess how well the compressed models' accuracy holds up under varying noise and reverberation conditions. We hope that the results from this study will shed light on the performance gaps that may exist before "edge speech recognition" is implemented in practice. Experiments with the latest (robust) wav2vec 2.0 model are conducted under two different compression schemes (quantization and model pruning), five noisy conditions (SNR = 0, 5, 10, 15, and 20 dB), and two reverberation conditions (small room and medium room). ## 2 Methods and Materials ### Speech Recognition Models The _wav2vec 2.0_[3] model learns basic speech units used to tackle a self-supervised task. The architecture consists of a multi-layer convolutional feature encoder which takes as input raw audio and outputs latent speech representations at each time step, which are then fed to a context network.
The encoder consists of several blocks containing a temporal convolution followed by layer normalization [4] and a GELU [5] activation function, while the context network is a Transformer creating contextualised representations from the entire sequence. The _robust wav2vec 2.0_[6] model is a recent variant developed to provide improved robustness against domain shifts (e.g., due to noise, varying datasets, or other factors) at test time. Using the same architecture as its predecessor, robust wav2vec utilizes target domain data during pre-training, thus leading to significant performance improvements in out-of-domain ASR. In its original proposal, Hsu et al. [6] evaluated model performance in different out-of-domain conditions. However, varying noise levels, an important condition in edge applications, were not explored comprehensively. Here, we aim to fill this gap, as well as gauge the impact that model compression may have. In our experiments, pre-trained and fine-tuned models from the _Hugging Face_ platform were used. More specifically, the wav2vec 2.0 model is pre-trained and fine-tuned on 960 hours of the Librispeech dataset1, while the robust wav2vec 2.0 is pre-trained on the Libri-Light, CommonVoice, Switchboard and Fisher datasets and fine-tuned on 960 hours of the Librispeech dataset2. Footnote 1: https://huggingface.co/facebook/wav2vec2-large-960h Footnote 2: https://huggingface.co/facebook/wav2vec2-large-robust-ft-libri-960h ### Model compression techniques Two classic model compression techniques are explored to gauge the potential of edge speech recognition applications. The first is quantization, where the number of bits required to store each weight is reduced, thus substantially shrinking the model size, saving memory and accelerating computation. The method can be further extended to represent gradients and activations in quantized form [7]. Here, we explore the impact of 8-bit quantization on all linear layers of the speech models and compare against the original 32-bit version (i.e., a compression ratio of 4). Next, model pruning is explored, where redundant parameters can be removed from the network with minimal effect on model accuracy [8]. Global unstructured pruning based on the lowest L1-norm was used at five different pruning rates, from 10-30% at 5% intervals. ### Additive noise and reverberation As we are interested in understanding the impact of compressed speech models in edge conditions, we use the noise signals present in the Deep Noise Suppression Challenge 4 (DNS4) dataset [9] to corrupt the test speech signals of the Librispeech dataset. This noise dataset consists of 180 hours of noise, present across 62,000 utterances, covering 150 different non-speech-like noise types. These files are added to the test samples at five varying SNR levels, ranging from 0 dB to 20 dB at 5 dB intervals. Reverberation, in turn, is simulated by convolving the clean signals with a uniformly sampled room impulse response (RIR). The openSLR28 dataset with 248 real RIRs and the openSLR26 with 60,000 synthetic RIRs are used [10] to simulate small and medium-sized rooms. Lastly, reverberation and noise are simulated by combining the two previous steps. In all cases, if necessary, waveforms are resampled to 16 kHz.
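To make the two compression schemes concrete, the sketch below shows how they could be applied with standard PyTorch utilities. It is a minimal illustration restricted to linear layers (the study also prunes convolutional layers), and the noise-mixing helper at the end is our own simple implementation of SNR-controlled corruption, not the authors' exact pipeline:

```python
import torch
import torch.nn.utils.prune as prune
from transformers import Wav2Vec2ForCTC

# Fine-tuned robust wav2vec 2.0 checkpoint from the Hugging Face hub.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-robust-ft-libri-960h"
)

# 8-bit post-training dynamic quantization of all linear layers
# (FP32 -> Int8, roughly a 4-fold reduction in model size).
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Global unstructured pruning: zero out the 30% of weights with the
# lowest L1 norm, pooled across all linear layers.
parameters_to_prune = [
    (module, "weight")
    for module in model.modules()
    if isinstance(module, torch.nn.Linear)
]
prune.global_unstructured(
    parameters_to_prune, pruning_method=prune.L1Unstructured, amount=0.30
)

def add_noise_at_snr(speech: torch.Tensor, noise: torch.Tensor,
                     snr_db: float) -> torch.Tensor:
    """Mix a noise clip into a speech clip at a target SNR in dB
    (assumes the noise clip is at least as long as the speech clip)."""
    noise = noise[: speech.numel()]
    speech_power = speech.pow(2).mean()
    noise_power = noise.pow(2).mean()
    scale = torch.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```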
## 3 Results and Discussion First, we explore the impact of quantization on both original and robust versions of wav2vec 2.0 in clean matched conditions. Both models achieved a WER of 3.2% with weights stored in 32-bit floating point precision (total model size of 1262 MB). After quantization, WER increased to 3.3%, thus a slight increase for a 4-fold model compression (total model size 354.5 MB). Next, we explore the impact of pruning the weights of the convolutional and linear layers of the models. It is observed that the robust version of the speech model showed only a slight increase in WER to 3.3% at a pruning rate of 0.3, while its original version increased to 5.7%. Next, we explore the impact of varying noise levels and reverberation. Figures 1(a) and 1(b) show WER plots of original and quantized models, as a function of SNR and room size, respectively. As can be seen, 8-bit quantization showed minimal effect on model performance for the noisy case and a small impact when reverberation was present (e.g., WER for robust wav2vec went from 0.049 to 0.052 for the medium room size condition). Next, we perform an in-depth analysis of the WER achieved for different noise types. Table 1 shows the mean WER for audio files corrupted with six common noise types, namely domestic sounds (e.g., vacuuming), human voice, music, vehicle noise, wind noise, and other miscellaneous sources. As can be seen, human voice, domestic sounds, and vehicle noise showed the greatest performance deterioration. These are conditions in which edge applications would typically be seen, such as smart speakers and in-vehicle speech recognition. Overall, the robust wav2vec 2.0 model outperformed the original wav2vec 2.0 across all noise types, with the closest match achieved with wind noise. Next, we explore the robustness of the models to pruning. Figures 2(a) and 2(b) show WER plots as a function of pruning rate and SNR or pruning rate and room size, respectively. As can be seen, pruning of the two models affected WERs, especially for SNRs lower than 15 dB, with the original wav2vec 2.0 model showing the greatest deterioration, particularly with pruning rates above 20%. Reverberation, in turn, showed minimal effect on the pruned robust version, but had a substantial impact on wav2vec 2.0, especially for medium-sized rooms and pruning rates greater than 15%. Lastly, we evaluate the robustness of the two models to pruning with combined additive noise and reverberation present in the test signals. Figure 3 shows the WER as a function of pruning rate and room size, where noise levels have been averaged across the 0-20 dB range. Again, the robust model proved to be insensitive to increased pruning rates, but sensitive to the degradations themselves. For example, at an SNR of 5 dB, the robust model achieved a WER of 10.1% at a pruning rate of 0.3. This error increased to 25.5% at an SNR of 0 dB. Notwithstanding, this is substantially better than what was shown with the original wav2vec 2.0 model that, under the same compression and noise conditions, achieved a WER of 62.2%. \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Noise Type**} & \multicolumn{2}{c}{**Mean WER**} \\ \cline{2-3} & **wav2vec 2.0** & **Robust wav2vec 2.0** \\ \hline Domestic sounds & 13.6 & 10.6 \\ Human voice & 12.0 & 9.3 \\ Miscellaneous sources & 6.3 & 3.3 \\ Music & 11.0 & 7.6 \\ Vehicle & 11.7 & 8.8 \\ Wind & 5.8 & 5.4 \\ \hline \hline \end{tabular} \end{table} Table 1: Mean WER per noise type.
Figure 1: WER for original (FP32) and quantized (Int8) models, with (a) noise and (b) reverberation. Overall, the experiments described herein suggest that existing quantization and pruning compression schemes are well suited for edge speech recognition applications when applied in somewhat clean conditions. In such scenarios, compression rates as high as 4 could be achieved with minimal impact on WER. On the other hand, if test applications involve noisy and/or reverberant conditions, improved speech representations are still needed, beyond what can be achieved with the so-called robust wav2vec model. Environment-aware knowledge distillation may be one possible solution. ## 4 Conclusions In this work, we evaluate the robustness of two SOTA ASR speech models, namely wav2vec 2.0 and robust wav2vec 2.0, to unseen noisy and reverberant conditions when the models are compressed via quantization and pruning schemes. In particular, 8-bit quantization and L1-norm based global unstructured pruning were explored. It was found that while quantization and pruning have minimal impact on WER in clean conditions, noise and reverberation cause a significant WER degradation, even with models built inherently to be robust to such conditions. Future work should explore more robust compression and self-supervised representations before edge speech recognition applications can be deployed "in the wild". Figure 2: WER as a function of the pruning rate and (a) additive noise or (b) reverberation levels. Figure 3: WER as a function of room size for signals with added noise between 0 and 20 dB.
2301.13507
An Analysis of Classification Approaches for Hit Song Prediction using Engineered Metadata Features with Lyrics and Audio Features
Hit song prediction, one of the emerging fields in music information retrieval (MIR), remains a considerable challenge. Being able to understand what makes a given song a hit is clearly beneficial to the whole music industry. Previous approaches to hit song prediction have focused on using audio features of a record. This study aims to improve the prediction result of the top 10 hits among Billboard Hot 100 songs using additional metadata, including song audio features provided by Spotify, song lyrics, and novel metadata-based features (title topic, popularity continuity and genre class). Five machine learning approaches are applied, including: k-nearest neighbours, Naive Bayes, Random Forest, Logistic Regression and Multilayer Perceptron. Our results show that Random Forest (RF) and Logistic Regression (LR) with all features (including novel features, song audio features and lyrics features) outperform other models, achieving 89.1% and 87.2% accuracy, and 0.91 and 0.93 AUC, respectively. Our findings also demonstrate the utility of our novel music metadata features, which contributed most to the models' discriminative performance.
Mengyisong Zhao, Morgan Harvey, David Cameron, Frank Hopfgartner, Valerie J. Gillet
2023-01-31T09:48:53Z
http://arxiv.org/abs/2301.13507v1
An Analysis of Classification Approaches for Hit Song Prediction using Engineered Metadata Features with Lyrics and Audio Features ###### Abstract Hit song prediction, one of the emerging fields in music information retrieval (MIR), remains a considerable challenge. Being able to understand what makes a given song a hit is clearly beneficial to the whole music industry. Previous approaches to hit song prediction have focused on using audio features of a record. This study aims to improve the prediction result of the top 10 hits among Billboard Hot 100 songs using additional metadata, including song audio features provided by Spotify, song lyrics, and novel metadata-based features (title topic, popularity continuity and genre class). Five machine learning approaches are applied, including: k-nearest neighbours, Naive Bayes, Random Forest, Logistic Regression and Multilayer Perceptron. Our results show that Random Forest (RF) and Logistic Regression (LR) with all features (including novel features, song audio features and lyrics features) outperform other models, achieving 89.1% and 87.2% accuracy, and 0.91 and 0.93 AUC, respectively. Our findings also demonstrate the utility of our novel music metadata features, which contributed most to the models' discriminative performance. Keywords: Hit song prediction, Music Information Retrieval, Machine learning, Text processing. ## 1 Introduction Music labels spend more than $4.5 billion every year discovering new talented artists and producing popular songs [1]. Precipitated by the growing importance of online digital music platforms and recent advancements in machine learning and big data technologies, a new research area called hit song science has attracted increasing attention [2]. A successful hit song prediction approach could bring considerable benefits to many music lifecycle stakeholders. Early hit song prediction studies illustrate the complexity of this problem, delivering only weak classification results [3, 4, 5, 6]. In recent years, more advanced approaches have been able to accurately predict hits and non-hits using audio features [7, 8, 9, 10, 11, 12, 13]; however, many other potentially useful sources of information about the songs are also available. In this study, we employ 12 Spotify audio features (energy, liveness, tempo, speechiness, acousticness, time_signature, key, duration_ms, loudness, valence, mode and danceability), drawn directly from Spotify, together with _novel features_ based on Billboard music metadata (popularity continuity, genre class and title topic), as well as topics extracted from the songs' lyrics, to identify Top 10 hits among Top 100 hits. To our knowledge, this work is the first attempt to improve hit song prediction by extracting features from the topic of song titles and by using a song's prior popularity information. We examine the effectiveness of these novel features together with song audio and lyrics features for hit song prediction using a variety of machine learning approaches, including k-nearest neighbours (kNN), Naive Bayes (NB), Random Forest (RF), Logistic Regression (LR) and Multilayer Perceptron (MLP). Our findings demonstrate the utility of the new features and provide state-of-the-art prediction performance, as well as providing promising avenues for future work in this area. ## 2 Related Work Hit song prediction (HSP) has been investigated frequently in recent decades.
Much seminal work failed to accurately predict hit songs, with some work even suggesting that popularity was not predictable [4, 5]. An early approach by Dhanaraj and Logan [3] achieved promising results by using a SVM model to classify top 1 songs through acoustic and lyrics data. However, they provided only scant details about their data gathering, feature engineering, model training and parameter optimization procedure, and found textual features to be more predictive than audio analysis. Salganik et al. [4], and Pachet and Roy [5] attempted to reproduce Dhanaraj and Logan's work but failed to achieve a similar level of accuracy. Various algorithms have been applied to tackle this task, among them: Logistic Regression (LR), Support Vector Machine (SVM) and Neural Networks (NN) are commonly used [3, 6, 8, 11]. Ni et al. [7] gained promising results in predicting UK Top 5 hits on the Top 40 single song charts, but again little implementation detail was provided. Fan and Casey [11] used LR and SVM models to predict British and Chinese hit songs but found that audio features worked better for predicting Chinese hits than British ones, and that textual features worked best overall. Herremans et al. [6] focussed particularly on dance songs and classified hits using five machine learning models. Their research affirmed the importance of audio features; however, they achieved relatively poor accuracy results, perhaps due to their use of a large number of features without performing any feature selection. Georgieva et al. [8] compared six machine-learning algorithms when conducting Billboard hit song prediction; the most successful algorithms were LR and a NN with a single hidden layer. Their work also demonstrated the utility of Spotify's audio features for this task. Nasreddin [12] did similar research but identified XGBoost as the top performing classifier; in their study the SVM model performed the worst. As they only used the raw data without any feature selection, they only achieved accuracy results similar to those of Herremans et al. [6]. Recently, Zangerle et al. [15] adopted deep neural networks and treated HSP as a regression task, and their experimental results show that the wide and deep neural network-based approach performed best, achieving 72.04% accuracy. However, the common problem with deep neural networks is that their results are hard to interpret. Essa et al. [16] tried to solve the HSP task by using both classification and regression models. They considered audio features alone and, through adopting seven machine learning models, they achieved results suggesting that both machine learning approaches (classification and regression) can be used for HSP. Although previous studies have made a large contribution to this topic, it is still unclear which features can be used to successfully classify hit songs when including audio features, music metadata and song lyrics, and in what combination. Audio features have shown promise, but only raw terms have been used to construct features to date [5, 6, 7]. Textual features have rarely been adopted in hit song prediction tasks and, although Singhi and Brown [17] did attempt to extract 31 song lyrics features and build an SVM model to predict hit songs, the performance achieved was not inspiring. ## 3 Data and Methodology ### Data collection and preprocessing To investigate hit song prediction, we obtained Billboard hot 100 songs data from the open-source platform _data.world_ named "_Billboard Hot-100 Songs 2000-2018 w/Spotify Data+Lyric_"3.
The dataset includes all songs in the Billboard hot 100 weekly charts from 2007 to 2017, as well as audio features, metadata and lyrics of each song provided by Spotify. The raw dataset includes 33 attributes in total. We first remove the irrelevant features (e.g., spotify_link, video_link, analysis_url). Then, we define "hits" in this context to be songs whose highest position in the Billboard Hot 100 list was at rank 10 or above, to produce a binary label: "hit" (1) for songs that at some point reached the Top 10, and "non-hit" (0) for songs that never did. The features used in this study include those engineered based on metadata (e.g., weeks, song title, music genre), 12 Spotify audio features, as well as lyrics of each song. 273 songs had missing audio features data and/or lyrics, and were subsequently removed as it would not be possible to extrapolate or estimate such features. This left 3581 unique songs in the final data set: 507 hits and 3074 non-hits. ### Feature Engineering We engineered several additional features to augment the existing metadata features from the original Billboard data and the Spotify audio features. _Popularity continuity_ was created to represent the sum of each song's popular duration (i.e., how many weeks it had already been listed in the hot 100 chart prior to the week of interest). Songs already present in the chart for more than 50 weeks were assigned 3; those present for between 20 and 50 were assigned 2; those between 10 and 20 were assigned 1; otherwise, a song was assigned 0. Unlike classical music, popular music has relatively rapid iterations [19]. The majority of songs only remain in the chart for a short period, typically less than 20 weeks. Therefore, we assign a number based on three duration splits, using only the weekly duration data. The _song title topic_ feature was created based on the song title. We removed symbols, punctuation, short terms (i.e., fewer than four characters) and stopwords from the data, then, inspired by [3], used a bag-of-words representation with Latent Dirichlet Allocation (LDA) to extract topics from the song titles. In total, ten topics were extracted, and each song was assigned to the topic number with the highest probability for that song in \(\theta\). The numerical variable named _genre class_ was created to replace the existing string variable _broad_genre_, in which each genre was assigned a numerical value: 1 to 6 representing country, electronic dance music (EDM), pop, R&B, rock and rap music, respectively. We treat song lyrics similarly to song titles, the only difference being the number of topics: 20 topics were extracted from the lyrics. This is because lyrics are far longer than titles, thus providing sufficient data to extract a larger number of more meaningful topics. Each song was assigned to the topic number with the highest probability for that song in \(\theta\). ### Training Environment Min-max normalization was applied to accelerate algorithm convergence [16]. After preprocessing and feature engineering, a total of 16 features were used for model building. We treat each song as an individual sample; temporal factors were not considered in our experiment. The data were split into training and testing sets using a ratio of 80:20 and, due to the relatively small size of the overall data set, 5-fold cross-validation was applied instead of an individual validation set.
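For concreteness, the following minimal sketch illustrates two of the preprocessing steps just described, the title-topic extraction (Sec. 3.2) and the scaling and 80:20 split (Sec. 3.3), assuming scikit-learn and using hypothetical toy data in place of the actual Billboard set:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

# --- Title-topic feature: bag-of-words + 10-topic LDA ---
titles = ["Love Story", "Shake It Off", "Lose Yourself"]  # hypothetical titles
vectorizer = CountVectorizer(stop_words="english",
                             token_pattern=r"(?u)\b\w{4,}\b")  # drop terms < 4 chars
counts = vectorizer.fit_transform(titles)
lda = LatentDirichletAllocation(n_components=10, random_state=0)
theta = lda.fit_transform(counts)           # document-topic distribution (theta)
title_topic = theta.argmax(axis=1)          # most probable topic per song

# --- Training environment: min-max scaling and an 80:20 split ---
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))              # toy stand-in for the 16 features
y = rng.integers(0, 2, size=100)            # toy hit / non-hit labels
X_scaled = MinMaxScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.2, random_state=0
)
```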
Due to the highly imbalanced classes (i.e., most songs are not top-10 hits), the Synthetic Minority Over-sampling Technique (SMOTE) was adopted, inspired by Chawla et al. [18], which can effectively increase the accuracy of minority-class (hit song) prediction. In this paper, 5 nearest neighbours were used to over-sample the minority class (hit songs), resulting in a final training set of 4918 songs (2459 hits and 2459 non-hits). Forward feature selection was carried out. All models were trained and tested using KNIME 4.4.0. ### Model Setup and Optimisation This study examined five commonly-used machine learning approaches from the prior literature, and all model parameter tuning was performed using 5-fold cross-validation. The model hyperparameters and their optimal values are shown in Table 1. _k-Nearest Neighbour (kNN)_. We tested values of k from 1 to 20 to find the most appropriate neighbourhood size for predicting hit songs; k = 1 achieved the best accuracy. _Naive Bayes (NB)_. We tested the default probability from 0.001 to 1 in steps of 0.01; the best setting was a default probability of 0.031. _Random Forest (RF)_. When training the RF model, the different split criteria (information gain, information gain ratio, and Gini index) provide varied performance. We tested the number of models (trees) for each criterion from 50 to 1000 in steps of 50, and the Gini index with 600 models achieved the best performance. _Logistic Regression (LR)_. We tested four ways to solve the equation: iteratively reweighted least squares with Gauss or Laplace priors, and stochastic average gradient with Gauss or Laplace priors. We found iteratively reweighted least squares with Laplace regularization to be the most effective, with a Laplace parameter of 3 giving the best performance. _Neural Network (NN)_. A multilayer perceptron (MLP) consisting of an input layer, hidden layers, and an output layer was used in this study. We tested the maximum number of iterations from 500 to 5000 in steps of 500, and the number of hidden layers and of hidden neurons per layer from 1 to 25 in steps of 3. The best parameter tuning results were 4500, 4, and 22, respectively. ## 4 Findings, Results and Limitations Our results include an analysis of accuracy, as well as AUC and the number of features used as a measure of parsimony (see Table 2 and Table 3). We compare models trained using all features (including our novel engineered ones, audio features and lyrics features together) against three "baseline" models: audio features alone; audio features and original metadata features; and novel features and audio features. \begin{table} \begin{tabular}{|l|l|l|} \hline Classifier & Hyperparameter & Value \\ \hline kNN & K value & 1 \\ NB & Default probability & 0.031 \\ RF & Gini index: number of models & 600 \\ LR & Laplace & 3 \\ NN & Maximum number of iterations & 4500 \\ & Number of hidden layers & 4 \\ & Number of hidden neurons per layer & 22 \\ \hline \end{tabular} \end{table} Table 1: Optimal hyperparameter values for all models. It is notable that Random Forest (Accuracy=89.1%, AUC=0.91) and Logistic Regression (Accuracy=87.2%, AUC=0.93) with all features performed best according to both metrics. Logistic Regression with Laplace regularisation achieves the best AUC score while only using 4 features. According to Han et al.
[20], the reason L1 regularisation is more appropriate for this task could be that it is capable of reducing the coefficients of some features to zero, generating a sparse solution. Random Forest achieved the best accuracy result, but required seven features to train the model, which leads to longer training times and poorer explainability. MLP shows average performance in this task; this model requires the largest number of features according to Table 2, and the longest training time to achieve its best result, perhaps because the volume of the data available is insufficient to train the network well. Naive Bayes performs worst on accuracy, but better on AUC score, which means this model is good at identifying hits but weak at identifying non-hits. \begin{table} \begin{tabular}{|l|l|} \hline Classifier & Accepted Feature Combination \\ \hline kNN & _popularity continuity_, _song title topic_, _genre class_, energy, liveness, key, **lyrics topic** \\ NB & _popularity continuity_, _genre class_, key, loudness \\ RF & _popularity continuity_, _genre class_, _song title topic_, key, valence, energy, **lyrics topic** \\ LR & _popularity continuity_, _genre class_, **lyrics topic**, danceability \\ NN & _popularity continuity_, _genre class_, key, _song title topic_, **lyrics topic**, acousticness, liveness, tempo, danceability \\ \hline \end{tabular} \({}^{1}\) Novel features are marked in italics. \({}^{2}\) The lyrics feature is marked in bold. \end{table} Table 2: Features selected for each model. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline & 5-fold CV Accuracy & 5-fold CV AUC & Model Test Accuracy & Model Test AUC \\ \hline KNN (Audio) & 83.98\% & 0.847 & 79.92\% & 0.530 \\ KNN (Metadata+audio) & 90.85\% & 0.917 & 82.08\% & 0.748 \\ KNN (NFE\({}^{1}\)+audio) & 93.79\% & 0.930 & 86.05\% & **0.775\({}^{2}\)** \\ KNN (NFE+audio+lyrics) & **94.31\%** & **0.937** & **86.38\%** & 0.745 \\ NB (Audio) & 62.71\% & 0.697 & 42.82\% & 0.609 \\ NB (Metadata+audio) & 82.78\% & 0.915 & 71.13\% & 0.899 \\ NB (NFE+audio) & **86.26\%** & 0.924 & 74.76\% & **0.922** \\ NB (NFE+audio+lyrics) & 86.23\% & **0.931** & **78.52\%** & 0.900 \\ RF (Audio) & 79.22\% & 0.876 & 71.13\% & 0.629 \\ RF (Metadata+audio) & 91.62\% & 0.977 & 74.76\% & 0.869 \\ RF (NFE+audio) & 93.84\% & 0.980 & 87.59\% & 0.908 \\ RF (NFE+audio+lyrics) & **95.1\%** & **0.989** & **89.12\%** & **0.912** \\ LR (Audio) & 61.26\% & 0.649 & 57.88\% & 0.603 \\ LR (Metadata+audio) & 84.83\% & 0.917 & 83.54\% & 0.927 \\ LR (NFE+audio) & 86.15\% & 0.928 & 86.47\% & 0.923 \\ LR (NFE+audio+lyrics) & **87.07\%** & **0.933** & **87.17\%** & **0.927** \\ MLP (Audio) & 68.20\% & 0.756 & 63.60\% & 0.563 \\ MLP (Metadata+audio) & 87.48\% & 0.923 & 76.85\% & **0.847** \\ MLP (NFE+audio) & 88.0\% & 0.929 & 79.36\% & 0.734 \\ MLP (NFE+audio+lyrics) & **90.04\%** & **0.931** & **84.66\%** & 0.808 \\ \hline \end{tabular} \({}^{1}\) NFE is an abbreviation for novel feature engineering. \({}^{2}\) The best performance is marked in **bold**. \end{table} Table 3: Summary and comparison of all model training and test results. Compared with the baselines, all models tested with our novel metadata features showed significant accuracy improvements (see Table 3), demonstrating that our novel metadata features contribute materially to the HSP task.
When adding the _song lyrics topic_ feature, the accuracy scores of all models increased slightly, while the AUC scores of kNN and NB decreased by 0.030 and 0.022, respectively, probably because the lyrics topic increases the complexity of the feature space, making it harder for both algorithms to separate the patterns of hits and non-hits. The novel variables appear frequently in the list of automatically selected features, as shown in Table 2, demonstrating their discriminative power. The utility of _popularity continuity_ indicates that the longer a song in a particular genre can maintain a position in the charts, the more likely it is to become a hit song. Certain topically-coherent sets of terms, such as _love, girls, life_, and _hearts_, are more likely to appear in hits than non-hits, as captured in the _song title topic_ and _lyrics topic_ features. Based on the ablation studies, some of the Spotify audio features such as _key, liveness, energy_, and _danceability_ are also important when classifying hit songs, but less consistently so than our _novel features_ and the _song lyrics_ feature. The contributing features vary between models. Compared with the _song title topic_, the _song lyrics_ feature contributes more when the two are used together to identify hit songs. The result of this study supports the findings of [6, 8, 13, 14] that music metadata, audio features and lyrics can be used to classify hit songs through machine learning approaches. Adding all features together achieved the best performance across all models. Moreover, we have been able to outperform the baseline results of [8, 9, 10], as their work achieved accuracy scores of around 60% to 87%, compared to our work, which gave accuracy scores of around 79% to 89%. As future work, we intend to further enrich our models by developing more features based on, for example, music reviews and social tags. More complex and granular genre classifications, such as different types of music from various cultures, like Latin music or dance songs from India, could be used to extend our model. Furthermore, a larger dataset covering a longer period will be examined. As the hit songs identified in our study can be defined as extremely popular songs (top 10 among 100), the model's generalisation ability may need further testing, particularly by adding songs that never entered the Billboard Top 100. The substance of a hit song may change over time, and we will consider more complex models that include temporal aspects to model changes in genres and topical popularity over time. Other audio-based features, such as Mel-frequency cepstral coefficients (MFCCs), could also be considered and compared with the Spotify audio features.
2302.14738
H$_2$O MegaMaser emission in NGC 4258 indicative of a periodic disc instability
H$_2$O MegaMaser emission may arise from thin gas discs surrounding the massive nuclei of galaxies such as NGC\,4258, but the physical conditions responsible for the amplified emission are unclear. A detailed view of these regions is possible using the very high angular resolution afforded by space very long baseline interferometry (SVLBI). Here we report SVLBI experiments conducted using the orbiting RadioAstron Observatory that have resulted in detections of the H$_2$O 22 GHz emission in NGC\,4258, with Earth-space baselines of 1.3, 9.5 and 19.5 Earth diameters. Observations at the highest angular resolution of 11 and 23 $\mu$as show distinct and regularly spaced regions within the rotating disc, at an orbital radius of about 0.126 pc. These observations at three subsequent epochs also indicate a time evolution of the emission features, with a sudden rise in amplitude followed by a slow decay. The formation of the emission regions, their regular spacing and their time-dependent behaviour appear consistent with the occurrence of a periodic magneto-rotational instability in the disc. This type of shear-driven instability within the differentially rotating disc has been suggested to be the mechanism governing the radial momentum transfer and viscosity within a mass-accreting disc. The connection of the H$_2$O MegaMaser activity with the magneto-rotational instability activity would make it an indicator of the mass-accretion rate in the nuclear disc of the host galaxy.
Willem A. Baan, Tao AN, Christian Henkel, Hiroshi Imai, Vladimir Kostenko, Andrej Sobolev
2023-02-28T16:41:12Z
http://arxiv.org/abs/2302.14738v1
# H\({}_{2}\)O MegaMaser emission in NGC 4258 indicative of a periodic disc instability ###### Abstract **H\({}_{2}\)O MegaMaser emission may arise from thin gas discs surrounding the massive nuclei of galaxies such as NGC 4258, but the physical conditions responsible for the amplified emission are unclear. A detailed view of these regions is possible using the very high angular resolution afforded by space very long baseline interferometry (SVLBI). Here we report SVLBI experiments conducted using the orbiting RadioAstron Observatory that have resulted in detections of the H\({}_{2}\)O 22 GHz emission in NGC 4258, with Earth-space baselines of 1.3, 9.5 and 19.5 Earth diameters. Observations at the highest angular resolution of 11 and 23 \(\mu\)as show distinct and regularly spaced regions within the rotating disc, at an orbital radius of about 0.126 pc. These observations at three subsequent epochs also indicate a time evolution of the emission features, with a sudden rise in amplitude followed by a slow decay. The formation of the emission regions, their regular spacing and their time-dependent behaviour appear consistent with the occurrence of a periodic magneto-rotational instability in the disc. This type of shear-driven instability within the differentially rotating disc has been suggested to be the mechanism governing the radial momentum transfer and viscosity within a mass-accreting disc. The connection of the H\({}_{2}\)O MegaMaser activity with the magneto-rotational instability activity would make it an indicator of the mass-accretion rate in the nuclear disc of the host galaxy.** _Published in Nature Astronomy, Volume 6, p. 976-983, June 2022_ \({}^{1}\)Xinjiang Astronomical Observatory, Chinese Academy of Sciences, 150 Science 1-Street, Urumqi, Xinjiang 830011, China \({}^{2}\)Netherlands Institute for Radio Astronomy ASTRON, Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands, email: [email protected] \({}^{3}\)Shanghai Astronomical Observatory, Chinese Academy of Science, Nandan Road 80, Shanghai 200030, China \({}^{4}\)Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany \({}^{5}\)Astron. Dept., King Abdulaziz University, P.O. Box 80203, Jeddah, Saudi Arabia \({}^{6}\)Amanogawa Galaxy Astronomy Research Center, Graduate School of Science and Engineering, Kagoshima University, 1-21-35 Korimoto, Kagoshima 890-0065, Japan \({}^{7}\)AstroSpace Centre, Lebedev Institute, Moscow, GSP-7, 117997, Russia \({}^{8}\)Astronomical Observatory, Ural Federal University, Lenin Ave. 51, Ekaterinburg 620083, Russia
**1. Introduction** The Space Radio Telescope (SRT) of the RadioAstron Observatory (RAO) is an observing station onboard the Spectr-R spacecraft on a highly elliptical orbit with an apogee of 350 000 km[1]. Operating this SRT in combination with ground-based radio telescopes allows for Space Very Long Baseline Interferometry (SVLBI) observations with baselines up to thirty times as long as those possible with Earth-based telescope arrays and a spatial resolution at the target sources 30 times higher. Such high-resolution observations enable the study of the structural details of the H\({}_{2}\)O maser emission in the nearby (7.6 Megaparsec) galaxy NGC 4258 (also named Messier 106). Many extragalactic H\({}_{2}\)O MegaMaser (MM) hosting galaxies show line emission associated with extended and shocked regions in their interstellar medium, possibly as a result of mergers or interactions of jet outflows with the ambient medium. However, the H\({}_{2}\)O emission in the MM NGC 4258 originates within the fast rotating disc surrounding the active galactic nucleus (AGN) and it has become a prototype for about one third of those among the 160 known MM sources[2, 3, 4, 5]. The behaviour of the maser emission regions in NGC 4258 has been extensively studied over many years[6, 7, 8, 9, 10]. The maser regions have been well modelled with an edge-on and warped thin Keplerian disc resulting in the most accurate estimate of the mass of the central black hole of 4.00 \(\pm\)0.09 \(\times\) 10\({}^{7}\) M\({}_{\odot}\), and a very accurate distance to the galaxy of 7.6 \(\pm\)0.17 \(\pm\)0.15 Mpc, with a systemic line-of-sight velocity of 474.3\(\pm\)0.5 km s\({}^{-1}\) in the local standard of rest (LSR)[9, 11, 12, 13]. The H\({}_{2}\)O maser emission originates at three locations in the disc of NGC 4258, at the systemic velocity of the galaxy, with orbiting molecular regions passing from west to east in front of the nuclear region and at the blue- and redshifted edges of the disc. The emission at the systemic velocity is most prominent because there the warped disc lies along the line of sight with the north-south jet extension from the AGN in NGC 4258[14, 11, 15], which facilitates maser amplification of a background radio continuum by excited H\({}_{2}\)O molecules in the foreground section of the disc[16]. Maser features close to the systemic velocity continuously drift upwards in velocity as they transit from the approaching side to the receding side of the disc; i.e. from below to above the systemic velocity. The drift rate at which this happens depends on the orbital velocity and the distance from the central nucleus.
The emission in the approaching western and the receding eastern edges of the disc is much weaker, likely because of the absence of substantial background continuum. However, these edge regions also define the approximate radial extent within the disc, from 0.14 parsec (pc) to 0.29 pc, where the masering activity can occur[10, 13]. **2. Results and Discussion** The results of three SVLBI observations of the systemic 22 GHz H\({}_{2}\)O emission of NGC 4258 are presented in the form of total power (single dish) and cross-correlated spectra. Imaging of the emission regions is not (yet) possible with a limited number of antennae and such extremely long baselines. The Key Science Project experiment RAKS07AT on Earth-space baselines of 9.1 - 9.8 Earth Diameters (ED) was conducted on 5 February 2014 (2014.099), giving a resolution or beam size of 23 \(\mu\)as or 175 Astronomical Units (AU) at the target (Figure 1). The General Observing Time (GOT) experiment RAGS11AF with Earth-space baselines of 1.4 - 1.9 ED was conducted on 18 December 2014 (2014.964), giving a beam size of 113 \(\mu\)as or 861 AU at the target (Figure 2). The third experiment, the GOT experiment RAGS18H on an Earth-space baseline of 19.5 ED, was conducted on 17 March 2016 (2016.210), giving a beam size of 11 \(\mu\)as or 84 AU at the target (Figure 3). All experiments have been correlated with the ASC Software Correlator[17]. Although these experiments were done roughly one year apart, the auto-correlation spectra at these epochs show a similar broad-structured H\({}_{2}\)O emission line together with a number of weaker components at higher velocities. The cross-correlated spectra and interferometric fringe-visibility phases from the three experiments show that at higher spatial resolution the broad emission profile breaks up into well-spaced features. Two dominant components are detected on the shortest Earth-space baselines (Fig. 2), four well-spaced components are seen on an intermediate baseline (9.3 ED) (Fig. 1), and a complex profile with six well-spaced components is revealed on the longest baseline (19.5 ED) (Fig. 3). At high resolution the spectrum shows well-spaced emission components that imply the existence of a string of compact and high brightness regions in the accretion disc. At lower resolution and with single dish experiments, these features are less visible because the spectrum is convolved with diffuse and halo emissions within the broader beam. The flux density of these cross-correlated features decreases from about 60% to 18% of the auto-correlated flux density when going from 1.3 to 19.5 ED baselines. The LSR velocity range of 430 - 560 km s\({}^{-1}\) covered by the emission features is the same as that found in earlier monitoring experiments[6, 10, 11]. However, the pattern of a multi-component broad feature trailing behind a series of weak features differs from the data obtained at earlier epochs, where prominent groups of features and individual features were observed filling the whole velocity range. Analysis of the identifiable features in the current auto- and cross-correlated spectra shows that there is a nearly regular spacing in velocity of the prominent cross-correlated features of RAKS07AT and RAGS18H (see Fig. 4 and Table 1 in Methods). In particular, a mean velocity separation of 7.3\(\pm\)0.2 km s\({}^{-1}\) is found for the RAGS18H data, while the RAKS07AT and also RAGS11AF data at earlier epochs show a separation of about 6.5\(\pm\)0.2 km s\({}^{-1}\).
The regular spacing of the main features also extends among the weaker features at higher velocities, although with some irregularity in the spacing and some undetected features. This observed spacing and even those irregularities suggest that some ongoing dynamical process leads to the formation of the emission regions with an apparent periodicity. Another important property that further qualifies the nature of the masering regions is the velocity drift of the emission features over time that has been observed during previous monitoring observations [6, 12]. The current observations also show that the velocity pattern of weak and strong features observed at one epoch repeats at a higher velocity during the next epoch, so that even the weak feature at the lowest velocity in the RAGS11AF spectrum corresponds to a prominent feature in the RAGS18H spectrum (Figs. 1-3). An analysis of the feature velocities in the current (systemic velocity) spectra at three separated epochs confirms a steady velocity drift of \(\ddot{R}_{d}\) = 11.1\(\pm\)0.5 km s\({}^{-1}\) yr\({}^{-1}\) (Fig. 4; Methods). The observed drift velocity of the features is important because it confirms that the masering regions are in orbital motion and it also determines the radial location of those regions within a Keplerian disc. The orbital velocity \(V_{k}\) of an emission region moving within the Keplerian disc at a radius \(R_{em}\) (in pc) may be expressed as \(V_{k}=416\,R_{em}^{-1/2}M_{4}^{1/2}\) km s\({}^{-1}\), with \(M_{4}\) the central mass in units of \(4\times 10^{7}\) M\({}_{\odot}\), while its associated velocity drift rate, or the orbital acceleration, may be expressed as \(\ddot{R}_{d}=V_{k}^{2}R_{em}^{-1}=0.176M_{4}R_{em}^{-2}\) km s\({}^{-1}\) yr\({}^{-1}\). The observed velocity drift rate of systemic emission regions in the disc surrounding the central AGN mass of \(4\times 10^{7}\) M\({}_{\odot}\) identifies an orbital velocity of the regions of 1172 km s\({}^{-1}\) and an orbital radius of \(R_{em}\) = 0.126 pc. The current value for the velocity drift rate falls within the range 6.2 to 11.6 km s\({}^{-1}\) yr\({}^{-1}\) found during earlier monitoring, which corresponds to a range in radius of 0.168 to 0.123 pc within the disc[6, 12]. Apparently, the observed emission regions are close to the inner edge of the known molecular zone in the disc. On this orbit the mean velocity separation for the prominent emission regions would indicate a geometric separation on the order of 172 AU, which may be compared with twice the scale height in the disc. For NGC 4258 the local scale height in a standard thin accretion disc[18], \(H=c_{s}R_{em}^{3/2}(GM_{bh})^{-1/2}\), where \(c_{s}\) is the local speed of sound, will be 63 AU at radius \(R_{em}\) = 0.126 pc for a gas temperature of 1000 K, which agrees with the estimate of \(H\leq 0.002R_{em}\) = 62 AU suggested by ground-based observations[10]. The full width at half power of the best resolved features in the RAKS07AT and RAGS18H spectra indicates a size scale for the emission regions on the order of 95 AU, which is similar to the scale height of the disc and nearly half the separation between the regions. The half power beam width is also similar to the scale size of 67 AU of the (related) absorbing structures found in the X-ray data of NGC 4258[19]. For comparison, the estimated half-thickness of the disc inferred from ground-based facilities of 190 AU is about three times the estimated scale height of the disc[11].
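The Keplerian relations above can be checked numerically. The short sketch below is our illustration, reading \(M_{4}\) as the central mass in units of \(4\times 10^{7}\) M\({}_{\odot}\) (so \(M_{4}=1\) here), with radii in pc and velocities in km s\({}^{-1}\); it recovers the quoted orbital radius and velocity from the observed drift rate:

```python
import math

M4 = 1.0       # central mass in units of 4e7 solar masses (our reading)
drift = 11.1   # observed velocity drift rate, km/s per yr

# Invert drift = 0.176 * M4 / R**2 for the orbital radius (pc), then
# evaluate the orbital velocity V_k = 416 * sqrt(M4 / R) (km/s).
R_em = math.sqrt(0.176 * M4 / drift)
V_k = 416.0 * math.sqrt(M4 / R_em)
print(f"R_em = {R_em:.3f} pc, V_k = {V_k:.0f} km/s")
# -> R_em = 0.126 pc, V_k = 1172 km/s, matching the values in the text.
```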
The current understanding of the H\({}_{2}\)O emission in NGC 4258 is that the emission regions occur in a radial zone within the rotating disc where viscous heating and external X-ray illumination can forge a transition from atomic to molecular gas[20, 21, 22, 10, 2]. When the density in this radial zone is in the required range of 10\({}^{8}\) to 10\({}^{10}\) cm\({}^{-3}\) and the temperature in the range of 400-1000 K, collisional excitation of the water molecules can result in a population inversion and a negative optical depth. Regions with a negative optical depth of the H\({}_{2}\)O population in the front section and in any edge-sections of the masering disc may then result in maser amplification of background radio continuum and observable maser features at the systemic velocity and at high and low velocities[16]. The systemic emission features in NGC 4258 have been associated with cloud-cloud superpositions or multi-armed spiral patterns in the disc and/or the presence of turbulent cells within this molecular ring[23, 22, 24, 25, 26]. Similarly, the well-spaced (and possibly periodic) high-velocity features may result from the alignment of spiral shock waves[22], turbulent cells in the molecular ring [25], or flexural waves in the warping disc[26]. The observational data for NGC 4258 from these experiments indicate that the strong masering features first appear at the (apparent) low-velocity end of a string of spectral features drifting upward in velocity over time (moving leftwards in Figs. 1-3) and that features re-appear at a higher velocity in subsequent observing epochs. Because the spectra are actually snapshots of the ongoing masering activity, the sequence of spectra at three epochs shows a time evolution of the features. After appearing at lower velocities, the features show a very rapid initial rise in strength followed by a slower decay on time scales of years, after which weak and decaying features remain for a longer time as they drift towards higher velocities. Assuming that maser amplification is responsible for the emission from within the disc, a velocity-aligned and line-of-sight column density of excited molecules will be superposed on the background radio continuum. Assuming that this background continuum at the base of the southern jet is rather uniformly distributed (Methods), this velocity-aligned column density needs to be specially generated because the differential rotation in the disc prevents any such alignment. Some mechanism is required to structurally perturb the disc and produce a strong temporary enhancement of the amplifying optical depth to facilitate longer-lasting and apparently regularly-spaced emission regions. The apparent regularity in the line-of-sight velocity of the observed emission features may first imply a periodic disc instability for producing these velocity-aligned molecular columns. Large-scale disc instabilities such as sausage and kink instabilities[27] may not operate in a thin rotating disc surrounding an AGN, but a gravitational instability can operate in a disc if its mass is comparable with the mass of the AGN itself[28]. A gravitational instability may result in the formation of spiral arms in the disc, and further gravitational contraction along these spiral arms may form distinct beads or clouds, as seen in the simulations of the Milky Way disc[29]. The formation of dense spiral arms has indeed been put forward as an explanation of the high velocity features of NGC 4258[22].
When self-gravitation in the disc becomes important, the disc will become locally unstable for all axisymmetric perturbations when the Toomre stability coefficient \(Q=(c_{s}\kappa)/(\pi G\Sigma)<1\), where \(c_{s}\) is the local sound speed, \(\kappa=V/R_{em}\) is the epicyclic frequency (equal to \(\Omega\) in a Keplerian disc), and \(\Sigma\) is the molecular surface density in the disc at the location of the emission regions[30, 31, 28]. A simple estimate of the mass of a disc with an outer radius of 0.29 pc, with a uniform density of \(n_{d}=5\times 10^{9}\) cm\({}^{-3}\), and a scale height of 62 AU[10, 11] gives a disc mass \(M_{d}\) = 1.9 \(\times\) 10\({}^{4}\) M\({}_{\odot}\) (Methods). This suggests a value of \(Q\) = 24 for NGC 4258 and indicates that the disc is very stable against gravitational instability. Similar values of the disc mass have been derived from modelling the required disc accretion rate for driving the radio activity of the source[22, 10, 33]. Therefore, unless the disc is substantially more massive, a gravitational instability will not operate in NGC 4258 and would not result in spiral arms with beads-on-a-string emission regions at the systemic velocity and in the tangential sections of the disc. Alternatively, the small-scale shear-driven interchange instability may be considered to facilitate the masering activity. Such an interchange instability was suggested early on by Shakura and Sunyaev[34] to be the most likely viscosity agent in the thin Keplerian disc. Evidenced by the north-south extended radio structure and the non-thermal nuclear emission of NGC 4258, there must be an active viscosity agent in the disc[11, 15] causing an accretion rate on the order of \(\dot{M}\) = 10\({}^{-4}\alpha\) M\({}_{\odot}\) yr\({}^{-1}\) with a viscosity parameter \(\alpha\leq 1\), which is determined on the basis of the radio and bolometric luminosity[10], the equipartition upper limit of the magnetic field[33], and the X-ray absorption within the disc[19]. The magneto-rotational version of this shear-driven interchange instability (MRI) can operate in a differentially rotating disc and would generate local 'sinusoidal' disruptions of the shear layers that interchange inner fluid elements with outer elements and cause viscosity and an exchange of momentum. This MRI operates independently of the strength of the B-field and varies linearly with the local value of the angular rotation velocity in the disc, provided that the field energy of a (self-amplified) poloidal (B\({}_{z}\)) component does not exceed the thermal energy density[35, 36] (see below). The presence of a weak poloidal B-field component serves to counteract Coriolis forces that would prevent non-linear growth in the non-magnetised version of this instability. Large-scale numerical simulations of the linear and non-linear evolution of the MRI show that the 'sinusoidal' toroidal field disruptions will develop into a series of radial interpenetrating 'fingers' of high and low angular momentum, that eventually break up into turbulence and strongly enhance the angular momentum transport[36, 37]. These simulations show that MRIs would be able to operate in the disc of NGC 4258 and that the scale size of the cells will be on the order of the local scale height in the disc[38]. Phenomenologically, the non-linear development of periodic MRI structures within the molecular zone of the disc provides an attractive scenario for creating radial column densities sufficient for H\({}_{2}\)O masering action (Methods).
Simulations of the MRI waveform[35, 36] show that non-linear stretching of the radial segments of the initially sinusoidal waveform could temporarily form a velocity-coherent molecular column density. Amplification in these radial filaments with an enlarged optical depth could, temporarily, result in velocity-separated high-brightness emission features within the MRI waveform, as depicted in the cartoon representation of Figure 5. In this scenario for NGC 4258, the two radial flanks of the outward moving MRI loop in the waveform together form an emission feature with a size of about half the separation between the regions and equal to the scale height in the disc, as observed. Similarly, the contributions of two distinct flanks in the waveform would account for the phase changes observed in the middle of the profiles of some high resolution features of RAKS07AT and RAGS18H (Figs. 1 and 3). However, following the initial formation of radial filaments in the MRI waveform, this waveform will further deform and disrupt the filaments and leave behind some turbulent cells, as evidenced by the MRI simulations[35, 36] and magnetohydrodynamic simulations of accretion discs[39]. Considering that the observational data reveal only the presence of amplifying column densities within the MRI waveform, and not the waveform itself, the appearance in NGC 4258 of a series of regularly spaced weak (remnant) features, preceded by transient high-brightness emission features, provides a scenario that is consistent with the characteristic MRI behaviour. Superpositions of multiple MRI strings may explain the observed spectra obtained during earlier observation epochs in NGC 4258. While the development of MRIs fits well with the current understanding of the accretion viscosity in the disc, further study and simulations are needed to confirm the physical conditions required for the observed MRI development and the formation of velocity-coherent radial molecular filaments. The MRI scenario provides a natural explanation for the apparent periodicity of the spectral features on the basis of a viscous process that may already operate in the disc, and the sudden onset of non-linear development may explain the formation of radial column density enhancements and the observed flux evolution. The presence of two amplifying flanks in the MRI waveform may account for the variation of the observed line profiles and would be consistent with the phase changes in the middle of some high resolution features. Alternative scenarios for generating these same emission characteristics have been considered, based on modulation of the interferometric fringe visibility amplitudes with projected baseline lengths, on birefringence, or on special scattering/propagation conditions. However, such scenarios also require a mechanism for generating sufficient (radial) path lengths of excited molecules in a differentially rotating and stratified medium, while a second mechanism would be needed for generating the periodic and flaring spectral behaviour. Although alternative scenarios other than beads-on-a-string spiral cloud structures or instabilities for generating the required column densities cannot be ruled out, no satisfactory alternative scenario has yet been devised to reproduce the observed emission characteristics. Adopting a scenario with MRI-generated filamentary foreground structures amplifying the background radio continuum of the southern jet, more of the physical conditions of the emission regions can be estimated.
Because the radio continuum background at the base of the southern jet must be rather uniform, with a flux density on the order of \(S_{c}\) = 0.1 mJy at 22 GHz[14, 11, 15] (Methods), the amplifying optical depth associated with the flux density \(S_{\ell}\) of the strongest cross-correlated feature in the RAGS18H spectrum can be estimated as \(\tau_{c}=\ln(S_{\ell}/S_{c})=-9.6\) (see Table 1 in Methods). Slightly lower optical depth values are found for the other high resolution features in RAGS18H, while the values for the less resolved features in RAKS07AT and RAGS11AF become slightly higher. The optical depth required for the peak in the auto-correlation spectrum in the RAGS18H data is \(\tau_{t}\) = \(-\)11.3, which suggests that an additional halo contribution \(\tau_{d}\) = \(-\)1.7 comes from the more diffuse toroidal sections of the MRI waveform. This diffuse contribution to the optical depth convolves the high brightness components in the auto-correlation spectra and decreases as the beam becomes larger at lower resolution. Considering that the local density may be on the order of n(\(H_{2}\)) = 10\({}^{9}\) cm\({}^{-3}\) with a water abundance n(H\({}_{2}\)O)/n(\(H_{2}\)) \(\approx\) 10\({}^{-4}\) in the disc at 0.15 pc[20, 21], the actual population inversion in a (representative) 10 AU radial filament within an 84 AU-sized emission region is quite modest at \(\Delta n/n(H_{2}O)=1.3\times 10^{-7}\) (Methods). The estimated brightness temperatures of the strongest masering components increase with a narrowing SVLBI beam from \(T_{b}\) = 5.3 \(\times\) 10\({}^{12}\) K for RAGS11AF, to 1.2 \(\times\) 10\({}^{14}\) K for RAKS07AT, and to 2.5 \(\times\) 10\({}^{14}\) K for RAGS18H. Although magnetic fields are present in the disc of NGC 4258, the strength of the field is not yet known. While the MRI scenario does not depend directly on the magnetic field, a dominant toroidal component would be modified during the non-linear MRI development and produce a radial field component in the radial sections of the MRI waveform (see Fig. 5). Similarly, the toroidal field component should be dominant for the high- and low-velocity features from the edge-on sections of the disc. Magnetic fields have not yet been detected from within the disc and only an upper limit of 0.130 Gauss has been found for any radial field components associated with prominent features of NGC 4258[33]. Incidentally, a similar upper limit is found for the poloidal field component, because the MRI only works when the energy density of the poloidal field component B\({}_{z}\) does not dominate the local thermal energy density. This suggests a ratio \(\beta(B_{z})=8\pi\rho c_{s}^{2}/B_{z}^{2}\geq 3\)[35] and a value for B\({}_{z}\)\(<\) 0.14 Gauss. Further evaluation is not yet possible for the effect of the magnetic fields on the conditions for MRI development and for the formation of the masering features.

**3. Summary**

In conclusion, the SVLBI observations of NGC 4258 with the RadioAstron Observatory have resulted in new details about the H\({}_{2}\)O masering action at the systemic velocity and the workings inside the rotating disc. At high spatial resolution, the single broad features observed in the total power spectra decompose into regularly spaced high-brightness masering regions inside the disc. These high-brightness regions detected with SVLBI are found to be part of a longer series of drifting and periodic emission regions that are also detected in total power data.
Interpreting the spectra obtained from these three high resolution data sets of NGC 4258 shows that, after an initial rapid growth in strength, the high-brightness features show a slow decrease in strength and eventually become part of this slowly drifting series of weak spectral features that remain in the spectrum for years. In terms of a maser amplification scenario, the observed emission features in NGC 4258 only indicate the presence of regions with a radially amplifying column density of H\({}_{2}\)O molecules, but they do not directly explain the underlying mechanism producing these column densities. Although more complex interpretations of the apparently periodic and moving emission regions may be possible, their association with transiently varying radial sections in a magneto-rotational instability waveform provides an attractive scenario. The sudden onset of the non-linear MRI development naturally explains the periodicity of the emission regions and the generation of a temporary enhancement of the molecular column density in the cells. The velocity-coherent column density containing excited H\({}_{2}\)O molecules results in amplification of the background continuum, and further non-linear MRI development will in time deform the waveform and diminish/destroy this radial column density. An MRI scenario for the emission regions suggests: first, that the size of the high-brightness features is roughly half the separation distance between the regions and equals the local scale height of the disc; second, that the observed velocity drifts confirm they are part of the differential rotation pattern in the disc; third, that the non-linear behaviour of the MRI explains the flaring and flux variability; and fourth, that the transient behaviour of the radial flanking sections of the MRI waveform may produce the required velocity-coherent amplifying optical depth. Furthermore, the association of MRI processes with the viscosity and turbulence causing radial momentum transfer provides a consistent link between the intensity of the observed maser emission and the accretion flow in the differentially rotating thin Keplerian disc. In addition, this association suggests that the accretion rate has been low during the current three epochs as compared with earlier epochs, when a higher accretion rate with multiple MRI series filled the spectra with strong features. The occurrence of an MRI in a Keplerian disc would indeed confirm this shear-driven instability to be an agent for generating the viscosity in the disc, as proposed nearly 50 years ago by Shakura and Sunyaev. The observed emission regions inside the molecular zone in the disc move from west to east across the line with the systemic velocity, and within the MRI scenario the variation of their emission strength reflects the time-evolution of the velocity-coherent molecular columns in the radial filaments. The available observational data from earlier epochs need to be re-investigated in order to discover patterns and variations of line fluxes that would further verify the current emission scenario for the regions in the disc. Adopting an MRI scenario also for the observed redshifted and blueshifted edge-on disc features of NGC 4258 would require that their amplifying optical depth follows from a tangential integration over a series of MRI cells. The velocity spacing of such features would then indicate the radial distribution of developed MRI structures in the disc, which determines the ongoing viscosity process in the disc.
## Methods

**The RadioAstron observations -** The space VLBI observations of NGC 4258 (M 106) with the 10-metre radio telescope on the _RadioAstron Observatory[1]_ in the time period between February 2014 and February 2017 have resulted in successful RAO experiments on baselines with ground-based radio telescopes (GRT). The highest angular resolution on single baselines from RadioAstron to ground-based radio telescopes during these experiments ranged from 165 \(\mu\)as for a perigee space-Earth baseline of 1.0 ED (Earth diameter) to a 7 \(\mu\)as fringe spacing for an apogee baseline of 26.9 ED. The three experiments presented in this paper demonstrate the effect of increasing resolution on determining the structure of the emission regions in NGC 4258. The observations were executed during relatively short time intervals of 1 to 1.5 hours in order to facilitate the necessary cooling periods for the space antenna, thus avoiding its deformation due to differential solar heating. The ground-based facilities used for the H\({}_{2}\)O Megamaser experiments with RadioAstron are Effelsberg (Germany), Green Bank (United States), Torun (Poland), Yebes (Spain), and the Kwazar stations Kalyazin, Svetloe, and Badary (Russia). The conversion from angular resolution to spatial resolution is 7.62 AU per micro-arcsecond at a distance of 7.6 Megaparsecs. The RadioAstron experiments described here employed the 22 GHz receiver system operating within a fixed frequency window of 22188 - 22204 MHz, with an estimated T(system) = 100 - 127 K. The observation of the H\({}_{2}\)O vapour \(6_{16}-5_{23}\) (F=5-4) transition used a rest frequency of 22235.120 MHz. The location of the redshifted spectral emission of NGC 4258 at 22199.96 MHz within the 16 MHz frequency window varies with the relative velocity of the spacecraft during its orbit at the time of the observations, and is often close to the edge of the spectral window. The diameter of the RadioAstron antenna of only 10 m is small compared with the large 100 m class ground-based facilities such as the Green Bank Telescope (GBT) and the Effelsberg Telescope (EFF). However, since the sensitivity on a VLBI baseline depends on the product of the sensitivities of the two antennas, the VLBI measurements with RadioAstron are also most sensitive on the longest baselines when using one of these large ground-based telescopes. RadioAstron baselines with smaller ground-based antennas are less sensitive, but they are essential for verification and calibration purposes. Because the RAO baselines are much longer than any of the terrestrial baselines, the results reported in this paper are single baseline measurements and cannot be used for mapping the source. Experiment RAKS07AT on a baseline of 9.1 - 9.8 ED was executed for 60 minutes on 5 February 2014 (2014.0986) with ground stations at Effelsberg (EFF, Germany; RAO-EFF = 9.4 ED), Torun (TR, Poland; RAO-TR = 9.1 ED), and Badary (BD, Russia; RAO-BD = 9.8 ED), resulting in space-ground and ground-ground fringes with an angular resolution of 23 \(\mu\)as on an Earth-space baseline at 22 GHz, which corresponds to 175 AU at NGC 4258. A bandpass calibration was performed using a 5 minute observation of the calibrator source 1219+044. Experiment RAGS11AF on baselines of 0.5 - 1.9 ED was successfully executed for 60 minutes on 18 December 2014 (2014.964) with ground stations at Green Bank (GBT, USA; RAO-GBT = 1.9 ED) and Torun (TR, Poland; RAO-TR = 1.4 ED).
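As a quick cross-check of the quoted angular resolutions and linear scales, the following minimal Python sketch (an illustration added here, not part of the original analysis) evaluates the fringe spacing \(\theta\approx\lambda/B\) for each baseline and converts it to a projected linear scale at the adopted distance of 7.6 Mpc; the physical constants are standard values assumed for this estimate.

```python
# Minimal sketch: fringe spacing theta ~ lambda/B and the angular-to-linear
# conversion at 7.6 Mpc (constants are standard assumed values).
import math

C_CM_S = 2.99792458e10          # speed of light [cm/s]
EARTH_DIAM_CM = 1.2742e9        # one Earth diameter (ED) [cm]
AU_CM = 1.495979e13             # astronomical unit [cm]
MPC_CM = 3.0857e24              # megaparsec [cm]
RAD_TO_UAS = math.degrees(1.0) * 3600.0 * 1e6   # radians -> micro-arcsec

lam_cm = C_CM_S / 22.2e9        # wavelength near the 22 GHz H2O line
dist_cm = 7.6 * MPC_CM          # adopted distance of NGC 4258

for name, baseline_ed in [("RAKS07AT", 9.4), ("RAGS11AF", 1.9),
                          ("RAGS18H", 19.5), ("apogee", 26.9)]:
    theta_rad = lam_cm / (baseline_ed * EARTH_DIAM_CM)
    print(f"{name:9s}: {theta_rad * RAD_TO_UAS:6.1f} uas "
          f"-> {theta_rad * dist_cm / AU_CM:6.0f} AU")
# Gives ~23, ~115, ~11 and ~8 uas, close to the quoted 23, 113, 11 and 7 uas,
# and reproduces the 7.62 AU per micro-arcsecond conversion (11 uas -> ~85 AU).
```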
Space-ground fringes resulted in an angular resolution of 113 \(\mu\)as on an Earth-space baseline at 22 GHz, corresponding to 861 AU at the target source. It was not possible to use the 5 minute observation of the calibrator source 1150+497 intended for bandpass calibration of the target data. Instead, an offline baseline fitting was used to remove some of the baseline structure in Figure 2. Experiment RAGS18H on a baseline of 19.5 ED was executed for 74 minutes on 17 March 2016 (2016.2103) with ground stations at Green Bank (GBT, USA), Yebes (YB, Spain), and Torun (Poland). Space-ground fringes provide an angular resolution of 11 \(\mu\)as at 22 GHz, corresponding to 84 AU for NGC 4258. Ground-ground fringes were obtained on the terrestrial GBT - TR baseline. A 15-minute observation of the continuum calibrator source 0851+202 was used for bandpass calibration.

**Data processing -** The staff at the ASCFX correlator at the Astrospace Center in Moscow have correlated the spectral line data multiple times in order to find and confirm interferometric fringes on the Earth-space baselines[17]. The spectra of the full 16 MHz correlated bandwidth are presented here. The data for the bandpass calibrator sources were correlated using the same channel spacing as the spectral line data for RAKS07AT and RAGS18H. For RAGS11AF no bandpass calibrator was available, and a baseline fitting routine was applied to the auto-correlation spectrum of the Effelsberg data. Finding interferometric fringes on the baselines with the orbiting antenna is challenging because of the typically low signal-to-noise ratio (SNR) of the fringes and the positional uncertainties in the spacecraft orbit. During post-correlation, a coarse fringe search for the delay and delay rate (to find the distance and velocity of the spacecraft) was performed with the PIMA software[40] for processing individual baselines. The PIMA searches resulted in fringe detections up to baseline lengths of 26.9 ED for NGC 4258. After setting the instrumental delays of the system, a second step of fringe-fitting was used to update the delay and delay rate model using the task _FRING_ from the Astronomical Image Processing System (AIPS). Although this task does allow combining data from multiple ground telescopes in a global solution, it also serves well for our single baseline detection experiments, where no imaging is possible. Further data reduction has been done with AIPS using the tasks _BPASS_ for bandpass calibration with data of nearby calibrators, and _APCAL_ for amplitude calibration using the _ANTAB_ file with station gain curves and system temperatures. The velocity scale of the correlated signals was determined from the actual channel frequencies and the known frequency offsets during the observations. In addition, the radial component of the observer's velocity (i.e. the ground station) at the time of observation was used to adjust the velocity scale of the spectra, as described for the AIPS task _CVEL_. All auto-correlation and single baseline cross-correlation Stokes \(I\) spectra and phases were obtained with the task _POSSM_, and exported for plotting and smoothing in a _MATLAB_ package.

**Structure of the background radio continuum -** Within a maser amplification scenario, both the foreground optical depth and the background radio continuum are the key parameters. Therefore, strong emission from a moving region in the accretion disc may also result from a local enhancement in the radio continuum background.
In the disc configuration of NGC 4258, a location of the radio continuum enhancement west of the transit position in the disc would result in strong emission features below the systemic velocity. If the location is east, all strong features would be above the systemic velocity. However, for NGC 4258 the strong features are found both below and above the systemic velocity, which suggests a rather uniform radio background without substantial flux enhancements.

**The Velocities of the Masering Features and the Drift Rate -** The characteristics of the features have been determined by measuring the spectrum and by Gaussian fitting of the convolved features. The main objective of these measurements was to determine the centroid velocities, while the amplitudes and line-widths of the resolved features are not yet important for this evaluation. Gaussian fitting of isolated features may not be adequate because of baseline structures and non-Gaussian shapes of the lines. Adequate Gaussian fitting of the convolved main profiles in the total power data, and also of the convolved profile in the cross-correlated RAGS11AF data, would require a priori knowledge of the velocities of the underlying components. In the case of RAKS07AT and RAGS18H, the number of components and their centre velocities may be derived from the cross-correlated spectra, which may be further verified from the breaks in the phase information. Only for the partially resolved cross-correlated spectrum and the auto-correlated spectra of RAGS11AF were three underlying components assumed, aligned with the regular pattern in the other parts of the spectrum and guided by breaks in the phases of the cross-correlated data. Considering the uncertainties in determining the velocities of the features, they appear regularly spaced, although some irregularity may be seen in the separation between features at higher velocities. The component velocities and their estimated strengths are presented in Table 1. All identified velocity components in the spectral data at the three epochs are displayed in the diagram in Figure 4, and the high brightness components seen in the cross-correlation data have been indicated. The time intervals between the data of RAKS07AT, RAGS11AF and RAGS18H are 0.865 years and 1.244 years. Considering that there are substantial time intervals between these observational epochs, linear extrapolations have been indicated to show that the pattern of weak and strong velocity components repeats from epoch to epoch. Inaccuracies in the velocity determinations resulting from changes in the emission profiles and total power baseline structures add uncertainty to the drift rate determined from these data. Since the emission features are associated with a non-linearly developing instability, some of these data points may show a velocity offset and some may have disappeared in time. A mean value of the drift rate starting at RAKS07AT and ending at RAGS18H is found to be 11.1\(\pm\)0.5 km s\({}^{-1}\) yr\({}^{-1}\), which is indeed close to the upper end of the range of 6.2 to 11.6 km s\({}^{-1}\) yr\({}^{-1}\) observed during earlier monitoring campaigns[6, 12]. Choosing different sequences of three data points (up or down) for linear extrapolations at subsequent epochs results in impossible fits and unrealistic values for the drift rate. Tracing the features across the epochs also confirms the intensity evolution of individual velocity features during the 2.1-year time interval.
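The drift-rate bookkeeping above reduces to a linear fit of feature velocity against epoch. A minimal Python sketch is given below, using the quoted epoch dates; the velocity track is a hypothetical placeholder chosen to match the quoted mean rate, not a measured sequence from Table 1.

```python
# Hedged sketch of the mean drift-rate estimate across the three epochs.
# Epoch dates are from the text; the velocity values are illustrative
# placeholders (not measurements) chosen to match the quoted mean rate.
epochs = [2014.0986, 2014.964, 2016.2103]    # RAKS07AT, RAGS11AF, RAGS18H
velocities = [470.0, 479.6, 493.4]           # hypothetical feature track [km/s]

dt = epochs[-1] - epochs[0]                  # ~2.11 yr between first and last epoch
dv = velocities[-1] - velocities[0]
print(f"time span {dt:.2f} yr, mean drift {dv / dt:.1f} km/s/yr")
# -> ~11.1 km/s/yr, at the upper end of the 6.2-11.6 km/s/yr historical range.
```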
After a long quiescent period of the MRI with no emission, the maser features show a rapid increase in strength followed by a slower decay, after which they remain for years as remnant features in the spectrum. This evolution is shown by the decay of the strong features of RAKS07AT and RAGS11AF and by the lowest velocity features in RAGS11AF evolving into the first three prominent features in the shifted spectral window of the RAGS18H observation (Figs. 1-3). This evolution has also been depicted in the cartoon representation of the non-linear MRI development in Figure 5.

**The mass of the disc and the accretion rate -** When self-gravitation in the disc becomes important, the disc will become locally unstable and start forming spiral arms when the Toomre stability coefficient: \[Q=(c_{s}\Omega_{k})/(\pi G\Sigma) \tag{1}\] becomes less than unity[30, 28], where \(c_{s}\) is the local sound speed, \(\Omega_{k}=V/R_{em}=\sqrt{GM_{bh}/R_{em}^{3}}\) at the emission regions, the molecular surface density in the disc is \(\Sigma=2H\times n(H_{2})\times 2m_{p}\), and \(R_{em}\) is the radius and \(H\) the scale height at the location of the emission regions. For a local temperature of 800 K the speed of sound is estimated at \(c_{s}\) = 3.3 km s\({}^{-1}\), the local density is \(n(H_{2})\) = \(5\times 10^{9}\,cm^{-3}\), and the orbital velocity in the Keplerian disc is 1171 km s\({}^{-1}\) at a radius \(R_{em}\) = 0.126 pc. This gives an estimate of the Toomre stability parameter \(Q\) = 24, which suggests the disc is rather stable against gravitational instabilities. A simple estimate of the mass of an accretion disc with an outer radius of 0.29 pc, a uniform density of \(n_{d}\) = \(5\times 10^{9}\,cm^{-3}\), and a scale height of 62 AU[10, 11] gives a disc mass \(M_{d}\) = 1.9 \(\times 10^{4}\) M\({}_{\odot}\). Similar values may also be derived from models of the required disc accretion rate on the order of \(\dot{M}\) = \(10^{-4}\alpha\) M\({}_{\odot}\) yr\({}^{-1}\) (\(\alpha\leq 1\)) needed to drive the radio activity of the source[22, 10, 33]. Typically, for a mass ratio \(M_{d}/M_{bh}\leq 10^{-2}\) the self-gravity effect is not important[28, 32], and this ratio would be \(5\times 10^{-4}\) for NGC 4258. These stability estimates suggest that the accretion disc in NGC 4258 is gravitationally stable and that the formation of spiral structures, spiral shock waves and beads-on-a-string is not very likely.

**Behaviour of a Magneto-Rotational Instability -** The association of the apparently periodic and moving emission regions with transiently varying radial sections in a magneto-rotational instability (MRI) waveform provides an attractive scenario. Starting from the idea that the MRI operates in accretion discs and is the cause of the necessary viscosity in the disc, this instability naturally explains the periodicity of the emission regions, can create a temporary molecular column density in the cells, and, as simulations show, can exhibit a sudden onset of non-linear development. A velocity-coherent column density containing excited water molecules will result in temporary amplification of the background continuum, but further non-linear MRI development will in time deform the waveform and diminish/destroy this radial column density.
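As an order-of-magnitude check on the disc mass and the Toomre criterion of equation (1), here is a minimal Python sketch using the parameter values stated above. Treating the quoted 62 AU scale height as the full disc thickness is an assumption of this sketch, so the resulting \(Q\) should be read as indicative only.

```python
# Order-of-magnitude sketch of the disc mass and Toomre Q of equation (1).
# All inputs are quoted in the text; the thickness convention is assumed.
import math

G = 6.674e-8                                   # [cm^3 g^-1 s^-2]
M_P = 1.6726e-24                               # proton mass [g]
AU, PC, MSUN = 1.496e13, 3.086e18, 1.989e33    # [cm], [cm], [g]

rho = 5e9 * 2 * M_P          # mass density for n(H2) = 5e9 cm^-3 [g/cm^3]
thickness = 62 * AU          # assumed: quoted scale height taken as thickness

# Uniform-density disc mass out to 0.29 pc
R_out = 0.29 * PC
M_d = math.pi * R_out**2 * thickness * rho
print(f"M_d ~ {M_d / MSUN:.1e} Msun")          # ~2e4 Msun, as quoted

# Toomre Q at R_em = 0.126 pc with V = 1171 km/s and c_s = 3.3 km/s
omega_k = 1171e5 / (0.126 * PC)                # Keplerian epicyclic frequency [1/s]
sigma = thickness * rho                        # surface density [g/cm^2]
Q = 3.3e5 * omega_k / (math.pi * G * sigma)
print(f"Q ~ {Q:.0f}")   # a few tens (the text quotes 24); Q >> 1, i.e. stable
```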
The MRI interchange instability only operates in the presence of a weak poloidal field[35], which is needed to compensate for the Coriolis forces that would otherwise prevent non-linear growth in the non-magnetised version of the instability, as proposed early on by Shakura and Sunyaev[34]. Although the MRI will operate independently of the strength of the local magnetic field, there is a self-regulated limit for the poloidal B-field component to be less than the local thermal energy density. Although the MRI may be affected by the magnetic field conditions, it does not put any requirements on the radial and toroidal B-field components. The MRI simulations presented in the series of papers by Balbus and Hawley[35, 36, 37] show that MRI waveforms can develop in a disc over multiple orbits and then suddenly show a non-linear increase in amplitude. During this sudden non-linear development the sinusoidal cells will be stretched radially and form radial strings of swept material. The MRI structure could be visualised as a 2D structure in the radial and toroidal directions, in such a way that the vertical (poloidal) height of the cell is equal to the scale height of the disc[38]. As a result, the dominant molecular columns during a non-linear development will extend perpendicular to the disc and will be equally spaced along the centreline in the toroidal (and velocity) direction. The differentially rotating atmospheric surface layer of Jupiter shows similar shear-driven structures. An amplifying column density only works when the excited molecules in the column are velocity-coherent, which suggests that this velocity coherence is easily destroyed during further non-linear evolution of the MRI. The simulations also show that the end result of the non-linear MRI development phase will be blobs of entrained gas alternately moving inward and outward, which will account for radial momentum transfer in the disc and the generation of viscosity in the disc. To interpret the results of the simulations[35], a cartoon visualising the evolution of the instability and the change of the amplifying column density is presented in Figure 5. In this scenario, the amplifying gains along each of the filaments will determine the shape of the feature, such that high optical depths produce a single-peaked feature and smaller optical depths can produce flat-shaped (maybe double-peaked) features.

**Masering Conditions in the Disc -** The optical depths deduced for the emission lines detected in our experiments appear very large because the radio continuum background associated with the base of the southern jet is very low in NGC 4258. However, the expected path length with a coherent velocity can be very large for an MRI structure in a parsec scale accretion disc. The optical depth or maser gain for the H\({}_{2}\)O column density may be expressed as[41]: \[\tau=(h\nu/4\pi\Delta v_{D})g_{2}B_{21}\int(n_{2}-n_{1})d\ell, \tag{2}\] where \(\nu\) is the transition frequency, \(\Delta v_{D}\) is the Doppler line width, \(g_{2}\) is the statistical weight of the upper sub-level and \(B_{21}\) is the Einstein absorption coefficient. The difference in the upper and lower energy level populations is expressed as \(\Delta n=n_{2}-n_{1}\). For the 22 GHz \(6_{16}-5_{23}\) transition of water vapour, this expression reduces to \(\tau=-5.02\times 10^{-12}(\Delta n\,\ell)\), with \(\ell\) being the path length and \(\Delta n\,\ell\) the inverted column density in cm\({}^{-2}\).
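A minimal numerical sketch of this bookkeeping is given below (constants as quoted in the text; variable names are illustrative). It back-solves the line flux implied by \(\tau_{c}=\ln(S_{\ell}/S_{c})\) and reproduces the required inverted column density and inversion fraction discussed next.

```python
# Sketch of the maser optical-depth bookkeeping for the reduced 22 GHz gain
# formula tau = -5.02e-12 * (dn * l); all constants are quoted in the text.
import math

AU = 1.496e13                           # [cm]

# Line flux implied by tau_c = ln(S_line / S_continuum) = -9.6
S_c = 0.1e-3                            # background continuum [Jy]
print(f"S_line ~ {S_c * math.exp(9.6):.2f} Jy")     # ~1.5 Jy

# Inverted column density required for tau = -10
dn_l = -10.0 / -5.02e-12                # [cm^-2]
print(f"dn*l ~ {dn_l:.1e} cm^-2")       # ~2e12 cm^-2

# Inversion fraction for a 10 AU velocity-coherent path
n_h2o = 1e9 * 1e-4                      # n(H2) * H2O abundance [cm^-3]
dn = dn_l / (10 * AU)                   # required Delta n [cm^-3]
print(f"dn/n(H2O) ~ {dn / n_h2o:.1e}")  # ~1.3e-7, a very modest inversion
```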
For a negative optical depth of \(\tau\) = \(-\)10, the required column density of inverted molecules is \((\Delta n\,\ell)=2\times 10^{12}\,cm^{-2}\). For a particle density of \(n(H_{2})=10^{9}\,cm^{-3}\) and a relative H\({}_{2}\)O abundance of 10\({}^{-4}\) in the disc at 0.15 pc[20, 21], the required inversion level is \((\Delta n/n(H_{2}O))=1.3\times 10^{-6}\) per AU of path length. For a possible path length of 10 AU, the inversion level along a velocity-coherent path needs to be only \(1.3\times 10^{-7}\), which is a very modest requirement for the environment in the molecular zone.

**Acknowledgements** The authors dedicate this paper to the memory of our colleague and friend Nikolai Kardashev, a man of great vision, who persevered to realise the RadioAstron mission. The authors thank the observatory staff of the ground telescope stations Effelsberg, Green Bank, Torun, Yebes, and the Kwazar stations Kalyazin, Svetloe, and Badary for their participation in the observations. These observations have been correlated at the ASC DiFX correlator and the authors thank the Correlator Team members for their contributions, their repeated re-correlation efforts, and their unfailing support of this project. The authors also thank the other members of the H\({}_{2}\)O Megamaser Team for their support of this project: Alexei Alakoz, Simon Ellingsen, Ivan Litovchenko, James Moran, and Alexander Tolmachev. The authors thank Eduard Vorobyov (Uni. Vienna) for valuable discussions about the stability criteria for the disc. The RadioAstron project has been led by the AstroSpace Centre of the Lebedev Physical Institute of the Russian Academy of Sciences and the Lavochkin Scientific and Production Association under a contract with the State Space Corporation ROSCOSMOS, in collaboration with partner organisations in Russia and other countries. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under a cooperative agreement by Associated Universities, Inc. The European VLBI Network is a joint facility of independent European, African, Asian, and North American radio astronomy institutes. WAB acknowledges the support from the National Natural Science Foundation of China under grant No. 11433008, and the Chinese Academy of Sciences President's International Fellowship Initiative under grants No. 2019VMA0040, 2021VMA0008 and 2022VMA0019. TA acknowledges the grant support from the Youth Innovation Promotion Association of CAS. AMS was supported by the Ministry of Education and Science of Russia (the basic part of the State assignment, K 1567 no. FEUZ-2020-0030).

WAB coordinated the research, carried out the data reduction, and wrote the manuscript. WAB, TA, CH, HI, VK, and AS contributed to the discussion and interpretation of the data, and provided comments on the manuscript. WAB served as Principal Investigator of the _RadioAstron_ MegaMaser Key Science Program.

**Competing Interests** The authors declare that they have no competing financial interests.

**Data availability and Code availability** The correlated data for RadioAstron experiments RAGS11AF, RAKS07AT and RAGS18H are available from the RadioAstron Data Archive at the Astrospace Center of the P.N. Lebedev Physical Institute (ASC LPI) in Moscow at [http://opendata.asc.rssi.ru/index.php](http://opendata.asc.rssi.ru/index.php).
Data reduction has been done with the Astronomical Image Processing System (AIPS), which was developed by the National Radio Astronomy Observatory and is documented and available at [http://www.aips.nrao.edu/index.shtml](http://www.aips.nrao.edu/index.shtml). Plotting procedures have been used from the MATLAB Toolbox distributed by MathWorks, Inc. at [https://www.mathworks.com/products/matlab.html](https://www.mathworks.com/products/matlab.html). The PIMA software package for processing individual baselines can be found at [http://astrogeo.org/pima/](http://astrogeo.org/pima/). A description of the ASC Software correlator can be found in the literature[17].

**Correspondence** Correspondence should be addressed to WAB ([email protected]).

**Figure Captions**
2309.15754
Weighted estimates for the Bergman projection on planar domains
We investigate weighted Lebesgue space estimates for the Bergman projection on a simply connected planar domain via the domain's Riemann map. We extend the bounds which follow from a standard change-of-variable argument in two ways. First, we provide a regularity condition on the Riemann map, which turns out to be necessary in the case of uniform domains, in order to obtain the full range of weighted estimates for the Bergman projection for weights in a B\'{e}koll\`{e}-Bonami-type class. Second, by slightly strengthening our condition on the Riemann map, we obtain the weighted weak-type $(1,1)$ estimate as well. Our proofs draw on techniques from both conformal mapping and dyadic harmonic analysis.
A. Walton Green, Nathan A. Wagner
2023-09-27T16:09:32Z
http://arxiv.org/abs/2309.15754v3
# Weighted estimates for the Bergman projection on planar domains

###### Abstract.

Let \(\Omega\subset\mathbb{C}\) be a bounded, simply connected domain. We generalize well-known weighted inequalities due to Bekolle and Bonami on \(\mathbb{D}\) by giving a sufficient condition on the domain \(\Omega\) such that the Bergman projection \(\Pi_{\Omega}\) is bounded on \(L^{p}(\Omega,\sigma)\) for \(1<p<\infty\) and \(\sigma\) belonging to the Muckenhoupt class \(\mathrm{B}_{p}(\Omega)\) whose basis consists of the images, under the Riemann map, of Carleson boxes in \(\mathbb{D}\). We also show that in the special case of uniform domains, the same condition is also necessary for the full-range weighted estimates, and moreover provide an alternative characterization of the weight class that is intrinsic to \(\Omega\). Our approach uses techniques from both conformal mapping and dyadic harmonic analysis. We also apply these methods to prove endpoint weak-type estimates and limited-range weighted inequalities.

Key words and phrases: Bergman projection, planar domains, weighted estimates, Riemann map, weak-type estimates.

2020 Mathematics Subject Classification: Primary: 30H20, 42B20. Secondary: 30C20.

A. W. Green's research partially supported by NSF grant DMS-2202813. N. A. Wagner's research partially supported by NSF grant DMS-2203272.

A weight \(\sigma\), that is, a locally integrable, almost everywhere positive function on \(\mathbb{D}\), belongs to the Bekolle-Bonami class \(\mathrm{B}_{p}(\mathbb{D})\) if the \(\mathrm{B}_{p}(\mathbb{D})\) characteristic, \[\left[\sigma\right]_{\mathrm{B}_{p}(\mathbb{D})}=\sup_{I}\left\langle\sigma\right\rangle_{Q_{I}}\left\langle\sigma^{-1}\right\rangle_{\frac{1}{p-1},Q_{I}} \tag{1.2}\] is finite, where we are using the following local average notation. For a measurable function \(f\), \(0<p<\infty\), a weight \(v\), and a measurable set \(E\), \[\left\langle f\right\rangle_{p,v,E}=\frac{\|f\|_{L^{p}(E,v)}}{\left(\int_{E}v\,dA\right)^{1/p}},\quad\|f\|_{L^{p}(E,v)}=\left(\int_{E}|f|^{p}\,v\,dA\right)^{1/p}.\] When \(p=\infty\), we understand both quantities above to be the \(L^{\infty}(E)=L^{\infty}(E,v)\) norm. The measure \(dA\) is the area measure in \(\mathbb{C}\) (normalized so that \(A(\mathbb{D})=1\)) and when \(v\) is identically \(1\) or \(p=1\) we omit those parameters in the notation. Quantitative weighted estimates for \(\Pi_{\mathbb{D}}\) in terms of the characteristic (1.2) are also well-understood. For \(1<p<\infty\), \[\left[\sigma\right]_{\mathrm{B}_{p}(\mathbb{D})}^{\frac{1}{2p}}\lesssim\|\Pi_{\mathbb{D}}\|_{\mathcal{L}(L^{p}(\mathbb{D},\sigma))}\lesssim\left[\sigma\right]_{\mathrm{B}_{p}(\mathbb{D})}^{\max\left\{1,\frac{1}{p-1}\right\}}. \tag{1.3}\] The lower inequality can be derived from [1, Proposition 2] together with arguments in [10], and the upper was first established in [11] and was extended to the unit ball in \(n\) dimensions in [10]1. A weak-type analog of (1.3) also holds when \(p=1\), see [1], though the sharp dependence on the \(\mathrm{B}_{1}(\mathbb{D})\) characteristic is not known.

Footnote 1: The quantitative upper estimate is not simply a matter of tracking the constants in the qualitative case, but rather was delayed until the proof of the \(\mathrm{A}_{2}\) conjecture in harmonic analysis [10] and the ensuing "sparse revolution" that took place in the early 2010's [1, 12, 13].

The following question is natural: what is the appropriate generalization of (1.2) and (1.3) to general bounded simply connected planar domains \(\Omega\)?
One immediate obstacle is the apparent lack of a canonical replacement for Carleson boxes that respects the boundary geometry. A proposal was given by Burbea in [14, Theorem 2], which we slightly simplify here. Let \(\psi\) be a conformal map from \(\mathbb{D}\) onto \(\Omega\). For \(1\leq p<\infty\), we say a weight \(\sigma\) on \(\Omega\) belongs to the Bekolle-Bonami class \(\mathrm{B}_{p}(\Omega)\) if the \(\mathrm{B}_{p}(\Omega)\) characteristic, \[\left[\sigma\right]_{\mathrm{B}_{p}(\Omega)}=\sup_{I}\left\langle\sigma\right\rangle_{\psi(Q_{I})}\left\langle\sigma^{-1}\right\rangle_{\frac{1}{p-1},\psi(Q_{I})} \tag{1.4}\] is finite. For the domains we consider, the images \(\{\psi(Q_{I})\}_{I}\) do indeed form a Muckenhoupt basis (in the sense of [1, Definition 3.1]) and furthermore \(\left[\sigma\right]_{\mathrm{B}_{p}(\Omega)}\) is, up to an absolute constant, independent of the choice of conformal map (see Proposition 2.1 below). By changing variables, \(\left[\sigma\right]_{\mathrm{B}_{p}(\Omega)}\) can be related to a weighted characteristic on the unit disc. For weights \(u,v\) on \(\mathbb{D}\), and \(1\leq p<\infty\), we say \(u\) belongs to the weighted Bekolle-Bonami class \(\mathrm{B}_{p}(\mathbb{D},v)\) if \[[u]_{\mathrm{B}_{p}(\mathbb{D},v)}=\sup_{I}\left\langle u\right\rangle_{v,Q_{I}}\left\langle u^{-1}\right\rangle_{\frac{1}{p-1},v,Q_{I}}<\infty. \tag{1.5}\] When \(v\) is identically \(1\), we remove it from the notation and recover (1.2). A change of variable reveals \[\left\langle\sigma\right\rangle_{p,\psi(Q_{I})}=\left\langle\sigma\circ\psi\right\rangle_{p,|\psi^{\prime}|^{2},Q_{I}}, \tag{1.6}\] hence, for \(1\leq p<\infty\), \(u=(\sigma\circ\psi)\), and \(v=|\psi^{\prime}|^{2}\), \[[\sigma]_{\mathrm{B}_{p}(\Omega)}=[u]_{\mathrm{B}_{p}(\mathbb{D},v)}. \tag{1.7}\] On the other hand, the Bergman projection \(\Pi_{\Omega}\) can be connected to \(\Pi_{\mathbb{D}}\) through the conformal map \(\psi\). Consequently, \[\|\Pi_{\Omega}\|_{\mathcal{L}(L^{p}(\Omega,\sigma))}=\|\Pi_{\mathbb{D}}\|_{\mathcal{L}(L^{p}(\mathbb{D},(\sigma\circ\psi)|\psi^{\prime}|^{2-p}))}\,; \tag{1.8}\] see, for example, [10, Theorem 2] or [21, Lemma 2.4]. So, in light of (1.7), (1.8), and (1.3), we pose the following question. Under what conditions on a weight \(v\) does it hold that \(u\in\mathrm{B}_{p}(\mathbb{D},v)\) implies \(uv^{1-p/2}\in\mathrm{B}_{p}(\mathbb{D})\)? A trivial sufficient condition is that \(v\in\mathrm{B}_{1}(\mathbb{D})\). Indeed, the definition of \([v]_{\mathrm{B}_{1}(\mathbb{D})}\) implies \[\left\langle uv^{1-p/2}\right\rangle_{Q_{I}}\left\langle\left(uv^{1-p/2}\right)^{-1}\right\rangle_{\frac{1}{p-1},Q_{I}}\leq[v]_{\mathrm{B}_{1}(\mathbb{D})}^{p}[u]_{\mathrm{B}_{p}(\mathbb{D},v)},\qquad 1\leq p<\infty. \tag{1.9}\] Thus, concerning the Bergman projection on \(\Omega\), the following trivial result holds, combining the above observations with the upper estimate in (1.3).

**Theorem A**.: _Let \(\Omega\subsetneq\mathbb{C}\) be a bounded, simply connected domain with \(|\psi^{\prime}|^{2}\in\mathrm{B}_{1}(\mathbb{D})\). Then for \(1<p<\infty\),_ \[\|\Pi_{\Omega}\|_{\mathcal{L}(L^{p}(\Omega,\sigma))}\lesssim\left(\left[\left|\psi^{\prime}\right|^{2}\right]_{\mathrm{B}_{1}(\mathbb{D})}^{p}\left[\sigma\right]_{\mathrm{B}_{p}(\Omega)}\right)^{\max\{1,\frac{1}{p-1}\}}. \tag{1.10}\] _Moreover, the implicit constant depends only on \(p\)._

In [1], Bekolle has shown that \(|\psi^{\prime}|^{2}\in\mathrm{B}_{1}(\mathbb{D})\) when \(\Omega\) is convex.
Also, when the boundary of \(\Omega\) is sufficiently smooth (e.g. Dini smooth, which is slightly better than \(C^{1}\)), it is known that \(\psi^{\prime}\) extends to a continuous, non-vanishing function on \(\overline{\mathbb{D}}\)[13], so trivially \(|\psi^{\prime}|^{2}\in\mathrm{B}_{1}(\mathbb{D})\). A weaker version of Theorem A in this latter special case of Dini smooth domains appears to have been discovered by Burbea in [10]. Beyond this, the authors are not aware of geometric conditions on \(\Omega\) which are necessary or sufficient for \(|\psi^{\prime}|^{2}\in\mathrm{B}_{1}(\mathbb{D})\). Here, our goal is to go beyond the simple observations which lead to Theorem A in two different, but related, directions.

### Sharpened conditions on \(\psi\) for weighted estimates

First, we will show that the assumption \(|\psi^{\prime}|^{2}\in\mathrm{B}_{1}(\mathbb{D})\) can be weakened while maintaining the full range of weighted estimates. In fact, in Theorem E below, we give sufficient conditions on domains \(\Omega\), in terms of \(|\psi^{\prime}|\), for which full or limited range weighted estimates hold for \(\Pi_{\Omega}\) with respect to the weight classes defined by (1.4). We have the following special case of Theorem E for the full range of \(p\):

**Theorem B**.: _Let \(\Omega\subsetneq\mathbb{C}\) be a bounded, simply connected domain. If_ \[|\psi^{\prime}|^{2}\in\bigcap_{p>1}\mathrm{B}_{p}(\mathbb{D}), \tag{1.11}\] _then for each \(\sigma\in\mathrm{B}_{p}(\Omega)\), \(\Pi_{\Omega}:L^{p}(\Omega,\sigma)\to L^{p}(\Omega,\sigma)\)._

Theorem B sharpens Theorem A in a non-trivial way. It is indeed the case that there exist domains \(\Omega\) for which (1.11) is satisfied, but \(|\psi^{\prime}|^{2}\not\in\mathrm{B}_{1}(\mathbb{D})\). See Section 2.2.1 below for an example. The main obstacle to obtaining the sufficiency in Theorem B is that \(\mathrm{B}_{p}(\mathbb{D})\) weights do not in general satisfy a reverse Holder inequality. If they did, then the strategy implemented by Johnson and Neugebauer [10] concerning homeomorphisms which preserve the \(\mathrm{A}_{p}(\mathbb{R}^{n})\) classes could be easily repeated. In particular, a parallel theorem for the Szego projection can be established by directly adapting the arguments in the proof of [10, Theorem 2.7], so we do not focus on it in this paper. Nonetheless, we get around this difficulty for the Bergman projection by using the special form of the homeomorphism \(\psi\), namely its conformality, as well as more modern tools from dyadic harmonic analysis. In the case when \(\Omega\) is a uniform domain, the condition (1.11) is sharp for weighted estimates (see Theorem G below), and furthermore we can replace the \(\mathrm{B}_{p}(\Omega)\) characteristic by the following equivalent one which is intrinsic to \(\Omega\), \[\left[\sigma\right]_{\mathrm{D}_{p}(\Omega)}=\sup_{D}\left\langle\sigma\right\rangle_{D\cap\Omega}\left\langle\sigma^{-1}\right\rangle_{\frac{1}{p-1},D\cap\Omega} \tag{1.12}\] where the supremum is taken over all Euclidean disks \(D\) centered on the boundary \(\partial\Omega\).

**Theorem C**.: _Let \(\Omega\) be a simply connected, bounded uniform domain._
Then,_ \[|\psi^{\prime}|^{2}\in\bigcap_{p>1}\mathrm{B}_{p}(\mathbb{D}), \tag{1.13}\] _if and only if for each \(\sigma\in\mathrm{D}_{p}(\Omega)\), \(\Pi_{\Omega}:L^{p}(\Omega,\sigma)\to L^{p}(\Omega,\sigma)\)._

We postpone the precise definition of a uniform domain until Section 2, but the class includes all Lipschitz domains; furthermore, the boundary can be quite irregular, allowing for Hausdorff dimension arbitrarily close to \(2\). Furthermore, the condition (1.13) is satisfied whenever \(\Omega\) is asymptotically conformal (see Section 2.2.2 below); therefore, the full range of weighted estimates holds on these domains. We remark in passing that our results should be extendable to unbounded graph domains. One only needs the existence of a conformal map that maps the upper half-plane onto \(\Omega\), and has a continuous extension to the Riemann sphere that maps the real axis to \(\partial\Omega\setminus\{\infty\}\) and fixes the point at infinity (such a conformal map is guaranteed to exist for graph domains, see [10, Proposition 2.2]). In this case, one proceeds by using the upper half-plane rather than the unit disk as the model domain. The Bekolle-Bonami regions in the upper half-plane are again Carleson tents, see [11]. We have not checked all the details, but we believe that the proofs should be completely analogous.

### Weighted weak-type estimates

Our second extension of Theorem A is to obtain the weighted weak-type \((1,1)\) analogue of (1.10). The main difference is that when establishing the analogue of (1.8) for the \(L^{p,\infty}\) spaces, the conformal map appears partly as a multiplier, and partly as a measure (for the \(L^{p}\) norm there is no distinction between a multiplier and a measure). The argument is similar to the one outlined in [1, Lemma 3.1], but we will need a generalization of the result there to weighted spaces, and we include all the arguments for completeness. Weak-type estimates in which some or all of the weight is treated as a multiplier, which now appear to be called mixed weak-type inequalities, begin with Muckenhoupt and Wheeden [14]. There was limited development of these types of estimates [17, 18] until their systematic study by Cruz-Uribe, Martell, and Perez [13]. [13] and subsequent modern developments are motivated by interpolation problems with a change of measure, but as noted in Proposition 5.2 below, and in [18, 19], they also arise in establishing weak-type estimates through a change of variable. The following theorem is proved in Section 5 below.

**Theorem D**.: _Let \(\Omega\subsetneq\mathbb{C}\) be a simply connected domain with \(\left|\psi^{\prime}\right|^{2}\in\mathrm{B}_{1}(\mathbb{D})\), then_ \[\|\Pi_{\Omega}\|_{L^{1}(\Omega,\sigma)\to L^{1,\infty}(\Omega,\sigma)}\lesssim[\sigma]^{3}_{\mathrm{B}_{1}(\Omega)}, \tag{1.14}\] _where the implicit constant depends polynomially on \(\left[\left|\psi^{\prime}\right|^{2}\right]_{\mathrm{B}_{1}(\mathbb{D})}\)._

### Organization

This paper is organized as follows. In Section 2, we collect useful preliminaries regarding dyadic structures on \(\mathbb{D}\), uniform domains, properties of conformal maps, and weight classes. In Section 3, we introduce and prove a general version of Theorem B, namely Theorem E. In Section 4, we focus on the category of uniform domains and prove a converse to Theorem E which implies Theorem C. Finally, in Section 5 we prove Theorem D.
## 2. Preliminaries: Dyadic structures, conformal maps, and uniform domains

In this section we collect some preliminaries concerning dyadic harmonic analysis, conformal maps, and uniform domains.

### Dyadic harmonic analysis in \(\mathbb{D}\)

Given a Carleson box \(Q_{I}\), defined by (1.1), let \(T_{I}\) denote the corresponding "top half" \[T_{I}=\left\{re^{\mathrm{i}\theta}\in\mathbb{D}:\frac{\ell(I)}{2}\leq 1-r\leq\ell(I),|\theta-\theta_{0}|\leq\frac{\ell(I)}{2}\right\},\] where \(\ell(I)\) denotes the arclength of the interval \(I\), normalized so that \(\ell(\mathbb{T})=1\). \(T_{I}\) coincides with \(T_{I,\rho}\) from [1] with \(\rho=\frac{1}{2}\). It is well-known that any Carleson box \(Q_{I}\) can be well-approximated by \(Q_{J}\) where \(J\) belongs to one of two dyadic systems of intervals, say \(\mathcal{D}_{1},\mathcal{D}_{2}\). By well-approximated, we mean that given an arbitrary interval \(I\) on \(\mathbb{T}\), there exist intervals \(J,J^{\prime}\in\mathcal{D}_{1}\cup\mathcal{D}_{2}\) satisfying \(J^{\prime}\subset I\subset J\) and \(|Q_{I}|\sim|Q_{J}|\sim|Q_{J^{\prime}}|\). For example, one can take \[\begin{split}&\mathcal{D}_{1}=\left\{\left[\frac{2\pi j}{2^{k}},\frac{2\pi(j+1)}{2^{k}}\right):j,k\in\mathbb{N},0\leq j<2^{k}\right\},\\ &\mathcal{D}_{2}=\left\{\left[\frac{2\pi j}{2^{k}}+\frac{2\pi}{3},\frac{2\pi(j+1)}{2^{k}}+\frac{2\pi}{3}\right):j,k\in\mathbb{N},0\leq j<2^{k}\right\}.\end{split} \tag{2.1}\] Therefore, when computing the Bekolle-Bonami characteristic of \(u\), it suffices to compute averages over Carleson boxes \(Q_{I}\) where \(I\) belongs to \(\mathcal{D}_{1}\) or \(\mathcal{D}_{2}\). The precise form of \(\mathcal{D}_{j}\) is not important; rather, we note a few essential properties of any dyadic grid \(\mathcal{D}\):

1. For each \(k\in\mathbb{N}\), \(\{I\in\mathcal{D}:\ell(I)=2^{-k}\}\) forms a partition of \(\mathbb{T}\).
2. \(\{T_{I}:I\in\mathcal{D}\}\) forms a partition of \(\mathbb{D}\).
3. Any two \(I,J\in\mathcal{D}\) are either disjoint, or one is contained in the other.
4. For each \(I\in\mathcal{D}\) and \(k\geq 1\), we can subdivide \(I\) into subintervals \(\{I_{j}^{k}\}_{j=1}^{2^{k}}\) with side length \(\ell(I_{j}^{k})=2^{-k}\ell(I)\), which we call the \(k\)_-th generation_ of \(I\). For \(J=I_{j}^{1}\), we say \(I\) is the _dyadic parent_ of \(J\) and, analogously, \(Q_{I}\) the dyadic parent of \(Q_{J}\).

Throughout, \(u\), \(v\), and \(w\) will be generic weights on \(\mathbb{D}\). We say a weight \(u\) is doubling (with respect to \(\mathcal{D}\)) if there exists \(c_{u}>0\) such that for all intervals \(I\subset J\) in \(\mathbb{T}\) (resp. in \(\mathcal{D}\)) with \(\ell(J)=2\ell(I)\), \[c_{u}^{-1}\int_{Q_{J}}u\,dA\leq\int_{Q_{I}}u\,dA\leq c_{u}\int_{T_{I}}u\,dA. \tag{2.2}\] If only the first inequality in (2.2) holds, then we say \(u\) is weakly doubling. It is not difficult to show that any weight \(u\in\mathrm{B}_{p}(\mathbb{D})\) is doubling, \(1\leq p<\infty\). For a weight \(v\) on \(\mathbb{D}\), define the maximal operator \[M_{v}f(z):=\sup_{I:Q_{I}\ni z}\,\langle f\rangle_{v,Q_{I}}\,,\quad f\in L^{1}(\mathbb{D},v),z\in\mathbb{D}. \tag{2.3}\] Note that when \(v\equiv 1\), \(M_{v}\) corresponds to the ordinary maximal function in the Bergman setting, and in this case we simply write \(M_{v}=M\). When the supremum is restricted only to \(I\in\mathcal{D}\), we use the notation \(M_{v}^{\mathcal{D}}\).
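To illustrate the approximation property of the two shifted grids in (2.1), here is a small Python sketch (an illustration added here, with hypothetical helper names): for random arcs \(I\subset\mathbb{T}\) it finds the finest interval \(J\in\mathcal{D}_{1}\cup\mathcal{D}_{2}\) containing \(I\) and confirms that \(\ell(J)\lesssim\ell(I)\) with a uniform constant (the classical one-third trick gives a bound of about \(6\)).

```python
# Demo of the approximation property of the two shifted dyadic grids in (2.1):
# every arc I on the circle is contained in some grid interval J of
# comparable length.  Brute-force illustration on [0, 2*pi).
import math, random

TWO_PI = 2 * math.pi
SHIFTS = (0.0, TWO_PI / 3)      # the two grids D1, D2 of (2.1)

def contains(start, length, a, b):
    """True if the circular arc [start, start+length) contains [a, b)."""
    return (a - start) % TWO_PI + (b - a) <= length

def finest_containing(a, b):
    """Length of the finest interval of D1 u D2 containing the arc [a, b)."""
    for k in range(15, -1, -1):                 # fine -> coarse scales
        step = TWO_PI / 2**k
        for shift in SHIFTS:
            j = int(((a - shift) % TWO_PI) // step)   # the only candidate cell
            if contains((shift + j * step) % TWO_PI, step, a, b):
                return step
    return TWO_PI                               # whole circle always works

random.seed(0)
worst = 0.0
for _ in range(5000):
    a = random.uniform(0.0, TWO_PI)
    ell = random.uniform(1e-4, TWO_PI / 8)
    worst = max(worst, finest_containing(a, a + ell) / ell)
print(f"worst ratio l(J)/l(I): {worst:.2f}")    # stays below the 1/3-trick bound 6
```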
Standard considerations show that for any \(v\) and any dyadic grid \(\mathcal{D}\), \(M_{v}^{\mathcal{D}}:L^{p}(\mathbb{D},v)\to L^{p}(\mathbb{D},v)\) with norm at most \(p^{\prime}=\frac{p}{p-1}\)[10, Theorem 15.1]. By the approximation property of \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\), \[M_{v}f(z)\lesssim M_{v}^{\mathcal{D}_{1}}f(z)+M_{v}^{\mathcal{D}_{2}}f(z),\quad f\in L^{1}(\mathbb{D},v),z\in\mathbb{D} \tag{2.4}\] holds whenever \(v\) is doubling. We also outline the following sparse estimate in the Bergman setting, which we will use and which is by now quite standard. Let \[\mathcal{A}_{\mathcal{D},v}f=\sum_{I\in\mathcal{D}}\left\langle f\right\rangle_{v,Q_{I}}\chi_{Q_{I}}. \tag{2.5}\] Then, \[\left|\left\langle\mathcal{A}_{\mathcal{D},v}f,g\,wv\right\rangle\right| =\sum_{I\in\mathcal{D}}\left(\int_{Q_{I}}v\,dA\right)\left\langle f\right\rangle_{v,Q_{I}}\left\langle gw\right\rangle_{v,Q_{I}}\] \[\leq c_{v}\sum_{I\in\mathcal{D}}\left(\int_{T_{I}}v\,dA\right)\left\langle w\right\rangle_{v,Q_{I}}\left\langle w^{-1}\right\rangle_{v,Q_{I}}\left\langle fw\right\rangle_{w^{-1}v,Q_{I}}\left\langle g\right\rangle_{wv,Q_{I}}\] \[\leq c_{v}\left[w\right]_{\mathrm{B}_{2}(v)}\int_{\mathbb{D}}M_{w^{-1}v}(fw)w^{-\frac{1}{2}}M_{wv}(g)w^{\frac{1}{2}}v\,dA\] \[\leq 4c_{v}\left[w\right]_{\mathrm{B}_{2}(v)}\|f\|_{L^{2}(wv)}\,\|g\|_{L^{2}(wv)}\,.\] This, together with a slight modification for \(p\neq 2\) (see, e.g. [14]), gives \[\|\mathcal{A}_{\mathcal{D},v}\|_{L^{p}(\mathbb{D},wv)}\leq pp^{\prime}c_{v}\left[w\right]_{\mathrm{B}_{p}(v)}^{\max\left\{1,\frac{1}{p-1}\right\}}. \tag{2.6}\]

### Conformal estimates

We first verify that the class \(\mathrm{B}_{p}(\Omega)\) is independent of the choice of conformal map, and that the collection \(\{\psi(Q_{I})\}_{I\subseteq\mathbb{T}}\) forms a Muckenhoupt basis, which means the maximal function \[M_{\psi}f(z)=\sup_{I:z\in\psi(Q_{I})}\left\langle f\right\rangle_{\psi(Q_{I})} \tag{2.7}\] is bounded on \(L^{p}(\Omega,\sigma)\) when \(\sigma\in\mathrm{B}_{p}(\Omega)\).

**Proposition 2.1**.: _Suppose there exists a conformal map \(\psi_{0}\) from \(\mathbb{D}\) onto \(\Omega\) such that \(\left|\psi_{0}^{\prime}\right|^{2}\) is a doubling weight (2.2). Then for any other such conformal map \(\psi\),_

* \(\left|\psi^{\prime}\right|^{2}\) _is a weakly doubling measure,_
* \(\sup_{\psi}\sup_{I}\left\langle\sigma\right\rangle_{\psi(Q_{I})}\left\langle\sigma^{-1}\right\rangle_{\frac{1}{p-1},\psi(Q_{I})}\lesssim\inf_{\psi}\sup_{I}\left\langle\sigma\right\rangle_{\psi(Q_{I})}\left\langle\sigma^{-1}\right\rangle_{\frac{1}{p-1},\psi(Q_{I})}\)_._

_If in addition, \(\left|\psi_{0}^{\prime}\right|^{2}\in\cup_{p\in[1,\infty)}\mathrm{B}_{p}(\mathbb{D})\), then_

* \(\left|\psi^{\prime}\right|^{2}\) _is doubling,_
* \(\{\psi(Q_{I})\}_{I}\) _forms a Muckenhoupt basis._

Proof.: Since the claims are obvious for \(\psi\) which is a rotation of \(\psi_{0}\), it is enough to consider \(\psi=\psi_{0}\circ\tau_{\lambda}\), where \(\tau_{\lambda}(z)=\frac{\lambda-z}{1-\overline{\lambda}z}\) is the automorphism of \(\mathbb{D}\) that exchanges the points \(0\) and \(\lambda\). First, notice that \(|\tau_{\lambda}^{\prime}|^{2}\) itself is a doubling weight with doubling constant independent of \(\lambda\).
We also rely on the following almost invariance property of \(Q_{I}\) under \(\tau_{\lambda}\): there exists \(C>0\) such that for each \(\lambda\in\mathbb{D}\) and \(I\subset\mathbb{T}\), there exist \(I_{\lambda},I_{\lambda}^{*}\subset\mathbb{T}\) satisfying \[Q_{I_{\lambda}}\subset\tau_{\lambda}(Q_{I})\subset Q_{I_{\lambda}^{*}},\qquad C^{-1}\leq\frac{\ell(I_{\lambda}^{*})}{\ell(I_{\lambda})}\leq C. \tag{2.8}\] This geometric property can be seen, for example, by considering the Mobius-invariant family of regions of \(\mathbb{D}\) bounded by a geodesic in the hyperbolic metric and an arc in \(\mathbb{T}\) (see, e.g. [10, p. 321]), as such regions well-approximate Carleson tents. Now, let \(I\subset J\) with \(\ell(J)=2\ell(I)\). Since \(\tau_{\lambda}\) is doubling, \(|Q_{J_{\lambda}^{*}}|\sim|Q_{I_{\lambda}}|\), so \[|\psi(Q_{J})|\leq|\psi_{0}(Q_{J_{\lambda}^{*}})|\lesssim|\psi_{0}(Q_{I_{\lambda}})|\leq|\psi(Q_{I})|,\] which establishes i. To prove ii., let \(\psi_{1},\psi_{2}\) be conformal maps. Then \(\psi_{1}=\psi_{2}\circ\tau_{\lambda}\) for some \(\lambda\in\mathbb{D}\). Therefore, by (2.8) and the doubling property of \(\psi_{2}\), \[\left\langle\sigma\right\rangle_{\psi_{1}(Q_{I})}\leq\frac{\int_{\psi_{2}(Q_{I_{\lambda}^{*}})}\sigma}{|\psi_{2}(Q_{I_{\lambda}^{*}})|}\lesssim\left\langle\sigma\right\rangle_{\psi_{2}(Q_{I_{\lambda}^{*}})},\] and the same for \(\sigma^{\frac{-1}{p-1}}\), which establishes ii. The proof of iii. is easier than that of i., simply noting that \(|\tau_{\lambda}(Q_{J})|\sim|\tau_{\lambda}(Q_{I})|\sim|\tau_{\lambda}(T_{I})|\). Now, to prove iv., let \(f\in L^{1}(\Omega)\) be nonnegative, \(I\subset\mathbb{T}\), and \(z\in\psi(Q_{I})\). Then by change of variable, \[\left\langle f\right\rangle_{\psi(Q_{I})}=\left\langle f\circ\psi\right\rangle_{v,Q_{I}}\leq M_{v}(f\circ\psi)(\psi^{-1}(z)),\qquad v=\left|\psi^{\prime}\right|^{2},\] which immediately implies the pointwise inequality \[M_{\psi}f(z)\leq M_{v}(g)(\psi^{-1}(z)),\quad g=f\circ\psi,\quad z\in\Omega.\] Fix \(\sigma\in\mathrm{B}_{p}(\Omega)\), which equivalently means \(u=\sigma\circ\psi\in\mathrm{B}_{p}(\mathbb{D},v)\) (recall (1.6)). By (2.6) and the obvious inequality \[M_{v}g\lesssim\sum_{j=1,2}\sum_{I\in\mathcal{D}_{j}}\langle g\rangle_{v,Q_{I}}\chi_{Q_{I}},\] \(M_{v}\) is bounded on \(L^{p}(\mathbb{D},uv)\). Then notice, again using change of variable, and assuming \(f\in L^{p}(\Omega,\sigma)\), \[\|M_{\psi}f\|_{L^{p}(\Omega,\sigma)} \leq\|[M_{v}g]\circ\psi^{-1}\|_{L^{p}(\Omega,\sigma)}\] \[=\|M_{v}g\|_{L^{p}(\mathbb{D},uv)}\] \[\lesssim\|g\|_{L^{p}(\mathbb{D},uv)}\] \[=\|f\|_{L^{p}(\Omega,\sigma)}.\] To conclude this section, we state a convenient version of the Koebe distortion theorem.

**Proposition 2.2**.: _There exists an absolute constant \(K\) so that for all intervals \(I\subset\mathbb{T}\), all \(z_{1},z_{2}\in T_{I}\), and any \(\psi:\mathbb{D}\to\mathbb{C}\) which is conformal, one has_ \[K^{-1}\leq\frac{|\psi^{\prime}(z_{1})|}{|\psi^{\prime}(z_{2})|}\leq K. \tag{2.9}\]

Proof.: A classical corollary to the Koebe distortion theorem [10, Cor 1.1.5] is that (2.9) holds for any \(z_{1},z_{2}\in\mathbb{D}\) and \(K=e^{6\lambda(z_{1},z_{2})}\), where \(\lambda\) is the hyperbolic metric in \(\mathbb{D}\). However, there exist an absolute constant \(r>0\) and points \(z_{I}\in\mathbb{D}\) such that \(T_{I}\subset\{z\in\mathbb{D}:\lambda(z,z_{I})\leq r\}\), whence (2.9) follows with \(K=e^{12r}\).
#### 2.2.1. Example

Next, to show Theorem E is a non-trivial qualitative extension of Theorem A, let us construct a conformal map \(\psi\) such that \(|\psi^{\prime}|^{2}\) belongs to \(\mathrm{B}_{p}(\mathbb{D})\) for all \(p>1\), but not to \(\mathrm{B}_{1}(\mathbb{D})\). Define \[\psi_{0}(z)=\frac{z}{\log z}.\] This is a conformal map on the disk \(D(\frac{1}{4},\frac{1}{4})=\{z\in\mathbb{C}:|z-\frac{1}{4}|<\frac{1}{4}\}\) whose image is drawn in Figure 2.2.1. Compute \[|\psi_{0}^{\prime}(z)|^{2}=\left|\frac{\log z-1}{(\log z)^{2}}\right|^{2}\sim\frac{1}{|\log z|^{2}}.\] But \(|\log(z)|^{\alpha}\) is integrable over \(D(\frac{1}{4},\frac{1}{4})\) for every \(\alpha\in\mathbb{R}\). Define \(\psi_{1}(z)=\psi_{0}(\frac{z+1}{4})\) so that \(\psi_{1}(\mathbb{D})=\psi_{0}(D(\frac{1}{4},\frac{1}{4}))\). Then \(\psi_{1}\) is a conformal map on \(\mathbb{D}\) with \(|\psi_{1}^{\prime}|^{2}\in\mathrm{B}_{p}(\mathbb{D})\) for each \(p>1\). However, \(\psi_{1}^{\prime}(-1)=0\), so \(|\psi_{1}^{\prime}|^{2}\) does not belong to \(\mathrm{B}_{1}(\mathbb{D})\).

#### 2.2.2. Asymptotically conformal domains

Given a bounded domain \(\Omega\subset\mathbb{C}\) and two points \(z_{1},z_{2}\in\partial\Omega\), define \(\partial\Omega(z_{1},z_{2})\) to be the smaller arc of \(\partial\Omega\) between the points \(z_{1}\) and \(z_{2}\). \(\Omega\) is said to be asymptotically conformal if \[\max_{z\in\partial\Omega(z_{1},z_{2})}\frac{|z-z_{1}|+|z-z_{2}|}{|z_{1}-z_{2}|}\to 1\quad\text{as}\quad|z_{1}-z_{2}|\to 0.\]

**Lemma 2.3**.: _If \(\Omega\) is asymptotically conformal, then \(|\psi^{\prime}|^{2}\in\mathrm{B}_{p}(\mathbb{D})\) for every \(p>1\)._

Proof.: In this case, \(\log\psi^{\prime}\) belongs to the little Bloch space [13]. But it is well-known [16, Theorem 8.15] that this is equivalent to \(\log\psi^{\prime}\) belonging to VMOA defined with respect to the Bergman metric. More precisely, \(\log\psi^{\prime}\in\mathrm{Hol}(\mathbb{D})\) and satisfies, for any fixed \(r>0\), \[\lim_{R\to 1^{-}}\sup_{R\leq|z|<1}\frac{1}{|\beta(z,r)|}\int_{\beta(z,r)}|\log\psi^{\prime}-(\log\psi^{\prime})_{z}|^{2}\,dA=0.\] Here \(\beta(z,r)\) denotes a disk in the hyperbolic metric centered at \(z\) and of radius \(r\), while for any function \(f\), we write \(f_{z}\) to indicate \(|\beta(z,r)|^{-1}\int_{\beta(z,r)}f\,dA\), omitting the implicit dependence on \(r\). We say \(f\in\mathrm{BMO}\) if \(f\) satisfies the weaker condition that \[\|f\|_{\mathrm{BMO}}:=\sup_{z\in\mathbb{D}}\frac{1}{|\beta(z,r)|}\int_{\beta(z,r)}|f-f_{z}|^{2}\,dA\] is finite. Different choices of \(r\) give rise to equivalent norms, so this space (along with VMOA) is independent of \(r>0\). Moreover, any function in VMOA can be arbitrarily well-approximated in the BMO norm by elements of \(C(\overline{\mathbb{D}})\)[16, Proposition 8.12], and hence in particular by polynomials, which belong to VMOA. An additional key fact [16, p. 218] that we will use is that any \(f\in\mathrm{BMOA}=\mathrm{BMO}\cap\mathrm{Hol}(\mathbb{D})\) satisfies the Lipschitz-like condition with respect to the hyperbolic metric \(\lambda\): \[|f(z_{1})-f(z_{2})|\leq C\|f\|_{\mathrm{BMO}}\lambda(z_{1},z_{2}),\quad z_{1},z_{2}\in\mathbb{D},\] where \(C\) is an independent constant. Let \(p>1\) and write \(\log\psi^{\prime}=g+h\), where \(g\) is a polynomial and \(h\in\mathrm{BMOA}\) satisfies \(\|h\|_{\mathrm{BMO}}\leq\frac{\log(3/2)}{6Cr}\min\{1,p-1\}\). We will show that \(|\psi^{\prime}|^{2}\in\mathrm{B}_{p}(\mathbb{D})\).
Fix a Carleson tent \(Q_{I}\) corresponding to \(I\subset\mathbb{T}\), and let \(I_{j}^{k}\) denote the dyadic descendants of \(I\) belonging to the \(k\)-th generation (so \(k\geq 1\) and \(1\leq j\leq 2^{k}\)). We then may choose \(r>0\) and points \(z_{j,k}\in T_{I_{j}^{k}}\) so that
\[T_{I_{j}^{k}}\subset\beta(z_{j,k},r),\quad|T_{I_{j}^{k}}|\sim|\beta(z_{j,k},r)|\sim 2^{-2k}|Q_{I}|.\]
Let \(z_{I}\) be such a point corresponding to \(T_{I}=T_{I_{1}^{0}}\). It is straightforward to see then that if \(z\in T_{I_{j}^{k}}\), there holds
\[\lambda(z,z_{I})\leq(2k+1)r,\quad|h(z)-h(z_{I})|\leq 3Crk\|h\|_{\mathrm{BMO}}.\]
Then, we can directly estimate:
\[\begin{split}\frac{1}{|Q_{I}|}\int_{Q_{I}}|\psi^{\prime}|^{2}&\leq e^{2\|g\|_{\infty}}\cdot e^{2\mathrm{Re}\,h(z_{I})}\cdot\frac{1}{|Q_{I}|}\int_{Q_{I}}e^{2\mathrm{Re}\,(h-h(z_{I}))}\\ &\leq\frac{e^{2(\|g\|_{\infty}+\mathrm{Re}\,h(z_{I}))}}{|Q_{I}|}\sum_{k\geq 0}\sum_{j=1}^{2^{k}}\int_{T_{I_{j}^{k}}}e^{2|h-h(z_{I})|}\\ &\lesssim e^{2(\|g\|_{\infty}+\mathrm{Re}\,h(z_{I}))}\sum_{k\geq 0}2^{-k}e^{6Crk\|h\|_{\mathrm{BMO}}}\\ &=e^{2(\|g\|_{\infty}+\mathrm{Re}\,h(z_{I}))}\sum_{k\geq 0}\left(\frac{e^{6Cr\|h\|_{\mathrm{BMO}}}}{2}\right)^{k}\\ &\lesssim e^{2(\|g\|_{\infty}+\mathrm{Re}\,h(z_{I}))}.\end{split}\]
A similar computation shows that
\[\left(\frac{1}{|Q_{I}|}\int_{Q_{I}}|\psi^{\prime}|^{-2/(p-1)}\right)^{p-1}\lesssim e^{2(\|g\|_{\infty}-\mathrm{Re}\,h(z_{I}))},\]
completing the proof.

### Uniform Domains and the \(\mathrm{D}_{p}(\Omega)\) class

We begin with the definition of uniform domains, which have also been called \((\varepsilon,\infty)\) domains in the literature. Such domains were introduced in [10, 11] in the context of the extension problem for BMO and Sobolev spaces. A domain \(\Omega\) is called a _uniform domain_ if there exists \(\varepsilon>0\) so that for all points \(x,y\in\Omega\), there exists a rectifiable arc \(\gamma:[0,1]\to\Omega\) with \(\gamma(0)=x\) and \(\gamma(1)=y\) satisfying the following two properties:

1. \(\int_{0}^{1}|\gamma^{\prime}(t)|\,dt\leq\frac{1}{\varepsilon}|x-y|\);
2. \(z=\gamma(t)\) for \(t\in(0,1)\) satisfies
\[\inf_{\zeta\in\Omega^{c}}|z-\zeta|\geq\varepsilon\frac{|x-z||y-z|}{|x-y|}.\]

The next proposition gives two more equivalent definitions of uniform domains.

**Proposition 2.4**.: _Let \(\Omega\) be a bounded simply connected domain and \(\psi\) a conformal map from \(\mathbb{D}\) onto \(\Omega\). The following are equivalent._

A. _\(\Omega\) is a uniform domain._
B. _There exists \(C>0\) such that for each \(z_{1},z_{2}\in\partial\Omega\),_
\[\operatorname{diam}\partial\Omega(z_{1},z_{2})\leq C|z_{1}-z_{2}|.\]
C. _There exists \(\Psi:\mathbb{C}\to\mathbb{C}\) quasiconformal such that \(\Psi=\psi\) on \(\mathbb{D}\)._

Proof.: The equivalence of B. and C. is a well-known result due to Ahlfors [10, p. 94]. The equivalence of A. and C. goes through the extension problem for BMO functions, see [11, p. 42].

As discussed in the introduction, while the definition of \(\mathrm{D}_{p}(\Omega)\) in (1.12) is more intrinsic and geometric, the \(\mathrm{B}_{p}(\Omega)\) characteristic defined in (1.4) has many advantages over it. Not only does it clearly connect to properties of the conformal map, but it will also allow us to leverage the dyadic structure of \(Q_{I}\) on \(\mathbb{D}\). To prove the equivalence of these classes for uniform domains, we will use part C. of Proposition 2.4. The two important properties of quasiconformal maps \(\Psi\) which we will need are:
1. \(|J\Psi|:=|\partial\Psi|^{2}-|\overline{\partial}\Psi|^{2}\) belongs to \(\mathrm{A}_{\infty}(\mathbb{C})\);
2. \(\Psi\) is quasisymmetric.

Let us explain these two properties. First, there are many equivalent definitions of the weight class \(\mathrm{A}_{\infty}(\mathbb{C})\) [1, Corollary IV.2.13 and Theorem IV.2.15]. We will say a weight \(v\) belongs to \(\mathrm{A}_{\infty}(\mathbb{C})\) if there exist \(C,\delta>0\) such that for each cube \(Q\subset\mathbb{C}\) and each measurable set \(E\subset Q\),
\[\frac{|E|}{|Q|}\leq C\left(\frac{\int_{E}v\,dA}{\int_{Q}v\,dA}\right)^{\delta}.\]
Second, \(f:\mathbb{C}\to\mathbb{C}\) is _quasisymmetric_ if there exists an increasing function \(\eta:\mathbb{R}^{+}\to\mathbb{R}^{+}\) such that for \(z_{0},z_{1},z_{2}\in\mathbb{C}\),
\[\frac{|f(z_{1})-f(z_{0})|}{|f(z_{2})-f(z_{0})|}\leq\eta\left(\frac{|z_{1}-z_{0}|}{|z_{2}-z_{0}|}\right).\]
It follows by taking \(z_{i}=f^{-1}(w_{i})\) that if \(f\) is quasisymmetric, so is \(f^{-1}\), though with a different \(\eta\). Property 1. is a well-known result due to Gehring [10], see also [1, Theorem 13.4.2]. For 2., see [1, Theorem 3.5.3]. With these two facts, we can prove the following lemma for \(\psi\), which will be crucial in showing the sharpness of our condition as well as the equivalence of \(\mathrm{D}_{p}(\Omega)\) and \(\mathrm{B}_{p}(\Omega)\).

**Lemma 2.5**.: _Let \(\Omega\subsetneq\mathbb{C}\) be a bounded, simply connected uniform domain with \(\psi:\mathbb{D}\to\Omega\) conformal, and fix \(I\subseteq\mathbb{T}.\) There exists \(\delta>0\) such that for all measurable \(E\subseteq Q_{I}\),_
\[\frac{|E|}{|Q_{I}|}\lesssim\left(\frac{|\psi(E)|}{|\psi(Q_{I})|}\right)^{\delta}.\]

Proof.: Let \(I\) be an interval in \(\mathbb{T}\). There exists a Euclidean cube \(P\supseteq Q_{I}\) with \(|P|\sim|Q_{I}|\). Since \(|J\Psi|\in\mathrm{A}_{\infty}(\mathbb{C})\), there exists \(\delta>0\) such that for any \(E\subset Q_{I}\subset P\),
\[\frac{|E|}{|Q_{I}|}\lesssim\frac{|E|}{|P|}\lesssim\left(\frac{\int_{E}|J\Psi|\,dA}{\int_{P}|J\Psi|\,dA}\right)^{\delta}\leq\left(\frac{\int_{E}|J\Psi|\,dA}{\int_{Q_{I}}|J\Psi|\,dA}\right)^{\delta}=\left(\frac{\int_{E}|\psi^{\prime}|^{2}\,dA}{\int_{Q_{I}}|\psi^{\prime}|^{2}\,dA}\right)^{\delta}=\left(\frac{|\psi(E)|}{|\psi(Q_{I})|}\right)^{\delta}.\]

We are ready to prove the promised equivalence.

**Proposition 2.6**.: _Suppose that \(\Omega\) is a bounded, simply connected uniform domain and \(\psi\) maps \(\mathbb{D}\) onto \(\Omega\) conformally. Then, for any weight \(\sigma\),_
\[[\sigma]_{\mathrm{B}_{p}(\Omega)}\sim[\sigma]_{\mathrm{D}_{p}(\Omega)}\,.\]

Proof.: The upper inequality will follow from the fact that \(\Psi\) is quasisymmetric, and the lower inequality from a similar argument for \(\Psi^{-1}\). To prove the upper bound, let \(Q_{I}\subset\mathbb{D}\), and let \(z_{I}\) be a point in \(Q_{I}\) satisfying \(|z_{I}-z|\sim\ell(I)\) for every \(z\in\partial Q_{I}\). Let \(r=\min_{z\in\partial Q_{I}}|\psi(z)-\psi(z_{I})|\) and \(R=\max_{z\in\partial Q_{I}}|\psi(z)-\psi(z_{I})|\). The quasisymmetry of \(\Psi\) shows that \(r\sim R\), and clearly
\[D(\psi(z_{I}),r)\subset\psi(Q_{I})\subset D(\psi(z_{I}),R).\]
Pick some \(p\in\partial\Omega\cap D(\psi(z_{I}),R)\); by the triangle inequality, \(D(\psi(z_{I}),R)\subset D(p,2R)\).
All in all,
\[\psi(Q_{I})\subset D(p,2R)\cap\Omega\]
and
\[|\psi(Q_{I})|\geq|D(\psi(z_{I}),r)|\sim r^{2}\sim R^{2}\sim|D(p,2R)|\geq|D(p,2R)\cap\Omega|.\]
This computation shows that every \(\psi(Q_{I})\) is contained in a disk \(D\) centered on \(\partial\Omega\) of comparable area, so \([\sigma]_{\mathrm{B}_{p}(\Omega)}\lesssim[\sigma]_{\mathrm{D}_{p}(\Omega)}\).

On the other hand, let \(p\in\partial\Omega\) and \(s>0\). We want to find \(Q_{I}\) such that
\[D(p,s)\cap\Omega\subset\psi(Q_{I}),\qquad|D(p,s)\cap\Omega|\gtrsim|\psi(Q_{I})|\,, \tag{2.10}\]
from which \([\sigma]_{\mathrm{D}_{p}(\Omega)}\lesssim[\sigma]_{\mathrm{B}_{p}(\Omega)}\) immediately follows. As before, set \(r=\min_{|z-p|=s}|\Psi^{-1}(z)-\Psi^{-1}(p)|\) and \(R=\max_{|z-p|=s}|\Psi^{-1}(z)-\Psi^{-1}(p)|\). Again, since \(\Psi\) is quasisymmetric, \(r\sim R\), and with \(q=\Psi^{-1}(p)\in\mathbb{T}\),
\[D(q,r)\subset\Psi^{-1}(D(p,s))\subset D(q,R).\]
Simple geometry shows that there exists \(Q_{I}\supset D(q,R)\cap\mathbb{D}\) with \(\left|Q_{I}\right|\sim R^{2}\). Clearly,
\[D(p,s)\cap\Omega\subset\psi(Q_{I}).\]
To establish (2.10), it remains to show \(\left|\psi(Q_{I})\right|\lesssim\left|D(p,s)\cap\Omega\right|\). Thus, the proof is concluded by applying Lemma 2.5 to obtain
\[1\lesssim\left(\frac{\left|D(q,r)\cap\mathbb{D}\right|}{\left|Q_{I}\right|}\right)^{1/\delta}\lesssim\frac{\left|\psi(D(q,r)\cap\mathbb{D})\right|}{\left|\psi(Q_{I})\right|}\lesssim\frac{\left|D(p,s)\cap\Omega\right|}{\left|\psi(Q_{I})\right|}.\]

## 3. Limited range weighted estimates for \(\Pi_{\Omega}\)

In addition to the weight classes \(\mathrm{B}_{p}(\Omega)\) defined in (1.4), for \(q_{0}\geq 1\), define
\[\mathrm{B}_{q_{0}^{+}}(\mathbb{D})=\bigcap_{q>q_{0}}\mathrm{B}_{q}(\mathbb{D}),\]
and for \(1<s<\infty\), define the reverse Hölder class \(\mathrm{RH}_{s}(\Omega)\) to be the weights \(\sigma\) on \(\Omega\) such that the characteristic
\[\left[\sigma\right]_{\mathrm{RH}_{s}(\Omega)}=\sup_{I\subseteq\mathbb{T}}\left\langle\sigma\right\rangle_{s,\psi(Q_{I})}\left(\left\langle\sigma\right\rangle_{\psi(Q_{I})}\right)^{-1}\]
is finite. The goal of this section is to prove the following theorem, of which Theorem B from the introduction is the special case \(p_{0}=q_{0}=1\).

**Theorem E**.: _Let \(\Omega\) be a bounded, simply connected domain and \(\psi\) a conformal map from \(\mathbb{D}\) onto \(\Omega\). Let \(1\leq p_{0}<2\) and set \(q_{0}=\frac{p_{0}}{2-p_{0}}\). If \(\left|\psi^{\prime}\right|^{2}\in\mathrm{B}_{q_{0}^{+}}(\mathbb{D})\), \(p_{0}<p<p_{0}^{\prime}\), and_
\[\sigma\in\left\{\begin{array}{ll}\mathrm{B}_{p}(\Omega),&p_{0}=1,\\ \mathrm{B}_{\frac{p}{p_{0}}}(\Omega)\cap\mathrm{RH}_{\frac{p_{0}^{\prime}}{p_{0}^{\prime}-p}}(\Omega),&p_{0}>1,\end{array}\right. \tag{3.1}\]
_then \(\Pi_{\Omega}:L^{p}(\Omega,\sigma)\to L^{p}(\Omega,\sigma)\)._

If \(\left|\psi^{\prime}\right|^{2}\) belonged to the smaller class \(\mathrm{B}_{q_{0}}(\mathbb{D})\), this theorem would be more or less trivial, for the same reasons leading to Theorem A above. We outline that argument below, after the statement of Theorem F, the main technical result from which Theorem E follows.
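It is convenient to record here an elementary duality relation between \(p_{0}\) and \(q_{0}=\frac{p_{0}}{2-p_{0}}\), obtained by a one-line computation, which reappears when checking exponents below:
\[q_{0}^{\prime}=\frac{q_{0}}{q_{0}-1}=\frac{p_{0}/(2-p_{0})}{(2p_{0}-2)/(2-p_{0})}=\frac{p_{0}}{2(p_{0}-1)}=\frac{p_{0}^{\prime}}{2},\qquad 1<p_{0}<2.\]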
For two weights \(u,v\) on \(\mathbb{D}\) and a dyadic grid \(\mathcal{D}\), define the weighted characteristic
\[\left[u\right]_{\mathrm{B}_{p}(\mathcal{D},v)}=\sup_{I\in\mathcal{D}}\left\langle u\right\rangle_{v,Q_{I}}\left\langle u^{-1}\right\rangle_{\frac{1}{p-1},v,Q_{I}}.\]
We say \(u\) belongs to \(\mathrm{B}_{p}(\mathcal{D},v)\) if the above characteristic is finite, and we drop the dependence on \(v\) when \(v\equiv 1\). Similarly, we let
\[\left[u\right]_{\mathrm{RH}_{s}(\mathcal{D},v)}=\sup_{I\in\mathcal{D}}\left\langle u\right\rangle_{s,v,Q_{I}}\left(\left\langle u\right\rangle_{v,Q_{I}}\right)^{-1}.\]

**Theorem F**.: _Let \(\psi:\mathbb{D}\to\mathbb{C}\) be conformal, \(\mathcal{D}\) a dyadic grid on \(\mathbb{T}\), and \(q_{0}\geq 1\). Suppose that \(v=\left|\psi^{\prime}\right|^{2}\in\mathrm{B}_{q_{0}^{+}}(\mathbb{D})\). If_
\[u\in\left\{\begin{array}{ll}\mathrm{B}_{2}(\mathcal{D},v),&q_{0}=1;\\ \mathrm{B}_{\frac{q_{0}+1}{q_{0}}}(\mathcal{D},v)\cap\mathrm{RH}_{q_{0}}(\mathcal{D},v),&1<q_{0}<\infty,\end{array}\right. \tag{3.2}\]
_then \(u\in\mathrm{B}_{2}(\mathcal{D})\)._

If \(v\) belonged to the smaller class \(\mathrm{B}_{q_{0}}(\mathbb{D})\), this result would be trivial. Indeed, for \(q_{0}=1\), consult (1.9). For \(q_{0}>1\), \(r=\frac{1+q_{0}}{q_{0}}\) satisfies \(-q_{0}=\frac{1}{1-r}\). Therefore,
\[\left\langle u\right\rangle_{Q_{I}}\left\langle u^{-1}\right\rangle_{Q_{I}}\leq[v]_{\mathrm{B}_{q_{0}}}^{\frac{2}{q_{0}}}\left\langle u\right\rangle_{q_{0},v,Q_{I}}\left\langle u^{-1}\right\rangle_{q_{0},v,Q_{I}}\leq[v]_{\mathrm{B}_{q_{0}}}^{\frac{2}{q_{0}}}\left[u\right]_{\mathrm{RH}_{q_{0}}(v)}\left[u\right]_{\mathrm{B}_{r}(v)}. \tag{3.3}\]
The key idea in the proof of Theorem F is to use dyadic versions of the weight characteristics and apply a dyadic regularization. We remark that a similar idea appears in [10] when proving weighted inequalities for the vector-valued Bergman projection with matrix weights. Recall the weighted dyadic maximal operators \(M_{v}^{\mathcal{D}}\) from (2.3) and the following paragraph. Define the class \(\mathrm{B}_{\infty}(\mathcal{D},v)\) to be those weights \(u\) satisfying
\[\left[u\right]_{\mathrm{B}_{\infty}(\mathcal{D},v)}:=\sup_{I\in\mathcal{D}}\frac{\left\langle M_{v}^{\mathcal{D}}(u\chi_{Q_{I}})\right\rangle_{v,Q_{I}}}{\left\langle u\right\rangle_{v,Q_{I}}}<\infty. \tag{3.4}\]

**Proposition 3.1**.: _If there exist \(C>0\) and \(0<s<1\) such that_
\[\left\langle u\right\rangle_{v,Q_{I}}\leq C\left\langle u\right\rangle_{s,v,Q_{I}},\qquad\forall I\in\mathcal{D}, \tag{3.5}\]
_then \(u\in\mathrm{B}_{\infty}(\mathcal{D},v)\)._

Proof.: Let \(I\in\mathcal{D}\) and \(z\in Q_{I}\). Then,
\[M_{v}^{\mathcal{D}}(u\chi_{Q_{I}})(z)=\sup_{\begin{subarray}{c}J\in\mathcal{D}:\\ J\subset I\\ z\in Q_{J}\end{subarray}}\left\langle u\right\rangle_{v,Q_{J}}\leq C\left[M_{v}^{\mathcal{D}}(u^{s}\chi_{Q_{I}})(z)\right]^{\frac{1}{s}}.\]
Multiplying the above display by \(v(z)\), integrating over \(z\in Q_{I}\), and using the fact that \(M_{v}^{\mathcal{D}}\) is bounded on \(L^{1/s}(\mathbb{D},v)\) concludes the proof.

An immediate consequence of Proposition 3.1 is
\[\bigcup_{1<p<\infty}\left\{u:u\in\mathrm{B}_{p}(\mathcal{D},v)\right\}\cup\left\{w^{p}:w\in\mathrm{RH}_{p}(\mathcal{D},v)\right\}\subset\mathrm{B}_{\infty}(\mathcal{D},v). \tag{3.6}\]
We say a weight \(u\) satisfies the \(\mathrm{APR}(\mathcal{D})\) condition if there exists \(C>0\) such that for each \(I\in\mathcal{D}\),
\[C^{-1}u(z_{2})\leq u(z_{1})\leq Cu(z_{2})\qquad\forall z_{1},z_{2}\in T_{I}. \tag{3.7}\]
This is a special case of the condition introduced in [1] with \(\rho=\frac{1}{2}\) (though the condition in [1] is qualitatively independent of \(\rho\) and ranges over all intervals).

**Proposition 3.2**.: _Suppose that \(v\) is a doubling weight on \(\mathcal{D}\), and \(w\in\mathrm{B}_{\infty}(\mathcal{D},v)\) satisfies the APR(\(\mathcal{D}\)) condition. Then there exists \(\tau>1\), depending only on \([w]_{\mathrm{B}_{\infty}(\mathcal{D},v)}\) and \(c_{v}\), so that \(w\in\mathrm{RH}_{\tau}(\mathcal{D},v)\)._

Proof.: This is a straightforward adaptation of the argument in [13, Lemma 2.8]. First note that, as a consequence of the APR(\(\mathcal{D}\)) condition for \(w\), for any Carleson box \(Q_{I}\) with \(I\in\mathcal{D}\) and \(z\in Q_{I}\), we have \(w(z)\lesssim M_{v}^{\mathcal{D}}(w\chi_{Q_{I}})(z)\). Indeed, note that if \(z\in Q_{I}\), there is a unique interval \(J\subset I\) in \(\mathcal{D}\) so that \(z\in T_{J}\). We then estimate, using (3.7) and the second inequality in (2.2):
\[w(z)\lesssim\left\langle w\right\rangle_{v,T_{J}}\lesssim\left\langle w\right\rangle_{v,Q_{J}}\leq M_{v}^{\mathcal{D}}(w\chi_{Q_{I}})(z).\]
Therefore, we have reduced the problem to showing that for some \(\tau>1\),
\[\left\langle M_{v}^{\mathcal{D}}(w\chi_{Q_{I}})\right\rangle_{\tau,v,Q_{I}}\lesssim\left\langle w\right\rangle_{v,Q_{I}}. \tag{3.8}\]
Fix \(I\in\mathcal{D}\) and for each \(\lambda>0\), let \(\Omega_{\lambda}=\{z\in Q_{I}:M_{v}^{\mathcal{D}}(w\chi_{Q_{I}})(z)>\lambda\}\). For each \(\lambda>0\), write \(\Omega_{\lambda}\) as a disjoint union of maximal \(Q_{\lambda,j}\) and let \(\widehat{Q}_{\lambda,j}\) denote the corresponding dyadic parents. Note that by maximality,
\[M_{v}^{\mathcal{D}}(w\chi_{Q_{I}})(z)=M_{v}^{\mathcal{D}}(w\chi_{Q_{\lambda,j}})(z)\quad\text{for}\quad z\in Q_{\lambda,j}, \tag{3.9}\]
\[\left\langle w\chi_{Q_{I}}\right\rangle_{v,\widehat{Q}_{\lambda,j}}\leq\lambda. \tag{3.10}\]
Using the distribution function, for any \(\tau>1\), split
\[\begin{split}&\int_{Q_{I}}[M_{v}^{\mathcal{D}}(w\chi_{Q_{I}})]^{\tau}v\,dA\\ &\qquad=\int_{0}^{\langle w\rangle_{v,Q_{I}}}(\tau-1)\lambda^{\tau-2}\left(\int_{\Omega_{\lambda}}M_{v}^{\mathcal{D}}(w\chi_{Q_{I}})v\,dA\right)\,d\lambda\\ &\qquad+\int_{\langle w\rangle_{v,Q_{I}}}^{\infty}(\tau-1)\lambda^{\tau-2}\left(\int_{\Omega_{\lambda}}M_{v}^{\mathcal{D}}(w\chi_{Q_{I}})v\,dA\right)\,d\lambda.\end{split} \tag{3.11}\]
By virtue of the containment \(\Omega_{\lambda}\subset Q_{I}\) and (3.4), the first integral is trivially bounded by \([w]_{\mathrm{B}_{\infty}(\mathcal{D},v)}\left\langle w\right\rangle_{v,Q_{I}}^{\tau}v(Q_{I})\).
To handle the second integral, notice that for each \(\lambda\) and \(j\), by (3.9), (3.4), (3.10), and (2.2),
\[\int_{Q_{\lambda,j}}M_{v}^{\mathcal{D}}(w\chi_{Q_{I}})v\,dA\leq[w]_{\mathrm{B}_{\infty}(\mathcal{D},v)}\int_{Q_{\lambda,j}}wv\,dA\leq[w]_{\mathrm{B}_{\infty}(\mathcal{D},v)}c_{v}\lambda\int_{Q_{\lambda,j}}v\,dA.\]
Summing this over \(j\), we obtain
\[\int_{\langle w\rangle_{v,Q_{I}}}^{\infty}(\tau-1)\lambda^{\tau-2}\left(\int_{\Omega_{\lambda}}M_{v}^{\mathcal{D}}(w\chi_{Q_{I}})v\,dA\right)\,d\lambda\leq[w]_{\mathrm{B}_{\infty}(\mathcal{D},v)}\times c_{v}\times\frac{\tau-1}{\tau}\int_{\langle w\rangle_{v,Q_{I}}}^{\infty}\tau\lambda^{\tau-1}v(\Omega_{\lambda})\,d\lambda.\]
Choosing \(\tau=\frac{2c_{v}[w]_{\mathrm{B}_{\infty}(\mathcal{D},v)}}{2c_{v}[w]_{\mathrm{B}_{\infty}(\mathcal{D},v)}-1}\), the final term is bounded by \(\frac{1}{2}\) times the LHS of (3.11); therefore (3.8) is established.

Given any two weights \(u\), \(v\), we introduce the following regularization associated to a dyadic grid \(\mathcal{D}\):
\[u_{\mathcal{D},v}:=\sum_{I\in\mathcal{D}}\langle u\rangle_{v,T_{I}}\chi_{T_{I}}. \tag{3.12}\]
When \(v\equiv 1\), we simply write \(u_{\mathcal{D}}\). Let us list some elementary properties of the regularizations.

**Lemma 3.3**.:
(i) _\(u_{\mathcal{D},v}\) satisfies the APR(\(\mathcal{D}\)) condition._
(ii) _For each \(I\in\mathcal{D}\), \(\left\langle u\right\rangle_{v,Q_{I}}=\left\langle u_{\mathcal{D},v}\right\rangle_{v,Q_{I}}\)._
(iii) _For \(0<s\leq 1\), \((u^{s})_{\mathcal{D},v}\leq(u_{\mathcal{D},v})^{s}\)._
(iv) _If \(u\) satisfies (3.5), then so does \(u_{\mathcal{D},v}\)._

Proof.: Statements (i) and (ii) are trivial. Statement (iii) follows from Hölder's inequality. Statement (iv) follows by applying (ii), (3.5), (ii) again, and (iii) to obtain
\[\left\langle u_{\mathcal{D},v}\right\rangle_{v,Q_{I}}=\left\langle u\right\rangle_{v,Q_{I}}\leq C\left\langle u\right\rangle_{s,v,Q_{I}}=C\left\langle(u^{s})_{\mathcal{D},v}\right\rangle^{\frac{1}{s}}_{v,Q_{I}}\leq C\left\langle u_{\mathcal{D},v}\right\rangle_{s,v,Q_{I}}.\]

We are now ready for the final ingredient in the proof of Theorem F.

**Proposition 3.4**.: _Let \(v=\left|\psi^{\prime}\right|^{2}\) for \(\psi\) conformal on \(\mathbb{D}\). Let \(1\leq q<\infty\) be such that \(u^{q}\) satisfies (3.5). Then there exists \(r>q\) such that_
\[\left\langle u\right\rangle_{Q_{I}}\lesssim[v]^{\frac{1}{r}}_{\mathrm{B}_{r}(\mathbb{D})}\left\langle u\right\rangle_{q,v,Q_{I}},\qquad\forall I\in\mathcal{D}.\]

Proof.: Let \(I\in\mathcal{D}\) and notice for any \(r>1\),
\[\begin{split}\left\langle u\right\rangle_{Q_{I}}&=\left\langle u_{\mathcal{D}}\right\rangle_{Q_{I}}=\left\langle u_{\mathcal{D}}v^{\frac{1}{r}}v^{-\frac{1}{r}}\right\rangle_{Q_{I}}\\ &\leq\left\langle u_{\mathcal{D}}^{r}v\right\rangle^{\frac{1}{r}}_{Q_{I}}\left\langle v^{-\frac{1}{r-1}}\right\rangle^{\frac{r-1}{r}}_{Q_{I}}\leq[v]^{\frac{1}{r}}_{\mathrm{B}_{r}}\left\langle u_{\mathcal{D}}\right\rangle_{r,v,Q_{I}}.\end{split} \tag{3.13}\]
However, since \(v=\left|\psi^{\prime}\right|^{2}\) and \(\psi\) is conformal, the Koebe distortion theorem (Proposition 2.2) gives \(\left\langle u\right\rangle_{T_{I}}\sim\left\langle u\right\rangle_{v,T_{I}}\).
Thus \(u_{\mathcal{D}}\sim u_{\mathcal{D},v}\), which combined with (3.13) implies
\[\left\langle u\right\rangle_{Q_{I}}\lesssim[v]^{\frac{1}{r}}_{\mathrm{B}_{r}}\left\langle u_{\mathcal{D},v}\right\rangle_{r,v,Q_{I}}.\]
By Lemma 3.3 (i) and (iv), \((u^{q})_{\mathcal{D},v}\) satisfies the APR(\(\mathcal{D}\)) condition and (3.5), so Propositions 3.1 and 3.2 provide \(\tau>1\) such that
\[\left\langle u_{\mathcal{D},v}\right\rangle^{q}_{q\tau,v,Q_{I}}\leq\left\langle(u^{q})_{\mathcal{D},v}\right\rangle_{\tau,v,Q_{I}}\lesssim\left\langle(u^{q})_{\mathcal{D},v}\right\rangle_{v,Q_{I}}=\left\langle u\right\rangle^{q}_{q,v,Q_{I}}.\]
Taking \(r=q\tau>q\) concludes the proof.

Now the proof of Theorem F is immediate. Let \(q_{0}\geq 1\) and let \(u\) belong to the RHS of (3.2). By (3.6), \(u^{q_{0}}\) and \(u^{-q_{0}}\) both satisfy (3.5). Therefore, applying Proposition 3.4 to \(u\) and \(u^{-1}\) with exponent \(q=q_{0}\) gives \(r_{1},r_{2}>q_{0}\) such that (3.13) holds for the respective weights and parameters. Therefore, setting \(r^{*}=\min\{r_{1},r_{2}\}>q_{0}\), for each \(I\in\mathcal{D}\),
\[\begin{split}\left\langle u\right\rangle_{Q_{I}}\left\langle u^{-1}\right\rangle_{Q_{I}}&\lesssim[v]^{\frac{2}{r^{*}}}_{\mathrm{B}_{r^{*}}(\mathbb{D})}\left\langle u\right\rangle_{q_{0},v,Q_{I}}\left\langle u^{-1}\right\rangle_{q_{0},v,Q_{I}}\\ &\leq[v]^{\frac{2}{r^{*}}}_{\mathrm{B}_{r^{*}}(\mathbb{D})}\left\{\begin{array}{ll}[u]_{\mathrm{RH}_{q_{0}}(\mathcal{D},v)}\left[u\right]_{\mathrm{B}_{\frac{q_{0}+1}{q_{0}}}(\mathcal{D},v)},&q_{0}>1;\\ \left[u\right]_{\mathrm{B}_{2}(\mathcal{D},v)},&q_{0}=1,\end{array}\right.\end{split} \tag{3.14}\]
which shows \(u\in\mathrm{B}_{2}(\mathcal{D})\).

To derive Theorem E from Theorem F, let \(\sigma\) be as in (3.1) with \(p=2\). Setting \(u=\sigma\circ\psi\) and \(v=|\psi^{\prime}|^{2}\), (1.6) implies that \(u\) belongs to the RHS of (3.2) for any dyadic grid \(\mathcal{D}\). Therefore, Theorem F places \(u=\sigma\circ\psi\) in \(\mathrm{B}_{2}(\mathcal{D})\), and taking \(\mathcal{D}=\mathcal{D}_{j}\) for \(j=1,2\) yields \(u\in\mathrm{B}_{2}(\mathbb{D})\). By (1.3), \(\Pi_{\mathbb{D}}\) is bounded on \(L^{2}(\mathbb{D},u)\), and applying (1.8) yields the conclusion when \(p=2\). The case of general \(p\in(p_{0},p_{0}^{\prime})\) follows by extrapolation [1, Theorems 3.22 and 3.31], since \(\{\psi(Q_{I})\}\) is a Muckenhoupt basis by Proposition 2.1.

Theorem E can also be proved without extrapolation. A more general statement than (3.2) holds in Theorem F, namely that if
\[v=\left|\psi^{\prime}\right|^{2}\in\mathrm{B}_{q_{0}^{+}}(\mathbb{D}),\text{ and }u\in\left\{\begin{array}{ll}\mathrm{B}_{\frac{p}{p_{0}}}(\mathcal{D},v)\cap\mathrm{RH}_{\frac{p_{0}^{\prime}}{p_{0}^{\prime}-p}}(\mathcal{D},v),&p_{0}>1,\\ \mathrm{B}_{p}(\mathcal{D},v),&p_{0}=1,\end{array}\right. \tag{3.15}\]
then \(uv^{1-p/2}\in\mathrm{B}_{p}(\mathcal{D})\), where \(p_{0}\) is related to \(q_{0}\) by \(\frac{p_{0}^{\prime}}{2}=q_{0}^{\prime}\). Assuming this, Theorem E can be proved in the same way as the case \(p=2\) above. To prove that (3.15) implies \(uv^{1-p/2}\in\mathrm{B}_{p}(\mathcal{D})\), all that is needed is to tediously check many exponents and use the following generalization of Proposition 3.4.

**Proposition 3.5**.: _Let \(v=\left|\psi^{\prime}\right|^{2}\) for \(\psi\) conformal on \(\mathbb{D}\) and \(\theta<\frac{1}{2}\). Let \(q\geq 1\) be such that \((1-\theta)q^{\prime}\geq 1\) and \(u^{q}\) satisfies (3.5)._
_Then there exists \(r>((1-\theta)q^{\prime})^{\prime}\) such that_
\[\left\langle uv^{\theta}\right\rangle_{Q_{I}}\lesssim[v]^{\frac{1}{r}}_{\mathrm{B}_{r}(\mathbb{D})}\left\langle u\right\rangle_{q,v,Q_{I}}\left\langle v^{-1}\right\rangle_{\frac{1}{r-1},Q_{I}}^{-\theta},\qquad\forall I\in\mathcal{D}.\]

Proposition 3.4 is the special case \(\theta=0\). Its proof is the same, using the regularization \(u_{\mathcal{D},v^{\theta}}\) rather than \(u_{\mathcal{D}}\). Now assuming (3.15), apply Proposition 3.5 to \((u,q_{1},\theta_{1})\) and \((u^{-\frac{1}{p-1}},q_{2},\theta_{2})\) with
\[q_{1}=\tfrac{p_{0}^{\prime}}{p_{0}^{\prime}-p},\quad q_{2}=\tfrac{p-1}{\frac{p}{p_{0}}-1},\quad\theta_{1}=1-\tfrac{p}{2},\quad\theta_{2}=\tfrac{-1}{p-1}\left(1-\tfrac{p}{2}\right).\]
The key computation is that \(((1-\theta_{j})q_{j}^{\prime})^{\prime}=q_{0}\) for \(j=1,2\); for instance, \(1-\theta_{1}=\frac{p}{2}\) and \(q_{1}^{\prime}=\frac{p_{0}^{\prime}}{p}\), so \((1-\theta_{1})q_{1}^{\prime}=\frac{p_{0}^{\prime}}{2}=q_{0}^{\prime}\). A similar argument to (3.14) then shows \(uv^{1-p/2}\in\mathrm{B}_{p}(\mathcal{D})\).

## 4. Converse to Theorem E on uniform domains

In this section, we will prove the following result showing that on uniform domains, Theorem E is sharp. Define the reverse Hölder characteristic with respect to disks centered on the boundary by
\[\left[\sigma\right]_{\mathrm{RHD}_{s}(\Omega)}=\sup_{D}\left\langle\sigma\right\rangle_{s,D\cap\Omega}\left(\left\langle\sigma\right\rangle_{D\cap\Omega}\right)^{-1}.\]

**Theorem G**.: _Let \(\Omega\) be a bounded, simply connected uniform domain and \(\psi\) a conformal map from \(\mathbb{D}\) onto \(\Omega\). Suppose \(1\leq p_{0}<2\) and set \(q_{0}=\frac{p_{0}}{2-p_{0}}\). The following are equivalent._

A. _\(\left|\psi^{\prime}\right|^{2}\in\mathrm{B}_{q_{0}^{+}}(\mathbb{D})\)._
B. _For \(p_{0}<p<p_{0}^{\prime}\), if_
\[\sigma\in\left\{\begin{array}{ll}\mathrm{B}_{p}(\Omega),&p_{0}=1,\\ \mathrm{B}_{\frac{p}{p_{0}}}(\Omega)\cap\mathrm{RH}_{\frac{p_{0}^{\prime}}{p_{0}^{\prime}-p}}(\Omega),&p_{0}>1,\end{array}\right.\]
_then \(\Pi_{\Omega}:L^{p}(\Omega,\sigma)\to L^{p}(\Omega,\sigma)\)._
C. _For \(p_{0}<p<p_{0}^{\prime}\), if_
\[\sigma\in\left\{\begin{array}{ll}\mathrm{D}_{p}(\Omega),&p_{0}=1,\\ \mathrm{D}_{\frac{p}{p_{0}}}(\Omega)\cap\mathrm{RHD}_{\frac{p_{0}^{\prime}}{p_{0}^{\prime}-p}}(\Omega),&p_{0}>1,\end{array}\right.\]
_then \(\Pi_{\Omega}:L^{p}(\Omega,\sigma)\to L^{p}(\Omega,\sigma)\)._

B. and C. are equivalent by Proposition 2.6 (the same argument there works for \(\mathrm{RH}_{s}(\Omega)\) and \(\mathrm{RHD}_{s}(\Omega)\)). A. implies B. by Theorem E, so it remains to show that B. implies A. We will need the following two lemmata to establish this remaining implication. The first is a consequence of the Koebe distortion theorem (Proposition 2.2).

**Lemma 4.1**.: _Let \(I_{1},I_{2}\) be neighboring intervals in \(\mathbb{T}\) and \(I_{0}=I_{1}\cup I_{2}\). Then_
\[\left|\psi(Q_{I_{0}})\right|\lesssim\left|\psi(Q_{I_{1}})\right|+\left|\psi(Q_{I_{2}})\right|.\]

The second concerns the maximal function
\[M_{\psi}f(z)=\sup_{I\subset\mathbb{T}}\chi_{\psi(Q_{I})}(z)\left\langle f\right\rangle_{\psi(Q_{I})},\qquad f\in L^{1}(\Omega).\]

**Lemma 4.2**.: _Let \(f\in L^{1}(\Omega)\) and \(q<1\). Then_
\[(M_{\psi}f)^{q}\in\mathrm{B}_{1}(\Omega).\]

The proofs of Lemmata 4.1 and 4.2 are postponed to Section 4.1. For now, let us prove the following proposition, which will quickly show that B. implies A. It is loosely based on the necessity argument from [11] concerning the homeomorphisms preserving \(\mathrm{BMO}(\mathbb{R})\).
**Proposition 4.3**.: _Suppose \(1\leq s<\infty\) has the property that every \(w\in\mathrm{B}_{1}(\Omega)\cap\mathrm{RH}_{s}(\Omega)\) is such that \(w\circ\psi\) is weakly doubling. Then for \(v=\left|\psi^{\prime}\right|^{2}\), any \(g\in L^{1}(\mathbb{D},v)\), and any \(q<\frac{1}{s}\),_
\[\left\langle M_{v}(\chi_{Q_{I}}g)\right\rangle_{q,Q_{I}}\lesssim\left\langle g\right\rangle_{v,Q_{I}}. \tag{4.1}\]

Proof.: Let \(q\) be such that \(sq<1\), and let \(g\in L^{1}(\mathbb{D},v)\). Set \(\phi=\psi^{-1}\) and \(f=g\circ\phi\). Fix an interval \(I\). Partition \(I=I_{1}\cup I_{2}\), where \(\left|\psi(Q_{I_{1}})\right|=\left|\psi(Q_{I_{2}})\right|\). By Lemma 4.1, these quantities are comparable to \(\left|\psi(Q_{I})\right|\). Let \(e^{i\theta_{j}}\) be the center of \(I_{j}\), and partition \(Q_{I}=Q_{1}\cup Q_{2}\), where
\[Q_{j}=\left\{re^{i\theta}\in Q_{I}:\left|\theta-\theta_{j}\right|\leq\frac{\ell(I_{j})}{2}\right\}.\]
The sublinearity of the maximal function and the fact that \(q<1\) imply that for some \(j\in\{1,2\}\),
\[2\int_{\psi(Q_{I})}M_{\psi}(\chi_{\psi(Q_{j})}f)^{q}\left|\phi^{\prime}\right|^{2}\geq\int_{\psi(Q_{I})}M_{\psi}(\chi_{\psi(Q_{I})}f)^{q}\left|\phi^{\prime}\right|^{2}. \tag{4.2}\]
Without loss of generality, assume (4.2) holds for \(j=1\) and set \(w=M_{\psi}(\chi_{\psi(Q_{1})}f)^{q}\). Then, by Lemma 4.2, \(w^{s}\in\mathrm{B}_{1}(\Omega)\), so \(w\in\mathrm{B}_{1}(\Omega)\cap\mathrm{RH}_{s}(\Omega)\). Take \(J\) to be a neighbor of \(I_{2}\) and \(I\) with \(\ell(J)=\ell(I)\); we claim that
\[w\lesssim\left(\left\langle f\right\rangle_{\psi(Q_{I})}\right)^{q}\quad\text{on }\psi(Q_{J}). \tag{4.3}\]
Indeed, for \(z\in\psi(Q_{J})\) and any \(K\) such that \(z\in\psi(Q_{K})\) and \(K\) has nonempty intersection with \(I_{1}\), the interval \(I_{2}\) must be contained in \(K\). Therefore, \(\left|\psi(Q_{K})\right|\geq\left|\psi(Q_{I_{2}})\right|\gtrsim\left|\psi(Q_{I})\right|\), which establishes (4.3).

Now we pull back with \(\psi\). Set \(u=w\circ\psi\) and notice that
\[\left\langle u\right\rangle_{Q_{J\cup I}}\gtrsim\frac{1}{\left|Q_{I}\right|}\int_{Q_{I}}w\circ\psi\gtrsim\frac{1}{\left|Q_{I}\right|}\int_{\psi(Q_{I})}M_{\psi}(\chi_{\psi(Q_{I})}f)^{q}\left|\phi^{\prime}\right|^{2},\]
where the last inequality follows by changing variables and (4.2). On the other hand, since we assume that \(u\) is weakly doubling, by (4.3),
\[\left\langle u\right\rangle_{Q_{J\cup I}}\lesssim\left\langle u\right\rangle_{Q_{J}}\lesssim\left(\left\langle f\right\rangle_{\psi(Q_{I})}\right)^{q}.\]
Combining these two displays, we obtain
\[\left(\frac{1}{\left|Q_{I}\right|}\int_{\psi(Q_{I})}M_{\psi}(\chi_{\psi(Q_{I})}f)^{q}\left|\phi^{\prime}\right|^{2}\right)^{\frac{1}{q}}\lesssim\left\langle f\right\rangle_{\psi(Q_{I})}.\]
Changing variables, we obtain (4.1).

Proof of B. implies A. in Theorem G.: Let \(1\leq p_{0}<2\), and let
\[w\in\left\{\begin{array}{ll}\mathrm{B}_{1}(\Omega),&p_{0}=1;\\ \mathrm{B}_{1}(\Omega)\cap\mathrm{RH}_{q_{0}}(\Omega),&p_{0}>1.\end{array}\right.\]
Then, since \(\mathrm{B}_{1}(\Omega)\subset\mathrm{B}_{\frac{2}{p_{0}}}(\Omega)\), part B. implies that \(\Pi_{\Omega}\) is bounded on \(L^{2}(\Omega,w)\), or equivalently, by (1.8) and (1.3), \(w\circ\psi\in\mathrm{B}_{2}(\mathbb{D}).\) Since any weight in \(\mathrm{B}_{2}(\mathbb{D})\) is weakly doubling, we have inequality (4.1) for any \(q<\frac{1}{q_{0}}\).
Now, let \(p>q_{0}\) and select \(g=v^{-\frac{p}{p-1}}\) so that on the one hand
\[\left\langle g\right\rangle_{v,Q_{I}}=\frac{\int_{Q_{I}}v^{-\frac{p}{p-1}+1}\,dA}{\int_{Q_{I}}v\,dA}=\frac{\int_{Q_{I}}v^{-\frac{1}{p-1}}\,dA}{\int_{Q_{I}}v\,dA}.\]
On the other hand, we claim to have the pointwise domination
\[M_{v}(\chi_{Q_{I}}g)(\zeta)\gtrsim g(\zeta),\qquad\zeta\in Q_{I}.\]
To see this, fix \(\zeta\in Q_{I}\) and find \(J\subseteq I\) so that \(\zeta\in T_{J}.\) Then, using the doubling property of \(v\) from Lemma 2.5 together with Proposition 2.2, we have
\[g(\zeta)\sim\left\langle g\right\rangle_{v,T_{J}}\lesssim\left\langle g\right\rangle_{v,Q_{J}}\lesssim M_{v}(\chi_{Q_{I}}g)(\zeta).\]
Therefore, applying (4.1) with \(q=\frac{1}{p}<\frac{1}{q_{0}}\), we obtain
\[\frac{\int_{Q_{I}}v^{-\frac{1}{p-1}}\,dA}{\int_{Q_{I}}v\,dA}\gtrsim\left\langle M_{v}(\chi_{Q_{I}}g)\right\rangle_{q,Q_{I}}\gtrsim\left(\frac{1}{|Q_{I}|}\int_{Q_{I}}v^{-\frac{1}{p-1}}\,dA\right)^{p}.\]
Simple algebra and taking a supremum over all intervals \(I\) then imply \(v\in\mathrm{B}_{p}(\mathbb{D})\), and since \(p>q_{0}\) was arbitrary, we conclude \(v\in\mathrm{B}_{q_{0}^{+}}(\mathbb{D})\).

**Remark 4.4**.: Notice that the only place we used that \(\Omega\) is a uniform domain was in the proof that B. implies A., to bound \(g\) pointwise by the maximal function, which relied on \(v\) being doubling. So even in the case of a non-uniform domain, we obtain the necessary condition
\[\left\langle M_{v}(\chi_{Q_{I}}v^{-\frac{p}{p-1}})\right\rangle_{\frac{1}{p},Q_{I}}\left\langle v\right\rangle_{Q_{I}}\lesssim\left\langle v^{-\frac{1}{p-1}}\right\rangle_{Q_{I}},\quad p>q_{0},\]
which reduces to the \(\mathrm{B}_{q_{0}^{+}}(\mathbb{D})\) condition if \(M_{v}(\chi_{Q_{I}}v^{-\frac{p}{p-1}})\gtrsim v^{-\frac{p}{p-1}}\), which holds in particular if \(v\) is doubling.

### Proofs of Lemmata 4.1 and 4.2

Proof of Lemma 4.1.: We may assume \(\ell(I_{1})\geq\ell(I_{2})\) and that \(I_{1}\) is the right neighbor of \(I_{2}\). Let \(\theta_{j}\) be the center of \(I_{j}\). Then \(Q_{I_{0}}\) can be partitioned into \(4\) regions: \(Q_{I_{1}}\), \(Q_{I_{2}}\),
\[B=\left\{re^{i\theta}:1-\ell(I_{1})\leq r<1-\ell(I_{2}),\,|\theta-\theta_{2}|\leq\frac{\ell(I_{2})}{2}\right\},\]
and
\[T=\left\{re^{i\theta}:1-\ell(I_{0})\leq r<1-\ell(I_{1}),\,|\theta-\theta_{0}|\leq\frac{\ell(I_{0})}{2}\right\}.\]
First, let \(J\) be an interval with the same right endpoint as \(I_{1}\) but with \(\ell(J)=\frac{\ell(I_{1})+\ell(I_{0})}{2}\). There exist \(z_{1}\in T\cap T_{J}\) and \(z_{2}\in T_{I_{1}}\cap T_{J}\). Furthermore, since \(T\subset T_{I_{0}}\), by Proposition 2.2, for any \(z_{3}\in T\) and \(z_{4}\in T_{I_{1}}\),
\[\left|\psi^{\prime}(z_{3})\right|\sim\left|\psi^{\prime}(z_{1})\right|\sim\left|\psi^{\prime}(z_{2})\right|\sim\left|\psi^{\prime}(z_{4})\right|.\]
Since \(|T|\lesssim|T_{I_{1}}|\), the above display implies \(|\psi(T)|\lesssim|\psi(T_{I_{1}})|\leq|\psi(Q_{I_{1}})|\). To handle \(B\), for each \(k\in\mathbb{N}\), let \(J_{k}\) be the interval with the same left endpoint as \(I_{2}\), but with \(\ell(J_{k})=2^{k}\ell(I_{2})\). Then, \(B\subset\cup_{k=1}^{K}T_{J_{k}}\), where \(K=\lceil\log_{2}\frac{\ell(I_{1})}{\ell(I_{2})}\rceil\).
The sets \(\{T_{J_{k}}\}_{k=1}^{K}\) are pairwise disjoint and
\[|T_{J_{k}}|\lesssim|(Q_{I_{1}}\cup T)\cap T_{J_{k}}|.\]
Therefore, applying again Proposition 2.2 and summing over \(k\),
\[|\psi(B)|\leq\sum_{k=1}^{K}|\psi(T_{J_{k}})|\lesssim\sum_{k=1}^{K}|\psi((Q_{I_{1}}\cup T)\cap T_{J_{k}})|\lesssim|\psi(Q_{I_{1}})|\,.\]

To prove Lemma 4.2, we will use the following Vitali covering lemma for \(\psi(Q_{I})\).

**Lemma 4.5**.: _Let \(\mathcal{I}\) be a finite collection of intervals in \(\mathbb{T}\). Then, there exist disjoint intervals \(\{I_{k}\}_{k=1}^{K}\subset\mathcal{I}\) such that_
\[\left|\bigcup_{I\in\mathcal{I}}\psi(Q_{I})\right|\lesssim\sum_{k=1}^{K}|\psi(Q_{I_{k}})|\,,\]
_where the implicit constant is absolute._

Proof.: For any non-empty collection of intervals \(\mathcal{J}\), let \(I(\mathcal{J})\) be an interval in \(\mathcal{J}\) with \(|\psi(Q_{I(\mathcal{J})})|=\max_{I\in\mathcal{J}}|\psi(Q_{I})|\). Let \(\mathcal{I}_{0}=\mathcal{I}\) and inductively define, for \(k\geq 0\),
\[I_{k}=I(\mathcal{I}_{k}),\quad\mathcal{I}_{k+1}=\{I\in\mathcal{I}_{k}:I\cap I_{k}=\varnothing\}.\]
Since \(\mathcal{I}\) is a finite collection, this process terminates, say after \(K\) steps, once \(\mathcal{I}_{K+1}=\varnothing\). By construction, the intervals \(\{I_{k}\}_{k=0}^{K}\) are disjoint. Fix \(k\) and let \(J_{k}^{1}\) and \(J_{k}^{2}\) be the left and right neighbors of \(I_{k}\) such that
\[|\psi(Q_{J_{k}^{1}})|=|\psi(Q_{I_{k}})|=|\psi(Q_{J_{k}^{2}})|.\]
Setting \(J_{k}=J_{k}^{1}\cup I_{k}\cup J_{k}^{2}\), Lemma 4.1 implies \(|\psi(Q_{J_{k}})|\lesssim|\psi(Q_{I_{k}})|\). The lemma will be proved if we can show that for each \(I\in\mathcal{I}\), there exists \(k\) such that \(I\subset J_{k}\). To this end, given \(I\in\mathcal{I}\), let \(k\) be the first index such that \(I\not\in\mathcal{I}_{k+1}\). Then \(I\in\mathcal{I}_{k}\) and \(I\cap I_{k}\neq\varnothing\). Suppose toward a contradiction that \(I\backslash J_{k}\neq\varnothing\). Then \(I\supsetneq J_{k}^{j}\) for some \(j\in\{1,2\}\), but since \(I\in\mathcal{I}_{k}\),
\[|\psi(Q_{I})|\leq|\psi(Q_{I_{k}})|=\left|\psi(Q_{J_{k}^{j}})\right|<|\psi(Q_{I})|\,,\]
which is a contradiction.

As a consequence, we have:

**Lemma 4.6**.: \(M_{\psi}\) _is of weak type \((1,1)\)._

Proof.: Let \(f\in L^{1}(\Omega)\), \(\lambda>0\), and \(E=\{z\in\Omega:M_{\psi}f(z)>\lambda\}\). For each \(z\in E\), there exists \(I(z)\) such that \(z\in\psi(Q_{I(z)})\) and \(\left\langle f\right\rangle_{\psi(Q_{I(z)})}>\lambda\). Therefore \(\{\psi(Q_{I(z)})\}_{z\in E}\) covers \(E\). Now let \(F\) be an arbitrary compact subset of \(E\). \(F\) can be covered by a finite subcollection of \(\{\psi(Q_{I(z)})\}_{z\in E}\), say \(\{\psi(Q_{I})\}_{I\in\mathcal{I}}\). Applying Lemma 4.5 above, there exists a pairwise disjoint subcollection \(\{I_{k}\}_{k=1}^{K}\subset\mathcal{I}\) such that
\[|F|\lesssim\sum_{k=1}^{K}|\psi(Q_{I_{k}})|\leq\frac{1}{\lambda}\sum_{k=1}^{K}\int_{\psi(Q_{I_{k}})}|f|\,dA\leq\frac{1}{\lambda}\,\|f\|_{L^{1}(\Omega)}\,.\]
We conclude by taking the supremum over all compact subsets \(F\) of \(E\).

Proof of Lemma 4.2.: The \(\mathrm{B}_{1}(\Omega)\) condition can equivalently be stated as follows: there exists \(C>0\) such that for each \(z\in\Omega\) and each \(I\) with \(z\in\psi(Q_{I})\),
\[\left\langle M_{\psi}f\right\rangle_{q,\psi(Q_{I})}\leq C^{\frac{1}{q}}M_{\psi}f(z).\]
Fix such \(z\) and \(I\). Let \(I_{1},I_{2}\) be the right and left neighbors of \(I\) such that \(|\psi(Q_{I_{j}})|=|\psi(Q_{I})|\).
Setting \(J=I_{1}\cup I\cup I_{2}\), Lemma 4.1 implies that
\[|\psi(Q_{I})|\sim|\psi(Q_{J})|. \tag{4.4}\]
Split \(f=f_{1}+f_{2}\), where \(f_{1}=f\chi_{\psi(Q_{J})}\). Using the distribution function and the weak-type estimate, for any \(T>0\),
\[\int_{\psi(Q_{I})}(M_{\psi}f_{1})^{q}\,dA\leq\left(\int_{0}^{T}+\int_{T}^{\infty}\right)q\lambda^{q-1}|\{z\in\psi(Q_{I}):M_{\psi}f_{1}(z)>\lambda\}|\,d\lambda\leq T^{q}|\psi(Q_{I})|+\|f_{1}\|_{L^{1}(\Omega)}\,\frac{q}{1-q}T^{q-1}.\]
Taking \(T=\left\|f_{1}\right\|_{L^{1}(\Omega)}|\psi(Q_{I})|^{-1}\), we obtain
\[\left\langle M_{\psi}f_{1}\right\rangle_{q,\psi(Q_{I})}\lesssim\frac{\left\|f_{1}\right\|_{L^{1}(\Omega)}}{|\psi(Q_{I})|}\lesssim\left\langle f\right\rangle_{\psi(Q_{J})}\leq M_{\psi}f(z),\]
where the second inequality comes from (4.4) and the support of \(f_{1}\).

For \(f_{2}\), let \(w\in\psi(Q_{I})\cap\psi(Q_{K})\). If \(K\subset J\), then \(\left\langle f_{2}\right\rangle_{\psi(Q_{K})}=0\). Otherwise, \(K\) exits \(J\), but since it also has nonempty intersection with \(I\), either \(I_{1}\) or \(I_{2}\) is contained in \(K\). WLOG assume it is \(I_{1}\). Then, setting \(L=K\cup I\) and \(K_{1}=K\backslash I\), Lemma 4.1, applied to \(K_{1}\) and \(I\), implies
\[|\psi(Q_{L})|\lesssim|\psi(Q_{K_{1}})|+|\psi(Q_{I})|=|\psi(Q_{K_{1}})|+|\psi(Q_{I_{1}})|\leq 2|\psi(Q_{K})|.\]
Therefore, \(\left\langle f_{2}\right\rangle_{\psi(Q_{K})}\lesssim\left\langle f\right\rangle_{\psi(Q_{L})}\), which implies \(M_{\psi}f_{2}(w)\lesssim M_{\psi}f(z)\).

## 5. Proof of Theorem D

Let \(\sigma\in\mathrm{B}_{1}(\Omega)\). Set \(u=\sigma\circ\psi\), \(v=|\psi^{\prime}|^{2}\), and \(w=|\psi^{\prime}|\). The weights \(u\), \(v\), and \(w\) are defined on \(\mathbb{D}\), so we omit the explicit reference to \(\mathbb{D}\) in the \(\mathrm{B}_{p}\) characteristics that follow. We also identify weights with the absolutely continuous measure they induce, via the notation \(u(E)=\int_{E}u\,dA\) for measurable sets \(E\). The following weak-type estimate for the weighted dyadic maximal function can be proven using a standard maximal covering argument, independent of the underlying measure \(w\,dA\); we omit the details.

**Proposition 5.1**.: _Suppose \(u,v,w\) are weights on \(\mathbb{D}\) such that \(uw\in\mathrm{B}_{1}(w)\) and \(v=w^{2}\). Then, for all \(f\in L^{1}(\mathbb{D},uv),\)_
\[uv\left(\left\{M_{w}^{\mathcal{D}}f>\lambda\right\}\right)\leq\frac{[uw]_{\mathrm{B}_{1}(w)}}{\lambda}\int_{\mathbb{D}}|f|\,uv\,dA. \tag{5.1}\]

Next, we compute the analogue of (1.8) for the weak-type estimate.

**Proposition 5.2**.: _For each \(\lambda>0\),_
\[\sup_{\left\|f\right\|_{L^{1}(\Omega,\sigma)}=1}\sigma\left(\{z\in\Omega:|\Pi_{\Omega}f(z)|>\lambda\}\right)=\sup_{\left\|g\right\|_{L^{1}(\mathbb{D},uw)}=1}uv\left(\left\{\zeta\in\mathbb{D}:\left|w(\zeta)^{-1}\Pi_{\mathbb{D}}g(\zeta)\right|>\lambda\right\}\right). \tag{5.2}\]

Proof.: Let \(\lambda>0\), \(f\in L^{1}(\Omega,\sigma)\) with unit norm, and set
\[E=\left\{z\in\Omega:|\Pi_{\Omega}f(z)|>\lambda\right\}.\]
Let \(g:\mathbb{D}\to\mathbb{C}\) be defined by \(g=(f\circ\psi)\cdot\psi^{\prime}\). Thus, by change of variable,
\[1=\int_{\Omega}|f|\,\sigma\,dA=\int_{\mathbb{D}}|g|\,u\,\big{|}\psi^{\prime}\big{|}\,dA,\quad\sigma\,(E)=\int_{\psi^{-1}(E)}u\,\big{|}\psi^{\prime}\big{|}^{2}\,dA.\]
Using the transformation law for the Bergman kernel (see for example [10, p.
72]), one can conclude
\[\psi^{-1}(E)=\left\{\zeta\in\mathbb{D}:\,\frac{|\Pi_{\mathbb{D}}g(\zeta)|}{|\psi^{\prime}(\zeta)|}>\lambda\right\},\]
and combining the two displays establishes (5.2).

Equipped with Propositions 5.2 and 5.1, we are ready to prove Theorem D. First, we claim
\[\max\left\{[uw]_{\mathrm{B}_{1}(w)},[uw]_{\mathrm{B}_{1}}\right\}\leq[v]_{\mathrm{B}_{1}}[\sigma]_{\mathrm{B}_{1}(\Omega)}. \tag{5.3}\]
The estimate in (5.3) for \([uw]_{\mathrm{B}_{1}}\) follows from (1.9) with \(p=1\), and a slight modification will show the estimate for \([uw]_{\mathrm{B}_{1}(w)}.\) Indeed, there holds for each \(Q_{I}\),
\[\begin{split}[\sigma]_{\mathrm{B}_{1}(\Omega)}[v]_{\mathrm{B}_{1}}&\geq\frac{[v]_{\mathrm{B}_{1}}}{\int_{Q_{I}}v\,dA}\int_{Q_{I}}uv\,dA\cdot\|u^{-1}\|_{L^{\infty}(Q_{I})}\\ &\geq\frac{1}{\inf_{Q_{I}}v}\left\langle uv\right\rangle_{Q_{I}}\|u^{-1}\|_{L^{\infty}(Q_{I})}\\ &\geq\frac{1}{(\inf_{Q_{I}}v)^{1/2}}\left\langle uv\right\rangle_{Q_{I}}\|u^{-1}v^{-1/2}\|_{L^{\infty}(Q_{I})}\\ &=\frac{\left\langle w\right\rangle_{Q_{I}}}{\inf_{Q_{I}}w}\left\langle uw\right\rangle_{w,Q_{I}}\|u^{-1}w^{-1}\|_{L^{\infty}(Q_{I})}\\ &\geq\left\langle uw\right\rangle_{w,Q_{I}}\|u^{-1}w^{-1}\|_{L^{\infty}(Q_{I})}.\end{split}\]
Taking the supremum over \(I\) establishes (5.3). Therefore, by (5.3), Proposition 5.2, and the trivial estimates \(c_{w}\leq[w]_{\mathrm{B}_{1}}\leq[v]_{\mathrm{B}_{1}}\), it is enough to show
\[uv\left(\left\{\zeta\in\mathbb{D}:\,\frac{|\Pi_{\mathbb{D}}g(\zeta)|}{w(\zeta)}>\lambda\right\}\right)\lesssim\frac{\left(c_{w}[w]_{\mathrm{B}_{1}}\right)^{3}[uw]_{\mathrm{B}_{1}(w)}^{2}[uw]_{\mathrm{B}_{1}}}{\lambda}\int_{\mathbb{D}}|g|\,uw\,dA. \tag{5.4}\]
Moreover, \(\Pi_{\mathbb{D}}\) is pointwise equivalent to \(\mathcal{A}_{\mathcal{D}_{1}}+\mathcal{A}_{\mathcal{D}_{2}}\), defined by (2.5) with \(v\equiv 1\) (see [12, Lemma 5]), so it is enough to prove (5.4) for \(\mathcal{A}_{\mathcal{D}}\) in place of \(\Pi_{\mathbb{D}}\), for a generic dyadic grid \(\mathcal{D}\) and for \(g\) positive. To this end, let \(E_{\lambda}=\{\zeta\in\mathbb{D}:M_{w}^{\mathcal{D}}(gw^{-1})(\zeta)>\lambda\}\), write \(E_{\lambda}\) as a union of disjoint maximal Carleson boxes \(Q_{\lambda,k}\), and let \(\widehat{Q}_{\lambda,k}\) denote the dyadic parents. Write
\[g=g_{1}+g_{2},\quad g_{1}=g\chi_{E_{\lambda}^{c}},\quad g_{2}=g\chi_{E_{\lambda}}.\]
Notice that if \(I\in\mathcal{D}\), then either there exists \(k\) so that \(Q_{I}\subset Q_{\lambda,k}\), or \(Q_{I}\) intersects \(E_{\lambda}^{c}\). In the latter case, note that by definition of \(Q_{\lambda,k}\) and the doubling of \(w\), we have
\[\left\langle g_{1}w^{-1}\right\rangle_{w,T_{I}}\leq c_{w}\left\langle g_{1}w^{-1}\right\rangle_{w,Q_{I}}\leq c_{w}\lambda. \tag{5.5}\]
In the former case, \(\left\langle g_{1}w^{-1}\right\rangle_{w,T_{I}}=0\) since \(Q_{\lambda,k}\subset E_{\lambda}\), so (5.5) holds for all \(I\in\mathcal{D}\). Similarly, for the function \(g_{2}\), by maximality of \(Q_{\lambda,k}\) and doubling of \(w\),
\[\left\langle g_{2}w^{-1}\right\rangle_{w,Q_{\lambda,k}}\leq c_{w}\left\langle g_{2}w^{-1}\right\rangle_{w,\widehat{Q}_{\lambda,k}}<c_{w}\lambda. \tag{5.6}\]
Write
\[\begin{split}uv\left(\left\{\zeta\in\mathbb{D}:\,\frac{|\mathcal{A}_{\mathcal{D}}g(\zeta)|}{w(\zeta)}>\lambda\right\}\right)&\leq uv\left(\left\{\zeta\in\mathbb{D}:\,\frac{|\mathcal{A}_{\mathcal{D}}g_{1}(\zeta)|}{w(\zeta)}>\frac{\lambda}{2}\right\}\right)\\ &\quad+uv\left(\left\{\zeta\in E_{\lambda}^{c}:\,\frac{|\mathcal{A}_{\mathcal{D}}g_{2}(\zeta)|}{w(\zeta)}>\frac{\lambda}{2}\right\}\right)+uv(E_{\lambda})\\ &:=(I)+(II)+(III).\end{split}\]
\((III)\) is controlled by \(\frac{[uw]_{\mathrm{B}_{1}(w)}}{\lambda}\int_{\mathbb{D}}|g|\,uw\,dA\) using Proposition 5.1. To control \((I)\) and \((II)\), note that, using the \(\mathrm{B}_{1}\) condition for \(w\),
\[\mathcal{A}_{\mathcal{D}}g_{1}(\zeta)w^{-1}(\zeta)\leq[w]_{\mathrm{B}_{1}}\sum_{I\in\mathcal{D}}\frac{\int_{Q_{I}}g_{1}\,dA}{\int_{Q_{I}}w\,dA}\chi_{Q_{I}}(\zeta)=[w]_{\mathrm{B}_{1}}\mathcal{A}_{\mathcal{D},w}(g_{1}w^{-1})(\zeta).\]
Recalling the dyadic regularization from (3.12), set \(\widetilde{g_{1}}=(g_{1})_{\mathcal{D}}\). By Lemma 3.3 (ii), \(\int_{Q_{I}}\widetilde{g}_{1}\,dA=\int_{Q_{I}}g_{1}\,dA\) for each \(I\in\mathcal{D}\), whence
\[\mathcal{A}_{\mathcal{D},w}(g_{1}w^{-1})=\mathcal{A}_{\mathcal{D},w}(\widetilde{g_{1}}w^{-1}).\]
Furthermore, for \(I\in\mathcal{D}\) and \(\zeta\in T_{I}\subset Q_{I}\), \(w^{-1}(\zeta)\leq[w]_{\mathrm{B}_{1}}|T_{I}|\left(\int_{T_{I}}w\,dA\right)^{-1}\), which, combined with (5.5), yields
\[\widetilde{g_{1}}w^{-1}\leq[w]_{\mathrm{B}_{1}}\sum_{I\in\mathcal{D}}\langle g_{1}w^{-1}\rangle_{w,T_{I}}\chi_{T_{I}}\leq c_{w}\times[w]_{\mathrm{B}_{1}}\times\lambda.\]
On the other hand, defining \(\widetilde{g_{2}}=\sum_{k}\langle g_{2}\rangle_{Q_{\lambda,k}}\chi_{Q_{\lambda,k}}\), (5.6) gives \(\widetilde{g_{2}}w^{-1}\leq c_{w}[w]_{\mathrm{B}_{1}}\lambda\), and by the support condition on \(g_{2}\), for \(\zeta\in E_{\lambda}^{c}\) there holds
\[\begin{split}\mathcal{A}_{\mathcal{D}}g_{2}(\zeta)w^{-1}(\zeta)&\leq[w]_{\mathrm{B}_{1}}\sum_{I\in\mathcal{D}}\frac{\int_{Q_{I}}g_{2}}{\int_{Q_{I}}w}\chi_{Q_{I}}(\zeta)\\ &\leq[w]_{\mathrm{B}_{1}}\sum_{k}\sum_{\begin{subarray}{c}I\in\mathcal{D}\\ Q_{I}\supset Q_{\lambda,k}\end{subarray}}\frac{\int_{Q_{I}}g_{2}}{\int_{Q_{I}}w}\chi_{Q_{I}}(\zeta)\\ &\leq[w]_{\mathrm{B}_{1}}\mathcal{A}_{\mathcal{D},w}(\widetilde{g_{2}}w^{-1})(\zeta).\end{split}\]
Therefore,
\[\begin{split}(I)+(II)&\lesssim\sum_{j=1,2}\frac{\left([w]_{\mathrm{B}_{1}}\right)^{2}}{\lambda^{2}}\int_{\mathbb{D}}\left|\mathcal{A}_{\mathcal{D},w}(\widetilde{g_{j}}w^{-1})\right|^{2}\,uv\,dA\\ &\lesssim\sum_{j=1,2}\frac{\left(c_{w}[w]_{\mathrm{B}_{1}}[uw]_{\mathrm{B}_{2}(w)}\right)^{2}}{\lambda^{2}}\int_{\mathbb{D}}\left|\widetilde{g_{j}}w^{-1}\right|^{2}\,uv\,dA\\ &\lesssim\sum_{j=1,2}\frac{\left(c_{w}[w]_{\mathrm{B}_{1}}\right)^{3}\left([uw]_{\mathrm{B}_{2}(w)}\right)^{2}}{\lambda}\int_{\mathbb{D}}\left|\widetilde{g_{j}}w^{-1}\right|\,uv\,dA,\end{split}\]
where the second inequality follows from (2.6) with \(p=2\), \(w\mapsto uw\), and \(v\mapsto w\). The proof of (5.4) is concluded by again invoking the fact that \(uw\in\mathrm{B}_{1}\) to obtain
\[\int_{F}\left\langle h\right\rangle_{F}uw\,dA\leq[uw]_{\mathrm{B}_{1}}\int_{F}|h|\,uw\,dA,\quad F\in\{Q_{I},T_{I}\}_{I\in\mathcal{D}},\]
for any \(h\in L^{1}(uw)\), whence
\[\int_{\mathbb{D}}\left|\widetilde{g_{j}}w^{-1}\right|\,uv\,dA\lesssim[uw]_{\mathrm{B}_{1}}\int_{\mathbb{D}}|g|\,uw\,dA.\]
2309.03136
Concepts in Monte Carlo sampling
We discuss modern ideas in Monte Carlo algorithms in the simplified setting of the one-dimensional anharmonic oscillator. After reviewing the connection between molecular dynamics and Monte Carlo, we introduce the Metropolis and the factorized Metropolis algorithms, as well as lifted non-reversible Markov chains. We furthermore illustrate the concept of thinning, where moves are accepted by simple bounding potentials rather than, in our case, the harmonic and quartic constituents of the anharmonic oscillator. We point out the multiple connections of our example algorithms with real-world sampling problems. The paper is fully self-contained and Python implementations are provided.
Gabriele Tartero, Werner Krauth
2023-09-06T16:15:54Z
http://arxiv.org/abs/2309.03136v1
# Concepts in Monte Carlo sampling

###### Abstract

We discuss modern ideas in Monte Carlo algorithms in the simplified setting of the one-dimensional anharmonic oscillator. After reviewing the connection between molecular dynamics and Monte Carlo, we introduce the Metropolis and the factorized Metropolis algorithms, as well as lifted non-reversible Markov chains. We furthermore illustrate the concept of thinning, where moves are accepted by simple bounding potentials rather than, in our case, the harmonic and quartic constituents of the anharmonic oscillator. We point out the multiple connections of our example algorithms with real-world sampling problems. The paper is fully self-contained and Python implementations are provided.

## I Introduction

The Monte Carlo method is an important tool for producing samples \(x\) from a given probability distribution \(\pi(x)\). In real-life applications, algorithms and computer implementations for this sampling problem can be highly complex. In this paper, we rather discuss a dozen distinct Monte Carlo algorithms in the severely stripped-down setting of a particle in a one-dimensional anharmonic potential
\[U_{24}(x)=\frac{x^{2}}{2}+\frac{x^{4}}{4} \tag{1}\]
consisting of a harmonic term, \(U_{2}=x^{2}/2\), and a quartic one, \(U_{4}=x^{4}/4\). For concreteness, we also provide short example programs. For the anharmonic oscillator, the distribution to be sampled is the Boltzmann distribution
\[\pi_{24}(x)=\exp\left[-\beta U_{24}(x)\right], \tag{2}\]
where \(\beta=(k_{B}T)^{-1}\) is the inverse of the temperature \(T\), and \(k_{B}\) denotes the Boltzmann constant. The connection between the potential \(U_{24}\) and the distribution \(\pi_{24}\) derives from the following. In classical mechanics, an isolated particle is governed by Newton's law and, in a one-dimensional confining potential, oscillates between two turning points. A certain function \(\pi^{\text{iso}}(x)\) describes the fraction of time that the particle spends at position \(x\) during one period and, therefore, during a long time interval containing many periods. If the particle is in contact with a thermostat, this function turns into a probability distribution for finding the particle at a position \(x\) at large times \(t\), and it is exactly the Boltzmann distribution \(\pi_{24}(x)\) of Eq. (2), as we will discuss (see Sec. II). The molecular-dynamics method generally accesses this distribution through the numerical solution of Newton's equation in contact with a thermostat.

The Monte Carlo method addresses the sampling problem more abstractly than molecular dynamics, as it samples (obtains samples \(x\) from) the distribution \(\pi_{24}(x)\) without simulating a physical process. The sequence of twelve short yet intricate Monte Carlo algorithms that we present here will lead us from the beginning of the method, namely direct sampling and the reversible Metropolis algorithm and its extensions (Sec. III), to non-reversible Markov-chain algorithms (Sec. IV) and to advanced approaches that sample the target distribution with a minimum of evaluations of the potential (Sec. V). Some mathematical results are collected separately (App. A). Our algorithms are presented in compact pseudo-code (as in [1]) and implemented in short, openly accessible, Python programs (App. B). Their correctness is tested to high precision (App. C).
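To fix conventions for the sketches that follow, here is a minimal Python definition of the potential of Eq. (1) and the unnormalized Boltzmann weight of Eq. (2); it is our own helper, in the spirit of the App. B programs but not identical to them, and the function names are ours.

```python
import math

def U24(x):
    """Anharmonic potential of Eq. (1): harmonic plus quartic term."""
    return x ** 2 / 2 + x ** 4 / 4

def pi24(x, beta=1.0):
    """Unnormalized Boltzmann weight of Eq. (2) at inverse temperature beta."""
    return math.exp(-beta * U24(x))
```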
A companion paper [2] will translate the concepts discussed here to real-life settings and address efficiency questions, whereas in the present paper we are only concerned with the correctness of the sampling algorithms.

Figure 1: Isolated anharmonic oscillator at energy \(E\), subject to the potential \(U_{24}\) of Eq. (1).

## II From classical to statistical mechanics

If isolated from the environment, so that the energy is conserved, the anharmonic oscillator of Fig. 1 is a classical, periodic, one-dimensional deterministic system, and we may track the fraction of time per period that the particle spends near a given position \(x\) (Sec. II.1). When interacting with a heatbath (for which we suppose a concrete realization), the motion is piecewise deterministic [3], yet random. In this case, we may sample the Boltzmann distribution \(\pi_{24}\) through a molecular-dynamics modeling of the particle subject to Newton's laws and interacting with the thermostat (Sec. II.2). At the end of this section, we provide a Monte Carlo algorithm that directly samples \(x\) from the Boltzmann distribution (Sec. II.3).

### The isolated anharmonic oscillator

We may hold the particle fixed--with velocity \(v=0\)--then release it at time \(t=0\) from a position \(x_{\rm max}>0\). If it is isolated, the anharmonic oscillator conserves its energy \(E\), given by the sum of the kinetic and potential energies at all times \(t\geq 0\). It thus picks up velocity until it reaches the minimum of the potential at \(x=0\), then slows down and turns around at \(-x_{\rm max}\), where \(E\) equals the potential energy and the velocity again vanishes (see Fig. 2a). The energy \(E\) is then
\[E=\frac{x_{\rm max}^{2}}{2}+\frac{x_{\rm max}^{4}}{4}\Leftrightarrow x_{\rm max}=\sqrt{-1+\sqrt{1+4E}}, \tag{3}\]
as follows from solving a quadratic equation and taking a square root. In between the turning points \(-x_{\rm max}\) and \(x_{\rm max}\), the kinetic energy \(\frac{1}{2}({\rm d}x/{\rm d}t)^{2}\) is positive, and the conservation of energy can be written as
\[E=\frac{1}{2}\left(\frac{{\rm d}x}{{\rm d}t}\right)^{\!\!2}\!\!+U_{24}(x)\Leftrightarrow\frac{{\rm d}x}{{\rm d}t}=\pm\sqrt{2\left[E-U_{24}(x)\right]}, \tag{4}\]
which gives
\[{\rm d}t=\pm\sqrt{\frac{1}{2\left[E-U_{24}(x)\right]}}\,{\rm d}x. \tag{5}\]
The period \(\tau\) of the motion, i.e., the time between two realizations of a given position and velocity, corresponds to four times the interval from \(x=0\) to \(x_{\rm max}\),
\[\tau=4\int_{0}^{\tau/4}\!\!{\rm d}t=4\int_{0}^{\sqrt{-1+\sqrt{1+4E}}}\frac{1}{\sqrt{2\left[E-U_{24}(x)\right]}}\,{\rm d}x=4\sqrt{\frac{2}{1+\sqrt{1+4E}}}\,K\left(\frac{1-\sqrt{1+4E}}{1+\sqrt{1+4E}}\right), \tag{6}\]
where \(K\) is the complete elliptic integral of the first kind (see Fig. 3). For small \(E\), the period \(\tau\) agrees with that of the harmonic oscillator, which is famously independent of \(x_{\rm max}\), thus of \(E\). For large \(E\), in contrast, the period \(\tau\sim E^{-1/4}\) approaches that of the quartic oscillator (see App. A for some mathematical details). Equation (5) yields the fraction \(\pi^{\rm iso}(x)\) of time that the particle spends between \(x\) and \(x+{\rm d}x\) over a semi-period,
\[\pi^{\rm iso}(x)=\frac{2}{\tau}\sqrt{\frac{1}{2\left[E-U_{24}(x)\right]}}, \tag{7}\]
with \(-x_{\rm max}<x<x_{\rm max}\).
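As a numerical sanity check on Eqs. (5) and (6), the following sketch, our own and assuming scipy is available, computes the period \(\tau\) by quadrature of Eq. (5). The substitution \(x=x_{\rm max}\sin\theta\) removes the inverse-square-root singularity at the turning point, since \(2\left[E-U_{24}(x_{\rm max}\sin\theta)\right]=\cos^{2}\theta\,\left[x_{\rm max}^{2}+\frac{1}{2}x_{\rm max}^{4}(1+\sin^{2}\theta)\right]\).

```python
import math
from scipy.integrate import quad

def period(E):
    """Period tau of the isolated oscillator, Eq. (6), by quadrature of Eq. (5).
    After x = x_max*sin(theta), the integrand is smooth on [0, pi/2]."""
    x_max = math.sqrt(-1.0 + math.sqrt(1.0 + 4.0 * E))
    def integrand(theta):
        return x_max / math.sqrt(x_max ** 2 + 0.5 * x_max ** 4 * (1.0 + math.sin(theta) ** 2))
    tau_quarter, _ = quad(integrand, 0.0, math.pi / 2)
    return 4.0 * tau_quarter

print(period(0.01))  # close to 2*pi, the small-E (harmonic) period
```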
The function \(\pi^{\rm iso}(x)\) is normalized, but it does not represent the probability for the particle to be at \(x\) at a fixed time \(t\), because of the deterministic nature of the motion (see Fig. 2b). To simulate the isolated anharmonic oscillator, we could numerically integrate the first-order ordinary differential equation on the right of Eq. (4) over a quarter period and then piece together the entire trajectory of Fig. 2a. However, this method is specific to one-dimensional dynamical systems [4, §11].

Figure 2: Isolated anharmonic oscillator, as represented in Fig. 1. (a): Periodic trajectory with amplitude \(2x_{\rm max}\) and period \(\tau\). (b): Normalized function \(\pi^{\rm iso}\). The fraction of time \({\rm d}t/\tau\) spent per period between \(x\) and \(x+{\rm d}x\) is \(\pi^{\rm iso}(x){\rm d}x\).

Figure 3: Period \(\tau\) of the isolated anharmonic oscillator as a function of the energy \(E\). The period of the harmonic oscillator is independent of \(E\), while that of the quartic oscillator scales as \(E^{-1/4}\). Here, \(\Gamma\) denotes the Euler gamma function (see App. A).

In order to reflect the general case, we numerically integrate Newton's law for the force \(F\):
\[F=m\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}x(t),\quad\text{with }F=-\frac{\mathrm{d}U_{24}}{\mathrm{d}x}=-x-x^{3}. \tag{8}\]
Substituting the time differential \(\mathrm{d}t\) by a very small finite interval \(\Delta t\), appropriate for stepping from \(t\) to \(t+\Delta t\), to \(t+2\Delta t\), and so on, we obtain
\[x(t+\Delta t)=x(t)+v(t)\Delta t, \tag{9}\]
\[v(t+\Delta t)=v(t)-\left[x(t)+x(t)^{3}\right]\Delta t. \tag{10}\]
Alg. 0 (isolated-dynamics) implements one iteration of this naive algorithm, which we set off with an initial position \(x(t=0)=x_{\mathrm{max}}\) and an initial velocity \(v(t=0)=0\). The output can then be fed back into the input of the program. As with most isolated-molecular-dynamics codes, Alg. 0 is unstable--the energy will slowly increase with time, then diverge. To obtain good approximate results, we should use a small discretization \(\Delta t\) and not run the program up to excessively large values of \(t\).

```
procedure isolated-dynamics
  input x, v, t
  t ← t + Δt
  x' ← x + v Δt
  v ← v − (x + x³) Δt
  x ← x'
  output x, v, t
```

**Algorithm 0** isolated-dynamics. Naive integration of Newton's equations for the isolated anharmonic oscillator.

### Introducing a thermal bath

Liquids, gases and other systems described by statistical mechanics are generally composed of particles that interact and exchange energy and momentum. Any subsystem interacts with its environment and therefore does not conserve energy and momentum. For the anharmonic oscillator, this may be modeled by an external heatbath at temperature \(T\), represented by a box composed of a very large number of hard-sphere particles of mass \(m=1\) that fly about randomly with velocities given by the Maxwell distribution. For concreteness, we imagine the anharmonic oscillator to be in contact with the heatbath through a semi-permeable elastic "thermostat", a stick that vibrates back and forth in an infinitesimal interval around \(x=0\), and that is also of mass one. At each collision of the thermostat with a heatbath particle, their two velocities are exchanged.
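Before coupling the oscillator to the bath, we note that a direct Python transcription of one iteration of Alg. 0 reads as follows; this is our own sketch, in the spirit of the App. B programs, with feeding the output back as input generating the (approximate) trajectory of Fig. 2a.

```python
def isolated_dynamics(x, v, t, dt=1e-4):
    """One iteration of Alg. 0: naive Euler integration of Eqs. (9) and (10)."""
    t += dt
    x_new = x + v * dt
    v -= (x + x ** 3) * dt  # force -dU24/dx = -x - x**3, evaluated at the old x
    return x_new, v, t
```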
### Introducing a thermal bath

Liquids, gases and other systems described by statistical mechanics are generally composed of particles that interact and exchange energy and momentum. Any subsystem interacts with its environment and therefore does not conserve energy and momentum. For the anharmonic oscillator, this may be modeled by an external heatbath at temperature \(T\), represented by a box composed of a very large number of hard-sphere particles of mass \(m=1\) that fly about randomly with velocities given by the Maxwell distribution. For concreteness, we imagine the anharmonic oscillator to be in contact with the heatbath through a semi-permeable elastic "thermostat", a stick that vibrates back and forth in an infinitesimal interval around \(x=0\), and that is also of mass one. At each collision of the thermostat with a heatbath particle, their two velocities are exchanged.

We may imagine that the anharmonic oscillator, as it approaches \(x=0\), passes through the thermostat without interaction with probability \(1/2\), and otherwise bounces off with the velocity of the stick. The particle trajectory is then deterministic except at the origin (see Fig. 4). Statistical mechanics teaches us that, although all the particles in the heatbath are Maxwell-distributed, the thermostat behaves differently. In particular, since the latter lies at a fixed position (up to an infinitesimal interval), its velocity follows the distribution \[\pi(v)\mathrm{d}v=\beta|v|\mathrm{e}^{-\beta v^{2}/2}\mathrm{d}v, \tag{11}\] often called the Maxwell boundary condition (see [1, Sec. 2.3.1]). It differs by the prefactor \(\beta|v|\) from the Maxwell distribution of one velocity component. The velocity distribution of the thermostat in Eq. (11) can be sampled as \[v=\pm\sqrt{\frac{-2\log\mathsf{ran}(0,1)}{\beta}}, \tag{12}\] and the Maxwell boundary condition, thus realized with a single random number, exactly represents the infinite heatbath of Fig. 4.

Figure 4: Anharmonic oscillator of Eq. (1) interacting with a heatbath at temperature \(T\) through an elastic semi-permeable thermostat vibrating in an infinitesimal interval about \(x=0\).

Figure 5: Anharmonic oscillator in contact with the thermostat of Fig. 4. (a): Piecewise deterministic trajectory with random kicks at \(x=0\). (b): At large \(t\), when the initial configuration \(x(t=0)\) is forgotten, the particle position follows the Boltzmann distribution \(\pi_{24}\) of Eq. (2).

After a few collisions (see Fig. 5a), the particle has forgotten its initial position \(x(0)\), and it makes sense to speak of the probability distribution at time \(t\). Exactly given by \(\pi_{24}(x)\) in the limit \(\Delta t\to 0\), it substantially differs from \(\pi^{\text{iso}}\) of Fig. 2b and is naively sampled by Alg. 1 (thermostat-dynamics).

```
procedure thermostat-dynamics
input x, v, t
  x′ ← x + vΔt
  t ← t + Δt
  Υ ← ran(0,1)
  if x·x′ < 0 and Υ < 1/2:
    v ← −sign(v)·√(−2β⁻¹ log ran(0,1))
  else:
    v ← v − (x + x³)Δt
  x ← x′
output x, v, t
```

**Algorithm 1** thermostat-dynamics. Naive molecular dynamics for the anharmonic oscillator in contact with the thermostat of Fig. 4.
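A Python transcription of Alg. 1, under the same reading of the listing (the crossing test and the placement of the Newtonian update in the else-branch are our reconstruction):

```
import math, random

def thermostat_dynamics(x, v, t, beta=1.0, dt=1e-4):
    # One iteration of Alg. 1: as the particle crosses x = 0, with
    # probability 1/2 its velocity is replaced by a thermal one, Eq. (12).
    x_new = x + v * dt
    t += dt
    if x * x_new < 0 and random.random() < 0.5:   # the particle meets the stick
        v = -math.copysign(1.0, v) * math.sqrt(
            -2.0 * math.log(random.random()) / beta)
    else:
        v -= (x + x**3) * dt                      # Newtonian update, as in Alg. 0
    return x_new, v, t
```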
A Markov chain with transition matrix \(P(x,x^{\prime})\), whose probability distribution \(\pi^{\{t\}}\) evolves in time according to Eq. (15), requires that, at large \(t\), \(x_{t}\) samples the distribution \(\pi^{\{t\to\infty\}}=\pi\). For this to take place, the transition matrix \(P\) must satisfy, for all \(x\in\Omega\), the global-balance condition, \[\pi(x)=\sum_{x^{\prime}\in\Omega}\pi(x^{\prime})P(x^{\prime},x)\quad\text{(global balance)}, \tag{16}\] which is nothing but the steady-state version of Eq. (15).

The strategy for sampling \(\pi\) implied in Eqs. (15) and (16) represents a monumental investment, as we have to wait a long time until \(\pi^{\{t\}}\sim\pi\) in order to get a single sample of \(\pi\). It is not uncommon for this mixing time to correspond to weeks or even years of computer time [5].

The algorithms in this section are more restrictive than required by Eq. (16). They satisfy, for all \(x,x^{\prime}\in\Omega\), the detailed-balance condition: \[\pi(x)P(x,x^{\prime})=\pi(x^{\prime})P(x^{\prime},x)\quad\text{(detailed balance)}. \tag{17}\] It suffices to sum Eq. (17) over all \(x^{\prime}\in\Omega\) (using the conservation of probabilities \(\sum_{x^{\prime}}P(x,x^{\prime})=1\)) in order to see that detailed balance implies global balance.

Detailed-balance algorithms are time-reversible. This means that, at large \(t\) (in equilibrium), any segment of the chain (for example \([a\to b\to c]\) in Fig. 7) at subsequent time steps is sampled with the same probability \(\mathbb{P}\) as the time-reversed segment. In our example, \(\mathbb{P}(a\to b\to c)\) is pieced together from the probability \(\pi(a)\) to sample \(a\) and the transition-matrix probabilities to move from \(a\) to \(b\) and then from \(b\) to \(c\), so that \[\mathbb{P}(a\to b\to c)=\underbrace{\pi(a)P(a,b)}_{\pi(b)P(b,a)\text{ etc.}}P(b,c)\\ =\pi(c)P(c,b)P(b,a)=\mathbb{P}(c\to b\to a), \tag{18}\] where we have twice used the detailed-balance condition. By construction, reversible algorithms thus have no net flows (the flow \(a\to b\to c\) is cancelled by the flow \(c\to b\to a\)), and this points to a very serious restriction imposed by the detailed-balance condition: they can usually only move around \(\Omega\) diffusively, that is, slowly.

In this section, we will first discuss the seminal reversible algorithm due to Metropolis et al. (Sec. III.1). We will then explore a variant of the Metropolis algorithm which introduces a crucial factorization (Sec. III.2). We finally discuss the consensus principle at the origin of modern developments (Sec. III.3).

### The Metropolis chain

To sample the distribution \(\pi_{24}\) with a reversible transition matrix \(P(x,x^{\prime})\), we impose the detailed-balance condition \(\pi(x)P(x,x^{\prime})=\pi(x^{\prime})P(x^{\prime},x)\) for any pair \(x\) and \(x^{\prime}\) in \(\Omega\). To this end, we may choose \[\pi(x)P(x,x^{\prime})\propto\min\left[\pi(x),\pi(x^{\prime})\right]\quad\text{for }x\neq x^{\prime}. \tag{19}\] The right-hand side of Eq. (19) is symmetric in \(x\) and \(x^{\prime}\), so that the left-hand side must also be symmetric. Therefore, detailed balance is automatically satisfied. Dividing both sides by \(\pi(x)\), we arrive at the equation famously proposed by Metropolis et al. in 1953: \[P^{\text{Met}}(x,x^{\prime})\propto\min\left[1,\frac{\pi(x^{\prime})}{\pi(x)}\right]\quad\text{for }x\neq x^{\prime}. \tag{20}\] Let us discuss the difference between a transition matrix and a filter, in order to render Eq. (20) explicit and get rid of the proportionality sign. Indeed, the move from \(x\) to \(x^{\prime}\neq x\) proceeds in two steps. It is first proposed with a symmetric _a priori_ probability \(\mathcal{A}(x,x^{\prime})\) and then accepted or rejected with a filter: \[\underbrace{P^{\text{Met}}(x,x^{\prime})}_{\text{transition matrix}}=\underbrace{\mathcal{A}(x,x^{\prime})}_{a\text{ priori probability}}\overbrace{\mathcal{P}^{\text{Met}}(x,x^{\prime})}^{\text{Metropolis filter}}.\] For the Metropolis algorithm, a proposed move \(x\to x^{\prime}\) (with \(x^{\prime}\neq x\)) is thus accepted with probability \[\mathcal{P}^{\text{Met}}(x,x^{\prime})=\min\left[1,\frac{\pi(x^{\prime})}{\pi(x)}\right]. \tag{21}\] If the move \(x\to x^{\prime}\) is rejected, the particle remains at \(x\). This sets the diagonal transition-matrix elements \(P(x,x)\) and guarantees that \(\sum_{x^{\prime}}P(x,x^{\prime})=1\). Algorithm 3 (metropolis) implements the symmetric _a priori_ probability as a uniform displacement \(\Delta=x^{\prime}-x\), which is as likely as \(-\Delta\). The Metropolis filter is implemented with a uniform random number \(\Upsilon\) between \(0\) and \(1\), which we refer to as a "pebble". For large times \(t\), when the initial configuration is forgotten, the algorithm samples \(\pi_{24}\). In all the following Markov-chain algorithms, this large-\(t\) condition is silently understood.
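A minimal Python sketch of one step of Alg. 3, with \(\beta=1\) and a step range \(\delta\) of our own choosing:

```
import math, random

def pi24(x, beta=1.0):
    # Unnormalized Boltzmann weight exp(-beta*U_24) of Eq. (2).
    return math.exp(-beta * (x**2 / 2 + x**4 / 4))

def metropolis(x, delta=1.0):
    # One iteration of Alg. 3: symmetric proposal, then the filter of Eq. (21).
    x_prime = x + random.uniform(-delta, delta)
    upsilon = random.random()                      # the "pebble"
    if upsilon < min(1.0, pi24(x_prime) / pi24(x)):
        x = x_prime                                # accepted; else remain at x
    return x

x = 0.0
for _ in range(10**5):
    x = metropolis(x)                              # at large t, x samples pi_24
```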
### Factorizing the Metropolis filter

The Metropolis algorithm is famous, but it is not the end of the story. A modern variant is useful for distributions \(\pi\) that factorize: \[\pi=\pi_{a}\pi_{b}\pi_{c}\cdots\pi_{k}=\prod_{\xi=a,\ldots,k}\pi_{\xi}. \tag{22}\] For example, the Boltzmann distribution \(\pi=\exp\left(-\beta U\right)\) takes the above form if its potential \(U\) can be written as a sum over pair potentials. The Metropolis filter of Eq. (21) is then \[\mathcal{P}^{\text{Met}}(x,x^{\prime})=\min\left[1,\frac{\pi_{a}(x^{\prime})\pi_{b}(x^{\prime})\cdots\pi_{k}(x^{\prime})}{\pi_{a}(x)\pi_{b}(x)\cdots\pi_{k}(x)}\right]\\ =\min\left[1,\prod_{\xi=a,\ldots,k}\frac{\pi_{\xi}(x^{\prime})}{\pi_{\xi}(x)}\right], \tag{23}\] and it is implemented in this way in countless computer programs. An alternative to Eq. (23) is the factorized Metropolis filter [6], \[\mathcal{P}^{\text{fact}}(x,x^{\prime})=\min\left[1,\frac{\pi_{a}(x^{\prime})}{\pi_{a}(x)}\right]\cdots\min\left[1,\frac{\pi_{k}(x^{\prime})}{\pi_{k}(x)}\right]\\ =\prod_{\xi=a,\ldots,k}\min\left[1,\frac{\pi_{\xi}(x^{\prime})}{\pi_{\xi}(x)}\right]. \tag{24}\] If used naively, it gives lower acceptance probabilities than the Metropolis filter, but it also satisfies the detailed-balance condition. Let us prove this for the anharmonic oscillator, where \[\mathcal{P}^{\text{fact}}_{24}(x,x^{\prime})=\min\left[1,\frac{\pi_{2}(x^{\prime})}{\pi_{2}(x)}\right]\min\left[1,\frac{\pi_{4}(x^{\prime})}{\pi_{4}(x)}\right], \tag{25}\] and where \[\pi_{24}(x)=\exp\left(-\tfrac{x^{2}}{2}-\tfrac{x^{4}}{4}\right)=\exp\left(-\tfrac{x^{2}}{2}\right)\exp\left(-\tfrac{x^{4}}{4}\right)=\pi_{2}(x)\pi_{4}(x), \tag{26}\] illustrating that a potential that is a sum of terms yields a Boltzmann distribution that factorizes.
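Before the formal argument, the detailed-balance property of the factorized filter can be checked numerically for random pairs of configurations; a sketch (\(\beta=1\), names ours) of the identity formalized in Eq. (27) below:

```
import math, random

def pi2(x): return math.exp(-x**2 / 2)
def pi4(x): return math.exp(-x**4 / 4)

def p_fact(x, xp):
    # Factorized Metropolis filter of Eq. (25), beta = 1.
    return min(1.0, pi2(xp) / pi2(x)) * min(1.0, pi4(xp) / pi4(x))

for _ in range(5):
    x, xp = random.uniform(-2, 2), random.uniform(-2, 2)
    lhs = pi2(x) * pi4(x) * p_fact(x, xp)          # pi_24(x) P^fact(x, x')
    rhs = pi2(xp) * pi4(xp) * p_fact(xp, x)        # pi_24(x') P^fact(x', x)
    print(abs(lhs - rhs) < 1e-13)                  # True: detailed balance
```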
Detailed balance is satisfied because of the following: \[\pi_{24}(x)\mathcal{P}^{\text{fact}}_{24}(x,x^{\prime})\propto\underbrace{\pi_{2}(x)\min\left[1,\frac{\pi_{2}(x^{\prime})}{\pi_{2}(x)}\right]}_{\min\left[\pi_{2}(x),\pi_{2}(x^{\prime})\right]:\ x\leftrightarrow x^{\prime}\ \text{sym.}}\underbrace{\pi_{4}(x)\min\left[1,\frac{\pi_{4}(x^{\prime})}{\pi_{4}(x)}\right]}_{\min\left[\pi_{4}(x),\pi_{4}(x^{\prime})\right]:\ x\leftrightarrow x^{\prime}\ \text{sym.}}\\ \propto\pi_{24}(x^{\prime})\mathcal{P}^{\text{fact}}_{24}(x^{\prime},x), \tag{27}\] where we have dropped the symmetric _a priori_ probability \(\mathcal{A}\). Algorithm 4 (factor-metropolis) samples \(\pi_{24}\). It implements the factorized filter in a way that we will soon discover to be naive.

```
procedure factor-metropolis
input x
  Δ ← ran(−δ, δ)
  x′ ← x + Δ
  Υ ← ran(0,1)
  if Υ < min[1, π₂(x′)/π₂(x)] · min[1, π₄(x′)/π₄(x)]:
    x ← x′
output x
```

**Algorithm 4** factor-metropolis. Sampling \(\pi_{24}\) naively with the factorized Metropolis filter.

### The consensus principle

The factorized Metropolis algorithm will turn out to be particularly powerful in the presence of many factors, even an infinite number of them. This is because of the consensus principle, which we now discuss and which, in the end, will avoid the evaluation of the lengthy product in Eq. (24). For the anharmonic oscillator, the consensus principle simply relies on the fact that the filter \[\mathcal{P}^{\text{fact}}_{24}(x,x^{\prime})=\underbrace{\min\left[1,\frac{\pi_{2}(x^{\prime})}{\pi_{2}(x)}\right]}_{p_{2}\ \text{(in Table 1)}}\underbrace{\min\left[1,\frac{\pi_{4}(x^{\prime})}{\pi_{4}(x)}\right]}_{p_{4}\ \text{(in Table 1)}} \tag{28}\] is a product \(p_{2}p_{4}\) of probabilities that may be interpreted as independent (see Table 1). This holds although the two factors are evidently correlated and, for example, \(\pi_{2}\) is small when \(\pi_{4}\) is.

\begin{table} \begin{tabular}{|c|c|c|} \hline Harmonic \(\backslash\) Quartic & Accept (\(p_{4}\)) & Reject (\(1-p_{4}\)) \\ \hline Accept (\(p_{2}\)) & \(p_{2}p_{4}\) & \(p_{2}(1-p_{4})\) \\ \hline Reject (\(1-p_{2}\)) & \((1-p_{2})p_{4}\) & \((1-p_{2})(1-p_{4})\) \\ \hline \end{tabular} \end{table} Table 1: Consensus probabilities for the factorized Metropolis filter of Eq. (28). A move is accepted only by consensus of both factors, with probability \(p_{2}p_{4}\).

In Alg. 5 (factor-metropolis(patch)), two independent decisions are taken, one for the harmonic and one for the quartic factor, and the proposed move is finally accepted only if it is accepted by both factors. The output is identical to that of Alg. 4 (factor-metropolis), and it again samples the Boltzmann distribution \(\pi_{24}\).

```
procedure factor-metropolis(patch)
input x
  Δ ← ran(−δ, δ)
  x′ ← x + Δ
  Υ₂ ← ran(0,1);  Υ₄ ← ran(0,1)
  if Υ₂ < min[1, π₂(x′)/π₂(x)] and Υ₄ < min[1, π₄(x′)/π₄(x)]:
    x ← x′                         (move accepted by consensus)
output x
```

**Algorithm 5** factor-metropolis(patch). Patch of Alg. 4, implementing the consensus principle.
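A quick Monte Carlo sanity check of Table 1: one pebble tested against the product \(p_{2}p_{4}\) (as in Alg. 4) and two independent pebbles (as in Alg. 5) accept at the same rate; a sketch with a pair of positions of our own choosing:

```
import math, random

x, x_prime, n = 0.3, 1.1, 10**6
p2 = min(1.0, math.exp(-(x_prime**2 - x**2) / 2))
p4 = min(1.0, math.exp(-(x_prime**4 - x**4) / 4))
one_pebble = sum(random.random() < p2 * p4 for _ in range(n)) / n
two_pebbles = sum(random.random() < p2 and random.random() < p4
                  for _ in range(n)) / n
print(p2 * p4, one_pebble, two_pebbles)   # agree up to ~1e-3 statistical error
```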
## IV Going beyond reversibility

In a tradition that started with the Metropolis algorithm, many decades ago, Markov chains are normally designed with the quite restrictive detailed-balance condition, although they are only required to satisfy global balance. In this section, we illustrate modern attempts to overcome the detailed-balance condition in a systematic way, within the framework of "lifted" Markov chains [7; 8]. Our first lifted Markov chain, Alg. 6 (lifted-metropolis), holds in fewer than a dozen lines of code, but is quite intricate (Sec. IV.1). In recent applications, lifted Markov chains are often formulated for continuous time. For the anharmonic oscillator, this gives the "zig-zag" algorithm [9], where the particle moves back and forth as in molecular dynamics (as in Alg. 0 (isolated-dynamics)), but at fixed velocity. Newton's equations are not solved, but \(\pi_{24}\) is still sampled exactly, and quite magically so (Sec. IV.2). The decision to reverse the velocity (from "zig" to "zag") may again be broken up into independent decisions of the harmonic and the quartic factors, foreshadowing strategies that have profoundly impacted real-life sampling approaches (Sec. IV.3).

### Lifting the Metropolis chain

The Metropolis algorithm, from a position \(x\), proposes positive and negative displacements \(\Delta\) for the anharmonic oscillator with symmetric _a priori_ probabilities (see Alg. 3 (metropolis)). The filter then imposes that the net flow vanishes, so there will be as many particles going from \(x\) to \(x+\Delta\) as in the reverse direction, even if, say, \(\pi(x)\ll\pi(x+\Delta)\). We will now break detailed balance with a non-reversible "lifted" Markov chain [7; 8] that only respects global balance, while having \(\pi_{24}\) as its stationary distribution.

Let us suppose, in a first step, that the positions \(x\) lie on the grid \(\{\ldots,-2\Delta,-\Delta,0,\Delta,2\Delta,\ldots\}\), with moves allowed only between nearest neighbors. Each configuration \(x\) is duplicated ("lifted") into two configurations, a forward-moving one \(\{x,+1\}\) and a backward-moving one \(\{x,-1\}\). From a lifted configuration \(\{x,\sigma\}\), the lifted Metropolis algorithm only proposes a forward move if \(\sigma=1\), and only a backward move if \(\sigma=-1\). In summary, \[P^{\text{lift}}\left(\{x,\sigma\},\{x+\sigma\Delta,\sigma\}\right)=\min\left[1,\frac{\pi_{24}(x+\sigma\Delta)}{\pi_{24}(x)}\right],\] where \(\sigma=\pm 1\). When this move is not accepted by the Metropolis filter, the algorithm flips the direction and instead moves from \(\{x,\sigma\}\) to \(\{x,-\sigma\}\): \[P^{\text{lift}}(\{x,\sigma\},\{x,-\sigma\})=1-\min\left[1,\frac{\pi_{24}(x+\sigma\Delta)}{\pi_{24}(x)}\right]. \tag{29}\] This algorithm clearly violates detailed balance as, for example, \[P^{\text{lift}}(\{x,+1\},\{x+\Delta,+1\})>0,\] \[P^{\text{lift}}(\{x+\Delta,+1\},\{x,+1\})=0.\] There is thus no backward flow for \(\sigma=+1\) and no forward flow for \(\sigma=-1\). On the other hand, the lifted Metropolis algorithm satisfies the global-balance condition of Eq. (16) with the "ansatz" \[\pi_{24}^{\text{lift}}(\{x,\sigma\})=\frac{1}{2}\pi_{24}(x)\quad\text{for }\sigma=\pm 1. \tag{30}\] For example, the flow into the lifted configuration \(\{x,+1\}\) satisfies \[\pi_{24}^{\text{lift}}(\{x,+1\})=\pi_{24}^{\text{lift}}(\{x-\Delta,+1\})P^{\text{lift}}(\{x-\Delta,+1\},\{x,+1\})\\ +\pi_{24}^{\text{lift}}(\{x,-1\})P^{\text{lift}}(\{x,-1\},\{x,+1\}). \tag{31}\] The two contributions on the right-hand side of Eq. (31) correspond on the one hand to the accepted moves from \(\{x-\Delta,+1\}\), and on the other hand to the lifted moves from \(\{x,-1\}\), when the move from \(\{x,-1\}\) towards \(\{x-\Delta,-1\}\) is rejected (see Fig. 8).
Equation (31) can be transformed into \[\pi_{24}(x)=\pi_{24}(x-\Delta)\min\left[1,\frac{\pi_{24}(x)}{\pi_{24}(x-\Delta)}\right]+\pi_{24}(x)\left\{1-\min\left[1,\frac{\pi_{24}(x-\Delta)}{\pi_{24}(x)}\right]\right\},\] which is identically satisfied. We have shown that the lifted Metropolis algorithm satisfies the global-balance condition for the ansatz of Eq. (30), which splits \(\pi_{24}(x)\) equally between \(\{x,+1\}\) and \(\{x,-1\}\). The sequence \(\pi^{\{t\}}\) will actually converge towards this stationary distribution under very mild conditions that are satisfied for the anharmonic oscillator [10; 11].

Figure 8: Discretized lifted Metropolis algorithm for the anharmonic oscillator. The flow into the lifted configuration \(\{x,+1\}\) is indicated (see Eq. (31)).

In the lifted Metropolis algorithm, the particle, starting from \(x_{0}=0\), climbs uphill in direction \(\sigma\) until a move is rejected by the filter, when it remains at its current position but reverses its velocity to \(-\sigma\). The following downhill moves, again without rejections, are followed by another uphill climb, and so on, criss-crossing between the two wings of the potential \(U_{24}\). Algorithm 6 (lifted-metropolis) implements a version of the lifted Metropolis algorithm where the displacements \(\Delta\) are sampled from a positive interval. The algorithm outputs lifted configurations \(\{x,\sigma\}\) of which, remarkably, the \(x\) positions sample \(\pi_{24}\).

```
procedure lifted-metropolis
input {x, σ}                       (lifted sample at time t)
  Δ ← ran(0, δ)                    (δ > 0)
  x′ ← x + σΔ                      (x′ in direction σ from x)
  Υ ← ran(0,1)
  if Υ < min[1, π₂₄(x′)/π₂₄(x)]:  x ← x′
  else:  σ ← −σ
output {x, σ}                      (lifted sample at time t+1)
```

**Algorithm 6** lifted-metropolis. Non-reversible lifted version of Alg. 3 (metropolis). The \(x\)-positions that are output by this program sample \(\pi_{24}\).

### From discrete to continuous time

So far, we have discussed Markov chains that move between configurations indexed by an integer time \(t\), from \(x_{t}\) to \(x_{t+1}\). We now consider algorithms in continuous time (technically speaking, we consider Markov "processes"). For simplicity, we revisit the lifted Metropolis algorithm with its grid of positions \(\{\ldots,-2\Delta,-\Delta,0,\Delta,2\Delta,\ldots\}\) and with its nearest-neighbor moves, but consider the case of small \(\Delta\). It is then appropriate to rescale time such that a displacement \(\pm\Delta\) is itself undertaken in a time interval \(\Delta\). The particle in the anharmonic oscillator thus moves with unit absolute velocity, whose sense is reversed when there is a rejection. The downhill moves are all accepted, and even uphill moves are accepted with a probability close to one. One may sample the position of the next rejection, rather than running through the sequence of individual moves, because an uphill move starting, say, in positive direction from \(x=0\) is accepted with probability \(\exp\left[-\beta\Delta U_{24}(x=0)\right]\). Likewise, the probability for accepting a whole sequence of \(n\) uphill moves, at subsequent positions \(0,\Delta,\ldots,(n-1)\Delta\), and then rejecting move \(n+1\), is \[\mathbb{P}(0\to x_{\text{ev}})=\underbrace{\mathrm{e}^{-\beta\left[\Delta U_{24}(0)+\cdots+\Delta U_{24}((n-1)\Delta)\right]}}_{n\text{ accepted moves}}\underbrace{\left[1-\mathrm{e}^{-\beta\Delta U_{24}(n\Delta)}\right]}_{\text{rejection}}\to\beta\mathrm{e}^{-\beta U_{24}}\mathrm{d}U_{24}. \tag{32}\]
In the small-\(\Delta\) limit, the rejection is here expanded to first order, and \(\Delta U\) is replaced by \(\mathrm{d}U\). In our example of the anharmonic oscillator starting at \(x=0\), all the increments of \(\Delta U_{24}\) up to position \(x\) add up to the potential \(U_{24}(x)\). Equation (32) indicates that the value of \(U_{24}\) at which the velocity is reversed follows an exponential distribution in \(U_{24}\) [12]. As an exponential random number can be obtained as a logarithm of a uniform random number (see [1, Sec. 1.2.4]), this yields \[U_{24}(x_{\text{ev}})=-\beta^{-1}\,\log\mathsf{ran}(0,1)\,. \tag{33}\] Inverting \(U_{24}(x_{\text{ev}})=x_{\text{ev}}^{2}/2+x_{\text{ev}}^{4}/4\), this results in \[x_{\text{ev}}=\sigma\sqrt{-1+\sqrt{1-4\beta^{-1}\,\log\mathsf{ran}(0,1)}}. \tag{34}\] To sample the Boltzmann distribution \(\pi_{24}\), it now suffices to sample the turning points \(x_{\text{ev}}\) of the constant-velocity motion, alternatingly on the negative and positive branches of the potential, and then to sample the particle positions at equal time steps, as implemented in Alg. 7 (zig-zag). This event-driven continuous-time algorithm samples the Boltzmann distribution \(\pi_{24}\) (see Fig. 9). The event-driven version of Alg. 6 exists also for fixed, finite \(\Delta\), and it is often classified as "faster-than-the-clock" (see [1, Sec. 7.1.1]).

### Extending the consensus principle

We now replace the Metropolis filter in Alg. 7 (zig-zag) (contained in the formula for \(x_{\text{ev}}\)) by the factorized Metropolis filter, and then use the consensus principle. Starting again at \(x=0\), the particle now climbs up one hill for the harmonic factor and one for the quartic factor (see Fig. 10). For each factor, we can redo the argument of Eq. (32), with \(U_{2}\) or \(U_{4}\) instead of \(U_{24}\). In analogy with Eqs. (33) and (34), we can thus sample two "candidate" events, \[x_{\rm ev}^{(2)}=\sigma\sqrt{-2\beta^{-1}\,\log\,\mathtt{ran}(0,1)}, \tag{35}\] \[x_{\rm ev}^{(4)}=\sigma\sqrt[4]{-4\beta^{-1}\,\log\,\mathtt{ran}(0,1)}, \tag{36}\] with two independent random numbers. The consensus of the two factors is broken by the candidate event that comes first, \[x_{\rm ev}=\sigma\min\left(|x_{\rm ev}^{(2)}|,|x_{\rm ev}^{(4)}|\right), \tag{37}\] when the velocity must be reversed. We may again collect positions \(x\) at equal time steps. This is implemented in Alg. 8 (factor-zig-zag), which samples the Boltzmann distribution \(\pi_{24}\).
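The event loop of Alg. 8 compresses into a few lines of Python; a hedged sketch (\(\beta=1\) by default, names ours) that records only the turning points of Eqs. (35)-(37) and omits the equal-time collection of positions:

```
import math, random

def factor_zig_zag_turning_points(n_events, beta=1.0):
    # Successive turning points of Alg. 8: from each reversal the particle
    # crosses x = 0 without events (downhill legs are rejection-free) and
    # turns around at the earlier of the two candidate events.
    sigma, points = 1, []
    for _ in range(n_events):
        x2 = math.sqrt(-2.0 / beta * math.log(random.random()))       # Eq. (35)
        x4 = (-4.0 / beta * math.log(random.random())) ** 0.25         # Eq. (36)
        points.append(sigma * min(x2, x4))                             # Eq. (37)
        sigma = -sigma                      # velocity reversal: "zig" to "zag"
    return points
```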
## V Thinning: Or, Avoiding Evaluation

In molecular-dynamics algorithms such as Alg. 1, forces must be computed precisely in order to keep the trajectory on track. In contrast, Monte Carlo algorithms are decision problems where proposed moves must be accepted with a filter, for example the Metropolis filter \(\min[1,\exp\left(-\beta\Delta U\right)]\). As we discuss in this section, one can often base the accept/reject decision on a bounding potential \(\widehat{U}\), and thus avoid computing \(U\), \(\Delta U\), and their exponentials (Sec. V.1). In the continuous-time setting, one simply evaluates the derivative of the bounding potential and of the potential \(U\), in order to eliminate all bias due to the bounding (Sec. V.2). Combining this so-called "thinning" approach [13] with the factorization, we may, in the anharmonic oscillator, base our decision to accept moves on the consensus of harmonic and quartic bounding potentials. At the end, we will set up a Monte Carlo algorithm that evaluates a single factor potential, and only at the position where the proposed move is rejected by the bounding potential of that same factor (Sec. V.3). In the companion paper [2], we generalize this approach to real-life simulations of particles with long-range interactions that sample the Boltzmann distribution \(\exp\left(-\beta U\right)\) without ever evaluating \(U\).

Figure 10: Factorized zig-zag algorithm. Starting from \(x\) (here with \(\sigma=-1\)), the next event is given by the earliest event between \(x_{\rm ev}^{(2)}\) and \(x_{\rm ev}^{(4)}\) (here, by \(x_{\rm ev}^{(4)}=x_{\rm ev}\)).

Figure 9: Zig-zag algorithm (continuous-time event-driven lifted Metropolis chain). (a): The particle swings about the origin, turning around at positions \(x_{\rm ev}\) (sampled by Eq. (34)). (b): Piecewise deterministic constant-velocity trajectory. Particle positions are sampled at equal time steps.

### Introducing the bounding potential

We say that \(\widehat{U}\) is a bounding potential of a potential \(U\) if, for any pair of configurations \(x\) and \(x^{\prime}\), it satisfies \[\min\Big{(}1,\mathrm{e}^{-\beta\Delta\widehat{U}}\Big{)}\leq\min\big{(}1,\mathrm{e}^{-\beta\Delta U}\big{)}\quad\forall\,x,x^{\prime}\in\Omega, \tag{38}\] where \(\Delta\widehat{U}=\widehat{U}(x^{\prime})-\widehat{U}(x)\) and \(\Delta U=U(x^{\prime})-U(x)\). This requires \(\mathrm{d}\widehat{U}/\mathrm{d}x\) and \(\mathrm{d}U/\mathrm{d}x\) to have the same sign everywhere, with \(|\mathrm{d}\widehat{U}/\mathrm{d}x|\geq|\mathrm{d}U/\mathrm{d}x|\). Concretely, we define the harmonic and quartic bounding potentials as \[\widehat{U}_{2}(n)=\begin{cases}0&\text{if }n=0\\ \widehat{U}_{2}(|n|-1)+|n|&\text{if }n\in\mathbb{Z}\backslash\{0\}\end{cases},\] \[\widehat{U}_{4}(n)=\begin{cases}0&\text{if }n=0\\ \widehat{U}_{4}(|n|-1)+|n^{3}|&\text{if }n\in\mathbb{Z}\backslash\{0\}.\end{cases}\] These definitions are extended to non-integer arguments \(x\) through linear interpolation. The anharmonic bounding potential is then defined as \(\widehat{U}_{24}(x)=\widehat{U}_{2}(x)+\widehat{U}_{4}(x)\) (see Fig. 11).

A bounding potential can simplify the decision to accept a move as, evidently, a pebble \(0<\Upsilon<1\) that falls below \(\exp\bigl{(}-\beta\Delta\widehat{U}\bigr{)}\) also falls below \(\exp\bigl{(}-\beta\Delta U\bigr{)}\) (see Fig. 12a). In the remaining algorithms of this paper, we rather use a two-pebble strategy for the decision to accept or reject a move. A first pebble \(0<\Upsilon_{1}<1\) then decides whether a move is accepted with respect to the bounding potential. Otherwise (if \(\Upsilon_{1}\) rejects the move), we use a second pebble \(\Upsilon_{2}\) to decide whether the first-pebble rejection with respect to \(\widehat{U}\) stands with respect to \(U\) (see Fig. 12b). A rescaling, with \(0<\Upsilon_{2}<1\), allows us to definitely reject the move if \[\Upsilon_{2}<\frac{1-\mathrm{e}^{-\beta\Delta U}}{1-\mathrm{e}^{-\beta\Delta\widehat{U}}}. \tag{39}\]
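These piecewise linear definitions transcribe directly into Python (names ours); the assertions spot-check the derivative-domination requirement behind Eq. (38):

```
def u2_hat(x):
    # Harmonic bounding potential: 1 + 2 + ... + n = n(n+1)/2 at integers,
    # linearly interpolated with slope n+1 on the sector [n, n+1).
    n = int(abs(x))
    return n * (n + 1) / 2 + (abs(x) - n) * (n + 1)

def u4_hat(x):
    # Quartic bounding potential: 1^3 + ... + n^3 = (n(n+1)/2)^2 at integers,
    # linearly interpolated with slope (n+1)^3 on the sector [n, n+1).
    n = int(abs(x))
    return (n * (n + 1) / 2) ** 2 + (abs(x) - n) * (n + 1) ** 3

def u24_hat(x):
    return u2_hat(x) + u4_hat(x)

for x in [0.3, 1.7, 2.2]:   # the bounding potentials dominate pointwise
    assert u2_hat(x) >= x**2 / 2 and u4_hat(x) >= x**4 / 4
```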
The two-pebble bounding-potential algorithm is implemented in Alg. 9 (bounded-lifted) for the anharmonic oscillator. It again samples the Boltzmann distribution \(\pi_{24}\).

```
procedure bounded-lifted
input {x, σ}                          (lifted sample at time t)
  Δ ← ran(0, δ)                       (δ > 0)
  x′ ← x + σΔ;  Υ₁ ← ran(0,1)
  if Υ₁ < min(1, e^(−βΔÛ₂₄)):  x ← x′
  else:
    Υ₂ ← ran(0,1)
    if Υ₂ > (1 − e^(−βΔU₂₄)) / (1 − e^(−βΔÛ₂₄)):  x ← x′
    else:  σ ← −σ
output {x, σ}                         (lifted sample at time t+1)
```

**Algorithm 9** bounded-lifted. Discrete-time bounded-lifted Metropolis algorithm using two-pebble decisions. The second pebble is used, and the true potential \(U_{24}\) is evaluated, only after a first-pebble rejection with respect to the bounding potential \(\widehat{U}_{24}\).

Figure 12: Single-pebble and two-pebble decisions in the Metropolis algorithm. (a): A single pebble \(\Upsilon\) illustrating that acceptance with respect to the bounding potential implies acceptance with respect to \(U\). (b): A first pebble \(\Upsilon_{1}\) takes a decision with respect to the bounding potential. In case of rejection, a second pebble \(\Upsilon_{2}\) definitely decides on the move.

### Continuous-time thinning

The bounded-lifted Metropolis algorithm, Alg. 9 (bounded-lifted), generalizes to continuous time. In the anharmonic oscillator, we first consider \(\sigma=+1\) and positive \(x\) between \(n\) and \(n+1\), where the decision of Eq. (39), for the second pebble, turns into \[\Upsilon_{2}<\frac{1-\mathrm{e}^{-\beta\Delta U_{24}}}{1-\mathrm{e}^{-\beta\Delta\widehat{U}_{24}}}\rightarrow\Upsilon_{2}<\frac{\mathrm{d}U_{24}/\mathrm{d}x}{\mathrm{d}\widehat{U}_{24}/\mathrm{d}x}. \tag{40}\] The piecewise linear anharmonic bounding potential \(\widehat{U}_{24}\) simplifies the event-driven formulation. Rather than walking up the anharmonic potential until the change of potential satisfies \(\Delta U_{24}=-\beta^{-1}\log\mathtt{ran}(0,1)\) (see Eq. (33) and Fig. 9), we now run up a bounding potential of constant slope \(\hat{q}\) with \[\hat{q}=\frac{\mathrm{d}}{\mathrm{d}x}\widehat{U}_{24}(x)\Big{|}_{x\in S_{n}}=n+1+(n+1)^{3}, \tag{41}\] where \(S_{n}=[n,n+1)\) and \(n\in\mathbb{N}\). The change in potential \(\Delta\widehat{U}_{24}(x)=-\beta^{-1}\log\mathtt{ran}(0,1)\) then translates into the advance of the position as \[x_{\mathrm{ev}}=x_{0}-(\beta\hat{q})^{-1}\log\mathtt{ran}(0,1)\,. \tag{42}\] The event rate \(\beta\hat{q}\) is constant in the sector \(S_{n}\), but if \(x_{\mathrm{ev}}\) falls outside of \(S_{n}\), it is invalid. In this case, a "boundary event" is triggered, and the particle is placed at the right boundary of \(S_{n}\), without changing the direction \(\sigma\). Otherwise (if \(x_{\mathrm{ev}}\in S_{n}\)), the direction \(\sigma\) is reversed if the condition on the pebble \(\Upsilon_{2}\) in Eq. (40) is satisfied (see Fig. 13). Our description of the continuous-time bounded-lifted Metropolis algorithm was for the case \(\sigma=1\), that is, for a particle that climbs up the \(x>0\) branch of the potential. The general case is implemented in Alg. 10 (bounded-zig-zag), and it again samples the Boltzmann distribution \(\pi_{24}\).

### Thinning with consensus

Algorithm 10 (bounded-zig-zag) avoids the inversion in Eq. (34) of the potential \(U_{24}\), and only evaluates the derivative \(\mathrm{d}U_{24}/\mathrm{d}x\) at \(x=x_{\mathrm{ev}}\).
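A hedged Python sketch of one event of Alg. 10, as we read Sec. V.2; it returns only event positions and directions, and omits the time bookkeeping of Alg. 11:

```
import math, random

def bounded_zig_zag_event(x, sigma, beta=1.0):
    # One event of Alg. 10: thinning with the constant bounding slope
    # q = (n+1) + (n+1)^3 of Eq. (41), valid in the sector S_n = [n, n+1).
    x0 = 0.0 if sigma * x < 0 else x       # events occur on the uphill leg only
    n = int(abs(x0))
    q_hat = (n + 1) + (n + 1) ** 3
    x_ev = x0 + sigma * (-math.log(random.random()) / (beta * q_hat))  # Eq. (42)
    if abs(x_ev) > n + 1:                  # candidate fell outside of S_n:
        return sigma * (n + 1), sigma      # boundary event, direction unchanged
    if random.random() < (abs(x_ev) + abs(x_ev) ** 3) / q_hat:         # Eq. (40)
        return x_ev, -sigma                # confirmed: the velocity is reversed
    return x_ev, sigma                     # thinned away: keep climbing
```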
At the end of our journey through advanced Markov-chain Monte Carlo sampling, we combine the consensus principle underlying factorization with that of thinned, lifted Metropolis chains, and sample \(\pi_{24}=\exp\left(-\beta U_{24}\right)\) without ever evaluating the potential \(U_{24}\) or its derivative. The use of bounding potentials generalizes to applications in particle systems with long-range interactions. In the anharmonic oscillator, we illustrate the basic idea [14] with the harmonic and quartic factor potentials \(U_{2}\) and \(U_{4}\) and their bounding potentials \(\widehat{U}_{2}\) and \(\widehat{U}_{4}\). With factorization, two candidate events \(x_{\mathrm{ev}}^{(2)}\) and \(x_{\mathrm{ev}}^{(4)}\) can be sampled by means of Eq. (42), with bounding event rates \(\beta(n+1)\) and \(\beta(n+1)^{3}\), respectively. When both events fall outside the sector \(S_{n}\) where the bounding rates are valid, a boundary event is triggered. Otherwise, the earliest candidate event \(x_{\mathrm{ev}}\in S_{n}\) (either \(x_{\mathrm{ev}}^{(2)}\) or \(x_{\mathrm{ev}}^{(4)}\)) is confirmed with one of the probabilities \[\left.\frac{\mathrm{d}U_{2}/\mathrm{d}x}{\mathrm{d}\widehat{U}_{2}/\mathrm{d}x}\right|_{x_{\mathrm{ev}}=x_{\mathrm{ev}}^{(2)}}\quad\text{or}\quad\left.\frac{\mathrm{d}U_{4}/\mathrm{d}x}{\mathrm{d}\widehat{U}_{4}/\mathrm{d}x}\right|_{x_{\mathrm{ev}}=x_{\mathrm{ev}}^{(4)}},\] depending on which factor triggered it. Confirmed events, rejected candidate events, and boundary events are different ways to reach a statistically correct decision. In molecular dynamics, in contrast, only a single Newtonian trajectory exists.

```
procedure bounded-factor-zig-zag
input {x, σ}, t                                  (lifted sample at time t)
  if σx < 0:  x₀ ← 0;  else:  x₀ ← x             (starting point)
  n ← int(|x₀|);  q̂⁽²⁾ ← n+1;  q̂⁽⁴⁾ ← (n+1)³;  σ̃ ← σ
  x_ev⁽²⁾ ← x₀ + σ[−(βq̂⁽²⁾)⁻¹ log ran(0,1)]      (see Eq. (42))
  x_ev⁽⁴⁾ ← x₀ + σ[−(βq̂⁽⁴⁾)⁻¹ log ran(0,1)]      (see Eq. (42))
  if min(|x_ev⁽²⁾|, |x_ev⁽⁴⁾|) > n+1:  x_ev ← σ(n+1)
  else if |x_ev⁽²⁾| < |x_ev⁽⁴⁾|:
    x_ev ← x_ev⁽²⁾;  if ran(0,1) < |x_ev|/q̂⁽²⁾:  σ̃ ← −σ
  else:
    x_ev ← x_ev⁽⁴⁾;  if ran(0,1) < |x_ev|³/q̂⁽⁴⁾:  σ̃ ← −σ
  t_ev ← t + |x_ev − x|
  for t* = int(t)+1, …, int(t_ev):
    print x + σ(t* − t)                          (equal-time samples)
  x ← x_ev;  σ ← σ̃;  t ← t_ev                    ("zig-zag")
output {x, σ}, t
```

**Algorithm 11** bounded-factor-zig-zag. Factorized version of Alg. 10, with one candidate event for each factor (see patch). For each event, only one factor derivative is evaluated.

Algorithm 11 (bounded-factor-zig-zag) samples as many candidate events as there are factors (in our case, \(x_{\mathrm{ev}}^{(2)}\) and \(x_{\mathrm{ev}}^{(4)}\) for the harmonic and quartic factors), thus adopting a strategy that runs into trouble when there are too many factors. A patch of Alg. 11 illustrates, in a nutshell, how factors can be bundled in the continuous-time setting, where the total event rate is the sum of the individual factor rates (see Table 2).
In the anharmonic oscillator, the total bounding event rate is the sum of the harmonic and the quartic bounding rates, giving us the next event with a single random number. It then remains to decide whether this event is a harmonic-bounding or a quartic-bounding event, as implemented in Alg. 12 (bounded-factor-zig-zag(patch)). Even for a large number of factors, we can take this decision in a few steps, using the famous Walker algorithm [15]. It is this very program that is used in state-of-the-art programs to handle millions of factors in constant time [14], as we will further discuss in the companion paper [2].

```
procedure bounded-factor-zig-zag(patch)
input {x, σ}, t                                  (lifted sample at time t)
  if σx < 0:  x₀ ← 0;  else:  x₀ ← x             (starting point)
  n ← int(|x₀|)
  q̂⁽²⁾ ← n+1;  q̂⁽⁴⁾ ← (n+1)³;  q̂ ← q̂⁽²⁾ + q̂⁽⁴⁾;  σ̃ ← σ
  x_ev ← x₀ + σ[−(βq̂)⁻¹ log ran(0,1)]            (see Eq. (42))
  if |x_ev| > n+1:  x_ev ← σ(n+1)
  else if ran(0, q̂) < q̂⁽²⁾:
    if ran(0,1) < |x_ev|/q̂⁽²⁾:  σ̃ ← −σ
  else:
    if ran(0,1) < |x_ev|³/q̂⁽⁴⁾:  σ̃ ← −σ
  t_ev ← t + |x_ev − x|
  for t* = int(t)+1, …, int(t_ev):
    print x + σ(t* − t)                          (equal-time samples)
  x ← x_ev;  σ ← σ̃;  t ← t_ev                    ("zig-zag")
output {x, σ}, t
```

**Algorithm 12** bounded-factor-zig-zag(patch). Patch of Alg. 11, illustrating the bundling of two factors into a single candidate event.

## VI Conclusion

In this paper, we have introduced a number of modern developments in Monte Carlo sampling that go much beyond direct sampling and the Metropolis algorithm. New Monte Carlo algorithms build on notions such as factorization, non-reversibility and thinning. They increasingly find applications in physics and other sciences. The severely stripped-down one-dimensional anharmonic oscillator has hopefully allowed us to lay bare the foundations of these non-trivial theoretical developments. In a first step, we have concentrated on the correctness of the sampling algorithms. Questions of efficiency will be the subject of the companion paper [2].

###### Acknowledgements.

We thank K. J. Wiese for helpful discussions. We thank the mathematical research institute MATRIX in Australia where part of this research was performed.

\begin{table} \begin{tabular}{|c|c|c|} \hline Harmonic \(\backslash\) Quartic & Accept \((1-\beta\hat{q}^{(4)}\mathrm{d}t)\) & Reject \((\beta\hat{q}^{(4)}\mathrm{d}t)\) \\ \hline Accept \((1-\beta\hat{q}^{(2)}\mathrm{d}t)\) & \(1-\beta(\hat{q}^{(2)}+\hat{q}^{(4)})\mathrm{d}t\) & \(\beta\hat{q}^{(4)}\mathrm{d}t\) \\ \hline Reject \((\beta\hat{q}^{(2)}\mathrm{d}t)\) & \(\beta\hat{q}^{(2)}\mathrm{d}t\) & \(0\) \\ \hline \end{tabular} \end{table} Table 2: Consensus probabilities of Table 1 for Alg. 11 (bounded-factor-zig-zag) and its patch. The total event rate (the total rate of rejection by consensus) is the sum of the factor event rates, as terms of order \((\mathrm{d}t)^{2}\) drop out.
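Algorithm 12 chooses between the harmonic and quartic factors with probabilities \(\hat{q}^{(2)}/\hat{q}\) and \(\hat{q}^{(4)}/\hat{q}\); with many factors, Walker's alias method [15] takes this decision in constant time after linear setup. A generic sketch, assuming only the Python standard library (names ours):

```
import random

def walker_tables(weights):
    # O(n) setup of Walker's alias method: two tables for O(1) sampling.
    n, total = len(weights), sum(weights)
    prob = [w * n / total for w in weights]
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    for i in small + large:
        prob[i] = 1.0
    return prob, alias

def walker_sample(prob, alias):
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

# Harmonic vs quartic factor in the sector n = 2: rates 3 and 27.
prob, alias = walker_tables([3.0, 27.0])
print(walker_sample(prob, alias))   # returns 0 with probability 0.1, else 1
```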
## Appendix A Mathematical details

In this appendix, we present some mathematical details that, for the sake of conciseness, were omitted in the main text. As stated in Eq. (6), the period of the isolated anharmonic oscillator at energy \(E\) is \[\tau(E)=4\sqrt{\frac{2}{1+\sqrt{1+4E}}}\,K\left(\frac{1-\sqrt{1+4E}}{1+\sqrt{1+4E}}\right), \tag{A1}\] where \(K\) is the complete elliptic integral of the first kind. This non-trivial integral follows from the theory of elliptic functions (see e.g. [16, Ch. 19] for a discussion of the subject). It can be obtained indirectly using the Integrate function of the Mathematica software, as illustrated in a Mathematica notebook file made available in the software package (see App. B).

For \(E\to 0\), the amplitude \(x_{\rm max}\) of the oscillation is small. Consequently, the anharmonic potential \(U_{24}(x)\) of Eq. (1) can be safely replaced with the harmonic one in this regime: \[U_{24}(x)\sim U_{2}(x)=x^{2}/2\quad\text{for }x_{\rm max}\to 0. \tag{A2}\] Indeed, expanding \(\tau(E)\) in Eq. (A1) about \(E=0\), we obtain \[\tau(E)=2\pi-\frac{3\pi}{2}E+\mathcal{O}(E^{2})\] (see App. B for a Mathematica notebook file using the Series function). For small \(E\), the period of the anharmonic oscillator coincides with that of the harmonic one, \(\tau=2\pi\), since the quartic term in the potential is negligible for \(|x|\ll 1\). On the other hand, for large \(E\), the quartic term dominates: \[U_{24}(x)\sim U_{4}(x)=x^{4}/4\quad\text{for }x_{\rm max}\gg 0.\] In this case, expanding \(\tau\) for large \(E\), we have \[\tau(E)=\frac{\sqrt{\pi}\,\Gamma(1/4)}{\Gamma(3/4)}E^{-1/4}+\mathcal{O}(E^{-3/4}),\] where \(\Gamma\) denotes the Euler gamma function (see again App. B for the corresponding Mathematica notebook file). The dominant term of the above expression coincides with the period of the quartic oscillator, computed using the equivalent of Eq. (6), with amplitude \(x_{\rm max}=(4E)^{1/4}\). Finally, the partition function \(Z(\beta)\) of the harmonic oscillator in Eq. (13) can be easily computed by means of the Mathematica Integrate function.

## Appendix B Computer programs, Mathematica notebook files

The present paper is accompanied by the MCMCNutshell software package, which is published as an open-source project under the GNU GPLv3 license. MCMCNutshell is available on GitHub as part of the JeLLyFysh organization [17]. The package contains Python implementations for \(\beta=1\) of the algorithms that were discussed here and that were used to produce the results of Table 3. It also contains the Mathematica notebook files discussed in App. A.

## Appendix C Numerical tests

Except for Alg. 0 (isolated-dynamics), the eleven Monte Carlo algorithms and one molecular-dynamics algorithm all sample the Boltzmann distribution \(\pi_{24}\) of Eq. (2). To check the correctness of our implementations, we fixed an arbitrary non-zero value of \(\overline{x}=0.63\) for \(\beta=1\), computed for each algorithm the empirical probability with which the samples \(x\) satisfy \(x<\overline{x}\), and compared it with the exact result: \[\mathbb{P}(x<0.63)=Z^{-1}\int_{-\infty}^{0.63}\pi_{24}(x^{\prime})\mathrm{d}x^{\prime}=0.8030245. \tag{C1}\] Single-standard-deviation error bars were obtained from the bunching method [1, Sec. 1.3.5], except for Alg. 2 (direct-sampling), where we performed a standard Gaussian analysis. For all twelve algorithms, results are consistent with Eq. (C1) within three standard deviations (see Table 3).
\begin{table} \begin{tabular}{c|c} Algorithm & \(\mathbb{P}(x<0.63)\) \\ \hline 1 thermostat-dynamics & \(0.8038\pm 0.0025\) \\ 2 direct-sampling & \(0.8029\pm 0.0001\) \\ 3 metropolis & \(0.8029\pm 0.0026\) \\ 4 factor-metropolis & \(0.8004\pm 0.0035\) \\ 5 factor-metropolis(patch) & \(0.8029\pm 0.0015\) \\ 6 lifted-metropolis & \(0.8033\pm 0.0003\) \\ 7 zig-zag & \(0.80292\pm 0.00009\) \\ 8 factor-zig-zag & \(0.8030\pm 0.0001\) \\ 9 bounded-lifted & \(0.8036\pm 0.0004\) \\ 10 bounded-zig-zag & \(0.80297\pm 0.00009\) \\ 11 bounded-factor-zig-zag(patch) & \(0.80297\pm 0.00007\) \\ \end{tabular} \end{table} Table 3: Estimated probability \(\mathbb{P}(x<0.63)\) for the anharmonic oscillator computed by the algorithms discussed in this paper (single-\(\sigma\) error bars).
2309.04659
Progressive Feature Adjustment for Semi-supervised Learning from Pretrained Models
As an effective way to alleviate the burden of data annotation, semi-supervised learning (SSL) provides an attractive solution due to its ability to leverage both labeled and unlabeled data to build a predictive model. While significant progress has been made recently, SSL algorithms are often evaluated and developed under the assumption that the network is randomly initialized. This is in sharp contrast to most vision recognition systems that are built by fine-tuning a pretrained network for better performance. While the marriage of SSL and a pretrained model seems to be straightforward, recent literature suggests that naively applying state-of-the-art SSL with a pretrained model fails to unleash the full potential of training data. In this paper, we postulate that the underlying reason is that the pretrained feature representation could bring a bias inherited from the source data, and the bias tends to be magnified through the self-training process in a typical SSL algorithm. To overcome this issue, we propose to use pseudo-labels from the unlabeled data to update the feature extractor, which is less sensitive to incorrect labels, and to only allow the classifier to be trained from the labeled data. More specifically, we progressively adjust the feature extractor to ensure that its induced feature distribution maintains good class separability even under strong input perturbation. Through extensive experimental studies, we show that the proposed approach achieves superior performance over existing solutions.
Hai-Ming Xu, Lingqiao Liu, Hao Chen, Ehsan Abbasnejad, Rafael Felix
2023-09-09T01:57:14Z
http://arxiv.org/abs/2309.04659v1
# Progressive Feature Adjustment for Semi-supervised Learning from Pretrained Models

###### Abstract

As an effective way to alleviate the burden of data annotation, semi-supervised learning (SSL) provides an attractive solution due to its ability to leverage both labeled and unlabeled data to build a predictive model. While significant progress has been made recently, SSL algorithms are often evaluated and developed under the assumption that the network is randomly initialized. This is in sharp contrast to most vision recognition systems that are built by fine-tuning a pretrained network for better performance. While the marriage of SSL and a pretrained model seems to be straightforward, recent literature suggests that naively applying state-of-the-art SSL with a pretrained model fails to unleash the full potential of training data. In this paper, we postulate that the underlying reason is that the pretrained feature representation could bring a bias inherited from the source data, and the bias tends to be magnified through the self-training process in a typical SSL algorithm. To overcome this issue, we propose to use pseudo-labels from the unlabeled data to update the feature extractor, which is less sensitive to incorrect labels, and to only allow the classifier to be trained from the labeled data. More specifically, we progressively adjust the feature extractor to ensure that its induced feature distribution maintains good class separability even under strong input perturbation. Through extensive experimental studies, we show that the proposed approach achieves superior performance over existing solutions.

## 1 Introduction

Semi-supervised learning (SSL) is considered one of the most practical learning paradigms, as it can leverage both labeled and unlabeled samples to build a prediction model [4]. With the rapid development of deep neural networks (DNNs), deep SSL methods [18, 28, 23, 30, 3, 35, 26] have been studied extensively. Most of those methods, however, are evaluated and developed from randomly initialized parameters. In recent years, the release and re-use of pretrained DNNs to alleviate training costs have become common practice in computer vision research and applications [42]. It seems straightforward to apply an existing SSL method to a pretrained model. However, recent literature [45, 32] suggests that such a naive solution fails to unleash the full potential of training data, and that there seems to be much room to improve the performance of SSL when a pretrained model is used.

In this study, we postulate that the key issue preventing existing SSL solutions from attaining their full potential with pretrained models is the bias of the pretrained feature extractor: trained from the source-domain data, e.g., ImageNet, the feature extractor may not be optimal for the target problem, e.g., fine-grained visual recognition. Such a bias could be a much more severe issue for SSL than for supervised learning. This is because the self-training (or pseudo-labeling) procedure commonly used in SSL tends to magnify the bias.
For example, at the beginning of training, a biased feature extractor and few labeled data could make the classifier vulnerable to spuriously correlated patterns, resulting in wrong predictions on the unlabeled data. The wrong predictions, however, are fed back to the classifier as pseudo labels and reinforce the bias. While SSL from a randomly initialized network also suffers from incorrect pseudo-labels, its feature extractor does not carry the bias inherited from the source domain and can thus be more easily adjusted towards the target problem through standard SSL methods. In reality, the bias of the feature extractor leads to the phenomenon that the SSL process from a pretrained model is less likely to correct its wrong predictions at the early training stage, as shown in Figure 1.

Figure 1: Visualization of the correct-keeping and error-correcting ability of FixMatch and our approach with a pretrained model. The y-axis of the left figure denotes the percentage of the initially correctly labeled data keeping their correct class at a given iteration. The y-axis of the right figure denotes the percentage of the initially incorrectly labeled data being predicted to the correct class at a given iteration. The experiment is conducted on the _FGVC Aircraft_ dataset with 15% labels. As seen, FixMatch with a pretrained model shows weaker error-correcting ability than our approach. Please refer to Section 4.1 for more details.

In this work, we find that a surprisingly simple solution can largely resolve such an issue: we do not use unlabeled data and the corresponding pseudo-labels to update the classifier, but the feature extractor. Only labeled data are used to train the classifier. The rationale for this strategy is that the feature extractor is more tolerant of label noise, since it does not directly produce the final prediction, and with a good feature representation, it is possible to achieve good performance with only a few labeled data. More specifically, we require that the feature representation become closer to its class center while being pushed away from other class centers. Inspired by FixMatch [26], we also expect the above property to hold for strongly-augmented data. The proposed method modifies FixMatch by changing only a few lines of code, but demonstrates a dramatic performance boost and even outperforms other carefully designed contrastive-learning-based approaches by a large margin.

In summary, the main contributions of this paper are as follows:

* We provide an insight into the issue that standard SSL methods perform unsatisfactorily with pretrained models, and provide empirical evidence for a better understanding.
* We discover a simple solution that can significantly improve SSL from a pretrained model. Note that we do not claim the operation of aligning features to class-wise embeddings as our novelty, but rather the discovery that such a simple strategy is a good solution to the studied issue.
* We establish a strong baseline for SSL with pretrained models, providing practitioners with a simple-to-use solution for practical semi-supervised classification.

## 2 Related Work

Semi-supervised learning (SSL) has experienced rapid progress with the development of deep neural networks (DNNs) [3, 2, 26, 44, 33]. The current state-of-the-art SSL approaches [18, 28, 23, 3, 26, 20, 37] usually depend on consistency regularization [3] and pseudo-labeling [20]. A common framework employs two processes: one process generates a prediction target, usually in the form of pseudo-labels [26, 20], but possibly also logits [28] or other forms of supervision [3, 38, 36]. The generated pseudo supervision is then used to update the network with a different input, e.g., a different augmentation of the original input image [26, 7], a mixed image [3], or a network with different parameters [28].
However, existing SSL approaches are primarily optimized from randomly initialized weights, and recent works [45, 32] show that the impressive performance improvements of these standard SSL methods (including the state-of-the-art method FixMatch [26]) disappear when models are trained from a pretrained model. Despite the initial findings reported in [45], a clear picture of why existing SSL methods perform unsatisfactorily when pretrained models are used is still lacking. In this paper, we investigate the optimization procedure of SSL from pretrained models and provide empirical evidence to reveal the barriers that limit performance. Based on the analysis, we further propose a feature adjustment module to progressively adjust the feature extractor, and achieve great performance improvements on multiple vision benchmarks.

## 3 Preliminary of Semi-supervised Learning

In semi-supervised learning, two sets of samples are normally provided: \(\{x_{l}^{1},x_{l}^{2},\cdots,x_{l}^{N_{l}}\}\in\mathcal{X}_{\mathcal{L}}\), whose annotations \(\{y_{l}^{1},y_{l}^{2},\cdots,y_{l}^{N_{l}}\}\in\mathcal{Y}_{\mathcal{L}}\) are available, and \(\big{\{}x_{u}^{1},x_{u}^{2},\cdots,x_{u}^{N_{u}}\big{\}}\in\mathcal{X}_{\mathcal{U}}\), where \(N_{u}\gg N_{l}\) but without access to label information. Although there are many existing SSL methods in the literature, this work mainly takes one of the state-of-the-art approaches, FixMatch [26], as an example. This is because FixMatch successfully integrates two popular techniques in SSL, i.e., consistency regularization and pseudo-labeling, by decoupling the artificial label generation and the model update with weak and strong data augmentations. Specifically, for samples from \(\mathcal{X}_{\mathcal{L}}\), the model \(\mathcal{M}\) is trained via a standard classification loss. For each unlabeled data point \(x_{u}^{i}\in\mathcal{X}_{\mathcal{U}}\), a "hard" pseudo label is first produced on the weakly augmented image, \[p(y|x_{u}^{i})=\mathcal{M}\left(A_{0}(x_{u}^{i})\right);\;\tilde{y}=\underset{c}{\text{argmax}}\,p(y=c|x_{u}^{i}), \tag{1}\] where \(A_{0}(\cdot)\) denotes weak data augmentation. Then, the model is optimized to have a consistent prediction on the strongly augmented image, \[\mathcal{L}_{u}=\frac{1}{|\mathcal{B}_{u}|}\sum_{i\in\mathcal{B}_{u}}\mathbb{1}\big{(}p(\tilde{y}|x_{u}^{i})\geq\tau\big{)}\text{CE}\Big{(}\mathcal{M}\big{(}A_{1}(x_{u}^{i})\big{)},\,\tilde{y}\Big{)}, \tag{2}\] where \(p(\tilde{y}|x_{u}^{i})\) denotes the \(\tilde{y}\)-th output probability on the weakly augmented \(x_{u}^{i}\). Here, \(\mathbb{1}(\cdot)\) is the indicator function that selects samples whose prediction confidence is greater than a predefined confidence threshold \(\tau\). Additionally, \(\text{CE}(\cdot,\cdot)\) is the standard cross-entropy loss, \(A_{1}(\cdot)\) is the strong augmentation, and \(\mathcal{B}_{u}\) denotes the mini-batch of unlabeled samples, of size \(|\mathcal{B}_{u}|\).
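In code, Eqs. (1) and (2) amount to only a few lines; a hedged PyTorch sketch, where `model`, `weak_aug`, `strong_aug` and `tau` are placeholders of our own choosing rather than names from FixMatch's released implementation:

```
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_u, weak_aug, strong_aug, tau=0.95):
    with torch.no_grad():                              # Eq. (1): pseudo-label
        probs = model(weak_aug(x_u)).softmax(dim=-1)   # on the weak view
        conf, pseudo = probs.max(dim=-1)
    mask = (conf >= tau).float()                       # indicator 1(p >= tau)
    logits_strong = model(strong_aug(x_u))             # Eq. (2): consistency
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (mask * loss).mean()
```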
## 4 Semi-supervised Learning from Pretrained Models

In this section, we first investigate the bias inherent in pretrained models and how it keeps existing SSL methods from achieving satisfactory performance. Then, we propose a feature adjustment module for SSL to resist the bias.

### Pretrained Models as a Double-edged Sword for SSL

With the growth of data and the enrichment of computing resources, developing powerful pretrained models has attracted increasing attention from both academia and industry [10, 13, 5, 24, 40, 43]. On the one hand, pretraining on large-scale images gives models a general feature-extraction ability for various kinds of downstream applications [46, 22, 39]. On the other hand, the generated feature representations will inevitably carry a bias inherited from the source data, and thus it is usually necessary to fine-tune the model on target datasets for effective usage of pretrained models [34, 12, 15]. For the SSL scenario, it is natural to expect that state-of-the-art performance can be achieved by applying the state-of-the-art SSL method to a pretrained model, i.e., semi-supervised fine-tuning. However, evidence from recent literature [45, 32] shows that this solution is far from the best, and there is much room to improve SSL when a pretrained model is used. This motivates us to revisit SSL and understand what hinders it from achieving its full potential.

We postulate that the major issue is that the use of a pretrained model is a double-edged sword for SSL: on the one hand, it brings the prior knowledge learned from the source data and boosts performance. On the other hand, it also introduces a strong prediction bias inherited from the source data. After all, the features learned from the source-domain data may not be optimal for the target task. In effect, the prediction bias will encourage the classifier to use certain features or visual patterns for prediction, especially when the labeled data is limited in quantity and diversity. However, not all those visual patterns are true causal factors determining the class, and spurious correlations might be mistakenly identified during training; e.g., the prediction could rely on clues from the background [25]. Worse, since most state-of-the-art SSL methods [26, 3, 20, 2, 44] are built upon the self-training, a.k.a. pseudo-labeling, framework, which generates pseudo labels from predictions on the unlabeled data, such a process tends to further magnify the prediction bias.

For example, we can consider the scenario of applying FixMatch, one of the most commonly used SSL methods, with a pretrained model. At the beginning of training, due to the limited amount of labeled (and pseudo-labeled) samples and the biased feature representation, the learned classifier tends to be affected by spurious correlations between features and class labels. If the classifier then generates incorrect pseudo labels from unlabeled data, the bias will reinforce itself through further training with such pseudo labels. Consequently, this makes SSL less able to correct wrong predictions made during the training process.

In order to verify our assumption, we conduct an empirical analysis of FixMatch with pretrained models. Specifically, we introduce two measurements, called the correct-keeping rate (CKR) and the error-correcting rate (ECR). The former is defined as the percentage of the initially\({}^{1}\) correctly labeled data keeping their correct class at a given iteration, while the latter is defined as the percentage of the initially incorrectly labeled data being predicted to the correct class at a given iteration. The statistical results are shown in Figure 1. We can observe that FixMatch (with a pretrained model) has a decent CKR when training converges. However, the ECR of FixMatch (with a pretrained model) quickly reaches a plateau and remains poor. This issue becomes more evident by comparing its ECR with that of the proposed method.
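Both metrics are straightforward to compute from a history of predictions on the unlabeled set; a hedged sketch that, for simplicity, takes "initially" to mean the first recorded checkpoint (the paper's footnote 1 uses a slightly different bookkeeping):

```
import numpy as np

def ckr_ecr(pred_history, labels):
    # pred_history: (T, N) array of predicted classes over T checkpoints;
    # labels: (N,) ground truth of the unlabeled samples.
    correct0 = pred_history[0] == labels
    ckr = np.array([(p[correct0] == labels[correct0]).mean()
                    for p in pred_history])
    ecr = np.array([(p[~correct0] == labels[~correct0]).mean()
                    for p in pred_history])
    return ckr, ecr
```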
### Progressive Feature Adjustment

The above analysis suggests that the standard SSL approach suffers more from a biased feature extractor, so a dedicated process may be needed to alleviate the impact of the bias. In this work, we propose to use only the labeled data to train the classifier, while pseudo-labels generated from the much larger amount of unlabeled samples are used only to update the feature extractor. In this way, unlabeled data influences the classifier indirectly, by producing better feature representations. As the classifier is always trained on noise-free labeled data, even if the feature representation is imperfect, the classifier can suppress the noisy dimensions and identify the discriminative patterns in the feature representation. The feature extractor can thus be more tolerant of the noise in pseudo-labels. A seeming drawback of this method is the lack of training examples for the classifier. However, since a pretrained feature extractor already provides a reasonable starting point and is further refined by the proposed progressive adjustment method, training the classifier on a limited number of samples can still achieve good performance. The overall architecture is shown in Figure 2. Specifically, given a batch of labeled samples \(\mathcal{B}_{l}\), both the feature extractor \(f\) and the linear classifier \(\mathbf{W}:=\{\mathbf{w}_{1},\mathbf{w}_{2},\cdots,\mathbf{w}_{c},\cdots\}\) are optimized together: \[p(y=c|x_{l}^{i})=\frac{\exp\bigl{(}\mathbf{w}_{c}^{T}f(x_{l}^{i})\bigr{)}}{\sum_{j}\exp\bigl{(}\mathbf{w}_{j}^{T}f(x_{l}^{i})\bigr{)}},\] \[\mathcal{L}_{l}=\frac{1}{|\mathcal{B}_{l}|}\sum_{i\in\mathcal{B}_{l}}\text{CE}\bigl{(}p(y|x_{l}^{i}),y_{l}^{i}\bigr{)} \tag{3}\] where \(\mathbf{w}_{c}\) denotes the classifier for class \(c\) and \(\text{CE}(\cdot,\cdot)\) is the standard cross-entropy loss. For a batch of unlabeled samples \(x_{u}\in\mathcal{B}_{u}\), we utilize the up-to-date classifier to generate the posterior probability estimate \(p(y|x_{u}^{i})\): \[p(y=c|x_{u}^{i})=\frac{\exp\Bigl{(}\mathbf{w}_{c}^{T}f\bigl{(}A_{0}(x_{u}^{i})\bigr{)}\Bigr{)}}{\sum_{j}\exp\Bigl{(}\mathbf{w}_{j}^{T}f\bigl{(}A_{0}(x_{u}^{i})\bigr{)}\Bigr{)}}, \tag{4}\] where \(f(A_{0}(x_{u}^{i}))\) denotes the feature extracted by first passing through the weak data augmentation module \(A_{0}\) and then the feature extractor \(f\) (same as in FixMatch). The class with the maximal posterior probability is the predicted class of the given unlabeled sample, that is, \(\tilde{y}^{i}=\operatorname{argmax}_{c}\,p(y=c|x_{u}^{i})\). If \(p\bigl{(}\tilde{y}^{i}|x_{u}^{i}\bigr{)}\geq\tau\), where \(\tau\) is a confidence threshold that can be a fixed scalar, e.g., 0.95 as in FixMatch [26], or a dynamically generated scalar as in FlexMatch [44], then \(\tilde{y}^{i}\) is used as a pseudo-label for the corresponding unlabeled sample. Instead of using the pseudo-labeled samples to train the feature extractor and classifier together, as in FixMatch, we propose to use them to adjust the feature extractor only.
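A minimal PyTorch sketch of this decoupling follows, assuming a generic feature extractor `feat` and linear classifier `clf` (both stand-ins): the classifier parameters are updated only by the labeled loss of Eq. 3, while the classifier is merely evaluated under `no_grad` to produce the pseudo-labels of Eq. 4 that later drive the feature-extractor adjustment.

```python
import torch
import torch.nn.functional as F

feat = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
clf = torch.nn.Linear(64, 10)                     # W = {w_1, ..., w_C}
opt = torch.optim.SGD(list(feat.parameters()) + list(clf.parameters()), lr=0.03)

x_l = torch.randn(4, 3, 32, 32)                   # labeled batch B_l
y_l = torch.randint(0, 10, (4,))
x_u_weak = torch.randn(16, 3, 32, 32)             # weakly augmented batch B_u

# Labeled branch (Eq. 3): gradients reach both f and W.
loss_l = F.cross_entropy(clf(feat(x_l)), y_l)

# Unlabeled branch (Eq. 4): the up-to-date classifier only *scores* the weak
# view; no gradient flows, so pseudo-labels cannot update W directly. The
# confident samples later feed the feature-adjustment loss of Eq. 5, which
# touches f alone.
with torch.no_grad():
    probs = F.softmax(clf(feat(x_u_weak)), dim=1)
    conf, pseudo = probs.max(dim=1)
mask = conf >= 0.95                               # fixed threshold tau

opt.zero_grad()
loss_l.backward()
opt.step()
print(pseudo[mask])                               # pseudo-labels passed downstream
```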
Without introducing an additional fully connected layer, we maintain a set of class-wise embeddings \(\{\mu_{c}\}_{c=1}^{C}\) and minimize the following loss: \[\hat{\mathcal{L}}_{f}=\frac{1}{|\mathcal{B}_{u}|}\sum_{i\in\mathcal{B}_{u}}\mathbb{1}\bigl{(}p(\tilde{y}^{i}|x_{u}^{i})\geq\tau\bigr{)}\mathcal{L}_{f}(x_{u}^{i},\tilde{y}^{i})+\frac{1}{|\mathcal{B}_{l}|}\sum_{i\in\mathcal{B}_{l}}\mathcal{L}_{f}(x_{l}^{i},y_{l}^{i}),\] \[\text{where}\quad\mathcal{L}_{f}(x,y)=-\log\frac{\exp\bigl{(}\cos\bigl{(}\mu_{y},f\bigl{(}A_{1}(x)\bigr{)}\bigr{)}/T\bigr{)}}{\sum_{j}\exp\bigl{(}\cos\bigl{(}\mu_{j},f\bigl{(}A_{1}(x)\bigr{)}\bigr{)}/T\bigr{)}}, \tag{5}\] where \(\cos(\cdot,\cdot)\) denotes the cosine similarity and \(T\) is a temperature hyperparameter; we empirically set \(T=0.1\) in our study. \(A_{1}\) denotes a different type of data augmentation from \(A_{0}\); we use RandAugment [8] followed by Cutout [11] as the strong data augmentation \(A_{1}\). \(\mu_{c}\) is the running class mean vector of the \(c\)-th class. In effect, the above loss function pulls features from the same class closer while pushing features from different classes apart. For labeled data, we use the ground-truth class label to assign samples to their corresponding class embeddings. For unlabeled data, we use pseudo-labels instead and only apply the loss to samples for which pseudo-labels can be generated. Also, motivated by FixMatch, we apply this loss on strongly augmented data to further avoid confirmation bias. Note that the above loss also implicitly encourages same-class features from the labeled and unlabeled data to move closer, relative to their distance to samples of other classes. It thus tends to make the classifier learned from the labeled data more generalizable to unlabeled data. To sum up, we train the classifier with labeled data only (Eq. 3) and train the feature extractor with both labeled and unlabeled data via the loss in Eq. 5. The overall loss function \(\mathcal{L}\) is the weighted sum of both: \(\mathcal{L}=\mathcal{L}_{l}+\lambda\cdot\hat{\mathcal{L}}_{f}\), where \(\lambda\) is a fixed weight hyperparameter.

Figure 2: Overview of our approach. The feature extractor is initialized with a pretrained model and the classifier is randomly initialized for the target dataset. In order to alleviate the bias presented in Section 4.1, we train the classifier only on the labeled samples and use the large amount of unlabeled samples to adjust the feature extractor alone, by pulling the intermediate feature representation toward its corresponding class embedding and pushing it away from the other class embeddings. The class-wise embeddings are progressively updated along with the model optimization, as presented in Eq. 6 and Eq. 7.

In order to adjust the feature extractor effectively, the oracle class-wise embeddings of the target dataset would be the ideal choice. However, this is unrealistic due to the lack of annotations for unlabeled samples and the inherent bias of the pretrained feature extractor. Thus, we propose to progressively update these class-wise embeddings from both labeled and unlabeled data.
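A minimal sketch of the feature-adjustment loss in Eq. 5, together with a batch-mean variant of the momentum updates detailed in Eq. 6 and Eq. 7 below; the feature dimension and the per-batch (rather than per-sample) mean update are simplifying assumptions of ours.

```python
import torch
import torch.nn.functional as F

def feature_adjust_loss(z_strong, labels, mu, T=0.1):
    """Eq. (5): cosine-similarity cross-entropy against class embeddings.
    z_strong -- features f(A1(x)) of (pseudo-)labeled samples, shape (B, d)
    mu       -- running class-mean embeddings, shape (C, d)"""
    logits = F.cosine_similarity(z_strong.unsqueeze(1), mu.unsqueeze(0), dim=2) / T
    return F.cross_entropy(logits, labels)        # pulls each z to its own mean

def update_class_means(mu, z_strong, labels, beta=0.999):
    """Batch-mean version of Eq. (6)/(7): momentum update of mu_c from
    samples assigned (by label or confident pseudo-label) to class c."""
    mu = mu.clone()
    for c in labels.unique():
        mu[c] = beta * mu[c] + (1 - beta) * z_strong[labels == c].mean(dim=0)
    return mu

C, d = 10, 64
mu = torch.randn(C, d)
z = torch.randn(8, d, requires_grad=True)         # stand-in for f(A1(x))
y = torch.randint(0, C, (8,))
loss = feature_adjust_loss(z, y, mu)
loss.backward()                                   # gradients reach f only, not mu
mu = update_class_means(mu, z.detach(), y)
```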
For labeled data, \(\mu_{c}\) is updated via \[\mu_{c}^{new}=\beta\mu_{c}^{old}+(1-\beta)f\big{(}A_{1}(x_{l}^{i})\big{)}\mathbb{1}(y_{l}^{i}=c), \tag{6}\] and for unlabeled data, \(\mu_{c}\) is updated via \[\mu_{c}^{new}=\beta\mu_{c}^{old}+(1-\beta)f\big{(}A_{1}(x_{u}^{i})\big{)}\mathbb{1}\Big{(}p\big{(}c|A_{0}(x_{u}^{i})\big{)}\geq\tau\Big{)}, \tag{7}\] where the indicator function \(\mathbb{1}(y_{l}^{i}=c)\) selects samples of the \(c\)-th class from the labeled data, and the indicator function \(\mathbb{1}\Big{(}p\big{(}c|A_{0}(x_{u}^{i})\big{)}\geq\tau\Big{)}\) selects unlabeled samples that are confidently classified into the \(c\)-th class by the classifier; this is identical to the criterion for generating pseudo-labels. \(\beta\) is a momentum term that controls how strongly the class-wise embedding retains its history.

## 5 Experimental results

In this section, we compare our approach with several SSL methods trained from pretrained models.

### Experimental details

We strictly follow [32] in designing our experiments, including the evaluation datasets and the pretrained model choices. We made this choice because Self-Tuning [32] has demonstrated state-of-the-art performance and its experimental evaluation is comprehensive and realistic. Some experimental details are as follows: **Datasets**: Following the protocol of [32], which addresses the same research problem as this paper, four vision benchmarks are evaluated, i.e., _FGVC Aircraft_ [21], _Stanford Cars_ [16], _CUB-200-2011_ [31], and _CIFAR-100_ [17]. Specifically, the first three are challenging fine-grained classification datasets, for which label proportions ranging from 15% to 50% are tested. The label partition of _CIFAR-100_ follows the standard SSL protocol: 4/25/100 labeled images per class. **Methods**: Nine popular deep SSL approaches are included for comparison, i.e., \(\Pi\)-model [19], Pseudo-Labeling [20], Mean Teacher [28], UDA [35], FixMatch [26], FlexMatch [44], SimCLRv2 [6], FixMatch+AKC+ARC [1] and Self-Tuning [32]. Meanwhile, the performance of Fine-Tuning on labeled data is also reported as a reference baseline. For our approach, we also consider a simple extension that incorporates it into the recently proposed consistency-based SSL method FlexMatch [44], which uses dynamically assigned thresholds within the FixMatch framework. We call this extension **Ours+**; it shows that our approach can still boost performance even with more advanced SSL algorithms. All experiments were implemented in PyTorch and run on a GeForce RTX 2080Ti GPU with 11GB of memory. **Pretrained models**: Following [32], three models pretrained on ImageNet [9] are chosen for evaluation, i.e., ResNet-50 [14] and EfficientNet [27], which are pretrained in a supervised way, and a ResNet-50 trained with the unsupervised learning method MoCo v2 [13].

### Train from Supervised Pretrained Models

In this section, we compare various SSL methods trained from supervised pretrained models. **Fine-grained classification benchmarks**: We use a ResNet-50 network, supervised-pretrained on ImageNet, to initialize all SSL models. The results are shown in Table 2. It is clear that our proposed method achieves a significant overall improvement over the competing SSL approaches. Specifically, compared with traditional SSL methods, our approach increases the test accuracy by a large margin on all label partitions of the three benchmarks.
Taking the state-of-the-art method FixMatch [26] as an example, the performance gain of our approach exceeds 10 percentage points on both _Stanford Cars_ and _CUB-200-2011_ with 15% labels. This is thanks to the proposed feature adjustment module, which greatly reduces the bias inherited from the pretrained model. Furthermore, our approach is also superior to the recently proposed Self-Tuning method [32], especially when labels are limited, e.g., when only 15% of the training samples are labeled. When the CPL module proposed in FlexMatch [44] is added to our approach, Ours+ leads to a further performance boost. **Standard SSL benchmarks**: We choose the _CIFAR-100_ dataset [17], one of the most challenging standard SSL benchmarks, to evaluate SSL methods trained from a pretrained model. Due to the lack of open-sourced pretrained checkpoints for the WideResNet-28-8 model [41], an EfficientNet-B2 model [27] supervised-pretrained on ImageNet is adopted in this work. Table 1 presents the error rates of each method. Our proposed method yields the best performance among the competing methods.

\begin{table} \begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Label Number} \\ \cline{2-4} & 400 & 2500 & 10000 \\ \hline Fine-Tuning (baseline) & 60.79 & 31.69 & 21.74 \\ Pseudo-Labeling [20] & 59.21 & – & – \\ MT [28] & 60.68 & – & – \\ UDA [35] & 58.32 & – & – \\ FixMatch\(\dagger\)[26] & 52.88 & 25.63 & 18.38 \\ FlexMatch\(\dagger\)[44] & 40.41 & 23.19 & 17.73 \\ Self-Tuning [32] & 47.17 & 24.16 & 17.57 \\ **Ours\(\dagger\)** & 45.48 & 23.12 & 16.89 \\ **Ours+\(\dagger\)** & **37.36** & **22.06** & **16.58** \\ \hline \hline \end{tabular} \end{table} Table 1: Error rates (%) on _CIFAR-100_ with EfficientNet-B2. \(\dagger\) means our implementation based on [32].

### Train from Unsupervised Pretrained Models

Having shown in Section 5.2 that various semi-supervised learning approaches benefit from supervised pretrained models, we continue to study the transfer effect of MoCo v2 [13], which is pretrained on ImageNet without using any annotations. As the test accuracies presented in Figure 4 show, our best-performing model, Ours+, surpasses the other semi-supervised learning baselines.

### Ablation Study

We are interested in ablating our approach from the following perspectives:

#### 5.4.1 The distribution of feature representation:

In our approach, the progressive feature adjustment module is introduced to update the feature extractor separately, alleviating the bias inherited from pretrained models.
Therefore, we are interested in the effect of using this module or not on the feature distribution.

\begin{table} \begin{tabular}{c|l|c c c} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{3}{c}{Label Proportion} \\ \cline{3-5} & & 15 \% & 30 \% & 50 \% \\ \hline \multirow{10}{*}{_FGVC Aircraft_} & Fine-Tuning (baseline) & 39.57\(\pm\)0.20 & 57.46\(\pm\)0.12 & 67.93\(\pm\)0.28 \\ & \(\Pi\)-model [19] & 37.32\(\pm\)0.25 & 58.49\(\pm\)0.26 & 65.63\(\pm\)0.36 \\ & Pseudo-Labeling [20] & 46.83\(\pm\)0.30 & 62.77\(\pm\)0.31 & 73.21\(\pm\)0.39 \\ & Mean Teacher [28] & 51.59\(\pm\)0.23 & 71.62\(\pm\)0.29 & 80.31\(\pm\)0.32 \\ & UDA\(\dagger\)[35] & 59.50\(\pm\)0.36 & 74.08\(\pm\)0.41 & 81.10\(\pm\)0.42 \\ & FixMatch\(\dagger\)[26] & 60.19\(\pm\)0.43 & 75.28\(\pm\)0.39 & 81.19\(\pm\)0.41 \\ & FlexMatch\(\dagger\)[44] & 63.21\(\pm\)0.15 & 77.08\(\pm\)0.34 & 82.56\(\pm\)0.22 \\ & SimCLRv2 [6] & 40.78\(\pm\)0.21 & 59.03\(\pm\)0.29 & 68.54\(\pm\)0.30 \\ & FixMatch+AKC+ARC\(\dagger\)[1] & 63.87\(\pm\)0.41 & 75.99\(\pm\)0.38 & 81.24\(\pm\)0.31 \\ & Self-Tuning [32] & 64.11\(\pm\)0.32 & 76.03\(\pm\)0.25 & 81.22\(\pm\)0.29 \\ \cline{2-5} & **Ours\(\dagger\)** & 69.64\(\pm\)0.41 & 82.36\(\pm\)0.44 & 85.02\(\pm\)0.33 \\ & **Ours+\(\dagger\)** & **71.23\(\pm\)**0.26 & **82.80\(\pm\)**0.15 & **85.53\(\pm\)**0.32 \\ \hline \multirow{10}{*}{_Stanford Cars_} & Fine-Tuning (baseline) & 36.77\(\pm\)0.12 & 60.63\(\pm\)0.18 & 75.10\(\pm\)0.21 \\ & \(\Pi\)-model [19] & 45.19\(\pm\)0.21 & 57.29\(\pm\)0.26 & 64.18\(\pm\)0.29 \\ \cline{1-1} & Pseudo-Labeling [20] & 40.93\(\pm\)0.23 & 67.02\(\pm\)0.19 & 78.71\(\pm\)0.30 \\ \cline{1-1} & Mean Teacher [28] & 54.28\(\pm\)0.14 & 66.02\(\pm\)0.21 & 74.24\(\pm\)0.23 \\ \cline{1-1} & UDA\(\dagger\)[35] & 61.88\(\pm\)0.39 & 79.16\(\pm\)0.36 & 86.79\(\pm\)0.31 \\ \cline{1-1} & FixMatch\(\dagger\)[26] & 64.97\(\pm\)0.37 & 81.23\(\pm\)0.31 & 87.74\(\pm\)0.35 \\ \cline{1-1} & FlexMatch\(\dagger\)[44] & 71.96\(\pm\)0.28 & 83.81\(\pm\)0.26 & 88.12\(\pm\)0.21 \\ \cline{1-1} & SimCLRv2 [6] & 45.74\(\pm\)0.16 & 61.70\(\pm\)0.18 & 77.49\(\pm\)0.24 \\ \cline{1-1} & FixMatch+AKC+ARC\(\dagger\)[1] & 68.63\(\pm\)0.38 & 82.81\(\pm\)0.27 & 87.98\(\pm\)0.32 \\ \cline{1-1} & Self-Tuning [32] & 72.50\(\pm\)0.45 & 83.58\(\pm\)0.28 & 88.11\(\pm\)0.29 \\ \cline{1-1} \cline{2-5} & **Ours\(\dagger\)** & 77.22\(\pm\)0.42 & 86.91\(\pm\)0.07 & 90.38\(\pm\)0.16 \\ \cline{1-1} & **Ours+\(\dagger\)** & **79.70\(\pm\)**0.31 & **87.92\(\pm\)**0.32 & **90.71\(\pm\)**0.13 \\ \hline \multirow{10}{*}{_CUB-200-2011_} & Fine-Tuning (baseline) & 45.25\(\pm\)0.12 & 59.68\(\pm\)0.21 & 70.12\(\pm\)0.29 \\ \cline{1-1} & \(\Pi\)-model [19] & 45.20\(\pm\)0.23 & 56.20\(\pm\)0.29 & 64.07\(\pm\)0.32 \\ \cline{1-1} & Pseudo-Labeling [20] & 45.33\(\pm\)0.24 & 62.02\(\pm\)0.31 & 72.30\(\pm\)0.29 \\ \cline{1-1} & Mean Teacher [28] & 53.26\(\pm\)0.19 & 66.66\(\pm\)0.20 & 74.37\(\pm\)0.30 \\ \cline{1-1} & UDA\(\dagger\)[35] & 52.23\(\pm\)0.23 & 67.93\(\pm\)0.25 & 75.63\(\pm\)0.28 \\ \cline{1-1} & FixMatch\(\dagger\)[26] & 54.21\(\pm\)0.26 & 69.28\(\pm\)0.28 & 77.49\(\pm\)0.31 \\ \cline{1-1} & FlexMatch\(\dagger\)[44] & 61.26\(\pm\)0.18 & 71.62\(\pm\)0.32 & 78.06\(\pm\)0.21 \\ \cline{1-1} & SimCLRv2 [6] & 45.74\(\pm\)0.15 & 62.70\(\pm\)0.24 & 71.01\(\pm\)0.34 \\ \cline{1-1} & FixMatch+AKC+ARC\(\dagger\)[1] & 63.21\(\pm\)0.35 & 73.61\(\pm\)0.32 & 79.08\(\pm\)0.29 \\ \cline{1-1} & Self-Tuning [32] & 64.17\(\pm\)0.47 & 75.13\(\pm\)0.35 & 80.22\(\pm\)0.36 \\ \cline{1-1} \cline{2-5} & **Ours\(\dagger\)** & 65.55\(\pm\)0.21 & 74.99\(\pm\)0.33 & 80.00\(\pm\)0.11 \\ \cline{1-1} & **Ours+\(\dagger\)** & **68.06\(\pm\)**0.22 & **76.09\(\pm\)**0.34 & **80.40\(\pm\)**0.21 \\ \hline \end{tabular} \end{table} Table 2: Test accuracy (%) \(\uparrow\) on three fine-grained SSL benchmarks. We empirically find that the strong augmentation for labeled data used in Self-Tuning [32] can bring performance gains to other SSL methods. Following the same setting as Self-Tuning, methods with \(\dagger\) are implemented by ourselves based on the released codebase of Self-Tuning [32].
Figure 3 presents the feature distributions of FixMatch and our approach for some classes of _FGVC Aircraft_ using t-SNE [29]. We find that our method encourages same-class features to be close to each other while staying away from samples of other classes, producing a more distinguishable distribution for the target data. FixMatch, in contrast, suffers from the bias inherited from the pretrained model and poorly adapts the feature distribution, as features from different classes remain mixed together; its performance is thus heavily limited.

#### 5.4.2 Is our approach effective for a randomly initialized network?

In our formulation, the progressive feature adjustment module can be seen as a special SSL method for SSL from a pretrained model. We are therefore interested in its effectiveness for a randomly initialized network. To investigate this, we train our approach with a randomly initialized WideResNet-28-8 [41] network on CIFAR-100. As the results in Table 3 show, our approach does not produce improvements as significant as those observed in the SSL-with-pretrained-models task. We postulate that this is because the feature extractor of a randomly initialized network does not inherit a prediction bias from a source domain, and thus the original design of the FixMatch algorithm is already sufficient to adjust the feature extractor for the target problem.

\begin{table} \begin{tabular}{l|c c c} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Label Proportion} \\ \cline{2-4} & 15 & 30 & 50 \\ \hline UDA [35] & 59.50 & 74.08 & 81.10 \\ **UDA+Feature Adjustment(ours)** & **65.74** & **80.11** & **83.83** \\ \hline \end{tabular} \end{table} Table 4: Ablation study on the effectiveness of the proposed progressive feature adjustment module applied to the popular consistency-regularization-based SSL method UDA on _FGVC Aircraft_.

Figure 4: Test accuracy (%) \(\uparrow\) of the compared methods on _CUB-200-2011_ with MoCo v2, which is unsupervisedly pretrained on ImageNet-1K [9].

\begin{table} \begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Label Number} \\ \cline{2-4} & 400 & 2500 & 10000 \\ \hline FixMatch [26] & 42.50 & 27.07 & 21.88 \\ **HCCMatch (ours)** & 42.69 & 26.16 & 21.39 \\ \hline \hline \end{tabular} \end{table} Table 3: Error rates (%) \(\downarrow\) on _CIFAR-100_ with a _randomly initialized_ WideResNet-28-8 [41] network. We implement HCCMatch based on the PyTorch implementation3 of FixMatch, which obtains better performance than that reported in [26].

Figure 3: Feature embedding visualizations of (left) FixMatch and (right) our approach for the first 10 classes of the _FGVC Aircraft_ dataset using t-SNE [29]. Both models are initialized with an identical ResNet-50 supervised-pretrained on ImageNet.
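For completeness, a minimal sketch of the t-SNE feature visualization underlying Figure 3, with random arrays standing in for the penultimate-layer embeddings; the scikit-learn and matplotlib choices here are our own assumptions, with [29] being the method reference only.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Stand-in for extracted features of the first 10 classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 64))
labels = rng.integers(0, 10, size=500)

# Project to 2-D and color points by class, as in Figure 3.
emb = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(feats)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab10", s=8)
plt.title("t-SNE of feature embeddings (cf. Figure 3)")
plt.savefig("tsne_features.png", dpi=150)
```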
#### 5.4.3 Does our approach work for other SSL methods?

We are also interested in whether the proposed progressive feature adjustment module can be extended to other SSL methods. To investigate this, we apply the module to UDA [35], another popular consistency-regularization-based SSL method. We conduct experiments on the _FGVC Aircraft_ dataset and present the results in Table 4. As seen, incorporating the proposed progressive feature adjustment module significantly improves UDA in the SSL-from-pretrained-models setting. This suggests that the proposed module could be used to upgrade various consistency-regularization-based SSL methods when pretrained models are available.

#### 5.4.4 The ways of updating class-wise embeddings

In our approach, the class-wise embeddings, i.e., the class mean vectors, are dynamically updated from the features of strongly augmented labeled images and the features of strongly augmented unlabeled samples whose pseudo-supervisions are sufficiently confident. In this section, we investigate two alternative strategies: 1) accumulating features of weakly augmented images to update the mean vectors, and 2) estimating the mean vectors from features of unlabeled samples without a confidence threshold. As the results in Table 5 show, both alternatives result in a slight performance drop relative to our default.

#### 5.4.5 The sensitivity analysis of hyperparameter selection in our approach:

There are two hyperparameters in our method: one is the momentum term \(\beta\) for updating the class-wise embedding \(\mu\) in Eq. 6 and Eq. 7, and the other is the balance weight \(\lambda\) of the overall loss. As shown in Figure 5, our method is robust to the selection of both the \(\beta\) and \(\lambda\) hyperparameters.

## 6 Conclusion

Semi-supervised learning from pretrained models is an encouraging research direction because it combines the advantages of the two learning paradigms to achieve more data-efficient learning. Given that observations in the literature show that existing semi-supervised learning algorithms do not produce a satisfactory performance boost over their training-from-scratch versions, we investigate the learning procedure of semi-supervised learning from pretrained models and find that the bias inherited from the original pretrained models may be magnified during semi-supervised training. Empirical evidence is provided for a better understanding. Based on this analysis, we propose a progressive feature adjustment module that decouples the process of pseudo-supervision generation from the model update, thereby successfully alleviating the bias. Extensive experimental results on four vision benchmarks verify the effectiveness of our proposed approach.

**Acknowledgement.** This work was done in the Adelaide Intelligence Research (AIR) Lab, and Hai-Ming Xu and Lingqiao Liu are supported by the Centre of Augmented Reasoning (CAR).

Figure 5: Ablation study on hyperparameter sensitivity.

\begin{table} \begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{Ways of updating \(\mu\) in HCCMatch} & \multicolumn{3}{c}{Label Proportion} \\ \cline{2-4} & 15 & 30 & 50 \\ \hline w/ weakly augmented images & 68.38 & 82.06 & 84.79 \\ w/o confidence threshold & 68.73 & 81.61 & 84.49 \\ **default (ours)** & **69.64** & **82.36** & **85.02** \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation study on the ways of implementing online generative classifier learning in the proposed HCCMatch approach on the _FGVC Aircraft_ dataset.
2309.13325
Joint Explainability and Sensitivity-Aware Federated Deep Learning for Transparent 6G RAN Slicing
In recent years, wireless networks have been growing increasingly complex, which accelerates the adoption of zero-touch artificial intelligence (AI)-driven network automation within the telecommunication industry. In particular, network slicing, the most promising technology beyond 5G, would embrace AI models to manage the complex communication network. Besides, it is also essential to build trust in the AI black boxes in actual deployments, where AI performs complex resource management and anomaly detection. Inspired by closed-loop automation and Explainable Artificial Intelligence (XAI), we design an explainable federated deep learning (FDL) model to predict per-slice RAN dropped traffic probability while jointly considering sensitivity- and explainability-aware metrics as constraints in such a non-IID setup. More precisely, we quantitatively validate the faithfulness of the explanations via the so-called attribution-based \emph{log-odds metric}, which is included as a constraint in the run-time FL optimization task. Simulation results confirm its superiority over an unconstrained integrated-gradient (IG) \emph{post-hoc} FDL baseline.
Swastika Roy, Farhad Rezazadeh, Hatim Chergui, Christos Verikoukis
2023-09-23T10:08:57Z
http://arxiv.org/abs/2309.13325v1
# Joint Explainability and Sensitivity-Aware Federated Deep Learning for Transparent 6G RAN Slicing

###### Abstract

In recent years, wireless networks have been growing increasingly complex, which accelerates the adoption of zero-touch artificial intelligence (AI)-driven network automation within the telecommunication industry. In particular, network slicing, the most promising technology beyond 5G, would embrace AI models to manage the complex communication network. Besides, it is also essential to build trust in the AI black boxes in actual deployments, where AI performs complex resource management and anomaly detection. Inspired by closed-loop automation and Explainable Artificial Intelligence (XAI), we design an explainable federated deep learning (FDL) model to predict per-slice RAN dropped traffic probability while jointly considering sensitivity- and explainability-aware metrics as constraints in such a non-IID setup. More precisely, we quantitatively validate the faithfulness of the explanations via the so-called attribution-based _log-odds metric_, which is included as a constraint in the run-time FL optimization task. Simulation results confirm its superiority over an unconstrained integrated-gradient (IG) _post-hoc_ FDL baseline.

6G, classification, FL, game theory, proxy-Lagrangian, SLA, stochastic policy, traffic drop, XAI, ZSM

## I Introduction

Network slicing, the most promising technology beyond 5G, calls for autonomous management and orchestration of the end-to-end (E2E) network resources across the network domains, because the isolation of slices may otherwise induce a high cost in terms of efficiency [1, 2]. The ETSI-standardized zero-touch network and service management (ZSM) framework has therefore been considered [3]. Here, zero-touch refers to the automation and management of resources without human interference. Besides, developing cognitive slice management solutions for 6G networks is essential to automatically orchestrate and manage network slices, particularly network resources across different technological domains (TDs), while ensuring the end-users' QoE and QoS [4, 5]. Hence, [6] has proposed an AI-native network slicing management solution for 6G networks to support emerging AI services. Moreover, AI algorithms should account for the distributed nature of the datasets to realize the full potential of network slicing automation, thereby avoiding the problematic behavior of cloud-centric traditional ML schemes. Thus, a decentralized learning approach is required to handle distributed network slices efficiently. To this end, we choose federated learning (FL) [7, 8], as in our other research work [9]. Besides, even though DNNs hold the state of the art [10, 11, 12] in solving resource allocation and orchestration problems of network slicing, the black-box nature of such ML models impedes understanding of their decisions, of any flaws in the datasets, or of the model's performance behavior. Moreover, the 6G network is going to be a "machine-centric" technology, which signifies that all the corresponding "smart things" in the 6G network will operate intelligently, but as smart black boxes [13]. Here, a smart black box is not transparent in its actions or decision-making processes and could have adverse effects on the operations of the 6G network. In this regard, XAI provides human-interpretable methods to adequately explain the AI system and its decisions for gaining the human's trust in the loop.

Figure 1: RAN federated traffic drop classification in NS
Also, [14] indicates that it is a prerequisite for any ZSM-based AI model in 6G to enrich the transparency of its models. In view of this, zero-touch XAI-driven FL will receive particular emphasis for its automation and unique advantages, which are essential for end-user trust and secure operation. In contrast, conventional XAI focuses only on the interpretability and transparency of a given ML system. Several XAI works [15, 16, 17] indicate the importance of explainability and present research on handover, resource allocation, etc., in beyond-5G networks. In [18], XAI for the physical/MAC layers of 6G networks is the focus, while the authors of [19] present a trust-aware federated deep reinforcement learning-based device selection technique in an autonomous driving scenario. To evaluate the performance of XAI models, [20] introduces some essential metrics. In this work, we therefore present a novel zero-touch explainable federated learning (FL) scheme as a decentralized approach to traffic drop classification in 6G network slices [7].

### _Contributions_

In this paper, we present the following contributions:

* We introduce a novel iterative explainable federated learning approach, where a constrained traffic drop detection classifier and an _explainer_ exchange, in a closed-loop way, attributions of the features as well as predictions, to achieve transparent zero-touch service management of 6G network slices at the RAN in a non-IID setup.
* We adopt the integrated gradients XAI method to generate feature attributions.
* The generated attributions are then used to quantitatively validate the faithfulness of the explanations via the so-called _log-odds_ metric, which is included as a constraint in the FL optimization task.
* We formulate the corresponding joint recall- and log-odds-constrained FL optimization problem under the _proxy-Lagrangian_ framework and solve it via a non-zero-sum two-player game strategy [21], while comparing with the unconstrained integrated-gradient post-hoc FL baseline.

## II RAN Architecture and Datasets

As shown in Fig. 1, we consider a radio access network (RAN) composed of a set of \(K\) base stations (BSs), wherein a set of \(N\) parallel slices are deployed. Each BS runs a local control closed-loop (CL) which collects monitoring data and performs traffic drop prediction. Specifically, the collected data serves to build local datasets for slice \(n\,(n=1,\ldots,N)\), i.e., \(\mathcal{D}_{k,n}=\{\mathbf{x}_{k,n}^{(i)},y_{k,n}^{(i)}\}_{i=1}^{D_{k,n}}\), where \(\mathbf{x}_{k,n}^{(i)}\) stands for the input feature vector while \(y_{k,n}^{(i)}\) represents the corresponding output. In this respect, Table I summarizes the features and the output of the local datasets. These accumulated datasets are non-IID due to the different traffic profiles induced by the heterogeneous user distributions and channel conditions. Moreover, since the collected datasets are generally insufficient to train accurate anomaly detection classifiers, the local CLs take part in a federated learning task wherein an E2E slice-level federation layer plays the role of a model aggregator.

## III Explainable FDL for Transparent Traffic Drop Classification

Here, we describe the different stages of the joint explainability- and sensitivity-aware FDL, as summarized in Fig. 2.
### _Closed-Loop Description_

We propose a federated deep learning architecture where the local learning is performed iteratively with run-time explanation in a closed-loop way, as shown in Fig. 2. We design a deep neural network FL model. At each local epoch, the Learner module feeds the posterior symbolic model graph to the Tester block, which yields the test features and the corresponding predictions \(\hat{y}_{k,n}^{(i)}\) to the Explainer. The latter first generates the feature attributions using the integrated gradients XAI method. The _Log-odds Mapper_ then uses these attributions to select the top \(p\) features, which are then masked. The corresponding soft probability outputs are afterward used to calculate the log-odds (LO) metric, which is fed back to the Learner to include it in the local constrained optimization in step 6. Similarly, the _Recall Mapper_ calculates the recall score \(\rho_{k,n}\) based on the predicted and true positive values at stages 3 and 4, to include it in the local constrained optimization in step 6. Indeed, for each local CL \((k,n)\), the predicted traffic drop class \(\hat{y}_{k,n}^{(i)},\,(i=1,\ldots,D_{k,n})\), should minimize the main loss function with respect to the ground truth \(y_{k,n}^{(i)}\), while jointly respecting some long-term statistical constraints defined over its \(D_{k,n}\) samples and corresponding jointly to the recall and the explainability log-odds. As shown in steps 1 and 7 of Fig. 2, the optimized local weights at round \(t\), \(\mathbf{W}_{k,n}^{(t)}\), are sent to the server, which generates a global FL model for slice \(n\) as, \[\mathbf{W}_{n}^{(t+1)}=\sum_{k=1}^{K}\frac{D_{k,n}}{D_{n}}\mathbf{W}_{k,n}^{(t)}, \tag{1}\] where \(D_{n}=\sum_{k=1}^{K}D_{k,n}\) is the total number of data samples across all datasets related to slice \(n\). The server then broadcasts the global model to all the \(K\) CLs, which use it to start the next round of iterative local optimization. Specifically, each CL leverages a two-player game strategy to jointly optimize over the objective and the original constraints as well as their smoothed surrogates, as detailed in the sequel.

### _Model Testing and Explanation_

As depicted in stage 2 of Fig. 2, upon reception of the updated model graph, the Tester uses a batch drawn from the local dataset to reconstruct the test predictions \(\hat{\mathbf{y}}_{k,n}^{(i)}\). The graph, the test dataset, and the predictions are all fed to the Explainer at stage 3. After that, at stage 4, the Explainer generates the attributions by leveraging the low-complexity integrated gradients (IG) scheme [22], which is based on the gradient variation when sampling the neighborhood of a feature. Attributions quantify the impact of each single feature on the predicted output. Let \(\mathbf{a}_{k,n}^{(i)}\in\mathbb{R}^{Q}\) denote the attribution vector of sample \(i\), which can be generated by any attribution-based XAI method.
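A minimal PyTorch sketch of the IG attribution computation of [22] for a differentiable classifier follows; the zero baseline, the 50-step Riemann approximation, and the toy model are our own assumptions, not the exact implementation used in the simulations.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """IG of [22]: average the gradients along the straight path from a
    baseline to x, then scale by (x - baseline) to get attributions."""
    baseline = torch.zeros_like(x) if baseline is None else baseline
    grads = torch.zeros_like(x)
    for k in range(1, steps + 1):
        xk = (baseline + k / steps * (x - baseline)).requires_grad_(True)
        model(xk)[:, target].sum().backward()
        grads += xk.grad / steps               # Riemann approximation
    return (x - baseline) * grads

model = torch.nn.Sequential(torch.nn.Linear(6, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 2))
x = torch.randn(4, 6)                          # Q = 6 monitoring features
attr = integrated_gradients(model, x, target=1)  # a_{k,n} in R^{B x Q}
print(attr.shape)
```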
### _Log-odds Mapping_

To characterize the trustworthiness of the local model, we calculate the log-odds metric \(\theta_{k,n}\) [23]. It measures the influence of the top-attributed features on the model's prediction. Specifically, the log-odds score is defined as the average difference of the negative logarithmic probabilities on the predicted class before and after masking the top \(p\)% features with zero padding [23]. In this respect, the _Log-odds Mapper_ at stage 5 of Fig. 2 starts by selecting the top \(p\)% features based on their attributions collected from stage 4 and replaces them with zero padding. That is, \[\theta_{k,n}=-\frac{1}{D_{k,n}}\sum_{i=1}^{D_{k,n}}\log\frac{\Pr\left(\hat{y}_{k,n}^{(i)}|\hat{\mathbf{x}}_{k,n}^{(i)}\right)}{\Pr\left(\hat{y}_{k,n}^{(i)}|\mathbf{x}_{k,n}^{(i)}\right)}, \tag{2}\] where \(\hat{y}_{k,n}^{(i)}\) is the predicted class, \(\mathbf{x}_{k,n}^{(i)}\) are the features in the original dataset, and \(\hat{\mathbf{x}}_{k,n}^{(i)}\) denotes the features in the modified dataset with the top \(p\)% features zero-padded. Finally, the Log-odds Mapper reports the log-odds score, which is used as one of the constraints of the constrained FL optimization task.

### _Joint Recall and Explainability-Aware Traffic Drop Classification_

Besides the log-odds score used for explainability, as shown in steps 3 and 4, we invoke the _recall_ as a measure of the sensitivity of the local FL classifier, which we denote \(\rho_{k,n}\), i.e., \[\rho_{k,n}=\pi^{+}\left(\mathcal{D}_{k,n}\left[\hat{y}_{k,n}^{(i)}=1\right]\right) \tag{3}\] where \(\pi^{+}(\mathcal{D}_{k,n})\) denotes the proportion of \(\mathcal{D}_{k,n}\) classified positive, and \(\mathcal{D}_{k,n}[*]\) is the subset of \(\mathcal{D}_{k,n}\) satisfying expression \(*\). In order to trust the traffic drop anomaly detection/classification, a set of AI SLAs is established between the slice tenant and the infrastructure provider, where a lower bound \(\alpha_{n}\) is imposed on the recall score, while an upper bound \(\beta_{n}\) is set on the log-odds score. This translates into solving a constrained local classification problem, in iterations specified by the epochs as well as in FL rounds \(t\,(t=0,\ldots,T-1)\), i.e., \[\min_{\mathbf{W}_{k,n}^{(t)}}\frac{1}{D_{k,n}}\sum_{i=1}^{D_{k,n}}\ell\left(y_{k,n}^{(i)},\hat{y}_{k,n}^{(i)}\left(\mathbf{W}_{k,n}^{(t)},\mathbf{x}_{k,n}\right)\right), \tag{4a}\] \[\mathrm{s.t.}\hskip 14.226378pt\rho_{k,n}\geq\alpha_{n},\] (4b) \[\theta_{k,n}\leq\beta_{n}, \tag{4c}\] which is solved by invoking the so-called _proxy-Lagrangian_ framework [24], since the recall is not a smooth constraint. This consists first of constructing two Lagrangians as follows: \[\mathcal{L}_{\mathbf{W}_{k,n}^{(t)}}=\frac{1}{D_{k,n}}\sum_{i=1}^{D_{k,n}}\ell\left(y_{k,n}^{(i)},\hat{y}_{k,n}^{(i)}\left(\mathbf{W}_{k,n}^{(t)},\mathbf{x}_{k,n}\right)\right)+\lambda_{1}\Psi_{1}\left(\mathbf{W}_{k,n}^{(t)}\right)+\lambda_{2}\Psi_{2}\left(\mathbf{W}_{k,n}^{(t)}\right), \tag{5a}\] \[\mathcal{L}_{\lambda}=\lambda_{1}\Phi_{1}\left(\mathbf{W}_{k,n}^{(t)}\right)+\lambda_{2}\Phi_{2}\left(\mathbf{W}_{k,n}^{(t)}\right) \tag{5b}\] where \(\Phi_{1,2}\) and \(\Psi_{1,2}\) represent the original constraints and their smooth surrogates, respectively. In this respect, the recall surrogate is given by, \[\Psi_{1}=\frac{\sum_{i=1}^{D_{k,n}}y_{k,n}^{(i)}\times\min\Bigl\{\hat{y}_{k,n}^{(i)},1\Bigr\}}{\sum_{i=1}^{D_{k,n}}y_{k,n}^{(i)}}-\alpha_{n} \tag{6}\] while \(\Psi_{2}=\Phi_{2}=\beta_{n}-\theta_{k,n}\), since the negative logarithm is already a convex function. It can also be shown that the solutions of the optimization problem are equivalent to those obtained if only the original constraints were used. This optimization task turns out to be a non-zero-sum two-player game in which the \(\mathbf{W}_{k,n}^{(t)}\)-player aims at minimizing \(\mathcal{L}_{\mathbf{W}_{k,n}^{(t)}}\), while the \(\lambda\)-player wishes to maximize \(\mathcal{L}_{\lambda}\) [21, Lemma 8].

Figure 2: Explainable FDL building blocks
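Before turning to the optimizer, here is a minimal sketch of how the two constraint metrics of problem (4), the recall of Eq. (3) and the log-odds of Eq. (2), could be computed; masking by absolute attribution magnitude and the toy two-class model are our own assumptions, and the sketch follows Eq. (2) literally.

```python
import torch
import torch.nn.functional as F

def log_odds(model, x, attr, p=0.2):
    """Eq. (2): zero-pad the top-p% attributed features per sample and
    average the negative log probability ratio on the predicted class."""
    probs = F.softmax(model(x), dim=1)
    y_hat = probs.argmax(dim=1)
    k = max(1, int(p * x.shape[1]))
    top = attr.abs().topk(k, dim=1).indices       # top-attributed features
    x_mask = x.scatter(1, top, 0.0)               # zero padding
    probs_m = F.softmax(model(x_mask), dim=1)
    ratio = probs_m.gather(1, y_hat[:, None]) / probs.gather(1, y_hat[:, None])
    return -torch.log(ratio).mean()

def recall(y_true, y_pred):
    """Eq. (3): fraction of true positives that are classified positive."""
    pos = y_true == 1
    return (y_pred[pos] == 1).float().mean()

model = torch.nn.Sequential(torch.nn.Linear(6, 2))
x = torch.randn(32, 6)
attr = torch.randn(32, 6)                         # e.g., from integrated gradients
print(log_odds(model, x, attr),
      recall(torch.randint(0, 2, (32,)), torch.randint(0, 2, (32,))))
```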
While optimizing the first Lagrangian w.r.t. \(\mathbf{W}_{k,n}\) requires differentiating the constraint functions \(\Psi_{1}(\mathbf{W}_{k,n}^{(t)})\) and \(\Psi_{2}(\mathbf{W}_{k,n}^{(t)})\), differentiating the second Lagrangian w.r.t. \(\lambda\) only requires evaluating \(\Phi_{1}\left(\mathbf{W}_{k,n}^{(t)}\right)\) and \(\Phi_{2}\left(\mathbf{W}_{k,n}^{(t)}\right)\). Hence, a surrogate is only necessary for the \(\mathbf{W}_{k,n}\)-player; the \(\lambda\)-player can continue using the original constraint functions. The local optimization task can be written as, \[\min_{\mathbf{W}_{k,n}\in\Delta}\max_{\lambda,\,\|\lambda\|\leq R_{\lambda}}\mathcal{L}_{\mathbf{W}_{k,n}^{(t)}} \tag{7a}\] \[\max_{\lambda,\,\|\lambda\|\leq R_{\lambda}}\min_{\mathbf{W}_{k,n}\in\Delta}\mathcal{L}_{\lambda}, \tag{7b}\] where, thanks to the Lagrange multipliers, the \(\lambda\)-player chooses how much to weigh the proxy constraint functions, but does so in such a way as to satisfy the original constraints, ending up at a nearly-optimal, nearly-feasible solution [25]. These steps are all summarized in Algorithm 1.

```
Input: \(K\), \(m\), \(\eta_{\lambda}\), \(T\), \(L\)  # See Table II
Server initializes \(\mathbf{W}_{n}^{(0)}\) and broadcasts it to the CLs
for \(t=0,\ldots,T-1\) do
  parallel for \(k=1,\ldots,K\) do
    Initialize \(M=\texttt{num\_constraints}\) and \(\mathbf{W}_{k,n,0}=\mathbf{W}_{n}^{(t)}\)
    Initialize \(\mathbf{A}^{(0)}\in\mathbb{R}^{(M+1)\times(M+1)}\) with \(\mathbf{A}_{m^{\prime},m}^{(0)}=1/(M+1)\)
    for \(l=0,\ldots,L-1\) do
      Receive the graph \(\mathcal{M}_{k,n}\) from the local model
      # Test the local model and calculate the attributions
      \(\mathbf{a}_{k,n}^{(i)}=\texttt{Int\_Gradient}\left(\mathcal{M}_{k,n}\left(\mathbf{W}_{k,n,l},\mathbf{x}_{k,n}\right)\right)\)
      # Mask the top \(p\)% features of the dataset based on the attributions with zero padding
      # Calculate the log-odds metric
      \(\theta_{k,n}=-\frac{1}{D_{k,n}}\sum_{i=1}^{D_{k,n}}\log\frac{\Pr\left(\hat{y}_{k,n}^{(i)}|\hat{\mathbf{x}}_{k,n}^{(i)}\right)}{\Pr\left(\hat{y}_{k,n}^{(i)}|\mathbf{x}_{k,n}^{(i)}\right)}\)
      # Calculate the recall metric
      \(\rho_{k,n}=\pi^{+}\left(\mathcal{D}_{k,n}\left[\hat{y}_{k,n}^{(i)}=1\right]\right)\)
      Let \(\lambda^{(l)}\) be the top eigenvector of \(\mathbf{A}^{(l)}\)
      # Solve problem (4) via oracle optimization
      Let \(\hat{\mathbf{W}}_{k,n,l}=\mathcal{O}_{\delta}\left(\mathcal{L}_{\mathbf{W}_{k,n,l}}(\cdot,\lambda^{(l)})\right)\)
      Let \(\Delta^{(l)}\) be a gradient of \(\mathcal{L}_{\lambda}(\hat{\mathbf{W}}_{k,n,l},\lambda^{(l)})\) w.r.t. \(\lambda\)
      # Exponentiated gradient ascent
      Update \(\hat{\mathbf{A}}^{(l+1)}=\mathbf{A}^{(l)}\odot\exp\left\{\eta_{\lambda}\Delta^{(l)}\right\}\)
      # Column-wise normalization
      \(\mathbf{A}_{m}^{(l+1)}=\hat{\mathbf{A}}_{m}^{(l+1)}/\left\|\hat{\mathbf{A}}_{m}^{(l+1)}\right\|_{1}\), \(m=1,\ldots,M+1\)
    end for
    return \(\hat{\mathbf{W}}_{k,n}^{(t)}=\frac{1}{L}\sum_{l=0}^{L-1}\mathbf{W}_{k,n,l}\)
    Each local CL \((k,n)\) sends \(\hat{\mathbf{W}}_{k,n}^{(t)}\) to the server.
  end parallel for
  return \(\mathbf{W}_{n}^{(t+1)}=\sum_{k=1}^{K}\frac{D_{k,n}}{D_{n}}\hat{\mathbf{W}}_{k,n}^{(t)}\)
end for
```
**Algorithm 1** Explainable Federated Deep Learning
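A heavily simplified sketch of one local run of the two-player game in Algorithm 1: the \(\mathbf{W}\)-player descends on the proxy Lagrangian (5a) while the \(\lambda\)-player performs multiplicative (exponentiated-gradient) ascent on the constraint values. The stand-in objective, the smooth surrogates, and the plain multiplier update replacing the matrix \(\mathbf{A}\) and its top eigenvector are all our own simplifications.

```python
import torch

torch.manual_seed(0)
w = torch.randn(6, requires_grad=True)          # stand-in model weights
lam = torch.ones(2)                             # one multiplier per constraint
eta_w, eta_lam, R = 0.1, 0.5, 10.0              # step sizes and R_lambda bound

for step in range(100):
    obj = (w ** 2).sum()                        # stand-in training loss
    # Smooth surrogates Psi_1 (recall) and Psi_2 (log-odds), stand-ins only.
    psi = torch.stack([0.9 - torch.sigmoid(w.sum()),
                       torch.tanh(w[0]) + 0.01])
    lagrangian = obj + (lam * psi).sum()        # proxy Lagrangian (5a)
    grad_w, = torch.autograd.grad(lagrangian, w)
    with torch.no_grad():
        w -= eta_w * grad_w                     # W-player: minimize (7a)
        # In the real algorithm phi are the *original* non-smooth recall and
        # log-odds values; the surrogates stand in for them here.
        phi = psi.detach()
        lam *= torch.exp(eta_lam * phi)         # lambda-player: maximize (7b)
        lam = lam.clamp(max=R)                  # keep the multipliers bounded
```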
## IV Results

This section analyzes the proposed closed-loop EFL framework in detail. To build the explainability-aware constrained traffic drop classification model, we use feature attributions, which are the pillar of this approach. After that, we present the impact of jointly considering the recall and log-odds metrics as constraints when optimizing the FL classification problem, by showing results on FL convergence and the log-odds score. Finally, we study the correlation between feature attributions, observed predictions, and _true_ predictions, and draw some important conclusions. Specifically, to implement the model Tester and Explainer, we invoke the DeepExplain framework, which includes state-of-the-art gradient- and perturbation-based attribution methods [26]. It provides an attribution score based on each feature's contribution to the model's output, which we integrate with our proposed constrained traffic drop classification FL framework in a closed-loop iterative way.

### _Parameter Settings and Baseline_

Three primary slices, eMBB, uRLLC, and mMTC, are considered to analyze the proposed explainable FL policy. Here, the datasets are collected from the BSs, and an overall summary of those datasets is presented in Table II. We use the vector \(\alpha\) for the lower bounds on the recall score and the vector \(\beta\) for the upper bounds on the explainability log-odds score corresponding to the different slices. As a baseline, we adopt a vanilla FL [27] with post-hoc integrated gradients explanation, that is, a posterior explanation performed upon the end of the FL training.

### _Result Analysis_

In this scenario, resources are allocated to slices according to their traffic patterns and radio conditions, while ensuring a long-term isolation via the constraints.

* **Convergence:** As depicted in Fig. 3, we can conclude that the proposed constrained EFL resource allocation models of the different slices converge faster than the baseline unconstrained IG post-hoc case. Here, the EFL optimizer accounts for the relationships between the objectives and the constraints of the two-player optimization problem, leading to improved performance compared to the unconstrained IG post-hoc one, which accounts for only the objective function during optimization.
* **Sensitivity analysis:** To analyze our proposed model's sensitivity, we choose the recall metric, i.e., the rate of actual positives correctly identified by our binary classification model. From Fig. 4, we can observe that the recall score of the proposed scheme for all slices is in close proximity to the target thresholds \(\alpha\) (i.e., around \(0.88\)), which is an acceptable value for operators and slice tenants.
* **Trustfulness:** In Fig. 5-(a), we observe the effect of changing the top \(p\)% value on the log-odds for the proposed model on all slices. We also present a comparative analysis of the log-odds score in Fig. 5-(b) for both cases, which proves the superiority of the proposed constrained EFL model. The statistics of the log-odds score thus give us an approximate idea of our model's reliability and trustworthiness. They show that the log-odds score decreases with respect to the top \(p\)% value, which conveys that our model is explainable and trustworthy in the training phase.

Figure 3: Analysis of the FL training loss vs. FL rounds of the proposed EFL with lower bound of the recall score \(\alpha=[0.9,0.95,0.95]\) and upper bound of the log-odds score \(\beta=[-0.01,-0.01,-0.01]\)

Figure 4: Analysis of the recall score with lower bound of the recall score \(\alpha=[0.9,0.95,0.95]\) and upper bound of the log-odds score \(\beta=[-0.01,-0.01,-0.01]\)

Figure 5: Analysis of the log-odds score with lower bound of the recall score \(\alpha=[0.9,0.95,0.95]\) and upper bound of the log-odds score \(\beta=[-0.01,-0.01,-0.01]\)

Furthermore, in Fig. 6, the correlation heatmaps of the proposed XAI method for the eMBB slice are presented for further analysis. They help us visualize the strength of the relationships between different variables and, in our case, identify which feature variations impact the SLA the most. To plot the correlation matrix heatmap, we consider the matrix \(\mathbf{R}_{k,n}=[\mathbf{a}_{k,n},\hat{\mathbf{y}}_{k,n},\mathbf{y}_{k,n}]\), where \(\mathbf{a}_{k,n}\) are the attribution scores of the feature variables with dimensions \(D_{k,n}\times Q\), \(\hat{\mathbf{y}}_{k,n}\) is the predicted output variable with dimensions \(D_{k,n}\times 1\), and \(\mathbf{y}_{k,n}\) is the true output with dimensions \(D_{k,n}\times 1\). From the heatmap, we see that the third feature, which is the channel quality, has the most impact on the recall value: if the third feature increases, the recall value increases, and vice versa.
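A minimal sketch of how the matrix \(\mathbf{R}_{k,n}\) and the correlation heatmap of Fig. 6 could be produced, with random stand-ins for the attributions and predictions; \(Q=5\) and the plotting choices are our assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Build R_{k,n} = [a_{k,n}, y_hat_{k,n}, y_{k,n}] and plot its correlation
# matrix. Random stand-ins replace the real monitoring dataset.
rng = np.random.default_rng(1)
D, Q = 200, 5
attr = rng.normal(size=(D, Q))                 # attribution scores a_{k,n}
y_hat = rng.integers(0, 2, size=(D, 1))        # predicted drop class
y = rng.integers(0, 2, size=(D, 1))            # true drop class
R = np.hstack([attr, y_hat, y])                # shape (D, Q + 2)

corr = np.corrcoef(R, rowvar=False)            # (Q+2) x (Q+2) correlations
labels = [f"feat{i}" for i in range(Q)] + ["pred", "true"]
plt.imshow(corr, vmin=-1, vmax=1, cmap="coolwarm")
plt.xticks(range(Q + 2), labels, rotation=45)
plt.yticks(range(Q + 2), labels)
plt.colorbar()
plt.savefig("attribution_correlation_heatmap.png", dpi=150)
```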
## V Conclusion

This paper has presented a novel closed-loop explainable federated learning (EFL) approach to achieve transparent zero-touch service management of 6G network slices at the RAN in a non-IID setup. We have jointly considered explainability and sensitivity metrics as constraints in the traffic drop prediction task, which we have solved using a proxy-Lagrangian two-player game strategy. From the results, we conclude that the proposed EFL scheme is reliable and trustworthy compared to the state-of-the-art unconstrained post-hoc FL. Finally, the heatmaps of the attribution correlation matrix are presented to showcase the features whose variations most influence the traffic drop.

## VI Acknowledgment

This work has been supported in part by the projects SEMANTIC (861165), 6G-BRICKS (101096954) HORIZON-JU-SNS-2022 and ADROIT6G (101095363) HORIZON-JU-SNS-2022.